---
reviewers:
- erictune
- deads2k
- liggitt
title: Using RBAC Authorization
content_template: templates/concept
weight: 70
---
{{% capture overview %}}
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise.
{{% /capture %}}
{{% capture body %}}
`RBAC` uses the `rbac.authorization.k8s.io` API group
to drive authorization decisions, allowing admins to dynamically configure policies
through the Kubernetes API.
As of 1.8, RBAC mode is stable and backed by the `rbac.authorization.k8s.io/v1` API.
To enable RBAC, start the apiserver with `--authorization-mode=RBAC`.
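For example, on a cluster where you manage the kube-apiserver flags directly, enabling RBAC might look like the following sketch. The other flags and values shown here are illustrative only and depend entirely on how your cluster is deployed:

```shell
# Illustrative only: real deployments pass many more flags.
kube-apiserver \
  --authorization-mode=RBAC \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16
```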
## API Overview
The RBAC API declares four top-level types which will be covered in this
section. Users can interact with these resources as they would with any other
API resource (via `kubectl`, API calls, etc.). For instance,
`kubectl create -f (resource).yml` can be used with any of these examples,
though readers who wish to follow along should review the section on
bootstrapping first.
### Role and ClusterRole
In the RBAC API, a role contains rules that represent a set of permissions.
Permissions are purely additive (there are no "deny" rules).
A role can be defined within a namespace with a `Role`, or cluster-wide with a `ClusterRole`.
A `Role` can only be used to grant access to resources within a single namespace.
Here's an example `Role` in the "default" namespace that can be used to grant read access to pods:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
A `ClusterRole` can be used to grant the same permissions as a `Role`,
but because it is cluster-scoped, it can also be used to grant access to:
* cluster-scoped resources (like nodes)
* non-resource endpoints (like "/healthz")
* namespaced resources (like pods) across all namespaces (needed to run `kubectl get pods --all-namespaces`, for example)
The following `ClusterRole` can be used to grant read access to secrets in any particular namespace,
or across all namespaces (depending on how it is [bound](#rolebinding-and-clusterrolebinding)):
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
```
### RoleBinding and ClusterRoleBinding
A role binding grants the permissions defined in a role to a user or set of users.
It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted.
Permissions can be granted within a namespace with a `RoleBinding`, or cluster-wide with a `ClusterRoleBinding`.
A `RoleBinding` may reference a `Role` in the same namespace.
The following `RoleBinding` grants the "pod-reader" role to the user "jane" within the "default" namespace.
This allows "jane" to read pods in the "default" namespace.
The `roleRef` field is what actually ties the binding to a role. Its `kind` must be either `Role` or `ClusterRole`, and its `name` must match the name of the specific `Role` or `ClusterRole` you want to grant. In the example below, the `RoleBinding` uses `roleRef` to bind the user "jane" to the `Role` created above named `pod-reader`.
```yaml
# This role binding allows "jane" to read pods in the "default" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```
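To sanity-check a binding like this one, you can ask the API server whether a given action is allowed with `kubectl auth can-i`. The `--as` flag (which itself requires impersonation permissions) lets an administrator check on behalf of another user; this is a quick verification sketch, not part of the binding itself:

```shell
# Run as a user with impersonation rights (for example, a cluster admin).
kubectl auth can-i get pods --namespace=default --as=jane     # expect "yes"
kubectl auth can-i create pods --namespace=default --as=jane  # expect "no"
```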
A `RoleBinding` may also reference a `ClusterRole` to grant the permissions to namespaced
resources defined in the `ClusterRole` within the `RoleBinding`'s namespace.
This allows administrators to define a set of common roles for the entire cluster,
then reuse them within multiple namespaces.
For instance, even though the following `RoleBinding` refers to a `ClusterRole`,
"dave" (the subject, case sensitive) will only be able to read secrets in the "development"
namespace (the namespace of the `RoleBinding`).
```yaml
# This role binding allows "dave" to read secrets in the "development" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-secrets
  namespace: development # This only grants permissions within the "development" namespace.
subjects:
- kind: User
  name: dave # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```
Finally, a `ClusterRoleBinding` may be used to grant permission at the cluster level and in all
namespaces. The following `ClusterRoleBinding` allows any user in the group "manager" to read
secrets in any namespace.
```yaml
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```
### Referring to Resources
Most resources are referred to by the string form of their name, such as "pods", just as it
appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a
"subresource", such as the logs for a pod. The URL for the pod logs endpoint is:
```http
GET /api/v1/namespaces/{namespace}/pods/{name}/log
```
In this case, "pods" is the namespaced resource, and "log" is a subresource of pods. To represent
this in an RBAC role, use a slash to delimit the resource and subresource. To allow a subject
to read both pods and pod logs, you would write:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
```
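For reference, reading pod logs with `kubectl logs` is what requires `get` on the `pods/log` subresource, so a subject bound to the role above could, for example, run the following (the pod name is a placeholder):

```shell
kubectl get pods --namespace=default
kubectl logs <pod-name> --namespace=default
```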
Resources can also be referred to by name for certain requests through the `resourceNames` list.
When specified, requests using the "get", "delete", "update", and "patch" verbs can be restricted
to individual instances of a resource. To restrict a subject to only "get" and "update" a single
configmap, you would write:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
```
Note that if `resourceNames` is set, the verbs must not include list, watch, create, or deletecollection.
Because resource names are not present in the URL for create, list, watch, and deletecollection API requests,
those verbs would not be allowed by a rule with `resourceNames` set, since the `resourceNames` portion of the
rule would not match the request.
### Aggregated ClusterRoles
As of 1.9, ClusterRoles can be created by combining other ClusterRoles using an `aggregationRule`. The
permissions of aggregated ClusterRoles are controller-managed, and filled in by unioning the rules of any
ClusterRole that matches the provided label selector. An example aggregated ClusterRole:
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: [] # Rules are automatically filled in by the controller manager.
```
Creating a ClusterRole that matches the label selector will add rules to the aggregated ClusterRole. In this case
rules can be added to the "monitoring" ClusterRole by creating another ClusterRole that has the label
`rbac.example.com/aggregate-to-monitoring: true`.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: monitoring-endpoints
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
# These rules will be added to the "monitoring" role.
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
```
The default user-facing roles (described below) use ClusterRole aggregation. This lets admins include rules
for custom resources, such as those served by CustomResourceDefinitions or Aggregated API servers, on the
default roles.
For example, the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource
"CronTabs" and the "view" role perform read-only actions on the resource.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aggregate-cron-tabs-edit
  labels:
    # Add these permissions to the "admin" and "edit" default roles.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["stable.example.com"]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aggregate-cron-tabs-view
  labels:
    # Add these permissions to the "view" default role.
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["stable.example.com"]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch"]
```
#### Role Examples
Only the `rules` section is shown in the following examples.
Allow reading the resource "pods" in the core API group:
```yaml
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```
Allow reading/writing "deployments" in both the "extensions" and "apps" API groups:
```yaml
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Allow reading "pods" and reading/writing "jobs":
```yaml
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Allow reading a `ConfigMap` named "my-config" (must be bound with a `RoleBinding` to limit to a single `ConfigMap` in a single namespace):
```yaml
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-config"]
  verbs: ["get"]
```
Allow reading the resource "nodes" in the core group (because a `Node` is cluster-scoped, this must be in a `ClusterRole` bound with a `ClusterRoleBinding` to be effective):
```yaml
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```
Allow "GET" and "POST" requests to the non-resource endpoint "/healthz" and all subpaths (must be in a `ClusterRole` bound with a `ClusterRoleBinding` to be effective):
```yaml
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"] # '*' in a nonResourceURL is a suffix glob match
  verbs: ["get", "post"]
```
### Referring to Subjects
A `RoleBinding` or `ClusterRoleBinding` binds a role to *subjects*.
Subjects can be groups, users or service accounts.
Users are represented by strings. These can be plain usernames, like
"alice", email-style names, like "bob@example.com", or numeric IDs
represented as a string. It is up to the Kubernetes admin to configure
the [authentication modules](/docs/reference/access-authn-authz/authentication/) to produce
usernames in the desired format. The RBAC authorization system does
not require any particular format. However, the prefix `system:` is
reserved for Kubernetes system use, and so the admin should ensure
usernames do not contain this prefix by accident.
Group information in Kubernetes is currently provided by the Authenticator
modules. Groups, like users, are represented as strings, and that string
has no format requirements, other than that the prefix `system:` is reserved.
[Service Accounts](/docs/tasks/configure-pod-container/configure-service-account/) have usernames with the `system:serviceaccount:` prefix and belong
to groups with the `system:serviceaccounts:` prefix.
#### Role Binding Examples
Only the `subjects` section of a `RoleBinding` is shown in the following examples.
For a user named "alice@example.com":
```yaml
subjects:
- kind: User
  name: "alice@example.com"
  apiGroup: rbac.authorization.k8s.io
```
For a group named "frontend-admins":
```yaml
subjects:
- kind: Group
  name: "frontend-admins"
  apiGroup: rbac.authorization.k8s.io
```
For the default service account in the kube-system namespace:
```yaml
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```
For all service accounts in the "qa" namespace:
```yaml
subjects:
- kind: Group
  name: system:serviceaccounts:qa
  apiGroup: rbac.authorization.k8s.io
```
For all service accounts everywhere:
```yaml
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
```
For all authenticated users (version 1.5+):
```yaml
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
```
For all unauthenticated users (version 1.5+):
```yaml
subjects:
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io
```
For all users (version 1.5+):
```yaml
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io
```
## Default Roles and Role Bindings
API servers create a set of default `ClusterRole` and `ClusterRoleBinding` objects.
Many of these are `system:` prefixed, which indicates that the resource is "owned" by the infrastructure.
Modifications to these resources can result in non-functional clusters. One example is the `system:node` ClusterRole.
This role defines permissions for kubelets. If the role is modified, it can prevent kubelets from working.
All of the default cluster roles and rolebindings are labeled with `kubernetes.io/bootstrapping=rbac-defaults`.
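Because of this label, you can list the default roles and bindings directly, for example:

```shell
kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaults
kubectl get clusterrolebindings -l kubernetes.io/bootstrapping=rbac-defaults
```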
### Auto-reconciliation
At each start-up, the API server updates default cluster roles with any missing permissions,
and updates default cluster role bindings with any missing subjects.
This allows the cluster to repair accidental modifications,
and to keep roles and rolebindings up-to-date as permissions and subjects change in new releases.
To opt out of this reconciliation, set the `rbac.authorization.kubernetes.io/autoupdate`
annotation on a default cluster role or rolebinding to `false`.
Be aware that missing default permissions and subjects can result in non-functional clusters.
Auto-reconciliation is enabled in Kubernetes version 1.6+ when the RBAC authorizer is active.
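As an example, to opt a single default cluster role binding out of reconciliation (be sure you understand the consequences described above), you could set the annotation with `kubectl annotate`; the binding name below is just an example:

```shell
# The annotation already exists on default objects, so --overwrite is required.
kubectl annotate clusterrolebinding system:basic-user \
  rbac.authorization.kubernetes.io/autoupdate=false --overwrite
```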
### Discovery Roles
| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **system:basic-user** | **system:authenticated** and **system:unauthenticated** groups | Allows a user read-only access to basic information about themselves. |
| **system:discovery** | **system:authenticated** and **system:unauthenticated** groups | Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. |
### User-facing Roles
Some of the default roles are not `system:` prefixed. These are intended to be user-facing roles.
They include super-user roles (`cluster-admin`),
roles intended to be granted cluster-wide using ClusterRoleBindings (`cluster-status`),
and roles intended to be granted within particular namespaces using RoleBindings (`admin`, `edit`, `view`).
As of 1.9, user-facing roles use [ClusterRole Aggregation](#aggregated-clusterroles) to allow admins to include
rules for custom resources on these roles. To add rules to the "admin", "edit", or "view" role, create a
ClusterRole with one or more of the following labels:
```yaml
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
```
| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **cluster-admin** | **system:masters** group | Allows super-user access to perform any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the rolebinding's namespace, including the namespace itself. |
| **admin** | None | Allows admin access, intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and rolebindings within the namespace. It does not allow write access to resource quota or to the namespace itself. |
| **edit** | None | Allows read/write access to most objects in a namespace. It does not allow viewing or modifying roles or rolebindings. |
| **view** | None | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or rolebindings. It does not allow viewing secrets, since those are escalating. |
### Core Component Roles
| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **system:kube-scheduler** | **system:kube-scheduler** user | Allows access to the resources required by the kube-scheduler component. |
| **system:volume-scheduler** | **system:kube-scheduler** user | Allows access to the volume resources required by the kube-scheduler component. |
| **system:kube-controller-manager** | **system:kube-controller-manager** user | Allows access to the resources required by the kube-controller-manager component. The permissions required by individual control loops are contained in the controller roles. |
| **system:node** | None in 1.8+ | Allows access to resources required by the kubelet component, including read access to all secrets and write access to all pod status objects. As of 1.7, use of the Node authorizer and NodeRestriction admission plugin is recommended instead of this role; they allow granting API access to kubelets based on the pods scheduled to run on them. Prior to 1.7, this role was automatically bound to the **system:nodes** group. In 1.7, this role was automatically bound to the **system:nodes** group if the Node authorization mode was not enabled. In 1.8+, no binding is created automatically. |
| **system:node-proxier** | **system:kube-proxy** user | Allows access to the resources required by the kube-proxy component. |
### Other Component Roles
| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **system:auth-delegator** | None | Allows delegated authentication and authorization checks. This is commonly used by add-on API servers for unified authentication and authorization. |
| **system:heapster** | None | Role for the Heapster component. |
| **system:kube-aggregator** | None | Role for the kube-aggregator component. |
| **system:kube-dns** | **kube-dns** service account in the **kube-system** namespace | Role for the kube-dns component. |
| **system:kubelet-api-admin** | None | Allows full access to the kubelet API. |
| **system:node-bootstrapper** | None | Allows access to the resources required to perform Kubelet TLS bootstrapping. |
| **system:node-problem-detector** | None | Role for the node-problem-detector component. |
| **system:persistent-volume-provisioner** | None | Allows access to the resources required by most dynamic volume provisioners. |
### Controller Roles
The [Kubernetes controller manager](/docs/admin/kube-controller-manager/) runs core control loops.
When invoked with `--use-service-account-credentials`, each control loop is started using a separate service account.
Corresponding roles exist for each control loop, prefixed with `system:controller:`.
If the controller manager is not started with `--use-service-account-credentials`,
it runs all control loops using its own credential, which must be granted all the relevant roles.
These roles include:
* system:controller:attachdetach-controller
* system:controller:certificate-controller
* system:controller:cronjob-controller
* system:controller:daemon-set-controller
* system:controller:deployment-controller
* system:controller:disruption-controller
* system:controller:endpoint-controller
* system:controller:generic-garbage-collector
* system:controller:horizontal-pod-autoscaler
* system:controller:job-controller
* system:controller:namespace-controller
* system:controller:node-controller
* system:controller:persistent-volume-binder
* system:controller:pod-garbage-collector
* system:controller:pv-protection-controller
* system:controller:pvc-protection-controller
* system:controller:replicaset-controller
* system:controller:replication-controller
* system:controller:resourcequota-controller
* system:controller:route-controller
* system:controller:service-account-controller
* system:controller:service-controller
* system:controller:statefulset-controller
* system:controller:ttl-controller
## Privilege Escalation Prevention and Bootstrapping
The RBAC API prevents users from escalating privileges by editing roles or role bindings.
Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use.
A user can only create/update a role if they already have all the permissions contained in the role,
at the same scope as the role (cluster-wide for a `ClusterRole`, within the same namespace or cluster-wide for a `Role`).
For example, if "user-1" does not have the ability to list secrets cluster-wide, they cannot create a `ClusterRole`
containing that permission. To allow a user to create/update roles:
1. Grant them a role that allows them to create/update `Role` or `ClusterRole` objects, as desired.
2. Grant them roles containing the permissions you would want them to be able to set in a `Role` or `ClusterRole`. If they attempt to create or modify a `Role` or `ClusterRole` with permissions they themselves have not been granted, the API request will be forbidden.
A user can only create/update a role binding if they already have all the permissions contained in the referenced role
(at the same scope as the role binding) *or* if they've been given explicit permission to perform the `bind` verb on the referenced role.
For example, if "user-1" does not have the ability to list secrets cluster-wide, they cannot create a `ClusterRoleBinding`
to a role that grants that permission. To allow a user to create/update role bindings:
1. Grant them a role that allows them to create/update `RoleBinding` or `ClusterRoleBinding` objects, as desired.
2. Grant them permissions needed to bind a particular role:
* implicitly, by giving them the permissions contained in the role.
* explicitly, by giving them permission to perform the `bind` verb on the particular role (or cluster role).
For example, this cluster role and role binding would allow "user-1" to grant other users the `admin`, `edit`, and `view` roles in the "user-1-namespace" namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-grantor
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"]
  resourceNames: ["admin","edit","view"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-grantor-binding
  namespace: user-1-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-grantor
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user-1
```
When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have.
To bootstrap initial roles and role bindings:
* Use a credential with the `system:masters` group, which is bound to the `cluster-admin` super-user role by the default bindings.
* If your API server runs with the insecure port enabled (`--insecure-port`), you can also make API calls via that port, which does not enforce authentication or authorization.
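The second approach amounts to pointing kubectl (or any other client) at the insecure port. Assuming the default of localhost:8080 and a hypothetical manifest file, a sketch would be:

```shell
# Assumes the apiserver was started with the insecure port enabled on localhost:8080.
kubectl create -f my-rbac-bootstrap.yaml --server=http://localhost:8080
```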
## Command-line Utilities
Two `kubectl` commands exist to grant roles within a namespace or across the entire cluster.
### `kubectl create rolebinding`
Grants a `Role` or `ClusterRole` within a specific namespace. Examples:
* Grant the `admin` `ClusterRole` to a user named "bob" in the namespace "acme":
```
kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme
```
* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme":
```
kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme
```
### `kubectl create clusterrolebinding`
Grants a `ClusterRole` across the entire cluster, including all namespaces. Examples:
* Grant the `cluster-admin` `ClusterRole` to a user named "root" across the entire cluster:
```
kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root
```
* Grant the `system:node` `ClusterRole` to a user named "kubelet" across the entire cluster:
```
kubectl create clusterrolebinding kubelet-node-binding --clusterrole=system:node --user=kubelet
```
* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme" across the entire cluster:
```
kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp
```
See the CLI help for detailed usage.
## Service Account Permissions
Default RBAC policies grant scoped permissions to control-plane components, nodes,
and controllers, but grant *no permissions* to service accounts outside the `kube-system` namespace
(beyond discovery permissions given to all authenticated users).
This allows you to grant particular roles to particular service accounts as needed.
Fine-grained role bindings provide greater security, but require more effort to administer.
Broader grants can give unnecessary (and potentially escalating) API access to service accounts, but are easier to administer.
In order from most secure to least secure, the approaches are:
1. Grant a role to an application-specific service account (best practice)
This requires the application to specify a `serviceAccountName` in its pod spec,
and for the service account to be created (via the API, application manifest, `kubectl create serviceaccount`, etc.).
For example, grant read-only permission within "my-namespace" to the "my-sa" service account:
```shell
kubectl create rolebinding my-sa-view \
--clusterrole=view \
--serviceaccount=my-namespace:my-sa \
--namespace=my-namespace
```
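For reference, a minimal (hypothetical) pod spec that runs as that service account sets `serviceAccountName` as shown here; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # example name
  namespace: my-namespace
spec:
  serviceAccountName: my-sa
  containers:
  - name: main
    image: nginx            # placeholder image
```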
2. Grant a role to the "default" service account in a namespace
If an application does not specify a `serviceAccountName`, it uses the "default" service account.
{{< note >}}**Note:** Permissions given to the "default" service
account are available to any pod in the namespace that does not
specify a `serviceAccountName`.{{< /note >}}
For example, grant read-only permission within "my-namespace" to the "default" service account:
```shell
kubectl create rolebinding default-view \
--clusterrole=view \
--serviceaccount=my-namespace:default \
--namespace=my-namespace
```
Many [add-ons](/docs/concepts/cluster-administration/addons/) currently run as the "default" service account in the `kube-system` namespace.
To allow those add-ons to run with super-user access, grant cluster-admin permissions to the "default" service account in the `kube-system` namespace.
{{< note >}}**Note:** Enabling this means the `kube-system`
namespace contains secrets that grant super-user access to the
API.{{< /note >}}
```shell
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
```
3. Grant a role to all service accounts in a namespace
If you want all applications in a namespace to have a role, no matter what service account they use,
you can grant a role to the service account group for that namespace.
For example, grant read-only permission within "my-namespace" to all service accounts in that namespace:
```shell
kubectl create rolebinding serviceaccounts-view \
--clusterrole=view \
--group=system:serviceaccounts:my-namespace \
--namespace=my-namespace
```
4. Grant a limited role to all service accounts cluster-wide (discouraged)
If you don't want to manage permissions per-namespace, you can grant a cluster-wide role to all service accounts.
For example, grant read-only permission across all namespaces to all service accounts in the cluster:
```shell
kubectl create clusterrolebinding serviceaccounts-view \
--clusterrole=view \
--group=system:serviceaccounts
```
5. Grant super-user access to all service accounts cluster-wide (strongly discouraged)
If you don't care about partitioning permissions at all, you can grant super-user access to all service accounts.
{{< warning >}}**Warning:** This allows any user with read access
to secrets or the ability to create a pod to access super-user
credentials.{{< /warning >}}
```shell
kubectl create clusterrolebinding serviceaccounts-cluster-admin \
--clusterrole=cluster-admin \
--group=system:serviceaccounts
```
## Upgrading from 1.5
Prior to Kubernetes 1.6, many deployments used very permissive ABAC policies,
including granting full API access to all service accounts.
Default RBAC policies grant scoped permissions to control-plane components, nodes,
and controllers, but grant *no permissions* to service accounts outside the `kube-system` namespace
(beyond discovery permissions given to all authenticated users).
While far more secure, this can be disruptive to existing workloads expecting to automatically receive API permissions.
Here are two approaches for managing this transition:
### Parallel Authorizers
Run both the RBAC and ABAC authorizers, and specify a policy file that contains
[the legacy ABAC policy](/docs/reference/access-authn-authz/abac/#policy-file-format):
```
--authorization-mode=RBAC,ABAC --authorization-policy-file=mypolicy.json
```
The RBAC authorizer will attempt to authorize requests first. If it denies an API request,
the ABAC authorizer is then run. This means that any request allowed by *either* the RBAC
or ABAC policies is allowed.
When run with a log level of 2 or higher (`--v=2`), you can see RBAC denials in the apiserver log (prefixed with `RBAC DENY:`).
You can use that information to determine which roles need to be granted to which users, groups, or service accounts.
Once you have [granted roles to service accounts](#service-account-permissions) and workloads are running with no RBAC denial messages
in the server logs, you can remove the ABAC authorizer.
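How you inspect the apiserver log depends on how your control plane is deployed. For example, with an apiserver running as a static pod in `kube-system` (the pod name varies by node and is a placeholder here), one way to filter for denials might be:

```shell
# Adjust the pod name and namespace for your cluster.
kubectl logs -n kube-system kube-apiserver-<node-name> | grep "RBAC DENY"
```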
## Permissive RBAC Permissions
You can replicate a permissive policy using RBAC role bindings.
{{< warning >}}
**Warning:** The following policy allows **ALL** service accounts to act as cluster administrators.
Any application running in a container receives service account credentials automatically,
and could perform any action against the API, including viewing secrets and modifying permissions.
This is not a recommended policy.
```
kubectl create clusterrolebinding permissive-binding \
--clusterrole=cluster-admin \
--user=admin \
--user=kubelet \
--group=system:serviceaccounts
```
{{< /warning >}}
{{% /capture %}}