From abdcfbea3006749fd39abe79acb4f673ac8effba Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Mon, 28 Mar 2016 18:08:58 -0700 Subject: [PATCH 1/4] Further update docs to reflect changes to kubectl run --- docs/admin/limitrange/index.md | 108 +++++---- docs/admin/namespaces/index.md | 243 ++++++++++++-------- docs/admin/namespaces/walkthrough.md | 67 ++---- docs/admin/resourcequota/index.md | 278 ++++++++++++----------- docs/admin/resourcequota/walkthrough.md | 14 +- docs/user-guide/debugging-services.md | 3 +- docs/user-guide/docker-cli-to-kubectl.md | 7 + docs/user-guide/pods/single-container.md | 1 + 8 files changed, 393 insertions(+), 328 deletions(-) mode change 100755 => 100644 docs/admin/resourcequota/index.md diff --git a/docs/admin/limitrange/index.md b/docs/admin/limitrange/index.md index 2be370d157..10180514f8 100644 --- a/docs/admin/limitrange/index.md +++ b/docs/admin/limitrange/index.md @@ -22,7 +22,7 @@ may be too small to be useful, but big enough for the waste to be costly over th the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their average node size in order to provide for more uniform scheduling and to limit waste. -This example demonstrates how limits can be applied to a Kubernetes namespace to control +This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces) to control min/max resource limits per pod. In addition, this example demonstrates how you can apply default resource limits to pods in the absence of an end-user specified value. 
@@ -41,12 +41,17 @@ This example will work in a custom namespace to demonstrate the concepts involve Let's create a new namespace called limit-example: ```shell -$ kubectl create -f docs/admin/limitrange/namespace.yaml -namespace "limit-example" created +$ kubectl create namespace limit-example +namespace "limit-example" created +``` + +Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands: + +```shell $ kubectl get namespaces -NAME LABELS STATUS AGE -default Active 5m -limit-example Active 53s +NAME STATUS AGE +default Active 51s +limit-example Active 45s ``` ## Step 2: Apply a limit to the namespace @@ -95,36 +100,45 @@ were previously created in a namespace. If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time of creation explaining why. -Let's first spin up a deployment that creates a single container pod to demonstrate +Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate how default values are applied to each pod. ```shell $ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example deployment "nginx" created +``` + +Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead. +If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. +The Deployment manages 1 replica of a single-container Pod. Let's take a look at the Pod it manages. 
First, find the name of the Pod: + +```shell $ kubectl get pods --namespace=limit-example NAME READY STATUS RESTARTS AGE nginx-2040093540-s8vzu 1/1 Running 0 11s -$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8 ``` -```yaml - resourceVersion: "127" - selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf - uid: 51be42a7-7156-11e5-9921-286ed488f785 -spec: - containers: - - image: nginx - imagePullPolicy: IfNotPresent - name: nginx - resources: - limits: - cpu: 300m - memory: 200Mi - requests: - cpu: 200m - memory: 100Mi - terminationMessagePath: /dev/termination-log - volumeMounts: +Let's print this Pod in YAML output format (using the `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different. + +```shell +$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8 + resourceVersion: "57" + selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-s8vzu + uid: 67b20741-f53b-11e5-b066-64510658e388 +spec: + containers: + - image: nginx + imagePullPolicy: Always + name: nginx + resources: + limits: + cpu: 300m + memory: 200Mi + requests: + cpu: 200m + memory: 100Mi + terminationMessagePath: /dev/termination-log + volumeMounts: ``` Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*. @@ -141,37 +155,39 @@ Let's create a pod that falls within the allowed limit boundaries. 
```shell $ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example pod "valid-pod" created -$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources ``` -```yaml - uid: 162a12aa-7157-11e5-9921-286ed488f785 -spec: - containers: - - image: gcr.io/google_containers/serve_hostname - imagePullPolicy: IfNotPresent - name: kubernetes-serve-hostname - resources: - limits: - cpu: "1" - memory: 512Mi - requests: - cpu: "1" - memory: 512Mi +Now look at the Pod's resources field: + +```shell +$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources + uid: 3b1bfd7a-f53c-11e5-b066-64510658e388 +spec: + containers: + - image: gcr.io/google_containers/serve_hostname + imagePullPolicy: Always + name: kubernetes-serve-hostname + resources: + limits: + cpu: "1" + memory: 512Mi + requests: + cpu: "1" + memory: 512Mi ``` Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace default values. -Note: The *limits* for CPU resource are not enforced in the default Kubernetes setup on the physical node +Note: The *limits* for CPU resources are enforced in the default Kubernetes setup on the physical node that runs the container unless the administrator deploys the kubelet with the following flag: ```shell $ kubelet --help Usage of kubelet .... - --cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits -$ kubelet --cpu-cfs-quota=true ... + --cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits +$ kubelet --cpu-cfs-quota=false ... 
``` ## Step 4: Cleanup @@ -182,8 +198,8 @@ To remove the resources used by this example, you can just delete the limit-exam $ kubectl delete namespace limit-example namespace "limit-example" deleted $ kubectl get namespaces -NAME LABELS STATUS AGE -default Active 20m +NAME STATUS AGE +default Active 12m ``` ## Summary diff --git a/docs/admin/namespaces/index.md b/docs/admin/namespaces/index.md index 6a5265eadd..082b9fc2ac 100644 --- a/docs/admin/namespaces/index.md +++ b/docs/admin/namespaces/index.md @@ -1,145 +1,200 @@ --- --- -A Namespace is a mechanism to partition resources created by users into -a logically named group. +Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster. -## Motivation +It does this by providing the following: -A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community'). +1. A scope for [Names](/docs/user-guide/identifiers). +2. A mechanism to attach authorization and policy to a subsection of the cluster. -Each user community wants to be able to work in isolation from other communities. +Use of multiple namespaces is optional. -Each user community has its own: +This example demonstrates how to use Kubernetes namespaces to subdivide your cluster. -1. resources (pods, services, replication controllers, etc.) -2. policies (who can or cannot perform actions in their community) -3. constraints (this community is allowed this much quota, etc.) +### Step Zero: Prerequisites -A cluster operator may create a Namespace for each unique user community. +This example assumes the following: -The Namespace provides a unique scope for: +1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/). +2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods)_, _[Services](/docs/user-guide/services)_, and _[Deployments](/docs/user-guide/deployments)_. -1. named resources (to avoid basic naming collisions) -2. 
delegated management authority to trusted users -3. ability to limit community resource consumption +### Step One: Understand the default namespace -## Use cases +By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, +Services, and Deployments used by the cluster. -1. As a cluster operator, I want to support multiple user communities on a single cluster. -2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users - in those communities. -3. As a cluster operator, I want to limit the amount of resources each community can consume in order - to limit the impact to other communities using the cluster. -4. As a cluster user, I want to interact with resources that are pertinent to my user community in - isolation of what other user communities are doing on the cluster. - - -## Usage - -Look [here](/docs/admin/namespaces/) for an in depth example of namespaces. - -### Viewing namespaces - -You can list the current namespaces in a cluster using: +Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following: ```shell $ kubectl get namespaces -NAME LABELS STATUS -default Active -kube-system Active +NAME STATUS AGE +default Active 13m ``` -Kubernetes starts with two initial namespaces: - * `default` The default namespace for objects with no other namespace - * `kube-system` The namespace for objects created by the Kubernetes system +### Step Two: Create new namespaces -You can also get the summary of a specific namespace using: +For this exercise, we will create two additional Kubernetes namespaces to hold our content. + +Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases. 
+ +The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments +they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources +are relaxed to enable agile development. + +The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of +Pods, Services, and Deployments that run the production site. + +One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. + +Let's create two new namespaces to hold our work. + +Use the file [`namespace-dev.json`](/docs/admin/namespaces/namespace-dev.json) which describes a development namespace: + +{% include code.html language="json" file="namespace-dev.json" ghlink="/docs/admin/namespaces/namespace-dev.json" %} + +Create the development namespace using kubectl. ```shell +$ kubectl create -f docs/admin/namespaces/namespace-dev.json ``` +And then let's create the production namespace using kubectl. ```shell +$ kubectl create -f docs/admin/namespaces/namespace-prod.json ``` +To be sure things are right, let's list all of the namespaces in our cluster. -Note that these details show both resource quota (if present) as well as resource limit ranges. -Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators -to define *Hard* resource usage limits that a *Namespace* may consume. 
+```shell +$ kubectl get namespaces --show-labels +NAME STATUS AGE LABELS +default Active 32m +development Active 29s name=development +production Active 23s name=production +``` -A limit range defines min/max constraints on the amount of resources a single entity can consume in -a *Namespace*. +### Step Three: Create pods in each namespace -See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) +A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. -A namespace can be in one of two phases: - * `Active` the namespace is in use - * `Terminating` the namespace is being deleted, and can not be used for new objects +Users interacting with one namespace do not see the content in another namespace. -See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details. +To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace. 
-### Creating a new namespace +We first check what is the current context: -To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents: - -```yaml +```shell +$ kubectl config view apiVersion: v1 -kind: Namespace -metadata: - name: +clusters: +- cluster: + certificate-authority-data: REDACTED + server: https://130.211.122.180 + name: lithe-cocoa-92103_kubernetes +contexts: +- context: + cluster: lithe-cocoa-92103_kubernetes + user: lithe-cocoa-92103_kubernetes + name: lithe-cocoa-92103_kubernetes +current-context: lithe-cocoa-92103_kubernetes +kind: Config +preferences: {} +users: +- name: lithe-cocoa-92103_kubernetes + user: + client-certificate-data: REDACTED + client-key-data: REDACTED + token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b +- name: lithe-cocoa-92103_kubernetes-basic-auth + user: + password: h5M0FtUUIflBSdI7 + username: admin + +$ kubectl config current-context +lithe-cocoa-92103_kubernetes ``` -Note that the name of your namespace must be a DNS compatible label. - -More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers). - -Then run: +The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context. ```shell -$ kubectl create -f ./my-namespace.yaml +$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes +$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes ``` -### Working in namespaces +The above commands provided two request contexts you can alternate against depending on what namespace you +wish to work against. 
-See [Setting the namespace for a request](/docs/user-guide/namespaces/#setting-the-namespace-for-a-request) -and [Setting the namespace preference](/docs/user-guide/namespaces/#setting-the-namespace-preference). -### Deleting a namespace -You can delete a namespace with +Let's switch to operate in the development namespace. ```shell -$ kubectl delete namespaces +$ kubectl config use-context dev ``` -**WARNING, this deletes _everything_ under the namespace!** +You can verify your current context by doing the following: -This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state. +```shell +$ kubectl config current-context +dev +``` -## Namespaces and DNS +At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. -When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/admin/dns). -This entry is of the form `..svc.cluster.local`, which means -that if a container just uses `` it will resolve to the service which -is local to a namespace. This is useful for using the same configuration across -multiple namespaces such as Development, Staging and Production. If you want to reach -across namespaces, you need to use the fully qualified domain name (FQDN). +Let's create some content. -## Design ```shell +$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 +``` +We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. +Note that `kubectl run` creates deployments only on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead. +If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. 
-Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace) -can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md) \ No newline at end of file +```shell +$ kubectl get deployment +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +snowflake 2 2 2 2 2m + +$ kubectl get pods -l run=snowflake +NAME READY STATUS RESTARTS AGE +snowflake-3968820950-9dgr8 1/1 Running 0 2m +snowflake-3968820950-vgc4n 1/1 Running 0 2m +``` + +And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. + +Let's switch to the production namespace and show how resources in one namespace are hidden from the other. + +```shell +$ kubectl config use-context prod +``` + +The production namespace should be empty, and the following commands should return nothing. + +```shell +$ kubectl get deployment +$ kubectl get pods +``` + +Production likes to run cattle, so let's create some cattle pods. + +```shell +$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 + +$ kubectl get deployment +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +cattle 5 5 5 5 10s + +$ kubectl get pods -l run=cattle +NAME READY STATUS RESTARTS AGE +cattle-2263376956-41xy6 1/1 Running 0 34s +cattle-2263376956-kw466 1/1 Running 0 34s +cattle-2263376956-n4v97 1/1 Running 0 34s +cattle-2263376956-p5p3i 1/1 Running 0 34s +cattle-2263376956-sxpth 1/1 Running 0 34s +``` + +At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace. + +As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different +authorization rules for each namespace. 
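For reference, a namespace manifest like the `namespace-dev.json` used in the steps above could look like the following. This is a minimal sketch for illustration (the actual file in the docs repo may differ); the `name=development` label matches the `--show-labels` output shown earlier:

```json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "development",
    "labels": {
      "name": "development"
    }
  }
}
```

A matching `namespace-prod.json` would be identical except with `production` in place of `development`.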
diff --git a/docs/admin/namespaces/walkthrough.md b/docs/admin/namespaces/walkthrough.md index 93868d0df4..c3808b3209 100644 --- a/docs/admin/namespaces/walkthrough.md +++ b/docs/admin/namespaces/walkthrough.md @@ -5,7 +5,7 @@ Kubernetes _namespaces_ help different projects, teams, or customers to share a It does this by providing the following: -1. A scope for [Names](/docs/user-guide/identifiers/). +1. A scope for [Names](/docs/user-guide/identifiers). 2. A mechanism to attach authorization and policy to a subsection of the cluster. Use of multiple namespaces is optional. @@ -17,7 +17,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu This example assumes the following: 1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/). -2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods/)_, _[Services](/docs/user-guide/services/)_, and _[Deployments](/docs/user-guide/deployments/)_. +2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods)_, _[Services](/docs/user-guide/services)_, and _[Deployments](/docs/user-guide/deployments)_. ### Step One: Understand the default namespace @@ -28,8 +28,8 @@ Assuming you have a fresh cluster, you can introspect the available namespace's ```shell $ kubectl get namespaces -NAME LABELS -default +NAME STATUS AGE +default Active 13m ``` ### Step Two: Create new namespaces @@ -43,7 +43,7 @@ they use to build and run their application. In this space, Kubernetes resource are relaxed to enable agile development. The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of -pods, services, and Deployments that run the production site. +Pods, Services, and Deployments that run the production site. One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. 
@@ -68,11 +68,11 @@ $ kubectl create -f docs/admin/namespaces/namespace-prod.json To be sure things are right, let's list all of the namespaces in our cluster. ```shell -$ kubectl get namespaces -NAME LABELS STATUS -default Active -development name=development Active -production name=production Active +$ kubectl get namespaces --show-labels +NAME STATUS AGE LABELS +default Active 32m +development Active 29s name=development +production Active 23s name=production ``` ### Step Three: Create pods in each namespace @@ -85,7 +85,8 @@ To demonstrate this, let's spin up a simple Deployment and Pods in the developme We first check what is the current context: -```yaml +```shell +$ kubectl config view apiVersion: v1 clusters: - cluster: @@ -110,6 +111,9 @@ users: user: password: h5M0FtUUIflBSdI7 username: admin + +$ kubectl config current-context +lithe-cocoa-92103_kubernetes ``` The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context. 
@@ -131,44 +135,8 @@ $ kubectl config use-context dev You can verify your current context by doing the following: ```shell -$ kubectl config view -``` - -```yaml -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: REDACTED - server: https://130.211.122.180 - name: lithe-cocoa-92103_kubernetes -contexts: -- context: - cluster: lithe-cocoa-92103_kubernetes - namespace: development - user: lithe-cocoa-92103_kubernetes - name: dev -- context: - cluster: lithe-cocoa-92103_kubernetes - user: lithe-cocoa-92103_kubernetes - name: lithe-cocoa-92103_kubernetes -- context: - cluster: lithe-cocoa-92103_kubernetes - namespace: production - user: lithe-cocoa-92103_kubernetes - name: prod -current-context: dev -kind: Config -preferences: {} -users: -- name: lithe-cocoa-92103_kubernetes - user: - client-certificate-data: REDACTED - client-key-data: REDACTED - token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b -- name: lithe-cocoa-92103_kubernetes-basic-auth - user: - password: h5M0FtUUIflBSdI7 - username: admin +$ kubectl config current-context +dev ``` At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. @@ -180,6 +148,7 @@ $ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. +If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. 
```shell $ kubectl get deployment diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md old mode 100755 new mode 100644 index beafc08279..eeacbddabf --- a/docs/admin/resourcequota/index.md +++ b/docs/admin/resourcequota/index.md @@ -1,154 +1,164 @@ --- --- -When several users or teams share a cluster with a fixed number of nodes, -there is a concern that one team could use more than its fair share of resources. +This example demonstrates how [resource quota](/docs/admin/admission-controllers/#resourcequota) and +[limit ranger](/docs/admin/admission-controllers/#limitranger) can be applied to a Kubernetes namespace. +See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information. -Resource quotas are a tool for administrators to address this concern. Resource quotas -work like this: +This example assumes you have a functional Kubernetes setup. -- Different teams work in different namespaces. Currently this is voluntary, but - support for making this mandatory via ACLs is planned. -- The administrator creates a Resource Quota for each namespace. -- Users put compute resource requests on their pods. The sum of all resource requests across - all pods in the same namespace must not exceed any hard resource limit in any Resource Quota - document for the namespace. Note that we used to verify Resource Quota by taking the sum of - resource limits of the pods, but this was altered to use resource requests. Backwards compatibility - for those pods previously created is preserved because pods that only specify a resource limit have - their resource requests defaulted to match their defined limits. The user is only charged for the - resources they request in the Resource Quota versus their limits because the request is the minimum - amount of resource guaranteed by the cluster during scheduling. 
For more information on over commit, - see [compute-resources](/docs/user-guide/compute-resources). -- If creating a pod would cause the namespace to exceed any of the limits specified in the - the Resource Quota for that namespace, then the request will fail with HTTP status - code `403 FORBIDDEN`. -- If quota is enabled in a namespace and the user does not specify *requests* on the pod for each - of the resources for which quota is enabled, then the POST of the pod will fail with HTTP - status code `403 FORBIDDEN`. Hint: Use the LimitRange admission controller to force default - values of *limits* (then resource *requests* would be equal to *limits* by default, see - [admission controller](/docs/admin/admission-controllers)) before the quota is checked to avoid this problem. +## Step 1: Create a namespace -Examples of policies that could be created using namespaces and quotas are: +This example will work in a custom namespace to demonstrate the concepts involved. -- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 Gib and 10 cores, - let B use 10GiB and 4 cores, and hold 2GiB and 2 cores in reserve for future allocation. -- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace - use any amount. - -In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces, -there may be contention for resources. This is handled on a first-come-first-served basis. - -Neither contention nor changes to quota will affect already-running pods. - -## Enabling Resource Quota - -Resource Quota support is enabled by default for many Kubernetes distributions. It is -enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as -one of its arguments. - -Resource Quota is enforced in a particular namespace when there is a -`ResourceQuota` object in that namespace. There should be at most one -`ResourceQuota` object in a namespace. 
- -## Compute Resource Quota - -The total sum of [compute resources](/docs/user-guide/compute-resources) requested by pods -in a namespace can be limited. The following compute resource types are supported: - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Total cpu requests of containers | -| memory | Total memory requests of containers - -For example, `cpu` quota sums up the `resources.requests.cpu` fields of every -container of every pod in the namespace, and enforces a maximum on that sum. - -## Object Count Quota - -The number of objects of a given type can be restricted. The following types -are supported: - -| ResourceName | Description | -| ------------ | ----------- | -| pods | Total number of pods | -| services | Total number of services | -| replicationcontrollers | Total number of replication controllers | -| resourcequotas | Total number of [resource quotas](/docs/admin/admission-controllers/#resourcequota) | -| secrets | Total number of secrets | -| persistentvolumeclaims | Total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | - -For example, `pods` quota counts and enforces a maximum on the number of `pods` -created in a single namespace. - -You might want to set a pods quota on a namespace -to avoid the case where a user creates many small pods and exhausts the cluster's -supply of Pod IPs. 
- -## Viewing and Setting Quotas - -Kubectl supports creating, updating, and viewing quotas: +Let's create a new namespace called quota-example: ```shell -$ kubectl namespace myspace -$ cat < quota.json -{ - "apiVersion": "v1", - "kind": "ResourceQuota", - "metadata": { - "name": "quota" - }, - "spec": { - "hard": { - "memory": "1Gi", - "cpu": "20", - "pods": "10", - "services": "5", - "replicationcontrollers":"20", - "resourcequotas":"1" - } - } -} -EOF -$ kubectl create -f ./quota.json -$ kubectl get quota -NAME -quota -$ kubectl describe quota quota -Name: quota -Resource Used Hard --------- ---- ---- -cpu 0m 20 -memory 0 1Gi -pods 5 10 -replicationcontrollers 5 20 -resourcequotas 1 1 -services 3 5 +$ kubectl create namespace quota-example +namespace "quota-example" created ``` -## Quota and Cluster Capacity +Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands: -Resource Quota objects are independent of the Cluster Capacity. They are -expressed in absolute units. So, if you add nodes to your cluster, this does *not* -automatically give each namespace the ability to consume more resources. +```shell +$ kubectl get namespaces +NAME STATUS AGE +default Active 50m +quota-example Active 2s +``` -Sometimes more complex policies may be desired, such as: ## Step 2: Apply a quota to the namespace - proportionally divide total cluster resources among several teams. - allow each tenant to grow resource usage as needed, but have a generous - limit to prevent accidental resource exhaustion. - detect demand from one namespace, add nodes, and increase quota. +By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the +system will be able to consume as much CPU and memory as is available on the node that executes the pod. 
-Such policies could be implemented using ResourceQuota as a building-block, by +Users may want to restrict how much of the cluster resources a given namespace may consume -writing a 'controller' which watches the quota usage and adjusts the quota +across all of its pods in order to manage cluster usage. To do this, a user applies a quota to -hard limits of each namespace according to other signals. +a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory) +and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes -Note that resource quota divides up aggregate cluster resources, but it creates no -restrictions around nodes: pods from several namespaces may run on the same node. +checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace. -## Example +Let's create a simple quota in our namespace: +```shell +$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example +resourcequota "quota" created +``` -See a [detailed example for how to use resource quota](/docs/admin/resourcequota/). +Once your quota is applied to a namespace, the system will restrict any creation of content +in the namespace until the quota usage has been calculated. This should happen quickly. -## Read More +You can describe your current quota usage to see what resources are being consumed in your +namespace. -See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information. 
\ No newline at end of file +```shell +$ kubectl describe quota quota --namespace=quota-example +Name: quota +Namespace: quota-example +Resource Used Hard +-------- ---- ---- +cpu 0 20 +memory 0 1Gi +persistentvolumeclaims 0 10 +pods 0 10 +replicationcontrollers 0 20 +resourcequotas 1 1 +secrets 1 10 +services 0 5 +``` + +## Step 3: Applying default resource requests and limits + +Pod authors rarely specify resource requests and limits for their pods. + +Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded +cpu and memory by creating an nginx container. + +To demonstrate, lets create a Deployment that runs nginx: + +```shell +$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example +deployment "nginx" created +``` + +This creates a Deployment "nginx" with its underlying resource, a ReplicaSet, which handles the creation and deletion of Pod replicas. Now let's look at the pods that were created. + +```shell +$ kubectl get pods --namespace=quota-example +NAME READY STATUS RESTARTS AGE +``` + +What happened? I have no pods! Let's describe the ReplicaSet managed by the nginx Deployment to get a view of what is happening. +Note that `kubectl describe rs` works only on kubernetes cluster >= v1.2. If you are running older versions, use `kubectl describe rc` instead. +If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. + +```shell +$ kubectl describe rs -l run=nginx --namespace=quota-example +Name: nginx-2040093540 +Namespace: quota-example +Image(s): nginx +Selector: pod-template-hash=2040093540,run=nginx +Labels: pod-template-hash=2040093540,run=nginx +Replicas: 0 current / 1 desired +Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed +No volumes. 
+Events: + FirstSeen LastSeen Count From SubobjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 48s 26s 4 {replicaset-controller } Warning FailedCreate Error creating: pods "nginx-2040093540-" is forbidden: Failed quota: quota: must specify cpu,memory +``` + +The Kubernetes API server is rejecting the ReplicaSet requests to create a pod because our pods +do not specify any memory usage *request*. + +So let's set some default values for the amount of cpu and memory a pod can consume: + +```shell +$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example +limitrange "limits" created +$ kubectl describe limits limits --namespace=quota-example +Name: limits +Namespace: quota-example +Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio +---- -------- --- --- --------------- ------------- ----------------------- +Container cpu - - 100m 200m - +Container memory - - 256Mi 512Mi - +``` + +Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default +amount of cpu and memory per container will be applied, and the request will be used as part of admission control. + +Now that we have applied default resource *request* for our namespace, our Deployment should be able to +create its pods. 
+ +```shell +$ kubectl get pods --namespace=quota-example +NAME READY STATUS RESTARTS AGE +nginx-2040093540-miohp 1/1 Running 0 5s +``` + +And if we print out our quota usage in the namespace: + +```shell +$ kubectl describe quota quota --namespace=quota-example +Name: quota +Namespace: quota-example +Resource Used Hard +-------- ---- ---- +cpu 100m 20 +memory 256Mi 1Gi +persistentvolumeclaims 0 10 +pods 1 10 +replicationcontrollers 1 20 +resourcequotas 1 1 +secrets 1 10 +services 0 5 +``` + +You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*), and the usage is being tracked by the Kubernetes system properly. + +## Summary + +Actions that consume node resources for cpu and memory can be subject to hard quota limits defined by the namespace quota. The resource consumption is measured by resource *request* in pod specification. + +Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to meet your end goal. diff --git a/docs/admin/resourcequota/walkthrough.md b/docs/admin/resourcequota/walkthrough.md index 27b34d708b..eeacbddabf 100644 --- a/docs/admin/resourcequota/walkthrough.md +++ b/docs/admin/resourcequota/walkthrough.md @@ -14,12 +14,17 @@ This example will work in a custom namespace to demonstrate the concepts involve Let's create a new namespace called quota-example: ```shell -$ kubectl create -f docs/admin/resourcequota/namespace.yaml +$ kubectl create namespace quota-example namespace "quota-example" created +``` + +Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands: + +```shell $ kubectl get namespaces -NAME LABELS STATUS AGE -default Active 2m -quota-example Active 39s +NAME STATUS AGE +default Active 50m +quota-example Active 2s ``` ## Step 2: Apply a quota to the namespace @@ -85,6 +90,7 @@ NAME READY STATUS RESTARTS AGE What happened? I have no pods! 
Let's describe the ReplicaSet managed by the nginx Deployment to get a view of what is happening.
 Note that `kubectl describe rs` works only on kubernetes cluster >= v1.2. If you are running older versions, use `kubectl describe rc` instead.
+If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
 
 ```shell
 $ kubectl describe rs -l run=nginx --namespace=quota-example
diff --git a/docs/user-guide/debugging-services.md b/docs/user-guide/debugging-services.md
index e092593630..9bbc5ca768 100644
--- a/docs/user-guide/debugging-services.md
+++ b/docs/user-guide/debugging-services.md
@@ -57,9 +57,10 @@ spec:
   - sleep
   - "1000000"
 EOF
-pods/busybox-sleep
+pod "busybox-sleep" created
 ```
 
+Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
 Now, when you need to run a command (even an interactive shell) in a `Pod`-like context, use:
 
diff --git a/docs/user-guide/docker-cli-to-kubectl.md b/docs/user-guide/docker-cli-to-kubectl.md
index 0b2e33c396..62c763c0e6 100644
--- a/docs/user-guide/docker-cli-to-kubectl.md
+++ b/docs/user-guide/docker-cli-to-kubectl.md
@@ -26,6 +26,13 @@ With kubectl:
 # start the pod running nginx
 $ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
 deployment "nginx-app" created
+```
+
+`kubectl run` creates a Deployment named "nginx-app" on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
+If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
+Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
Now, we can expose a new Service with the deployment created above:
+
+```shell
 # expose a port through with a service
 $ kubectl expose deployment nginx-app --port=80 --name=nginx-http
 service "nginx-http" exposed
diff --git a/docs/user-guide/pods/single-container.md b/docs/user-guide/pods/single-container.md
index 57e8556240..66e606175b 100644
--- a/docs/user-guide/pods/single-container.md
+++ b/docs/user-guide/pods/single-container.md
@@ -35,6 +35,7 @@ $ kubectl run NAME
 Where:
 
+* `kubectl run` creates a Deployment on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
 * `NAME` (required) is the name of the container to create. This value is also
   applied as the name of the Deployment, and as the prefix of the pod name. For
   example:
See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. The Deployment manages 1 replica of single container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod: ```shell diff --git a/docs/admin/namespaces/index.md b/docs/admin/namespaces/index.md index 082b9fc2ac..c3808b3209 100644 --- a/docs/admin/namespaces/index.md +++ b/docs/admin/namespaces/index.md @@ -148,7 +148,7 @@ $ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. -If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](docs/user-guide/kubectl/kubectl_run/) for more details. +If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. ```shell $ kubectl get deployment From a0600caf94169af0769036388ab0f33beac3e341 Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Tue, 29 Mar 2016 16:27:59 -0700 Subject: [PATCH 3/4] Update interactive pods section --- docs/user-guide/debugging-services.md | 34 ++++++++++----------------- 1 file changed, 12 insertions(+), 22 deletions(-) diff --git a/docs/user-guide/debugging-services.md b/docs/user-guide/debugging-services.md index 9bbc5ca768..b4f68e7e0c 100644 --- a/docs/user-guide/debugging-services.md +++ b/docs/user-guide/debugging-services.md @@ -40,38 +40,27 @@ OUTPUT ## Running commands in a Pod For many steps here you will want to see what a `Pod` running in the cluster -sees. Kubernetes does not directly support interactive `Pod`s (yet), but you can -approximate it: +sees. 
You can start a busybox `Pod` and run commands in it:
 
 ```shell
-$ cat <<EOF | kubectl create -f -
+$ kubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>
 ```
 
-or
+or run an interactive shell with:
 
 ```shell
-$ kubectl exec -ti busybox-sleep sh
+$ kubectl exec -ti <POD-NAME> -c <CONTAINER-NAME> sh
 / #
 ```
 
@@ -89,6 +78,7 @@ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
 deployment "hostnames" created
 ```
 
+`kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
 Note that this is the same as if you had started the `Deployment` with the following YAML:
diff --git a/docs/admin/namespaces/index.md b/docs/admin/namespaces/index.md index c3808b3209..973041c519 100644 --- a/docs/admin/namespaces/index.md +++ b/docs/admin/namespaces/index.md @@ -1,200 +1,145 @@ --- --- -Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster. +A Namespace is a mechanism to partition resources created by users into +a logically named group. -It does this by providing the following: +## Motivation -1. A scope for [Names](/docs/user-guide/identifiers). -2. A mechanism to attach authorization and policy to a subsection of the cluster. +A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community'). -Use of multiple namespaces is optional. +Each user community wants to be able to work in isolation from other communities. -This example demonstrates how to use Kubernetes namespaces to subdivide your cluster. +Each user community has its own: -### Step Zero: Prerequisites +1. resources (pods, services, replication controllers, etc.) +2. policies (who can or cannot perform actions in their community) +3. constraints (this community is allowed this much quota, etc.) -This example assumes the following: +A cluster operator may create a Namespace for each unique user community. -1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/). -2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods)_, _[Services](/docs/user-guide/services)_, and _[Deployments](/docs/user-guide/deployments)_. +The Namespace provides a unique scope for: -### Step One: Understand the default namespace +1. named resources (to avoid basic naming collisions) +2. delegated management authority to trusted users +3. 
ability to limit community resource consumption -By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, -Services, and Deployments used by the cluster. +## Use cases -Assuming you have a fresh cluster, you can introspect the available namespace's by doing the following: +1. As a cluster operator, I want to support multiple user communities on a single cluster. +2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users + in those communities. +3. As a cluster operator, I want to limit the amount of resources each community can consume in order + to limit the impact to other communities using the cluster. +4. As a cluster user, I want to interact with resources that are pertinent to my user community in + isolation of what other user communities are doing on the cluster. + + +## Usage + +Look [here](/docs/admin/namespaces/) for an in depth example of namespaces. + +### Viewing namespaces + +You can list the current namespaces in a cluster using: ```shell $ kubectl get namespaces -NAME STATUS AGE -default Active 13m +NAME LABELS STATUS +default Active +kube-system Active ``` -### Step Two: Create new namespaces +Kubernetes starts with two initial namespaces: + * `default` The default namespace for objects with no other namespace + * `kube-system` The namespace for objects created by the Kubernetes system -For this exercise, we will create two additional Kubernetes namespaces to hold our content. - -Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases. - -The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments -they use to build and run their application. 
In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources -are relaxed to enable agile development. - -The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of -Pods, Services, and Deployments that run the production site. - -One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. - -Let's create two new namespaces to hold our work. - -Use the file [`namespace-dev.json`](/docs/admin/namespaces/namespace-dev.json) which describes a development namespace: - -{% include code.html language="json" file="namespace-dev.json" ghlink="/docs/admin/namespaces/namespace-dev.json" %} - -Create the development namespace using kubectl. +You can also get the summary of a specific namespace using: ```shell -$ kubectl create -f docs/admin/namespaces/namespace-dev.json +$ kubectl get namespaces ``` -And then lets create the production namespace using kubectl. +Or you can get detailed information with: ```shell -$ kubectl create -f docs/admin/namespaces/namespace-prod.json +$ kubectl describe namespaces +Name: default +Labels: +Status: Active + +No resource quota. + +Resource Limits + Type Resource Min Max Default + ---- -------- --- --- --- + Container cpu - - 100m ``` -To be sure things are right, let's list all of the namespaces in our cluster. +Note that these details show both resource quota (if present) as well as resource limit ranges. -```shell -$ kubectl get namespaces --show-labels -NAME STATUS AGE LABELS -default Active 32m -development Active 29s name=development -production Active 23s name=production -``` +Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators +to define *Hard* resource usage limits that a *Namespace* may consume. 
-### Step Three: Create pods in each namespace +A limit range defines min/max constraints on the amount of resources a single entity can consume in +a *Namespace*. -A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. +See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) -Users interacting with one namespace do not see the content in another namespace. +A namespace can be in one of two phases: + * `Active` the namespace is in use + * `Terminating` the namespace is being deleted, and can not be used for new objects -To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace. +See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details. -We first check what is the current context: +### Creating a new namespace -```shell -$ kubectl config view +To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents: + +```yaml apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: REDACTED - server: https://130.211.122.180 - name: lithe-cocoa-92103_kubernetes -contexts: -- context: - cluster: lithe-cocoa-92103_kubernetes - user: lithe-cocoa-92103_kubernetes - name: lithe-cocoa-92103_kubernetes -current-context: lithe-cocoa-92103_kubernetes -kind: Config -preferences: {} -users: -- name: lithe-cocoa-92103_kubernetes - user: - client-certificate-data: REDACTED - client-key-data: REDACTED - token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b -- name: lithe-cocoa-92103_kubernetes-basic-auth - user: - password: h5M0FtUUIflBSdI7 - username: admin - -$ kubectl config current-context -lithe-cocoa-92103_kubernetes +kind: Namespace +metadata: + name: ``` -The next step is to define a context for the kubectl client to work in each namespace. 
The value of "cluster" and "user" fields are copied from the current context. +Note that the name of your namespace must be a DNS compatible label. + +More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers). + +Then run: ```shell -$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes -$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes +$ kubectl create -f ./my-namespace.yaml ``` -The above commands provided two request contexts you can alternate against depending on what namespace you -wish to work against. +### Working in namespaces -Let's switch to operate in the development namespace. +See [Setting the namespace for a request](/docs/user-guide/namespaces/#setting-the-namespace-for-a-request) +and [Setting the namespace preference](/docs/user-guide/namespaces/#setting-the-namespace-preference). + +### Deleting a namespace + +You can delete a namespace with ```shell -$ kubectl config use-context dev +$ kubectl delete namespaces ``` -You can verify your current context by doing the following: +**WARNING, this deletes _everything_ under the namespace!** -```shell -$ kubectl config current-context -dev -``` +This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state. -At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. +## Namespaces and DNS -Let's create some content. +When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/admin/dns). +This entry is of the form `..svc.cluster.local`, which means +that if a container just uses `` it will resolve to the service which +is local to a namespace. 
This is useful for using the same configuration across +multiple namespaces such as Development, Staging and Production. If you want to reach +across namespaces, you need to use the fully qualified domain name (FQDN). -```shell -$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 -``` -We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. -Note that `kubectl run` creates deployments only on kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. -If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. +## Design -```shell -$ kubectl get deployment -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -snowflake 2 2 2 2 2m - -$ kubectl get pods -l run=snowflake -NAME READY STATUS RESTARTS AGE -snowflake-3968820950-9dgr8 1/1 Running 0 2m -snowflake-3968820950-vgc4n 1/1 Running 0 2m -``` - -And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. - -Let's switch to the production namespace and show how resources in one namespace are hidden from the other. - -```shell -$ kubectl config use-context prod -``` - -The production namespace should be empty, and the following commands should return nothing. - -```shell -$ kubectl get deployment -$ kubectl get pods -``` - -Production likes to run cattle, so let's create some cattle pods. 
- -```shell -$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 - -$ kubectl get deployment -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -cattle 5 5 5 5 10s - -kubectl get pods -l run=cattle -NAME READY STATUS RESTARTS AGE -cattle-2263376956-41xy6 1/1 Running 0 34s -cattle-2263376956-kw466 1/1 Running 0 34s -cattle-2263376956-n4v97 1/1 Running 0 34s -cattle-2263376956-p5p3i 1/1 Running 0 34s -cattle-2263376956-sxpth 1/1 Running 0 34s -``` - -At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace. - -As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different -authorization rules for each namespace. +Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace) +can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md) diff --git a/docs/admin/namespaces/walkthrough.md b/docs/admin/namespaces/walkthrough.md index c3808b3209..e3b87c7f65 100644 --- a/docs/admin/namespaces/walkthrough.md +++ b/docs/admin/namespaces/walkthrough.md @@ -5,7 +5,7 @@ Kubernetes _namespaces_ help different projects, teams, or customers to share a It does this by providing the following: -1. A scope for [Names](/docs/user-guide/identifiers). +1. A scope for [Names](/docs/user-guide/identifiers/). 2. A mechanism to attach authorization and policy to a subsection of the cluster. Use of multiple namespaces is optional. @@ -17,7 +17,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu This example assumes the following: 1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/). -2. 
You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods)_, _[Services](/docs/user-guide/services)_, and _[Deployments](/docs/user-guide/deployments)_. +2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods/)_, _[Services](/docs/user-guide/services/)_, and _[Deployments](/docs/user-guide/deployments/)_. ### Step One: Understand the default namespace diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md index eeacbddabf..69667cf208 100644 --- a/docs/admin/resourcequota/index.md +++ b/docs/admin/resourcequota/index.md @@ -1,164 +1,154 @@ --- --- -This example demonstrates how [resource quota](/docs/admin/admission-controllers/#resourcequota) and -[limitsranger](/docs/admin/admission-controllers/#limitranger) can be applied to a Kubernetes namespace. +When several users or teams share a cluster with a fixed number of nodes, +there is a concern that one team could use more than its fair share of resources. + +Resource quotas are a tool for administrators to address this concern. Resource quotas +work like this: + +- Different teams work in different namespaces. Currently this is voluntary, but + support for making this mandatory via ACLs is planned. +- The administrator creates a Resource Quota for each namespace. +- Users put compute resource requests on their pods. The sum of all resource requests across + all pods in the same namespace must not exceed any hard resource limit in any Resource Quota + document for the namespace. Note that we used to verify Resource Quota by taking the sum of + resource limits of the pods, but this was altered to use resource requests. Backwards compatibility + for those pods previously created is preserved because pods that only specify a resource limit have + their resource requests defaulted to match their defined limits. 
The user is only charged for the
+  resources they request in the Resource Quota versus their limits because the request is the minimum
+  amount of resource guaranteed by the cluster during scheduling. For more information on overcommit,
+  see [compute-resources](/docs/user-guide/compute-resources).
+- If creating a pod would cause the namespace to exceed any of the limits specified in
+  the Resource Quota for that namespace, then the request will fail with HTTP status
+  code `403 FORBIDDEN`.
+- If quota is enabled in a namespace and the user does not specify *requests* on the pod for each
+  of the resources for which quota is enabled, then the POST of the pod will fail with HTTP
+  status code `403 FORBIDDEN`. Hint: Use the LimitRange admission controller to force default
+  values of *limits* (then resource *requests* would be equal to *limits* by default, see
+  [admission controller](/docs/admin/admission-controllers)) before the quota is checked to avoid this problem.
+
+Examples of policies that could be created using namespaces and quotas are:
+
+- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20GiB and 10 cores,
+  let B use 10GiB and 4 cores, and hold 2GiB and 2 cores in reserve for future allocation.
+- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
+  use any amount.
+
+In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
+there may be contention for resources. This is handled on a first-come-first-served basis.
+
+Neither contention nor changes to quota will affect already-running pods.
+
+## Enabling Resource Quota
+
+Resource Quota support is enabled by default for many Kubernetes distributions. It is
+enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as
+one of its arguments.
+
+Resource Quota is enforced in a particular namespace when there is a
+`ResourceQuota` object in that namespace.
There should be at most one +`ResourceQuota` object in a namespace. + +## Compute Resource Quota + +The total sum of [compute resources](/docs/user-guide/compute-resources) requested by pods +in a namespace can be limited. The following compute resource types are supported: + +| ResourceName | Description | +| ------------ | ----------- | +| cpu | Total cpu requests of containers | +| memory | Total memory requests of containers + +For example, `cpu` quota sums up the `resources.requests.cpu` fields of every +container of every pod in the namespace, and enforces a maximum on that sum. + +## Object Count Quota + +The number of objects of a given type can be restricted. The following types +are supported: + +| ResourceName | Description | +| ------------ | ----------- | +| pods | Total number of pods | +| services | Total number of services | +| replicationcontrollers | Total number of replication controllers | +| resourcequotas | Total number of [resource quotas](/docs/admin/admission-controllers/#resourcequota) | +| secrets | Total number of secrets | +| persistentvolumeclaims | Total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | + +For example, `pods` quota counts and enforces a maximum on the number of `pods` +created in a single namespace. + +You might want to set a pods quota on a namespace +to avoid the case where a user creates many small pods and exhausts the cluster's +supply of Pod IPs. 
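Putting the two tables above together, a single `ResourceQuota` may mix compute and object-count limits in its `spec.hard` map. The values below are an illustrative sketch (they mirror the quota values used elsewhere in these docs, not required defaults):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: combined-quota   # illustrative name
spec:
  hard:
    # Compute resources: caps on the sum of container *requests* in the namespace.
    cpu: "20"
    memory: 1Gi
    # Object counts: caps on the number of each object type in the namespace.
    pods: "10"
    services: "5"
    replicationcontrollers: "20"
    secrets: "10"
    persistentvolumeclaims: "10"
    resourcequotas: "1"
```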
+ +## Viewing and Setting Quotas + +Kubectl supports creating, updating, and viewing quotas: + +```shell +$ kubectl namespace myspace +$ cat < quota.json +{ + "apiVersion": "v1", + "kind": "ResourceQuota", + "metadata": { + "name": "quota" + }, + "spec": { + "hard": { + "memory": "1Gi", + "cpu": "20", + "pods": "10", + "services": "5", + "replicationcontrollers":"20", + "resourcequotas":"1" + } + } +} +EOF +$ kubectl create -f ./quota.json +$ kubectl get quota +NAME +quota +$ kubectl describe quota quota +Name: quota +Resource Used Hard +-------- ---- ---- +cpu 0m 20 +memory 0 1Gi +pods 5 10 +replicationcontrollers 5 20 +resourcequotas 1 1 +services 3 5 +``` + +## Quota and Cluster Capacity + +Resource Quota objects are independent of the Cluster Capacity. They are +expressed in absolute units. So, if you add nodes to your cluster, this does *not* +automatically give each namespace the ability to consume more resources. + +Sometimes more complex policies may be desired, such as: + + - proportionally divide total cluster resources among several teams. + - allow each tenant to grow resource usage as needed, but have a generous + limit to prevent accidental resource exhaustion. + - detect demand from one namespace, add nodes, and increase quota. + +Such policies could be implemented using ResourceQuota as a building-block, by +writing a 'controller' which watches the quota usage and adjusts the quota +hard limits of each namespace according to other signals. + +Note that resource quota divides up aggregate cluster resources, but it creates no +restrictions around nodes: pods from several namespaces may run on the same node. + +## Example + +See a [detailed example for how to use resource quota](/docs/admin/resourcequota/). + +## Read More + See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information. 
- -This example assumes you have a functional Kubernetes setup. - -## Step 1: Create a namespace - -This example will work in a custom namespace to demonstrate the concepts involved. - -Let's create a new namespace called quota-example: - -```shell -$ kubectl create namespace quota-example -namespace "quota-example" created -``` - -Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands: - -```shell -$ kubectl get namespaces -NAME STATUS AGE -default Active 50m -quota-example Active 2s -``` - -## Step 2: Apply a quota to the namespace - -By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the -system will be able to consume as much CPU and memory on the node that executes the pod. - -Users may want to restrict how much of the cluster resources a given namespace may consume -across all of its pods in order to manage cluster usage. To do this, a user applies a quota to -a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory) -and API resources (pods, services, etc.) that a namespace may consume. In term of resources, Kubernetes -checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace. - -Let's create a simple quota in our namespace: - -```shell -$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example -resourcequota "quota" created -``` - -Once your quota is applied to a namespace, the system will restrict any creation of content -in the namespace until the quota usage has been calculated. This should happen quickly. - -You can describe your current quota usage to see what resources are being consumed in your -namespace. 
- -```shell -$ kubectl describe quota quota --namespace=quota-example -Name: quota -Namespace: quota-example -Resource Used Hard --------- ---- ---- -cpu 0 20 -memory 0 1Gi -persistentvolumeclaims 0 10 -pods 0 10 -replicationcontrollers 0 20 -resourcequotas 1 1 -secrets 1 10 -services 0 5 -``` - -## Step 3: Applying default resource requests and limits - -Pod authors rarely specify resource requests and limits for their pods. - -Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded -cpu and memory by creating an nginx container. - -To demonstrate, lets create a Deployment that runs nginx: - -```shell -$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example -deployment "nginx" created -``` - -This creates a Deployment "nginx" with its underlying resource, a ReplicaSet, which handles the creation and deletion of Pod replicas. Now let's look at the pods that were created. - -```shell -$ kubectl get pods --namespace=quota-example -NAME READY STATUS RESTARTS AGE -``` - -What happened? I have no pods! Let's describe the ReplicaSet managed by the nginx Deployment to get a view of what is happening. -Note that `kubectl describe rs` works only on kubernetes cluster >= v1.2. If you are running older versions, use `kubectl describe rc` instead. -If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. - -```shell -$ kubectl describe rs -l run=nginx --namespace=quota-example -Name: nginx-2040093540 -Namespace: quota-example -Image(s): nginx -Selector: pod-template-hash=2040093540,run=nginx -Labels: pod-template-hash=2040093540,run=nginx -Replicas: 0 current / 1 desired -Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed -No volumes. 
-Events: - FirstSeen LastSeen Count From SubobjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 48s 26s 4 {replicaset-controller } Warning FailedCreate Error creating: pods "nginx-2040093540-" is forbidden: Failed quota: quota: must specify cpu,memory -``` - -The Kubernetes API server is rejecting the ReplicaSet requests to create a pod because our pods -do not specify any memory usage *request*. - -So let's set some default values for the amount of cpu and memory a pod can consume: - -```shell -$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example -limitrange "limits" created -$ kubectl describe limits limits --namespace=quota-example -Name: limits -Namespace: quota-example -Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ----- -------- --- --- --------------- ------------- ----------------------- -Container cpu - - 100m 200m - -Container memory - - 256Mi 512Mi - -``` - -Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default -amount of cpu and memory per container will be applied, and the request will be used as part of admission control. - -Now that we have applied default resource *request* for our namespace, our Deployment should be able to -create its pods. 
- -```shell -$ kubectl get pods --namespace=quota-example -NAME READY STATUS RESTARTS AGE -nginx-2040093540-miohp 1/1 Running 0 5s -``` - -And if we print out our quota usage in the namespace: - -```shell -$ kubectl describe quota quota --namespace=quota-example -Name: quota -Namespace: quota-example -Resource Used Hard --------- ---- ---- -cpu 100m 20 -memory 256Mi 1Gi -persistentvolumeclaims 0 10 -pods 1 10 -replicationcontrollers 1 20 -resourcequotas 1 1 -secrets 1 10 -services 0 5 -``` - -You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*), and the usage is being tracked by the Kubernetes system properly. - -## Summary - -Actions that consume node resources for cpu and memory can be subject to hard quota limits defined by the namespace quota. The resource consumption is measured by resource *request* in pod specification. - -Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to meet your end goal.