Update Task topics on resource management. (#4158)

pull/4127/merge
Steve Perry 2017-08-07 15:30:29 -07:00 committed by GitHub
parent 6da5f52fb2
commit 3d2c06c4c3
59 changed files with 2781 additions and 829 deletions

View File

@ -11,7 +11,9 @@ toc:
- title: Configure Pods and Containers
section:
- docs/tasks/configure-pod-container/assign-cpu-ram-container.md
- docs/tasks/configure-pod-container/assign-memory-resource.md
- docs/tasks/configure-pod-container/assign-cpu-resource.md
- docs/tasks/configure-pod-container/quality-service-pod.md
- docs/tasks/configure-pod-container/configure-volume-storage.md
- docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
- docs/tasks/configure-pod-container/configure-projected-volume-storage.md
@ -108,17 +110,22 @@ toc:
- title: Administer a Cluster
section:
- title: Manage Memory, CPU, and API Resources
section:
- docs/tasks/administer-cluster/memory-default-namespace.md
- docs/tasks/administer-cluster/cpu-default-namespace.md
- docs/tasks/administer-cluster/memory-constraint-namespace.md
- docs/tasks/administer-cluster/cpu-constraint-namespace.md
- docs/tasks/administer-cluster/apply-resource-quota-limit.md
- docs/tasks/administer-cluster/quota-memory-cpu-namespace.md
- docs/tasks/administer-cluster/quota-pod-namespace.md
- docs/tasks/administer-cluster/quota-api-object.md
- docs/tasks/administer-cluster/access-cluster-api.md
- docs/tasks/administer-cluster/access-cluster-services.md
- docs/tasks/administer-cluster/securing-a-cluster.md
- docs/tasks/administer-cluster/encrypt-data.md
- docs/tasks/administer-cluster/configure-upgrade-etcd.md
- docs/tasks/administer-cluster/apply-resource-quota-limit.md
- docs/tasks/administer-cluster/out-of-resource.md
- docs/tasks/administer-cluster/cpu-memory-limit.md
- docs/tasks/administer-cluster/reserve-compute-resources.md
- docs/tasks/administer-cluster/static-pod.md
- docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md
- docs/tasks/administer-cluster/cluster-management.md
- docs/tasks/administer-cluster/upgrade-1-6.md
- docs/tasks/administer-cluster/kubeadm-upgrade-1-7.md
@ -126,6 +133,10 @@ toc:
- docs/tasks/administer-cluster/namespaces-walkthrough.md
- docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
- docs/tasks/administer-cluster/safely-drain-node.md
- docs/tasks/administer-cluster/cpu-memory-limit.md
- docs/tasks/administer-cluster/out-of-resource.md
- docs/tasks/administer-cluster/reserve-compute-resources.md
- docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md
- docs/tasks/administer-cluster/declare-network-policy.md
- title: Install Network Policy Provider
section:

View File

@ -87,6 +87,7 @@
/docs/samples /docs/tutorials/ 301
/docs/tasks/administer-cluster/assign-pods-nodes /docs/tasks/configure-pod-container/assign-pods-nodes 301
/docs/tasks/administer-cluster/overview /docs/concepts/cluster-administration/cluster-administration-overview 301
/docs/tasks/administer-cluster/cpu-memory-limit /docs/tasks/administer-cluster/memory-default-namespace 301
/docs/tasks/configure-pod-container/apply-resource-quota-limit /docs/tasks/administer-cluster/apply-resource-quota-limit 301
/docs/tasks/configure-pod-container/calico-network-policy /docs/tasks/administer-cluster/calico-network-policy 301
@ -98,7 +99,8 @@
/docs/tasks/configure-pod-container/environment-variable-expose-pod-information /docs/tasks/inject-data-application/environment-variable-expose-pod-information 301
/docs/tasks/configure-pod-container/limit-range /docs/tasks/administer-cluster/cpu-memory-limit 301
/docs/tasks/configure-pod-container/romana-network-policy /docs/tasks/administer-cluster/romana-network-policy 301
/docs/tasks/configure-pod-container/weave-network-policy /docs/tasks/administer-cluster/weave-network-policy 301
/docs/tasks/configure-pod-container/assign-cpu-ram-container /docs/tasks/configure-pod-container/assign-memory-resource 301
/docs/tasks/kubectl/get-shell-running-container /docs/tasks/debug-application-cluster/get-shell-running-container 301
/docs/tasks/kubectl/install /docs/tasks/tools/install-kubectl 301

View File

@ -1,378 +0,0 @@
---
approvers:
- derekwaynecarr
- janetkuo
title: Apply Resource Quotas and Limits
---
{% capture overview %}
This example demonstrates a typical setup to control resource usage in a namespace.
It demonstrates using the following resources: [Namespace](/docs/admin/namespaces), [ResourceQuota](/docs/concepts/policy/resource-quotas/), and [LimitRange](/docs/tasks/configure-pod-container/limit-range/).
{% endcapture %}
{% capture prerequisites %}
* {% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## Scenario
The cluster-admin is operating a cluster on behalf of a user population and wants to control
the amount of resources that can be consumed in a particular namespace, both to promote
fair sharing of the cluster and to control cost.
The cluster-admin has the following goals:
* Limit the amount of compute resource for running pods
* Limit the number of persistent volume claims to control access to storage
* Limit the number of load balancers to control cost
* Prevent the use of node ports to preserve scarce resources
* Provide default compute resource requests to enable better scheduling decisions
## Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called quota-example:
```shell
$ kubectl create namespace quota-example
namespace "quota-example" created
$ kubectl get namespaces
NAME STATUS AGE
default Active 2m
kube-system Active 2m
quota-example Active 39s
```
## Apply an object-count quota to the namespace
The cluster-admin wants to control the following resources:
* persistent volume claims
* load balancers
* node ports
Let's create a simple quota that controls object counts for those resource types in this namespace.
```shell
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-object-counts.yaml --namespace=quota-example
resourcequota "object-counts" created
```
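The referenced `rq-object-counts.yaml` is not reproduced in this diff. Judging from the hard limits in the `describe` output that follows, the quota presumably looks something like this sketch (a hypothetical reconstruction, not the original file):

```yaml
# Hypothetical reconstruction of rq-object-counts.yaml,
# inferred from the hard limits in the describe output.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    persistentvolumeclaims: "2"
    services.loadbalancers: "2"
    services.nodeports: "0"
```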
The quota system will observe that a quota has been created, and will calculate consumption
in the namespace in response. This should happen quickly.
Let's describe the quota to see what is currently being consumed in this namespace:
```shell
$ kubectl describe quota object-counts --namespace=quota-example
Name: object-counts
Namespace: quota-example
Resource Used Hard
-------- ---- ----
persistentvolumeclaims 0 2
services.loadbalancers 0 2
services.nodeports 0 0
```
The quota system will now prevent users from creating more than the specified amount for each resource.
## Apply a compute-resource quota to the namespace
To limit the amount of compute resource that can be consumed in this namespace,
let's create a quota that tracks compute resources.
```shell
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-compute-resources.yaml --namespace=quota-example
resourcequota "compute-resources" created
```
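The referenced `rq-compute-resources.yaml` is likewise not shown here. Based on the hard limits reported by `kubectl describe` below, it is presumably similar to this sketch (a hypothetical reconstruction):

```yaml
# Hypothetical reconstruction of rq-compute-resources.yaml,
# inferred from the hard limits reported by kubectl describe.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```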
Let's describe the quota to see what is currently being consumed in this namespace:
```shell
$ kubectl describe quota compute-resources --namespace=quota-example
Name: compute-resources
Namespace: quota-example
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
pods 0 4
requests.cpu 0 1
requests.memory 0 1Gi
```
The quota system will now prevent the namespace from having more than 4 non-terminal pods. In
addition, it will enforce that each container in a pod makes a `request` and defines a `limit` for
`cpu` and `memory`.
## Applying default resource requests and limits
Pod authors rarely specify resource requests and limits for their pods.
Since we applied a quota to our namespace, let's see what happens when an end user creates a pod that has unbounded
CPU and memory, by creating an nginx container.
To demonstrate, let's create a deployment that runs nginx:
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
deployment "nginx" created
```
Now let's look at the pods that were created.
```shell
$ kubectl get pods --namespace=quota-example
```
What happened? I have no pods! Let's describe the deployment to get a view of what is happening.
```shell
$ kubectl describe deployment nginx --namespace=quota-example
Name: nginx
Namespace: quota-example
CreationTimestamp: Mon, 06 Jun 2016 16:11:37 -0400
Labels: run=nginx
Selector: run=nginx
Replicas: 0 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: <none>
NewReplicaSet: nginx-3137573019 (0/1 replicas created)
...
```
A deployment created a corresponding replica set and attempted to size it to create a single pod.
Let's look at the replica set to get more detail.
```shell
$ kubectl describe rs nginx-3137573019 --namespace=quota-example
Name: nginx-3137573019
Namespace: quota-example
Image(s): nginx
Selector: pod-template-hash=3137573019,run=nginx
Labels: pod-template-hash=3137573019
run=nginx
Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 7s 11 {replicaset-controller } Warning FailedCreate Error creating: pods "nginx-3137573019-" is forbidden: Failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
```
The Kubernetes API server is rejecting the replica set's requests to create a pod because our pods
do not specify `requests` or `limits` for `cpu` and `memory`.
So let's set some default values for the amount of `cpu` and `memory` a pod can consume:
```shell
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-limits.yaml --namespace=quota-example
limitrange "limits" created
$ kubectl describe limits limits --namespace=quota-example
Name: limits
Namespace: quota-example
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory - - 256Mi 512Mi -
Container cpu - - 100m 200m -
```
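The referenced `rq-limits.yaml` is not reproduced in this diff. Judging from the Default Request and Default Limit columns in the output above, it is presumably similar to this sketch (a hypothetical reconstruction):

```yaml
# Hypothetical reconstruction of rq-limits.yaml, inferred from the
# Default Request / Default Limit columns in the describe output.
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - default:
      cpu: 200m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 256Mi
    type: Container
```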
If the Kubernetes API server observes a request to create a pod in this namespace, and the containers
in that pod do not make any compute resource requests, a default request and default limit will be applied
as part of admission control.
In this example, each pod created will have compute resources equivalent to the following:
```shell
$ kubectl run nginx \
--image=nginx \
--replicas=1 \
--requests=cpu=100m,memory=256Mi \
--limits=cpu=200m,memory=512Mi \
--namespace=quota-example
```
Now that we have applied default compute resources for our namespace, our replica set should be able to create
its pods.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-3137573019-fvrig 1/1 Running 0 6m
```
And if we print out our quota usage in the namespace:
```shell
$ kubectl describe quota --namespace=quota-example
Name: compute-resources
Namespace: quota-example
Resource Used Hard
-------- ---- ----
limits.cpu 200m 2
limits.memory 512Mi 2Gi
pods 1 4
requests.cpu 100m 1
requests.memory 256Mi 1Gi
Name: object-counts
Namespace: quota-example
Resource Used Hard
-------- ---- ----
persistentvolumeclaims 0 2
services.loadbalancers 0 2
services.nodeports 0 0
```
As you can see, the pod that was created is consuming explicit amounts of compute resources, and the usage is being
tracked by Kubernetes properly.
## Advanced quota scopes
Let's imagine you did not want to specify default compute resource consumption in your namespace.
Instead, you want to let users run a specific number of `BestEffort` pods in their namespace to take
advantage of slack compute resources, and then require that users make an explicit resource request for
pods that require a higher quality of service.
Let's create a new namespace with two quotas to demonstrate this behavior:
```shell
$ kubectl create namespace quota-scopes
namespace "quota-scopes" created
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-best-effort.yaml --namespace=quota-scopes
resourcequota "best-effort" created
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-not-best-effort.yaml --namespace=quota-scopes
resourcequota "not-best-effort" created
$ kubectl describe quota --namespace=quota-scopes
Name: best-effort
Namespace: quota-scopes
Scopes: BestEffort
* Matches all pods that have best effort quality of service.
Resource Used Hard
-------- ---- ----
pods 0 10
Name: not-best-effort
Namespace: quota-scopes
Scopes: NotBestEffort
* Matches all pods that do not have best effort quality of service.
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
pods 0 4
requests.cpu 0 1
requests.memory 0 1Gi
```
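Neither `rq-best-effort.yaml` nor `rq-not-best-effort.yaml` is reproduced in this diff. Based on the Scopes and hard limits in the `describe` output above, they presumably look something like this sketch (a hypothetical reconstruction):

```yaml
# Hypothetical reconstructions, inferred from the Scopes
# and hard limits shown by kubectl describe above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort
spec:
  hard:
    pods: "10"
  scopes:
  - BestEffort
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
  scopes:
  - NotBestEffort
```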
In this scenario, a pod that makes no compute resource requests will be tracked by the `best-effort` quota.
A pod that does make compute resource requests will be tracked by the `not-best-effort` quota.
Let's demonstrate this by creating two deployments:
```shell
$ kubectl run best-effort-nginx --image=nginx --replicas=8 --namespace=quota-scopes
deployment "best-effort-nginx" created
$ kubectl run not-best-effort-nginx \
--image=nginx \
--replicas=2 \
--requests=cpu=100m,memory=256Mi \
--limits=cpu=200m,memory=512Mi \
--namespace=quota-scopes
deployment "not-best-effort-nginx" created
```
Even though no default limits were specified, the `best-effort-nginx` deployment will create
all 8 pods. This is because it is tracked by the `best-effort` quota, and the `not-best-effort`
quota will just ignore it. The `not-best-effort` quota will track the `not-best-effort-nginx`
deployment since it creates pods with `Burstable` quality of service.
Let's list the pods in the namespace:
```shell
$ kubectl get pods --namespace=quota-scopes
NAME READY STATUS RESTARTS AGE
best-effort-nginx-3488455095-2qb41 1/1 Running 0 51s
best-effort-nginx-3488455095-3go7n 1/1 Running 0 51s
best-effort-nginx-3488455095-9o2xg 1/1 Running 0 51s
best-effort-nginx-3488455095-eyg40 1/1 Running 0 51s
best-effort-nginx-3488455095-gcs3v 1/1 Running 0 51s
best-effort-nginx-3488455095-rq8p1 1/1 Running 0 51s
best-effort-nginx-3488455095-udhhd 1/1 Running 0 51s
best-effort-nginx-3488455095-zmk12 1/1 Running 0 51s
not-best-effort-nginx-2204666826-7sl61 1/1 Running 0 23s
not-best-effort-nginx-2204666826-ke746 1/1 Running 0 23s
```
As you can see, all 10 pods have been allowed to be created.
Let's describe current quota usage in the namespace:
```shell
$ kubectl describe quota --namespace=quota-scopes
Name: best-effort
Namespace: quota-scopes
Scopes: BestEffort
* Matches all pods that have best effort quality of service.
Resource Used Hard
-------- ---- ----
pods 8 10
Name: not-best-effort
Namespace: quota-scopes
Scopes: NotBestEffort
* Matches all pods that do not have best effort quality of service.
Resource Used Hard
-------- ---- ----
limits.cpu 400m 2
limits.memory 1Gi 2Gi
pods 2 4
requests.cpu 200m 1
requests.memory 512Mi 1Gi
```
As you can see, the `best-effort` quota has tracked the usage for the 8 pods we created in
the `best-effort-nginx` deployment, and the `not-best-effort` quota has tracked the usage for
the 2 pods we created in the `not-best-effort-nginx` deployment.
Scopes provide a mechanism to subdivide the set of resources that are tracked by
any quota document to allow greater flexibility in how operators deploy and track resource
consumption.
In addition to `BestEffort` and `NotBestEffort` scopes, there are scopes to restrict
long-running versus time-bound pods. The `Terminating` scope will match any pod
where `spec.activeDeadlineSeconds` is not nil. The `NotTerminating` scope will match any pod
where `spec.activeDeadlineSeconds` is nil. These scopes allow you to quota pods based on their
anticipated permanence on a node in your cluster.
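For example, a quota restricted to time-bound pods might look like this (a hypothetical example, not part of the original files):

```yaml
# Hypothetical example: a quota that applies only to pods with
# spec.activeDeadlineSeconds set (the Terminating scope).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: terminating-pods
spec:
  hard:
    pods: "5"
  scopes:
  - Terminating
```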
{% endcapture %}
{% capture discussion %}
## Summary
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined by the namespace quota.
Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to meet your end goal.
Quota can be apportioned based on quality of service and anticipated permanence on a node in your cluster.
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,270 @@
---
title: Configure Minimum and Maximum CPU Constraints for a Namespace
---
{% capture overview %}
This page shows how to set minimum and maximum values for the CPU resources used by Containers
and Pods in a namespace. You specify minimum and maximum CPU values in a
[LimitRange](/docs/api-reference/v1.6/#limitrange-v1-core)
object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created
in the namespace.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
Each node in your cluster must have at least 1 CPU.
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace constraints-cpu-example
```
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange:
{% include code.html language="yaml" file="cpu-constraints.yaml" ghlink="/docs/tasks/administer-cluster/cpu-constraints.yaml" %}
Create the LimitRange:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-constraints.yaml --namespace=constraints-cpu-example
```
View detailed information about the LimitRange:
```shell
kubectl get limitrange cpu-min-max-demo-lr --output=yaml --namespace=constraints-cpu-example
```
The output shows the minimum and maximum CPU constraints as expected. But
notice that even though you didn't specify default values in the configuration
file for the LimitRange, they were created automatically.
```yaml
limits:
- default:
    cpu: 800m
  defaultRequest:
    cpu: 800m
  max:
    cpu: 800m
  min:
    cpu: 200m
  type: Container
```
Now whenever a Container is created in the constraints-cpu-example namespace, Kubernetes
performs these steps:
* If the Container does not specify its own CPU request and limit, assign the default
CPU request and limit to the Container.
* Verify that the Container specifies a CPU request that is greater than or equal to 200 millicpu.
* Verify that the Container specifies a CPU limit that is less than or equal to 800 millicpu.
Here's the configuration file for a Pod that has one Container. The Container manifest
specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the
minimum and maximum CPU constraints imposed by the LimitRange.
{% include code.html language="yaml" file="cpu-constraints-pod.yaml" ghlink="/docs/tasks/administer-cluster/cpu-constraints-pod.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-constraints-pod.yaml --namespace=constraints-cpu-example
```
Verify that the Pod's Container is running:
```shell
kubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example
```
View detailed information about the Pod:
```shell
kubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example
```
The output shows that the Container has a CPU request of 500 millicpu and CPU limit
of 800 millicpu. These satisfy the constraints imposed by the LimitRange.
```yaml
resources:
  limits:
    cpu: 800m
  requests:
    cpu: 500m
```
## Delete the Pod
```shell
kubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example
```
## Attempt to create a Pod that exceeds the maximum CPU constraint
Here's the configuration file for a Pod that has one Container. The Container specifies a
CPU request of 500 millicpu and a CPU limit of 1.5 cpu.
{% include code.html language="yaml" file="cpu-constraints-pod-2.yaml" ghlink="/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml" %}
Attempt to create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example
```
The output shows that the Pod does not get created, because the Container specifies a CPU limit that is
too large:
```
Error from server (Forbidden): error when creating "docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml":
pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 800m, but limit is 1500m.
```
## Attempt to create a Pod that does not meet the minimum CPU request
Here's the configuration file for a Pod that has one Container. The Container specifies a
CPU request of 100 millicpu and a CPU limit of 800 millicpu.
{% include code.html language="yaml" file="cpu-constraints-pod-3.yaml" ghlink="/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml" %}
Attempt to create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example
```
The output shows that the Pod does not get created, because the Container specifies a CPU
request that is too small:
```
Error from server (Forbidden): error when creating "docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml":
pods "constraints-cpu-demo-4" is forbidden: minimum cpu usage per Container is 200m, but request is 100m.
```
## Create a Pod that does not specify any CPU request or limit
Here's the configuration file for a Pod that has one Container. The Container does not
specify a CPU request, and it does not specify a CPU limit.
{% include code.html language="yaml" file="cpu-constraints-pod-4.yaml" ghlink="/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml --namespace=constraints-cpu-example
```
View detailed information about the Pod:
```shell
kubectl get pod constraints-cpu-demo-4 --namespace=constraints-cpu-example --output=yaml
```
The output shows that the Pod's Container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu.
How did the Container get those values?
```yaml
resources:
  limits:
    cpu: 800m
  requests:
    cpu: 800m
```
Because your Container did not specify its own CPU request and limit, it was given the
[default CPU request and limit](/docs/tasks/administer-cluster/default-cpu-request-limit/)
from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite
for this task is that your Nodes have at least 1 CPU. If each of your Nodes has only
1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request
of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have
enough CPU to accommodate the 800 millicpu request.
Delete your Pod:
```shell
kubectl delete pod constraints-cpu-demo-4 --namespace=constraints-cpu-example
```
## Enforcement of minimum and maximum CPU constraints
The maximum and minimum CPU constraints imposed on a namespace by a LimitRange are enforced only
when a Pod is created or updated. If you change the LimitRange, it does not affect
Pods that were created previously.
## Motivation for minimum and maximum CPU constraints
As a cluster administrator, you might want to impose restrictions on the CPU resources that Pods can use.
For example:
* Each Node in a cluster has 2 cpu. You do not want to accept any Pod that requests
more than 2 cpu, because no Node in the cluster can support the request.
* A cluster is shared by your production and development departments.
You want to allow production workloads to consume up to 3 cpu, but you want development workloads to be limited
to 1 cpu. You create separate namespaces for production and development, and you apply CPU constraints to
each namespace.
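The production/development scenario above could be sketched with two LimitRange objects, one per namespace (hypothetical names and values; note that a LimitRange `max` is enforced per Container):

```yaml
# Hypothetical sketch: separate CPU ceilings for production and
# development namespaces via per-namespace LimitRange objects.
apiVersion: v1
kind: LimitRange
metadata:
  name: prod-cpu-constraints
  namespace: production
spec:
  limits:
  - max:
      cpu: "3"
    type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-cpu-constraints
  namespace: development
spec:
  limits:
  - max:
      cpu: "1"
    type: Container
```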
## Clean up
Delete your namespace:
```shell
kubectl delete namespace constraints-cpu-example
```
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-2
spec:
  containers:
  - name: constraints-cpu-demo-2-ctr
    image: nginx
    resources:
      limits:
        cpu: "1.5"
      requests:
        cpu: "500m"

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-4
spec:
  containers:
  - name: constraints-cpu-demo-4-ctr
    image: nginx
    resources:
      limits:
        cpu: "800m"
      requests:
        cpu: "100m"

View File

@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-4
spec:
  containers:
  - name: constraints-cpu-demo-4-ctr
    image: vish/stress

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo
spec:
  containers:
  - name: constraints-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        cpu: "800m"
      requests:
        cpu: "500m"

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container

View File

@ -0,0 +1,178 @@
---
title: Configure Default CPU Requests and Limits for a Namespace
---
{% capture overview %}
This page shows how to configure default CPU requests and limits for a namespace.
A Kubernetes cluster can be divided into namespaces. If a Container is created in a namespace
that has a default CPU limit, and the Container does not specify its own CPU limit, then
the Container is assigned the default CPU limit. Kubernetes assigns a default CPU request
under certain conditions that are explained later in this topic.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace default-cpu-example
```
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange object. The configuration specifies
a default CPU request and a default CPU limit.
{% include code.html language="yaml" file="cpu-defaults.yaml" ghlink="/docs/tasks/administer-cluster/cpu-defaults.yaml" %}
Create the LimitRange in the default-cpu-example namespace:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-defaults.yaml --namespace=default-cpu-example
```
Now if a Container is created in the default-cpu-example namespace, and the
Container does not specify its own values for CPU request and CPU limit,
the Container is given a default CPU request of 0.5 and a default
CPU limit of 1.
Here's the configuration file for a Pod that has one Container. The Container
does not specify a CPU request and limit.
{% include code.html language="yaml" file="cpu-defaults-pod.yaml" ghlink="/docs/tasks/administer-cluster/cpu-defaults-pod.yaml" %}
Create the Pod.
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-defaults-pod.yaml --namespace=default-cpu-example
```
View the Pod's specification:
```shell
kubectl get pod default-cpu-demo --output=yaml --namespace=default-cpu-example
```
The output shows that the Pod's Container has a CPU request of 500 millicpus and
a CPU limit of 1 cpu. These are the default values specified by the LimitRange.
```yaml
containers:
- image: nginx
  imagePullPolicy: Always
  name: default-cpu-demo-ctr
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: 500m
```
## What if you specify a Container's limit, but not its request?
Here's the configuration file for a Pod that has one Container. The Container
specifies a CPU limit, but not a request:
{% include code.html language="yaml" file="cpu-defaults-pod-2.yaml" ghlink="/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml --namespace=default-cpu-example
```
View the Pod specification:
```shell
kubectl get pod default-cpu-demo-2 --output=yaml --namespace=default-cpu-example
```
The output shows that the Container's CPU request is set to match its CPU limit.
Notice that the Container was not assigned the default CPU request value of 0.5 cpu.
```yaml
resources:
  limits:
    cpu: "1"
  requests:
    cpu: "1"
```
## What if you specify a Container's request, but not its limit?
Here's the configuration file for a Pod that has one Container. The Container
specifies a CPU request, but not a limit:
{% include code.html language="yaml" file="cpu-defaults-pod-3.yaml" ghlink="/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml --namespace=default-cpu-example
```
The output shows that the Container's CPU request is set to the value specified in the
Container's configuration file. The Container's CPU limit is set to 1 cpu, which is the
default CPU limit for the namespace.
```yaml
resources:
  limits:
    cpu: "1"
  requests:
    cpu: 750m
```
## Motivation for default CPU limits and requests
If your namespace has a
[resource quota](/docs/concepts/policy/resource-quotas/),
it is helpful to have a default value in place for CPU limit.
Here are two of the restrictions that a resource quota imposes on a namespace:
* Every Container that runs in the namespace must have its own CPU limit.
* The total amount of CPU used by all Containers in the namespace must not exceed a specified limit.
If a Container does not specify its own CPU limit, it is given the default limit, and then
it can be allowed to run in a namespace that is restricted by a quota.
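As a sketch of this interplay, a quota that makes CPU limits mandatory might look like the following (hypothetical name and value):

```yaml
# Hypothetical ResourceQuota: once limits.cpu is quota-tracked, every
# Container in the namespace must declare a CPU limit; the LimitRange
# default fills it in for Containers that omit one.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-quota
spec:
  hard:
    limits.cpu: "4"
```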
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-2
spec:
  containers:
  - name: default-cpu-demo-2-ctr
    image: nginx
    resources:
      limits:
        cpu: "1"


@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
name: default-cpu-demo-3
spec:
containers:
- name: default-cpu-demo-3-ctr
image: nginx
resources:
requests:
cpu: "0.75"


@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
name: default-cpu-demo
spec:
containers:
- name: default-cpu-demo-ctr
image: nginx


@ -0,0 +1,11 @@
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
spec:
limits:
- default:
cpu: 1
defaultRequest:
cpu: 0.5
type: Container


@ -1,229 +0,0 @@
---
approvers:
- derekwaynecarr
- janetkuo
title: Set Pod CPU and Memory Limits
---
{% capture overview %}
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system will be able to consume as much CPU and memory as is on the node that executes the pod.
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces-walkthrough/) to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
{% endcapture %}
{% capture prerequisites %}
* {% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called limit-example:
```shell
$ kubectl create namespace limit-example
namespace "limit-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
NAME STATUS AGE
default Active 51s
limit-example Active 45s
```
## Apply a limit to the namespace
Let's create a simple limit in our namespace.
```shell
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that were imposed in the namespace.
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
Namespace: limit-example
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Pod cpu 200m 2 - - -
Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
```
In this scenario, the following limits were specified:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
must be specified for that resource across all containers. Failure to specify a limit results in
a validation error when attempting to create the pod. Note that the default limit value is set by
the *default* field in the file `limits.yaml` (300m CPU and 200Mi memory).
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
request must be specified for that resource across all containers. Failure to specify a request
results in a validation error when attempting to create the pod. Note that the default request value is
set by the *defaultRequest* field in the file `limits.yaml` (200m CPU and 100Mi memory).
3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
containers' CPU limits must be <= 2.
## Enforcing limits at point of creation
The limits enumerated in a namespace are only enforced when a pod is created or updated in
the cluster. If you change the limits to a different value range, it does not affect pods that
were previously created in a namespace.
If a resource (CPU or memory) is being restricted by a limit, the user will get an error at time
of creation explaining why.
Let's first spin up a [Deployment](/docs/concepts/workloads/controllers/deployment/) that creates a single-container Pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
deployment "nginx" created
```
Note that `kubectl run` creates a Deployment named "nginx" on clusters running Kubernetes v1.2 or later. If you are running an older version, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/{{page.version}}/#run) for more details.
The Deployment manages 1 replica of a single-container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod:
```shell
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-2040093540-s8vzu 1/1 Running 0 11s
```
Let's print this Pod with yaml output format (using `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different.
```shell
$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
resourceVersion: "57"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu
uid: 67b20741-f53b-11e5-b066-64510658e388
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
resources:
limits:
cpu: 300m
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
```
Note that our nginx container has picked up the namespace default CPU and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits: its container requests 3 CPU cores.
```shell
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "http://k8s.io/docs/tasks/configure-pod-container/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
```
Now look at the Pod's resources field:
```shell
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
spec:
containers:
- image: gcr.io/google_containers/serve_hostname
imagePullPolicy: Always
name: kubernetes-serve-hostname
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: "1"
memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
Note: In the default Kubernetes setup, CPU *limits* are enforced on the physical node
that runs the container. The administrator can disable this enforcement by deploying the kubelet with the following flag:
```shell
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=false ...
```
## Cleanup
To remove the resources used by this example, you can just delete the limit-example namespace.
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME STATUS AGE
default Active 12m
```
{% endcapture %}
{% capture discussion %}
## Motivation for setting resource limits
Users may want to impose restrictions on the amount of resources a single pod in the system may consume
for a variety of reasons.
For example:
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
that require more than 2GB of memory, since no node in the cluster can support the requirement. To prevent a
pod from remaining permanently unscheduled, the operator instead chooses to reject pods that exceed 2GB
of memory as part of admission control.
2. A cluster is shared by two communities in an organization that runs production and development workloads
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
each namespace.
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
the cluster operator may want to set limits so that a pod consumes no more than 20% of the memory and CPU of
the average node size, in order to provide for more uniform scheduling and limit waste.
## Summary
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.
{% endcapture %}
{% capture whatsnext %}
* See [LimitRange design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md) for more information.
* See [Resources](/docs/concepts/configuration/manage-compute-resources-container/) for a detailed description of the Kubernetes resource model.
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,271 @@
---
title: Configure Minimum and Maximum Memory Constraints for a Namespace
---
{% capture overview %}
This page shows how to set minimum and maximum values for memory used by Containers
running in a namespace. You specify minimum and maximum memory values in a
[LimitRange](/docs/api-reference/v1.6/#limitrange-v1-core)
object. If a Pod does not meet the constraints imposed by the LimitRange,
it cannot be created in the namespace.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
Each node in your cluster must have at least 1 GiB of memory.
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace constraints-mem-example
```
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange:
{% include code.html language="yaml" file="memory-constraints.yaml" ghlink="/docs/tasks/administer-cluster/memory-constraints.yaml" %}
Create the LimitRange:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-constraints.yaml --namespace=constraints-mem-example
```
View detailed information about the LimitRange:
```shell
kubectl get limitrange mem-min-max-demo-lr --namespace=constraints-mem-example --output=yaml
```
The output shows the minimum and maximum memory constraints as expected. But
notice that even though you didn't specify default values in the configuration
file for the LimitRange, they were created automatically.
```
limits:
- default:
memory: 1Gi
defaultRequest:
memory: 1Gi
max:
memory: 1Gi
min:
memory: 500Mi
type: Container
```
Now whenever a Container is created in the constraints-mem-example namespace, Kubernetes
performs these steps:
* If the Container does not specify its own memory request and limit, assign the default
memory request and limit to the Container.
* Verify that the Container has a memory request that is greater than or equal to 500 MiB.
* Verify that the Container has a memory limit that is less than or equal to 1 GiB.
Here's the configuration file for a Pod that has one Container. The Container manifest
specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the
minimum and maximum memory constraints imposed by the LimitRange.
{% include code.html language="yaml" file="memory-constraints-pod.yaml" ghlink="/docs/tasks/administer-cluster/memory-constraints-pod.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-constraints-pod.yaml --namespace=constraints-mem-example
```
Verify that the Pod's Container is running:
```shell
kubectl get pod constraints-mem-demo --namespace=constraints-mem-example
```
View detailed information about the Pod:
```shell
kubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example
```
The output shows that the Container has a memory request of 600 MiB and a memory limit
of 800 MiB. These satisfy the constraints imposed by the LimitRange.
```yaml
resources:
limits:
memory: 800Mi
requests:
memory: 600Mi
```
Delete your Pod:
```shell
kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
```
## Attempt to create a Pod that exceeds the maximum memory constraint
Here's the configuration file for a Pod that has one Container. The Container specifies a
memory request of 800 MiB and a memory limit of 1.5 GiB.
{% include code.html language="yaml" file="memory-constraints-pod-2.yaml" ghlink="/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml" %}
Attempt to create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml --namespace=constraints-mem-example
```
The output shows that the Pod does not get created, because the Container specifies a memory limit that is
too large:
```
Error from server (Forbidden): error when creating "docs/tasks/administer-cluster/memory-constraints-pod-2.yaml":
pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container is 1Gi, but limit is 1536Mi.
```
## Attempt to create a Pod that does not meet the minimum memory request
Here's the configuration file for a Pod that has one Container. The Container specifies a
memory request of 100 MiB and a memory limit of 800 MiB.
{% include code.html language="yaml" file="memory-constraints-pod-3.yaml" ghlink="/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml" %}
Attempt to create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml --namespace=constraints-mem-example
```
The output shows that the Pod does not get created, because the Container specifies a memory
request that is too small:
```
Error from server (Forbidden): error when creating "docs/tasks/administer-cluster/memory-constraints-pod-3.yaml":
pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container is 500Mi, but request is 100Mi.
```
## Create a Pod that does not specify any memory request or limit
Here's the configuration file for a Pod that has one Container. The Container does not
specify a memory request, and it does not specify a memory limit.
{% include code.html language="yaml" file="memory-constraints-pod-4.yaml" ghlink="/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml --namespace=constraints-mem-example
```
View detailed information about the Pod:
```
kubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml
```
The output shows that the Pod's Container has a memory request of 1 GiB and a memory limit of 1 GiB.
How did the Container get those values?
```
resources:
limits:
memory: 1Gi
requests:
memory: 1Gi
```
Because your Container did not specify its own memory request and limit, it was given the
[default memory request and limit](/docs/tasks/administer-cluster/default-memory-request-limit/)
from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite
for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only
1 GiB of memory, then there is not enough allocatable memory on any Node to accommodate a memory
request of 1 GiB. If you happen to be using Nodes with 2 GiB of memory, then you probably have
enough space to accommodate the 1 GiB request.
Delete your Pod:
```
kubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example
```
## Enforcement of minimum and maximum memory constraints
The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only
when a Pod is created or updated. If you change the LimitRange, it does not affect
Pods that were created previously.
## Motivation for minimum and maximum memory constraints
As a cluster administrator, you might want to impose restrictions on the amount of memory that Pods can use.
For example:
* Each Node in a cluster has 2 GB of memory. You do not want to accept any Pod that requests
more than 2 GB of memory, because no Node in the cluster can support the request.
* A cluster is shared by your production and development departments.
You want to allow production workloads to consume up to 8 GB of memory, but
you want development workloads to be limited to 512 MB. You create separate namespaces
for production and development, and you apply memory constraints to each namespace.
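As a sketch of the second scenario, the development namespace could get a LimitRange like this (the name and values here are illustrative, not part of this exercise):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-mem-limit-range  # hypothetical name, for illustration only
spec:
  limits:
  - max:
      memory: 512Mi   # development workloads are capped at 512 MiB per Container
    type: Container
```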
## Clean up
Delete your namespace:
```shell
kubectl delete namespace constraints-mem-example
```
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: constraints-mem-demo-2
spec:
containers:
- name: constraints-mem-demo-2-ctr
image: nginx
resources:
limits:
memory: "1.5Gi"
requests:
memory: "800Mi"


@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: constraints-mem-demo-3
spec:
containers:
- name: constraints-mem-demo-3-ctr
image: nginx
resources:
limits:
memory: "800Mi"
requests:
memory: "100Mi"


@ -0,0 +1,9 @@
apiVersion: v1
kind: Pod
metadata:
name: constraints-mem-demo-4
spec:
containers:
- name: constraints-mem-demo-4-ctr
image: nginx


@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: constraints-mem-demo
spec:
containers:
- name: constraints-mem-demo-ctr
image: nginx
resources:
limits:
memory: "800Mi"
requests:
memory: "600Mi"


@ -0,0 +1,11 @@
apiVersion: v1
kind: LimitRange
metadata:
name: mem-min-max-demo-lr
spec:
limits:
- max:
memory: 1Gi
min:
memory: 500Mi
type: Container


@ -0,0 +1,191 @@
---
title: Configure Default Memory Requests and Limits for a Namespace
---
{% capture overview %}
This page shows how to configure default memory requests and limits for a namespace.
If a Container is created in a namespace that has a default memory limit, and the Container
does not specify its own memory limit, then the Container is assigned the default memory limit.
Kubernetes assigns a default memory request under certain conditions that are explained later in this topic.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
Each node in your cluster must have at least 300 MiB of memory.
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace default-mem-example
```
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange object. The configuration specifies
a default memory request and a default memory limit.
{% include code.html language="yaml" file="memory-defaults.yaml" ghlink="/docs/tasks/administer-cluster/memory-defaults.yaml" %}
Create the LimitRange in the default-mem-example namespace:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-defaults.yaml --namespace=default-mem-example
```
Now if a Container is created in the default-mem-example namespace, and the
Container does not specify its own values for memory request and memory limit,
the Container is given a default memory request of 256 MiB and a default
memory limit of 512 MiB.
Here's the configuration file for a Pod that has one Container. The Container
does not specify a memory request and limit.
{% include code.html language="yaml" file="memory-defaults-pod.yaml" ghlink="/docs/tasks/administer-cluster/memory-defaults-pod.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-defaults-pod.yaml --namespace=default-mem-example
```
View detailed information about the Pod:
```shell
kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example
```
The output shows that the Pod's Container has a memory request of 256 MiB and
a memory limit of 512 MiB. These are the default values specified by the LimitRange.
```
containers:
- image: nginx
imagePullPolicy: Always
name: default-mem-demo-ctr
resources:
limits:
memory: 512Mi
requests:
memory: 256Mi
```
Delete your Pod:
```shell
kubectl delete pod default-mem-demo --namespace=default-mem-example
```
## What if you specify a Container's limit, but not its request?
Here's the configuration file for a Pod that has one Container. The Container
specifies a memory limit, but not a request:
{% include code.html language="yaml" file="memory-defaults-pod-2.yaml" ghlink="/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml --namespace=default-mem-example
```
View detailed information about the Pod:
```shell
kubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example
```
The output shows that the Container's memory request is set to match its memory limit.
Notice that the Container was not assigned the default memory request value of 256Mi.
```
resources:
limits:
memory: 1Gi
requests:
memory: 1Gi
```
## What if you specify a Container's request, but not its limit?
Here's the configuration file for a Pod that has one Container. The Container
specifies a memory request, but not a limit:
{% include code.html language="yaml" file="memory-defaults-pod-3.yaml" ghlink="/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml --namespace=default-mem-example
```
View the Pod's specification:
```shell
kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example
```
The output shows that the Container's memory request is set to the value specified in the
Container's configuration file. The Container's memory limit is set to 512Mi, which is the
default memory limit for the namespace.
```
resources:
limits:
memory: 512Mi
requests:
memory: 128Mi
```
## Motivation for default memory limits and requests
If your namespace has a resource quota,
it is helpful to have a default value in place for memory limit.
Here are two of the restrictions that a resource quota imposes on a namespace:
* Every Container that runs in the namespace must have its own memory limit.
* The total amount of memory used by all Containers in the namespace must not exceed a specified limit.
If a Container does not specify its own memory limit, it is given the default limit, and then
it can be allowed to run in a namespace that is restricted by a quota.
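For illustration, a ResourceQuota along these lines (the name and values are hypothetical) imposes both restrictions at once; the LimitRange default lets Containers without an explicit memory limit run under it:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota-example  # hypothetical name, for illustration only
spec:
  hard:
    # Every Container must state a memory limit, and the limits may total at most 2 GiB.
    limits.memory: 2Gi
```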
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
name: default-mem-demo-2
spec:
containers:
  - name: default-mem-demo-2-ctr
image: nginx
resources:
limits:
memory: "1Gi"


@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
name: default-mem-demo-3
spec:
containers:
- name: default-mem-demo-3-ctr
image: nginx
resources:
requests:
memory: "128Mi"


@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
name: default-mem-demo
spec:
containers:
- name: default-mem-demo-ctr
image: nginx


@ -1,13 +1,11 @@
apiVersion: v1
kind: LimitRange
metadata:
name: limits
name: mem-limit-range
spec:
limits:
- default:
cpu: 200m
memory: 512Mi
defaultRequest:
cpu: 100m
memory: 256Mi
type: Container


@ -0,0 +1,174 @@
---
title: Configure Quotas for API Objects
---
{% capture overview %}
This page shows how to configure quotas for API objects, including
PersistentVolumeClaims and Services. A quota restricts the number of
objects, of a particular type, that can be created in a namespace.
You specify quotas in a
[ResourceQuota](/docs/api-reference/v1.7/#resourcequota-v1-core)
object.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace quota-object-example
```
## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object:
{% include code.html language="yaml" file="quota-objects.yaml" ghlink="/docs/tasks/administer-cluster/quota-objects.yaml" %}
Create the ResourceQuota:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-objects.yaml --namespace=quota-object-example
```
View detailed information about the ResourceQuota:
```shell
kubectl get resourcequota object-quota-demo --namespace=quota-object-example --output=yaml
```
The output shows that in the quota-object-example namespace, there can be at most
one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services
of type NodePort.
```yaml
status:
hard:
persistentvolumeclaims: "1"
services.loadbalancers: "2"
services.nodeports: "0"
used:
persistentvolumeclaims: "0"
services.loadbalancers: "0"
services.nodeports: "0"
```
## Create a PersistentVolumeClaim
Here is the configuration file for a PersistentVolumeClaim object:
{% include code.html language="yaml" file="quota-objects-pvc.yaml" ghlink="/docs/tasks/administer-cluster/quota-objects-pvc.yaml" %}
Create the PersistentVolumeClaim:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-objects-pvc.yaml --namespace=quota-object-example
```
Verify that the PersistentVolumeClaim was created:
```shell
kubectl get persistentvolumeclaims --namespace=quota-object-example
```
The output shows that the PersistentVolumeClaim exists and has status Pending:
```shell
NAME STATUS
pvc-quota-demo Pending
```
## Attempt to create a second PersistentVolumeClaim
Here is the configuration file for a second PersistentVolumeClaim:
{% include code.html language="yaml" file="quota-objects-pvc-2.yaml" ghlink="/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml" %}
Attempt to create the second PersistentVolumeClaim:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml --namespace=quota-object-example
```
The output shows that the second PersistentVolumeClaim was not created,
because it would have exceeded the quota for the namespace.
```
persistentvolumeclaims "pvc-quota-demo-2" is forbidden:
exceeded quota: object-quota-demo, requested: persistentvolumeclaims=1,
used: persistentvolumeclaims=1, limited: persistentvolumeclaims=1
```
## Notes
These are the strings used to identify API resources that can be constrained
by quotas:
<table>
<tr><th>String</th><th>API Object</th></tr>
<tr><td>"pods"</td><td>Pod</td></tr>
<tr><td>"services"</td><td>Service</td></tr>
<tr><td>"replicationcontrollers"</td><td>ReplicationController</td></tr>
<tr><td>"resourcequotas"</td><td>ResourceQuota</td></tr>
<tr><td>"secrets"</td><td>Secret</td></tr>
<tr><td>"configmaps"</td><td>ConfigMap</td></tr>
<tr><td>"persistentvolumeclaims"</td><td>PersistentVolumeClaim</td></tr>
<tr><td>"services.nodeports"</td><td>Service of type NodePort</td></tr>
<tr><td>"services.loadbalancers"</td><td>Service of type LoadBalancer</td></tr>
</table>
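As an illustration, several of the strings in the preceding table can be combined in a single ResourceQuota (the name and counts below are hypothetical, not part of this task):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts-example  # hypothetical name, for illustration only
spec:
  hard:
    configmaps: "10"            # at most 10 ConfigMaps in the namespace
    secrets: "10"               # at most 10 Secrets
    services: "5"               # at most 5 Services
    services.loadbalancers: "1" # at most 1 Service of type LoadBalancer
```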
## Clean up
Delete your namespace:
```shell
kubectl delete namespace quota-object-example
```
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: quota-mem-cpu-demo-2
spec:
containers:
- name: quota-mem-cpu-demo-2-ctr
image: redis
resources:
limits:
memory: "1Gi"
cpu: "800m"
requests:
memory: "700Mi"
cpu: "400m"


@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: quota-mem-cpu-demo
spec:
containers:
- name: quota-mem-cpu-demo-ctr
image: nginx
resources:
limits:
memory: "800Mi"
cpu: "800m"
requests:
memory: "600Mi"
cpu: "400m"


@ -1,13 +1,10 @@
apiVersion: v1
kind: ResourceQuota
metadata:
name: not-best-effort
name: mem-cpu-demo
spec:
hard:
pods: "4"
requests.cpu: "1"
requests.memory: 1Gi
limits.cpu: "2"
limits.memory: 2Gi
scopes:
- NotBestEffort

View File

@ -0,0 +1,178 @@
---
title: Configure Memory and CPU Quotas for a Namespace
---
{% capture overview %}
This page shows how to set quotas for the total amount of memory and CPU that
can be used by all Containers running in a namespace. You specify quotas in a
[ResourceQuota](/docs/api-reference/v1.7/#resourcequota-v1-core)
object.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
Each node in your cluster must have at least 1 GiB of memory.
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace quota-mem-cpu-example
```
## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object:
{% include code.html language="yaml" file="quota-mem-cpu.yaml" ghlink="/docs/tasks/administer-cluster/quota-mem-cpu.yaml" %}
Create the ResourceQuota:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-mem-cpu.yaml --namespace=quota-mem-cpu-example
```
View detailed information about the ResourceQuota:
```shell
kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml
```
The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:
* Every Container must have a memory request, memory limit, cpu request, and cpu limit.
* The memory request total for all Containers must not exceed 1 GiB.
* The memory limit total for all Containers must not exceed 2 GiB.
* The CPU request total for all Containers must not exceed 1 cpu.
* The CPU limit total for all Containers must not exceed 2 cpu.
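Conceptually, the admission check behind these requirements is just summation and comparison. Here is a minimal sketch in Python (an illustration only, not the actual apiserver code; `parse_quantity` and `fits_quota` are hypothetical helpers):

```python
# Illustrative sketch of ResourceQuota admission arithmetic.
# Not the real apiserver code; helper names are hypothetical.
SUFFIXES = {"m": 0.001, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_quantity(q):
    """Parse a quantity string such as '400m', '600Mi', or '2' into a float."""
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def fits_quota(used, pod, hard):
    """True if adding the Pod's resources to `used` stays within `hard`."""
    return all(
        parse_quantity(used.get(k, "0")) + parse_quantity(pod[k])
        <= parse_quantity(hard[k])
        for k in pod
    )

hard = {"requests.memory": "1Gi", "limits.memory": "2Gi",
        "requests.cpu": "1", "limits.cpu": "2"}
first_pod = {"requests.memory": "600Mi", "limits.memory": "800Mi",
             "requests.cpu": "400m", "limits.cpu": "800m"}
second_pod = {"requests.memory": "700Mi", "limits.memory": "1Gi",
              "requests.cpu": "400m", "limits.cpu": "800m"}

print(fits_quota({}, first_pod, hard))          # True: the first Pod fits
print(fits_quota(first_pod, second_pod, hard))  # False: 600Mi + 700Mi > 1Gi
```

This mirrors the rejection shown later in this task: the second Pod's memory request pushes the requests.memory total past 1 GiB.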
## Create a Pod
Here is the configuration file for a Pod:
{% include code.html language="yaml" file="quota-mem-cpu-pod.yaml" ghlink="/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example
```
Verify that the Pod's Container is running:
```
kubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example
```
Once again, view detailed information about the ResourceQuota:
```
kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml
```
The output shows the quota along with how much of the quota has been used.
You can see that the memory and CPU requests and limits for your Pod do not
exceed the quota.
```
status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: 800m
    limits.memory: 800Mi
    requests.cpu: 400m
    requests.memory: 600Mi
```
## Attempt to create a second Pod
Here is the configuration file for a second Pod:
{% include code.html language="yaml" file="quota-mem-cpu-pod-2.yaml" ghlink="/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml" %}
In the configuration file, you can see that the Pod has a memory request of 700 MiB.
Notice that the sum of the used memory request and this new memory
request exceeds the memory request quota. 600 MiB + 700 MiB > 1 GiB.
Attempt to create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example
```
The second Pod does not get created. The output shows that creating the second Pod
would cause the memory request total to exceed the memory request quota.
```
Error from server (Forbidden): error when creating "docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml":
pods "quota-mem-cpu-demo-2" is forbidden: exceeded quota: mem-cpu-demo,
requested: requests.memory=700Mi,used: requests.memory=600Mi, limited: requests.memory=1Gi
```
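The rejection is plain arithmetic, which you can verify in bytes (a quick check, not part of the task itself):

```python
MI = 2**20  # bytes in one MiB
GI = 2**30  # bytes in one GiB

used_request = 600 * MI  # requests.memory already used by the first Pod
new_request = 700 * MI   # requests.memory of the second Pod
quota = 1 * GI           # the requests.memory quota for the namespace

total = used_request + new_request
print(total, quota, total > quota)  # 1363148800 1073741824 True
```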
## Discussion
As you have seen in this exercise, you can use a ResourceQuota to restrict
the memory request total for all Containers running in a namespace.
You can also restrict the totals for memory limit, cpu request, and cpu limit.
If you want to restrict individual Containers, instead of totals for all Containers, use a
[LimitRange](/docs/tasks/administer-cluster/memory-constraint-namespace/).
## Clean up
Delete your namespace:
```shell
kubectl delete namespace quota-mem-cpu-example
```
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-quota-demo-2
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

View File

@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-quota-demo
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

View File

@ -0,0 +1,9 @@
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota-demo
spec:
  hard:
    persistentvolumeclaims: "1"
    services.loadbalancers: "2"
    services.nodeports: "0"

View File

@ -0,0 +1,14 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pod-quota-demo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        purpose: quota-demo
    spec:
      containers:
      - name: pod-quota-demo
        image: nginx

View File

@ -0,0 +1,139 @@
---
title: Configure a Pod Quota for a Namespace
---
{% capture overview %}
This page shows how to set a quota for the total number of Pods that can run
in a namespace. You specify quotas in a
[ResourceQuota](/docs/api-reference/v1.7/#resourcequota-v1-core)
object.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace quota-pod-example
```
## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object:
{% include code.html language="yaml" file="quota-pod.yaml" ghlink="/docs/tasks/administer-cluster/quota-pod.yaml" %}
Create the ResourceQuota:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-pod.yaml --namespace=quota-pod-example
```
View detailed information about the ResourceQuota:
```shell
kubectl get resourcequota pod-demo --namespace=quota-pod-example --output=yaml
```
The output shows that the namespace has a quota of two Pods, and that currently there are
no Pods; that is, none of the quota is used.
```yaml
spec:
  hard:
    pods: "2"
status:
  hard:
    pods: "2"
  used:
    pods: "0"
```
Here is the configuration file for a Deployment:
{% include code.html language="yaml" file="quota-pod-deployment.yaml" ghlink="/docs/tasks/administer-cluster/quota-pod-deployment.yaml" %}
In the configuration file, `replicas: 3` tells Kubernetes to attempt to create three Pods, all running the same application.
Create the Deployment:
```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-pod-deployment.yaml --namespace=quota-pod-example
```
View detailed information about the Deployment:
```shell
kubectl get deployment pod-quota-demo --namespace=quota-pod-example --output=yaml
```
The output shows that even though the Deployment specifies three replicas, only two
Pods were created because of the quota.
```yaml
spec:
  ...
  replicas: 3
  ...
status:
  availableReplicas: 2
  ...
  - lastUpdateTime: 2017-07-07T20:57:05Z
    message: 'unable to create pods: pods "pod-quota-demo-1650323038-" is forbidden:
      exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'
```
## Clean up
Delete your namespace:
```shell
kubectl delete namespace quota-pod-example
```
{% endcapture %}
{% capture whatsnext %}
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,7 @@
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"

View File

@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-quota-demo-2
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

View File

@ -1,122 +0,0 @@
---
title: Assign CPU and RAM Resources to a Container
description: When you create a Pod, you can request CPU and RAM resources for the containers that run in the Pod. You can also set limits for CPU and RAM use.
---
{% capture overview %}
This page shows how to assign CPU and RAM resources to containers running
in a Kubernetes Pod.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## Assign CPU and RAM resources to a container
When you create a Pod, you can request CPU and RAM resources for the containers
that run in the Pod. You can also set limits for CPU and RAM resources. To
request CPU and RAM resources, include the `resources:requests` field in the
configuration file. To set limits on CPU and RAM resources, include the
`resources:limits` field.
Kubernetes schedules a Pod to run on a Node only if the Node has enough CPU and
RAM available to satisfy the total CPU and RAM requested by all of the
containers in the Pod. Also, as a container runs on a Node, Kubernetes doesn't
allow the CPU and RAM consumed by the container to exceed the limits you specify
for the container. If a container exceeds its RAM limit, it is terminated. If a
container exceeds its CPU limit, it becomes a candidate for having its CPU use
throttled.
In this exercise, you create a Pod that runs one container. The configuration
file for the Pod requests 250 millicpu and 64 mebibytes of RAM. It also sets
upper limits of 1 cpu and 128 mebibytes of RAM. Here is the configuration file
for the `Pod`:
{% include code.html language="yaml" file="cpu-ram.yaml" ghlink="/docs/tasks/configure-pod-container/cpu-ram.yaml" %}
1. Create a Pod based on the YAML configuration file:

        kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/cpu-ram.yaml

1. Display information about the pod:

        kubectl describe pod cpu-ram-demo

    The output is similar to this:

        Name:   cpu-ram-demo
        ...
        Containers:
          cpu-ram-demo-container:
            ...
            Limits:
              cpu:      1
              memory:   128Mi
            Requests:
              cpu:      250m
              memory:   64Mi
## CPU and RAM units
The CPU resource is measured in *cpu*s. Fractional values are allowed. You can
use the suffix *m* to mean milli. For example, 100m cpu is 100 millicpu, and is
the same as 0.1 cpu.
The RAM resource is measured in bytes. You can express RAM as a plain integer
or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi,
Ti, Gi, Mi, Ki. For example, the following represent approximately the same value:
128974848, 129e6, 129M, 123Mi
If you're not sure how much resources to request, you can first launch the
application without specifying resources, and use
[resource usage monitoring](/docs/user-guide/monitoring) to determine
appropriate values.
If a Container exceeds its RAM limit, it dies from an out-of-memory condition.
You can improve reliability by specifying a value that is a little higher
than what you expect to use.
If you specify a request, a Pod is guaranteed to be able to use that much
of the resource. See
[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/resource-qos.md) for the difference between resource limits and requests.
## If you don't specify limits or requests
If you don't specify a RAM limit, Kubernetes places no upper bound on the
amount of RAM a Container can use. A Container could use all the RAM
available on the Node where the Container is running. Similarly, if you don't
specify a CPU limit, Kubernetes places no upper bound on CPU resources, and a
Container could use all of the CPU resources available on the Node.
Default limits are applied according to a limit range for the default
[namespace](/docs/user-guide/namespaces). You can use `kubectl describe limitrange limits`
to see the default limits.
For information about why you would want to specify limits, see
[Setting Pod CPU and Memory Limits](/docs/tasks/configure-pod-container/limit-range/).
For information about what happens if you don't specify CPU and RAM requests, see
[Resource Requests and Limits of Pod and Container](/docs/concepts/configuration/manage-compute-resources-container/).
{% endcapture %}
{% capture whatsnext %}
* Learn more about [managing compute resources](/docs/concepts/configuration/manage-compute-resources-container/).
* See [ResourceRequirements](/docs/api-reference/{{page.version}}/#resourcerequirements-v1-core).
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,273 @@
---
title: Assign CPU Resources to Containers and Pods
---
{% capture overview %}
This page shows how to assign a CPU *request* and a CPU *limit* to
a Container. A Container is guaranteed to have as much CPU as it requests,
but is not allowed to use more CPU than its limit.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
Each node in your cluster must have at least 1 CPU.
A few of the steps on this page require the
[Heapster](https://github.com/kubernetes/heapster) service to be running
in your cluster. If you don't have Heapster running, you can still do most
of the steps, and you can safely skip the Heapster steps.
To see whether the Heapster service is running, enter this command:
```shell
kubectl get services --namespace=kube-system
```
If the Heapster service is running, it appears in the output:
```shell
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system heapster 10.11.240.9 <none> 80/TCP 6d
```
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace cpu-example
```
## Specify a CPU request and a CPU limit
To specify a CPU request for a Container, include the `resources:requests` field
in the Container's resource manifest. To specify a CPU limit, include `resources:limits`.
In this exercise, you create a Pod that has one Container. The container has a CPU
request of 0.5 cpu and a CPU limit of 1 cpu. Here's the configuration file
for the Pod:
{% include code.html language="yaml" file="cpu-request-limit.yaml" ghlink="/docs/tasks/configure-pod-container/cpu-request-limit.yaml" %}
In the configuration file, the `args` section provides arguments for the Container when it starts.
The `-cpus "2"` argument tells the Container to attempt to use 2 cpus.
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/cpu-request-limit.yaml --namespace=cpu-example
```
Verify that the Pod's Container is running:
```shell
kubectl get pod cpu-demo --namespace=cpu-example
```
View detailed information about the Pod:
```shell
kubectl get pod cpu-demo --output=yaml --namespace=cpu-example
```
The output shows that the one Container in the Pod has a CPU request of 500 millicpu
and a CPU limit of 1 cpu.
```yaml
resources:
  limits:
    cpu: "1"
  requests:
    cpu: 500m
```
Start a proxy so that you can call the Heapster service:
```shell
kubectl proxy
```
In another command window, get the CPU usage rate from the Heapster service:
```
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/cpu-example/pods/cpu-demo/metrics/cpu/usage_rate
```
The output shows that the Pod is using 974 millicpu, which is just a bit less than
the limit of 1 cpu specified in the Pod's configuration file.
```json
{
  "timestamp": "2017-06-22T18:48:00Z",
  "value": 974
}
```
Recall that by setting `-cpus "2"`, you configured the Container to attempt to use 2 cpus.
But the container is only being allowed to use about 1 cpu. The Container's CPU use is being
throttled, because the Container is attempting to use more CPU resources than its limit.
**Note:** There's another possible explanation for the CPU throttling. The Node might not have
enough CPU resources available. Recall that the prerequisites for this exercise require that each of
your Nodes has at least 1 cpu. If your Container is running on a Node that has only 1 cpu, the Container
cannot use more than 1 cpu regardless of the CPU limit specified for the Container.
## CPU units
The CPU resource is measured in *cpu* units. One cpu, in Kubernetes, is equivalent to:
* 1 AWS vCPU
* 1 GCP Core
* 1 Azure vCore
* 1 Hyperthread on a bare-metal Intel processor with Hyperthreading
Fractional values are allowed. A Container that requests 0.5 cpu is guaranteed half as much
CPU as a Container that requests 1 cpu. You can use the suffix m to mean milli. For example,
100m cpu, 100 millicpu, and 0.1 cpu are all the same. Precision finer than 1m is not allowed.
CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same
amount of CPU on a single-core, dual-core, or 48-core machine.
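The unit rules above can be sketched with a small conversion helper (a hypothetical function for illustration; it is not part of kubectl or the Kubernetes API):

```python
def to_millicpu(q):
    """Convert a CPU quantity string ('100m', '0.1', '1') to whole millicpu.
    Hypothetical helper for illustration; rejects precision finer than 1m."""
    millis = float(q[:-1]) if q.endswith("m") else float(q) * 1000
    if abs(millis - round(millis)) > 1e-9:
        raise ValueError("precision finer than 1m is not allowed: " + q)
    return int(round(millis))

# 100m cpu, 100 millicpu, and 0.1 cpu are all the same quantity:
print(to_millicpu("100m"), to_millicpu("0.1"), to_millicpu("1"))  # 100 100 1000
```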
Delete your Pod:
```shell
kubectl delete pod cpu-demo --namespace=cpu-example
```
## Specify a CPU request that is too big for your Nodes
CPU requests and limits are associated with Containers, but it is useful to think
of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum
of the CPU requests for all the Containers in the Pod. Likewise, the CPU limit for
a Pod is the sum of the CPU limits for all the Containers in the Pod.
Pod scheduling is based on requests. A Pod is scheduled to run on a Node only if
the Node has enough CPU resources available to satisfy the Pod's CPU request.
In this exercise, you create a Pod that has a CPU request so big that it exceeds
the capacity of any Node in your cluster. Here is the configuration file for a Pod
that has one Container. The Container requests 100 cpu, which is likely to exceed the
capacity of any Node in your cluster.
{% include code.html language="yaml" file="cpu-request-limit-2.yaml" ghlink="/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml --namespace=cpu-example
```
View the Pod's status:
```shell
kubectl get pod cpu-demo-2 --namespace=cpu-example
```
The output shows that the Pod's status is Pending. That is, the Pod has not been
scheduled to run on any Node, and it will remain in the Pending state indefinitely:
```
kubectl get pod cpu-demo-2 --namespace=cpu-example
NAME READY STATUS RESTARTS AGE
cpu-demo-2 0/1 Pending 0 7m
```
View detailed information about the Pod, including events:
```shell
kubectl describe pod cpu-demo-2 --namespace=cpu-example
```
The output shows that the Container cannot be scheduled because of insufficient
CPU resources on the Nodes:
```shell
Events:
Reason Message
------ -------
FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3).
```
Delete your Pod:
```shell
kubectl delete pod cpu-demo-2 --namespace=cpu-example
```
## If you don't specify a CPU limit
If you don't specify a CPU limit for a Container, then one of these situations applies:
* The Container has no upper bound on the CPU resources it can use. The Container
could use all of the CPU resources available on the Node where it is running.
* The Container is running in a namespace that has a default CPU limit, and the
Container is automatically assigned the default limit. Cluster administrators can use a
[LimitRange](/docs/api-reference/v1.7/#limitrange-v1-core)
to specify a default value for the CPU limit.
## Motivation for CPU requests and limits
By configuring the CPU requests and limits of the Containers that run in your
cluster, you can make efficient use of the CPU resources available on your cluster's
Nodes. By keeping a Pod's CPU request low, you give the Pod a good chance of being
scheduled. By having a CPU limit that is greater than the CPU request, you accomplish two things:
* The Pod can have bursts of activity where it makes use of CPU resources that happen to be available.
* The amount of CPU resources a Pod can use during a burst is limited to some reasonable amount.
## Clean up
Delete your namespace:
```shell
kubectl delete namespace cpu-example
```
{% endcapture %}
{% capture whatsnext %}
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,368 @@
---
title: Assign Memory Resources to Containers and Pods
---
{% capture overview %}
This page shows how to assign a memory *request* and a memory *limit* to a
Container. A Container is guaranteed to have as much memory as it requests,
but is not allowed to use more memory than its limit.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
Each node in your cluster must have at least 300 MiB of memory.
A few of the steps on this page require the
[Heapster](https://github.com/kubernetes/heapster) service to be running
in your cluster. If you don't have Heapster running, you can still do most
of the steps, and you can safely skip the Heapster steps.
To see whether the Heapster service is running, enter this command:
```shell
kubectl get services --namespace=kube-system
```
If the Heapster service is running, it appears in the output:
```shell
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system heapster 10.11.240.9 <none> 80/TCP 6d
```
{% endcapture %}
{% capture steps %}
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace mem-example
```
## Specify a memory request and a memory limit
To specify a memory request for a Container, include the `resources:requests` field
in the Container's resource manifest. To specify a memory limit, include `resources:limits`.
In this exercise, you create a Pod that has one Container. The container has a memory
request of 100 MiB and a memory limit of 200 MiB. Here's the configuration file
for the Pod:
{% include code.html language="yaml" file="memory-request-limit.yaml" ghlink="/docs/tasks/configure-pod-container/memory-request-limit.yaml" %}
In the configuration file, the `args` section provides arguments for the Container when it starts.
The `-mem-total 150Mi` argument tells the Container to attempt to allocate 150 MiB of memory.
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/memory-request-limit.yaml --namespace=mem-example
```
Verify that the Pod's Container is running:
```shell
kubectl get pod memory-demo --namespace=mem-example
```
View detailed information about the Pod:
```shell
kubectl get pod memory-demo --output=yaml --namespace=mem-example
```
The output shows that the one Container in the Pod has a memory request of 100 MiB
and a memory limit of 200 MiB.
```yaml
...
resources:
  limits:
    memory: 200Mi
  requests:
    memory: 100Mi
...
```
Start a proxy so that you can call the Heapster service:
```shell
kubectl proxy
```
In another command window, get the memory usage from the Heapster service:
```
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/mem-example/pods/memory-demo/metrics/memory/usage
```
The output shows that the Pod is using about 162,900,000 bytes of memory, which
is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the
Pod's 200 MiB limit.
```json
{
  "timestamp": "2017-06-20T18:54:00Z",
  "value": 162856960
}
```
Delete your Pod:
```shell
kubectl delete pod memory-demo --namespace=mem-example
```
## Exceed a Container's memory limit
A Container can exceed its memory request if the Node has memory available. But a Container
is not allowed to use more than its memory limit. If a Container allocates more memory than
its limit, the Container becomes a candidate for termination. If the Container continues to
consume memory beyond its limit, the Container is terminated. If a terminated Container is
restartable, the kubelet will restart it, as with any other type of runtime failure.
In this exercise, you create a Pod that attempts to allocate more memory than its limit.
Here is the configuration file for a Pod that has one Container. The Container has a
memory request of 50 MiB and a memory limit of 100 MiB.
{% include code.html language="yaml" file="memory-request-limit-2.yaml" ghlink="/docs/tasks/configure-pod-container/memory-request-limit-2.yaml" %}
In the configuration file, in the `args` section, you can see that the Container
will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/memory-request-limit-2.yaml --namespace=mem-example
```
View detailed information about the Pod:
```shell
kubectl get pod memory-demo-2 --namespace=mem-example
```
At this point, the Container might be running, or it might have been killed. If the
Container has not yet been killed, repeat the preceding command until you see that
the Container has been killed:
```shell
NAME READY STATUS RESTARTS AGE
memory-demo-2 0/1 OOMKilled 1 24s
```
Get a more detailed view of the Container's status:
```shell
kubectl get pod memory-demo-2 --output=yaml --namespace=mem-example
```
The output shows that the Container has been killed because it is out of memory (OOM).
```shell
lastState:
  terminated:
    containerID: docker://65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f
    exitCode: 137
    finishedAt: 2017-06-20T20:52:19Z
    reason: OOMKilled
    startedAt: null
```
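The `exitCode: 137` in that status follows the common 128 + signal-number convention for a process killed by a signal: the OOM killer terminates the Container with SIGKILL, which is signal 9 on POSIX systems, so 128 + 9 = 137. A quick check of that arithmetic:

```python
import signal

# Exit code convention for a process killed by a signal: 128 + signal number.
# The OOM killer sends SIGKILL (signal 9 on POSIX systems).
exit_code = 128 + int(signal.SIGKILL)
print(exit_code)  # 137, matching the exitCode in the status above
```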
The Container in this exercise is restartable, so the kubelet will restart it. Enter
this command several times to see that the Container gets repeatedly killed and restarted:
```shell
kubectl get pod memory-demo-2 --namespace=mem-example
```
The output shows that the Container gets killed, restarted, killed again, restarted again, and so on:
```
kubectl get pod memory-demo-2 --namespace=mem-example
NAME            READY     STATUS      RESTARTS   AGE
memory-demo-2   0/1       OOMKilled   1          37s

kubectl get pod memory-demo-2 --namespace=mem-example
NAME            READY     STATUS    RESTARTS   AGE
memory-demo-2   1/1       Running   2          40s
```
View detailed information about the Pod's history:
```
kubectl describe pod memory-demo-2 --namespace=mem-example
```
The output shows that the Container starts and fails repeatedly:
```
... Normal Created Created container with id 66a3a20aa7980e61be4922780bf9d24d1a1d8b7395c09861225b0eba1b1f8511
... Warning BackOff Back-off restarting failed container
```
View detailed information about your cluster's Nodes:
```
kubectl describe nodes
```
The output includes a record of the Container being killed because of an out-of-memory condition:
```
Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child
```
Delete your Pod:
```shell
kubectl delete pod memory-demo-2 --namespace=mem-example
```
## Specify a memory request that is too big for your Nodes
Memory requests and limits are associated with Containers, but it is useful to think
of a Pod as having a memory request and limit. The memory request for the Pod is the
sum of the memory requests for all the Containers in the Pod. Likewise, the memory
limit for the Pod is the sum of the limits of all the Containers in the Pod.
Pod scheduling is based on requests. A Pod is scheduled to run on a Node only if the Node
has enough available memory to satisfy the Pod's memory request.
In this exercise, you create a Pod that has a memory request so big that it exceeds the
capacity of any Node in your cluster. Here is the configuration file for a Pod that has one
Container. The Container requests 1000 GiB of memory, which is likely to exceed the capacity
of any Node in your cluster.
{% include code.html language="yaml" file="memory-request-limit-3.yaml" ghlink="/docs/tasks/configure-pod-container/memory-request-limit-3.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/memory-request-limit-3.yaml --namespace=mem-example
```
View the Pod's status:
```shell
kubectl get pod memory-demo-3 --namespace=mem-example
```
The output shows that the Pod's status is Pending. That is, the Pod has not been
scheduled to run on any Node, and it will remain in the Pending state indefinitely:
```
kubectl get pod memory-demo-3 --namespace=mem-example
NAME READY STATUS RESTARTS AGE
memory-demo-3 0/1 Pending 0 25s
```
View detailed information about the Pod, including events:
```shell
kubectl describe pod memory-demo-3 --namespace=mem-example
```
The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:
```shell
Events:
... Reason Message
------ -------
... FailedScheduling No nodes are available that match all of the following predicates:: Insufficient memory (3).
```
## Memory units
The memory resource is measured in bytes. You can express memory as a plain integer or a
fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki.
For example, the following represent approximately the same value:
```shell
128974848, 129e6, 129M, 123Mi
```
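To see why those four forms are approximately equal, you can parse them with a small helper (illustrative only; real quantity parsing lives in the Kubernetes API machinery):

```python
# Decimal (K, M, G, ...) and binary (Ki, Mi, Gi, ...) memory suffixes.
SUFFIXES = {"E": 10**18, "P": 10**15, "T": 10**12, "G": 10**9,
            "M": 10**6, "K": 10**3,
            "Ei": 2**60, "Pi": 2**50, "Ti": 2**40, "Gi": 2**30,
            "Mi": 2**20, "Ki": 2**10}

def to_bytes(q):
    """Parse a memory quantity string into bytes (illustrative helper)."""
    # Try two-character suffixes ('Mi') before one-character ones ('M').
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if q.endswith(suffix):
            return int(float(q[: -len(suffix)]) * SUFFIXES[suffix])
    return int(float(q))  # plain or scientific notation, e.g. '129e6'

for q in ["128974848", "129e6", "129M", "123Mi"]:
    print(q, "=", to_bytes(q), "bytes")
```

Note that 123Mi is exactly 128974848 bytes, while 129M and 129e6 are both 129,000,000 bytes: close, but not identical.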
Delete your Pod:
```shell
kubectl delete pod memory-demo-3 --namespace=mem-example
```
## If you don't specify a memory limit
If you don't specify a memory limit for a Container, then one of these situations applies:
* The Container has no upper bound on the amount of memory it uses. The Container
could use all of the memory available on the Node where it is running.
* The Container is running in a namespace that has a default memory limit, and the
Container is automatically assigned the default limit. Cluster administrators can use a
[LimitRange](https://kubernetes.io/docs/api-reference/v1.6/)
to specify a default value for the memory limit.
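For example, a cluster administrator could apply a LimitRange like this sketch (the values are illustrative), which gives every Container in the namespace a default memory limit of 512 MiB and a default memory request of 256 MiB:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
```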
## Motivation for memory requests and limits
By configuring memory requests and limits for the Containers that run in your
cluster, you can make efficient use of the memory resources available on your cluster's
Nodes. By keeping a Pod's memory request low, you give the Pod a good chance of being
scheduled. By having a memory limit that is greater than the memory request, you accomplish two things:
* The Pod can have bursts of activity where it makes use of memory that happens to be available.
* The amount of memory a Pod can use during a burst is limited to some reasonable amount.
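In a Container spec, this pattern is simply a memory request set below the memory limit, as in this illustrative fragment:

```yaml
resources:
  requests:
    memory: "100Mi"  # a low request keeps the Pod easy to schedule
  limits:
    memory: "200Mi"  # the limit caps bursts at a reasonable amount
```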
## Clean up
Delete your namespace. This deletes all the Pods that you created for this task:
```shell
kubectl delete namespace mem-example
```
{% endcapture %}
{% capture whatsnext %}
### For app developers
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
{% endcapture %}
{% include templates/task.md %}

@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: cpu-ram-demo
spec:
containers:
- name: cpu-ram-demo-container
image: gcr.io/google-samples/node-hello:1.0
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "1"

@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: cpu-demo-2
spec:
containers:
- name: cpu-demo-ctr-2
image: vish/stress
resources:
limits:
cpu: "100"
requests:
cpu: "100"
args:
- -cpus
- "2"

@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: cpu-demo
spec:
containers:
- name: cpu-demo-ctr
image: vish/stress
resources:
limits:
cpu: "1"
requests:
cpu: "0.5"
args:
- -cpus
- "2"

@ -1,12 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: invalid-pod
spec:
containers:
- name: kubernetes-serve-hostname
image: gcr.io/google_containers/serve_hostname
resources:
limits:
cpu: "3"
memory: 100Mi

@ -1,26 +0,0 @@
apiVersion: v1
kind: LimitRange
metadata:
name: mylimits
spec:
limits:
- max:
cpu: "2"
memory: 1Gi
min:
cpu: 200m
memory: 6Mi
type: Pod
- default:
cpu: 300m
memory: 200Mi
defaultRequest:
cpu: 200m
memory: 100Mi
max:
cpu: "2"
memory: 1Gi
min:
cpu: 100m
memory: 3Mi
type: Container

@ -0,0 +1,11 @@
apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: memory-demo-2
spec:
containers:
- name: memory-demo-2-ctr
image: vish/stress
resources:
requests:
memory: 50Mi
limits:
memory: "100Mi"
args:
- -mem-total
- 250Mi
- -mem-alloc-size
- 10Mi
- -mem-alloc-sleep
- 1s

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: memory-demo-3
spec:
containers:
- name: memory-demo-3-ctr
image: vish/stress
resources:
limits:
memory: "1000Gi"
requests:
memory: "1000Gi"
args:
- -mem-total
- 150Mi
- -mem-alloc-size
- 10Mi
- -mem-alloc-sleep
- 1s

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: memory-demo
spec:
containers:
- name: memory-demo-ctr
image: vish/stress
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
args:
- -mem-total
- 150Mi
- -mem-alloc-size
- 10Mi
- -mem-alloc-sleep
- 1s

@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: qos-demo-2
spec:
containers:
- name: qos-demo-2-ctr
image: nginx
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"

@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
name: qos-demo-3
spec:
containers:
- name: qos-demo-3-ctr
image: nginx

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: qos-demo-4
spec:
containers:
- name: qos-demo-4-ctr-1
image: nginx
resources:
requests:
memory: "200Mi"
- name: qos-demo-4-ctr-2
image: redis

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: qos-demo
spec:
containers:
- name: qos-demo-ctr
image: nginx
resources:
limits:
memory: "200Mi"
cpu: "700m"
requests:
memory: "200Mi"
cpu: "700m"

@ -0,0 +1,265 @@
---
title: Configure Quality of Service for Pods
---
{% capture overview %}
This page shows how to configure Pods so that they will be assigned particular
Quality of Service (QoS) classes. Kubernetes uses QoS classes to make decisions about
scheduling and evicting Pods.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
## QoS classes
When Kubernetes creates a Pod it assigns one of these QoS classes to the Pod:
* Guaranteed
* Burstable
* BestEffort
## Create a namespace
Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.
```shell
kubectl create namespace qos-example
```
## Create a Pod that gets assigned a QoS class of Guaranteed
For a Pod to be given a QoS class of Guaranteed:
* Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
* Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same.
Here is the configuration file for a Pod that has one Container. The Container has a memory limit and a
memory request, both equal to 200 MiB. The Container has a cpu limit and a cpu request, both equal to 700 millicpu:
{% include code.html language="yaml" file="qos-pod.yaml" ghlink="/docs/tasks/configure-pod-container/qos-pod.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/qos-pod.yaml --namespace=qos-example
```
View detailed information about the Pod:
```shell
kubectl get pod qos-demo --namespace=qos-example --output=yaml
```
The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. The output also
verifies that the Pod's Container has a memory request that matches its memory limit, and it has
a cpu request that matches its cpu limit.
```yaml
spec:
containers:
...
resources:
limits:
cpu: 700m
memory: 200Mi
requests:
cpu: 700m
memory: 200Mi
...
qosClass: Guaranteed
```
**Note**: If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes
automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own
cpu limit, but does not specify a cpu request, Kubernetes automatically assigns a cpu request that matches
the limit.
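For example, a Container spec fragment like this, with only limits set, ends up with requests equal to its limits, so it still qualifies for the Guaranteed class:

```yaml
resources:
  limits:
    memory: "200Mi"
    cpu: "700m"
  # no requests specified; Kubernetes assigns requests equal to the limits
```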
Delete your Pod:
```shell
kubectl delete pod qos-demo --namespace=qos-example
```
## Create a Pod that gets assigned a QoS class of Burstable
A Pod is given a QoS class of Burstable if:
* The Pod does not meet the criteria for QoS class Guaranteed.
* At least one Container in the Pod has a memory or cpu request.
Here is the configuration file for a Pod that has one Container. The Container has a memory limit of 200 MiB
and a memory request of 100 MiB.
{% include code.html language="yaml" file="qos-pod-2.yaml" ghlink="/docs/tasks/configure-pod-container/qos-pod-2.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/qos-pod-2.yaml --namespace=qos-example
```
View detailed information about the Pod:
```shell
kubectl get pod qos-demo-2 --namespace=qos-example --output=yaml
```
The output shows that Kubernetes gave the Pod a QoS class of Burstable.
```yaml
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: qos-demo-2-ctr
resources:
limits:
memory: 200Mi
requests:
memory: 100Mi
...
qosClass: Burstable
```
Delete your Pod:
```shell
kubectl delete pod qos-demo-2 --namespace=qos-example
```
## Create a Pod that gets assigned a QoS class of BestEffort
For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not
have any memory or cpu limits or requests.
Here is the configuration file for a Pod that has one Container. The Container has no memory or cpu
limits or requests:
{% include code.html language="yaml" file="qos-pod-3.yaml" ghlink="/docs/tasks/configure-pod-container/qos-pod-3.yaml" %}
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/qos-pod-3.yaml --namespace=qos-example
```
View detailed information about the Pod:
```shell
kubectl get pod qos-demo-3 --namespace=qos-example --output=yaml
```
The output shows that Kubernetes gave the Pod a QoS class of BestEffort.
```yaml
spec:
containers:
...
resources: {}
...
qosClass: BestEffort
```
Delete your Pod:
```shell
kubectl delete pod qos-demo-3 --namespace=qos-example
```
## Create a Pod that has two Containers
Here is the configuration file for a Pod that has two Containers. One container specifies a memory
request of 200 MiB. The other Container does not specify any requests or limits.
{% include code.html language="yaml" file="qos-pod-4.yaml" ghlink="/docs/tasks/configure-pod-container/qos-pod-4.yaml" %}
Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the
criteria for QoS class Guaranteed, and one of its Containers has a memory request.
Create the Pod:
```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/qos-pod-4.yaml --namespace=qos-example
```
View detailed information about the Pod:
```shell
kubectl get pod qos-demo-4 --namespace=qos-example --output=yaml
```
The output shows that Kubernetes gave the Pod a QoS class of Burstable:
```yaml
spec:
containers:
...
name: qos-demo-4-ctr-1
resources:
requests:
memory: 200Mi
...
name: qos-demo-4-ctr-2
resources: {}
...
qosClass: Burstable
```
Delete your Pod:
```shell
kubectl delete pod qos-demo-4 --namespace=qos-example
```
## Clean up
Delete your namespace:
```shell
kubectl delete namespace qos-example
```
{% endcapture %}
{% capture whatsnext %}
### For app developers
* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-memory-request-limit/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/default-cpu-request-limit/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
{% endcapture %}
{% include templates/task.md %}

@ -1,9 +0,0 @@
apiVersion: v1
kind: ResourceQuota
metadata:
name: best-effort
spec:
hard:
pods: "10"
scopes:
- BestEffort

@ -1,9 +0,0 @@
apiVersion: v1
kind: ResourceQuota
metadata:
name: object-counts
spec:
hard:
persistentvolumeclaims: "2"
services.loadbalancers: "2"
services.nodeports: "0"

@ -1,14 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: valid-pod
labels:
name: valid-pod
spec:
containers:
- name: kubernetes-serve-hostname
image: gcr.io/google_containers/serve_hostname
resources:
limits:
cpu: "1"
memory: 512Mi

quota-pod-deployment.yaml Normal file
@ -0,0 +1,14 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: pod-quota-demo
spec:
replicas: 3
template:
metadata:
labels:
app: pod-quota-demo
spec:
containers:
- name: pod-quota-demo
image: nginx