---
reviewers:
- derekwaynecarr
title: Resource Quotas
content_template: templates/concept
weight: 10
---
{{% capture overview %}}
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern.
{{% /capture %}}
{{% capture body %}}
A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
aggregate resource consumption per namespace. It can limit the quantity of objects that can
be created in a namespace by type, as well as the total amount of compute resources that may
be consumed by resources in that project.
Resource quotas work like this:
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- The administrator creates one `ResourceQuota` for each namespace.
- Users create resources (pods, services, etc.) in the namespace, and the quota system
tracks usage to ensure it does not exceed hard resource limits defined in a `ResourceQuota`.
- If creating or updating a resource violates a quota constraint, the request will fail with HTTP
status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated.
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
the `LimitRanger` admission controller to force defaults for pods that don't specify compute resource requirements (a sketch follows this list).
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
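As an illustration of that hint, a `LimitRange` along these lines (the name and values are only an example) injects default requests and limits into containers that omit them, so their pods are not rejected by a compute quota:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-compute-resources   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:                  # applied when a container sets no request
      cpu: 100m
      memory: 128Mi
    default:                         # applied when a container sets no limit
      cpu: 200m
      memory: 256Mi
```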
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 GiB and 10 cores,
let team B use 10 GiB and 4 cores, and hold 2 GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1 GiB RAM. Let the "production" namespace
use any amount.
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already created resources.
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the apiserver `--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
A resource quota is enforced in a particular namespace when there is a
`ResourceQuota` in that namespace.
## Compute Resource Quota
You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace.
The following resource types are supported:
| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
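For example, a `ResourceQuota` that caps the resources in this table might look like the following sketch (the name and values are illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # illustrative name
spec:
  hard:
    requests.cpu: "2"        # sum of CPU requests across non-terminal pods
    requests.memory: 4Gi     # sum of memory requests
    limits.cpu: "4"          # sum of CPU limits
    limits.memory: 8Gi       # sum of memory limits
```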
### Resource Quota For Extended Resources
In addition to the resources mentioned above, in release 1.10, quota support for
[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) was added.
As overcommit is not allowed for extended resources, it makes no sense to specify both `requests`
and `limits` for the same extended resource in a quota. So for extended resources, only quota items
with the `requests.` prefix are allowed for now.
Take GPU resources as an example: if the resource name is `nvidia.com/gpu` and you want to
limit the total number of GPUs requested in a namespace to 4, you can define a quota as follows:
* `requests.nvidia.com/gpu: 4`
See [Viewing and Setting Quotas](#viewing-and-setting-quotas) for more detailed information.
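Expressed as a manifest, that GPU quota might look like this sketch (the quota name is illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota                  # illustrative name
spec:
  hard:
    requests.nvidia.com/gpu: 4     # at most 4 GPUs requested across the namespace
```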
## Storage Resource Quota
You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested in a given namespace.
In addition, you can limit consumption of storage resources based on associated storage-class.
| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the storage-class-name, the sum of storage requests cannot exceed this value. |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
define a quota as follows:
* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
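Written out as a manifest, that gold/bronze policy might look like the following sketch (the quota name is illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota                                            # illustrative name
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
```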
In release 1.8, quota support for local ephemeral storage was added as an alpha feature:
| Resource Name | Description |
| ------------------------------- |----------------------------------------------------------- |
| `requests.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage requests cannot exceed this value. |
| `limits.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value. |
## Object Count Quota
The 1.9 release added support for quotas on all standard, namespaced resource types using the following syntax:
* `count/<resource>.<group>`
Here is an example set of resources users may want to put under object count quota:
* `count/persistentvolumeclaims`
* `count/services`
* `count/secrets`
* `count/configmaps`
* `count/replicationcontrollers`
* `count/deployments.apps`
* `count/replicasets.apps`
* `count/statefulsets.apps`
* `count/jobs.batch`
* `count/cronjobs.batch`
* `count/deployments.extensions`
When using `count/*` resource quota, an object is charged against the quota if it exists in server storage.
These types of quotas are useful to protect against exhaustion of storage resources. For example, you may
want to quota the number of secrets in a server given their large size. Too many secrets in a cluster can
actually prevent servers and controllers from starting! You may choose to quota jobs to protect against
a poorly configured cronjob creating too many jobs in a namespace causing a denial of service.
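As a sketch of that protection (the name and limits are illustrative), a quota using the `count/` syntax could look like:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-count-quota       # illustrative name
spec:
  hard:
    count/secrets: "20"          # cap the Secret objects stored in the namespace
    count/jobs.batch: "10"       # guard against a misconfigured cronjob flooding the namespace with jobs
    count/deployments.apps: "5"
```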
Prior to the 1.9 release, it was possible to do generic object count quota on a limited set of resources.
In addition, it is possible to further constrain quota for particular resources by their type.
The following types are supported:
| Resource Name | Description |
| ------------------------------- | ------------------------------------------------- |
| `configmaps` | The total number of config maps that can exist in the namespace. |
| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `pods` | The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `.status.phase in (Failed, Succeeded)` is true. |
| `replicationcontrollers` | The total number of replication controllers that can exist in the namespace. |
| `resourcequotas` | The total number of [resource quotas](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) that can exist in the namespace. |
| `services` | The total number of services that can exist in the namespace. |
| `services.loadbalancers` | The total number of services of type load balancer that can exist in the namespace. |
| `services.nodeports` | The total number of services of type node port that can exist in the namespace. |
| `secrets` | The total number of secrets that can exist in the namespace. |
For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace that are not terminal. You might want to set a `pods`
quota on a namespace to avoid the case where a user creates many small pods and
exhausts the cluster's supply of Pod IPs.
## Quota Scopes
Each quota can have an associated set of scopes. A quota will only measure usage for a resource if it matches
the intersection of enumerated scopes.
When a scope is added to the quota, it limits the number of resources it supports to those that pertain to the scope.
Resources specified on the quota outside of the allowed set result in a validation error.
| Scope | Description |
| ----- | ----------- |
| `Terminating` | Match pods where `.spec.activeDeadlineSeconds >= 0` |
| `NotTerminating` | Match pods where `.spec.activeDeadlineSeconds` is nil |
| `BestEffort` | Match pods that have best effort quality of service. |
| `NotBestEffort` | Match pods that do not have best effort quality of service. |
The `BestEffort` scope restricts a quota to tracking the following resource: `pods`
The `Terminating`, `NotTerminating`, and `NotBestEffort` scopes restrict a quota to tracking the following resources:
* `cpu`
* `limits.cpu`
* `limits.memory`
* `memory`
* `pods`
* `requests.cpu`
* `requests.memory`
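For example, a quota that only tracks BestEffort pods can declare the scope with the `scopes` field, roughly as in this sketch (name and limit are illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-pods          # illustrative name
spec:
  hard:
    pods: "10"                   # the BestEffort scope only tracks the pods resource
  scopes:
  - BestEffort
```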
### Resource Quota Per PriorityClass
{{< feature-state for_k8s_version="1.12" state="beta" >}}
Pods can be created at a specific [priority](/docs/concepts/configuration/pod-priority-preemption/#pod-priority).
You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod.
{{< note >}}
You need to enable the `ResourceQuotaScopeSelectors` feature gate before using resource quotas
per PriorityClass.
{{< /note >}}
This example creates a quota object and matches it with pods at specific priorities. The example
works as follows:
- Pods in the cluster have one of three priority classes: "low", "medium", or "high".
- One quota object is created for each priority.
Save the following YAML to a file `quota.yml`.
```yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-high
  spec:
    hard:
      cpu: "1000"
      memory: 200Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-medium
  spec:
    hard:
      cpu: "10"
      memory: 20Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["medium"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-low
  spec:
    hard:
      cpu: "5"
      memory: 10Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["low"]
```
Apply the YAML using `kubectl create`.
```shell
kubectl create -f ./quota.yml
```
```shell
resourcequota/pods-high created
resourcequota/pods-medium created
resourcequota/pods-low created
```
Verify that `Used` quota is `0` using `kubectl describe quota`.
```shell
kubectl describe quota
```
```shell
Name:       pods-high
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     1k
memory      0     200Gi
pods        0     10


Name:       pods-low
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     5
memory      0     10Gi
pods        0     10


Name:       pods-medium
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     10
memory      0     20Gi
pods        0     10
```
Create a pod with priority "high". Save the following YAML to a
file `high-priority-pod.yml`.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: high-priority
spec:
  containers:
  - name: high-priority
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    resources:
      requests:
        memory: "10Gi"
        cpu: "500m"
      limits:
        memory: "10Gi"
        cpu: "500m"
  priorityClassName: high
```
Apply it with `kubectl create`.
```shell
kubectl create -f ./high-priority-pod.yml
```
Verify that "Used" stats for "high" priority quota, `pods-high`, has changed and that
the other two quotas are unchanged.
```shell
kubectl describe quota
```
```shell
Name:       pods-high
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         500m  1k
memory      10Gi  200Gi
pods        1     10


Name:       pods-low
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     5
memory      0     10Gi
pods        0     10


Name:       pods-medium
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     10
memory      0     20Gi
pods        0     10
```
`scopeSelector` supports the following values in the `operator` field:
* `In`
* `NotIn`
* `Exists`
* `DoesNotExist`
## Requests vs Limits
When allocating compute resources, each container may specify a request and a limit value for either CPU or memory.
A quota can be configured to constrain either value.
If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
then it requires that every incoming container specifies an explicit limit for those resources.
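For example, under a quota that sets both `requests.cpu` and `limits.cpu` (and the memory equivalents), each container would need to declare both, along the lines of this sketch (names, image, and values are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-aware-pod          # illustrative name
spec:
  containers:
  - name: app
    image: nginx                 # illustrative image
    resources:
      requests:
        cpu: 250m                # counted against requests.cpu
        memory: 256Mi            # counted against requests.memory
      limits:
        cpu: 500m                # counted against limits.cpu
        memory: 512Mi            # counted against limits.memory
```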
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
```shell
kubectl create namespace myspace
```
```shell
cat <<EOF > compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
EOF
```
```shell
kubectl create -f ./compute-resources.yaml --namespace=myspace
```
```shell
cat <<EOF > object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
EOF
```
```shell
kubectl create -f ./object-counts.yaml --namespace=myspace
```
```shell
kubectl get quota --namespace=myspace
```
```shell
NAME                AGE
compute-resources   30s
object-counts       32s
```
```shell
kubectl describe quota compute-resources --namespace=myspace
```
```shell
Name:                    compute-resources
Namespace:               myspace
Resource                 Used  Hard
--------                 ----  ----
limits.cpu               0     2
limits.memory            0     2Gi
pods                     0     4
requests.cpu             0     1
requests.memory          0     1Gi
requests.nvidia.com/gpu  0     4
```
```shell
kubectl describe quota object-counts --namespace=myspace
```
```shell
Name:                    object-counts
Namespace:               myspace
Resource                 Used  Hard
--------                 ----  ----
configmaps               0     10
persistentvolumeclaims   0     4
replicationcontrollers   0     20
secrets                  1     10
services                 0     10
services.loadbalancers   0     2
```
Kubectl also supports object count quota for all standard namespaced resources
using the syntax `count/<resource>.<group>`:
```shell
kubectl create namespace myspace
```
```shell
kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 --namespace=myspace
```
```shell
kubectl run nginx --image=nginx --replicas=2 --namespace=myspace
```
```shell
kubectl describe quota --namespace=myspace
```
```shell
Name:                         test
Namespace:                    myspace
Resource                      Used  Hard
--------                      ----  ----
count/deployments.extensions  1     2
count/pods                    2     3
count/replicasets.extensions  1     4
count/secrets                 1     4
```
## Quota and Cluster Capacity
`ResourceQuotas` are independent of the cluster capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
- Proportionally divide total cluster resources among several teams.
- Allow each tenant to grow resource usage as needed, but have a generous
limit to prevent accidental resource exhaustion.
- Detect demand from one namespace, add nodes, and increase quota.
Such policies could be implemented using `ResourceQuotas` as building blocks, by
writing a "controller" that watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.
Note that resource quota divides up aggregate cluster resources, but it creates no
restrictions around nodes: pods from several namespaces may run on the same node.
## Limit Priority Class consumption by default
It may be desired that pods at a particular priority, e.g. "cluster-services", should be allowed in a namespace only if a matching quota object exists.
With this mechanism, operators are able to restrict usage of certain high-priority classes to a limited number of namespaces, and not every namespace will be able to consume these priority classes by default.
To enforce this, the kube-apiserver flag `--admission-control-config-file` should be used to pass the path to the following configuration file:
```yaml
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: resourcequota.admission.k8s.io/v1beta1
    kind: Configuration
    limitedResources:
    - resource: pods
      matchScopes:
      - operator: In
        scopeName: PriorityClass
        values: ["cluster-services"]
```
Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present.
For example:
```yaml
scopeSelector:
  matchExpressions:
  - operator: In
    scopeName: PriorityClass
    values: ["cluster-services"]
```
See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and [Quota support for priority class design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md) for more information.
## Example
See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
{{% /capture %}}
{{% capture whatsnext %}}
See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
{{% /capture %}}