---
approvers:
- davidopp
- wojtek-t
title: Pod Priority and Preemption
---
{% capture overview %}

{% include feature-state-alpha.md %}

[Pods](/docs/user-guide/pods) in Kubernetes 1.8 and later can have priority. Priority
indicates the importance of a Pod relative to other Pods. When a Pod cannot be scheduled,
the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
pending Pod possible. In a future Kubernetes release, priority will also affect
out-of-resource eviction ordering on the Node.

**Note:** Preemption does not respect PodDisruptionBudget; see
[the limitations section](#poddisruptionbudget-is-not-supported) for more details.
{: .note}

{% endcapture %}
{% capture body %}

## How to use priority and preemption

To use priority and preemption in Kubernetes 1.8, follow these steps:

1. Enable the feature.
1. Add one or more PriorityClasses.
1. Create Pods with `priorityClassName` set to one of the added PriorityClasses.

You do not need to create the Pods directly, of course; normally you would add
`priorityClassName` to the Pod template of a collection object such as a Deployment,
as in the sketch below.

The following sections provide more information about these steps.
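Here is a minimal sketch of such a Deployment (the `nginx-deployment` name and
`app: nginx` labels are illustrative; the `high-priority` class is defined in the
[PriorityClass](#priorityclass) section below):

```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      priorityClassName: high-priority   # resolved to an integer priority at admission
      containers:
      - name: nginx
        image: nginx
```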
## Enabling priority and preemption

Pod priority and preemption is disabled by default in Kubernetes 1.8.
To enable the feature, set this command-line flag for both the API server
and the scheduler:

```
--feature-gates=PodPriority=true
```

Also set this flag for the API server:

```
--runtime-config=scheduling.k8s.io/v1alpha1=true
```
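How you pass these flags depends on how your cluster was deployed. For example,
in a cluster whose control plane runs as static Pods, the flags might appear in
the kube-apiserver manifest roughly like this (a sketch only; the path and the
other flags vary by installation):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --feature-gates=PodPriority=true
    - --runtime-config=scheduling.k8s.io/v1alpha1=true
    # ... other flags unchanged ...
```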
After the feature is enabled, you can create [PriorityClasses](#priorityclass)
and create Pods with [`priorityClassName`](#pod-priority) set.

If you try the feature and then decide to disable it, you must remove the `PodPriority`
feature gate or set it to false, and then restart the API server and
scheduler. After the feature is disabled, existing Pods keep their priority
fields, but preemption is disabled and the priority fields are ignored. You also
can no longer set `priorityClassName` in new Pods.
## PriorityClass

A PriorityClass is a non-namespaced object that defines a mapping from a priority
class name to the integer value of the priority. The name is specified in the `name`
field of the PriorityClass object's metadata. The value is specified in the required
`value` field. The higher the value, the higher the priority.

A PriorityClass object can have any 32-bit integer value smaller than or equal to
1 billion. Larger numbers are reserved for critical system Pods that should not
normally be preempted or evicted. A cluster admin should create one PriorityClass
object for each such mapping that they want.

PriorityClass also has two optional fields: `globalDefault` and `description`.
The `globalDefault` field indicates that the value of this PriorityClass should
be used for Pods without a `priorityClassName`. Only one PriorityClass with
`globalDefault` set to true can exist in the system. If there is no PriorityClass
with `globalDefault` set, the priority of Pods with no `priorityClassName` is zero.

The `description` field is an arbitrary string. It is meant to tell users of
the cluster when they should use this PriorityClass.

**Note 1:** If you upgrade your existing cluster and enable this feature, the priority
of your existing Pods is considered to be zero.
{: .note}

**Note 2:** Adding a PriorityClass with `globalDefault` set to true does not
change the priorities of existing Pods. The value of such a PriorityClass is used only
for Pods created after the PriorityClass is added.
{: .note}

**Note 3:** If you delete a PriorityClass, existing Pods that use the name of the
deleted PriorityClass remain unchanged, but you cannot create more Pods
that use the name of the deleted PriorityClass.
{: .note}
### Example PriorityClass

```yaml
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
```
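With the feature enabled, you could then create the class and confirm it exists;
for example (assuming the manifest above is saved as a hypothetical
`priorityclass.yaml`):

```
kubectl create -f priorityclass.yaml
kubectl get priorityclasses
```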
## Pod priority

After you have one or more PriorityClasses, you can create Pods that specify one
of those PriorityClass names in their specifications. The priority admission
controller uses the `priorityClassName` field to populate the integer value
of the Pod's priority. If the priority class is not found, the Pod is rejected.

The following YAML is an example of a Pod configuration that uses the PriorityClass
created in the preceding example. The priority admission controller checks the
specification and resolves the priority of the Pod to 1000000.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  priorityClassName: high-priority
```
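To check that the admission controller populated the priority, you might inspect
the Pod after creating it; the resolved integer value should appear alongside
`priorityClassName` in the output (exact field name per the alpha API):

```
kubectl get pod nginx -o yaml | grep -i priority
```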
## Preemption

When Pods are created, they go to a queue and wait to be scheduled. The scheduler
picks a Pod from the queue and tries to schedule it on a Node. If no Node is found
that satisfies all the specified requirements of the Pod, preemption logic is triggered
for the pending Pod. Let's call the pending Pod P. Preemption logic tries to find a Node
where removal of one or more Pods with lower priority than P would enable P to be scheduled
on that Node. If such a Node is found, one or more lower priority Pods get
deleted from the Node. After the Pods are gone, P can be scheduled on the Node.
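As a concrete sketch (names and numbers are illustrative): suppose Node N has
1 CPU allocatable. A priority-zero Pod requesting `700m` is already running, and a
high-priority Pod requesting `500m` is pending. Both cannot fit, so scheduling the
high-priority Pod can trigger preemption of the low-priority one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: best-effort-batch      # priority 0 (no priorityClassName)
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "700m"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority   # resolves to 1000000
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
```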
### Limitations of preemption (alpha version)

#### Starvation of preempting Pod

When Pods are preempted, the victims get their
[graceful termination period](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods).
They have that much time to finish their work and exit. If they don't, they are
killed. This graceful termination period creates a time gap between the point
that the scheduler preempts Pods and the time when the pending Pod (P) can be
scheduled on the Node (N). In the meantime, the scheduler keeps scheduling other
pending Pods. As victims exit or get terminated, the scheduler tries to schedule
Pods in the pending queue, and one or more of them may be considered and
scheduled to N before the scheduler considers scheduling P on N. In such a case,
it is likely that when all the victims exit, Pod P won't fit on Node N anymore.
So, the scheduler will have to preempt other Pods on Node N or another Node so that
P can be scheduled. This scenario might repeat for the second and
subsequent rounds of preemption, and P might not get scheduled for a while.
This scenario can cause problems in various clusters, but is particularly
problematic in clusters with a high Pod creation rate.

We will address this problem in the beta version of Pod preemption. The solution
we plan to implement is
[provided here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/pod-preemption.md#preemption-mechanics).
#### PodDisruptionBudget is not supported

A [Pod Disruption Budget (PDB)](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/)
allows application owners to limit the number of Pods of a replicated application that
are down simultaneously from voluntary disruptions. However, the alpha version of
preemption does not respect PDB when choosing preemption victims.

We plan to add PDB support in beta, but even in beta, respecting PDB will be best
effort. The scheduler will try to find victims whose PDB won't be violated by preemption,
but if no such victims are found, preemption will still happen, and lower priority Pods
will be removed despite their PDBs being violated.
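For reference, a PDB looks like the following minimal sketch (the `myapp` name and
labels are hypothetical); under alpha preemption, its Pods can still be chosen as
victims even if that takes the application below `minAvailable`:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2          # alpha preemption may evict below this anyway
  selector:
    matchLabels:
      app: myapp
```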
#### Inter-Pod affinity on lower-priority Pods

In version 1.8, a Node is considered for preemption only when
the answer to this question is yes: "If all the Pods with lower priority than
the pending Pod are removed from the Node, can the pending Pod be scheduled on
the Node?"

**Note:** Preemption does not necessarily remove all lower-priority Pods. If the
pending Pod can be scheduled by removing fewer than all lower-priority Pods, then
only a portion of the lower-priority Pods are removed. Even so, the answer to the
preceding question must be yes. If the answer is no, the Node is not considered
for preemption.
{: .note}

If a pending Pod has inter-Pod affinity to one or more of the lower-priority Pods
on the Node, the inter-Pod affinity rule cannot be satisfied in the absence of those
lower-priority Pods. In this case, the scheduler does not preempt any Pods on the
Node. Instead, it looks for another Node. The scheduler might find a suitable Node
or it might not. There is no guarantee that the pending Pod can be scheduled.

We might address this issue in future versions, but we don't have a clear plan yet,
and we will not consider it a blocker for beta or GA. Part
of the reason is that finding the set of lower-priority Pods that satisfy all
inter-Pod affinity rules is computationally expensive and adds substantial
complexity to the preemption logic. Besides, even if preemption spared the lower-priority
Pods in order to satisfy inter-Pod affinity, those Pods might be preempted
later by other Pods, which would negate the benefit of the complex logic of
respecting inter-Pod affinity.

Our recommended solution for this problem is to create inter-Pod affinity only towards
equal or higher priority Pods, as in the sketch below.
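For example, a high-priority Pod might constrain its affinity to Pods of a service
that is assumed to run at equal or higher priority (a sketch; the `critical-service`
label and the images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  priorityClassName: high-priority
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: critical-service   # assumed to run at equal or higher priority
        topologyKey: kubernetes.io/hostname
  containers:
  - name: myapp
    image: nginx
```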
#### Cross node preemption

Suppose a Node N is being considered for preemption so that a pending Pod P
can be scheduled on N. P might become feasible on N only if a Pod on another
Node is preempted. Here's an example:

* Pod P is being considered for Node N.
* Pod Q is running on another Node in the same zone as Node N.
* Pod P has zone-wide anti-affinity with Pod Q (see the sketch after this list).
* There are no other cases of anti-affinity between Pod P and other Pods in the zone.
* In order to schedule Pod P on Node N, Pod Q would have to be preempted, but the
  scheduler does not perform cross-node preemption. So, Pod P is deemed unschedulable
  on Node N.

If Pod Q were removed from its Node, the anti-affinity violation would be gone,
and Pod P could possibly be scheduled on Node N.
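A sketch of the anti-affinity on Pod P that produces this scenario (the labels are
hypothetical; because the `topologyKey` is the zone, Pod Q running on any Node in
the same zone blocks Pod P):

```yaml
# In Pod P's spec (illustrative labels only)
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: pod-q-app
      topologyKey: failure-domain.beta.kubernetes.io/zone
```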
We may consider adding cross-node preemption in future versions if we find an
algorithm with reasonable performance. We cannot promise anything at this point,
and cross-node preemption will not be considered a blocker for beta or GA.
{% endcapture %}
{% include templates/concept.md %}