Merge pull request #44539 from haripriya9647/values
Improve an expression from 'value's to 'values' for taint-and-toleration (pull/44623/head)

commit a5e512e0e9
@@ -71,7 +71,7 @@ The default value for `operator` is `Equal`.
A toleration "matches" a taint if the keys are the same and the effects are the same, and:

 * the `operator` is `Exists` (in which case no `value` should be specified), or
-* the `operator` is `Equal` and the `value`s are equal.
+* the `operator` is `Equal` and the values should be equal.
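In a Pod spec, tolerations covering both cases look like this (the key names `key1`/`key2` and values are illustrative, not from this diff):

```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key2"
  operator: "Exists"   # no value is specified when operator is Exists
  effect: "NoSchedule"
```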
{{< note >}}
@@ -97,7 +97,7 @@ The allowed values for the `effect` field are:
* Pods that tolerate the taint with a specified `tolerationSeconds` remain
  bound for the specified amount of time. After that time elapses, the node
  lifecycle controller evicts the Pods from the node.
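For illustration, a toleration that keeps a Pod bound to a tainted node for a limited time could be written like this (the `node.kubernetes.io/unreachable` key and the 6000-second window are example choices, not taken from this diff):

```yaml
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 6000  # evicted 6000 seconds after the taint is added
```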
`NoSchedule`
: No new Pods will be scheduled on the tainted node unless they have a matching
  toleration. Pods currently running on the node are **not** evicted.
@@ -105,7 +105,7 @@ The allowed values for the `effect` field are:
`PreferNoSchedule`
: `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`.
  The control plane will *try* to avoid placing a Pod that does not tolerate
  the taint on the node, but it is not guaranteed.
You can put multiple taints on the same node and multiple tolerations on the same pod.
The way Kubernetes processes multiple taints and tolerations is like a filter: start
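That filtering semantics can be sketched in Python (a simplified model for illustration only — it omits special cases such as an empty `key` with `Exists` tolerating every taint; the authoritative logic lives in the scheduler):

```python
def matches(toleration, taint):
    """A toleration matches a taint if the keys are the same, the effects
    are the same (or the toleration leaves the effect unset), and either
    the operator is Exists, or it is Equal and the values are equal."""
    if toleration.get("key") != taint["key"]:
        return False
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True
    return toleration.get("value") == taint["value"]


def unmatched_taints(taints, tolerations):
    """The 'filter': keep only the taints that no toleration matches.
    The remaining taints' effects then decide scheduling/eviction."""
    return [t for t in taints
            if not any(matches(tol, t) for tol in tolerations)]
```

A node with a single `NoSchedule` taint and a Pod with a matching `Exists` toleration leaves no unmatched taints, so the Pod can be scheduled there.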
@@ -293,15 +293,15 @@ decisions. This ensures that node conditions don't directly affect scheduling.
For example, if the `DiskPressure` node condition is active, the control plane
adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
onto the affected node. If the `MemoryPressure` node condition is active, the
control plane adds the `node.kubernetes.io/memory-pressure` taint.
You can ignore node conditions for newly created pods by adding the corresponding
Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
or `Burstable` QoS classes (even pods with no memory request set) as if they are
able to cope with memory pressure, while new `BestEffort` pods are not scheduled
onto the affected node.
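As a sketch, the toleration the control plane adds for non-`BestEffort` pods has roughly this shape (simplified; the exact fields are set by the control plane, not by the user):

```yaml
tolerations:
- key: "node.kubernetes.io/memory-pressure"
  operator: "Exists"
  effect: "NoSchedule"
```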
|
||||
The DaemonSet controller automatically adds the following `NoSchedule`
|
||||
tolerations to all daemons, to prevent DaemonSets from breaking.
|
||||
|
|