From 362f84dea0daf495a191fc26ba1839f7dc13c85d Mon Sep 17 00:00:00 2001
From: Haripriya
Date: Wed, 27 Dec 2023 12:38:00 +0530
Subject: [PATCH] 'value's to 'values' commit

---
 .../scheduling-eviction/taint-and-toleration.md   | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
index c9afb795a1..d2e78c98a8 100644
--- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
+++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
@@ -71,7 +71,7 @@ The default value for `operator` is `Equal`.
 A toleration "matches" a taint if the keys are the same and the effects are the same, and:
 
 * the `operator` is `Exists` (in which case no `value` should be specified), or
-* the `operator` is `Equal` and the `value`s are equal.
+* the `operator` is `Equal` and the values should be equal.
 
 {{< note >}}
 
@@ -97,7 +97,7 @@ The allowed values for the `effect` field are:
   * Pods that tolerate the taint with a specified `tolerationSeconds` remain
     bound for the specified amount of time. After that time elapses, the node
     lifecycle controller evicts the Pods from the node.
- 
+
 `NoSchedule`
 : No new Pods will be scheduled on the tainted node unless they have a matching
   toleration. Pods currently running on the node are **not** evicted.
@@ -105,7 +105,7 @@ The allowed values for the `effect` field are:
 `PreferNoSchedule`
 : `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`.
   The control plane will *try* to avoid placing a Pod that does not tolerate
-  the taint on the node, but it is not guaranteed. 
+  the taint on the node, but it is not guaranteed.
 
 You can put multiple taints on the same node and multiple tolerations on the same pod.
 The way Kubernetes processes multiple taints and tolerations is like a filter: start
@@ -293,15 +293,15 @@ decisions. This ensures that node conditions don't directly affect scheduling.
 For example, if the `DiskPressure` node condition is active, the control plane
 adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
 onto the affected node. If the `MemoryPressure` node condition is active, the
-control plane adds the `node.kubernetes.io/memory-pressure` taint. 
+control plane adds the `node.kubernetes.io/memory-pressure` taint.
 
 You can ignore node conditions for newly created pods by adding the corresponding
-Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure` 
-toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}} 
-other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed` 
+Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
+toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
+other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
 or `Burstable` QoS classes (even pods with no memory request set) as if they are
 able to cope with memory pressure, while new `BestEffort` pods are not scheduled
-onto the affected node. 
+onto the affected node.
 
 The DaemonSet controller automatically adds the following `NoSchedule` tolerations
 to all daemons, to prevent DaemonSets from breaking.
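
For reviewers, a minimal sketch of the matching rule reworded in the first hunk, using a hypothetical taint key `example-key`, value `example-value`, and node `node1` (none of these appear in the patched page). The first toleration below matches the taint because the keys and effects are the same, the `operator` is `Equal`, and the values are equal; the second matches because the `operator` is `Exists` (so no `value` is specified).

```shell
kubectl taint nodes node1 example-key=example-value:NoSchedule
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  # Matches the taint above: same key, same effect, operator "Equal", equal values.
  - key: "example-key"
    operator: "Equal"
    value: "example-value"
    effect: "NoSchedule"
  # Also matches: operator "Exists" tolerates any value for this key, so no value is set.
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
```

With either toleration present, the scheduler may place the Pod on `node1` despite the `NoSchedule` taint.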