Merge pull request #19970 from tsahiduek/pod_topology_spread_en
pod-topology-spread links - Language en
commit 2c694c97ab
@ -18,7 +18,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
### Enable Feature Gate
The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
must be enabled for the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and**
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}.
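
For example, on a cluster where you manage the control-plane component flags directly, the gate can be enabled like this (a sketch only; the exact invocation depends on how your control plane is deployed, and any other flags shown in your setup must be preserved):

```shell
kube-apiserver --feature-gates=EvenPodsSpread=true
kube-scheduler --feature-gates=EvenPodsSpread=true
```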
@ -160,6 +160,7 @@ There are some implicit conventions worth noting here:
- Only Pods in the same namespace as the incoming Pod can be matching candidates.
- Nodes that lack the label named in `topologySpreadConstraints[*].topologyKey` are bypassed. This implies that:
1. the Pods located on those nodes do not impact the `maxSkew` calculation - in the above example, suppose "node1" does not have the label "zone"; then its 2 Pods are disregarded, and the incoming Pod is scheduled into "zoneA".
2. the incoming Pod has no chance of being scheduled onto such nodes - in the above example, suppose a "node5" carrying the label `{zone-typo: zoneC}` joins the cluster; it is bypassed because it lacks the label key "zone".
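
As a concrete illustration of the convention above, consider this hypothetical manifest (the Pod name and the `foo: bar` label are invented for the example). The constraint spreads on the node label `zone`, so any node that does not carry a `zone` label is simply ignored, both when counting skew and as a scheduling target:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone            # nodes without a "zone" label are bypassed
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```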
@ -229,14 +230,14 @@ In Kubernetes, directives related to "Affinity" control how Pods are
scheduled - more packed or more scattered.
- For `PodAffinity`, you can try to pack any number of Pods into qualifying
topology domain(s)
- For `PodAntiAffinity`, only one Pod can be scheduled into a
single topology domain.
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
topology domains, whether to achieve high availability or to save cost. It can also help workloads
roll out updates and scale out replicas smoothly.
See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.
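
For instance, to *prefer* an even spread across zones without blocking scheduling when it cannot be satisfied (unlike the hard one-Pod-per-domain behavior of `PodAntiAffinity`), you can use `whenUnsatisfiable: ScheduleAnyway`. This fragment is a sketch; the `app: web` selector is illustrative:

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: ScheduleAnyway   # prefer, but do not require, even spreading
  labelSelector:
    matchLabels:
      app: web
```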
## Known Limitations