Merge pull request #45648 from asa3311/sync-zh-108

[zh] sync kube-scheduler taint-and-toleration topology-spread-constraints

commit 73edd86532
@@ -122,7 +122,7 @@ kube-scheduler 给一个 Pod 做调度选择时包含两个步骤:
 <!--
 The _filtering_ step finds the set of Nodes where it's feasible to
 schedule the Pod. For example, the PodFitsResources filter checks whether a
-candidate Node has enough available resource to meet a Pod's specific
+candidate Node has enough available resources to meet a Pod's specific
 resource requests. After this step, the node list contains any suitable
 Nodes; often, there will be more than one. If the list is empty, that
 Pod isn't (yet) schedulable.
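To make the filtering step concrete, a minimal Pod manifest along these lines (a sketch; the name and image are illustrative, not taken from this PR) declares the `resources.requests` that a check such as PodFitsResources compares against each candidate Node's free capacity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "500m"          # filtering keeps only Nodes with at least 0.5 CPU free
        memory: "256Mi"      # ...and at least 256Mi of allocatable memory
```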
@@ -141,7 +141,7 @@ An empty `effect` matches all effects with key `key1`.
 {{< /note >}}

 <!--
-The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
+The above example used the `effect` of `NoSchedule`. Alternatively, you can use the `effect` of `PreferNoSchedule`.
 -->
 上述例子中 `effect` 使用的值为 `NoSchedule`,你也可以使用另外一个值 `PreferNoSchedule`。
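As a rough sketch of the softer effect being compared here (the key/value pair is a placeholder, echoing the `key1` used earlier on the page), a toleration with `PreferNoSchedule` asks the scheduler to *try* to avoid matching tainted nodes rather than refusing them outright:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
  tolerations:
  - key: "key1"                  # placeholder key from the page's example
    operator: "Equal"
    value: "value1"
    effect: "PreferNoSchedule"   # soft variant of NoSchedule
```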
@@ -389,7 +389,7 @@ are true. The following taints are built in:
 * `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
 * `node.kubernetes.io/unschedulable`: Node is unschedulable.
 * `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
-  with "external" cloud provider, this taint is set on a node to mark it
+  with an "external" cloud provider, this taint is set on a node to mark it
   as unusable. After a controller from the cloud-controller-manager initializes
   this node, the kubelet removes this taint.
 -->
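Workloads that must run on nodes carrying these built-in taints (a networking agent, for example) tolerate them explicitly. A minimal sketch, assuming an illustrative Pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: network-agent        # hypothetical name
spec:
  containers:
  - name: agent
    image: registry.k8s.io/pause:3.9
  tolerations:
  - key: "node.kubernetes.io/network-unavailable"   # built-in taint from the list above
    operator: "Exists"       # match the taint regardless of its value
    effect: "NoSchedule"
```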
@@ -496,7 +496,7 @@ can use a manifest similar to:

 <!--
 From that manifest, `topologyKey: zone` implies the even distribution will only be applied
-to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
+to nodes that are labeled `zone: <any value>` (nodes that don't have a `zone` label
 are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
 incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.
 -->
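The manifest that hunk refers to is along these lines; a sketch assuming the `zone` node label and the `foo: bar` Pod label used elsewhere on the page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo          # hypothetical name
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                         # zones may differ by at most one matching Pod
    topologyKey: zone                  # only nodes labeled zone: <any value> count
    whenUnsatisfiable: DoNotSchedule   # leave the Pod pending rather than violate the skew
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```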
@@ -780,7 +780,7 @@ There are some implicit conventions worth noting here:
 above example, if you remove the incoming Pod's labels, it can still be placed onto
 nodes in zone `B`, since the constraints are still satisfied. However, after that
 placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A`
-having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as
+having 2 Pods labeled as `foo: bar`, and zone `B` having 1 Pod labeled as
 `foo: bar`. If this is not what you expect, update the workload's
 `topologySpreadConstraints[*].labelSelector` to match the labels in the pod template.
 -->
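Applied to a workload object, that convention reads roughly as follows: the constraint's `labelSelector` mirrors the pod template's labels, so the spread is computed against the Pods the workload actually creates. An illustrative Deployment, not part of the PR:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-deploy        # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar             # the pod template's labels...
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar         # ...matched here, per the convention above
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
```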
@@ -981,7 +981,7 @@ section of the enhancement proposal about Pod topology spread constraints.
 because, in this case, those topology domains won't be considered until there is
 at least one node in them.

-You can work around this by using an cluster autoscaling tool that is aware of
+You can work around this by using a cluster autoscaling tool that is aware of
 Pod topology spread constraints and is also aware of the overall set of topology
 domains.
 -->