zh: sync concepts/workloads files
zh: sync concepts/workloads/pods/disruptions
zh: sync concepts/workloads/pods/init-containers
zh: sync concepts/workloads/pods/pod-topology-spread-constraints
parent 57cb82ed38
commit 29e7f7c045
@@ -155,18 +155,23 @@ and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/)
 <!--
 The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are
-no voluntary disruptions at all. However, your cluster administrator or hosting provider
+no automated voluntary disruptions (only user-triggered ones). However, your cluster administrator or hosting provider
 may run some additional services which cause voluntary disruptions. For example,
 rolling out node software updates can cause voluntary disruptions. Also, some implementations
 of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes.
 Your cluster administrator or hosting provider should have documented what level of voluntary
-disruptions, if any, to expect.
+disruptions, if any, to expect. Certain configuration options, such as
+[using PriorityClasses](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/)
+in your pod spec can also cause voluntary (and involuntary) disruptions.
 -->
-The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are no voluntary disruptions at all. However, your cluster administrator
-or hosting provider may run some additional services which can cause voluntary disruptions. For example, node software
+The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are no automated voluntary disruptions (only user-triggered ones).
+However, your cluster administrator or hosting provider may run some additional services which can cause voluntary disruptions. For example, node software
 updates can cause voluntary disruptions. Also, some implementations of cluster (node)
 autoscaling may cause voluntary disruptions to defragment and compact nodes. The cluster
 administrator or hosting provider should have documented what level of voluntary disruptions, if any, to expect.
+Certain configuration options, such as
+[using PriorityClasses](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/)
+in your pod spec, can also cause voluntary (and involuntary) disruptions.

 <!--
 Kubernetes offers features to help run highly available applications at the same
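The PriorityClass link added above refers to preemption: scheduling a high-priority Pod can evict lower-priority ones. A minimal sketch of that configuration, with hypothetical names and values (not part of this change):

```yaml
# Hypothetical illustration: scheduling a Pod at this priority may evict
# (voluntarily disrupt) lower-priority Pods to make room.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # hypothetical name
value: 1000000                   # higher value = higher scheduling priority
preemptionPolicy: PreemptLowerPriority
description: "Pods at this priority can preempt lower-priority Pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app            # hypothetical name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx:1.21
```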
@@ -267,7 +272,7 @@ during application updates is configured in spec for the specific workload resource.
 <!--
 When a pod is evicted using the eviction API, it is gracefully
 [terminated](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination),
-hornoring the
+honoring the
 `terminationGracePeriodSeconds` setting in its [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
 -->
 When a Pod is evicted using the eviction API, it is gracefully
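An eviction request is itself an API object posted to the pod's `eviction` subresource. A minimal sketch, assuming a hypothetical pod `my-app-pod` and the `policy/v1` API of newer clusters (older releases use `policy/v1beta1`):

```yaml
# POSTing this to /api/v1/namespaces/default/pods/my-app-pod/eviction asks the
# API server to evict the pod; termination then honors the pod's
# terminationGracePeriodSeconds unless deleteOptions overrides it.
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-app-pod          # hypothetical: name of the pod to evict
  namespace: default
deleteOptions:
  gracePeriodSeconds: 30    # optional: override the grace period for this eviction
```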
@@ -504,4 +509,3 @@ the nodes in your cluster, such as a node or system software upgrade, here are some
 * Learn more about [draining nodes](/zh/docs/tasks/administer-cluster/safely-drain-node/).
 * Learn about [updating a Deployment](/zh/docs/concepts/workloads/controllers/deployment/#updating-a-deployment),
   including how to maintain application availability during the rollout.
-
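Draining a node safely, per the first link above, goes through the eviction API and therefore respects PodDisruptionBudgets. A minimal sketch with a hypothetical name and selector (`policy/v1` on current clusters, `policy/v1beta1` on older ones):

```yaml
# Evictions (e.g. those issued by `kubectl drain`) are refused while they
# would leave fewer than minAvailable matching Pods running.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb             # hypothetical name
spec:
  minAvailable: 2             # keep at least 2 Pods up during voluntary disruptions
  selector:
    matchLabels:
      app: myapp              # hypothetical label
```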
@@ -54,7 +54,7 @@ Init containers are exactly like regular containers, except for the following two points:
 * Each one must complete successfully before the next one starts.

 <!--
 If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
 However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
 -->
 If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
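A minimal sketch of the ordering and failure semantics described above; the names, images, and commands are hypothetical, not from this change:

```yaml
# Init containers run one at a time, in order; each must exit 0 before the
# next starts. With restartPolicy: Never, a failing init container fails
# the whole Pod instead of being retried.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod            # hypothetical name
spec:
  restartPolicy: Never       # a failed init container marks the Pod as Failed
  initContainers:
  - name: wait-for-db        # must succeed before run-migrations starts
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do sleep 2; done']
  - name: run-migrations     # runs only after wait-for-db exits 0
    image: busybox:1.28
    command: ['sh', '-c', 'echo migrating && sleep 1']
  containers:
  - name: app
    image: nginx:1.21
```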
@@ -391,10 +391,10 @@ myapp-pod 1/1 Running 0 9m
|
 <!--
 This simple example should provide some inspiration for you to create your own
-init containers. [What's next](#whats-next) contains a link to a more detailed example.
+init containers. [What's next](#what-s-next) contains a link to a more detailed example.
 -->
 This simple example should provide some inspiration for you to create your own init containers.
-The [What's next](#whats-next) section links to a more detailed example.
+The [What's next](#what-s-next) section links to a more detailed example.

 <!--
 ## Detailed behavior
@@ -546,4 +546,3 @@ the Pod is not restarted. This behavior applies to Kubernetes v1.20 and later.
 -->
 * Read about [creating a Pod that has an init container](/zh/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
 * Learn how to [debug init containers](/zh/docs/tasks/debug-application-cluster/debug-init-containers/)
-
@@ -27,7 +27,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
|
 <!--
 {{< note >}}
-In versions of Kubernetes before v1.19, you must enable the `EvenPodsSpread`
+In versions of Kubernetes before v1.18, you must enable the `EvenPodsSpread`
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on
 the [API server](/docs/concepts/overview/components/#kube-apiserver) and the
 [scheduler](/docs/reference/generated/kube-scheduler/) in order to use Pod
|
@@ -36,7 +36,8 @@ topology spread constraints.
 -->

 {{< note >}}
-In Kubernetes versions before v1.19, to use Pod topology spread constraints, you must enable the feature on the [API server](/zh/docs/concepts/overview/components/#kube-apiserver)
+In Kubernetes versions before v1.18, to use Pod topology spread constraints, you must enable
+the `EvenPodsSpread` [feature gate](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+on the [API server](/zh/docs/concepts/overview/components/#kube-apiserver)
+and the [scheduler](/zh/docs/reference/command-line-tools-reference/kube-scheduler/).
 {{< /note >}}
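On clusters older than v1.18, the gate is a command-line flag on both components. A hedged sketch, assuming a kubeadm-style setup where the API server runs as a static Pod; the file path, image tag, and surrounding fields are illustrative, not from this change:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
# Adding --feature-gates here enables EvenPodsSpread on the API server;
# the same flag must also be passed to kube-scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.17.0   # hypothetical pre-v1.18 version
    command:
    - kube-apiserver
    - --feature-gates=EvenPodsSpread=true      # enable the gate
    # ...other flags unchanged...
```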
@@ -218,7 +219,7 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones,
 then it stays pending.

 <!--
 If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1],
 hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
 -->
 If the scheduler placed the new Pod into "zoneA", the Pods distribution would become [3, 1], so the actual skew
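The skew arithmetic above (3 - 1 = 2, which exceeds `maxSkew: 1`) follows from a constraint like this sketch; the pod name and `app: foo` label are hypothetical, and nodes are assumed to carry a `zone` label with values `zoneA`/`zoneB` as in the surrounding doc:

```yaml
# With maxSkew: 1 across zones, the new Pod may only land where it keeps the
# per-zone counts of matching Pods within 1 of each other (here, "zoneB").
apiVersion: v1
kind: Pod
metadata:
  name: mypod                  # hypothetical name
  labels:
    app: foo                   # hypothetical label, must match labelSelector below
spec:
  topologySpreadConstraints:
  - maxSkew: 1                       # max allowed difference between zone counts
    topologyKey: zone                # assumes nodes labeled zone=zoneA / zone=zoneB
    whenUnsatisfiable: DoNotSchedule # leave the Pod pending rather than violate the skew
    labelSelector:
      matchLabels:
        app: foo
  containers:
  - name: app
    image: nginx:1.21
```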
@@ -645,4 +646,3 @@ See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig
 -->
 - [Blog: Introducing PodTopologySpread](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/)
   explains `maxSkew` in detail and gives some advanced usage examples.
-