Merge pull request #42475 from my-git9/controllers2

[zh-cn] sync workloads/controllers/* labels-annotations-taints/_index.md
Kubernetes Prow Robot 2023-08-10 01:51:27 -07:00 committed by GitHub
commit 21a1fbe341
6 changed files with 31 additions and 70 deletions


@@ -77,7 +77,7 @@ This example CronJob manifest prints the current time and a hello message every
下面的 CronJob 示例清单会在每分钟打印出当前时间和问候消息:
{{< codenew file="application/job/cronjob.yaml" >}}
{{% code file="application/job/cronjob.yaml" %}}
<!--
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)


@@ -73,7 +73,7 @@ describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
你可以在 YAML 文件中描述 DaemonSet。
例如,下面的 daemonset.yaml 文件描述了一个运行 fluentd-elasticsearch Docker 镜像的 DaemonSet
{{< codenew file="controllers/daemonset.yaml" >}}
{{% code file="controllers/daemonset.yaml" %}}
<!--
Create a DaemonSet based on the YAML file:


@@ -73,7 +73,7 @@ It takes around 10s to complete.
下面是一个 Job 配置示例。它负责计算 π 到小数点后 2000 位,并将结果打印出来。
此计算大约需要 10 秒钟完成。
{{< codenew file="controllers/job.yaml" >}}
{{% code file="controllers/job.yaml" %}}
<!--
You can run the example with this command:
@@ -692,7 +692,7 @@ Here is a manifest for a Job that defines a `podFailurePolicy`:
-->
下面是一个定义了 `podFailurePolicy` 的 Job 的清单:
{{< codenew file="/controllers/job-pod-failure-policy-example.yaml" >}}
{{% code file="/controllers/job-pod-failure-policy-example.yaml" %}}
<!--
In the example above, the first rule of the Pod failure policy specifies that
@@ -1443,52 +1443,17 @@ mismatch.
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
{{< note >}}
<!--
The control plane doesn't track Jobs using finalizers, if the Jobs were created
when the feature gate `JobTrackingWithFinalizers` was disabled, even after you
upgrade the control plane to 1.26.
-->
如果 Job 是在特性门控 `JobTrackingWithFinalizers` 被禁用时创建的,即使你将控制面升级到 1.26
控制面也不会使用 Finalizer 跟踪 Job。
{{< /note >}}
<!--
The control plane keeps track of the Pods that belong to any Job and notices if
any such Pod is removed from the API server. To do that, the Job controller
creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The
controller removes the finalizer only after the Pod has been accounted for in
the Job status, allowing the Pod to be removed by other controllers or users.
Jobs created before upgrading to Kubernetes 1.26 or before the feature gate
`JobTrackingWithFinalizers` is enabled are tracked without the use of Pod
finalizers.
The Job {{< glossary_tooltip term_id="controller" text="controller" >}} updates
the status counters for `succeeded` and `failed` Pods based only on the Pods
that exist in the cluster. The control plane can lose track of the progress of
the Job if Pods are deleted from the cluster.
-->
控制面会跟踪属于任何 Job 的 Pod并通知是否有任何这样的 Pod 被从 API 服务器中移除。
为了实现这一点Job 控制器创建的 Pod 带有 Finalizer `batch.kubernetes.io/job-tracking`
控制器只有在 Pod 被记入 Job 状态后才会移除 Finalizer允许 Pod 可以被其他控制器或用户移除。
在升级到 Kubernetes 1.26 之前或在启用特性门控 `JobTrackingWithFinalizers`
之前创建的 Job 被跟踪时不使用 Pod Finalizer。
Job {{< glossary_tooltip term_id="controller" text="控制器" >}}仅根据集群中存在的 Pod
更新 `succeeded``failed` Pod 的状态计数器。如果 Pod 被从集群中删除,控制面可能无法跟踪 Job 的进度。
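To make the mechanism above concrete, a Pod created by the Job controller carries the tracking finalizer in its metadata, roughly like this (abridged and hypothetical; the Pod name, label, and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pi-abc12                       # hypothetical name of a Job-created Pod
  labels:
    job-name: pi
  finalizers:
  # removed by the Job controller only after the Pod has been
  # accounted for in the Job status
  - batch.kubernetes.io/job-tracking
spec:
  restartPolicy: Never
  containers:
  - name: pi
    image: perl:5.34.0
    command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```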
<!--
You can determine if the control plane is tracking a Job using Pod finalizers by
checking if the Job has the annotation
`batch.kubernetes.io/job-tracking`. You should **not** manually add or remove
this annotation from Jobs. Instead, you can recreate the Jobs to ensure they
are tracked using Pod finalizers.
-->
你可以根据检查 Job 是否含有 `batch.kubernetes.io/job-tracking` 注解,
来确定控制面是否正在使用 Pod Finalizer 追踪 Job。
你**不**应该给 Job 手动添加或删除该注解。
取而代之的是你可以重新创建 Job 以确保使用 Pod Finalizer 跟踪这些 Job。
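To make that check concrete, this is roughly where the annotation appears when you read a tracked Job back from the API (abridged and illustrative; the Job name and spec are placeholders, and the annotation is set by the control plane, never by hand):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                                   # placeholder Job name
  annotations:
    # presence indicates the Job is tracked using Pod finalizers
    batch.kubernetes.io/job-tracking: ""
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```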
<!--
### Elastic Indexed Jobs
-->


@@ -113,7 +113,7 @@ Deployment并在 spec 部分定义你的应用。
-->
## 示例 {#example}
{{< codenew file="controllers/frontend.yaml" >}}
{{% code file="controllers/frontend.yaml" %}}
<!--
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will
@@ -263,7 +263,7 @@ Pod它还可以像前面小节中所描述的那样获得其他 Pod。
以前面的 frontend ReplicaSet 为例,并在以下清单中指定这些 Pod
{{< codenew file="pods/pod-rs.yaml" >}}
{{% code file="pods/pod-rs.yaml" %}}
<!--
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
@@ -668,7 +668,7 @@ ReplicaSet 也可以作为[水平的 Pod 扩缩器 (HPA)](/zh-cn/docs/tasks/run-
的目标。也就是说ReplicaSet 可以被 HPA 自动扩缩。
以下是 HPA 以我们在前一个示例中创建的副本集为目标的示例。
{{< codenew file="controllers/hpa-rs.yaml" >}}
{{% code file="controllers/hpa-rs.yaml" %}}
<!--
Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should


@@ -2,6 +2,9 @@
title: ReplicationController
content_type: concept
weight: 90
description: >-
用于管理可水平扩展的工作负载的旧版 API。
被 Deployment 和 ReplicaSet API 取代。
---
<!--
@@ -11,6 +14,9 @@ reviewers:
title: ReplicationController
content_type: concept
weight: 90
description: >-
Legacy API for managing workloads that can scale horizontally.
Superseded by the Deployment and ReplicaSet APIs.
-->
<!-- overview -->
@@ -34,7 +40,7 @@ always up and available.
<!-- body -->
<!--
## How a ReplicationController Works
## How a ReplicationController works
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
@@ -75,7 +81,7 @@ This example ReplicationController config runs three copies of the nginx web ser
这个示例 ReplicationController 配置运行 nginx Web 服务器的三个副本。
{{< codenew file="controllers/replication.yaml" >}}
{{% code file="controllers/replication.yaml" %}}
<!--
Run the example job by downloading the example file and then running this command:
@@ -608,4 +614,3 @@ ReplicationController。
- 了解 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)ReplicationController 的替代品。
- `ReplicationController` 是 Kubernetes REST API 的一部分,阅读 {{< api-reference page="workload-resources/replication-controller-v1" >}}
对象定义以了解 replication controllers 的 API。


@@ -83,7 +83,7 @@ Starting from v1.9, this label is deprecated.
<!--
### app.kubernetes.io/instance
Type: Label
Type: Label
Example: `app.kubernetes.io/instance: "mysql-abcxzy"`
@@ -765,7 +765,7 @@ This label can have one of three values: `Reconcile`, `EnsureExists`, or `Ignore
- `Ignore`: Addon resources will be ignored. This mode is useful for add-ons that are not
compatible with the add-on manager or that are managed by another controller.
For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)
For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md).
-->
### addonmanager.kubernetes.io/mode
@@ -785,7 +785,7 @@ For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/b
- `Ignore`:插件资源将被忽略。此模式对于与外接插件管理器不兼容或由其他控制器管理的插件程序非常有用。
有关详细信息,请参见
[Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)
[Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)
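As an illustration of how the label is used, an add-on resource managed by the addon manager carries it in its metadata, for example (a hypothetical ConfigMap; the name and data are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-config            # hypothetical add-on resource
  namespace: kube-system
  labels:
    # Reconcile: the addon manager keeps this object in sync with the
    # source manifest; EnsureExists and Ignore are the other two modes
    addonmanager.kubernetes.io/mode: Reconcile
data:
  setting: "value"
```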
<!--
### beta.kubernetes.io/arch (deprecated)
@@ -1469,7 +1469,7 @@ Kubernetes makes a few assumptions about the structure of zones and regions:
1. regions and zones are hierarchical: zones are strict subsets of regions and
no zone can be in 2 regions
2) zone names are unique across regions; for example region "africa-east-1" might be comprised
2. zone names are unique across regions; for example region "africa-east-1" might be comprised
of zones "africa-east-1a" and "africa-east-1b"
-->
Kubernetes 对 Zone 和 Region 的结构做了一些假设:
@@ -1540,7 +1540,7 @@ Example: `volume.beta.kubernetes.io/storage-provisioner: "k8s.io/minikube-hostpa
Used on: PersistentVolumeClaim
This annotation has been deprecated since v1.23.
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner)
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner).
-->
### volume.beta.kubernetes.io/storage-provisioner (已弃用) {#volume-beta-kubernetes-io-storage-provisioner}
@@ -1581,7 +1581,7 @@ This annotation has been deprecated. Instead, set the
[`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class)
for the PersistentVolumeClaim or PersistentVolume.
-->
此注解可以为 PersistentVolume (PV) 或 PersistentVolumeClaim (PVC) 指定
此注解可以为 PersistentVolumePV或 PersistentVolumeClaimPVC指定
[StorageClass](/zh-cn/docs/concepts/storage/storage-classes/)。
`storageClassName` 属性和 `volume.beta.kubernetes.io/storage-class` 注解均被指定时,
注解 `volume.beta.kubernetes.io/storage-class` 将优先于 `storageClassName` 属性。
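As an illustration, the recommended way to request a class on a PersistentVolumeClaim is the `storageClassName` field; the deprecated annotation is shown commented out for comparison (a hypothetical claim; the class name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim                   # hypothetical PVC
  # deprecated form, kept here only for comparison:
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: standard
spec:
  storageClassName: standard            # preferred: set the class in the spec
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```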
@@ -1997,7 +1997,7 @@ resource without a class specified will be assigned this default class.
资源将被设置为此默认类。
<!--
### alpha.kubernetes.io/provided-node-ip
### alpha.kubernetes.io/provided-node-ip (alpha) {#alpha-kubernetes-io-provided-node-ip}
Type: Annotation
@@ -2012,7 +2012,7 @@ and legacy in-tree cloud providers), it sets this annotation on the Node to deno
set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid
by the cloud-controller-manager.
-->
### alpha.kubernetes.io/provided-node-ip {#alpha-kubernetes-io-provided-node-ip}
### alpha.kubernetes.io/provided-node-ip (alpha) {#alpha-kubernetes-io-provided-node-ip}
类别:注解
@@ -2094,8 +2094,7 @@ container.
<!--
This annotation is deprecated. You should use the
[`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container)
annotation instead.
Kubernetes versions 1.25 and newer ignore this annotation.
annotation instead. Kubernetes versions 1.25 and newer ignore this annotation.
-->
此注解已被弃用。取而代之的是使用
[`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container) 注解。
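As an illustration of the replacement annotation, a multi-container Pod can mark which container `kubectl logs` and `kubectl exec` should default to (a hypothetical Pod; names and images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                     # hypothetical multi-container Pod
  annotations:
    # kubectl defaults to this container when none is specified
    kubectl.kubernetes.io/default-container: app
spec:
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
```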
@@ -2143,11 +2142,8 @@ Example: `batch.kubernetes.io/job-tracking: ""`
Used on: Jobs
The presence of this annotation on a Job indicates that the control plane is
The presence of this annotation on a Job used to indicate that the control plane is
[tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
The control plane uses this annotation to safely transition to tracking Jobs
using finalizers, while the feature is in development.
You should **not** manually add or remove this annotation.
-->
### batch.kubernetes.io/job-tracking (已弃用) {#batch-kubernetes-io-job-tracking}
@@ -2158,18 +2154,13 @@ You should **not** manually add or remove this annotation.
用于Job
Job 上存在此注解表明控制平面正在[使用 Finalizer 追踪 Job](/zh-cn/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers)。
控制平面使用此注解来安全地转换为使用 Finalizer 追踪 Job而此特性正在开发中。
**不** 可以手动添加或删除此注解。
{{< note >}}
<!--
Starting from Kubernetes 1.26, this annotation is deprecated.
Kubernetes 1.27 and newer will ignore this annotation and always track Jobs
using finalizers.
Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later).
All Jobs are tracked with finalizers.
-->
从 Kubernetes 1.26 开始,该注解被弃用。
Kubernetes 1.27 及以上版本将忽略此注解,并始终使用 Finalizer 追踪 Job。
{{< /note >}}
添加或删除此注解不再有效Kubernetes v1.27 及更高版本),
所有 Job 均通过 Finalizer 进行追踪。
<!--
### job-name (deprecated) {#job-name}
@@ -2605,7 +2596,7 @@ Type: Label
Example: `feature.node.kubernetes.io/network-sriov.capable: "true"`
Used on: Node
Used on: Node
These labels are used by the Node Feature Discovery (NFD) component to advertise
features on a node. All built-in labels use the `feature.node.kubernetes.io` label
@@ -3975,7 +3966,7 @@ ignores that node while calculating Topology Aware Hints.
类别:标签
用于: 节点
用于:节点
用来指示该节点用于运行控制平面组件的标记标签。Kubeadm 工具将此标签应用于其管理的控制平面节点。
其他集群管理工具通常也会设置此污点。
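For illustration, this is roughly how the marker label appears on a control plane Node (abridged and hypothetical; the node name is a placeholder):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: cp-node-1                       # hypothetical control plane node
  labels:
    # marker label applied by kubeadm to the control plane nodes it manages
    node-role.kubernetes.io/control-plane: ""
```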