[zh] Sync changes under `content/zh/docs/reference` folder for issue kubernetes/website#25247. (#25308)

* sync labels-annotations-taints.md

* sync 'scheduling/config.md'

* sync issues-security/security.md

* sync _index.md

* sync reference/tools.md

* update

* update

* missing words

* update

* update to fix some lines

* update to fix some lines

* update to delete whitespace

* line is too long
pull/25329/head
Xu Yuandong 2020-12-01 17:02:50 +08:00 committed by GitHub
parent 123744b1bd
commit 4f9b919a88
5 changed files with 119 additions and 125 deletions

View File: _index.md

@@ -29,13 +29,13 @@ This section of the Kubernetes documentation contains references.
<!--
## API Reference
* [Kubernetes API Overview](/docs/reference/using-api/api-overview/) - Overview of the API for Kubernetes.
* [Kubernetes API Reference {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
* [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes.
-->
## API 参考
* [Kubernetes API 概述](/zh/docs/reference/using-api/api-overview/) - Kubernetes API 概述
* [Kubernetes API 参考 {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
* [使用 Kubernetes API](/zh/docs/reference/using-api/) - Kubernetes 的 API 概述
<!--
## API Client Libraries

View File: issues-security/security.md

@@ -31,14 +31,14 @@ This page describes Kubernetes security and disclosure information.
## 安全公告
<!--
Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group for emails about security and major API announcements.
Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) group for emails about security and major API announcements.
-->
加入 [kubernets-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) 组,以获取关于安全性和主要 API 公告的电子邮件。
加入 [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) 组,以获取关于安全性和主要 API 公告的电子邮件。
<!--
You can also subscribe to an RSS feed of the above using [this link](https://groups.google.com/forum/feed/kubernetes-announce/msgs/rss_v2_0.xml?num=50).
You can also subscribe to an RSS feed of the above using [this link](https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50).
-->
也可以使用[此链接](https://groups.google.com/forum/feed/kubernetes-announce/msgs/rss_v2_0.xml?num=50)订阅上述的 RSS 反馈。
也可以使用[此链接](https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50) 订阅上述的 RSS 反馈。
<!--
## Report a Vulnerability
@@ -54,17 +54,18 @@ We're extremely grateful for security researchers and users that report vulner
<!--
To make a report, please email the private [security@kubernetes.io](mailto:security@kubernetes.io) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md).
-->
如需报告,请连同安全细节以及预期的[所有 Kubernetes bug 报告](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md)详细信息电邮到[security@kubernetes.io](mailto:security@kubernetes.io) 列表。
如需报告,请连同安全细节以及预期的[所有 Kubernetes bug 报告](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md)
详细信息通过电子邮件发送到 [security@kubernetes.io](mailto:security@kubernetes.io) 列表。
<!--
You can also email the private [security@kubernetes.io](mailto:security@kubernetes.io) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md).
-->
还可以向私有列表 [security@kubernetes.io](mailto:security@kubernetes.io) 发送电子邮件,邮件中应该包含[所有 Kubernetes 错误报告](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md)所需的详细信息。
<!--
You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure.
You may encrypt your email to this list using the GPG keys of the [Product Security Committee members](https://git.k8s.io/security/README.md#product-security-committee-psc). Encryption using GPG is NOT required to make a disclosure.
-->
您可以使用[产品安全团队成员](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#product-security-team-pst)的 GPG 密钥加密您的电子邮件到此列表。
使用 GPG 加密不需要公开。
你可以使用[产品安全委员会成员](https://git.k8s.io/security/README.md#product-security-committee-psc)
的 GPG 密钥来加密发送到此列表的电子邮件。进行披露时并不要求使用 GPG 加密。
<!--
### When Should I Report a Vulnerability?
@@ -77,10 +78,10 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur
- You think you discovered a vulnerability in another project that Kubernetes depends on
- For projects with their own vulnerability reporting and disclosure process, please report it directly there
-->
- 您认为在 Kubernetes 中发现了一个潜在的安全漏洞
- 不确定漏洞如何影响 Kubernetes
- 您认为您在 Kubernetes 依赖的另一个项目中发现了一个漏洞
- 对于具有漏洞报告和披露流程的项目,请直接在该项目处报告
- 你认为在 Kubernetes 中发现了一个潜在的安全漏洞
- 不确定漏洞如何影响 Kubernetes
- 你认为你在 Kubernetes 依赖的另一个项目中发现了一个漏洞
- 对于具有漏洞报告和披露流程的项目,请直接在该项目处报告
<!--
### When Should I NOT Report a Vulnerability?
@@ -92,9 +93,9 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur
- You need help applying security related updates
- Your issue is not security related
-->
- 需要帮助调整 Kubernetes 组件的安全性
- 需要帮助应用与安全相关的更新
- 您的问题与安全无关
- 需要帮助调整 Kubernetes 组件的安全性
- 需要帮助应用与安全相关的更新
- 你的问题与安全无关
<!--
## Security Vulnerability Response

View File: labels-annotations-taints.md

@@ -14,7 +14,7 @@ This document serves both as a reference to the values and as a coordination poi
Kubernetes 保留了 kubernetes.io 命名空间下的所有标签和注解。
本文既作为这些标签和注解的参考,也就这些标签和注解的赋值进行了说明
本文档提供这些标签、注解和污点的参考,也可用来协调对这类标签、注解和污点的设置
@@ -70,7 +70,7 @@ This label has been deprecated. Please use `kubernetes.io/arch` instead.
<!--
This label has been deprecated. Please use `kubernetes.io/os` instead.
-->
该标签已被弃用。请使用 `kubernetes.io/arch`。
该标签已被弃用。请使用 `kubernetes.io/os`。
## kubernetes.io/hostname
@@ -89,6 +89,11 @@ The Kubelet populates this label with the hostname. Note that the hostname can b
Kubelet 用 hostname 值来填充该标签。注意:可以通过向 `kubelet` 传入 `--hostname-override`
参数对 “真正的” hostname 进行修改。
<!--
This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.
-->
此标签还用作拓扑层次结构的一部分。有关更多信息,请参见 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
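作为示意,可以在 Pod 中通过 `nodeSelector` 引用该标签,将 Pod 固定到特定主机名的节点上(以下清单为假设的示例,Pod 名称与主机名均非真实值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostname-demo                # 假设的 Pod 名称
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # 假设的节点主机名
  containers:
  - name: app
    image: nginx                     # 示例镜像
```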
<!--
## beta.kubernetes.io/instance-type (deprecated)
-->
@@ -98,7 +103,7 @@ Kubelet 用 hostname 值来填充该标签。注意:可以通过向 `kubelet`
<!--
Starting in v1.17, this label is deprecated in favor of [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type).
-->
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用[node.kubernetes.io/instance-type](#nodekubernetesioinstance-type)。
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type)。
{{< /note >}}
## node.kubernetes.io/instance-type {#nodekubernetesioinstance-type}
@@ -118,92 +123,47 @@ to rely on the Kubernetes scheduler to perform resource-based scheduling. You sh
用于:Node
Kubelet 用 `cloudprovider` 中定义的实例类型来填充该标签。未使用 `cloudprovider` 时不会设置该标签。该标签在想要将某些负载定向到特定实例类型的节点上时会很有用,但通常用户更希望依赖 Kubernetes 调度器来执行基于资源的调度,所以用户应该致力于基于属性而不是实例类型来进行调度(例如:需要一个 GPU,而不是 `g2.2xlarge`)。
Kubelet 用 `cloudprovider` 中定义的实例类型来填充该标签。未使用 `cloudprovider` 时不会设置该标签。
该标签在想要将某些负载定向到特定实例类型的节点上时会很有用,但通常用户更希望依赖 Kubernetes 调度器来执行基于资源的调度,
所以用户应该致力于基于属性而不是实例类型来进行调度(例如:需要一个 GPU,而不是 `g2.2xlarge`)。
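如果确实需要按实例类型调度(如上文所述,通常更推荐基于属性来调度),其大致写法如下(仅为示意,实例类型取自上文的 `g2.2xlarge` 示例):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: instance-type-demo                       # 假设的 Pod 名称
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: g2.2xlarge # 上文示例中的实例类型
  containers:
  - name: app
    image: nginx                                 # 示例镜像
```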
## failure-domain.beta.kubernetes.io/region (已弃用) {#failure-domainbetakubernetesioregion}
<!--
See [failure-domain.beta.kubernetes.io/zone](#failure-domainbetakubernetesiozone).
See [topology.kubernetes.io/region](#topologykubernetesioregion).
-->
参考 [failure-domain.beta.kubernetes.io/zone](#failure-domainbetakubernetesiozone)。
参考 [topology.kubernetes.io/region](#topologykubernetesioregion)。
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/region](#topologykubernetesioregion).
-->
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用[topology.kubernetes.io/region](#topologykubernetesioregion)。
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用 [topology.kubernetes.io/region](#topologykubernetesioregion)。
{{< /note >}}
## failure-domain.beta.kubernetes.io/zone (已弃用) {#failure-domainbetakubernetesiozone}
<!--
Example:
`failure-domain.beta.kubernetes.io/region=us-east-1`
`failure-domain.beta.kubernetes.io/zone=us-east-1c`
Used on: Node, PersistentVolume
On the Node: The `kubelet` populates this with the zone information as defined by the `cloudprovider`.
This will be set only if you are using a `cloudprovider`. However, you should consider setting this
on the nodes if it makes sense in your topology.
See [topology.kubernetes.io/zone](#topologykubernetesiozone).
-->
示例:
`failure-domain.beta.kubernetes.io/region=us-east-1`
`failure-domain.beta.kubernetes.io/zone=us-east-1c`
用于:Node、PersistentVolume
对于 Node:Kubelet 用 `cloudprovider` 中定义的区域(zone)信息来填充该标签。未使用 `cloudprovider` 时不会设置该标签,但如果该标签在你的拓扑中有意义的话,应该考虑设置。
<!--
On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes, on GCE and AWS.
-->
用于 PersistentVolume:在 GCE 和 AWS 中,`PersistentVolumeLabel` 准入控制器会自动添加区域标签。
<!--
Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
-->
在单区的集群中,Kubernetes 会自动将同一副本控制器或服务下的 pod 分散到不同的节点上 (以降低故障的影响)。在多区的集群中,这种分散的行为扩展到跨区的层面 (以降低区域故障的影响)。跨区分散通过 _SelectorSpreadPriority_ 来实现。
<!--
_SelectorSpreadPriority_ is a best effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-->
_SelectorSpreadPriority_ 是一种尽力而为(best-effort)的处理方式,如果集群中的区域是异构的 (例如:不同区域之间的节点数量、节点类型或 pod 资源需求不同),可能使得 pod 在各区域间无法均匀分布。如有需要,用户可以使用同质的区域(节点数量和类型相同) 来减小 pod 分布不均的可能性。
<!--
The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-->
由于卷不能跨区域挂载(attach),调度器 (通过 _VolumeZonePredicate_ 预选) 也会保证需要特定卷的 pod
被调度到卷所在的区域中。
<!--
The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined.
The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three rack installation will choose a very different setup to a multi-datacenter configuration.
-->
区域和地域(region)的实际值无关紧要,两者的层次含义也没有严格的定义。最终期望是除非整个地域故障,
否则某一区域节点的故障不应该影响到其他区域的节点。例如,通常区域间应该避免共用同一个网络交换机。
具体的规划取决于特定的基础设备 - three-rack 设备所选择的设置与多数据中心截然不同。
<!--
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-->
如果 `PersistentVolumeLabel` 准入控制器不支持自动为 PersistentVolume 打标签,且用户希望防止 pod
跨区域进行卷的挂载,应考虑手动打标签 (或对 `PersistentVolumeLabel` 增加支持)。如果用户的基础设施没有这种约束,则不需要为卷添加区域标签。
参考 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/zone](#topologykubernetesiozone).
-->
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用[topology.kubernetes.io/zone](#topologykubernetesiozone)。
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
{{< /note >}}
## topology.kubernetes.io/region {#topologykubernetesioregion}
<!--
Example:
`topology.kubernetes.io/region=us-east-1`
-->
示例:
`topology.kubernetes.io/region=us-east-1`
<!--
See [topology.kubernetes.io/zone](#topologykubernetesiozone).
@@ -215,59 +175,89 @@ See [topology.kubernetes.io/zone](#topologykubernetesiozone).
<!--
Example:
`topology.kubernetes.io/region=us-east-1`
`topology.kubernetes.io/zone=us-east-1c`
Used on: Node, PersistentVolume
On the Node: The `kubelet` populates this with the zone information as defined by the `cloudprovider`.
This will be set only if you are using a `cloudprovider`. However, you should consider setting this
on the nodes if it makes sense in your topology.
On Node: The `kubelet` or the external `cloud-controller-manager` populates this with the information as provided by the `cloudprovider`. This will be set only if you are using a `cloudprovider`. However, you should consider setting this on nodes if it makes sense in your topology.
-->
示例:
`topology.kubernetes.io/region=us-east-1`
`topology.kubernetes.io/zone=us-east-1c`
用于:Node、PersistentVolume
对于 Node:Kubelet 用 `cloudprovider` 中定义的区域(zone)信息来填充该标签。未使用 `cloudprovider` 时不会设置该标签,但如果该标签在你的拓扑中有意义的话,应该考虑设置。
对于 Node:`Kubelet` 或外部 `cloud-controller-manager` 用 `cloudprovider` 中定义的区域信息来填充该标签。
未使用 `cloudprovider` 时不会设置该标签,但如果该标签在你的拓扑中有意义的话,应该考虑设置。
<!--
On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes, on GCE and AWS.
On PersistentVolume: topology-aware volume provisioners will automatically set node affinity constraints on `PersistentVolumes`.
-->
用于 PersistentVolume:在 GCE 和 AWS 中,`PersistentVolumeLabel` 准入控制器会自动添加区域标签。
对于 PersistentVolume:可感知拓扑的卷制备程序将自动在 `PersistentVolumes` 上设置节点亲和性约束。
<!--
Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not.
-->
在单区的集群中,Kubernetes 会自动将同一副本控制器或服务下的 pod 分散到不同的节点上 (以降低故障的影响)。在多区的集群中,这种分散的行为扩展到跨区的层面 (以降低区域故障的影响)。跨区分散通过 _SelectorSpreadPriority_ 来实现。
区域代表逻辑故障域。Kubernetes 集群通常跨越多个区域以提高可用性。
虽然区域的确切定义留给基础架构实现,但是区域的常见属性包括:区域内的网络延迟非常低、区域内的网络流量不产生费用,以及与其他区域的故障独立性。
例如,一个区域内的节点可能共享一个网络交换机,但不同区域内的节点则不应共享。
<!--
_SelectorSpreadPriority_ is a best effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
-->
地域代表一个更大的域,由一个或多个区域组成。
Kubernetes 集群跨越多个地域是不常见的,而地域或区域的确切定义则留给基础设施实现,
地域的常见属性包括:地域之间的网络延迟比地域内部更高、地域之间的网络流量存在成本,以及与其他区域或地域的故障独立性。
例如,一个地域内的节点可能共享电力基础设施(例如 UPS 或发电机),但不同地域的节点通常不会共享。
<!--
Kubernetes makes a few assumptions about the structure of zones and regions:
1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
2) zone names are unique across regions; for example region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b"
-->
Kubernetes 对地域和区域的结构做了一些假设:
1) 地域和区域是分层的:区域是地域的严格子集,任何区域都不能位于两个地域中。
2) 区域名称在地域之间是唯一的;例如,地域 “africa-east-1” 可能包含区域 “africa-east-1a” 和 “africa-east-1b”。
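按照上述假设,一个位于地域 `us-east-1`、区域 `us-east-1c` 的节点大致会带有如下标签(节点名为假设值,标签值取自上文示例):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1                              # 假设的节点名
  labels:
    topology.kubernetes.io/region: us-east-1
    topology.kubernetes.io/zone: us-east-1c
```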
<!--
It should be safe to assume that topology labels do not change.
Even though labels are strictly mutable,
consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated.
-->
标签的使用者可以安全地假设拓扑标签不变。
即使标签是严格可变的,标签的使用者也可以认为节点只能通过被销毁并重建才能从一个区域迁移到另一个区域。
<!--
Kubernetes can use this information in various ways.
For example,
the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single-zone cluster (to reduce the impact of node failures,
see [kubernetes.io/hostname](#kubernetesiohostname)). With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures).
This is achieved via _SelectorSpreadPriority_.
-->
Kubernetes 可以以各种方式使用这些信息。
例如,调度器自动尝试将 ReplicaSet 中的多个 Pod 分布到单区域集群中的多个节点上(为了减少节点故障的影响,
请参阅 [kubernetes.io/hostname](#kubernetesiohostname))。
对于多区域集群,这种分布行为也被应用到区域上(以减少区域故障的影响)。
这是通过 _SelectorSpreadPriority_ 实现的。
<!--
_SelectorSpreadPriority_ is a best effort placement.
If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements),
this placement might prevent equal spreading of your Pods across zones.
If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-->
_SelectorSpreadPriority_ 是一种尽力而为(best-effort)的处理方式,如果集群中的区域是异构的
(例如:不同区域之间的节点数量、节点类型或 pod 资源需求不同),可能使得 pod 在各区域间无法均匀分布。
如有需要,用户可以使用同质的区域(节点数量和类型相同) 来减小 pod 分布不均的可能性。
(例如:不同区域之间的节点数量、节点类型或 Pod 资源需求不同),可能使得 Pod 在各区域间无法均匀分布。
如有需要,用户可以使用同质的区域(节点数量和类型相同) 来减小 Pod 分布不均的可能性。
<!--
The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-->
由于卷不能跨区域挂载(attach),调度器 (通过 _VolumeZonePredicate_ 预选) 也会保证需要特定卷的 pod 被调度到卷所在的区域中。
<!--
The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined.
The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three rack installation will choose a very different setup to a multi-datacenter configuration.
-->
区域和地域(region)的实际值无关紧要,两者的层次含义也没有严格的定义。最终期望是除非整个地域故障,
否则某一区域节点的故障不应该影响到其他区域的节点。例如,通常区域间应该避免共用同一个网络交换机。
具体的规划取决于特定的基础设备 - 三机架安装所选择的设置与多数据中心截然不同。
由于卷不能跨区域挂载(Attach),调度器(通过 _VolumeZonePredicate_ 预选)也会保证需要特定卷的 Pod 被调度到卷所在的区域中。
<!--
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`,
the scheduler prevents Pods from mounting volumes in a different zone.
If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-->
如果 `PersistentVolumeLabel` 准入控制器不支持自动为 PersistentVolume 打标签,且用户希望防止 pod 跨区域进行卷的挂载,
应考虑手动打标签 (或对 `PersistentVolumeLabel` 增加支持)。如果用户的基础设施没有这种约束,则不需要为卷添加区域标签。
应考虑手动打标签(或增加对 `PersistentVolumeLabel` 的支持)。如果用户的基础设施没有这种约束,则不需要为卷添加区域标签。
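下面是一个假设的 PersistentVolume 片段,用来示意这类区域性节点亲和性约束的形式(卷名、卷 ID 与区域值均为虚构,仅作示意):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                         # 假设的名称
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0     # 假设的 EBS 卷 ID
    fsType: ext4
  nodeAffinity:                         # 将卷限制在其所在区域
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-1c
```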

View File: scheduling/config.md

@@ -19,19 +19,22 @@ file and passing its path as a command line argument.
<!-- overview -->
<!-- body -->
<!--
A scheduling Profile allows you to configure the different stages of scheduling
in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
Each stage is exposed in an extension point. Plugins provide scheduling behaviors
by implementing one or more of these extension points.
-->
调度模板Profile允许你配置 {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
中的不同调度阶段。每个阶段都暴露于某个扩展点中。插件通过实现一个或多个扩展点来提供调度行为。
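下面是一个按 `v1beta1` API 编写的配置示意,演示如何在某个扩展点上启用或禁用插件(插件名称以 v1beta1 中的打分插件为例,仅作示意):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:                                  # score 扩展点
      disabled:
      - name: NodeResourcesLeastAllocated   # 示例:禁用一个默认插件
      enabled:
      - name: NodeResourcesMostAllocated    # 示例:启用一个非默认插件
```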
<!--
You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
using the component config APIs
([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha2?tab=doc#KubeSchedulerConfiguration)).
The `v1alpha2` API allows you to configure kube-scheduler to run
[multiple profiles](#multiple-profiles).
([`v1beta1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.19.0/config/v1beta1?tab=doc#KubeSchedulerConfiguration)).
-->
你可以通过运行 `kube-scheduler --config <filename>` 来设置调度配置,
配置文件使用组件配置的 API ([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
或 [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha2?tab=doc#KubeSchedulerConfiguration))。
`v1alpha2` 可以配置 kube-scheduler 运行[多个配置文件](#multiple-profiles)。
你可以通过运行 `kube-scheduler --config <filename>` 来设置调度模板,
配置文件使用组件配置的 API([`v1beta1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.19.0/config/v1beta1?tab=doc#KubeSchedulerConfiguration))。
<!-- A minimal configuration looks as follows: -->
最简单的配置如下:
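按 `v1beta1` API,一个最简单的配置大致如下(kubeconfig 路径为假设值):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig   # 假设的路径
```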

View File: tools.md

@@ -41,11 +41,11 @@ Kubernetes 包含一些内置工具,可以帮助用户更好的使用 Kubernet
## Minikube
<!--
[`minikube`](/docs/tasks/tools/install-minikube/) is a tool that makes it
[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that makes it
easy to run a single-node Kubernetes cluster locally on your workstation for
development and testing purposes.
-->
[`minikube`](/docs/tasks/tools/install-minikube/) 是一个可以方便用户在其工作站点本地部署一个单节点 Kubernetes 集群的工具,用于开发和测试。
[`minikube`](https://minikube.sigs.k8s.io/docs/) 是一个可以方便用户在其工作站点本地部署一个单节点 Kubernetes 集群的工具,用于开发和测试。
## Dashboard
@@ -84,9 +84,9 @@ Use Helm to:
## Kompose
<!--
[`Kompose`](https://github.com/kubernetes-incubator/kompose) is a tool to help Docker Compose users move to Kubernetes.
[`Kompose`](https://github.com/kubernetes/kompose) is a tool to help Docker Compose users move to Kubernetes.
-->
[`Kompose`](https://github.com/kubernetes-incubator/kompose) 是一个转换工具,用来帮助 Docker Compose 用户迁移至 Kubernetes。
[`Kompose`](https://github.com/kubernetes/kompose) 是一个转换工具,用来帮助 Docker Compose 用户迁移至 Kubernetes。
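例如,给定如下假设的 `docker-compose.yaml`(服务名、镜像与端口均为示例值),运行 `kompose convert` 即可生成对应的 Kubernetes 资源清单:

```yaml
# docker-compose.yaml(假设的最小示例)
version: "3"
services:
  web:
    image: nginx
    ports:
    - "8080:80"
```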
<!--
Use Kompose to: