* sync labels-annotations-taints.md
* sync 'scheduling/config.md'
* sync issues-security/security.md
* sync _index.md
* sync reference/tools.md
* update
* update
* missing words
* update
* update to fix some lines
* update to fix some lines
* update to delete whitespace
* line is too long
@@ -31,14 +31,14 @@ This page describes Kubernetes security and disclosure information.
## 安全公告
<!--
-Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group for emails about security and major API announcements.
+Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) group for emails about security and major API announcements.
-->
-加入 [kubernets-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) 组,以获取关于安全性和主要 API 公告的电子邮件。
+加入 [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) 组,以获取关于安全性和主要 API 公告的电子邮件。
<!--
-You can also subscribe to an RSS feed of the above using [this link](https://groups.google.com/forum/feed/kubernetes-announce/msgs/rss_v2_0.xml?num=50).
+You can also subscribe to an RSS feed of the above using [this link](https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50).
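For example, you can check the feed from the command line; a quick sketch, using the feed URL linked above:

```shell
# Fetch the latest 50 entries from the kubernetes-security-announce RSS feed
curl -s "https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50"
```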
@@ -54,17 +54,18 @@ We’re extremely grateful for security researchers and users that report vulnerabilities
<!--
-To make a report, please email the private [security@kubernetes.io](mailto:security@kubernetes.io) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md).
+You can also email the private [security@kubernetes.io](mailto:security@kubernetes.io) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md).
-You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure.
+You may encrypt your email to this list using the GPG keys of the [Product Security Committee members](https://git.k8s.io/security/README.md#product-security-committee-psc). Encryption using GPG is NOT required to make a disclosure.
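A hedged sketch of the optional GPG step (the key file name and key ID below are placeholders; the real keys are linked from the Product Security Committee page above):

```shell
# Import a PSC member's public key, then encrypt the report before mailing it
# to security@kubernetes.io. Encryption is optional, not required.
gpg --import psc-member-key.asc
gpg --armor --encrypt --recipient "PSC-MEMBER-KEY-ID" report.txt
# Produces report.txt.asc, which can be pasted into the email body.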
@@ -215,59 +175,89 @@ See [topology.kubernetes.io/zone](#topologykubernetesiozone).
<!--
Example:
`topology.kubernetes.io/region=us-east-1`
`topology.kubernetes.io/zone=us-east-1c`
Used on: Node, PersistentVolume
-On the Node: The `kubelet` populates this with the zone information as defined by the `cloudprovider`.
-This will be set only if you are using a `cloudprovider`. However, you should consider setting this
-on the nodes if it makes sense in your topology.
+On Node: The `kubelet` or the external `cloud-controller-manager` populates this with the information as provided by the `cloudprovider`. This will be set only if you are using a `cloudprovider`. However, you should consider setting this on nodes if it makes sense in your topology.
-Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
+A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not.
+A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
+Kubernetes can use this information in various ways.
+For example,
+the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single-zone cluster (to reduce the impact of node failures,
+see [kubernetes.io/hostname](#kubernetesiohostname)). With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures).
-->
-在单区的集群中,Kubernetes 会自动将同一副本控制器或服务下的 pod 分散到不同的节点上 (以降低故障的影响)。在多区的集群中,这种分散的行为扩展到跨区的层面 (以降低区域故障的影响)。跨区分散通过 _SelectorSpreadPriority_ 来实现。
<!--
-_SelectorSpreadPriority_ is a best effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
+_SelectorSpreadPriority_ is a best effort placement.
+If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements),
+this placement might prevent equal spreading of your Pods across zones.
+If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-->
_SelectorSpreadPriority_ 是一种尽力而为(best-effort)的处理方式,如果集群中的区域是异构的
-(例如:不同区域之间的节点数量、节点类型或 pod 资源需求不同),可能使得 pod 在各区域间无法均匀分布。
-如有需要,用户可以使用同质的区域(节点数量和类型相同) 来减小 pod 分布不均的可能性。
+(例如:不同区域之间的节点数量、节点类型或 Pod 资源需求不同),可能使得 Pod 在各区域间无法均匀分布。
+如有需要,用户可以使用同质的区域(节点数量和类型相同) 来减小 Pod 分布不均的可能性。
<!--
The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-->
-由于卷不能跨区域挂载(attach),调度器 (通过 _VolumeZonePredicate_ 预选) 也会保证需要特定卷的 pod 被调度到卷所在的区域中。
+由于卷不能跨区域挂载(Attach),调度器(通过 _VolumeZonePredicate_ 预选)也会保证需要特定卷的 Pod 被调度到卷所在的区域中。
<!--
The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined.
The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three rack installation will choose a very different setup to a multi-datacenter configuration.
-->
<!--
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
-adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
+adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`,
+the scheduler prevents Pods from mounting volumes in a different zone.
+If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-->
如果 `PersistentVolumeLabel` 准入控制器不支持自动为 PersistentVolume 打标签,且用户希望防止 pod 跨区域进行卷的挂载,
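A minimal sketch of adding the labels manually, as the comment above suggests (the node and PersistentVolume names `worker-0` and `pv-0` are hypothetical):

```shell
# Label a node and a PersistentVolume with the topology keys from this page,
# for clusters where the cloudprovider/admission controller does not set them.
kubectl label nodes worker-0 \
  topology.kubernetes.io/region=us-east-1 \
  topology.kubernetes.io/zone=us-east-1c
kubectl label persistentvolumes pv-0 topology.kubernetes.io/zone=us-east-1c

# Verify the labels the scheduler will use for zone-aware placement
kubectl get nodes -L topology.kubernetes.io/zone
```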