[zh] Resync some concepts pages (concepts-4)

pull/34249/head
Qiming Teng 2022-06-12 23:25:11 +08:00
parent 89a4ec69d2
commit 91f46804c6
10 changed files with 266 additions and 289 deletions


@ -20,6 +20,11 @@ Kubernetes {{< skew currentVersion >}} 支持[容器网络接口](https://github
你必须使用和你的集群相兼容并且满足你的需求的 CNI 插件。
在更广泛的 Kubernetes 生态系统中你可以使用不同的插件(开源和闭源)。
<!--
A CNI plugin is required to implement the [Kubernetes network model](/docs/concepts/services-networking/#the-kubernetes-network-model).
-->
要实现 [Kubernetes 网络模型](/zh-cn/docs/concepts/services-networking/#the-kubernetes-network-model),你需要一个 CNI 插件。
<!--
You must use a CNI plugin that is compatible with the
[v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) or later
@ -37,72 +42,88 @@ CNI 规范的插件(插件可以兼容多个规范版本)。
<!--
## Installation
A CNI plugin is required to implement the [Kubernetes network model](/docs/concepts/services-networking/#the-kubernetes-network-model). The CRI manages its own CNI plugins. There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is "cni".
A Container Runtime, in the networking context, is a daemon on a node configured to provide CRI Services for kubelet. In particular, the Container Runtime must be configured to load the CNI plugins required to implement the Kubernetes network model.
-->
## 安装
## 安装 {#installation}
CNI 插件需要实现 [Kubernetes 网络模型](/zh/docs/concepts/services-networking/#the-kubernetes-network-model)。
CRI 管理它自己的 CNI 插件。
在使用插件时,需要记住两个 kubelet 命令行参数:
在网络语境中,容器运行时(Container Runtime是在节点上的守护进程
被配置用来为 kubelet 提供 CRI 服务。具体而言,容器运行时必须配置为加载所需的
CNI 插件,从而实现 Kubernetes 网络模型。
* `cni-bin-dir` kubelet 在启动时探测这个目录中的插件
* `network-plugin` 要使用的网络插件来自 `cni-bin-dir`
它必须与从插件目录探测到的插件报告的名称匹配。
对于 CNI 插件,其值为 "cni"。
{{< note >}}
<!--
Prior to Kubernetes 1.24, the CNI plugins could also be managed by the kubelet using the `cni-bin-dir` and `network-plugin` command-line parameters.
These command-line parameters were removed in Kubernetes 1.24, with management of the CNI no longer in scope for kubelet.
-->
在 Kubernetes 1.24 之前CNI 插件也可以由 kubelet 使用命令行参数 `cni-bin-dir`
`network-plugin` 管理。Kubernetes 1.24 移除了这些命令行参数,
CNI 的管理不再是 kubelet 的工作。
<!--
See [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)
if you are facing issues following the removal of dockershim.
-->
如果你在移除 dockershim 之后遇到问题,请参阅[排查 CNI 插件相关的错误](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)。
{{< /note >}}
<!--
For specific information about how a Container Runtime manages the CNI plugins, see the documentation for that Container Runtime, for example:
-->
要了解容器运行时如何管理 CNI 插件的具体信息,可参见对应容器运行时的文档,例如:
- [containerd](https://github.com/containerd/containerd/blob/main/script/setup/install-cni)
- [CRI-O](https://github.com/cri-o/cri-o/blob/main/contrib/cni/README.md)
<!--
For specific information about how to install and manage a CNI plugin, see the documentation for that plugin or [networking provider](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).
-->
要了解如何安装和管理 CNI 插件的具体信息,可参阅对应的插件或
[网络驱动Networking Provider](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)
的文档。
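
<!--
As a quick check (not part of the original page; assuming the common default paths
`/etc/cni/net.d` and `/opt/cni/bin`, which your runtime may override), you can list
what is present on a node:
-->
作为一个快速检查(非原始页面内容;假设使用常见的默认路径 `/etc/cni/net.d``/opt/cni/bin`
你的容器运行时可能配置了其他路径),可以在节点上列出现有内容:

```shell
# 查看容器运行时将要加载的 CNI 配置
ls -l /etc/cni/net.d
# 查看节点上可用的 CNI 插件可执行文件
ls -l /opt/cni/bin
```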
<!--
## Network Plugin Requirements
Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy.
For plugin developers and users who regularly build or deploy Kubernetes, the plugin may also need specific configuration to support kube-proxy.
The iptables proxy depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables.
For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly.
If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
-->
## 网络插件要求
## 网络插件要求 {#network-plugin-requirements}
除了提供
[`NetworkPlugin` 接口](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go)
来配置和清理 Pod 网络之外,该插件还可能需要对 kube-proxy 的特定支持。
iptables 代理显然依赖于 iptables插件可能需要确保 iptables 能够监控容器的网络通信。
对于插件开发人员以及时常会构建并部署 Kubernetes 的用户而言,
插件可能也需要特定的配置来支持 kube-proxy。
iptables 代理依赖于 iptables插件可能需要确保 iptables 能够监控容器的网络通信。
例如,如果插件将容器连接到 Linux 网桥,插件必须将 `net/bridge/bridge-nf-call-iptables`
系统参数设置为`1`,以确保 iptables 代理正常工作。
sysctl 参数设置为 `1`,以确保 iptables 代理正常工作。
如果插件不使用 Linux 网桥(而是类似于 Open vSwitch 或者其它一些机制),
它应该确保为代理对容器通信执行正确的路由。
<!--
By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy.
-->
默认情况下,如果未指定 kubelet 网络插件,则使用 `noop` 插件,
该插件设置 `net/bridge/bridge-nf-call-iptables=1`,以确保简单的配置
(如带网桥的 Docker与 iptables 代理正常工作。
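
<!--
A minimal sketch (not from the original page) of how to inspect or set this sysctl
directly on a node; most bridge-based plugins set it for you, and the `br_netfilter`
kernel module must be loaded for the key to exist:
-->
下面是一个简单的示意(非原文内容),展示如何在节点上直接查看或设置这个 sysctl
多数基于网桥的插件会自行设置它,且需要已加载 `br_netfilter` 内核模块:

```shell
# 查看当前取值
sysctl net.bridge.bridge-nf-call-iptables
# 如果插件使用 Linux 网桥而未自动设置,可手动设为 1
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
```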
<!--
### CNI
### Loopback CNI
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads a file from `--cni-conf-dir` (default `/etc/cni/net.d`) and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), and any required CNI plugins referenced by the configuration must be present in `--cni-bin-dir` (default `/opt/cni/bin`).
If there are multiple CNI configuration files in the directory, the kubelet uses the configuration file that comes first by name in lexicographic order.
In addition to the CNI plugin specified by the configuration file, Kubernetes requires the standard CNI [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) plugin, at minimum version 0.2.0
In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network model, Kubernetes also requires the container runtimes to provide a loopback interface `lo`, which is used for each sandbox (pod sandboxes, vm sandboxes, ...).
Implementing the loopback interface can be accomplished by re-using the [CNI loopback plugin.](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) or by developing your own code to achieve this (see [this example from CRI-O](https://github.com/cri-o/ocicni/blob/release-1.24/pkg/ocicni/util_linux.go#L91)).
-->
### CNI
### 本地回路 CNI {#loopback-cni}
通过给 Kubelet 传递 `--network-plugin=cni` 命令行选项可以选择 CNI 插件。
Kubelet 从 `--cni-conf-dir` (默认是 `/etc/cni/net.d` 读取文件并使用
该文件中的 CNI 配置来设置各个 Pod 的网络。
CNI 配置文件必须与
[CNI 规约](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration)
匹配,并且配置所引用的所有所需的 CNI 插件都应存在于
`--cni-bin-dir`(默认是 `/opt/cni/bin`)下。
如果这个目录中有多个 CNI 配置文件kubelet 将会使用按文件名的字典顺序排列
的第一个作为配置文件。
除了配置文件指定的 CNI 插件外Kubernetes 还需要标准的 CNI
[`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go)
插件最低版本是0.2.0。
除了安装到节点上用于实现 Kubernetes 网络模型的 CNI 插件外Kubernetes
还需要容器运行时提供一个本地回路接口 `lo`用于各个沙箱Pod 沙箱、虚机沙箱……)。
实现本地回路接口的工作可以通过复用
[CNI 本地回路插件](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go)来实现,
也可以通过开发自己的代码来实现
(参阅 [CRI-O 中的示例](https://github.com/cri-o/ocicni/blob/release-1.24/pkg/ocicni/util_linux.go#L91))。
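
<!--
As an illustrative sketch only (not an official procedure; the plugin path
`/opt/cni/bin` and `cniVersion` are assumptions), this is roughly what a container
runtime does when it re-uses the CNI loopback plugin for a sandbox:
-->
下面仅作示意(并非官方步骤;插件路径 `/opt/cni/bin``cniVersion` 均为假设值),
大致展示容器运行时复用 CNI loopback 插件为沙箱启用 `lo` 接口时所做的事情:

```shell
# 创建一个演示用的网络命名空间
sudo ip netns add demo
# 按 CNI 调用约定,通过环境变量和标准输入调用 loopback 插件
echo '{ "cniVersion": "0.4.0", "name": "lo", "type": "loopback" }' |
  sudo env CNI_COMMAND=ADD CNI_CONTAINERID=demo CNI_NETNS=/var/run/netns/demo \
    CNI_IFNAME=lo CNI_PATH=/opt/cni/bin /opt/cni/bin/loopback
# 确认 lo 接口已在该命名空间中被启用
sudo ip netns exec demo ip addr show lo
```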
<!--
#### Support hostPort
### Support hostPort
The CNI networking plugin supports `hostPort`. You can use the official [portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
plugin offered by the CNI plugin team or use your own plugin with portMapping functionality.
@ -110,7 +131,7 @@ plugin offered by the CNI plugin team or use your own plugin with portMapping fu
If you want to enable `hostPort` support, you must specify `portMappings capability` in your `cni-conf-dir`.
For example:
-->
#### 支持 hostPort
### 支持 hostPort {#support-hostport}
CNI 网络插件支持 `hostPort`。 你可以使用官方
[portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
@ -149,7 +170,7 @@ CNI 网络插件支持 `hostPort`。 你可以使用官方
```
<!--
#### Support traffic shaping
### Support traffic shaping
**Experimental Feature**
@ -159,7 +180,7 @@ plugin offered by the CNI plugin team or use your own plugin with bandwidth cont
If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file
(default `/etc/cni/net.d`) and ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`).
-->
#### 支持流量整形
### 支持流量整形 {#support-traffic-shaping}
**实验功能**
@ -216,16 +237,6 @@ metadata:
kubernetes.io/egress-bandwidth: 1M
...
```
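
<!--
Building on the annotations above, here is an illustrative complete Pod manifest
(not from the original page; the Pod name and image are arbitrary assumptions).
The bandwidth annotations are read by the bandwidth plugin when the Pod is created:
-->
结合上面的注解,下面给出一个示意性的完整 Pod 清单(非原文示例Pod 名称与镜像为任意假设值)。
带宽注解会在 Pod 创建时由 bandwidth 插件读取:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-demo           # 假设的名称,仅作演示
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
  - name: app
    image: nginx:stable
EOF
```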
<!--
## Usage Summary
* `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`).
-->
## 用法总结
* `--network-plugin=cni` 用来表明我们要使用 `cni` 网络插件,实际的 CNI 插件
可执行文件位于 `--cni-bin-dir`(默认是 `/opt/cni/bin`)下, CNI 插件配置位于
`--cni-conf-dir`(默认是 `/etc/cni/net.d`)下。
## {{% heading "whatsnext" %}}


@ -7,43 +7,46 @@ description: Kubernetes 网络背后的概念和资源。
<!--
## The Kubernetes network model
Every [`Pod`](/docs/concepts/workloads/pods/) gets its own IP address.
Every [`Pod`](/docs/concepts/workloads/pods/) in a cluster gets its own unique cluster-wide IP address.
This means you do not need to explicitly create links between `Pods` and you
almost never need to deal with mapping container ports to host ports.
This creates a clean, backwards-compatible model where `Pods` can be treated
much like VMs or physical hosts from the perspectives of port allocation,
naming, service discovery, [load balancing](/docs/concepts/services-networking/ingress/#load-balancing), application configuration,
and migration.
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
* pods on a [node](/docs/concepts/architecture/nodes/) can communicate with all pods on all nodes without NAT
* agents on a node (e.g. system daemons, kubelet) can communicate with all
pods on that node
Note: For those platforms that support `Pods` running in the host network (e.g.
Linux):
* pods in the host network of a node can communicate with all pods on all
nodes without NAT
naming, service discovery, [load balancing](/docs/concepts/services-networking/ingress/#load-balancing),
application configuration, and migration.
-->
## Kubernetes 网络模型 {#the-kubernetes-network-model}
每一个 [`Pod`](/zh/docs/concepts/workloads/pods/) 都有它自己的IP地址
集群中每一个 [`Pod`](/zh-cn/docs/concepts/workloads/pods/) 都会获得自己的、
独一无二的 IP 地址,
这就意味着你不需要显式地在 `Pod` 之间创建链接,你几乎不需要处理容器端口到主机端口之间的映射。
这将形成一个干净的、向后兼容的模型;在这个模型里,从端口分配、命名、服务发现、
[负载均衡](/zh/docs/concepts/services-networking/ingress/#load-balancing)、应用配置和迁移的角度来看,
`Pod` 可以被视作虚拟机或者物理主机。
[负载均衡](/zh/docs/concepts/services-networking/ingress/#load-balancing)、
应用配置和迁移的角度来看,`Pod` 可以被视作虚拟机或者物理主机。
<!--
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
-->
Kubernetes 强制要求所有网络设施都满足以下基本要求(从而排除了有意隔离网络的策略):
* [节点](/zh/docs/concepts/architecture/nodes/)上的 Pod 可以不通过 NAT 和其他任何节点上的 Pod 通信
<!--
* pods can communicate with all other pods on any other [node](/docs/concepts/architecture/nodes/)
without NAT
* agents on a node (e.g. system daemons, kubelet) can communicate with all
pods on that node
-->
* Pod 能够与所有其他[节点](/zh-cn/docs/concepts/architecture/nodes/)上的 Pod 通信,
且不需要网络地址转译NAT
* 节点上的代理比如系统守护进程、kubelet可以和节点上的所有 Pod 通信
备注:对于支持在主机网络中运行 `Pod` 的平台比如Linux
* 运行在节点主机网络里的 Pod 可以不通过 NAT 和所有节点上的 Pod 通信
<!--
Note: For those platforms that support `Pods` running in the host network (e.g.
Linux), when pods are attached to the host network of a node they can still communicate
with all pods on all nodes without NAT.
-->
说明:对于支持在主机网络中运行 `Pod` 的平台(比如 Linux
当 Pod 挂接到节点的宿主网络上时,它们仍可以不通过 NAT 和所有节点上的 Pod 通信。
<!--
This model is not only less complex overall, but it is principally compatible
@ -62,7 +65,8 @@ usage, but this is no different from processes in a VM. This is called the
如果你的任务开始是在虚拟机中运行的,你的虚拟机有一个 IP
可以和项目中其他虚拟机通信。这里的模型是基本相同的。
Kubernetes 的 IP 地址存在于 `Pod` 范围内 - 容器共享它们的网络命名空间 - 包括它们的 IP 地址和 MAC 地址。
Kubernetes 的 IP 地址存在于 `Pod` 范围内 —— 容器共享它们的网络命名空间 ——
包括它们的 IP 地址和 MAC 地址。
这就意味着 `Pod` 内的容器都可以通过 `localhost` 到达对方端口。
这也意味着 `Pod` 内的容器需要相互协调端口的使用,但是这和虚拟机中的进程似乎没有什么不同,
这也被称为“一个 Pod 一个 IP”模型。
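
<!--
An illustrative example (not from the original page; names and images are assumptions)
showing two containers in the same Pod reaching each other over `localhost`:
-->
下面是一个示意性的例子(非原文内容;名称与镜像为假设值),演示同一 Pod
中的两个容器通过 `localhost` 互相访问:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: one-pod-one-ip-demo      # 假设的名称
spec:
  containers:
  - name: web
    image: nginx:stable          # 在 80 端口提供服务
  - name: sidecar
    image: busybox:1.28
    # 通过共享的网络命名空间直接访问 web 容器
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo reached web via localhost; sleep 10; done"]
EOF
```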
@ -90,9 +94,10 @@ Kubernetes networking addresses four concerns:
-->
Kubernetes 网络解决四方面的问题:
- 一个 Pod 中的容器之间[通过本地回路loopback通信](/zh/docs/concepts/services-networking/dns-pod-service/)。
- 一个 Pod 中的容器之间[通过本地回路loopback通信](/zh-cn/docs/concepts/services-networking/dns-pod-service/)。
- 集群网络在不同 pod 之间提供通信。
- [Service 资源](/zh/docs/concepts/services-networking/service/)允许你
[对外暴露 Pods 中运行的应用程序](/zh/docs/concepts/services-networking/connect-applications-service/)
- [Service 资源](/zh-cn/docs/concepts/services-networking/service/)允许你
[向外暴露 Pods 中运行的应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/)
以支持来自于集群外部的访问。
- 可以使用 Services 来[发布仅供集群内部使用的服务](/zh/docs/concepts/services-networking/service-traffic-policy/)。
- 可以使用 Services 来[发布仅供集群内部使用的服务](/zh-cn/docs/concepts/services-networking/service-traffic-policy/)。


@ -20,7 +20,7 @@ weight: 40
For clarity, this guide defines the following terms:
-->
## 术语
## 术语 {#terminology}
为了表达更加清晰,本指南定义了以下术语:
@ -48,10 +48,11 @@ For clarity, this guide defines the following terms:
{{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
-->
## Ingress 是什么?
## Ingress 是什么? {#what-is-ingress}
[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io)
公开了从集群外部到集群内[服务](/zh/docs/concepts/services-networking/service/)的 HTTP 和 HTTPS 路由。
公开从集群外部到集群内[服务](/zh/docs/concepts/services-networking/service/)的
HTTP 和 HTTPS 路由。
流量路由由 Ingress 资源上定义的规则控制。
<!--
@ -59,22 +60,7 @@ Here is a simple example where an Ingress sends all its traffic to one Service:
-->
下面是一个将所有流量都发送到同一 Service 的简单 Ingress 示例:
{{< mermaid >}}
graph LR;
client([客户端])-. Ingress-管理的 <br> 负载均衡器 .->ingress[Ingress];
ingress-->|路由规则|service[Service];
subgraph cluster
ingress;
service-->pod1[Pod];
service-->pod2[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service,pod1,pod2 k8s;
class client plain;
class cluster cluster;
{{</ mermaid >}}
{{< figure src="/zh-cn/docs/images/ingress.svg" alt="ingress-diagram" class="diagram-large" caption="图. Ingress" link="https://mermaid.live/edit#pako:eNqNkstuwyAQRX8F4U0r2VHqPlSRKqt0UamLqlnaWWAYJygYLB59KMm_Fxcix-qmGwbuXA7DwAEzzQETXKutof0Ovb4vaoUQkwKUu6pi3FwXM_QSHGBt0VFFt8DRU2OWSGrKUUMlVQwMmhVLEV1Vcm9-aUksiuXRaO_CEhkv4WjBfAgG1TrGaLa-iaUw6a0DcwGI-WgOsF7zm-pN881fvRx1UDzeiFq7ghb1kgqFWiElyTjnuXVG74FkbdumefEpuNuRu_4rZ1pqQ7L5fL6YQPaPNiFuywcG9_-ihNyUkm6YSONWkjVNM8WUIyaeOJLO3clTB_KhL8NQDmVe-OJjxgZM5FhFiiFTK5zjDkxHBQ9_4zB4a-x20EGNSZhyaKmXrg7f5hSsvufUwTMXThtMWiot5Jh6p9ffimHijIezaSVoeN0uiqcfMJvf7w" >}}
<!--
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
@ -699,25 +685,8 @@ down to a minimum. For example, a setup like:
一个扇出fanout配置根据请求的 HTTP URI 将来自同一 IP 地址的流量路由到多个 Service。
Ingress 允许你将负载均衡器的数量降至最低。例如,这样的设置:
{{< mermaid >}}
graph LR;
client([客户端])-. Ingress-管理的 <br> 负载均衡器 .->ingress[Ingress, 178.91.123.132];
ingress-->|/foo|service1[Service service1:4200];
ingress-->|/bar|service2[Service service2:8080];
subgraph cluster
ingress;
service1-->pod1[Pod];
service1-->pod2[Pod];
service2-->pod3[Pod];
service2-->pod4[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s;
class client plain;
class cluster cluster;
{{</ mermaid >}}
{{< figure src="/zh-cn/docs/images/ingressFanOut.svg" alt="ingress-fanout-diagram" class="diagram-large" caption="图. Ingress 扇出" link="https://mermaid.live/edit#pako:eNqNUslOwzAQ_RXLvYCUhMQpUFzUUzkgcUBwbHpw4klr4diR7bCo8O8k2FFbFomLPZq3jP00O1xpDpjijWHtFt09zAuFUCUFKHey8vf6NE7QrdoYsDZumGIb4Oi6NAskNeOoZJKpCgxK4oXwrFVgRyi7nCVXWZKRPMlysv5yD6Q4Xryf1Vq_WzDPooJs9egLNDbolKTpT03JzKgh3zWEztJZ0Niu9L-qZGcdmAMfj4cxvWmreba613z9C0B-AMQD-V_AdA-A4j5QZu0SatRKJhSqhZR0wjmPrDP6CeikrutQxy-Cuy2dtq9RpaU2dJKm6fzI5Glmg0VOLio4_5dLjx27hFSC015KJ2VZHtuQvY2fuHcaE43G0MaCREOow_FV5cMxHZ5-oPX75UM5avuXhXuOI9yAaZjg_aLuBl6B3RYaKDDtSw4166QrcKE-emrXcubghgunDaY1kxYizDqnH99UhakzHYykpWD9hjS--fEJoIELqQ" >}}
<!--
would require an Ingress such as:
@ -783,25 +752,7 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at
基于名称的虚拟主机支持将针对多个主机名的 HTTP 流量路由到同一 IP 地址上。
{{< mermaid >}}
graph LR;
client([客户端])-. Ingress-管理的 <br> 负载均衡器 .->ingress[Ingress, 178.91.123.132];
ingress-->|Host: foo.bar.com|service1[Service service1:80];
ingress-->|Host: bar.foo.com|service2[Service service2:80];
subgraph cluster
ingress;
service1-->pod1[Pod];
service1-->pod2[Pod];
service2-->pod3[Pod];
service2-->pod4[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s;
class client plain;
class cluster cluster;
{{</ mermaid >}}
{{< figure src="/zh-cn/docs/images/ingressNameBased.svg" alt="ingress-namebase-diagram" class="diagram-large" caption="图. 基于名称实现虚拟托管的 Ingress" link="https://mermaid.live/edit#pako:eNqNkl9PwyAUxb8KYS-atM1Kp05m9qSJJj4Y97jugcLtRqTQAPVPdN_dVlq3qUt8gZt7zvkBN7xjbgRgiteW1Rt0_zjLNUJcSdD-ZBn21WmcoDu9tuBcXDHN1iDQVWHnSBkmUMEU0xwsSuK5DK5l745QejFNLtMkJVmSZmT1Re9NcTz_uDXOU1QakxTMJtxUHw7ss-SQLhehQEODTsdH4l20Q-zFyc84-Y67pghv5apxHuweMuj9eS2_NiJdPhix-kMgvwQShOyYMNkJoEUYM3PuGkpUKyY1KqVSdCSEiJy35gnoqCzLvo5fpPAbOqlfI26UsXQ0Ho9nB5CnqesRGTnncPYvSqsdUvqp9KRdlI6KojjEkB0mnLgjDRONhqENBYm6oXbLV5V1y6S7-l42_LowlIN2uFm_twqOcAW2YlK0H_i9c-bYb6CCHNO2FFCyRvkc53rbWptaMA83QnpjMS2ZchBh1nizeNMcU28bGEzXkrV_pArN7Sc0rBTu" >}}
<!--
The following Ingress tells the backing load balancer to route requests based on


@ -1,5 +1,5 @@
---
title: 服务
title: 服务Service
feature:
title: 服务发现与负载均衡
description: >
@ -205,7 +205,7 @@ metadata:
spec:
containers:
- name: nginx
image: nginx:11.14.2
image: nginx:stable
ports:
- containerPort: 80
name: http-web-svc
@ -307,6 +307,7 @@ where it's running, by adding an Endpoints object manually:
apiVersion: v1
kind: Endpoints
metadata:
# 这里的 name 要与 Service 的名字相同
name: my-service
subsets:
- addresses:
@ -321,6 +322,15 @@ The name of the Endpoints object must be a valid
Endpoints 对象的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
<!--
When you create an [Endpoints](docs/reference/kubernetes-api/service-resources/endpoints-v1/)
object for a Service, you set the name of the new object to be the same as that
of the Service.
-->
当你为某个 Service 创建一个 [Endpoints](/zh-cn/docs/reference/kubernetes-api/service-resources/endpoints-v1/)
对象时,你要将新对象的名称设置为与 Service 的名称相同。
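
<!--
As an illustrative sketch (not from the original page; the port numbers are assumptions),
a Service without selectors that pairs with the Endpoints object above could look like:
-->
作为示意(非原文内容;端口取值为假设),一个不带选择算符、与上面的 Endpoints
对象配对的 Service 可以这样定义:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service      # 必须与 Endpoints 对象的名称相同
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
EOF
```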
{{< note >}}
<!--
The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or
link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
@ -329,7 +339,6 @@ Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services,
because {{< glossary_tooltip term_id="kube-proxy" >}} doesn't support virtual IPs
as a destination.
-->
{{< note >}}
端点 IP _必须不可以_ 是本地回路IPv4 的 127.0.0.0/8IPv6 的 ::1/128
或本地链接IPv4 的 169.254.0.0/16 和 224.0.0.0/24IPv6 的 fe80::/64)。


@ -48,7 +48,7 @@ ReplicaSet's identifying information within their ownerReferences field. It's th
knows of the state of the Pods it is maintaining and plans accordingly.
-->
ReplicaSet 通过 Pod 上的
[metadata.ownerReferences](/zh/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
[metadata.ownerReferences](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
字段连接到附属 Pod该字段给出当前对象的属主资源。
ReplicaSet 所获得的 Pod 都在其 ownerReferences 字段中包含了属主 ReplicaSet
的标识信息。正是通过这一连接ReplicaSet 知道它所维护的 Pod 集合的状态,
@ -101,7 +101,6 @@ create the defined ReplicaSet and the Pods that it manages.
将此清单保存到 `frontend.yaml` 中,并将其提交到 Kubernetes 集群,
就能创建 yaml 文件所定义的 ReplicaSet 及其管理的 Pod。
```shell
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
```
@ -237,9 +236,9 @@ to owning Pods specified by its template-- it can acquire other Pods in the mann
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
-->
尽管你完全可以直接创建裸的 Pods,强烈建议你确保这些裸的 Pods 并不包含可能与你
的某个 ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有
在其模板中设置的 Pods,它还可以像前面小节中所描述的那样获得其他 Pods
尽管你完全可以直接创建裸的 Pod强烈建议你确保这些裸的 Pod 并不包含可能与你的某个
ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有在其模板中设置的
Pod它还可以像前面小节中所描述的那样获得其他 Pod。
{{< codenew file="pods/pod-rs.yaml" >}}
@ -250,11 +249,10 @@ ReplicaSet, they will immediately be acquired by it.
Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to
fulfill its replica count requirement:
-->
由于这些 Pod 没有控制器Controller或其他对象作为其属主引用并且
其标签与 frontend ReplicaSet 的选择算符匹配,它们会立即被该 ReplicaSet
获取。
由于这些 Pod 没有控制器Controller或其他对象作为其属主引用
并且其标签与 frontend ReplicaSet 的选择算符匹配,它们会立即被该 ReplicaSet 获取。
假定你在 frontend ReplicaSet 已经被部署之后创建 Pods,并且你已经在 ReplicaSet
假定你在 frontend ReplicaSet 已经被部署之后创建 Pod并且你已经在 ReplicaSet
中设置了其初始的 Pod 副本数以满足其副本计数需要:
```shell
@ -267,8 +265,8 @@ its desired count.
Fetching the Pods:
-->
新的 Pods 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止,因为
它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。
新的 Pod 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止,
因为它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。
取回 Pods
@ -280,7 +278,7 @@ kubectl get pods
<!--
The output shows that the new Pods are either already terminated, or in the process of being terminated:
-->
输出显示新的 Pods 或者已经被终止,或者处于终止过程中:
输出显示新的 Pod 或者已经被终止,或者处于终止过程中:
```
NAME READY STATUS RESTARTS AGE
@ -313,9 +311,9 @@ kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
number of its new Pods and the original matches its desired count. As fetching the Pods:
-->
你会看到 ReplicaSet 已经获得了该 Pods,并仅根据其规约创建新的 Pods直到
新的 Pods 和原来的 Pods 的总数达到其预期个数。
这时取回 Pods
你会看到 ReplicaSet 已经获得了该 Pod并仅根据其规约创建新的 Pod
直到新的 Pod 和原来的 Pod 的总数达到其预期个数。
这时取回 Pod 列表
```shell
kubectl get pods
@ -355,7 +353,7 @@ A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contrib
对于 ReplicaSets 而言,其 `kind` 始终是 ReplicaSet。
ReplicaSet 对象的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
ReplicaSet 也需要
[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
@ -373,11 +371,11 @@ For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/
-->
### Pod 模版 {#pod-template}
`.spec.template` 是一个 [Pod 模版](/zh/docs/concepts/workloads/pods/#pod-templates)
`.spec.template` 是一个 [Pod 模版](/zh-cn/docs/concepts/workloads/pods/#pod-templates)
要求设置标签。在 `frontend.yaml` 示例中,我们指定了标签 `tier: frontend`
注意不要将标签与其他控制器的选择算符重叠,否则那些控制器会尝试收养此 Pod。
对于模板的[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
对于模板的[重启策略](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
字段,`.spec.template.spec.restartPolicy`,唯一允许的取值是 `Always`,这也是默认值。
<!--
@ -397,7 +395,7 @@ be rejected by the API.
-->
### Pod 选择算符 {#pod-selector}
`.spec.selector` 字段是一个[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)。
`.spec.selector` 字段是一个[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。
如前文中[所讨论的](#how-a-replicaset-works),这些是用来标识要被获取的 Pod
的标签。在前面的 `frontend.yaml` 示例中,选择算符为:
@ -406,17 +404,16 @@ matchLabels:
tier: frontend
```
在 ReplicaSet 中,`.spec.template.metadata.labels` 的值必须与 `spec.selector`
相匹配,否则该配置会被 API 拒绝。
在 ReplicaSet 中,`.spec.template.metadata.labels` 的值必须与 `spec.selector`
相匹配,否则该配置会被 API 拒绝。
{{< note >}}
<!--
For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
-->
对于设置了相同的 `.spec.selector`,但
`.spec.template.metadata.labels``.spec.template.spec` 字段不同的
两个 ReplicaSet 而言,每个 ReplicaSet 都会忽略被另一个 ReplicaSet 所
创建的 Pods。
`.spec.template.metadata.labels``.spec.template.spec` 字段不同的两个
ReplicaSet 而言,每个 ReplicaSet 都会忽略被另一个 ReplicaSet 所创建的 Pods。
{{< /note >}}
<!--
@ -451,7 +448,7 @@ For example:
要删除 ReplicaSet 和它的所有 Pod使用
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。
默认情况下,[垃圾收集器](/zh/docs/concepts/workloads/controllers/garbage-collection/)
默认情况下,[垃圾收集器](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/)
自动删除所有依赖的 Pod。
当使用 REST API 或 `client-go` 库时,你必须在 `-d` 选项中将 `propagationPolicy`
@ -498,7 +495,7 @@ To update Pods to a new spec in a controlled way, use a
由于新旧 ReplicaSet 的 `.spec.selector` 是相同的,新的 ReplicaSet 将接管老的 Pod。
但是,它不会努力使现有的 Pod 与新的、不同的 Pod 模板匹配。
若想要以可控的方式更新 Pod 的规约,可以使用
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/#creating-a-deployment)
[Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/#creating-a-deployment)
资源,因为 ReplicaSet 并不直接支持滚动更新。
<!--
@ -546,8 +543,7 @@ prioritize scaling down pods based on the following general algorithm:
较小的优先被裁减掉
3. 所处节点上副本个数较多的 Pod 优先于所处节点上副本较少者
4. 如果 Pod 的创建时间不同,最近创建的 Pod 优先于早前创建的 Pod 被裁减。
(当 `LogarithmicScaleDown` 这一
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
(当 `LogarithmicScaleDown` 这一[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
被启用时,创建时间是按整数幂级来分组的)。
如果以上比较结果都相同,则随机选择。
@ -563,7 +559,7 @@ prioritize scaling down pods based on the following general algorithm:
Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost)
annotation, users can set a preference regarding which pods to remove first when downscaling a ReplicaSet.
-->
通过使用 [`controller.kubernetes.io/pod-deletion-cost`](/zh/docs/reference/labels-annotations-taints/#pod-deletion-cost)
通过使用 [`controller.kubernetes.io/pod-deletion-cost`](/zh-cn/docs/reference/labels-annotations-taints/#pod-deletion-cost)
注解,用户可以对 ReplicaSet 缩容时要先删除哪些 Pods 设置偏好。
<!--
@ -588,8 +584,7 @@ This feature is beta and enabled by default. You can disable it using the
`PodDeletionCost` in both kube-apiserver and kube-controller-manager.
-->
此功能特性处于 Beta 阶段,默认被启用。你可以通过为 kube-apiserver 和
kube-controller-manager 设置[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
`PodDeletionCost` 来禁用此功能。
{{< note >}}
@ -631,8 +626,7 @@ the ReplicaSet we created in the previous example.
-->
### ReplicaSet 作为水平的 Pod 自动缩放器目标 {#replicaset-as-a-horizontal-pod-autoscaler-target}
ReplicaSet 也可以作为
[水平的 Pod 缩放器 (HPA)](/zh/docs/tasks/run-application/horizontal-pod-autoscale/)
ReplicaSet 也可以作为[水平的 Pod 缩放器 (HPA)](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/)
的目标。也就是说ReplicaSet 可以被 HPA 自动缩放。
以下是 HPA 以我们在前一个示例中创建的副本集为目标的示例。
@ -676,17 +670,17 @@ As such, it is recommended to use Deployments when you want ReplicaSets.
### Deployment推荐 {#deployment-recommended}
[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 是一个
可以拥有 ReplicaSet 并使用声明式方式在服务器端完成对 Pods 滚动更新的对象。
尽管 ReplicaSet 可以独立使用,目前它们的主要用途是提供给 Deployment 作为
编排 Pod 创建、删除和更新的一种机制。当使用 Deployment 时,你不必关心
如何管理它所创建的 ReplicaSetDeployment 拥有并管理其 ReplicaSet。
[`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 是一个可以拥有
ReplicaSet 并使用声明式方式在服务器端完成对 Pods 滚动更新的对象。
尽管 ReplicaSet 可以独立使用,目前它们的主要用途是提供给 Deployment 作为编排
Pod 创建、删除和更新的一种机制。当使用 Deployment 时,你不必关心如何管理它所创建的
ReplicaSetDeployment 拥有并管理其 ReplicaSet。
因此,建议你在需要 ReplicaSet 时使用 Deployment。
<!--
### Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node such as kubelet.
-->
### 裸 Pod {#bare-pods}
@ -695,7 +689,7 @@ Pod例如在节点故障或破坏性的节点维护如内核升级
因为这个原因,我们建议你使用 ReplicaSet即使应用程序只需要一个 Pod。
想像一下ReplicaSet 类似于进程监视器,只不过它在多个节点上监视多个 Pod
而不是在单个节点上监视单个进程。
ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理例如Kubelet 或 Docker)去完成。
ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理(例如 kubelet去完成。
<!--
### Job
@ -703,10 +697,9 @@ ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理(
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own
(that is, batch jobs).
-->
### Job
使用[`Job`](/zh/docs/concepts/workloads/controllers/job/) 代替ReplicaSet
使用[`Job`](/zh-cn/docs/concepts/workloads/controllers/job/) 代替 ReplicaSet
可以用于那些期望自行终止的 Pod。
<!--
@ -720,7 +713,7 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
### DaemonSet
对于管理那些提供主机级别功能(如主机监控和主机日志)的容器,
就要用 [`DaemonSet`](/zh/docs/concepts/workloads/controllers/daemonset/)
就要用 [`DaemonSet`](/zh-cn/docs/concepts/workloads/controllers/daemonset/)
而不用 ReplicaSet。
这些 Pod 的寿命与主机寿命有关:这些 Pod 需要先于主机上的其他 Pod 运行,
并且在机器准备重新启动/关闭时安全地终止。
@ -733,9 +726,9 @@ The two serve the same purpose, and behave similarly, except that a ReplicationC
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers
-->
ReplicaSet 是 [ReplicationController](/zh/docs/concepts/workloads/controllers/replicationcontroller/)
ReplicaSet 是 [ReplicationController](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/)
的后继者。二者目的相同且行为类似,只是 ReplicationController 不支持
[标签用户指南](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)
[标签用户指南](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)
中讨论的基于集合的选择算符需求。
因此,相比于 ReplicationController应优先考虑 ReplicaSet。
@ -752,11 +745,13 @@ ReplicaSet 是 [ReplicationController](/zh/docs/concepts/workloads/controllers/r
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how
you can use it to manage application availability during disruptions.
-->
* 了解 [Pods](/zh/docs/concepts/workloads/pods)。
* 了解 [Deployments](/zh/docs/concepts/workloads/controllers/deployment/)。
* [使用 Deployment 运行一个无状态应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/),它依赖于 ReplicaSet。
* 了解 [Pod](/zh-cn/docs/concepts/workloads/pods)。
* 了解 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)。
* [使用 Deployment 运行一个无状态应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/)
它依赖于 ReplicaSet。
* `ReplicaSet` 是 Kubernetes REST API 中的顶级资源。阅读
{{< api-reference page="workload-resources/replica-set-v1" >}}
对象定义理解关于该资源的 API。
* 阅读 [Pod 干扰预算Disruption Budget](/zh/docs/concepts/workloads/pods/disruptions/)
* 阅读 [Pod 干扰预算Disruption Budget](/zh-cn/docs/concepts/workloads/pods/disruptions/),
了解如何在干扰下运行高度可用的应用。


@ -145,8 +145,8 @@ have some advantages for start-up related code:
* Init 容器能以不同于 Pod 内应用容器的文件系统视图运行。因此Init 容器可以访问
应用容器不能访问的 {{< glossary_tooltip text="Secret" term_id="secret" >}} 的权限。
* 由于 Init 容器必须在应用容器启动之前运行完成,因此 Init 容器
提供了一种机制来阻塞或延迟应用容器的启动,直到满足了一组先决条件。
* 由于 Init 容器必须在应用容器启动之前运行完成,因此 Init
容器提供了一种机制来阻塞或延迟应用容器的启动,直到满足了一组先决条件。
一旦前置条件满足Pod 内的所有的应用容器会并行启动。
<!--
@ -156,19 +156,35 @@ Here are some ideas for how to use init containers:
* Wait for a {{< glossary_tooltip text="Service" term_id="service">}} to
be created, using a shell one-line command like:
-->
### 示例 {#examples}
下面是一些如何使用 Init 容器的想法:
* 等待一个 Service 完成创建,通过类似如下 Shell 命令:
```shell
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
```
<!--
* Register this Pod with a remote server from the downward API with a command like:
-->
* 注册这个 Pod 到远程服务器,通过在命令中调用 API类似如下
```shell
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
```
<!--
* Wait for some time before starting the app container with a command like
-->
* 在启动应用容器之前等一段时间,使用类似命令:
```shell
sleep 60
```
<!--
* Clone a Git repository into a {{< glossary_tooltip text="Volume" term_id="volume" >}}
* Place values into a configuration file and run a template tool to dynamically
@ -176,29 +192,6 @@ Here are some ideas for how to use init containers:
place the `POD_IP` value in a configuration and generate the main app
configuration file using Jinja.
-->
### 示例 {#examples}
下面是一些如何使用 Init 容器的想法:
* 等待一个 Service 完成创建,通过类似如下 shell 命令:
```shell
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
```
* 注册这个 Pod 到远程服务器,通过在命令中调用 API类似如下
```shell
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register \
-d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
```
* 在启动应用容器之前等一段时间,使用类似命令:
```shell
sleep 60
```
* 克隆 Git 仓库到{{< glossary_tooltip text="卷" term_id="volume" >}}中。
* 将配置值放到配置文件中,运行模板工具为主应用容器动态地生成配置文件。
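
<!--
Putting the first idea above (waiting for a Service) into a complete Pod spec, as an
illustrative sketch (not the original page's example; names and images are assumptions,
and busybox's `nslookup` is used here in place of `dig`):
-->
把上面第一个想法(等待某个 Service 就绪)组合成一个完整的 Pod 规约,示意如下
(非原文示例;名称与镜像为假设值,这里用 busybox 自带的 `nslookup` 代替 `dig`

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                # 假设的名称
spec:
  initContainers:
  - name: wait-for-myservice
    image: busybox:1.28
    # 在 myservice 的 DNS 记录出现之前一直等待
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']
  containers:
  - name: app
    image: nginx:stable
EOF
```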
@ -249,6 +242,7 @@ kubectl apply -f myapp.yaml
The output is similar to this:
-->
输出类似于:
```
pod/myapp-pod created
```
@ -261,10 +255,12 @@ And check on its status with:
```shell
kubectl get -f myapp.yaml
```
<!--
The output is similar to this:
-->
输出类似于:
```
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
@ -278,10 +274,12 @@ or for more details:
```shell
kubectl describe -f myapp.yaml
```
<!--
The output is similar to this:
-->
输出类似于:
```
Name: myapp-pod
Namespace: default
@ -408,13 +406,26 @@ init containers. [What's next](#what-s-next) contains a link to a more detailed
During Pod startup, the kubelet delays running init containers until the networking
and storage are ready. Then the kubelet runs the Pod's init containers in the order
they appear in the Pod's spec.
-->
## 具体行为 {#detailed-behavior}
在 Pod 启动过程中kubelet 会延迟运行 Init 容器,直到网络和存储就绪。
随后kubelet 按照 Init 容器在 Pod 规约中的出现顺序依次运行它们。
<!--
Each init container must exit successfully before
the next container starts. If a container fails to start due to the runtime or
exits with failure, it is retried according to the Pod `restartPolicy`. However,
if the Pod `restartPolicy` is set to Always, the init containers use
`restartPolicy` OnFailure.
-->
每个 Init 容器成功退出后才会启动下一个 Init 容器。
如果某容器因为容器运行时的原因无法启动或以错误状态退出kubelet 会根据
Pod 的 `restartPolicy` 策略进行重试。
然而,如果 Pod 的 `restartPolicy` 设置为 "Always"Init 容器失败时会使用
`restartPolicy` 的 "OnFailure" 策略。
<!--
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an
init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initialized` set to false.
@ -422,17 +433,6 @@ is in the `Pending` state but should have a condition `Initialized` set to false
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers
must execute again.
-->
## 具体行为 {#detailed-behavior}
在 Pod 启动过程中,每个 Init 容器会在网络和数据卷初始化之后按顺序启动。
kubelet 运行依据 Init 容器在 Pod 规约中的出现顺序依次运行之。
每个 Init 容器成功退出后才会启动下一个 Init 容器。
如果某容器因为容器运行时的原因无法启动或以错误状态退出kubelet 会根据
Pod 的 `restartPolicy` 策略进行重试。
然而,如果 Pod 的 `restartPolicy` 设置为 "Always"Init 容器失败时会使用
`restartPolicy` 的 "OnFailure" 策略。
在所有的 Init 容器没有成功之前Pod 将不会变成 `Ready` 状态。
Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的 Pod 处于 `Pending` 状态,
但会将状况 `Initialized` 设置为 false。
@ -446,11 +446,6 @@ Altering an init container image field is equivalent to restarting the Pod.
Because init containers can be restarted, retried, or re-executed, init container
code should be idempotent. In particular, code that writes to files on `EmptyDirs`
should be prepared for the possibility that an output file already exists.
Init containers have all of the fields of an app container. However, Kubernetes
prohibits `readinessProbe` from being used because init containers cannot
define readiness distinct from completion. This is enforced during validation.
-->
对 Init 容器规约的修改仅限于容器的 `image` 字段。
更改 Init 容器的 `image` 字段,等同于重启该 Pod。
@ -458,6 +453,11 @@ define readiness distinct from completion. This is enforced during validation.
因为 Init 容器可能会被重启、重试或者重新执行,所以 Init 容器的代码应该是幂等的。
特别地,基于 `emptyDirs` 写文件的代码,应该对输出文件可能已经存在做好准备。
<!--
Init containers have all of the fields of an app container. However, Kubernetes
prohibits `readinessProbe` from being used because init containers cannot
define readiness distinct from completion. This is enforced during validation.
-->
Init 容器具有应用容器的所有字段。然而 Kubernetes 禁止使用 `readinessProbe`
因为 Init 容器不能定义不同于完成态Completion的就绪态Readiness
Kubernetes 会在校验时强制执行此检查。
@ -487,31 +487,36 @@ Init 容器一直重复失败。
Given the ordering and execution for init containers, the following rules
for resource usage apply:
-->
### 资源 {#resources}
在给定的 Init 容器执行顺序下,资源使用适用于如下规则:
<!--
* The highest of any particular resource request or limit defined on all init
containers is the *effective init request/limit*. If any resource has no
resource limit specified this is considered as the highest limit.
* The Pod's *effective request/limit* for a resource is the higher of:
* the sum of all app containers request/limit for a resource
* the effective init request/limit for a resource
-->
* 所有 Init 容器上定义的任何特定资源的 limit 或 request 的最大值,作为
Pod **有效初始 request/limit**。
如果任何资源没有指定资源限制,这被视为最高限制。
* Pod 对资源的 **有效 limit/request** 是如下两者中的较大者:
* 所有应用容器对某个资源的 limit/request 之和
* 对某个资源的有效初始 limit/request
<!--
* Scheduling is done based on effective requests/limits, which means
init containers can reserve resources for initialization that are not used
during the life of the Pod.
* The QoS (quality of service) tier of the Pod's *effective QoS tier* is the
QoS tier for init containers and app containers alike.
-->
### 资源 {#resources}
在给定的 Init 容器执行顺序下,资源使用适用于如下规则:
* 所有 Init 容器上定义的任何特定资源的 limit 或 request 的最大值,作为 Pod *有效初始 request/limit*
如果任何资源没有指定资源限制,这被视为最高限制。
* Pod 对资源的 *有效 limit/request* 是如下两者的较大者:
* 所有应用容器对某个资源的 limit/request 之和
* 对某个资源的有效初始 limit/request
* 基于有效 limit/request 完成调度,这意味着 Init 容器能够为初始化过程预留资源,
这些资源在 Pod 生命周期过程中并没有被使用。
* Pod 的 *有效 QoS 层* ,与 Init 容器和应用容器的一样。
* Pod 的 **有效 QoS 层** ,与 Init 容器和应用容器的一样。
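
<!--
As a worked illustration (not from the original page): suppose a Pod has two init
containers requesting 100m and 500m of CPU, and two app containers each requesting
200m. The effective init request is 500m (the larger of the two init values), the app
containers sum to 400m, so the Pod is scheduled using the larger of the two: 500m.
-->
举一个说明性的例子(非原文内容):假设某 Pod 有两个 Init 容器,分别请求 100m 和 500m CPU
另有两个应用容器,各自请求 200m CPU。则有效的初始 request 为 500m两个 Init 值中的较大者),
应用容器的 request 之和为 400m因此调度时采用两者中较大的 500m 作为该 Pod 的有效 request。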
<!--
Quota and limits are applied based on the effective Pod request and limit.
@ -525,17 +530,18 @@ Pod 级别的 cgroups 是基于有效 Pod 的请求和限制值,和调度器
A Pod can restart, causing re-execution of init containers, for the following
reasons:
-->
### Pod 重启的原因 {#pod-restart-reasons}
Pod 重启会导致 Init 容器重新执行,主要有如下几个原因:
<!--
* The Pod infrastructure container is restarted. This is uncommon and would
have to be done by someone with root access to nodes.
* All containers in a Pod are terminated while `restartPolicy` is set to Always,
forcing a restart, and the init container completion record has been lost due
to garbage collection.
-->
### Pod 重启的原因 {#pod-restart-reasons}
Pod 重启会导致 Init 容器重新执行,主要有如下几个原因:
* Pod 的基础设施容器 (译者注:如 `pause` 容器) 被重启。这种情况不多见,
必须由具备 root 权限访问节点的人员来完成。
@ -549,8 +555,8 @@ applies for Kubernetes v1.20 and later. If you are using an earlier version of
Kubernetes, consult the documentation for the version you are using.
-->
当 Init 容器的镜像发生改变或者 Init 容器的完成记录因为垃圾收集等原因被丢失时,
Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。如果你在使用较早
版本的 Kubernetes可查阅你所使用的版本对应的文档。
Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。
如果你在使用较早版本的 Kubernetes可查阅你所使用的版本对应的文档。
## {{% heading "whatsnext" %}}
@ -558,5 +564,6 @@ Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug/debug-application/debug-init-containers/)
-->
* 阅读[创建包含 Init 容器的 Pod](/zh/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* 学习如何[调试 Init 容器](/zh/docs/tasks/debug/debug-application/debug-init-containers/)
* 阅读[创建包含 Init 容器的 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* 学习如何[调试 Init 容器](/zh-cn/docs/tasks/debug/debug-application/debug-init-containers/)


@ -16,8 +16,8 @@ weight: 40
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
-->
你可以使用 _拓扑分布约束Topology Spread Constraints_ 来控制
{{< glossary_tooltip text="Pods" term_id="Pod" >}} 在集群内故障域
之间的分布,例如区域Region、可用区Zone、节点和其他用户自定义拓扑域。
{{< glossary_tooltip text="Pod" term_id="Pod" >}} 在集群内故障域之间的分布,
例如区域Region、可用区Zone、节点和其他用户自定义拓扑域。
这样做有助于实现高可用并提升资源利用率。
<!-- body -->
@ -125,14 +125,13 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
precedence to topologies that would help reduce the skew.
-->
- **maxSkew** 描述 Pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中
匹配的 pod 之间的最大允许差值。它必须大于零。取决于 `whenUnsatisfiable`
取值,其语义会有不同。
- 当 `whenUnsatisfiable` 等于 "DoNotSchedule" 时,`maxSkew` 是目标拓扑域
中匹配的 Pod 数与全局最小值(一个拓扑域中与标签选择器匹配的 Pod 的最小数量。例如,如果你有
- **maxSkew** 描述 Pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中匹配的
Pod 之间的最大允许差值。它必须大于零。取决于 `whenUnsatisfiable`取值,
其语义会有不同。
- 当 `whenUnsatisfiable` 等于 "DoNotSchedule" 时,`maxSkew` 是目标拓扑域中匹配的
Pod 数与全局最小值(一个拓扑域中与标签选择器匹配的 Pod 的最小数量。例如,如果你有
3 个区域,分别具有 0 个、2 个 和 3 个匹配的 Pod则全局最小值为 0。之间可存在的差异。
- 当 `whenUnsatisfiable` 等于 "ScheduleAnyway" 时,调度器会更为偏向能够降低
偏差值的拓扑域。
- 当 `whenUnsatisfiable` 等于 "ScheduleAnyway" 时,调度器会更为偏向能够降低偏差值的拓扑域。
<!--
- **minDomains** indicates a minimum number of eligible domains.
@ -153,9 +152,9 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
符合条件的域是其节点与节点选择器匹配的域。
- 指定的 `minDomains` 的值必须大于 0。
- 当符合条件的、拓扑键匹配的域的数量小于 `minDomains`Pod 拓扑分布将“全局最小值”global minimum 设为
0然后进行 `skew` 计算。“全局最小值”是一个符合条件的域中匹配 Pods 的最小数量,
如果符合条件的域的数量小于 `minDomains`,则全局最小值为零。
- 当符合条件的、拓扑键匹配的域的数量小于 `minDomains`Pod 拓扑分布将“全局最小值”
global minimum设为 0然后进行 `skew` 计算。“全局最小值”是一个符合条件的域中匹配
Pod 的最小数量,如果符合条件的域的数量小于 `minDomains`,则全局最小值为零。
- 当符合条件的拓扑键匹配域的个数等于或大于 `minDomains` 时,该值对调度没有影响。
- 当 `minDomains` 为 nil 时,约束的行为等于 `minDomains` 为 1。
- 当 `minDomains` 不为 nil 时,`whenUnsatisfiable` 的值必须为 "`DoNotSchedule`" 。
@ -180,16 +179,14 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
-->
- **topologyKey** 是节点标签的键。如果两个节点使用此键标记并且具有相同的标签值,
则调度器会将这两个节点视为处于同一拓扑域中。调度器试图在每个拓扑域中放置数量
均衡的 Pod。
则调度器会将这两个节点视为处于同一拓扑域中。调度器试图在每个拓扑域中放置数量均衡的 Pod。
- **whenUnsatisfiable** 指示如果 Pod 不满足分布约束时如何处理:
- `DoNotSchedule`(默认)告诉调度器不要调度。
- `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对
节点进行排序。
- `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对节点进行排序。
- **labelSelector** 用于查找匹配的 pod。匹配此标签的 Pod 将被统计,以确定相应
拓扑域中 Pod 的数量。
- **labelSelector** 用于查找匹配的 Pod。匹配此标签的 Pod 将被统计,
以确定相应拓扑域中 Pod 的数量。
有关详细信息,请参考[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)。
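
<!--
Putting the fields above together, here is an illustrative Pod spec (not the original
page's example; the name, labels, and topology key reuse values assumed elsewhere on
this page):
-->
综合上述字段,下面是一个示意性的 Pod 规约(非原文示例;名称、标签和拓扑键沿用本页其他示例中的假设值):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo              # 假设的名称
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone            # 节点需带有 zone 标签
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: app
    image: nginx:stable
EOF
```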
<!--
@ -201,8 +198,8 @@ kube-scheduler 会为新的 Pod 寻找一个能够满足所有约束的节点。
<!--
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
-->
你可以执行 `kubectl explain Pod.spec.topologySpreadConstraints` 命令以
了解关于 topologySpreadConstraints 的更多信息。
你可以执行 `kubectl explain Pod.spec.topologySpreadConstraints`
命令以了解关于 topologySpreadConstraints 的更多信息。
<!--
### Example: One TopologySpreadConstraint
@ -243,16 +240,15 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones,
`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:&lt;any value&gt;" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod cant satisfy the constraint.
-->
`topologyKey: zone` 意味着均匀分布将只应用于存在标签键值对为
"zone:&lt;any value&gt;" 的节点。
"zone:&lt;任何值&gt;" 的节点。
`whenUnsatisfiable: DoNotSchedule` 告诉调度器如果新的 Pod 不满足约束,
则让它保持悬决状态。
<!--
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1],
hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
-->
如果调度器将新的 Pod 放入 "zoneA"Pods 分布将变为 [3, 1],因此实际的偏差
23 - 1。这违反了 `maxSkew: 1` 的约定。此示例中,新 Pod 只能放置在
如果调度器将新的 Pod 放入 "zoneA"Pod 分布将变为 [3, 1],因此实际偏差为
23 - 1。这违反了 `maxSkew: 1` 的约定。此示例中,新 Pod 只能放置在
"zoneB" 上:
{{<mermaid>}}
@ -355,8 +351,8 @@ You can use 2 TopologySpreadConstraints to control the Pods spreading on both zo
In this case, to match the first constraint, the incoming Pod can only be placed onto "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4".
-->
在这种情况下,为了匹配第一个约束,新的 Pod 只能放置在 "zoneB" 中;而在第二个约束中,
新的 Pod 只能放置在 "node4" 上。最后两个约束的结果加在一起,唯一可行的选择是放置
"node4" 上。
新的 Pod 只能放置在 "node4" 上。最后两个约束的结果加在一起,唯一可行的选择是放置
"node4" 上。
<!--
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
@ -383,7 +379,7 @@ graph BT
{{< /mermaid >}}
<!--
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put onto "node2". Then a joint result of "zoneB" and "node2" returns nothing.
-->
如果对集群应用 "two-constraints.yaml",会发现 "mypod" 处于 `Pending` 状态。
这是因为:为了满足第一个约束,"mypod" 只能放在 "zoneB" 中,而第二个约束要求
@ -448,7 +444,7 @@ class zoneC cluster;
<!--
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
-->
而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 YAML
以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector`
@ -479,7 +475,7 @@ There are some implicit conventions worth noting here:
- The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that:
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
2. the incoming Pod has no chances to be scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
-->
- 只有与新的 Pod 具有相同命名空间的 Pod 才能作为匹配候选者。
- 调度器会忽略没有 `topologySpreadConstraints[*].topologyKey` 的节点。这意味着:
@ -494,8 +490,8 @@ There are some implicit conventions worth noting here:
<!--
- Be aware of what will happen if the incomingPods `topologySpreadConstraints[*].labelSelector` doesnt match its own labels. In the above example, if we remove the incoming Pods labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - its still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workloads `topologySpreadConstraints[*].labelSelector` to match its own labels.
-->
- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector` 与自身的
标签不匹配,将会发生什么。
- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector`
与自身的标签不匹配,将会发生什么。
在上面的例子中,如果移除新 Pod 上的标签Pod 仍然可以调度到 "zoneB",因为约束仍然满足。
然而在调度之后集群的不平衡程度保持不变。zoneA 仍然有 2 个带有 {foo:bar} 标签的 Pod
zoneB 有 1 个带有 {foo:bar} 标签的 Pod。
@ -513,8 +509,8 @@ topology spread constraints are applied to a Pod if, and only if:
-->
### 集群级别的默认约束 {#cluster-level-default-constraints}
为集群设置默认的拓扑分布约束也是可能的。默认拓扑分布约束在且仅在以下条件满足
时才会应用到 Pod 上:
为集群设置默认的拓扑分布约束也是可能的。
默认拓扑分布约束在且仅在以下条件满足时才会应用到 Pod 上:
- Pod 没有在其 `.spec.topologySpreadConstraints` 设置任何约束;
- Pod 隶属于某个服务、副本控制器、ReplicaSet 或 StatefulSet。
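
<!--
An illustrative kube-scheduler configuration fragment for such defaults is sketched
below (not from the original page; the API version varies with your Kubernetes
release — `v1beta3` is used here as an assumption — and the file is passed to
kube-scheduler via `--config`):
-->
下面是一个示意性的 kube-scheduler 配置片段非原文内容API 版本随 Kubernetes
版本而异,这里以 `v1beta3` 为假设;该文件通过 `--config` 参数传给 kube-scheduler

```shell
# 将调度器配置写入文件,再通过 kube-scheduler 的 --config 参数引用
cat <<EOF | sudo tee /etc/kubernetes/scheduler-config.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List
EOF
```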

Three new SVG image files are added by this commit (10 KiB, 15 KiB, and 15 KiB); their diffs are suppressed because one or more lines are too long.