Merge pull request #36355 from windsonsea/nodecon

[zh] Sync 1.25 /production-environment/container-runtimes.md
## Cgroup drivers {#cgroup-drivers}

On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
are used to constrain resources that are allocated to processes.
Both the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
underlying container runtime need to interface with control groups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/)
and to set resources such as CPU/memory requests and limits. To interface with control
groups, the kubelet and the container runtime need to use a *cgroup driver*.
It's critical that the kubelet and the container runtime use the same cgroup
driver and are configured the same.
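If you are unsure which cgroup driver a node currently uses, looking at the init process
and at the kubelet configuration usually tells you. A minimal sketch, assuming a
systemd-based node where the kubelet reads its configuration from
`/var/lib/kubelet/config.yaml` (the default path used by kubeadm; adjust it if your setup differs):

```shell
# Print the name of PID 1; "systemd" means systemd is the init system.
ps -p 1 -o comm=

# Show the cgroup driver the kubelet is configured with, assuming the kubelet
# configuration file lives at the kubeadm default path.
sudo grep cgroupDriver /var/lib/kubelet/config.yaml
```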
There are two cgroup drivers available:

* [`cgroupfs`](#cgroupfs-cgroup-driver)
* [`systemd`](#systemd-cgroup-driver)
### cgroupfs driver {#cgroupfs-cgroup-driver}

The `cgroupfs` driver is the default cgroup driver in the kubelet. When the `cgroupfs`
driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.

The `cgroupfs` driver is **not** recommended when
[systemd](https://www.freedesktop.org/wiki/Software/systemd/) is the
init system, because systemd expects a single cgroup manager on
the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups),
use the `systemd` cgroup driver instead of `cgroupfs`.
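One way to see what "directly interfacing with the cgroup filesystem" means in practice
is to look at the pod hierarchy the kubelet creates. A small sketch, assuming a cgroup v1
node and the default `kubepods` cgroup root (paths differ on cgroup v2 or with a custom
`--cgroup-root`):

```shell
# With the cgroupfs driver, pod cgroups appear as plain directories such as
# /sys/fs/cgroup/<controller>/kubepods/...
ls /sys/fs/cgroup/cpu/kubepods/

# With the systemd driver, the same cgroups are created as systemd slices,
# for example kubepods.slice/kubepods-burstable.slice/...
ls /sys/fs/cgroup/cpu/kubepods.slice/
```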
### systemd cgroup driver {#systemd-cgroup-driver}

When [systemd](https://www.freedesktop.org/wiki/Software/systemd/) is chosen as the init
system for a Linux distribution, the init process generates and consumes a root control group
(`cgroup`) and acts as a cgroup manager.
systemd has a tight integration with cgroups and allocates a cgroup per systemd
unit. As a result, if you use `systemd` as the init system with the `cgroupfs`
driver, the system gets two different cgroup managers.
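You can see systemd's per-unit cgroup layout with its own tooling; both commands below
are read-only and ship with systemd (the `containerd` unit name is only an example):

```shell
# Show the cgroup tree that systemd manages, one branch per unit.
systemd-cgls --no-pager | head -n 30

# Show the cgroup a particular unit was placed in, e.g. the container runtime.
systemctl status containerd | grep -i cgroup
```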
Two cgroup managers result in two views of the available and in-use resources in
the system. In some cases, nodes that are configured to use `cgroupfs` for the
kubelet and container runtime, but `systemd` for the rest of the processes, become
unstable under resource pressure.

The approach to mitigate this instability is to use `systemd` as the cgroup driver for
the kubelet and the container runtime when systemd is the selected init system.
To set `systemd` as the cgroup driver, set the `cgroupDriver` option in the
[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/)
to `systemd`. For example:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver: systemd
```
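If the kubelet on the node runs as a systemd service (as it does on kubeadm-provisioned
nodes), a change to its configuration file only takes effect after the service is
restarted, for example:

```shell
# Restart the kubelet so it picks up the edited configuration file.
# The unit name "kubelet" is the usual one; adjust it if your distribution differs.
sudo systemctl restart kubelet
```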
If you configure `systemd` as the cgroup driver for the kubelet, you must also
configure `systemd` as the cgroup driver for the container runtime. Refer to
the documentation for your container runtime for instructions. For example:

* [containerd](#containerd-systemd)
* [CRI-O](#cri-o)
{{< caution >}}
Changing the cgroup driver of a node that has joined a cluster is a sensitive operation:
if the kubelet created existing Pods using one cgroup driver, switching the container
runtime to another cgroup driver causes errors when the PodSandbox for those Pods is
re-created. If you can, replace the node with one that uses the updated configuration,
or reinstall it using automation.
{{< /caution >}}
### Cgroup version 2 {#cgroup-v2}

Cgroup v2 is the next version of the cgroup Linux API. Unlike cgroup v1, there is a single
hierarchy instead of a different one for each controller.
The new version offers several improvements over cgroup v1; some of these improvements are:

- a cleaner and easier-to-use API
- safe sub-tree delegation to containers
- newer features like Pressure Stall Information (see the example below)
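Pressure Stall Information is a good illustration of what the unified hierarchy exposes:
on a cgroup v2 node with a PSI-enabled kernel, the system and each cgroup publish pressure
files you can read directly. A small sketch; the per-cgroup path below is only an example
and depends on the node's cgroup layout:

```shell
# System-wide memory pressure (kernel 4.20+ with PSI enabled).
cat /proc/pressure/memory

# Per-cgroup pressure on a cgroup v2 hierarchy; kubepods.slice is an example
# path and varies per node.
cat /sys/fs/cgroup/kubepods.slice/memory.pressure
```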
Even if the kernel supports a hybrid configuration where some controllers are managed by cgroup v1
and some others by cgroup v2, Kubernetes supports only the same cgroup version to manage all the
controllers.

If systemd doesn't use cgroup v2 by default, you can configure the system to use it by adding
`systemd.unified_cgroup_hierarchy=1` to the kernel command line.
```shell
# This example is for a Linux OS that uses the DNF package manager
# Your system might use a different method for setting the command line
# that the Linux kernel uses.
sudo dnf install -y grubby && \
  sudo grubby \
  --update-kernel=ALL \
  --args="systemd.unified_cgroup_hierarchy=1"
```
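On distributions that don't use DNF/grubby, the same kernel parameter is usually added
through the bootloader configuration instead. A hedged sketch for GRUB-based systems that
use `/etc/default/grub` (the file location and the `update-grub` helper are assumptions
that vary by distribution):

```shell
# Append the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub, then
# regenerate the GRUB configuration. Review the file before rebooting.
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
sudo update-grub
```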
If you change the command line for the kernel, you must reboot the node before your
change takes effect.

There should not be any noticeable difference in the user experience when switching to cgroup v2,
unless users are accessing the cgroup file system directly, either on the node or from within the containers.

In order to use it, cgroup v2 must be supported by the CRI runtime as well.
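To confirm which cgroup version a node actually booted with, you can check the filesystem
type mounted at `/sys/fs/cgroup`; this is a read-only check and safe to run anywhere:

```shell
# Prints "cgroup2fs" when the node uses cgroup v2, and "tmpfs" for cgroup v1.
stat -fc %T /sys/fs/cgroup/
```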
### Migrating to the `systemd` driver in kubeadm managed clusters

If you wish to migrate an existing kubeadm managed cluster to the `systemd` cgroup driver,
follow [configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
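Before migrating, it can help to see which driver the cluster-wide kubelet configuration
currently records. A sketch, assuming a recent kubeadm version that stores this in the
`kubelet-config` ConfigMap (older versions append the Kubernetes minor version to the
ConfigMap name):

```shell
# Print the cgroupDriver value from the kubelet configuration that kubeadm
# stores in the cluster; the ConfigMap name is an assumption noted above.
kubectl get configmap kubelet-config -n kube-system -o yaml | grep cgroupDriver
```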
## CRI version support {#cri-versions}
#### Configuring the `systemd` cgroup driver {#containerd-systemd}

To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set:
```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```
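After editing `/etc/containerd/config.toml`, restart containerd so the change takes
effect, for example:

```shell
# Restart containerd so it reloads /etc/containerd/config.toml
# (the unit name "containerd" is the usual one for package installs).
sudo systemctl restart containerd

# Double-check the setting you just made.
grep SystemdCgroup /etc/containerd/config.toml
```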
The `systemd` cgroup driver is recommended if you use [cgroup v2](/docs/concepts/architecture/cgroups).
{{< note >}}
If you installed containerd from a package (for example, RPM or `.deb`), you may find
that the CRI integration plugin is disabled by default.
{{< /note >}}

You can override the sandbox image by setting the following config:
```toml
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.2"
```
#### cgroup driver {#cgroup-driver}

CRI-O uses the `systemd` cgroup driver by default, which is likely to work well for you.
To switch to the `cgroupfs` cgroup driver, either edit `/etc/crio/crio.conf` or place a
drop-in configuration in `/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
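A minimal drop-in of that kind might look like the sketch below; it assumes CRI-O's
`crio.runtime` TOML table with the `cgroup_manager` and `conmon_cgroup` options, so check
the documentation of your CRI-O version for the exact keys:

```shell
# Hypothetical drop-in: switch CRI-O to the cgroupfs cgroup manager and run
# conmon in the pod's cgroup rather than a systemd unit.
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-cgroup-manager.conf
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
EOF
```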
### Mirantis Container Runtime {#mcr}

[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) is a
commercially available container runtime that was formerly known as Docker Enterprise Edition.
You can use Mirantis Container Runtime with Kubernetes using the open source
[`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) component, included with MCR.