Merge pull request #41265 from windsonsea/conrunt

[zh] sync container-runtimes.md and front matters in tutorials
pull/41273/head
Kubernetes Prow Robot 2023-05-22 18:36:20 -07:00 committed by GitHub
commit 5ad241025c
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
13 changed files with 81 additions and 31 deletions


@@ -21,7 +21,7 @@ You need to install a
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
into each node in the cluster so that Pods can run there. This page outlines
what is involved and describes related tasks for setting up nodes.
-->
你需要在集群内每个节点上安装一个
{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}
以使 Pod 可以运行在上面。本文概述了所涉及的内容并描述了与节点设置相关的任务。
@@ -145,7 +145,7 @@ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables ne
```
<!--
## cgroup drivers
On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
are used to constrain resources that are allocated to processes.
@@ -155,12 +155,12 @@ are used to constrain resources that are allocated to processes.
在 Linux 上,{{< glossary_tooltip text="控制组(CGroup)" term_id="cgroup" >}}用于限制分配给进程的资源。
<!--
Both the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
underlying container runtime need to interface with control groups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/)
and set resources such as cpu/memory requests and limits. To interface with control
groups, the kubelet and the container runtime need to use a *cgroup driver*.
It's critical that the kubelet and the container runtime use the same cgroup
driver and are configured the same.
-->
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 和底层容器运行时都需要对接控制组来强制执行
@@ -182,16 +182,15 @@ There are two cgroup drivers available:
<!--
### cgroupfs driver {#cgroupfs-cgroup-driver}
The `cgroupfs` driver is the [default cgroup driver in the kubelet](/docs/reference/config-api/kubelet-config.v1beta1/).
When the `cgroupfs` driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.
The `cgroupfs` driver is **not** recommended when
[systemd](https://www.freedesktop.org/wiki/Software/systemd/) is the
init system because systemd expects a single cgroup manager on
the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups), use the `systemd`
cgroup driver instead of `cgroupfs`.
-->
### cgroupfs 驱动 {#cgroupfs-cgroup-driver}
@@ -237,6 +236,7 @@ the kubelet and the container runtime when systemd is the selected init system.
当 systemd 是选定的初始化系统时,缓解这个不稳定问题的方法是针对 kubelet 和容器运行时将
`systemd` 用作 cgroup 驱动。
<!--
To set `systemd` as the cgroup driver, edit the
[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/)
@@ -252,13 +252,22 @@ kind: KubeletConfiguration
cgroupDriver: systemd
```
{{< note >}}
<!--
Starting with v1.22 and later, when creating a cluster with kubeadm, if the user does not set
the `cgroupDriver` field under `KubeletConfiguration`, kubeadm defaults it to `systemd`.
-->
从 v1.22 开始,在使用 kubeadm 创建集群时,如果用户没有在
`KubeletConfiguration` 下设置 `cgroupDriver` 字段,kubeadm 默认使用 `systemd`。
{{< /note >}}
<!--
If you configure `systemd` as the cgroup driver for the kubelet, you must also
configure `systemd` as the cgroup driver for the container runtime. Refer to
the documentation for your container runtime for instructions. For example:
-->
如果你将 `systemd` 配置为 kubelet 的 cgroup 驱动,你也必须将 `systemd`
配置为容器运行时的 cgroup 驱动。参阅容器运行时文档,了解指示说明。例如:
* [containerd](#containerd-systemd)
* [CRI-O](#cri-o)
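For containerd, as an illustration, the systemd cgroup driver is enabled through its `config.toml`. The fragment below is a sketch; the exact plugin table path can differ between containerd versions, so check your runtime's documentation:

```toml
# /etc/containerd/config.toml (fragment)
# Enables the systemd cgroup driver for the runc runtime handler;
# restart containerd after editing so the change takes effect.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```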
@@ -311,14 +320,14 @@ using the (deprecated) v1alpha2 API instead.
你的容器运行时必须至少支持 v1alpha2 版本的容器运行时接口。
Kubernetes [从 1.26 版本开始](/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#cri-api-removal)**仅适用于**
v1 版本的容器运行时(CRI)API。早期版本默认为 v1 版本,
但是如果容器运行时不支持 v1 版本的 API,
则 kubelet 会回退到使用(已弃用的)v1alpha2 版本的 API。
<!--
## Container runtimes
-->
## 容器运行时
{{% thirdparty-content %}}
@@ -331,9 +340,9 @@ This section outlines the necessary steps to use containerd as CRI runtime.
本节概述了使用 containerd 作为 CRI 运行时的必要步骤。
<!--
To install containerd on your system, follow the instructions on
[getting started with containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
Return to this step once you've created a valid `config.toml` configuration file.
-->
要在系统上安装 containerd,请按照[开始使用 containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md)
的说明进行操作。创建有效的 `config.toml` 配置文件后,返回此步骤。
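As a sketch of what following those instructions typically involves (file paths and the use of systemd are assumptions that vary by distribution):

```shell
# Generate containerd's default configuration as a starting point for
# config.toml, then restart the service so the new settings take effect.
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
```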
@@ -606,4 +615,3 @@ As well as a container runtime, your cluster will need a working
[network plugin](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).
-->
除了容器运行时,你的集群还需要有效的[网络插件](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)。


@@ -2,3 +2,7 @@
title: "配置"
weight: 30
---
<!--
title: "Configuration"
weight: 30
-->


@@ -2,6 +2,10 @@
title: 创建集群
weight: 10
---
<!--
title: Create a Cluster
weight: 10
-->
<!--
Learn about Kubernetes {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}} and create a simple cluster using Minikube.


@@ -2,3 +2,7 @@
title: 部署应用
weight: 20
---
<!--
title: Deploy an App
weight: 20
-->


@@ -2,3 +2,7 @@
title: 了解你的应用
weight: 30
---
<!--
title: Explore Your App
weight: 30
-->


@@ -2,3 +2,7 @@
title: 公开地暴露你的应用
weight: 40
---
<!--
title: Expose Your App Publicly
weight: 40
-->


@@ -1,4 +1,8 @@
---
title: 扩缩你的应用
weight: 50
---
<!--
title: Scale Your App
weight: 50
-->


@@ -2,3 +2,7 @@
title: 更新你的应用
weight: 60
---
<!--
title: Update Your App
weight: 60
-->


@@ -2,4 +2,7 @@
title: "安全"
weight: 40
---
<!--
title: "Security"
weight: 40
-->


@@ -1,4 +1,8 @@
---
title: "Service"
weight: 70
---
<!--
title: "Services"
weight: 70
-->


@@ -2,3 +2,7 @@
title: "有状态的应用"
weight: 50
---
<!--
title: "Stateful Applications"
weight: 50
-->


@@ -1,4 +1,8 @@
---
title: "无状态应用"
weight: 40
---
<!--
title: "Stateless Applications"
weight: 40
-->


@@ -28,11 +28,11 @@ external IP address.
* Configure `kubectl` to communicate with your Kubernetes API server. For instructions, see the
documentation for your cloud provider.
-->
* 安装 [kubectl](/zh-cn/docs/tasks/tools/)。
* 使用 Google Kubernetes Engine 或 Amazon Web Services 等云供应商创建 Kubernetes 集群。
  本教程创建了一个[外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/),
  需要云供应商。
* 配置 `kubectl` 与 Kubernetes API 服务器通信。有关说明,请参阅云供应商文档。
## {{% heading "objectives" %}}
@@ -77,7 +77,7 @@ external IP address.
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}
对象和一个关联的
{{< glossary_tooltip term_id="replica-set" text="ReplicaSet" >}} 对象。
ReplicaSet 有五个 {{< glossary_tooltip text="Pod" term_id="pod" >}},
每个都运行 Hello World 应用程序。
<!--
@@ -182,7 +182,7 @@ is 8080 and the `NodePort` is 32377.
is 8080 and the `NodePort` is 32377.
-->
记下服务公开的外部 IP 地址(`LoadBalancer Ingress`)。
在本例中,外部 IP 地址是 104.198.205.71。还要注意 `Port` 和 `NodePort` 的值。
在本例中,`Port` 是 8080,`NodePort` 是 32377。
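Putting the values above together, the relevant `kubectl` output looks roughly like this (the Service name `my-service` is illustrative, not taken from this tutorial's manifests; the addresses are the ones quoted above):

```shell
# Inspect the Service exposed by the tutorial; kubectl is assumed
# to be configured against the cluster created earlier.
kubectl describe services my-service
# Relevant lines of output:
#   LoadBalancer Ingress:   104.198.205.71
#   Port:                   <unset>  8080/TCP
#   NodePort:               <unset>  32377/TCP
```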
@@ -276,4 +276,3 @@ Learn more about
[connecting applications with services](/docs/tutorials/services/connect-applications-service/).
-->
进一步了解[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)。