Merge pull request #34235 from tengqm/zh-34221-task-3

[zh] Resync tasks pages (task-3)
pull/34256/head
Kubernetes Prow Robot 2022-06-12 20:16:10 -07:00 committed by GitHub
commit b97980db91
9 changed files with 60 additions and 64 deletions

View File

@@ -128,8 +128,8 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations
`identity` | None | N/A | N/A | N/A | Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written.
`secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review.
`aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented.
`aescbc` | AES-CBC with PKCS#7 padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks.
`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with PKCS#7 padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/)
`aescbc` | AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks.
`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/)
Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider
is the first provider, the first key is used for encryption.
@@ -140,8 +140,8 @@ is the first provider, the first key is used for encryption.
`identity` | 无 | N/A | N/A | N/A | 不加密写入的资源。当设置为第一个 provider 时,资源将在新值写入时被解密。
`secretbox` | XSalsa20 和 Poly1305 | 强 | 更快 | 32字节 | 较新的标准,在需要高度评审的环境中可能不被接受。
`aesgcm` | 带有随机数的 AES-GCM | 必须每写入 200k 次后轮换 | 最快 | 16, 24 或者 32字节 | 建议不要使用,除非实施了自动密钥轮换方案。
`aescbc` | 填充 PKCS#7 的 AES-CBC | 弱 | 快 | 32字节 | 由于 CBC 容易受到密文填塞攻击Padding Oracle Attack不推荐使用。
`kms` | 使用信封加密方案:数据使用带有 PKCS#7 填充的 AES-CBC 通过数据加密密钥DEK加密DEK 根据 Key Management ServiceKMS中的配置通过密钥加密密钥Key Encryption KeysKEK加密 | 最强 | 快 | 32字节 | 建议使用第三方工具进行密钥管理。为每个加密生成新的 DEK并由用户控制 KEK 轮换来简化密钥轮换。[配置 KMS 提供程序](/zh/docs/tasks/administer-cluster/kms-provider/)
`aescbc` | 填充 [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) 的 AES-CBC | 弱 | 快 | 32字节 | 由于 CBC 容易受到密文填塞攻击Padding Oracle Attack不推荐使用。
`kms` | 使用信封加密方案:数据使用带有 [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) 填充的 AES-CBC 通过数据加密密钥DEK加密DEK 根据 Key Management ServiceKMS中的配置通过密钥加密密钥Key Encryption KeysKEK加密 | 最强 | 快 | 32字节 | 建议使用第三方工具进行密钥管理。为每个加密生成新的 DEK并由用户控制 KEK 轮换来简化密钥轮换。[配置 KMS 提供程序](/zh/docs/tasks/administer-cluster/kms-provider/)
每个 provider 都支持多个密钥 - 在解密时会按顺序使用密钥,如果是第一个 provider则第一个密钥用于加密。
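<!--
As an illustrative sketch (the object name and key material below are placeholders,
not taken from this page), a provider list in an `EncryptionConfiguration` might look like:
-->
作为示意(以下密钥内容为占位符,并非本页给出的配置),`EncryptionConfiguration`
中的 provider 列表大致如下:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # 列表中第一个 provider 用于加密新写入的数据
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64 编码的 32 字节随机密钥>
      # identity 放在最后,用于读取尚未加密的存量数据
      - identity: {}
```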

View File

@@ -4,11 +4,9 @@ content_type: task
weight: 10
---
<!--
---
title: Configuring a cgroup driver
content_type: task
weight: 10
---
-->
<!-- overview -->
@@ -26,6 +24,7 @@ You should be familiar with the Kubernetes
[container runtime requirements](/docs/setup/production-environment/container-runtimes).
-->
你应该熟悉 Kubernetes 的[容器运行时需求](/zh/docs/setup/production-environment/container-runtimes)。
<!-- steps -->
<!--
@@ -122,18 +121,13 @@ Kubeadm 对集群所有的节点,使用相同的 `KubeletConfiguration`。
## 使用 `cgroupfs` 驱动
<!--
As this guide explains using the `cgroupfs` driver with kubeadm is not recommended.
To continue using `cgroupfs` and to prevent `kubeadm upgrade` from modifying the
To use `cgroupfs` and to prevent `kubeadm upgrade` from modifying the
`KubeletConfiguration` cgroup driver on existing setups, you must be explicit
about its value. This applies to a case where you do not wish future versions
of kubeadm to apply the `systemd` driver by default.
-->
正如本指南阐述的:不推荐与 kubeadm 一起使用 `cgroupfs` 驱动。
如仍需使用 `cgroupfs`
且要防止 `kubeadm upgrade` 修改现有系统中 `KubeletConfiguration` 的 cgroup 驱动,
你必须显式声明它的值。
如仍需使用 `cgroupfs` 且要防止 `kubeadm upgrade` 修改现有系统中
`KubeletConfiguration` 的 cgroup 驱动,你必须显式声明它的值。
此方法应对的场景为:在将来某个版本的 kubeadm 中,你不想使用默认的 `systemd` 驱动。
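<!--
For reference, a minimal `KubeletConfiguration` that pins the driver explicitly
could look like this (a sketch, not the full kubeadm configuration):
-->
作为参考,显式声明该驱动的最小 `KubeletConfiguration` 大致如下
(仅为示意,并非完整的 kubeadm 配置):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 显式声明 cgroup 驱动,防止 kubeadm upgrade 将其改为默认值
cgroupDriver: cgroupfs
```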
<!--

View File

@@ -150,11 +150,11 @@ on these resources, the two values must be the same.
<!--
Here's a manifest for a Pod that has one container. The container manifest
specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the
minimum and maximum CPU constraints imposed by the LimitRange.
minimum and maximum CPU constraints imposed by the LimitRange for this namespace.
-->
以下为某个仅包含一个容器的 Pod 的清单。
该容器声明了 CPU 请求 500 millicpu 和 CPU 限制 800 millicpu。
这些参数满足了 LimitRange 对象规定的 CPU 最小和最大限制。
这些参数满足了 LimitRange 对象为此名字空间规定的 CPU 最小和最大限制。
{{< codenew file="admin/resource/cpu-constraints-pod.yaml" >}}
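<!--
The referenced manifest is roughly as follows (a sketch; the authoritative
content is the file in the examples repository):
-->
上面引用的清单内容大致如下(仅为示意,以示例仓库中的文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo
spec:
  containers:
    - name: constraints-cpu-demo-ctr
      image: nginx
      resources:
        requests:
          cpu: "500m"   # 满足 LimitRange 的最小值约束
        limits:
          cpu: "800m"   # 不超过 LimitRange 的最大值约束
```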
@@ -336,7 +336,7 @@ from the LimitRange for this namespace.
设置[默认的 CPU 请求和限制](/zh/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)。
<!--
At this point, your Pod might be running or it might not be running. Recall that a prerequisite for
At this point, your Pod may or may not be running. Recall that a prerequisite for
this task is that your Nodes must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU,
then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu.
If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request.
@@ -348,6 +348,8 @@ Delete your Pod:
如果你的每个节点都只有 1 CPU那将没有一个节点拥有足够的可分配 CPU 来满足 800 millicpu 的请求。
如果你在用的节点恰好有 2 CPU那么有可能有足够的 CPU 来满足 800 millicpu 的请求。
删除你的 Pod
```
kubectl delete pod constraints-cpu-demo-4 --namespace=constraints-cpu-example
```

View File

@@ -213,9 +213,10 @@ kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml
```
<!--
View the specification of the Pod that you created:
View the [specification](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
of the Pod that you created:
-->
查看所你创建的 Pod 的规约:
查看你所创建的 Pod 的[规约](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
```
kubectl get pod default-cpu-demo-3 --output=yaml --namespace=default-cpu-example
```

View File

@@ -3,7 +3,8 @@ title: 配置命名空间的最小和最大内存约束
content_type: task
weight: 30
description: >-
为命名空间定义一个有效的内存资源限制范围,在该命名空间中每个新创建的 Pod 的内存资源都在设置的范围内。
为命名空间定义一个有效的内存资源限制范围,在该命名空间中每个新创建的
Pod 的内存资源都在设置的范围内。
---
<!--
@@ -16,14 +17,17 @@ weight: 30
<!--
This page shows how to set minimum and maximum values for memory used by containers
running in a namespace. You specify minimum and maximum memory values in a
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core)
running in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
You specify minimum and maximum memory values in a
[LimitRange](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
object. If a Pod does not meet the constraints imposed by the LimitRange,
it cannot be created in the namespace.
-->
本页介绍如何设置在命名空间中运行的容器使用的内存的最小值和最大值。 你可以在
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core)
对象中指定最小和最大内存值。如果 Pod 不满足 LimitRange 施加的约束,则无法在命名空间中创建它。
本页介绍如何设置在{{< glossary_tooltip text="名字空间" term_id="namespace" >}}
中运行的容器所使用的内存的最小值和最大值。你可以在
[LimitRange](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
对象中指定最小和最大内存值。如果 Pod 不满足 LimitRange 施加的约束,
则无法在名字空间中创建它。
## {{% heading "prerequisites" %}}
@@ -32,7 +36,6 @@ it cannot be created in the namespace.
<!--
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 1 GiB of memory available for Pods.
-->
在你的集群里你必须要有创建命名空间的权限。
@@ -108,7 +111,7 @@ Now whenever you define a Pod within the constraints-mem-example namespace, Kube
performs these steps:
* If any container in that Pod does not specify its own memory request and limit,
the control plane assig nthe default memory request and limit to that container.
the control plane assigns the default memory request and limit to that container.
* Verify that every container in that Pod requests at least 500 MiB of memory.
@@ -122,9 +125,7 @@ minimum and maximum memory constraints imposed by the LimitRange.
现在,每当在 constraints-mem-example 命名空间中创建 Pod 时Kubernetes 就会执行下面的步骤:
* 如果 Pod 中的任何容器未声明自己的内存请求和限制,控制面将为该容器设置默认的内存请求和限制。
* 确保该 Pod 中的每个容器的内存请求至少 500 MiB。
* 确保该 Pod 中每个容器内存请求不大于 1 GiB。
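<!--
A LimitRange imposing these constraints could be sketched as follows
(the object name is illustrative):
-->
施加上述约束的 LimitRange 大致如下(对象名称仅为示意):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr
spec:
  limits:
    - min:
        memory: 500Mi   # 每个容器的内存请求至少 500 MiB
      max:
        memory: 1Gi     # 每个容器的内存限制至多 1 GiB
      type: Container
```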
以下为包含一个容器的 Pod 清单。该容器声明了 600 MiB 的内存请求和 800 MiB 的内存限制,

View File

@@ -3,7 +3,7 @@ title: 为命名空间配置默认的内存请求和限制
content_type: task
weight: 10
description: >-
为命名空间定义默认的内存资源限制,在该命名空间中每个新建的 Pod 都会被配置上内存资源限制。
为命名空间定义默认的内存资源限制,这样在该命名空间中每个新建的 Pod 都会被配置上内存资源限制。
---
<!--
@@ -113,7 +113,7 @@ does not specify a memory request and limit.
<!--
Create the Pod.
-->
创建 Pod
创建 Pod
```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example
```

View File

@@ -1,5 +1,5 @@
---
title: "从 dockershim 迁移"
title: 从 dockershim 迁移
weight: 10
content_type: task
no_list: true
@@ -31,11 +31,11 @@ to understand the problem better.
<!--
Dockershim was removed from Kubernetes with the release of v1.24.
If you use Docker via dockershim as your container runtime, and wish to upgrade to v1.24,
If you use Docker Engine via dockershim as your container runtime, and wish to upgrade to v1.24,
it is recommended that you either migrate to another runtime or find an alternative means to obtain Docker Engine support.
-->
Dockershim 在 Kubernetes v1.24 版本已经被移除。
如果你集群内是通过 dockershim 使用 Docker 作为容器运行时,并希望 Kubernetes 升级到 v1.24
如果你集群内是通过 dockershim 使用 Docker Engine 作为容器运行时,并希望 Kubernetes 升级到 v1.24
建议你迁移到其他容器运行时或使用其他方法以获得 Docker 引擎支持。
<!--
@@ -58,19 +58,20 @@ configuration.
These tasks will help you to migrate:
* [Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
* [Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/)
* [Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/)
* [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)
-->
你的集群中可以有不止一种类型的节点,尽管这不是常见的情况。
下面这些任务可以帮助你完成迁移:
* [检查弃用 Dockershim 对你的影响](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
* [dockershim 迁移](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/)
* [检查弃用 Dockershim 是否影响到你](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
* [将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/)
* [从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)
<!--
## {{% heading "whatsnext" %}}
<!--
* Check out [container runtimes](/docs/setup/production-environment/container-runtimes/)
to understand your options for a container runtime.
* There is a
@@ -80,11 +81,9 @@ These tasks will help you to migrate:
you can [report an issue](https://github.com/kubernetes/kubernetes/issues/new/choose)
to the Kubernetes project.
-->
## 下一步
* 查看[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)了解可选的容器运行时。
* [GitHub 问题](https://github.com/kubernetes/kubernetes/issues/106917)跟踪有关 dockershim 的弃用和删除的讨论。
* [GitHub 问题](https://github.com/kubernetes/kubernetes/issues/106917)跟踪有关
dockershim 的弃用和删除的讨论。
* 如果你发现与 dockershim 迁移相关的缺陷或其他技术问题,
可以在 Kubernetes 项目[报告问题](https://github.com/kubernetes/kubernetes/issues/new/choose)。

View File

@@ -69,18 +69,15 @@ node-3 Ready v1.16.15 docker://19.3.1
```
<!--
If your runtime shows as Docker Engine, you still might not be affected by the
removal of dockershim in Kubernetes 1.24. [Check the runtime
removal of dockershim in Kubernetes v1.24. [Check the runtime
endpoint](#which-endpoint) to see if you use dockershim. If you don't use
dockershim, you aren't affected.
For containerd, the output is similar to this:
-->
如果你的容器运行时显示为 Docker Engine你仍然可能不会被 1.24 中 dockershim 的移除所影响。
如果你的容器运行时显示为 Docker Engine你仍然可能不会被 v1.24 中 dockershim 的移除所影响。
通过[检查运行时端点](#which-endpoint),可以查看你是否在使用 dockershim。
如果你没有使用 dockershim你就不会被影响。
看下是否是使用的 dockershim如何是 dockershim 则会受到在 Kubernetes 1.24 中移除 dockershim 的影响。
反之则不会受到影响。
对于 containerd输出类似于这样
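<!--
For example (node name and version numbers below are illustrative):
-->
例如(下面的节点名与版本号仅为示意):

```
NAME     STATUS   VERSION   CONTAINER-RUNTIME
node-1   Ready    v1.24.0   containerd://1.6.4
```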
@@ -110,27 +107,24 @@ The container runtime talks to the kubelet over a Unix socket using the [CRI
protocol](/docs/concepts/architecture/cri/), which is based on the gRPC
framework. The kubelet acts as a client, and the runtime acts as the server.
In some cases, you might find it useful to know which socket your nodes use. For
example, with the removal of dockershim in Kubernetes 1.24 and later, you might
example, with the removal of dockershim in Kubernetes v1.24 and later, you might
want to know whether you use Docker Engine with dockershim.
-->
容器运行时使用 Unix Socket 与 kubelet 通信,这一通信使用基于 gRPC 框架的
[CRI 协议](/zh/docs/concepts/architecture/cri/)。kubelet 扮演客户端,运行时扮演服务器端。
在某些情况下,你可能想知道你的节点使用的是哪个 socket。
如若集群是 Kubernetes 1.24 及以后的版本,
如若集群是 Kubernetes v1.24 及以后的版本,
或许你想知道当前运行时是否是使用 dockershim 的 Docker Engine。
{{< note >}}
<!--
{{<note>}}
If you currently use Docker Engine in your nodes with `cri-dockerd`, you aren't
affected by the dockershim removal.
{{</note>}}
-->
{{<note>}}
如果你的节点在通过 `cri-dockerd` 使用 Docker Engine
那么集群不会受到 Kubernetes 移除 dockershim 的影响。
{{</note>}}
{{< /note >}}
<!--
You can check which socket you use by checking the kubelet configuration on your
@@ -175,12 +169,14 @@ nodes.
如若套接字为 `unix:///run/containerd/containerd.sock`,则它是 containerd 的端点。
<!--
If you use Docker Engine with the dockershim, [migrate to a different runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/),
If you want to change the Container Runtime on a Node from Docker Engine to containerd,
you can find out more information on [migrating from Docker Engine to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/),
or, if you want to continue using Docker Engine in v1.24 and later, migrate to a
CRI-compatible adapter like [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
-->
如果你通过 dockershim 来使用 Docker Engine,可在
如果你将节点上的容器运行时从 Docker Engine 改变为 containerd,可在
[迁移到不同的运行时](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/)
找到更多信息。或者,如果你想在 Kubernetes v1.24 及以后的版本仍使用 Docker Engine
可以安装 CRI 兼容的适配器实现,如 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)。

View File

@@ -39,12 +39,13 @@ each {{< glossary_tooltip term_id="node" >}} to use it.
[动态 kubelet 配置](https://github.com/kubernetes/enhancements/issues/281)
允许你通过部署并配置{{< glossary_tooltip text="节点" term_id="node" >}}使用的
{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}}
达到更改正在运行的 Kubernetes 集群的 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 配置的目的。
达到更改正在运行的 Kubernetes 集群的 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
配置的目的。
<!--
Please find documentation on this feature in [earlier versions of documentation](https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/).
-->
请在 [早期版本的文档](https://v1-23.docs.kubernetes.io/zh/docs/tasks/administer-cluster/reconfigure-kubelet/) 中找有关此功能的文档。
请在[早期版本的文档](https://v1-23.docs.kubernetes.io/zh/docs/tasks/administer-cluster/reconfigure-kubelet/)中找有关此功能的文档。
<!--
## Migrating from using Dynamic Kubelet Configuration
@@ -52,7 +53,7 @@ Please find documentation on this feature in [earlier versions of documentation]
There is no recommended replacement for this feature that works generically
across various Kubernetes distributions. If you are using managed Kubernetes
version, please consult with the vendor hosting Kubernetes for the best
practices for customizing your Kubernetes. If you are using KubeAdm, refer to
practices for customizing your Kubernetes. If you are using `kubeadm`, refer to
[Configuring each kubelet in your cluster using kubeadm](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/).
-->
## 不再使用动态 Kubelet 配置
@@ -60,8 +61,8 @@ practices for customizing your Kubernetes. If you are using KubeAdm, refer to
对于这个功能,没有能够通用于各种 Kubernetes 发行版的推荐替代方案。
如果你使用托管 Kubernetes 版本,
请咨询托管 Kubernetes 的供应商,以获得自定义 Kubernetes 的最佳实践。
如果你使用的是 `kubeadm`请参考
[使用 kubeadm 配置集群中的每个 kubelet](/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration/)。
如果你使用的是 `kubeadm`
请参考[使用 kubeadm 配置集群中的各个 kubelet](/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration/)。
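<!--
With kubeadm, per-node kubelet overrides are typically passed via the
`KUBELET_EXTRA_ARGS` environment variable (a sketch; the file path depends
on your Linux distribution, and the label below is only an example):
-->
使用 kubeadm 时,节点级的 kubelet 覆盖配置通常通过 `KUBELET_EXTRA_ARGS`
环境变量传入(示意;文件路径取决于你的 Linux 发行版,下面的标签仅为举例):

```
# /etc/default/kubelet(Debian 系)或 /etc/sysconfig/kubelet(RPM 系)
KUBELET_EXTRA_ARGS="--node-labels=example.com/zone=zone-a"
```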
<!--
In order to migrate off the Dynamic Kubelet Configuration feature, the
@@ -84,4 +85,6 @@ satisfying the [Kubernetes version skew policy](/releases/version-skew-policy/).
-->
请注意,从 v1.24 开始 `DynamicKubeletConfig` 特性门控无法在 kubelet 上设置,
因为不会生效。在 v1.26 之前 API 服务器和控制器管理器不会移除该特性门控。
这是专为控制面支持有旧版本 kubelet 的节点以及满足 [Kubernetes 版本偏差策略](/releases/version-skew-policy/)。
这是专为控制面支持有旧版本 kubelet 的节点以及满足
[Kubernetes 版本偏差策略](/zh-cn/releases/version-skew-policy/)。