[zh] sync memory-default-namespace.md and reconfigure-kubelet.md

Signed-off-by: song <tinysong1226@gmail.com>
pull/33831/head
song 2022-05-19 22:31:17 +08:00
parent 17199b8ae2
commit 1928ca2e9f
2 changed files with 131 additions and 668 deletions


@ -2,35 +2,55 @@
title: Configure Default Memory Requests and Limits for a Namespace
content_type: task
weight: 10
description: >-
  Define a default memory resource limit for a namespace, so that every new Pod
  in that namespace has a memory resource limit configured.
---
<!-- overview -->
This page shows how to configure default memory requests and limits for a
{{< glossary_tooltip text="namespace" term_id="namespace" >}}.

A Kubernetes cluster can be divided into namespaces. Once you have a namespace that
has a default memory
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
and you then try to create a Pod with a container that does not specify its own memory
limit, then the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} assigns the default
memory limit to that container.

Kubernetes assigns a default memory request under certain conditions that are explained
later in this topic.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.

Each node in your cluster must have at least 2 GiB of memory.
<!-- steps -->
@ -52,12 +72,14 @@ kubectl create namespace default-mem-example
## Create a LimitRange and a Pod

Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}.
The manifest specifies a default memory request and a default memory limit.
{{< codenew file="admin/resource/memory-defaults.yaml" >}}
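In case you are reading this diff without the example files at hand, the referenced
memory-defaults.yaml looks roughly like this (a sketch matching the 256Mi/512Mi
defaults described on this page; the linked file is authoritative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:            # default memory limit applied to containers
      memory: 512Mi
    defaultRequest:     # default memory request applied to containers
      memory: 256Mi
    type: Container
```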
@ -71,19 +93,20 @@ kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --n
Now if you create a Pod in the default-mem-example namespace, and any container
within that Pod does not specify its own values for memory request and memory limit,
then the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
applies default values: a memory request of 256MiB and a memory limit of 512MiB.

Here's an example manifest for a Pod that has one container. The container
does not specify a memory request and limit.
{{< codenew file="admin/resource/memory-defaults-pod.yaml" >}}
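A sketch of such a Pod (the container name and image are illustrative; the linked
memory-defaults-pod.yaml is authoritative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo
spec:
  containers:
  - name: default-mem-demo-ctr   # no resources section, so the defaults apply
    image: nginx
```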
@ -106,7 +129,7 @@ kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example
The output shows that the Pod's container has a memory request of 256 MiB and
a memory limit of 512 MiB. These are the default values specified by the LimitRange.
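The relevant fragment of that `kubectl get pod ... --output=yaml` output would look
something like this, reflecting the LimitRange defaults:

```yaml
resources:
  limits:
    memory: 512Mi
  requests:
    memory: 256Mi
```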
@ -134,14 +157,14 @@ kubectl delete pod default-mem-demo --namespace=default-mem-example
## What if you specify a container's limit, but not its request?

Here's a manifest for a Pod that has one container. The container
specifies a memory limit, but not a request:
{{< codenew file="admin/resource/memory-defaults-pod-2.yaml" >}}
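A sketch of such a Pod; the 1Gi limit is an assumed example value, and the linked
memory-defaults-pod-2.yaml is authoritative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo-2
spec:
  containers:
  - name: default-mem-demo-2-ctr
    image: nginx
    resources:
      limits:
        memory: "1Gi"    # only the limit is set; no request
```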
@ -164,8 +187,8 @@ kubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example
The output shows that the container's memory request is set to match its memory limit.
Notice that the container was not assigned the default memory request value of 256Mi.
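Assuming the 1Gi limit sketched above, the relevant output fragment would look like:

```yaml
resources:
  limits:
    memory: 1Gi
  requests:
    memory: 1Gi    # copied from the limit, not the 256Mi default
```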
@ -178,15 +201,15 @@ resources:
## What if you specify a container's request, but not its limit?

Here's a manifest for a Pod that has one container. The container
specifies a memory request, but not a limit:
{{< codenew file="admin/resource/memory-defaults-pod-3.yaml" >}}
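A sketch of such a Pod; the 128Mi request is an assumed example value, and the linked
memory-defaults-pod-3.yaml is authoritative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo-3
spec:
  containers:
  - name: default-mem-demo-3-ctr
    image: nginx
    resources:
      requests:
        memory: "128Mi"    # only the request is set; no limit
```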
@ -209,12 +232,12 @@ kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example
The output shows that the container's memory request is set to the value specified in the
container's manifest. The container is limited to use no more than 512MiB of
memory, which matches the default memory limit for the namespace.
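Assuming the 128Mi request sketched above, the relevant output fragment would look like:

```yaml
resources:
  limits:
    memory: 512Mi    # the namespace default limit
  requests:
    memory: 128Mi    # the request declared in the Pod manifest (assumed value)
```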
@ -227,27 +250,45 @@ resources:
## Motivation for default memory limits and requests

If your namespace has a memory {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}}
configured, it is helpful to have a default value in place for memory limit.
Here are three of the restrictions that a resource quota imposes on a namespace:

* For every Pod that runs in the namespace, the Pod and each of its containers must have a memory limit.
  (If you specify a memory limit for every container in a Pod, Kubernetes can infer the Pod-level memory
  limit by adding up the limits for its containers.)
* Memory limits apply a resource reservation on the node where the Pod in question is scheduled.
  The total amount of memory reserved for all Pods in the namespace must not exceed a specified limit.
* The total amount of memory actually used by all Pods in the namespace must also not exceed a specified limit.
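For illustration, a memory ResourceQuota imposing such restrictions might look like the
following; the object name and values here are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota                 # hypothetical name
  namespace: default-mem-example
spec:
  hard:
    requests.memory: 1Gi    # total memory reserved by all Pods must stay under 1Gi
    limits.memory: 2Gi      # total memory limits across all Pods must stay under 2Gi
```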
When you add a LimitRange:

If any Pod in that namespace that includes a container does not specify its own memory limit,
the control plane applies the default memory limit to that container, and the Pod can be
allowed to run in a namespace that is restricted by a memory ResourceQuota.
## Clean up


@ -1,6 +1,7 @@
---
title: Reconfigure a Node's Kubelet in a Live Cluster
content_type: task
min-kubernetes-server-version: v1.11
---
@ -18,648 +19,69 @@ content_type: task
{{< caution >}}
The [Dynamic Kubelet Configuration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/281-dynamic-kubelet-configuration)
feature is deprecated in 1.22 and removed in 1.24.
Please switch to alternative means of distributing configuration to the Nodes of your cluster.
{{< /caution >}}
[Dynamic Kubelet Configuration](https://github.com/kubernetes/enhancements/issues/281)
allowed you to change the configuration of each
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} in a running Kubernetes cluster,
by deploying a {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} and configuring
each {{< glossary_tooltip term_id="node" >}} to use it.
{{< warning >}}
All kubelet configuration parameters can be changed dynamically,
but this is unsafe for some parameters. Before deciding to change a parameter
dynamically, you need a strong understanding of how that change will affect your
cluster's behavior. Always carefully test configuration changes on a small set
of nodes before rolling them out cluster-wide. Advice on configuring specific
fields is available in the inline
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/).
{{< /warning >}}

Please find documentation on this feature in the
[earlier versions of the documentation](https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/).

## {{% heading "prerequisites" %}}
You need to have a Kubernetes cluster.
You also need `kubectl`, [installed](/docs/tasks/tools/#kubectl) and configured to communicate with your cluster.
Make sure that you are using a version of `kubectl` that is
[compatible](/releases/version-skew-policy/) with your cluster.

{{< version-check >}}

## Migrating from using Dynamic Kubelet Configuration

There is no recommended replacement for this feature that works generically
across various Kubernetes distributions. If you are using a managed Kubernetes
offering, consult the vendor hosting your Kubernetes for the best practices for
customizing your cluster. If you are using kubeadm, refer to
[Configuring each kubelet in your cluster using kubeadm](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/).
Some of the examples below use the command line tool
[jq](https://stedolan.github.io/jq/). You do not need `jq` to complete the task,
because there are manual alternatives.

For each node that you're reconfiguring, you must set the kubelet
`--dynamic-config-dir` flag to a writable directory.

In order to migrate off the Dynamic Kubelet Configuration feature, an alternative
mechanism should be used to distribute kubelet configuration files. To apply an
updated configuration, the config file must be updated and the kubelet restarted.
See [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
for information.

<!-- steps -->
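For example, on a node whose kubelet runs under systemd (an assumption; adapt the
paths and service manager to your distribution), applying an updated file could look like:

```bash
# Copy the updated kubelet config file into place (path is illustrative),
# then restart the kubelet so it picks up the new configuration.
sudo cp kubelet-config.yaml /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet
```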
Please note, the `DynamicKubeletConfig` feature gate cannot be set on a kubelet
starting v1.24 as it has no effect. However, the feature gate is not removed
from the API server or the controller manager before v1.26. This is designed for
the control plane to support nodes with older versions of kubelets and for
satisfying the [Kubernetes version skew policy](/releases/version-skew-policy/).

## Reconfiguring the kubelet on a running node in your cluster

### Basic workflow overview
The basic workflow for configuring a kubelet in a live cluster is as follows:

1. Write a YAML or JSON configuration file containing the kubelet's configuration.
2. Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
3. Update the kubelet's corresponding Node object to use this ConfigMap.
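For step 1, a minimal configuration file might look like the sketch below;
`eventRecordQPS` is just one example field (the same one edited later in this page):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Example field only; any unspecified fields keep their version defaults.
eventRecordQPS: 10
```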
Each kubelet watches a configuration reference on its respective Node object.
When this reference changes, the kubelet downloads the new configuration,
updates a local reference to refer to the file, and exits.
For the feature to work correctly, you must be running an OS-level service
manager (such as systemd), which will restart the kubelet if it exits. When the
kubelet is restarted, it will begin using the new configuration.
The new configuration completely overrides the configuration provided by `--config`,
and is overridden by command-line flags. Unspecified values in the new configuration
will receive default values appropriate to the configuration version
(e.g. `kubelet.config.k8s.io/v1beta1`), unless overridden by flags.
The status of the Node's kubelet configuration is reported via
`Node.Status.Config`. Once you have updated a Node to use the new
ConfigMap, you can observe this status to confirm that the Node is using the
intended configuration.
This document describes editing Nodes using `kubectl edit`.
There are other ways to modify a Node's spec, including `kubectl patch`, for
example, which facilitate scripted workflows.
This document only describes a single Node consuming each ConfigMap. Keep in
mind that it is also valid for multiple Nodes to consume the same ConfigMap.
{{< warning >}}
While it is *possible* to change the configuration by
updating the ConfigMap in-place, this causes all kubelets configured with
that ConfigMap to update simultaneously. It is much safer to treat ConfigMaps
as immutable by convention, aided by `kubectl`'s `--append-hash` option,
and incrementally roll out updates to `Node.Spec.ConfigSource`.
{{< /warning >}}
### Automatic RBAC rules for Node Authorizer

Previously, you were required to manually create RBAC rules
to allow Nodes to access their assigned ConfigMaps. The Node Authorizer now
automatically configures these rules.
### Generating a file that contains the current configuration

The Dynamic Kubelet Configuration feature allows you to provide an override for
the entire configuration object, rather than a per-field overlay. This is a
simpler model that makes it easier to trace the source of configuration values
and debug issues. The compromise, however, is that you must start with knowledge
of the existing configuration to ensure that you only change the fields you
intend to change.
The kubelet loads settings from its configuration file, but you can set command
line flags to override the configuration in the file. This means that if you
only know the contents of the configuration file, and you don't know the
command line overrides, then you do not know the running configuration either.
Because you need to know the running configuration in order to override it,
you can fetch the running configuration from the kubelet. You can generate a
config file containing a Node's current configuration by accessing the kubelet's
`configz` endpoint, through `kubectl proxy`. The next section explains how to
do this.
{{< caution >}}
The kubelet's `configz` endpoint is there to help with debugging, and is not
a stable part of kubelet behavior.
Do not rely on the behavior of this endpoint for production scenarios or for
use with automated tools.
{{< /caution >}}
For more information on configuring the kubelet via a configuration file, see
[Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file).
#### Generate the configuration file
{{< note >}}
The steps below use the `jq` command to streamline working with JSON.
To follow the tasks as written, you need to have `jq` installed. You can
adapt the steps if you prefer to extract the `kubeletconfig` subobject manually.
{{< /note >}}
1. Choose a Node to reconfigure. In this example, the name of this Node is
   referred to as `NODE_NAME`.

2. Start the kubectl proxy in the background using the following command:
```shell
kubectl proxy --port=8001 &
```
3. Run the following command to download and unpack the configuration from the
   `configz` endpoint. The command is long, so be careful when copying and
   pasting. **If you use zsh**, note that common zsh configurations add backslashes
   to escape the opening and closing curly braces around the variable name in the URL.
   For example: `${NODE_NAME}` will be rewritten as `$\{NODE_NAME\}` during the paste.
   You must remove the backslashes before running the command, or the command will fail.
```bash
NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME}
```
{{< note >}}
You need to manually add the `kind` and `apiVersion` to the downloaded
object, because they are not reported by the `configz` endpoint.
{{< /note >}}
#### Edit the configuration file

Using a text editor, change one of the parameters in the
file generated by the previous procedure. For example, you
might edit the QPS parameter `eventRecordQPS`.
#### Push the configuration file to the control plane

Push the edited configuration file to the control plane with the
following command:
```bash
kubectl -n kube-system create configmap my-node-config \
  --from-file=kubelet=kubelet_configz_${NODE_NAME} \
  --append-hash -o yaml
```
This is an example of a valid response:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2017-09-14T20:23:33Z
  name: my-node-config-gkt4c2m4b2
  namespace: kube-system
  resourceVersion: "119980"
  selfLink: /api/v1/namespaces/kube-system/configmaps/my-node-config-gkt4c2m4b2
  uid: 946d785e-998a-11e7-a8dd-42010a800006
data:
  kubelet: |
    {...}
```
You created that ConfigMap inside the `kube-system` namespace because the kubelet
is a Kubernetes system component.
The `--append-hash` option appends a short checksum of the ConfigMap contents
to the name. This is convenient for an edit-then-push workflow, because it
automatically, yet deterministically, generates new names for new ConfigMaps.
The name that includes this generated hash is referred to as `CONFIG_MAP_NAME`
in the following examples.
#### Set the Node to use the new configuration

Edit the Node's reference to point to the new ConfigMap with the
following command:
```bash
kubectl edit node ${NODE_NAME}
```
In your text editor, add the following YAML under `spec`:
```yaml
configSource:
  configMap:
    name: CONFIG_MAP_NAME
    namespace: kube-system
    kubeletConfigKey: kubelet
```
You must specify all three of `name`, `namespace`, and `kubeletConfigKey`.
The `kubeletConfigKey` parameter shows the kubelet which key of the ConfigMap
contains its config.
#### Observe that the Node begins using the new configuration

Retrieve the Node using the `kubectl get node ${NODE_NAME} -o yaml` command and inspect
`Node.Status.Config`. The config sources corresponding to the `active`,
`assigned`, and `lastKnownGood` configurations are reported in the status.

- The `active` configuration is the version the kubelet is currently running with.
- The `assigned` configuration is the latest version the kubelet has resolved based on
  `Node.Spec.ConfigSource`.
- The `lastKnownGood` configuration is the version the kubelet will fall back to if an
  invalid config is assigned in `Node.Spec.ConfigSource`.
The `lastKnownGood` configuration might not be present if it is set to its default value,
the local config deployed with the node. The status will update `lastKnownGood` to
match a valid `assigned` config after the kubelet becomes comfortable with the config.
The details of how the kubelet determines a config should become the `lastKnownGood` are
not guaranteed by the API, but it is currently implemented as a 10-minute grace period.
You can use the following command (using `jq`) to filter down
to the config status:
```bash
kubectl get no ${NODE_NAME} -o json | jq '.status.config'
```
The following is an example response:
```json
{
  "active": {
    "configMap": {
      "kubeletConfigKey": "kubelet",
      "name": "my-node-config-9mbkccg2cc",
      "namespace": "kube-system",
      "resourceVersion": "1326",
      "uid": "705ab4f5-6393-11e8-b7cc-42010a800002"
    }
  },
  "assigned": {
    "configMap": {
      "kubeletConfigKey": "kubelet",
      "name": "my-node-config-9mbkccg2cc",
      "namespace": "kube-system",
      "resourceVersion": "1326",
      "uid": "705ab4f5-6393-11e8-b7cc-42010a800002"
    }
  },
  "lastKnownGood": {
    "configMap": {
      "kubeletConfigKey": "kubelet",
      "name": "my-node-config-9mbkccg2cc",
      "namespace": "kube-system",
      "resourceVersion": "1326",
      "uid": "705ab4f5-6393-11e8-b7cc-42010a800002"
    }
  }
}
```
If you do not have `jq`, you can look at the whole response and find `Node.Status.Config`
by eye.
If an error occurs, the kubelet reports it in the `Node.Status.Config.Error`
structure. Possible errors are listed in
[Understanding Node.Status.Config.Error messages](#understanding-node-config-status-errors).
You can search for the identical text in the kubelet log for additional details
and context about the error.
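For example, on a node where the kubelet runs under systemd (an assumption; adapt to
your setup), you could search the journal for one of the messages listed below:

```bash
# Look for the reported error text in the kubelet's journal.
journalctl -u kubelet | grep "failed to sync"
```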
#### Make more changes {#make-more-changes}

Follow the workflow above to make more changes and push them again. Each time
you push a ConfigMap with new contents, the `--append-hash` kubectl option creates
the ConfigMap with a new name. The safest rollout strategy is to first create a
new ConfigMap, and then update the Node to use the new ConfigMap.
#### Reset the Node to use its local default configuration

To reset the Node to use the configuration it was provisioned with, edit the
Node using `kubectl edit node ${NODE_NAME}` and remove the
`Node.Spec.ConfigSource` field.
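Alternatively, you can remove the field non-interactively. A sketch using a strategic
merge patch, where setting a field to null deletes it from the object:

```bash
# Setting configSource to null removes it from the Node's spec.
kubectl patch node ${NODE_NAME} -p '{"spec":{"configSource":null}}'
```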
#### Observe that the Node is using its local default configuration

After removing this subfield, `Node.Status.Config` eventually becomes
empty, since all config sources have been reset to `nil`, which indicates that
the local default config is `assigned`, `active`, and `lastKnownGood`, and no
error is reported.
<!-- discussion -->
## `kubectl patch` example

You can change a Node's configSource using several different mechanisms.
This example uses `kubectl patch`:
```bash
kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
```
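For readability, the escaped JSON in that command corresponds to the following patch
document (the same content, just reformatted):

```yaml
spec:
  configSource:
    configMap:
      name: CONFIG_MAP_NAME
      namespace: kube-system
      kubeletConfigKey: kubelet
```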
## Understanding how the kubelet checkpoints config

When a new config is assigned to the Node, the kubelet downloads and unpacks the
config payload as a set of files on the local disk. The kubelet also records metadata
that locally tracks the assigned and last-known-good config sources, so that the
kubelet knows which config to use across restarts, even if the API server becomes
unavailable. After checkpointing a config and the relevant metadata, the kubelet
exits if it detects that the assigned config has changed. When the kubelet is
restarted by the OS-level service manager (such as `systemd`), it reads the new
metadata and uses the new config.
The recorded metadata is fully resolved, meaning that it contains all necessary
information to choose a specific config version - typically a `UID` and `ResourceVersion`.
This is in contrast to `Node.Spec.ConfigSource`, where the intended config is declared
via the idempotent `namespace/name` that identifies the target ConfigMap; the kubelet
tries to use the latest version of this ConfigMap.
When you are debugging problems on a node, you can inspect the kubelet's config
metadata and checkpoints. The structure of the kubelet's checkpointing directory is:
```none
- --dynamic-config-dir (root for managing dynamic config)
  |-- meta
  |   - assigned (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the assigned config)
  |   - last-known-good (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the last-known-good config)
  |-- checkpoints
  |   - uid1 (dir for versions of object identified by uid1)
  |     - resourceVersion1 (dir for unpacked files from resourceVersion1 of object with uid1)
  |     - ...
  |   - ...
```
## Understanding `Node.Status.Config.Error` messages {#understanding-node-config-status-errors}

The following table describes error messages that can occur
when using Dynamic Kubelet Config. You can search for the identical text
in the kubelet log for additional details and context about the error.
{{< table caption = "Understanding Node.Status.Config.Error messages" >}}
Error Message | Possible Causes
:----------------| :----------------
failed to load config, see Kubelet log for details | The kubelet likely could not parse the downloaded config payload, or encountered a filesystem error attempting to load the payload from disk.
failed to validate config, see Kubelet log for details | The configuration in the payload, combined with any command-line flag overrides, and the sum of feature gates from flags, the config file, and the remote payload, was determined to be invalid by the kubelet.
invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil | Since Node.Spec.ConfigSource is validated by the API server to contain at least one non-nil subfield, this likely means that the kubelet is older than the API server and does not recognize a newer source type.
failed to sync: failed to download config, see Kubelet log for details | The kubelet could not download the config. It is possible that Node.Spec.ConfigSource could not be resolved to a concrete API object, or that network errors disrupted the download attempt. The kubelet will retry the download when in this error state.
failed to sync: internal failure, see Kubelet log for details | The kubelet encountered some internal problem and failed to update its config as a result. Examples include filesystem errors and reading objects from the internal informer cache.
internal failure, see Kubelet log for details | The kubelet encountered some internal problem while manipulating config, outside of the configuration sync loop.
{{< /table >}}
## {{% heading "whatsnext" %}}
- [Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)
  explains the supported way to configure a kubelet.
- See the reference documentation for Node, including the `configSource` field within
  the Node's [.spec](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeSpec).
- Learn more about kubelet configuration by checking the
  [`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
  reference.