[zh] Sync manage-resources-containers.md

pull/47161/head
windsonsea 2024-07-15 09:59:47 +08:00
parent df0f009e16
commit d8fd08688c
1 changed file with 122 additions and 42 deletions


@@ -385,11 +385,13 @@ limits you defined.
- The memory limit for the Pod or container can also apply to pages in memory backed
  volumes, such as an `emptyDir`. The kubelet tracks `tmpfs` emptyDir volumes as container
  memory use, rather than as local ephemeral storage. When using memory backed `emptyDir`,
  be sure to check the notes [below](#memory-backed-emptydir).
If a container exceeds its memory request and the node that it runs on becomes short of
@@ -431,6 +433,85 @@ the kubelet reports the resource usage of a Pod as part of the Pod
then you can retrieve Pod resource usage either directly from the
[Metrics API](/zh-cn/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
or from your monitoring tools.
### Considerations for memory backed `emptyDir` volumes {#memory-backed-emptydir}
{{< caution >}}
If you do not specify a `sizeLimit` for an `emptyDir` volume, that volume may
consume up to that pod's memory limit (`Pod.spec.containers[].resources.limits.memory`).
If you do not set a memory limit, the pod has no upper bound on memory consumption,
and can consume all available memory on the node. Kubernetes schedules pods based
on resource requests (`Pod.spec.containers[].resources.requests`) and will not
consider memory usage above the request when deciding if another pod can fit on
a given node. This can result in a denial of service and cause the OS to do
out-of-memory (OOM) handling. It is possible to create any number of `emptyDir`s
that could potentially consume all available memory on the node, making OOM
more likely.
{{< /caution >}}
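To make the caution concrete, here is a minimal sketch of a memory-backed
`emptyDir` that pairs a container memory limit with an explicit `sizeLimit`.
This example is illustrative only; the pod, container, and volume names are
placeholders, not taken from the page above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-backed-demo        # placeholder name
spec:
  containers:
  - name: app                     # placeholder container
    image: registry.k8s.io/pause:3.9
    resources:
      limits:
        memory: 1Gi               # tmpfs pages in the volume count against this limit
    volumeMounts:
    - name: scratch
      mountPath: /cache
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # back the volume with tmpfs (RAM)
      sizeLimit: 500Mi            # cap the volume well below the memory limit
```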
From the perspective of memory management, there are some similarities between
a process using memory as a work area and a memory-backed `emptyDir`. But when
using memory as a volume, such as a memory-backed `emptyDir`, there are
additional points below that you should be careful of:
* Files stored on a memory-backed volume are almost entirely managed by the
  user application. Unlike when memory is used as a work area for a process,
  you cannot rely on things like language-level garbage collection.
* The purpose of writing files to a volume is to save data or pass it between
  applications. Neither Kubernetes nor the OS may automatically delete files
  from a volume, so memory used by those files cannot be reclaimed when the
  system or the pod is under memory pressure.
* A memory-backed `emptyDir` is useful because of its performance, but memory
  is generally much smaller in size and much higher in cost than other storage
  media, such as disks or SSDs. Using large amounts of memory for `emptyDir`
  volumes may affect the normal operation of your pod or of the whole node,
  so it should be used carefully.
If you are administering a cluster or namespace, you can also set a
[ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/) that limits memory use;
you may also want to define a [LimitRange](/zh-cn/docs/concepts/policy/limit-range/)
for additional enforcement.
If you specify a `spec.containers[].resources.limits.memory` for each Pod,
then the maximum size of an `emptyDir` volume will be the pod's memory limit.
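As one illustration (the namespace name and values here are assumptions, not
from this page), a LimitRange can supply a default container memory limit in a
namespace, which in turn bounds the maximum size of memory-backed `emptyDir`
volumes there:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-defaults           # illustrative name
  namespace: team-a               # illustrative namespace
spec:
  limits:
  - type: Container
    default:
      memory: 512Mi               # used when a container omits limits.memory
    defaultRequest:
      memory: 256Mi               # used when a container omits requests.memory
```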
As an alternative, a cluster administrator can enforce size limits for
`emptyDir` volumes in new Pods using a policy mechanism such as a
[ValidatingAdmissionPolicy](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy).
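As a sketch of what such a policy might look like, assuming a cluster where
`ValidatingAdmissionPolicy` is generally available (`admissionregistration.k8s.io/v1`),
the CEL expression below simply requires every `emptyDir` volume in a new Pod
to declare a `sizeLimit`. The policy name is illustrative, and a separate
ValidatingAdmissionPolicyBinding is still needed to put the policy into effect:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: emptydir-size-limits      # illustrative name
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  validations:
  - expression: >-
      !has(object.spec.volumes) ||
      object.spec.volumes.all(v, !has(v.emptyDir) || has(v.emptyDir.sizeLimit))
    message: every emptyDir volume must set a sizeLimit
```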
## Local ephemeral storage
@@ -449,8 +530,7 @@ mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
Nodes often also have local ephemeral storage, backed by locally-attached
writeable devices or, sometimes, by RAM. "Ephemeral" means that there is no
long-term guarantee about durability of the data that is stored.
Pods typically use local ephemeral storage for scratch space such as buffers,
and for saving logs. The kubelet can provide this kind of storage space to
Pods that use local ephemeral storage, allowing them to use
@@ -549,6 +629,7 @@ the kubelet writes logs to its configured log directory (`/var/log` by default
Your cluster nodes can, of course, contain many other filesystems that are not
used for Kubernetes.
{{% /tab %}}
{{% tab name="Two filesystems" %}}
@@ -558,7 +639,15 @@ be the root filesystem.
You have a filesystem on the node that you're using for ephemeral data that
comes from running Pods: logs, and `emptyDir` volumes. You can use this filesystem
for other data (for example: system logs not related to Kubernetes); this
filesystem can also be the root filesystem.

The kubelet also writes
[node-level container logs](/zh-cn/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
into the first filesystem, and treats these similarly to ephemeral local storage.
You also use a separate filesystem, backed by a different logical storage device.
In this configuration, the directory where you tell the kubelet to place
container image layers and writeable layers is on this second filesystem.

The first filesystem does not hold any image layers or writeable layers.

Your node can have as many other filesystems, not used for Kubernetes,
as you like.
@@ -624,11 +703,6 @@ container of a Pod can specify either or both of the following:
### Setting requests and limits for local ephemeral storage {#setting-requests-and-limits-for-local-ephemeral-storage}

Each container in a Pod can specify one or both of the following:
* `spec.containers[].resources.limits.ephemeral-storage`
* `spec.containers[].resources.requests.ephemeral-storage`
Limits and requests for `ephemeral-storage` are measured in byte quantities.
You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following quantities all represent roughly the same value:
@@ -664,7 +744,6 @@
In the following example, the Pod has two containers. Each container has a request
of 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local
ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral
storage, and a limit of 8GiB of local ephemeral storage. 500Mi of that limit could
be consumed by the `emptyDir` volume.
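A manifest sketch consistent with that description follows; the pod, container,
and image names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                  # placeholder name
spec:
  containers:
  - name: app                     # placeholder container
    image: images.my-company.example/app:v4             # placeholder image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  - name: log-aggregator          # placeholder container
    image: images.my-company.example/log-aggregator:v6  # placeholder image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  volumes:
  - name: ephemeral
    emptyDir:
      sizeLimit: 500Mi            # the 500Mi referred to above
```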
@@ -713,7 +792,6 @@ For more information, see
### How Pods with ephemeral-storage requests are scheduled {#how-pods-with-ephemeral-storage-requests-are-scheduled}

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. The scheduler ensures that the sum of the resource requests of the
scheduled containers is less than the capacity of the node.
@@ -735,15 +813,6 @@ kubelet measures storage use in:

### Ephemeral storage consumption management {#resource-emphemeralstorage-consumption}
@@ -756,6 +825,16 @@ for eviction.
If a Pod is using more ephemeral storage than you allow it to, the kubelet
sets an eviction signal that triggers Pod eviction.
For container-level isolation, if a container's writable layer and log
usage exceeds its storage limit, the kubelet marks the Pod for eviction.

For pod-level isolation the kubelet works out an overall Pod storage limit by
summing the limits for the containers in that Pod. In this case, if the sum of
the local ephemeral storage usage from all containers and also the Pod's `emptyDir`
volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod
for eviction.
@@ -882,7 +961,14 @@ If you want to use project quotas, you should:

If you want to use project quotas, you should:

* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true`
  [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
  using the `featureGates` field in the
  [kubelet configuration](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/)
  or the `--feature-gates` command line flag (see the configuration sketch
  after this list).
@@ -892,18 +978,7 @@ If you want to use project quotas, you should:
* Ensure that the root filesystem (or optional runtime filesystem) has
  project quotas enabled. All XFS filesystems support project quotas.
  For ext4 filesystems, you need to enable the project quota tracking
  feature while the filesystem is not mounted:

  ```shell
  # For ext4, with /dev/block-device not mounted
  sudo tune2fs -O project -Q prjquota /dev/block-device
  ```
@@ -913,6 +988,11 @@ If you want to use project quotas, you should:
* Ensure that the root filesystem (or optional runtime filesystem) is
  mounted with project quotas enabled. For both XFS and ext4fs, the
  mount option is named `prjquota`.
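As referenced in the first step of this list, a minimal sketch of enabling the
feature gate through the kubelet configuration file; the surrounding settings
are omitted, so treat this as illustrative rather than a complete kubelet
configuration:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  LocalStorageCapacityIsolationFSQuotaMonitoring: true   # enable quota-based monitoring
```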