[zh-cn] Resync manage-resources-containers page
parent e28e43b5e5
commit d03daa8485
@@ -14,7 +14,7 @@ title: Resource Management for Pods and Containers
 content_type: concept
 weight: 40
 feature:
-  title: Automatic binpacking
+  title: Automatic bin packing
   description: >
     Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.
     Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
@@ -155,7 +155,7 @@ Kubernetes API 服务器读取和修改的对象。
 For each container, you can specify resource limits and requests,
 including the following:
 -->
-## Pod 和 容器的资源请求和约束
+## Pod 和 容器的资源请求和约束 {#resource-requests-and-limits-of-pod-and-container}
 
 针对每个容器,你都可以指定其资源约束和请求,包括如下选项:
 
@@ -170,7 +170,6 @@ including the following:
 Although you can only specify requests and limits for individual containers,
 it is also useful to think about the overall resource requests and limits for
 a Pod.
-A
 For a particular resource, a *Pod resource request/limit* is the sum of the
 resource requests/limits of that type for each container in the Pod.
 -->
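To make that sum concrete, here is a minimal sketch; the Pod name, container names, and images are invented for illustration. Two containers that each request 250m of CPU and 64Mi of memory give the Pod an overall resource request of 500m CPU and 128Mi memory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-sum-demo              # illustrative name
spec:
  containers:
  - name: app
    image: registry.example/app:v1    # illustrative image
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
  - name: sidecar
    image: registry.example/sidecar:v1
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
  # Pod-level request: cpu 500m, memory 128Mi (the sum over both containers)
```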
@@ -184,7 +183,7 @@ resource requests/limits of that type for each container in the Pod.
 Limits and requests for CPU resources are measured in *cpu* units.
 In Kubernetes, 1 CPU unit is equivalent to **1 physical CPU core**,
 or **1 virtual core**, depending on whether the node is a physical host
 or a virtual machine running inside a physical machine.
 -->
 ## Kubernetes 中的资源单位 {#resource-units-in-kubernetes}
@@ -316,7 +315,7 @@ a Pod on a node if the capacity check fails. This protects against a resource
 shortage on a node when resource usage later increases, for example, during a
 daily peak in request rate.
 -->
-## 带资源请求的 Pod 如何调度
+## 带资源请求的 Pod 如何调度 {#how-pods-with-resource-limits-are-run}
 
 当你创建一个 Pod 时,Kubernetes 调度程序将为 Pod 选择一个节点。
 每个节点对每种资源类型都有一个容量上限:可为 Pod 提供的 CPU 和内存量。
@@ -328,7 +327,7 @@ daily peak in request rate.
 <!--
 ## How Kubernetes applies resource requests and limits {#how-pods-with-resource-limits-are-run}
 
-When the kubelet starts a container of a Pod, the kubelet passes that container's
+When the kubelet starts a container as part of a Pod, the kubelet passes that container's
 requests and limits for memory and CPU to the container runtime.
 
 On Linux, the container runtime typically configures
@@ -337,7 +336,7 @@ limits you defined.
 -->
 ## Kubernetes 应用资源请求与约束的方式 {#how-pods-with-resource-limits-are-run}
 
-当 kubelet 启动 Pod 中的容器时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
+当 kubelet 将容器作为 Pod 的一部分启动时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
 
 在 Linux 系统上,容器运行时通常会配置内核
 {{< glossary_tooltip text="CGroups" term_id="cgroup" >}},负责应用并实施所定义的请求。
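For readers who want to see this mapping on a node, the sketch below inspects the cgroup v2 files a runtime typically writes. The cgroup path is illustrative and varies by container runtime and cgroup driver:

```bash
# Illustrative cgroup v2 path; the slice/scope names depend on your runtime and cgroup driver.
CG=/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/<pod>.slice/<container>.scope

cat "$CG/cpu.max"      # CPU limit as "quota period", e.g. "50000 100000" for a 500m limit
cat "$CG/memory.max"   # memory limit in bytes
cat "$CG/cpu.weight"   # relative scheduling weight derived from the CPU request
```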
@@ -414,7 +413,7 @@ are available in your cluster, then Pod resource usage can be retrieved either
 from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
 directly or from your monitoring tools.
 -->
-## 监控计算和内存资源用量
+## 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage}
 
 kubelet 会将 Pod 的资源使用情况作为 Pod
 [`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
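With the metrics-server add-on installed, the same figures are available from the command line; the Pod name below is illustrative:

```bash
kubectl top pod frontend                  # CPU and memory usage for one Pod
kubectl top pod --namespace=kube-system   # usage for all Pods in a namespace
```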
@@ -433,7 +432,7 @@ locally-attached writeable devices or, sometimes, by RAM.
 
 Pods use ephemeral local storage for scratch space, caching, and for logs.
 The kubelet can provide scratch space to Pods using local ephemeral storage to
-mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
+mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
 {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
 -->
 ## 本地临时存储 {#local-ephemeral-storage}
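A minimal sketch of that `emptyDir` arrangement; the names, image, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo                 # illustrative name
spec:
  containers:
  - name: app
    image: registry.example/app:v1   # illustrative image
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch        # illustrative mount path
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 500Mi               # optional cap on the scratch space
```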
@@ -490,7 +489,7 @@ The kubelet also writes
 [node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
 and treats these similarly to ephemeral local storage.
 -->
-### 本地临时性存储的配置
+### 本地临时性存储的配置 {#configurations-for-local-ephemeral-storage}
 
 Kubernetes 有两种方式支持节点上配置本地临时性存储:
 
@@ -606,12 +605,12 @@ container of a Pod can specify either or both of the following:
 * `spec.containers[].resources.limits.ephemeral-storage`
 * `spec.containers[].resources.requests.ephemeral-storage`
 
-Limits and requests for `ephemeral-storage` are measured in quantities.
+Limits and requests for `ephemeral-storage` are measured in byte quantities.
 You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
-E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
-Mi, Ki. For example, the following represent roughly the same value:
+E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following quantities all represent roughly the same value:
 -->
-### 为本地临时性存储设置请求和约束值
+### 为本地临时性存储设置请求和约束值 {#setting-requests-and-limits-for-local-ephemeral-storage}
 
 你可以使用 `ephemeral-storage` 来管理本地临时性存储。
 Pod 中的每个容器可以设置以下属性:
@@ -620,7 +619,7 @@ Pod 中的每个容器可以设置以下属性:
 * `spec.containers[].resources.requests.ephemeral-storage`
 
 `ephemeral-storage` 的请求和约束值是按量纲计量的。你可以使用一般整数或者定点数字
-加上下面的后缀来表达存储量:E、P、T、G、M、K。
+加上下面的后缀来表达存储量:E、P、T、G、M、k。
 你也可以使用对应的 2 的幂级数来表达:Ei、Pi、Ti、Gi、Mi、Ki。
 例如,下面的表达式所表达的大致是同一个值:
 
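As a worked illustration of the suffix arithmetic: `123Mi` is 123 × 2^20 = 128,974,848 bytes, roughly the 129,000,000 bytes that `129M` or `129e6` denote, so the following spellings are all approximately equal:

```
128974848   # plain integer: 128,974,848 bytes
129e6       # scientific notation: 129,000,000 bytes
129M        # decimal suffix, M = 10^6: 129,000,000 bytes
123Mi       # binary suffix, Mi = 2^20: 128,974,848 bytes
```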
@@ -641,8 +640,8 @@ or 400 megabytes (`400M`).
 <!--
 In the following example, the Pod has two containers. Each container has a request of
 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
-storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a
-limit of 8GiB of local ephemeral storage.
+storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
+a limit of 8GiB of local ephemeral storage.
 -->
 
 在下面的例子中,Pod 包含两个容器。每个容器请求 2 GiB 大小的本地临时性存储。
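The manifest the paragraph describes is elided from this diff; a sketch consistent with the description follows. The Pod name, container names, and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                                        # illustrative name
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4             # illustrative image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6  # illustrative image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
  # Pod totals: requests 4Gi, limits 8Gi of local ephemeral storage
```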
@@ -692,7 +691,7 @@ For more information, see
 The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node.
 -->
 
-### 带临时性存储的 Pods 的调度行为
+### 带临时性存储的 Pods 的调度行为 {#how-pods-with-ephemeral-storage-requests-are-scheduled}
 
 当你创建一个 Pod 时,Kubernetes 调度器会为 Pod 选择一个节点来运行之。
 每个节点都有一个本地临时性存储的上限,是其可提供给 Pods 使用的总量。
@@ -870,6 +869,7 @@ If you want to use project quotas, you should:
 has project quotas enabled. All XFS filesystems support project quotas.
 For ext4 filesystems, you need to enable the project quota tracking feature
 while the filesystem is not mounted.
+
 ```bash
 # For ext4, with /dev/block-device not mounted
 sudo tune2fs -O project -Q prjquota /dev/block-device
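To confirm the change took effect, `tune2fs -l` lists the enabled filesystem features; a hedged check (the device name is the same placeholder as above):

```bash
sudo tune2fs -l /dev/block-device | grep -E 'Filesystem features|quota'
```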
@@ -962,11 +962,11 @@ asynchronously by the kubelet.
 kubelet 会异步地对 `status.allocatable` 字段执行自动更新操作,使之包含新资源。
 
 <!--
-Because the scheduler uses the node `status.allocatable` value when
-evaluating Pod fitness, the shceduler only takes account of the new value after
-the asynchronous update. There may be a short delay between patching the
+Because the scheduler uses the node's `status.allocatable` value when
+evaluating Pod fitness, the scheduler only takes account of the new value after
+that asynchronous update. There may be a short delay between patching the
 node capacity with a new resource and the time when the first Pod that requests
-the resource to be scheduled on that node.
+the resource can be scheduled on that node.
 -->
 由于调度器在评估 Pod 是否适合在某节点上执行时会使用节点的 `status.allocatable` 值,
 调度器只会考虑异步更新之后的新值。
@@ -997,6 +997,7 @@ http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
 In the preceding request, `~1` is the encoding for the character `/`
 in the patch path. The operation path value in JSON-Patch is interpreted as a
 JSON-Pointer. For more details, see
 [IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
+{{< /note >}}
 -->
 {{< note >}}
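Assembled into a complete request, the advertisement described above looks roughly like this; the API server address and node name come from the example URL in this hunk, and `example.com/dongle` is an illustrative extended resource:

```bash
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```

Note the `~1` in the path: it is the JSON-Pointer escape for the `/` inside the resource name `example.com/dongle`.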
@@ -1013,14 +1014,14 @@ Cluster-level extended resources are not tied to nodes. They are usually managed
 by scheduler extenders, which handle the resource consumption and resource quota.
 
 You can specify the extended resources that are handled by scheduler extenders
-in [scheduler policy configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 -->
 #### 集群层面的扩展资源 {#cluster-level-extended-resources}
 
 集群层面的扩展资源并不绑定到具体节点。
 它们通常由调度器扩展程序(Scheduler Extenders)管理,这些程序处理资源消耗和资源配额。
 
-你可以在[调度器策略配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+你可以在[调度器配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 中指定由调度器扩展程序处理的扩展资源。
 
 <!--
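A sketch of the kind of extender stanza that configuration accepts; the extender endpoint and resource name are illustrative, and `ignoredByScheduler: true` tells kube-scheduler not to check that resource itself:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
extenders:
- urlPrefix: "http://extender.example/scheduler"   # illustrative extender endpoint
  filterVerb: filter
  managedResources:
  - name: example.com/foo                          # illustrative cluster-level extended resource
    ignoredByScheduler: true                       # leave capacity checks to the extender
```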
@@ -1158,12 +1159,12 @@ to limit the number of PIDs that a given Pod can consume. See
 If the scheduler cannot find any node where a Pod can fit, the Pod remains
 unscheduled until a place can be found. An
 [Event](/docs/reference/kubernetes-api/cluster-resources/event-v1/) is produced
-each time the scheduler fails to find a place for the Pod, You can use `kubectl`
+each time the scheduler fails to find a place for the Pod. You can use `kubectl`
 to view the events for a Pod; for example:
 -->
-## 疑难解答
+## 疑难解答 {#troubleshooting}
 
-### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling`
+### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling` {#my-pods-are-pending-with-event-message-failedscheduling}
 
 如果调度器找不到该 Pod 可以匹配的任何节点,则该 Pod 将保持未被调度状态,
 直到找到一个可以被调度到的位置。每当调度器找不到 Pod 可以调度的地方时,
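The commands behind that advice, with an illustrative Pod name:

```bash
kubectl describe pod frontend                                      # full description, including recent events
kubectl get events --field-selector involvedObject.name=frontend   # just the events for that Pod
```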
@@ -1240,22 +1241,22 @@ Allocated resources:
   (Total limits may be over 100 percent, i.e., overcommitted.)
   CPU Requests  CPU Limits  Memory Requests  Memory Limits
   ------------  ----------  ---------------  -------------
-  680m (34%)    400m (20%)  920Mi (12%)      1070Mi (14%)
+  680m (34%)    400m (20%)  920Mi (11%)      1070Mi (13%)
 ```
 
 <!--
-In the preceding output, you can see that if a Pod requests more than 1120m
-CPUs or 6.23Gi of memory, it will not fit on the node.
+In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
+or more than 6.23Gi of memory, that Pod will not fit on the node.
 
 By looking at the "Pods" section, you can see which Pods are taking up space on
 the node.
 -->
-在上面的输出中,你可以看到如果 Pod 请求超过 1120m CPU 或者 6.23Gi 内存,节点将无法满足。
+在上面的输出中,你可以看到如果 Pod 请求超过 1.120 CPU 或者 6.23Gi 内存,节点将无法满足。
 
 通过查看 "Pods" 部分,你将看到哪些 Pod 占用了节点上的资源。
 
 <!--
-The amount of resources available to Pods is less than the node capacity, because
+The amount of resources available to Pods is less than the node capacity because
 system daemons use a portion of the available resources. Within the Kubernetes API,
 each Node has a `.status.allocatable` field
 (see [NodeStatus](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeStatus)
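The fit check behind that statement is plain subtraction against the node's allocatable figures: a new Pod fits only if, for every resource, its request is at most the allocatable amount minus the summed requests of the Pods already on the node. You can inspect those figures directly; the node name here is illustrative:

```bash
kubectl describe node e2e-test-node-pool-4lw4 | grep -A 6 'Allocated resources'
# fits only if: request(new Pod) <= allocatable - sum(requests of existing Pods)
```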
@@ -1286,7 +1287,7 @@ prevent one team from using so much of any resource that this over-use affects o
 
 You should also consider what access you grant to that namespace:
 **full** write access to a namespace allows someone with that access to remove any
-resource, include a configured ResourceQuota.
+resource, including a configured ResourceQuota.
 -->
 你可以配置[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)功能特性以限制每个名字空间可以使用的资源总量。
 当某名字空间中存在 ResourceQuota 时,Kubernetes 会在该名字空间中的对象强制实施配额。
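A minimal ResourceQuota sketch for that setup; the name, namespace, and amounts are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota     # illustrative name
  namespace: team-a       # illustrative namespace
spec:
  hard:
    requests.cpu: "10"    # total CPU all Pods in team-a may request
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```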
@@ -1305,7 +1306,7 @@ whether a Container is being killed because it is hitting a resource limit, call
 `kubectl describe pod` on the Pod of interest:
 -->
 
-### 我的容器被终止了
+### 我的容器被终止了 {#my-container-is-terminated}
 
 你的容器可能因为资源紧张而被终止。要查看容器是否因为遇到资源限制而被杀死,
 请针对相关的 Pod 执行 `kubectl describe pod`:
@@ -1331,18 +1332,19 @@ Message:
 IP:             10.244.2.75
 Containers:
   simmemleak:
-    Image:  saadali/simmemleak
+    Image:  saadali/simmemleak:latest
     Limits:
-      cpu:                      100m
-      memory:                   50Mi
-    State:                      Running
-      Started:                  Tue, 07 Jul 2015 12:54:41 -0700
-    Last Termination State:     Terminated
-      Exit Code:                1
-      Started:                  Fri, 07 Jul 2015 12:54:30 -0700
-      Finished:                 Fri, 07 Jul 2015 12:54:33 -0700
-    Ready:                      False
-    Restart Count:              5
+      cpu:            100m
+      memory:         50Mi
+    State:            Running
+      Started:        Tue, 07 Jul 2019 12:54:41 -0700
+    Last State:       Terminated
+      Reason:         OOMKilled
+      Exit Code:      137
+      Started:        Fri, 07 Jul 2019 12:54:30 -0700
+      Finished:       Fri, 07 Jul 2019 12:54:33 -0700
+    Ready:            False
+    Restart Count:    5
 Conditions:
   Type      Status
   Ready     False
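When `Last State` reports `OOMKilled` as above, the terminated instance's own logs are usually the quickest clue; the Pod name here is illustrative:

```bash
kubectl logs simmemleak-hra99 --previous   # logs from the previous, OOM-killed container
```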
@@ -1381,13 +1383,13 @@ memory limit (and possibly request) for that container.
 * Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
 * Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
-* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
+* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
 * Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 -->
 * 获取[分配内存资源给容器和 Pod ](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
 * 获取[分配 CPU 资源给容器和 Pod ](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
 * 阅读 API 参考中 [Container](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   和其[资源请求](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)定义。
-* 阅读 XFS 中[配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html)的文档
+* 阅读 XFS 中[配额](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F)的文档
 * 进一步阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)