[zh] adjust storage-limits.md

Signed-off-by: xin.li <xin.li@daocloud.io>
pull/33700/head
xin.li 2022-05-15 23:39:39 +08:00
parent bf46c15bdf
commit a2ba9c25d9
1 changed file with 10 additions and 17 deletions


@@ -16,22 +16,21 @@ content_type: concept
<!-- overview -->

<!--
This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.
-->
此页面描述了各个云供应商可关联至一个节点的最大卷数。
<!--
Cloud providers like Google, Amazon, and Microsoft typically have a limit on
how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.
-->
谷歌、亚马逊和微软等云供应商通常对可以关联到节点的卷数量进行限制。
Kubernetes 需要尊重这些限制。否则,在节点上调度的 Pod 可能会卡住,等待卷的关联。
<!-- body -->

<!--
@@ -40,7 +39,6 @@ Kubernetes 需要尊重这些限制。 否则,在节点上调度的 Pod 可能
The Kubernetes scheduler has default limits on the number of volumes
that can be attached to a Node:
-->
## Kubernetes 的默认限制

Kubernetes 调度器对可以关联至一个节点的卷数有默认限制:
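上面的默认限制可以用一个简化的判定来理解:只有当节点上已关联的卷数加上 Pod 新申请的卷数不超过上限时,Pod 才能调度到该节点。下面是一个仅作示意的 shell 草稿(其中 39、38、2 等取值均为示例,并非真实默认值或真实实现):

```shell
# 仅作示意:调度器侧卷数检查的简化模型(并非真实实现)。
# 只有当已关联卷数加上新请求卷数不超过上限时,Pod 才能调度到该节点。
limit=39        # 假设的节点卷数上限(示例值)
attached=38     # 节点上已关联的卷数(示例值)
requested=2     # Pod 需要新关联的卷数(示例值)

if [ $((attached + requested)) -le "$limit" ]; then
  echo "pod fits"
else
  echo "pod does not fit"
fi
```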
@@ -73,20 +71,18 @@ the limit you set.
The limit applies to the entire cluster, so it affects all Nodes.
-->
## 自定义限制

可以通过设置 `KUBE_MAX_PD_VOLS` 环境变量的值来设置这些限制,然后再启动调度器。
CSI 驱动程序可能具有不同的过程,关于如何自定义其限制请参阅相关文档。

如果设置的限制高于默认限制,请谨慎使用。请参阅云供应商的文档以确保节点可支持设置的限制。

此限制应用于整个集群,所以它会影响所有节点。
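作为示意,下面给出在启动调度器之前设置该环境变量的一种做法。其中取值 40 与 kubeconfig 路径均为假设的示例,并非推荐值,请按集群实际情况调整:

```shell
# 仅作示意:在启动 kube-scheduler 之前设置 KUBE_MAX_PD_VOLS。
# 取值 40 与 kubeconfig 路径均为示例,并非推荐配置。
export KUBE_MAX_PD_VOLS=40
kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf
```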
<!--
## Dynamic volume limits
-->
## 动态卷限制

{{< feature-state state="stable" for_k8s_version="v1.17" >}}
@@ -99,7 +95,6 @@ Dynamic volume limits are supported for following volume types.
- Azure Disk
- CSI
-->
以下卷类型支持动态卷限制。

- Amazon EBS
@@ -111,7 +106,6 @@ Dynamic volume limits are supported for following volume types.
For volumes managed by in-tree volume plugins, Kubernetes automatically determines the Node
type and enforces the appropriate maximum number of volumes for the node. For example:
-->
对于由内建插件管理的卷,Kubernetes 会自动确定节点类型并确保节点上可关联的卷数目合规。例如:
<!--
@@ -132,7 +126,6 @@ Refer to the [CSI specifications](https://github.com/container-storage-interface
* For volumes managed by in-tree plugins that have been migrated to a CSI driver, the maximum number of volumes will be the one reported by the CSI driver.
-->
* 在 <a href="https://cloud.google.com/compute/">Google Compute Engine</a> 环境中,
  [根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将 127 个卷关联到节点。
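对于由 CSI 驱动上报的限制,可以通过 CSINode 对象查看各驱动为节点上报的可关联卷数上限(记录在 `.spec.drivers[].allocatable.count` 字段中)。下面的命令仅作示意,其中 `node-1` 为假设的节点名,需在真实集群中执行:

```shell
# 仅作示意:查看 CSI 驱动为节点 node-1 上报的可关联卷数上限。
# node-1 为示例节点名,请替换为集群中的真实节点。
kubectl get csinode node-1 -o jsonpath='{.spec.drivers[*].allocatable.count}'
```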