Merge pull request #39369 from Zhuzhenghao/zh/volume-snapshots

[zh] Sync small changes in pages storage-capacity, storage-limits, volume-snapshots

commit 9dc5989ff5
@@ -1,7 +1,7 @@
 ---
 title: 存储容量
 content_type: concept
-weight: 70
+weight: 80
 ---
 <!--
 reviewers:
@@ -12,7 +12,7 @@ reviewers:
 - pohly
 title: Storage Capacity
 content_type: concept
-weight: 70
+weight: 80
 -->

 <!-- overview -->
@@ -18,17 +18,17 @@ weight: 90

 <!-- overview -->

-<!--
+<!--
 This page describes the maximum number of volumes that can be attached
-to a Node for various cloud providers.
+to a Node for various cloud providers.
 -->
 此页面描述了各个云供应商可关联至一个节点的最大卷数。

-<!--
+<!--
 Cloud providers like Google, Amazon, and Microsoft typically have a limit on
 how many volumes can be attached to a Node. It is important for Kubernetes to
 respect those limits. Otherwise, Pods scheduled on a Node could get stuck
-waiting for volumes to attach.
+waiting for volumes to attach.
 -->
 谷歌、亚马逊和微软等云供应商通常对可以关联到节点的卷数量进行限制。
 Kubernetes 需要尊重这些限制。 否则,在节点上调度的 Pod 可能会卡住去等待卷的关联。
@ -390,12 +390,12 @@ the `VolumeSnapshotContent` that corresponds to the `VolumeSnapshot`.
|
|||
到对应 `VolumeSnapshot` 的 `VolumeSnapshotContent` 中。
|
||||
|
||||
<!--
|
||||
For pre-provisioned snapshots, `spec.SourceVolumeMode` needs to be populated
|
||||
For pre-provisioned snapshots, `spec.sourceVolumeMode` needs to be populated
|
||||
by the cluster administrator.
|
||||
|
||||
An example `VolumeSnapshotContent` resource with this feature enabled would look like:
|
||||
-->
|
||||
对于预制备的快照,`spec.SourceVolumeMode` 需要由集群管理员填充。
|
||||
对于预制备的快照,`spec.sourceVolumeMode` 需要由集群管理员填充。
|
||||
|
||||
启用此特性的 `VolumeSnapshotContent` 资源示例如下所示:
|
||||
|
||||
|
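The hunk ends just before the example manifest it announces. For context, a pre-provisioned `VolumeSnapshotContent` with `spec.sourceVolumeMode` populated by the administrator would look roughly like the sketch below; the driver name, snapshot handle, and object names are placeholder values, not taken from this diff:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: new-snapshot-content-test        # placeholder name
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io            # placeholder CSI driver name
  source:
    # pre-existing snapshot ID on the storage backend (placeholder)
    snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
  # the field this PR corrects to lowerCamelCase; set by the cluster admin
  sourceVolumeMode: Filesystem
  volumeSnapshotRef:
    name: new-snapshot-test              # placeholder VolumeSnapshot name
    namespace: default
```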