[zh] Resync concepts section (3)
parent e2b3564d10
commit 4657f11dbf
@ -248,13 +248,17 @@ features:
Example:
-->
Generic ephemeral volumes are similar to `emptyDir` volumes in that they provide a
per-Pod directory for scratch data, which is usually empty after initial provisioning.
However, generic ephemeral volumes also have some additional features:

- The storage can be local or network-attached.
- Volumes can have a fixed size that Pods are not able to exceed.
- Volumes may have some initial data, depending on the driver and its parameters.
- Typical operations on volumes are supported, assuming that the driver supports them,
  including [snapshotting](/zh/docs/concepts/storage/volume-snapshots/),
  [cloning](/zh/docs/concepts/storage/volume-pvc-datasource/),
  [resizing](/zh/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), and
  [storage capacity tracking](/zh/docs/concepts/storage/storage-capacity/).

Example:
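The example manifest itself falls outside this hunk; what follows is a minimal sketch of a Pod using a generic ephemeral volume (the names `my-app`, `scratch-volume`, and `scratch-storage-class` are illustrative assumptions, not taken from this diff):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-app                  # illustrative Pod name
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: ["sleep", "1000000"]
      volumeMounts:
        - mountPath: "/scratch"
          name: scratch-volume  # mounts the ephemeral volume declared below
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:    # the PVC is created with the Pod and deleted with it
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: "scratch-storage-class"  # illustrative StorageClass
            resources:
              requests:
                storage: 1Gi
```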
@ -375,8 +379,8 @@ Pods (a Pod "pod-a" with volume "scratch" and another Pod with name
-->
This deterministic naming also introduces a potential conflict between different
Pods (a Pod "pod-a" with a volume "scratch" and another Pod named "pod" with a
volume "a-scratch" both end up with the same PVC name "pod-a-scratch"), and
between Pods and manually created PVCs.

<!--
Such conflicts are detected: a PVC is only used for an ephemeral
@ -412,7 +416,8 @@ two choices:
Enabling the GenericEphemeralVolume feature allows users who have no permission
to create PVCs to create them indirectly when creating Pods. Cluster
administrators must be aware of this. If this does not fit their security
model, they have the following choices:

<!--
- Explicitly disable the feature through the feature gate.
- Use a [Pod Security
@ -424,19 +429,19 @@ two choices:
volume.
-->
- Explicitly disable the feature through the feature gate.
- Use a [Pod Security Policy](/zh/docs/concepts/policy/pod-security-policy/)
  when the `volumes` list does not contain the `ephemeral` volume type.
  (This approach was deprecated in Kubernetes 1.21.)
- Use an [admission webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
  which rejects Pods that contain a generic ephemeral volume.

<!--
The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota) still applies, so
even if users are allowed to use this new mechanism, they cannot use
it to circumvent other policies.
-->
The normal [namespace quota for PVCs](/zh/docs/concepts/policy/resource-quotas/#storage-resource-quota)
still applies, so even if users are allowed to use this new mechanism, they
cannot use it to circumvent other policies.

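As a concrete illustration of that quota backstop, a sketch of a namespace quota that caps both the PVC count and the total requested storage might look like this (the name, namespace, and limits are illustrative assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota    # illustrative name
  namespace: team-a      # illustrative namespace
spec:
  hard:
    persistentvolumeclaims: "10"   # at most 10 PVCs in this namespace
    requests.storage: 100Gi        # total storage requested across all PVCs
```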
## {{% heading "whatsnext" %}}
@ -474,5 +479,5 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont
- For more information on the design, see the
  [Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md).
- For more information on further development of this feature, see the
  [enhancement tracking issue #1698](https://github.com/kubernetes/enhancements/issues/1698).
@ -11,6 +11,7 @@ which a pod runs: network-attached storage might not be accessible by
all nodes, or storage is local to a node to begin with.

{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="beta" >}}

This page describes how Kubernetes keeps track of storage capacity and
how the scheduler uses that information to schedule Pods onto nodes
@ -27,6 +28,7 @@ text="Container Storage Interface" term_id="csi" >}} (CSI) drivers and
Network-attached storage might not be accessible from all nodes, or storage may
be local to a node to begin with.

{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="beta" >}}

This page describes how Kubernetes keeps track of storage capacity and how the
scheduler uses that information to schedule Pods onto nodes that can access
enough storage capacity for the remaining, not yet provisioned volumes.
@ -156,64 +158,16 @@ to handle this automatically.
<!--
## Enabling storage capacity tracking

Storage capacity tracking is an *alpha feature* and only enabled when
the `CSIStorageCapacity` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) and
the `storage.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled. For details on
that, see the `--feature-gates` and `--runtime-config` [kube-apiserver
parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
Storage capacity tracking is a beta feature and enabled by default in
a Kubernetes cluster since Kubernetes 1.21. In addition to having the
feature enabled in the cluster, a CSI driver also has to support
it. Please refer to the driver's documentation for details.
-->
## Enabling storage capacity tracking

Storage capacity tracking is an *alpha feature*, and it is only enabled when the
`CSIStorageCapacity`
[feature gate](/zh/docs/reference/command-line-tools-reference/feature-gates/)
and the `storage.k8s.io/v1alpha1`
{{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled.
For details, see the `--feature-gates` and `--runtime-config`
[kube-apiserver parameters](/zh/docs/reference/command-line-tools-reference/kube-apiserver/).

<!--
A quick check
whether a Kubernetes cluster supports the feature is to list
CSIStorageCapacity objects with:
```shell
kubectl get csistoragecapacities --all-namespaces
```

If your cluster supports CSIStorageCapacity, the response is either a list of CSIStorageCapacity objects or:
```
No resources found
```
-->
A quick way to check whether a Kubernetes cluster supports this feature is to
list the CSIStorageCapacity objects with:

```shell
kubectl get csistoragecapacities --all-namespaces
```

If your cluster supports CSIStorageCapacity, the response is either a list of
CSIStorageCapacity objects or:

```
No resources found
```

<!--
If not supported, this error is printed instead:
```
error: the server doesn't have a resource type "csistoragecapacities"
```

In addition to enabling the feature in the cluster, a CSI
driver also has to
support it. Please refer to the driver's documentation for
details.
-->
If the feature is not supported, this error is printed instead:

```
error: the server doesn't have a resource type "csistoragecapacities"
```

In addition to enabling the feature in the cluster, a CSI driver also has to
support it. Please refer to the driver's documentation for details.

Storage capacity tracking is a beta feature and is enabled by default in
Kubernetes clusters since Kubernetes 1.21. In addition to enabling the feature
in the cluster, the CSI driver also has to support it. Please refer to the
driver's documentation for details.
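To make the objects being listed concrete, a CSIStorageCapacity object published by a driver looks roughly like the sketch below (using the beta `storage.k8s.io/v1beta1` API; the object name, StorageClass, and topology label are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIStorageCapacity
metadata:
  name: csisc-zone-1-fast        # illustrative name
  namespace: default
storageClassName: fast-storage   # illustrative StorageClass this capacity applies to
capacity: 100Gi                  # capacity the driver reports for this segment
nodeTopology:                    # which nodes this capacity is available to
  matchLabels:
    topology.example.com/zone: zone-1   # illustrative topology label
```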
## {{% heading "whatsnext" %}}
@ -225,6 +179,6 @@ error: the server doesn't have a resource type "csistoragecapacities"
-->
- For more information on the design, see the
  [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md).
- For more information on further development of this feature, see the
  [enhancement tracking issue #1472](https://github.com/kubernetes/enhancements/issues/1472).
- Learn about the [Kubernetes Scheduler](/zh/docs/concepts/scheduling-eviction/kube-scheduler/).
@ -5,6 +5,11 @@ weight: 30
---

<!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: Storage Classes
content_type: concept
weight: 30
@ -17,8 +22,9 @@ This document describes the concept of a StorageClass in Kubernetes. Familiarity
with [volumes](/docs/concepts/storage/volumes/) and
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
-->
This document describes the concept of a StorageClass in Kubernetes. Familiarity with
[volumes](/zh/docs/concepts/storage/volumes/) and
[persistent volumes](/zh/docs/concepts/storage/persistent-volumes) is suggested.

<!-- body -->
@ -45,7 +51,7 @@ Each StorageClass contains the fields `provisioner`, `parameters`, and
`reclaimPolicy`, which are used when a PersistentVolume belonging to the
class needs to be dynamically provisioned.
-->
## The StorageClass Resource

Each StorageClass contains the fields `provisioner`, `parameters`, and `reclaimPolicy`,
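The example manifest that accompanies these fields in the source page lies outside this hunk; a minimal StorageClass sketch showing the three fields together (the provisioner and its parameter are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                      # users request this class by name
provisioner: kubernetes.io/aws-ebs    # illustrative in-tree provisioner
parameters:
  type: gp2                           # provisioner-specific parameter
reclaimPolicy: Retain                 # what happens to dynamically provisioned PVs
allowVolumeExpansion: true
```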
@ -61,12 +67,12 @@ StorageClass 对象的命名很重要,用户使用这个命名来请求生成
When administrators create a StorageClass object, they set its name and other
parameters; once the object is created, it can no longer be updated.

<!--
Administrators can specify a default StorageClass only for PVCs that don't
request any particular class to bind to: see the
[PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
for details.
-->
Administrators can specify a default StorageClass for PVCs that don't request
any particular class to bind to: see the
[PersistentVolumeClaim section](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
for details.
@ -928,20 +934,19 @@ parameters:
`"http(s)://api-server:7860"`
* `registry`: Quobyte registry to use to mount the volume. You can specify the
  registry as ``<host>:<port>`` pair or if you want to specify multiple
  registries, put a comma between them.
  ``<host1>:<port>,<host2>:<port>,<host3>:<port>``.
  The host can be an IP address or if you have a working DNS you can also
  provide the DNS names.
* `adminSecretNamespace`: The namespace for `adminSecretName`.
  Default is "default".
-->
* `quobyteAPIServer`: the Quobyte API server, in the format
  `"http(s)://api-server:7860"`
* `registry`: the Quobyte registry to use to mount the volume. You can specify
  the registry as a `<host>:<port>` pair or, if you want to specify multiple
  registries, put a comma between them, e.g.
  `<host1>:<port>,<host2>:<port>,<host3>:<port>`.
  The host can be an IP address or, if you have a working DNS, you can also
  provide DNS names.
* `adminSecretNamespace`: the namespace for `adminSecretName`.
  Default is "default".
<!--
@ -957,15 +962,16 @@ parameters:
```
-->
* `adminSecretName`: Secret that holds information about the Quobyte user and
  password used to authenticate against the API server. The provided secret
  must have the type parameter set to "kubernetes.io/quobyte" and the keys
  `user` and `password`, for example created in this way:

```shell
kubectl create secret generic quobyte-admin-secret \
  --type="kubernetes.io/quobyte" --from-literal=key='opensesame' \
  --namespace=kube-system
```

<!--
* `user`: maps all access to this user. Default is "root".
* `group`: maps all access to this group. Default is "nfsnobody".
@ -978,10 +984,10 @@ parameters:
-->
* `user`: maps all access to this user. Default is "root".
* `group`: maps all access to this group. Default is "nfsnobody".
* `quobyteConfig`: use the specified configuration to create the volume. You
  can create a new configuration or modify an existing one in the web console
  or the quobyte CLI. Default is "BASE".
* `quobyteTenant`: use the specified tenant ID to create/delete the volume.
  This Quobyte tenant has to already exist in Quobyte. Default is "DEFAULT".

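Tying the Quobyte parameters above together, a StorageClass sketch might look like this (the API server address, registry, and secret name are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow                          # illustrative name
provisioner: kubernetes.io/quobyte
parameters:
  quobyteAPIServer: "http://138.68.74.142:7860"   # illustrative API server
  registry: "138.68.74.142:7861"                  # illustrative registry
  adminSecretName: "quobyte-admin-secret"         # secret created as shown above
  adminSecretNamespace: "kube-system"
  user: "root"
  group: "root"
  quobyteConfig: "BASE"
  quobyteTenant: "DEFAULT"
```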
<!--
### Azure Disk
@ -1015,7 +1021,9 @@ parameters:
-->
* `skuName`: Azure storage account SKU tier. Default is empty.
* `location`: Azure storage account location. Default is empty.
* `storageAccount`: Azure storage account name.
  If a storage account is provided, it must be in the same resource group as
  the cluster, and `location` is ignored. If a storage account is not
  provided, a new storage account will be created in the same resource group
  as the cluster.

<!--
#### Azure Disk Storage Class (starting from v1.7.2) {#azure-disk-storage-class}
@ -1058,7 +1066,8 @@ parameters:
- Managed VM can only attach managed disks and unmanaged VM can only attach
  unmanaged disks.
-->
- Premium VMs can attach both Standard_LRS and Premium_LRS disks, while
  Standard VMs can only attach Standard_LRS disks.
- Managed VMs can only attach managed disks, and unmanaged VMs can only attach
  unmanaged disks.

<!--
@ -1097,11 +1106,15 @@ parameters:
* `skuName`: Azure storage account SKU tier. Default is empty.
* `location`: Azure storage account location. Default is empty.
* `storageAccount`: Azure storage account name. Default is empty.
  If a storage account is not provided, all storage accounts associated with
  the resource group are searched to find one that matches `skuName` and
  `location`. If a storage account is provided, it must be in the same
  resource group as the cluster, and `skuName` and `location` are ignored.
* `secretNamespace`: the namespace of the secret that contains the Azure
  storage account name and key. Default is the same as the Pod.
* `secretName`: the name of the secret that contains the Azure storage account
  name and key. Default is `azure-storage-account-<accountName>-secret`
* `readOnly`: a flag indicating whether the storage will be mounted as
  read-only. Defaults to false, which means a read/write mount. This setting
  also affects the `ReadOnly` setting in VolumeMounts.

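A sketch of an Azure File StorageClass using these parameters (the SKU, location, and account name are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile                     # illustrative name
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS               # illustrative SKU tier
  location: eastus                    # illustrative location
  storageAccount: azure_storage_account_name  # illustrative account name
```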
<!--
During storage provisioning, a secret named by `secretName` is created for the
@ -1305,7 +1318,7 @@ kubectl create secret generic storageos-secret \
-->
The StorageOS Kubernetes volume plugin can use a Secret object to specify the
endpoint and credentials used to access the StorageOS API. This is only
required when the defaults have been changed. The Secret must be created with
type `kubernetes.io/storageos`, as with the following command:

```shell
kubectl create secret generic storageos-secret \
@ -5,6 +5,11 @@ weight: 30
---

<!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: CSI Volume Cloning
content_type: concept
weight: 30
@ -15,7 +20,8 @@ weight: 30
<!--
This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
-->
This document describes the concept of cloning existing CSI volumes in Kubernetes.
Familiarity with [volumes](/zh/docs/concepts/storage/volumes) is suggested.

<!-- body -->
@ -24,7 +30,6 @@ This document describes the concept of cloning existing CSI Volumes in Kubernete
The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.
-->

## Introduction

The {{< glossary_tooltip text="CSI" term_id="csi" >}} volume cloning feature adds support for specifying, in the
@ -35,16 +40,14 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add
<!--
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
-->
A clone is a duplicate of an existing Kubernetes volume that can be consumed as
any standard volume would be. The only difference is that upon provisioning,
rather than creating a "new" empty volume, the back-end device creates an exact
duplicate of the specified volume.

<!--
The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).

Users need to be aware of the following when using this feature:
-->
From the perspective of the Kubernetes API, the implementation of cloning adds
the ability to specify an existing PVC as a `dataSource` during new PVC
creation. The source PVC must be bound and available (not in use).
@ -61,7 +64,6 @@ Users need to be aware of the following when using this feature:
- Default storage class can be used and storageClassName omitted in the spec
* Cloning can only be performed between two volumes that use the same VolumeMode setting (if you request a block mode volume, the source MUST also be block mode)
-->
* Clone support (`VolumePVCDataSource`) is only available for CSI drivers.
* Clone support is only available for dynamic provisioners.
* CSI drivers may or may not have implemented the volume cloning functionality.
@ -75,9 +77,9 @@ Users need to be aware of the following when using this feature:
<!--
## Provisioning

Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace.
-->
## Provisioning

Clones are provisioned like any other PVC, with the exception of adding a
`dataSource` that references an existing PVC in the same namespace.

@ -99,19 +101,17 @@ spec:
|
|||
name: pvc-1
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
You must specify a capacity value for `spec.resources.requests.storage`,
|
||||
and the value you specify must be the same or larger than the capacity of the source volume.
|
||||
-->
|
||||
|
||||
{{< note >}}
|
||||
你必须为 `spec.resources.requests.storage` 指定一个值,并且你指定的值必须大于或等于源卷的值。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The result is a new PVC with the name `clone-of-pvc-1` that has the exact same content as the specified source `pvc-1`.
|
||||
-->
|
||||
|
||||
结果是一个名称为 `clone-of-pvc-1` 的新 PVC 与指定的源 `pvc-1` 拥有相同的内容。
|
||||
|
||||
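For context, the manifest whose tail (`name: pvc-1`) appears in this hunk is a PVC with a `dataSource`; a reconstruction is sketched below under the assumption that it matches the upstream example (the namespace, StorageClass, and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-1
  namespace: myns                 # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloning       # illustrative StorageClass
  resources:
    requests:
      storage: 5Gi                # must be >= the capacity of the source volume
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1                   # the existing PVC to clone, in the same namespace
```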
<!--
@ -119,8 +119,7 @@ The result is a new PVC with the name `clone-of-pvc-1` that has the exact same c
Upon availability of the new PVC, the cloned PVC is consumed the same as other PVC. It's also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for its original dataSource PVC. This also implies that the source is not linked in any way to the newly created clone, it may also be modified or deleted without affecting the newly created clone.
-->
## Usage

Upon availability of the new PVC, the cloned PVC is consumed the same as any
other PVC. It is also expected at this point that the newly created PVC is an
independent object.
@ -5,6 +5,11 @@ weight: 10
---

<!--
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: Volumes
content_type: concept
weight: 10
@ -54,24 +59,23 @@ Docker 提供卷驱动程序,但是其功能非常有限。
Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod" text="Pod" >}}
can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes;
however, Kubernetes does not destroy persistent volumes.
For any kind of volume in a given pod, data is preserved across container restarts.
-->
Kubernetes supports many types of volumes. A
{{< glossary_tooltip term_id="pod" text="Pod" >}} can use any number of volume
types simultaneously. Ephemeral volume types have a lifetime tied to the Pod,
but persistent volumes can outlive a Pod. When a Pod ceases to exist,
Kubernetes also destroys its ephemeral volumes; however, Kubernetes does not
destroy persistent volumes. For any kind of volume in a given Pod, data is
preserved across container restarts.

<!--
At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular
volume type used.
-->
At its core, a volume is a directory, possibly with some data in it, which is
accessible to the containers in a Pod. How that directory comes to be, the
medium that backs it, and the contents stored in it are determined by the
particular volume type used.

@ -269,9 +273,9 @@ For more details, see the [`azureFile` volume plugin](https://github.com/kuberne
<!--
#### azureFile CSI migration
-->
#### azureFile CSI migration {#azurefile-csi-migration}

{{< feature-state for_k8s_version="v1.21" state="beta" >}}

<!--
The CSI Migration feature for azureFile, when enabled, redirects all plugin operations
@ -279,12 +283,20 @@ from the existing in-tree plugin to the `file.csi.azure.com` Container
Storage Interface (CSI) Driver. In order to use this feature, the [Azure File CSI
Driver](https://github.com/kubernetes-sigs/azurefile-csi-driver)
must be installed on the cluster and the `CSIMigration` and `CSIMigrationAzureFile`
[feature gates](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled.
-->
When the CSI migration feature for azureFile is enabled, all plugin operations
are redirected from the existing in-tree plugin to the `file.csi.azure.com`
Container Storage Interface (CSI) driver. In order to use this feature, the
[Azure File CSI driver](https://github.com/kubernetes-sigs/azurefile-csi-driver)
must be installed on the cluster, and the `CSIMigration` and `CSIMigrationAzureFile`
[feature gates](/zh/docs/reference/command-line-tools-reference/feature-gates/)
must be enabled.

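Feature gates are set per component; one way to enable these two gates on a node is through the kubelet configuration file, sketched below. They must also be enabled on the relevant control-plane components, so treat this as an assumption-laden illustration rather than a complete migration setup:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true            # master switch for CSI migration
  CSIMigrationAzureFile: true   # redirect azureFile volumes to the CSI driver
```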
<!--
Azure File CSI driver does not support using same volume with different fsgroups, if Azurefile CSI migration is enabled, using same volume with different fsgroups won't be supported at all.
-->
The Azure File CSI driver does not support using the same volume with
different fsgroups. If Azure File CSI migration is enabled, using the same
volume with different fsgroups is not supported at all.

### cephfs {#cephfs}

@ -356,20 +368,29 @@ spec:
-->
#### OpenStack CSI migration

{{< feature-state for_k8s_version="v1.21" state="beta" >}}

<!--
The `CSIMigration` feature for Cinder is enabled by default in Kubernetes 1.21.
It redirects all plugin operations from the existing in-tree plugin to the
`cinder.csi.openstack.org` Container Storage Interface (CSI) Driver.
[OpenStack Cinder CSI Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
must be installed on the cluster.
You can disable Cinder CSI migration for your cluster by setting the `CSIMigrationOpenStack`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to `false`.
If you disable the `CSIMigrationOpenStack` feature, the in-tree Cinder volume plugin takes responsibility
for all aspects of Cinder volume storage management.
-->
The `CSIMigration` feature for Cinder is enabled by default in Kubernetes 1.21.
It redirects all plugin operations from the existing in-tree plugin to the
`cinder.csi.openstack.org` Container Storage Interface (CSI) driver.
To use this feature, the
[OpenStack Cinder CSI driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
must be installed on the cluster.
You can disable Cinder CSI migration for your cluster by setting the `CSIMigrationOpenStack`
[feature gate](/zh/docs/reference/command-line-tools-reference/feature-gates/) to `false`.
If you disable the `CSIMigrationOpenStack` feature, the in-tree Cinder volume
plugin takes responsibility for all aspects of Cinder volume storage management.

### configMap

@ -1516,7 +1537,8 @@ RBD 的一个特性是它可以同时被多个用户以只读方式挂载。
This means that you can pre-populate a volume with your dataset and then serve
it in parallel from as many Pods as you need. Unfortunately, RBD volumes can
only be mounted by a single consumer in read-write mode; simultaneous writers
are not allowed.

For more details, see the
[RBD example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/rbd).

<!--
### scaleIO (deprecated) {#scaleio}