Merge pull request #39319 from Zhuzhenghao/zh/storage-classes

[zh] Resync storage-classes.md

commit c59e86f8bb
Each StorageClass has a provisioner that determines what volume plugin is used
for provisioning PVs. This field must be specified.

| Volume Plugin        | Internal Provisioner |            Config Example           |
| :------------------- | :------------------: | :---------------------------------: |
| AWSElasticBlockStore |          ✓           |         [AWS EBS](#aws-ebs)         |
| AzureFile            |          ✓           |      [Azure File](#azure-file)      |
| AzureDisk            |          ✓           |      [Azure Disk](#azure-disk)      |
| NFS                  |          -           |            [NFS](#nfs)              |
| RBD                  |          ✓           |       [Ceph RBD](#ceph-rbd)         |
| VsphereVolume        |          ✓           |        [vSphere](#vsphere)          |
| PortworxVolume       |          ✓           | [Portworx Volume](#portworx-volume) |
| Local                |          -           |          [Local](#local)            |

You are not restricted to specifying the "internal" provisioners
listed here (whose names are prefixed with "kubernetes.io" and shipped
alongside Kubernetes).
{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}}

| Volume type          | Required Kubernetes version |
| :------------------- | :-------------------------- |
| gcePersistentDisk    | 1.11                        |
| awsElasticBlockStore | 1.11                        |
| Cinder               | 1.11                        |
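The prose around this table (elided in the diff) concerns the `allowVolumeExpansion` field. As a minimal sketch of how a class opts in, with the class name chosen here purely for illustration:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-example      # hypothetical name
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true     # PVCs using this class may later request a larger size
```

Note that volume expansion can only grow a volume, not shrink it.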
### Mount Options {#mount-options}

PersistentVolumes that are dynamically created by a StorageClass will have the
mount options specified in the `mountOptions` field of the class.

If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either
the class or PV. If a mount option is invalid, the PV mount fails.
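For illustration, a minimal sketch of a class that passes mount options through to dynamically provisioned PVs; the external NFS provisioner name and the options themselves are assumptions for the example, not taken from this page:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs-mount-options     # hypothetical name
provisioner: example.com/external-nfs # assumed external provisioner
mountOptions:
  - hard          # copied verbatim onto each PV; not validated by Kubernetes
  - nfsvers=4.1   # an invalid option here only surfaces when the PV mount fails
```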
### Volume Binding Mode

The `volumeBindingMode` field controls when
[volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning)
should occur. When unset, "Immediate" mode is used by default.

The `Immediate` mode indicates that volume binding and dynamic provisioning
occurs once the PersistentVolumeClaim is created. For storage backends that are
topology-constrained and not globally accessible from all Nodes in the cluster,
PersistentVolumes will be bound or provisioned without knowledge of the Pod's
scheduling requirements. This may result in unschedulable Pods.
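The alternative, discussed next, is `WaitForFirstConsumer`, which delays binding and provisioning until a Pod using the PersistentVolumeClaim is created. A minimal sketch of a class selecting that mode (the class name and provisioner are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-standard           # hypothetical name
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer   # bind/provision only once a consuming Pod is scheduled
```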
The following plugins support `WaitForFirstConsumer` with dynamic provisioning:

- [AWSElasticBlockStore](#aws-ebs)
- [GCEPersistentDisk](#gce-pd)
- [AzureDisk](#azure-disk)

The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:

- All of the above
- [Local](#local)

{{< feature-state state="stable" for_k8s_version="v1.17" >}}
However, you'll need to look at the documentation for a specific CSI driver to
see its supported topology keys and examples.

{{< note >}}
If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod
spec to specify node affinity. If `nodeName` is used in this case, the
scheduler will be bypassed and the PVC will remain in `pending` state.

Instead, you can use the node selector for hostname in this case, as shown below.
{{< /note >}}
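The "as shown below" example itself is elided in this diff; a sketch of the intended shape, where the Pod name, claim name, and node hostname `kube-01` are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod                     # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: kube-01     # node selector for hostname, instead of nodeName
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim        # assumed existing PVC
  containers:
    - name: task-pv-container
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: task-pv-storage
```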
## Parameters

Storage Classes have parameters that describe volumes belonging to the storage
class. Different parameters may be accepted depending on the `provisioner`. For
example, the value `io1`, for the parameter `type`, and the parameter
`iopsPerGB` are specific to EBS. When a parameter is omitted, some default is
used.
### AWS EBS

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4
```

- `type`: `io1`, `gp2`, `sc1`, `st1`. See
  [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
  for details. Default: `gp2`.
- `zone` (Deprecated): AWS zone. If neither `zone` nor `zones` is specified,
  volumes are generally round-robin-ed across all active zones where the
  Kubernetes cluster has a node. `zone` and `zones` parameters must not be
  used at the same time.
- `zones` (Deprecated): A comma separated list of AWS zone(s). If neither
  `zone` nor `zones` is specified, volumes are generally round-robin-ed across
  all active zones where the Kubernetes cluster has a node. `zone` and `zones`
  parameters must not be used at the same time.
- `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. The
  AWS volume plugin multiplies this with the size of the requested volume to
  compute IOPS of the volume and caps it at 20,000 IOPS (the maximum supported
  by AWS, see
  [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)).
  A string is expected here, i.e. `"10"`, not `10`.
- `fsType`: fsType that is supported by Kubernetes. Default: `"ext4"`.
- `encrypted`: denotes whether the EBS volume should be encrypted or not.
  Valid values are `"true"` or `"false"`. A string is expected here,
  i.e. `"true"`, not `true`.
- `kmsKeyId`: optional. The full Amazon Resource Name of the key to use when
  encrypting the volume. If none is supplied but `encrypted` is true, a key is
  generated by AWS. See AWS docs for valid ARN values.

{{< note >}}
The `zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies).
{{< /note >}}
### GCE PD

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none
```

- `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`
- `zone` (Deprecated): GCE zone. If neither `zone` nor `zones` is specified,
  volumes are generally round-robin-ed across all active zones where the
  Kubernetes cluster has a node. `zone` and `zones` parameters must not be
  used at the same time.
- `zones` (Deprecated): A comma separated list of GCE zone(s). If neither
  `zone` nor `zones` is specified, volumes are generally round-robin-ed across
  all active zones where the Kubernetes cluster has a node. `zone` and `zones`
  parameters must not be used at the same time.
- `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must
  be supported by the host operating system.
- `replication-type`: `none` or `regional-pd`. Default: `none`.

If `replication-type` is set to `none`, a regular (zonal) PD will be provisioned.
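The continuation for `regional-pd` is elided here: if `replication-type` is set to `regional-pd`, a Regional Persistent Disk spanning zones is provisioned instead. A minimal sketch, with the class name as an illustrative assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass   # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer  # lets the scheduler influence zone selection
```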
### NFS

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs
parameters:
  server: nfs-server.example.com
  path: /share
  readOnly: "false"
```

- `server`: Server is the hostname or IP address of the NFS server.
- `path`: Path that is exported by the NFS server.
- `readOnly`: A flag indicating whether the storage will be mounted as read only (default false).

Kubernetes doesn't include an internal NFS provisioner. You need to use an
external provisioner to create a StorageClass for NFS. Here are some examples:

- [NFS Ganesha server and external provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
- [NFS subdir external provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
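As an illustration of the external-provisioner route, a class for the NFS subdir external provisioner might look like the following sketch; the provisioner name and `archiveOnDelete` parameter follow that project's README, and the class name is an assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client                                          # hypothetical name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner    # per the project README
parameters:
  archiveOnDelete: "false"   # provisioner-specific: do not archive PV contents on delete
```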
### OpenStack Cinder
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
```

- `availability`: Availability Zone. If not specified, volumes are generally
  round-robin-ed across all active zones where the Kubernetes cluster has a node.

{{< note >}}
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
This internal provisioner of OpenStack is deprecated. Please use
[the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
{{< /note >}}
### vSphere {#vsphere}

There are two types of provisioners for vSphere storage classes:

- [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com`
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`

#### CSI Provisioner {#vsphere-provisioner-csi}

The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters.
For an example, refer to the
[vSphere CSI repository](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml).

#### vCP Provisioner {#vcp-provisioner}

The following examples use the VMware Cloud Provider (vCP) StorageClass provisioner.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
```

`datastore`: The user can also specify the datastore in the StorageClass.
The volume will be created on the datastore specified in the StorageClass,
which in this case is `VSANDatastore`. This field is optional. If the
datastore is not specified, then the volume will be created on the datastore
specified in the vSphere config file used to initialize the vSphere Cloud
Provider.
3. Storage Policy Management inside Kubernetes

   - Using existing vCenter SPBM policy

     One of the most important features of vSphere for storage management is
     policy based management. Storage Policy Based Management (SPBM) is a
     storage policy framework that provides a single unified control plane
     across a broad range of data services and storage solutions. SPBM enables
     vSphere administrators to overcome upfront storage provisioning
     challenges, such as capacity planning, differentiated service levels and
     managing capacity headroom.

     The SPBM policies can be specified in the StorageClass using the
     `storagePolicyName` parameter (see the sketch after this list).

   - Virtual SAN policy support inside Kubernetes

     vSphere Infrastructure (VI) admins will have the ability to specify custom
     Virtual SAN storage capabilities during dynamic volume provisioning. You
     can now define storage requirements, such as performance and availability,
     in the form of storage capabilities during dynamic volume provisioning.
     The storage capability requirements are converted into a Virtual SAN
     policy which is then pushed down to the Virtual SAN layer when a
     persistent volume (virtual disk) is being created. The virtual disk is
     distributed across the Virtual SAN datastore to meet the requirements.

     You can see [Storage Policy Based Management for dynamic provisioning of volumes](https://github.com/vmware-archive/vsphere-storage-for-kubernetes/blob/fa4c8b8ad46a85b6555d715dd9d27ff69839df53/documentation/policy-based-mgmt.md)
     for more details on how to use storage policies for persistent volumes
     management.
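A sketch of the SPBM route mentioned above; the policy name `gold` stands in for an assumed, pre-existing vCenter SPBM policy, and the datastore value is likewise illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                      # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: gold         # assumed existing vCenter SPBM policy
  datastore: VSANDatastore        # optional; assumed datastore compatible with the policy
```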
There are a few vSphere examples that you can try out for persistent volume
management inside Kubernetes for vSphere.
### Ceph RBD {#ceph-rbd}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
```

- `monitors`: Ceph monitors, comma delimited. This parameter is required.
- `adminId`: Ceph client ID that is capable of creating images in the pool.
  Default is "admin".
- `adminSecretName`: Secret Name for `adminId`. This parameter is required.
  The provided secret must have type "kubernetes.io/rbd".
- `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
- `pool`: Ceph RBD pool. Default is "rbd".
- `userId`: Ceph client ID that is used to map the RBD image. Default is the
  same as `adminId`.
- `userSecretName`: The name of Ceph Secret for `userId` to map RBD image. It
  must exist in the same namespace as PVCs. This parameter is required.
  The provided secret must have type "kubernetes.io/rbd", for example created
  in this way:

  ```shell
  kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
    --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \
    --namespace=kube-system
  ```
- `userSecretNamespace`: The namespace for `userSecretName`.
- `fsType`: fsType that is supported by Kubernetes. Default: `"ext4"`.
- `imageFormat`: Ceph RBD image format, "1" or "2". Default is "2".
- `imageFeatures`: This parameter is optional and should only be used if you
  set `imageFormat` to "2". Currently supported features are `layering` only.
  Default is "", and no features are turned on.
### Azure Disk {#azure-disk}

#### Azure Unmanaged Disk storage class {#azure-unmanaged-disk-storage-class}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name
```
- `skuName`: Azure storage account Sku tier. Default is empty.
- `location`: Azure storage account location. Default is empty.
- `storageAccount`: Azure storage account name. If a storage account is
  provided, it must reside in the same resource group as the cluster, and
  `location` is ignored. If a storage account is not provided, a new storage
  account will be created in the same resource group as the cluster.
#### Azure Disk storage class (starting from v1.7.2) {#azure-disk-storage-class}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: managed
```
- `storageaccounttype`: Azure storage account Sku tier. Default is empty.
- `kind`: Possible values are `shared`, `dedicated`, and `managed` (default).
  When `kind` is `shared`, all unmanaged disks are created in a few shared
  storage accounts in the same resource group as the cluster. When `kind` is
  `dedicated`, a new dedicated storage account will be created for the new
  unmanaged disk in the same resource group as the cluster. When `kind` is
  `managed`, all managed disks are created in the same resource group as
  the cluster.
- `resourceGroup`: Specify the resource group in which the Azure disk will be
  created. It must be an existing resource group name. If it is unspecified,
  the disk will be placed in the same resource group as the current Kubernetes
  cluster.

- Premium VM can attach both Standard_LRS and Premium_LRS disks, while Standard
  VM can only attach Standard_LRS disks.
- Managed VM can only attach managed disks and unmanaged VM can only attach
  unmanaged disks.
### Azure File {#azure-file}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  location: eastus2
  storageAccount: azure_storage_account_name
```
- `skuName`: Azure storage account Sku tier. Default is empty.
- `location`: Azure storage account location. Default is empty.
- `storageAccount`: Azure storage account name. Default is empty. If a storage
  account is not provided, all storage accounts associated with the resource
  group are searched to find one that matches `skuName` and `location`. If a
  storage account is provided, it must reside in the same resource group as the
  cluster, and `skuName` and `location` are ignored.
- `secretNamespace`: the namespace of the secret that contains the Azure
  Storage Account Name and Key. Default is the same as the Pod.
- `secretName`: the name of the secret that contains the Azure Storage Account
  Name and Key. Default is `azure-storage-account-<accountName>-secret`.
- `readOnly`: a flag indicating whether the storage will be mounted as read
  only. Defaults to false which means a read/write mount. This setting will
  impact the `ReadOnly` setting in VolumeMounts as well.
During storage provisioning, a secret named by `secretName` is created for the
mounting credentials.
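A sketch of the shape of such a secret if created by hand; the name pattern follows the `secretName` default above, and the account name/key values are base64 placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-storage-account-myaccount-secret   # assumed <accountName> = "myaccount"
type: Opaque
data:
  azurestorageaccountname: bXlhY2NvdW50          # base64("myaccount")
  azurestorageaccountkey: PGJhc2U2NC1rZXk+       # base64 placeholder for the account key
```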
### Portworx Volume {#portworx-volume}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  snap_interval: "70"
  priority_io: "high"
```
- `fs`: filesystem to be laid out: `none/xfs/ext4` (default: `ext4`).
- `block_size`: block size in Kbytes (default: `32`).
- `repl`: number of synchronous replicas to be provided in the form of
  replication factor `1..3` (default: `1`). A string is expected here, i.e.
  `"1"` and not `1`.
- `priority_io`: determines whether the volume will be created from higher
  performance or a lower priority storage `high/medium/low` (default: `low`).
- `snap_interval`: clock/time interval in minutes for when to trigger
  snapshots. Snapshots are incremental based on difference with the prior
  snapshot, 0 disables snaps (default: `0`). A string is expected here, i.e.
  `"70"` and not `70`.
- `aggregation_level`: specifies the number of chunks the volume would be
  distributed into, 0 indicates a non-aggregated volume (default: `0`). A
  string is expected here, i.e. `"0"` and not `0`.
- `ephemeral`: specifies whether the volume should be cleaned-up after unmount
  or should be persistent. `emptyDir` use case can set this value to true and
  `persistent volumes` use case such as for databases like Cassandra should
  set to false, `true/false` (default `false`). A string is expected here,
  i.e. `"true"` and not `true`.
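Finally, whichever provisioner backs a class, a workload consumes it the same way: a PersistentVolumeClaim names the class in `storageClassName`. A minimal sketch, where the claim name and requested size are illustrative and the class is the Portworx example above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim                           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: portworx-io-priority-high   # any class defined above works here
  resources:
    requests:
      storage: 8Gi
```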