Merge pull request #41555 from Zhuzhenghao/volume-health-monitoring

[zh] resync volume-health-monitoring & storage-classes
pull/41574/head
Kubernetes Prow Robot 2023-06-09 07:06:13 -07:00 committed by GitHub
commit 07c8c3687d
2 changed files with 77 additions and 44 deletions


@@ -103,22 +103,22 @@ for provisioning PVs. This field must be specified.
| Volume Plugin | Internal Provisioner | Config Example |
-->
| 卷插件 | 内置制备器 | 配置示例 |
| :------------------- | :--------: | :-----------------------------------: |
| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) |
| AzureFile | ✓ | [Azure File](#azure-file) |
| AzureDisk | ✓ | [Azure Disk](#azure-disk) |
| CephFS | - | - |
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) |
| FC | - | - |
| FlexVolume | - | - |
| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) |
| iSCSI | - | - |
| NFS | - | [NFS](#nfs) |
| RBD | ✓ | [Ceph RBD](#ceph-rbd) |
| VsphereVolume | ✓ | [vSphere](#vsphere) |
| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) |
| Local | - | [Local](#local) |
<!--
You are not restricted to specifying the "internal" provisioners
@@ -155,7 +155,8 @@ vendors provide their own external provisioner.
### Reclaim Policy
PersistentVolumes that are dynamically created by a StorageClass will have the
[reclaim policy](/docs/concepts/storage/persistent-volumes/#reclaiming)
specified in the `reclaimPolicy` field of the class, which can be
either `Delete` or `Retain`. If no `reclaimPolicy` is specified when a
StorageClass object is created, it will default to `Delete`.
@@ -164,8 +165,10 @@ whatever reclaim policy they were assigned at creation.
-->
### 回收策略 {#reclaim-policy}
由 StorageClass 动态创建的 PersistentVolume 会在类的
[reclaimPolicy](/zh-cn/docs/concepts/storage/persistent-volumes/#reclaiming)
字段中指定回收策略,可以是 `Delete` 或者 `Retain`。
如果 StorageClass 对象被创建时没有指定 `reclaimPolicy`,它将默认为 `Delete`。
通过 StorageClass 手动创建并管理的 PersistentVolume 会使用它们被创建时指定的回收策略。
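下面是一个显式设置 `reclaimPolicy` 的最小 StorageClass 示意(其中 `provisioner` 的名称仅为占位示例,应替换为实际使用的制备器):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-retain
provisioner: example.com/external-provisioner  # 占位名称,替换为实际制备器
reclaimPolicy: Retain
```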
@@ -236,8 +239,9 @@ the class or PV. If a mount option is invalid, the PV mount fails.
### 卷绑定模式 {#volume-binding-mode}
<!--
The `volumeBindingMode` field controls when
[volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning)
should occur. When unset, "Immediate" mode is used by default.
-->
`volumeBindingMode`
字段控制了[卷绑定和动态制备](/zh-cn/docs/concepts/storage/persistent-volumes/#provisioning)应该发生在什么时候。
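下面是一个将 `volumeBindingMode` 设置为 `WaitForFirstConsumer` 的 StorageClass 示意(`provisioner` 名称仅为占位示例):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-wait-for-consumer
provisioner: example.com/external-provisioner  # 占位名称,替换为实际制备器
volumeBindingMode: WaitForFirstConsumer
```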
@@ -310,7 +314,8 @@ to see its supported topology keys and examples.
{{< note >}}
<!--
If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod spec
to specify node affinity.
If `nodeName` is used in this case, the scheduler will be bypassed and PVC will remain in `pending` state.
Instead, you can use node selector for hostname in this case as shown below.
-->
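As a sketch of the node-selector alternative mentioned above (the hostname value and volume/claim names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: kube-01   # illustrative node hostname
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim      # illustrative PVC name
  containers:
    - name: task-pv-container
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
```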
@@ -493,6 +498,7 @@ parameters:
where Kubernetes cluster has a node. `zone` and `zones` parameters must not
be used at the same time.
- `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system.
- `replication-type`: `none` or `regional-pd`. Default: `none`.
-->
- `type``pd-standard` 或者 `pd-ssd`。默认:`pd-standard`
@@ -561,11 +567,12 @@ parameters:
- `readOnly`:是否将存储挂载为只读的标志(默认为 false)。
<!--
Kubernetes doesn't include an internal NFS provisioner.
You need to use an external provisioner to create a StorageClass for NFS.
Here are some examples:
- [NFS Ganesha server and external provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
- [NFS subdir external provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
-->
Kubernetes 不包含内部 NFS 驱动。你需要使用外部驱动为 NFS 创建 StorageClass。
这里有些例子:
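为 NFS 创建的 StorageClass 大致形如下面的示意(`provisioner` 的名称取决于所部署的外部制备器,服务器地址和路径均为示例值):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs   # 以实际部署的外部制备器为准
parameters:
  server: nfs-server.example.com        # 示例 NFS 服务器地址
  path: /share                          # 示例导出路径
  readOnly: "false"
```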
@@ -591,10 +598,11 @@ parameters:
- `availability`:可用区域。如果没有指定,通常卷会在 Kubernetes 集群节点所在的活动区域中轮转调度。
{{< note >}}
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
<!--
This internal provisioner of OpenStack is deprecated. Please use
[the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
-->
OpenStack 的内部驱动已经被弃用。请使用
[OpenStack 的外部云驱动](https://github.com/kubernetes/cloud-provider-openstack)。
{{< /note >}}
@@ -607,7 +615,10 @@ There are two types of provisioners for vSphere storage classes:
- [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com`
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`
In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi).
For more information on the CSI provisioner, see
[Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and
[vSphereVolume CSI migration](/docs/concepts/storage/volumes/#vsphere-csi-migration).
-->
vSphere 存储类有两种制备器:
@@ -623,7 +634,8 @@ vSphere 存储类有两种制备器:
<!--
#### CSI Provisioner {#vsphere-provisioner-csi}
The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters.
For an example, refer to the [vSphere CSI repository](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml).
-->
#### CSI 制备器 {#vsphere-provisioner-csi}


@@ -18,8 +18,11 @@ weight: 100
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
<!--
{{< glossary_tooltip text="CSI" term_id="csi" >}} volume health monitoring allows
CSI Drivers to detect abnormal volume conditions from the underlying storage systems
and report them as events on {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}
or {{< glossary_tooltip text="Pods" term_id="pod" >}}.
-->
{{< glossary_tooltip text="CSI" term_id="csi" >}} 卷健康监测支持 CSI 驱动从底层的存储系统着手,
探测异常的卷状态,并以事件的形式上报到 {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}
@@ -27,15 +30,21 @@ weight: 100
<!-- body -->
<!--
## Volume health monitoring
-->
## 卷健康监测 {#volume-health-monitoring}
<!--
Kubernetes _volume health monitoring_ is part of how Kubernetes implements the
Container Storage Interface (CSI). Volume health monitoring feature is implemented
in two components: an External Health Monitor controller, and the
{{< glossary_tooltip term_id="kubelet" text="kubelet" >}}.
If a CSI Driver supports Volume Health Monitoring feature from the controller side,
an event will be reported on the related
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC)
when an abnormal volume condition is detected on a CSI volume.
-->
Kubernetes _卷健康监测_ 是 Kubernetes 容器存储接口CSI实现的一部分。
卷健康监测特性由两个组件实现:外部健康监测控制器和 {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}。
@@ -45,9 +54,20 @@ Kubernetes _卷健康监测_ 是 Kubernetes 容器存储接口(CSI)实现的
中上报一个事件。
<!--
The External Health Monitor {{< glossary_tooltip text="controller" term_id="controller" >}}
also watches for node failure events. You can enable node failure monitoring by setting
the `enable-node-watcher` flag to true. When the external health monitor detects a node
failure event, the controller reports an Event will be reported on the PVC to indicate
that pods using this PVC are on a failed node.
If a CSI Driver supports Volume Health Monitoring feature from the node side,
an Event will be reported on every Pod using the PVC when an abnormal volume
condition is detected on a CSI volume. In addition, Volume Health information
is exposed as Kubelet VolumeStats metrics. A new metric `kubelet_volume_stats_health_status_abnormal`
is added. This metric includes two labels: `namespace` and `persistentvolumeclaim`.
The count is either 1 or 0. 1 indicates the volume is unhealthy, 0 indicates volume
is healthy. For more information, please check
[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes).
-->
外部健康监测{{< glossary_tooltip text="控制器" term_id="controller" >}}也会监测节点失效事件。
如果要启用节点失效监测功能,你可以将标志 `enable-node-watcher` 设置为 `true`。
@@ -55,14 +75,15 @@ If a CSI Driver supports Volume Health Monitoring feature from the node side, an
以表明使用此 PVC 的 Pod 正位于一个失效的节点上。
如果 CSI 驱动程序支持节点侧的卷健康监测,那么当在 CSI 卷上检测到异常的卷状况时,
会在使用该 PVC 的每个 Pod 上触发一个事件。
此外,卷运行状况信息作为 Kubelet VolumeStats 指标公开。
添加了一个新的指标 `kubelet_volume_stats_health_status_abnormal`。
该指标包括两个标签:`namespace` 和 `persistentvolumeclaim`
计数为 1 或 0。1 表示卷不正常,0 表示卷正常。更多信息请参阅
[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes)。
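基于该指标可以配置告警。下面是一条 Prometheus 告警规则的示意(假定已抓取 kubelet 指标;规则与告警名称均为示例):

```yaml
groups:
  - name: volume-health            # 示例规则组名称
    rules:
      - alert: VolumeHealthAbnormal
        # 指标值为 1 表示卷不正常
        expr: kubelet_volume_stats_health_status_abnormal == 1
        for: 5m
        annotations:
          summary: "卷 {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} 状态异常"
```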
<!--
You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to use this feature from the node side.
-->
{{< note >}}
你需要启用 `CSIVolumeHealth`
@@ -72,9 +93,9 @@ You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command-
## {{% heading "whatsnext" %}}
<!--
See the [CSI driver documentation](https://kubernetes-csi.github.io/docs/drivers.html)
to find out which CSI drivers have implemented this feature.
-->
参阅 [CSI 驱动程序文档](https://kubernetes-csi.github.io/docs/drivers.html)
可以找出有哪些 CSI 驱动程序实现了此特性。