Merge pull request #34248 from tengqm/zh-34221-concepts-2

[zh] Resync concepts pages after lang renaming (concepts-2)

commit b11335cbcb
<!-- overview -->

{{< feature-state for_k8s_version="1.16" state="alpha" >}}

<!--
In the [scheduling-plugin](/docs/reference/scheduling/config/#scheduling-plugins) `NodeResourcesFit` of kube-scheduler, there are two
scoring strategies that support the bin packing of resources: `MostAllocated` and `RequestedToCapacityRatio`.
-->

在 kube-scheduler 的[调度插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)
`NodeResourcesFit` 中存在两种支持资源装箱(bin packing)的策略:`MostAllocated` 和
`RequestedToCapacityRatio`。
<!-- body -->

<!--
## Enabling bin packing using MostAllocated strategy

The `MostAllocated` strategy scores the nodes based on the utilization of resources, favoring the ones with higher allocation.
For each resource type, you can set a weight to modify its influence in the node score.

To set the `MostAllocated` strategy for the `NodeResourcesFit` plugin, use a
[scheduler configuration](/docs/reference/scheduling/config) similar to the following:
-->
## 使用 MostAllocated 策略启用资源装箱 {#enabling-bin-packing-using-mostallocated-strategy}

`MostAllocated` 策略基于资源的利用率来为节点计分,优选分配比率较高的节点。
针对每种资源类型,你可以设置一个权重值以改变其对节点得分的影响。

要为插件 `NodeResourcesFit` 设置 `MostAllocated` 策略,
可以使用一个类似于下面这样的[调度器配置](/zh-cn/docs/reference/scheduling/config/):
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - args:
      scoringStrategy:
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
        - name: intel.com/foo
          weight: 3
        - name: intel.com/bar
          weight: 3
        type: MostAllocated
    name: NodeResourcesFit
```
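
作为辅助理解的示意(并非 kube-scheduler 的真实实现,函数名与数据均为假设),`MostAllocated` 类型的计分思路可以用下面的 Python 片段粗略模拟:资源利用率越高得分越高,并按各资源的 `weight` 做加权平均。

```python
# 粗略示意:MostAllocated 风格的节点计分(假设性示例,非 kube-scheduler 实现)
def most_allocated_score(requested: dict, allocatable: dict, weights: dict) -> float:
    """利用率越高得分越高;按资源权重做加权平均,满分 100。"""
    total = 0.0
    weight_sum = 0
    for name, weight in weights.items():
        utilization = requested[name] / allocatable[name]  # 0.0 ~ 1.0
        total += utilization * 100 * weight
        weight_sum += weight
    return total / weight_sum

# 例:cpu 利用率 50%、intel.com/foo 利用率 75%,权重分别为 1 和 3
score = most_allocated_score(
    {"cpu": 2, "intel.com/foo": 3},
    {"cpu": 4, "intel.com/foo": 4},
    {"cpu": 1, "intel.com/foo": 3},
)
```

在这个假设的例子中,权重较大的 `intel.com/foo` 对最终得分的影响更大,这正是权重参数的用途。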

<!--
To learn more about other parameters and their default configuration, see the API documentation for
[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs).
-->
要进一步了解其它参数及其默认配置,请参阅
[`NodeResourcesFitArgs`](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs)
的 API 文档。
<!--
## Enabling bin packing using RequestedToCapacityRatio

The `RequestedToCapacityRatio` strategy allows the users to specify the resources along with weights for
each resource to score nodes based on the request to capacity ratio. This
allows users to bin pack extended resources by using appropriate parameters
to improve the utilization of scarce resources in large clusters. It favors nodes according to a
configured function of the allocated resources. The behavior of the `RequestedToCapacityRatio` in
the `NodeResourcesFit` score function can be controlled by the
[scoringStrategy](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) field.
Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatioParam` and
`resources`. The `shape` in `requestedToCapacityRatioParam`
parameter allows the user to tune the function as least requested or most
requested based on `utilization` and `score` values. The `resources` parameter
consists of `name` of the resource to be considered during scoring and `weight`
to specify the weight of each resource.
-->
## 使用 RequestedToCapacityRatio 策略来启用资源装箱 {#enabling-bin-packing-using-requestedtocapacityratio}

`RequestedToCapacityRatio` 策略允许用户基于请求值与容量的比率,针对参与节点计分的每类资源设置权重。
这一策略使得用户可以使用合适的参数来对扩展资源执行装箱操作,进而提升大规模集群中稀缺资源的利用率。
此策略根据所分配资源的一个配置函数来评价节点。
`NodeResourcesFit` 计分函数中的 `RequestedToCapacityRatio` 可以通过
[scoringStrategy](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy)
字段来控制。
在 `scoringStrategy` 字段中,你可以配置两个参数:`requestedToCapacityRatioParam`
和 `resources`。`requestedToCapacityRatioParam` 参数中的 `shape`
设置使得用户能够调整函数的算法,基于 `utilization` 和 `score` 值计算最少请求或最多请求。
`resources` 参数包含计分过程中需要考虑的资源的 `name`,以及用来设置每种资源权重的 `weight`。
<!--
Below is an example configuration that sets
the bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
using the `requestedToCapacityRatio` field.
-->
下面是一个配置示例,使用 `requestedToCapacityRatio` 字段为扩展资源 `intel.com/foo`
和 `intel.com/bar` 设置装箱行为:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - args:
      scoringStrategy:
        resources:
        - name: intel.com/foo
          weight: 3
        - name: intel.com/bar
          weight: 3
        requestedToCapacityRatioParam:
          shape:
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
        type: RequestedToCapacityRatio
    name: NodeResourcesFit
```
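
`shape` 本质上定义了一条从 `utilization` 到 `score` 的分段线性函数。下面的 Python 片段是一个帮助理解的示意实现(假设性代码,并非调度器源码):

```python
# 示意:根据 shape 中的 (utilization, score) 点做线性插值
# 假设性实现,仅用于说明,并非 kube-scheduler 源码
def score_from_shape(utilization, shape):
    """shape 为 (utilization, score) 点的列表,utilization 取 0~100。"""
    points = sorted(shape)
    if utilization <= points[0][0]:
        return points[0][1]
    if utilization >= points[-1][0]:
        return points[-1][1]
    for (u1, s1), (u2, s2) in zip(points, points[1:]):
        if u1 <= utilization <= u2:
            # 在相邻两点之间做线性插值
            return s1 + (s2 - s1) * (utilization - u1) / (u2 - u1)

bin_pack = [(0, 0), (100, 10)]         # 装箱:利用率越高得分越高
least_requested = [(0, 10), (100, 0)]  # 最少请求:得分反转
```

例如,利用率为 75% 时,装箱形状得 7.5 分,而最少请求形状得 2.5 分。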

<!--
Referencing the `KubeSchedulerConfiguration` file with the kube-scheduler
flag `--config=/path/to/config/file` will pass the configuration to the
scheduler.
-->
使用 kube-scheduler 标志 `--config=/path/to/config/file`
引用 `KubeSchedulerConfiguration` 文件,可以将配置传递给调度器。

<!--
To learn more about other parameters and their default configuration, see the API documentation for
[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs).
-->
要进一步了解其它参数及其默认配置,可以参阅
[`NodeResourcesFitArgs`](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs)
的 API 文档。
<!--
### Tuning the score function

`shape` is used to specify the behavior of the `RequestedToCapacityRatio` function.
-->
### 调整计分函数 {#tuning-the-score-function}

`shape` 用于指定 `RequestedToCapacityRatio` 函数的行为。

```yaml
shape:
  - utilization: 0
    score: 0
  - utilization: 100
    score: 10
```

<!--
The above arguments give the node a `score` of 0 if `utilization` is 0% and 10 for
`utilization` 100%, thus enabling bin packing behavior. To enable least
requested the score value must be reversed as follows.
-->
上面的参数在 `utilization` 为 0% 时给节点评分为 0,在 `utilization` 为
100% 时给节点评分为 10,因此启用了装箱行为。
要启用最少请求(least requested)模式,必须按如下方式反转得分值。

```yaml
shape:
  - utilization: 0
    score: 10
  - utilization: 100
    score: 0
```

<!--
The `resources` parameter is optional and by default is set to:
-->
`resources` 参数是可选的,默认情况下设置为:

```yaml
resources:
  - name: cpu
    weight: 1
  - name: memory
    weight: 1
```

<!--
It can be used to add extended resources as follows:
-->
它可以像下面这样用来添加扩展资源:

```yaml
resources:
  - name: intel.com/foo
    weight: 5
  - name: cpu
    weight: 3
  - name: memory
    weight: 1
```

<!--
The `weight` parameter is optional and is set to 1 if not specified. Also, the
`weight` cannot be set to a negative value.
-->
`weight` 参数是可选的,如果未指定,则设置为 1。
同时,`weight` 不能设置为负值。

<!--
### Node scoring for capacity allocation

This section is intended for those who want to understand the internal details
of this feature.
Below is an example of how the node score is calculated for a given set of values.
-->
### 节点容量分配的评分 {#node-scoring-for-capacity-allocation}

本节适用于希望了解此功能的内部细节的人员。
以下是如何针对给定的一组值来计算节点得分的示例。

<!--
Requested resources:
-->
请求的资源:

```
intel.com/foo : 2
memory: 256MB
cpu: 2
```

<!--
Resource weights:
-->
资源权重:

```
intel.com/foo : 5
memory: 1
cpu: 3
```

```
FunctionShapePoint {{0, 0}, {100, 10}}
```

<!--
Node 1 spec:
-->
节点 1 配置:

```
可用:
intel.com/foo : 4
memory : 1 GB
cpu: 8

已用:
intel.com/foo: 1
memory: 256MB
cpu: 1
```

<!--
Node score:
-->
节点得分:

```
intel.com/foo = resourceScoringFunction((2+1),4)
              = (100 - ((4-3)*100/4))
              = 75                      # requested + used = 75% * available
              = rawScoringFunction(75)
              = 7                       # floor(75/10)

memory = resourceScoringFunction((256+256),1024)
       = (100 -((1024-512)*100/1024))
       = 50                             # requested + used = 50% * available
       = rawScoringFunction(50)
       = 5                              # floor(50/10)

cpu = resourceScoringFunction((2+1),8)
    = (100 -((8-3)*100/8))
    = 37.5                              # requested + used = 37.5% * available
    = rawScoringFunction(37.5)
    = 3                                 # floor(37.5/10)

NodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)
          = 5
```

<!--
Node 2 spec:
-->
节点 2 配置:

```
可用:
intel.com/foo: 8
memory: 1GB
cpu: 8

已用:
intel.com/foo: 2
memory: 512MB
cpu: 6
```

<!--
Node score:
-->
节点得分:

```
intel.com/foo = resourceScoringFunction((2+2),8)
              = (100 - ((8-4)*100/8))
              = 50                      # requested + used = 50% * available
              = rawScoringFunction(50)
              = 5                       # floor(50/10)

memory = resourceScoringFunction((256+512),1024)
       = (100 -((1024-768)*100/1024))
       = 75                             # requested + used = 75% * available
       = rawScoringFunction(75)
       = 7                              # floor(75/10)

cpu = resourceScoringFunction((2+6),8)
    = (100 -((8-8)*100/8))
    = 100                               # requested + used = 100% * available
    = rawScoringFunction(100)
    = 10                                # floor(100/10)

NodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)
          = 7
```
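
上述两个节点的计算过程可以用一小段 Python 复现。以下是示意代码(假设 rawScoringFunction 为向下取整、最终得分四舍五入,并非调度器源码):

```python
import math

# 复现上文节点计分示例的示意代码(非 kube-scheduler 源码)
def resource_scoring_function(requested, used, available):
    """(请求量 + 已用量) 占可用容量的百分比。"""
    return (requested + used) * 100.0 / available

def raw_scoring_function(percentage):
    """把 0~100 的百分比映射到 0~10(向下取整)。"""
    return math.floor(percentage / 10)

def node_score(node, requested, weights):
    """按资源权重对各项 raw 得分做加权平均(四舍五入)。"""
    total = sum(
        raw_scoring_function(
            resource_scoring_function(requested[r], node["used"][r], node["available"][r])
        ) * w
        for r, w in weights.items()
    )
    return round(total / sum(weights.values()))

requested = {"intel.com/foo": 2, "memory": 256, "cpu": 2}  # memory 以 MB 计
weights = {"intel.com/foo": 5, "memory": 1, "cpu": 3}

node1 = {"available": {"intel.com/foo": 4, "memory": 1024, "cpu": 8},
         "used": {"intel.com/foo": 1, "memory": 256, "cpu": 1}}
node2 = {"available": {"intel.com/foo": 8, "memory": 1024, "cpu": 8},
         "used": {"intel.com/foo": 2, "memory": 512, "cpu": 6}}
```

按此示意实现,节点 1 得 5 分、节点 2 得 7 分,与上文手工计算一致。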

## {{% heading "whatsnext" %}}

<!--
- Read more about the [scheduling framework](/docs/concepts/scheduling-eviction/scheduling-framework/)
- Read more about [scheduler configuration](/docs/reference/scheduling/config/)
-->
- 继续阅读[调度器框架](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework/)
- 继续阅读[调度器配置](/zh-cn/docs/reference/scheduling/config/)


In the CLI, the access modes are abbreviated to:

* RWO - ReadWriteOnce
* ROX - ReadOnlyMany
* RWX - ReadWriteMany
* RWOP - ReadWriteOncePod

{{< note >}}
<!--
Kubernetes uses volume access modes to match PersistentVolumeClaims and PersistentVolumes.
In some cases, the volume access modes also constrain where the PersistentVolume can be mounted.
Volume access modes do **not** enforce write protection once the storage has been mounted.
Even if the access modes are specified as ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, they don't set any constraints on the volume.
For example, even if a PersistentVolume is created as ReadOnlyMany, it is no guarantee that it will be read-only.
If the access modes are specified as ReadWriteOncePod, the volume is constrained and can be mounted on only a single Pod.
-->
Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVolume。
在某些场合下,卷访问模式也会限制 PersistentVolume 可以挂载的位置。
卷访问模式并**不会**在存储已经被挂载的情况下为其实施写保护。
即使访问模式设置为 ReadWriteOnce、ReadOnlyMany 或 ReadWriteMany,它们也不会对卷形成限制。
例如,即使某个卷创建时设置为 ReadOnlyMany,也无法保证该卷是只读的。
如果访问模式设置为 ReadWriteOncePod,则卷会被限制起来并且只能挂载到一个 Pod 上。
{{< /note >}}

<!--
> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
-->

<!--
* Learn more about [Creating a PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
* Learn more about [Creating a PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim).
* Read the [Persistent Storage design document](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/persistent-storage.md).
-->
* 进一步了解[创建持久卷](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume)。
* 进一步学习[创建 PVC 申领](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim)。
* 阅读[持久存储的设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/persistent-storage.md)。

<!--
### API references {#reference}

Read about the APIs described in this page:

* [`PersistentVolume`](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/)
* [`PersistentVolumeClaim`](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/)
-->

<!--
On-disk files in a container are ephemeral, which presents some problems for
non-trivial applications when running in containers. One problem
is the loss of files when a container crashes. The kubelet restarts the container
but with a clean state. A second problem occurs when sharing files
between containers running together in a `Pod`.
The Kubernetes {{< glossary_tooltip text="volume" term_id="volume" >}} abstraction
solves both of these problems.

Familiarity with [Pods](/docs/concepts/workloads/pods/) is suggested.
-->
Container 中的文件在磁盘上是临时存放的,这给 Container 中运行的较重要的应用程序带来一些问题。
问题之一是当容器崩溃时文件丢失。
kubelet 会重新启动容器,但容器会以干净的状态重启。
第二个问题会在同一 `Pod` 中运行多个容器并共享文件时出现。
Kubernetes {{< glossary_tooltip text="卷(Volume)" term_id="volume" >}}
这一抽象概念能够解决这两个问题。

阅读本文前建议你熟悉一下 [Pod](/zh-cn/docs/concepts/workloads/pods)。

<!-- body -->

Kubernetes 支持下列类型的卷:

<!--
### awsElasticBlockStore (deprecated) {#awselasticblockstore}

{{< feature-state for_k8s_version="v1.17" state="deprecated" >}}

An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS)
[EBS Volume](http://aws.amazon.com/ebs/) into your Pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of an EBS
volume are persisted and the volume is unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be shared between pods.
-->
### awsElasticBlockStore(已弃用) {#awselasticblockstore}

{{< feature-state for_k8s_version="v1.17" state="deprecated" >}}

`awsElasticBlockStore` 卷将 Amazon Web 服务(AWS)[EBS 卷](https://aws.amazon.com/ebs/)
挂载到你的 Pod 中。与 `emptyDir` 在 Pod 被删除时也被删除不同,EBS 卷的内容在删除 Pod
时会被保留,卷只是被卸载掉了。
这意味着 EBS 卷可以预先填充数据,并且该数据可以在 Pod 之间共享。

要禁止控制器管理器和 kubelet 加载 `awsElasticBlockStore` 存储插件,
请将 `InTreePluginAWSUnregister` 标志设置为 `true`。

<!--
### azureDisk (deprecated) {#azuredisk}

{{< feature-state for_k8s_version="v1.19" state="deprecated" >}}

The `azureDisk` volume type mounts a Microsoft Azure [Data Disk](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) into a pod.

For more details, see the [`azureDisk` volume plugin](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md).
-->
### azureDisk(已弃用) {#azuredisk}

{{< feature-state for_k8s_version="v1.19" state="deprecated" >}}

`azureDisk` 卷类型用来在 Pod 上挂载 Microsoft Azure
[数据盘(Data Disk)](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers)。

更多详情请参考 [`azureDisk` 卷插件](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md)。

要禁止控制器管理器和 kubelet 加载 `azureDisk` 存储插件,
请将 `InTreePluginAzureDiskUnregister` 标志设置为 `true`。

<!--
### azureFile (deprecated) {#azurefile}

{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

The `azureFile` volume type mounts a Microsoft Azure File volume (SMB 2.1 and 3.0)
into a Pod.

For more details, see the [`azureFile` volume plugin](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md).
-->
### azureFile(已弃用) {#azurefile}

{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

`azureFile` 卷类型用来在 Pod 上挂载 Microsoft Azure 文件卷(File Volume)(SMB 2.1 和 3.0)。

更多详情请参考 [`azureFile` 卷插件](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md)。

更多信息请参考 [CephFS 示例](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/)。

<!--
### cinder (deprecated) {#cinder}
-->
### cinder(已弃用) {#cinder}

{{< feature-state for_k8s_version="v1.18" state="deprecated" >}}

{{< note >}}
<!--
Kubernetes must be configured with the OpenStack cloud provider.
-->
Kubernetes 必须配置了 OpenStack Cloud Provider。
{{< /note >}}

<!--
The `cinder` volume type is used to mount the OpenStack Cinder volume into your pod.

#### Cinder Volume Example configuration
-->
`cinder` 卷类型用于将 OpenStack Cinder 卷挂载到 Pod 中。

#### Cinder 卷示例配置 {#cinder-volume-example-configuration}

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-cinder
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-cinder-container
      volumeMounts:
        - mountPath: /test-cinder
          name: test-volume
  volumes:
    - name: test-volume
      # 此 OpenStack 卷必须已经存在
      cinder:
        volumeID: "<volume-id>"
        fsType: ext4
```

进一步的信息可参阅[临时卷](/zh-cn/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volume)。

<!--
For more information on how to develop a CSI driver, refer to the
[kubernetes-csi documentation](https://kubernetes-csi.github.io/docs/)
-->
有关如何开发 CSI 驱动的更多信息,请参考 [kubernetes-csi 文档](https://kubernetes-csi.github.io/docs/)。

<!--
#### Windows CSI proxy
-->
#### Windows CSI 代理 {#windows-csi-proxy}

{{< feature-state for_k8s_version="v1.22" state="stable" >}}

<!--
CSI node plugins need to perform various privileged
operations like scanning of disk devices and mounting of file systems. These operations
differ for each host operating system. For Linux worker nodes, containerized CSI node
plugins are typically deployed as privileged containers. For Windows worker nodes,
privileged operations for containerized CSI node plugins are supported using
[csi-proxy](https://github.com/kubernetes-csi/csi-proxy), a community-managed,
stand-alone binary that needs to be pre-installed on each Windows node.

For more details, refer to the deployment guide of the CSI plugin you wish to deploy.
-->
CSI 节点插件需要执行多种特权操作,例如扫描磁盘设备和挂载文件系统等。
这些操作在每个宿主操作系统上都是不同的。对于 Linux 工作节点而言,容器化的 CSI
节点插件通常部署为特权容器。对于 Windows 工作节点而言,容器化 CSI
节点插件的特权操作是通过 [csi-proxy](https://github.com/kubernetes-csi/csi-proxy)
来支持的。csi-proxy 是一个由社区管理的、独立的可执行二进制文件,
需要被预安装到每个 Windows 节点上。

要了解更多的细节,可以参考你要部署的 CSI 插件的部署指南。

<!--
#### Migrating to CSI drivers from in-tree plugins
-->
#### 从树内插件迁移到 CSI 驱动 {#migrating-to-csi-drivers-from-in-tree-plugins}

<!--
The `CSIMigration` feature, when enabled, directs operations against existing in-tree
plugins to corresponding CSI plugins (which are expected to be installed and configured).
As a result, operators do not have to make any
configuration changes to existing Storage Classes, PersistentVolumes or PersistentVolumeClaims
(referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin.

The operations and features that are supported include:
provisioning/delete, attach/detach, mount/unmount and resizing of volumes.
-->
启用 `CSIMigration` 特性后,针对现有树内插件的操作会被重定向到相应的 CSI 插件(应已安装和配置)。
因此,操作员在过渡到取代树内插件的 CSI 驱动时,无需对现有存储类、PV 或 PVC(指树内插件)进行任何配置更改。

所支持的操作和特性包括:配备(Provisioning)/删除、挂接(Attach)/解挂(Detach)、
挂载(Mount)/卸载(Unmount)和调整卷大小。

<!--
In-tree plugins that support `CSIMigration` and have a corresponding CSI driver implemented
are listed in [Types of Volumes](#volume-types).

The following in-tree plugins support persistent storage on Windows nodes:
-->
上面的[卷类型](#volume-types)节列出了支持 `CSIMigration` 并已实现相应 CSI
驱动程序的树内插件。

下面是支持 Windows 节点上持久性存储的树内插件:

* [`awsElasticBlockStore`](#awselasticblockstore)
* [`azureDisk`](#azuredisk)
* [`azureFile`](#azurefile)
* [`gcePersistentDisk`](#gcepersistentdisk)
* [`vsphereVolume`](#vspherevolume)

### flexVolume

{{< feature-state for_k8s_version="v1.23" state="deprecated" >}}


Pod 通过 `flexvolume` 树内插件与 FlexVolume 驱动程序交互。
更多详情请参考 FlexVolume [README](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md#readme) 文档。

<!--
The following FlexVolume [plugins](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows),
deployed as PowerShell scripts on the host, support Windows nodes:
-->
下面的 FlexVolume [插件](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows)
以 PowerShell 脚本的形式部署在宿主系统上,支持 Windows 节点:

* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd)
* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd)

{{< note >}}
<!--
FlexVolume is deprecated. Using an out-of-tree CSI driver is the recommended way to integrate external storage with Kubernetes.

Maintainers of FlexVolume driver should implement a CSI Driver and help to migrate users of FlexVolume drivers to CSI.
Users of FlexVolume should move their workloads to use the equivalent CSI Driver.
-->
FlexVolume 已被弃用。推荐使用树外 CSI 驱动来将外部存储整合进 Kubernetes。

FlexVolume 驱动的维护者应开发一个 CSI 驱动并帮助用户从 FlexVolume 驱动迁移到 CSI。
FlexVolume 用户应迁移工作负载以使用对等的 CSI 驱动。
{{< /note >}}

## {{% heading "whatsnext" %}}

<!--
Follow an example of [deploying WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/).
-->
参考[使用持久卷部署 WordPress 和 MySQL](/zh-cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) 示例。