Merge pull request #22992 from tengqm/zh-links-concepts-7

[zh] Fix links/translation in concepts section (7)
pull/22536/head
Kubernetes Prow Robot 2020-08-09 06:36:22 -07:00 committed by GitHub
commit 36b9248faa
7 changed files with 663 additions and 486 deletions

@ -5,110 +5,86 @@ weight: 70
---
<!--
title: Configuring kubelet Garbage Collection
content_type: concept
weight: 70
-->
<!-- overview -->
<!--
Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers.
Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.

External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
-->
垃圾回收是 kubelet 的一个有用功能,它将清理未使用的镜像和容器。
Kubelet 将每分钟对容器执行一次垃圾回收,每五分钟对镜像执行一次垃圾回收。

不建议使用外部垃圾收集工具,因为这些工具可能会删除原本期望存在的容器进而破坏 kubelet 的行为。
<!-- body -->
<!--
## Image Collection

Kubernetes manages lifecycle of all images through imageManager, with the cooperation
of cadvisor.

The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used
images until the low threshold has been met.
-->
## 镜像回收 {#image-collection}

Kubernetes 借助于 cadvisor 通过 imageManager 来管理所有镜像的生命周期。

镜像垃圾回收策略只考虑两个因素:`HighThresholdPercent` 和 `LowThresholdPercent`。
磁盘使用率超过上限阈值(HighThresholdPercent)将触发垃圾回收。
垃圾回收将删除最近最少使用的镜像,直到磁盘使用率满足下限阈值(LowThresholdPercent)。
<!--
## Container Collection

The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
-->
## 容器回收 {#container-collection}

容器垃圾回收策略考虑三个用户定义变量。
`MinAge` 是容器可以被执行垃圾回收的最小生命周期。
`MaxPerPodContainer` 是每个 pod 内允许存在的死亡容器的最大数量。
`MaxContainers` 是全部死亡容器的最大数量。
可以分别独立地通过将 `MinAge` 设置为 0,以及将 `MaxPerPodContainer` 和 `MaxContainers`
设置为小于 0 来禁用这些变量。

<!--
Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainer` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). `MaxPerPodContainer` would be adjusted in this situation: A worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
-->
`kubelet` 将处理无法辨识的、已删除的以及超出前面提到的参数所设置范围的容器。
最老的容器通常会先被移除。
`MaxPerPodContainer` 和 `MaxContainer` 在某些场景下可能会存在冲突,
例如在保证每个 pod 内死亡容器的最大数量(`MaxPerPodContainer`)的条件下可能会超过
允许存在的全部死亡容器的最大数量(`MaxContainer`)。
`MaxPerPodContainer` 在这种情况下会被进行调整:
最坏的情况是将 `MaxPerPodContainer` 降级为 1,并驱逐最老的容器。
此外,pod 内已经被删除的容器一旦年龄超过 `MinAge` 就会被清理。

<!--
Containers that are not managed by kubelet are not subject to container garbage collection.
-->
不被 kubelet 管理的容器不受容器垃圾回收的约束。
<!--
## User Configuration

Users can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
-->
## 用户配置 {#user-configuration}

用户可以使用以下 kubelet 参数调整相关阈值来优化镜像垃圾回收:

<!--
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
@ -119,14 +95,12 @@ to free. Default is 80%.
-->
1. `image-gc-high-threshold`,触发镜像垃圾回收的磁盘使用率百分比。默认值为 85%。
2. `image-gc-low-threshold`,镜像垃圾回收试图释放资源后达到的磁盘使用率百分比。默认值为 80%。
<!--
We also allow users to customize garbage collection policy through the following kubelet flags:
-->
我们还允许用户通过以下 kubelet 参数自定义垃圾收集策略:
<!--
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
@ -139,12 +113,13 @@ per container. Default is 1.
Default is -1, which means there is no global limit.
-->
1. `minimum-container-ttl-duration`,完成的容器在被垃圾回收之前的最小年龄,默认是 0 分钟。
   这意味着每个完成的容器都会被执行垃圾回收。
2. `maximum-dead-containers-per-container`,每个容器要保留的旧实例的最大数量。默认值为 1。
3. `maximum-dead-containers`,要全局保留的旧容器实例的最大数量。
   默认值是 -1,意味着没有全局限制。
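For symmetry, a similarly hedged sketch (not from the original page) of the three container garbage collection flags; the values below are arbitrary and only meant to show the shape of the flags:

```shell
# Illustrative values: keep finished containers for at least one hour,
# retain up to 2 dead containers per (pod, container) pair, and cap the
# total number of dead containers on the node at 100.
kubelet \
  --minimum-container-ttl-duration=1h \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100
```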
<!--
Containers can potentially be garbage collected before their usefulness has expired. These containers
@ -152,45 +127,29 @@ can contain logs and other data that can be useful for troubleshooting. A suffic
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.

See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
-->
容器可能会在其效用过期之前被垃圾回收。这些容器可能包含日志和其他对故障诊断有用的数据。
强烈建议为 `maximum-dead-containers-per-container` 设置一个足够大的值,以便每个预期容器至少保留一个死亡容器。
由于同样的原因,`maximum-dead-containers` 也建议使用一个足够大的值。
查阅[这个 Issue](https://github.com/kubernetes/kubernetes/issues/13287) 获取更多细节。
<!--
## Deprecation

Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.

Including:
-->
## 弃用 {#deprecation}

这篇文档中的一些 kubelet 垃圾收集(Garbage Collection)功能将在未来被 kubelet 驱逐回收(eviction)所替代。

包括:

<!--
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
@ -201,16 +160,21 @@ Including:
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
-->
| 现存参数 | 新参数 | 解释 |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` 或 `--eviction-soft` | 现存的驱逐回收信号可以触发镜像垃圾回收 |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | 驱逐回收实现相同行为 |
| `--maximum-dead-containers` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--maximum-dead-containers-per-container` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--minimum-container-ttl-duration` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--low-diskspace-threshold-mb` | `--eviction-hard` 或 `--eviction-soft` | 驱逐回收将磁盘阈值泛化到其他资源 |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | 驱逐回收将磁盘压力转换到其他资源 |
## {{% heading "whatsnext" %}}

<!--
See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
-->
查阅[配置资源不足情况的处理](/zh/docs/tasks/administer-cluster/out-of-resource/)了解更多细节。

@ -9,21 +9,24 @@ weight: 40
<!--
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/).
-->
你已经部署了应用并通过服务暴露它。然后呢?
Kubernetes 提供了一些工具来帮助管理你的应用部署,包括扩缩容和更新。
我们将更深入讨论的特性包括
[配置文件](/zh/docs/concepts/configuration/overview/)和
[标签](/zh/docs/concepts/overview/working-with-objects/labels/)。
<!-- body -->

<!--
## Organizing resource configurations

Many applications require multiple resources to be created, such as a Deployment and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:
-->
## 组织资源配置

许多应用需要创建多个资源,例如 Deployment 和 Service。
可以通过将多个资源组合在同一个文件中(在 YAML 中以 `---` 分隔)
来简化对它们的管理。例如:
{{< codenew file="application/nginx-app.yaml" >}}
@ -36,7 +39,7 @@ Multiple resources can be created the same way as a single resource:
kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml
```

```
service/my-nginx-svc created
deployment.apps/my-nginx created
```
@ -44,7 +47,9 @@ deployment.apps/my-nginx created
<!--
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
-->
资源将按照它们在文件中的顺序创建。
因此,最好先指定服务,这样在控制器(例如 Deployment创建 Pod 时能够
确保调度器可以将与服务关联的多个 Pod 分散到不同节点。
<!--
`kubectl apply` also accepts multiple `-f` arguments:
@ -71,9 +76,11 @@ It is a recommended practice to put resources related to the same microservice o
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
-->
`kubectl` 将读取任何后缀为 `.yaml`、`.yml` 或者 `.json` 的文件。
建议的做法是,将同一个微服务或同一应用层相关的资源放到同一个文件中,
将同一个应用相关的所有文件按组存放到同一个目录中。
如果应用的各层使用 DNS 相互绑定,那么你可以简单地将堆栈的所有组件一起部署。
还可以使用 URL 作为配置源,便于直接使用已经提交到 Github 上的配置文件进行部署:
@ -81,7 +88,7 @@ A URL can also be specified as a configuration source, which is handy for deploy
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/zh/examples/application/nginx/nginx-deployment.yaml
```

```
deployment.apps/my-nginx created
```
@ -92,13 +99,15 @@ Resource creation isn't the only operation that `kubectl` can perform in bulk. I
-->
## kubectl 中的批量操作

资源创建并不是 `kubectl` 可以批量执行的唯一操作。
`kubectl` 还可以从配置文件中提取资源名,以便执行其他操作,
特别是删除你之前创建的资源:
```shell
kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml
```

```
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
@ -115,13 +124,14 @@ kubectl delete deployments/my-nginx services/my-nginx-svc
<!--
For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using `-l` or `--selector`, to filter resources by their labels:
-->
对于资源数目较大的情况,你会发现使用 `-l` 或 `--selector`
指定筛选器(标签查询)能很容易根据标签筛选资源:
```shell
kubectl delete deployment,services -l app=nginx
```

```
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
@ -129,13 +139,14 @@ service "my-nginx-svc" deleted
<!--
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
-->
由于 `kubectl` 用来输出资源名称的语法与其所接受的资源名称语法相同,
所以很容易使用 `$()``xargs` 进行链式操作:
```shell
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
```

```
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
my-nginx-svc   LoadBalancer   10.0.0.208   <pending>     80/TCP    0s
```
@ -144,17 +155,21 @@ my-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s
With the above commands, we first create resources under `examples/application/nginx/` and print the resources created with `-o name` output format
(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`.
-->
上面的命令中,我们首先使用 `examples/application/nginx/` 下的配置文件创建资源,
并使用 `-o name` 的输出格式(以"资源/名称"的形式打印每个资源)打印所创建的资源。
然后,我们通过 `grep` 来过滤 "service",最后再打印 `kubectl get` 的内容。
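Since the text also mentions `xargs`, here is the same pipeline written with `xargs` instead of `$()`; it is only a sketch (not part of the original page) that reuses the example directory from above:

```shell
# Create the resources, keep only the service entries from the `-o name`
# output, and hand each of them to `kubectl get`.
kubectl create -f docs/concepts/cluster-administration/nginx/ -o name \
  | grep service \
  | xargs -n 1 kubectl get
```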
<!--
If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag.
-->
如果你碰巧在某个路径下的多个子路径中组织资源,那么也可以递归地在所有子路径上
执行操作,方法是在 `--filename,-f` 后面指定 `--recursive` 或者 `-R`
<!--
For instance, assume there is a directory `project/k8s/development` that holds all of the manifests needed for the development environment, organized by resource type:
-->
例如,假设有一个目录路径为 `project/k8s/development`,它保存开发环境所需的
所有清单,并按资源类型组织:
```
project/k8s/development
@ -176,20 +191,20 @@ By default, performing a bulk operation on `project/k8s/development` will stop a
kubectl apply -f project/k8s/development
```

```
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
```
<!--
Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as such:
-->
正确的做法是,在 `--filename,-f` 后面指定 `--recursive` 或者 `-R` 参数:
```shell
kubectl apply -f project/k8s/development --recursive
```

```
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```
@ -200,7 +215,8 @@ The `--recursive` flag works with any operation that accepts the `--filename,-f`
The `--recursive` flag also works when multiple `-f` arguments are provided:
-->
`--recursive` 可以用于接受 `--filename,-f` 参数的任何操作,例如:
`kubectl {create,get,delete,describe,rollout}` 等。
有多个 `-f` 参数出现的时候,`--recursive` 参数也能正常工作:
@ -208,7 +224,7 @@ The `--recursive` flag also works when multiple `-f` arguments are provided:
kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive
```

```
namespace/development created
namespace/staging created
configmap/my-config created
@ -219,7 +235,8 @@ persistentvolumeclaim/my-pvc created
<!--
If you're interested in learning more about `kubectl`, go ahead and read [kubectl Overview](/docs/reference/kubectl/overview/).
-->
如果你有兴趣进一步学习关于 `kubectl` 的内容,请阅读
[kubectl 概述](/zh/docs/reference/kubectl/overview/)。
<!--
## Using labels effectively ## Using labels effectively
@ -228,7 +245,8 @@ The examples we've used so far apply at most a single label to any resource. The
-->
## 有效地使用标签

到目前为止我们使用的示例中的资源最多使用了一个标签。
在许多情况下,应使用多个标签来区分集合。
<!--
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
@ -254,9 +272,7 @@ Redis 的主节点和从节点会有不同的 `tier` 标签,甚至还有一个
     role: master
```
<!-- and -->
以及
```yaml
@ -276,7 +292,7 @@ kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
kubectl get pods -Lapp -Ltier -Lrole
```

```
NAME                 READY   STATUS    RESTARTS   AGE   APP         TIER       ROLE
guestbook-fe-4nlpb   1/1     Running   0          1m    guestbook   frontend   <none>
guestbook-fe-ght6d   1/1     Running   0          1m    guestbook   frontend   <none>
@ -291,7 +307,8 @@ my-nginx-o0ef1 1/1 Running 0 29m nginx
```shell
kubectl get pods -lapp=guestbook,role=slave
```

```
NAME                          READY   STATUS    RESTARTS   AGE
guestbook-redis-slave-2q2yf   1/1     Running   0          3m
guestbook-redis-slave-qgazl   1/1     Running   0          3m
@ -299,20 +316,22 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m
<!--
## Canary deployments

Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out.
-->
## 金丝雀部署(Canary Deployments) {#canary-deployments}

另一个需要多标签的场景是用来区分同一组件的不同版本或者不同配置的多个部署。
另一个需要多标签的场景是用来区分同一组件的不同版本或者不同配置的多个部署。
常见的做法是部署一个使用*金丝雀发布*来部署新应用版本
(在 Pod 模板中通过镜像标签指定),保持新旧版本应用同时运行。
这样,新版本在完全发布之前也可以接收实时的生产流量。
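As an illustrative follow-up (not part of the original text), once both tracks exist you can list them side by side using the label columns introduced earlier; the label values follow the guestbook frontend example used in this section:

```shell
# Show the stable and canary Pods together, with the track label as a column.
kubectl get pods -l app=guestbook,tier=frontend -L track
```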
<!--
For instance, you can use a `track` label to differentiate different releases.

The primary, stable release would have a `track` label with value as `stable`:
-->
例如,可以使用 `track` 标签来区分不同的版本。

主要稳定的发行版将有一个 `track` 标签,其值为 `stable`:
@ -331,7 +350,8 @@ The primary, stable release would have a `track` label with value as `stable`:
<!--
and then you can create a new release of the guestbook frontend that carries the `track` label with different value (i.e. `canary`), so that two sets of pods would not overlap:
-->
然后,你可以创建 guestbook 前端的新版本,让这些版本的 `track` 标签带有不同的值
(即 `canary`),以便两组 Pod 不会重叠:
```yaml
     name: frontend-canary
@ -360,7 +380,7 @@ The frontend service would span both sets of replicas by selecting the common su
You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1).
Once you're confident, you can update the stable track to the new application release and remove the canary one.
-->
可以调整 `stable` 和 `canary` 版本的副本数量,以确定每个版本将接收实时生产流量的比例(在本例中为 3:1)。
一旦有信心,就可以将新版本应用的 `track` 标签的值从 `canary` 替换为 `stable`,并且将老版本应用删除。
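A hedged sketch of that ratio adjustment (not from the original page), assuming the stable Deployment is named `frontend` and the canary one `frontend-canary` as in the snippets above:

```shell
# Keep a 3:1 ratio between the stable and canary tracks (illustrative values).
kubectl scale deployment/frontend --replicas=3
kubectl scale deployment/frontend-canary --replicas=1
```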
<!--
For a more concrete example, check the [tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary).
@ -373,16 +393,17 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
For example, if you want to label all your nginx pods as frontend tier, simply run:
-->
## 更新标签 {#updating-labels}

有时,现有的 pod 和其它资源需要在创建新资源之前重新标记。
这可以用 `kubectl label` 完成。
例如,如果想要将所有 nginx pod 标记为前端层,只需运行:
```shell
kubectl label pods -l app=nginx tier=fe
```

```
pod/my-nginx-2035384211-j5fhi labeled
pod/my-nginx-2035384211-u2c7e labeled
pod/my-nginx-2035384211-u3t6x labeled
@ -392,12 +413,14 @@ pod/my-nginx-2035384211-u3t6x labeled
This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe".
To see the pods you just labeled, run:
-->
首先用标签 "app=nginx" 过滤所有的 Pod,然后用 "tier=fe" 标记它们。
想要查看你刚才标记的 Pod请运行
```shell
kubectl get pods -l app=nginx -L tier
```

```
NAME                        READY   STATUS    RESTARTS   AGE   TIER
my-nginx-2035384211-j5fhi   1/1     Running   0          23m   fe
my-nginx-2035384211-u2c7e   1/1     Running   0          23m   fe
@ -409,18 +432,22 @@ This outputs all "app=nginx" pods, with an additional label column of pods' tier
For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label).
-->
这将输出所有 "app=nginx" 的 Pod,并有一个额外的描述 Pod 的 tier 的标签列
(用参数 `-L` 或者 `--label-columns` 标明)。
想要了解更多信息,请参考
[标签](/zh/docs/concepts/overview/working-with-objects/labels/) 和
[`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands/#label)
命令文档。
<!--
## Updating annotations

Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example:
-->
## 更新注解 {#updating-annotations}

有时,可能希望将注解附加到资源中。注解是 API 客户端(如工具、库等)用于检索的任意非标识元数据。这可以通过 `kubectl annotate` 来完成。例如:
```shell
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
@ -438,33 +465,38 @@ metadata:
<!--
For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) document.
-->
想要了解更多信息,请参考
[注解](/zh/docs/concepts/overview/working-with-objects/annotations/)和
[`kubectl annotate`](/docs/reference/generated/kubectl/kubectl-commands/#annotate)
命令文档。
<!--
## Scaling your application

When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do:
-->
## 扩缩你的应用

当应用上的负载增长或收缩时,使用 `kubectl` 能够轻松实现规模的扩缩。
例如,要将 nginx 副本的数量从 3 减少到 1请执行以下操作
```shell
kubectl scale deployment/my-nginx --replicas=1
```

```
deployment.extensions/my-nginx scaled
```
<!--
Now you only have one pod managed by the deployment.
-->
现在,你的 Deployment 管理的 Pod 只有一个了。
```shell
kubectl get pods -l app=nginx
```

```
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-2035384211-j5fhi   1/1     Running   0          30m
```
@ -477,7 +509,8 @@ To have the system automatically choose the number of nginx replicas as needed,
```shell
kubectl autoscale deployment/my-nginx --min=1 --max=3
```

```
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```
@ -486,9 +519,12 @@ Now your nginx replicas will be scaled up and down as needed, automatically.
For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and [horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document.
-->
现在,你的 nginx 副本将根据需要自动地增加或者减少。

想要了解更多信息,请参考
[kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale)命令文档、
[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) 命令文档和
[水平 Pod 自动伸缩](/zh/docs/tasks/run-application/horizontal-pod-autoscale/) 文档。
<!--
## In-place updates of resources
@ -497,7 +533,7 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you
-->
## 就地更新资源

有时,有必要对所创建的资源进行小范围、无干扰地更新。

### kubectl apply
@ -506,12 +542,15 @@ It is suggested to maintain a set of configuration files in source control (see
so that they can be maintained and versioned along with the code for the resources they configure.
Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster.
-->
建议在源代码管理中维护一组配置文件
(参见[配置即代码](https://martinfowler.com/bliki/InfrastructureAsCode.html)
这样,它们就可以和应用代码一样进行维护和版本管理。
然后,你可以用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) 将配置变更应用到集群中。
<!--
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
-->
这个命令将会把推送的版本与以前的版本进行比较,并应用所做的更改,但是不会自动覆盖任何你没有指定更改的属性。
```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
@ -534,9 +573,7 @@ All subsequent calls to `kubectl apply`, and other commands that modify the conf
所有后续调用 `kubectl apply` 以及其它修改配置的命令,如 `kubectl replace` 和 `kubectl edit`,都将更新注解,并允许随后调用的 `kubectl apply` 使用三方差异进行检查和执行删除。
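As a side note (not in the original text), one quick way to inspect the configuration recorded by `kubectl apply` is the `view-last-applied` subcommand, shown here against the `my-nginx` Deployment used throughout this page:

```shell
# Print the kubectl.kubernetes.io/last-applied-configuration annotation
# that the three-way diff is computed against.
kubectl apply view-last-applied deployment/my-nginx
```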
<!--
To use apply, always create resource initially with either `kubectl apply` or `kubectl create --save-config`.
-->
{{< note >}}
想要使用 apply,请始终使用 `kubectl apply` 或 `kubectl create --save-config` 创建资源。
@ -547,7 +584,7 @@ To use apply, always create resource initially with either `kubectl apply` or `k
<!--
Alternatively, you may also update resources with `kubectl edit`:
-->
或者,也可以使用 `kubectl edit` 更新资源:
```shell
kubectl edit deployment/my-nginx
@ -569,13 +606,12 @@ deployment.apps/my-nginx configured
rm /tmp/nginx.yaml
```
<!--
This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.

For more information, please see [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) document.
-->
这使你可以更加容易地进行更重大的更改。请注意,可以使用 `EDITOR` 或 `KUBE_EDITOR` 环境变量来指定编辑器。

想要了解更多信息,请参考 [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) 文档。
@ -588,8 +624,8 @@ JSON merge patch, and strategic merge patch. See
and
[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch).
-->
可以使用 `kubectl patch` 来更新 API 对象。此命令支持 JSON patch、JSON merge patch、以及 strategic merge patch。请参考
[使用 kubectl patch 更新 API 对象](/zh/docs/tasks/run-application/update-api-object-kubectl-patch/)
和 [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch) 命令文档。
@ -598,14 +634,15 @@ and
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
-->
## 破坏性的更新 {#disruptive-updates}

在某些情况下,你可能需要更新某些初始化后无法更新的资源字段,或者你可能只想立即进行递归更改,例如修复 Deployment 创建的不正常的 Pod。若要更改这些字段,请使用 `replace --force`,它将删除并重新创建资源。在这种情况下,可以简单地修改原始配置文件:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```

```
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
```
@ -618,29 +655,30 @@ deployment.apps/my-nginx replaced
<!--
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
-->
在某些时候,最终需要更新已部署的应用,通常都是通过指定新的镜像或镜像标签,如上面的金丝雀发布的场景中所示。`kubectl` 支持几种更新操作,每种更新操作都适用于不同的场景。
<!--
We'll guide you through how to create and update applications with Deployments.
-->
我们将指导你通过 Deployment 如何创建和更新应用。

<!--
Let's say you were running version 1.14.2 of nginx:
-->
假设你正运行的是 1.14.2 版本的 nginx:
```shell
kubectl create deployment my-nginx --image=nginx:1.14.2
```

```
deployment.apps/my-nginx created
```
<!--
To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
-->
要更新到 1.16.1 版本,只需使用我们前面学到的 kubectl 命令将
`.spec.template.spec.containers[0].image``nginx:1.14.2` 修改为 `nginx:1.16.1`
```shell
kubectl edit deployment/my-nginx
@ -649,18 +687,16 @@ kubectl edit deployment/my-nginx
<!--
That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/).
-->
没错!就是这样!Deployment 将在后台逐步更新已经部署的 nginx 应用。
它确保在更新过程中,只有一定数量的旧副本被关闭,并且只会在期望的 Pod 数量之上额外创建一定数量的新副本。
想要了解更多细节,请参考 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/)。
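As a hedged alternative sketch (not part of the original walkthrough), the same image change can be made without opening an editor; this assumes the container is named `nginx`, which is what `kubectl create deployment my-nginx --image=nginx:1.14.2` produces:

```shell
# Switch the container image directly and let the Deployment roll it out.
kubectl set image deployment/my-nginx nginx=nginx:1.16.1
```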
## {{% heading "whatsnext" %}}
<!--
- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/tasks/debug-application-cluster/debug-application-introspection/)
- [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/)
-->
- 学习[如何使用 `kubectl` 观察和调试应用](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/)
- 阅读[配置最佳实践和技巧](/zh/docs/concepts/configuration/overview/)

@ -13,20 +13,19 @@ understand exactly how it is expected to work. There are 4 distinct networking
problems to address:

1. Highly-coupled container-to-container communications: this is solved by
   {{< glossary_tooltip text="Pods" term_id="pod" >}} and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
-->
集群网络系统是 Kubernetes 的核心部分,但是想要准确了解它的工作原理可是个不小的挑战。
下面列出的是网络系统的四个主要问题:

1. 高度耦合的容器间通信:这个已经被 {{< glossary_tooltip text="Pods" term_id="pod" >}}
   和 `localhost` 通信解决了。
2. Pod 间通信:这个是本文档的重点要讲述的。
3. Pod 和服务间通信:这个已经在[服务](/zh/docs/concepts/services-networking/service/)里讲述过了。
4. 外部和服务间通信:这也已经在[服务](/zh/docs/concepts/services-networking/service/)里讲述过了。
<!-- body -->
@ -42,9 +41,13 @@ insert dynamic port numbers into configuration blocks, services have to know
how to find each other, etc. Rather than deal with this, Kubernetes takes a
different approach.
-->
Kubernetes 的宗旨就是在应用之间共享机器。
通常来说,共享机器需要两个应用之间不能使用相同的端口,但是在多个应用开发者之间
去大规模地协调端口是件很困难的事情,尤其是还要让用户暴露在他们控制范围之外的集群级别的问题上。
动态分配端口也会给系统带来很多复杂度 - 每个应用都需要设置一个端口的参数,
而 API 服务器还需要知道如何将动态端口数值插入到配置模块中,服务也需要知道如何找到对方等等。
与其去解决这些问题Kubernetes 选择了其他不同的方法。
<!--
## The Kubernetes network model
@ -68,7 +71,24 @@ Linux):
* pods in the host network of a node can communicate with all pods on all
  nodes without NAT
-->
## Kubernetes 网络模型 {#the-kubernetes-network-model}
每一个 `Pod` 都有它自己的 IP 地址,这就意味着你不需要显式地在每个 `Pod` 之间创建链接,
你几乎不需要处理容器端口到主机端口之间的映射。
这将创建一个干净的、向后兼容的模型,在这个模型里,从端口分配、命名、服务发现、
负载均衡、应用配置和迁移的角度来看,`Pod` 可以被视作虚拟机或者物理主机。
Kubernetes 对所有网络设施的实施,都需要满足以下的基本要求(除非有设置一些特定的网络分段策略):
* 节点上的 Pod 可以不通过 NAT 和其他任何节点上的 Pod 通信
* 节点上的代理(比如:系统守护进程、kubelet)可以和节点上的所有 Pod 通信
备注:仅针对那些支持 `Pods` 在主机网络中运行的平台(比如Linux)
* 那些运行在节点的主机网络里的 Pod 可以不通过 NAT 和所有节点上的 Pod 通信
<!--
This model is not only less complex overall, but it is principally compatible This model is not only less complex overall, but it is principally compatible
with the desire for Kubernetes to enable low-friction porting of apps from VMs with the desire for Kubernetes to enable low-friction porting of apps from VMs
to containers. If your job previously ran in a VM, your VM had an IP and could to containers. If your job previously ran in a VM, your VM had an IP and could
@ -79,24 +99,15 @@ share their network namespaces - including their IP address. This means that
containers within a `Pod` can all reach each other's ports on `localhost`. This containers within a `Pod` can all reach each other's ports on `localhost`. This
also means that containers within a `Pod` must coordinate port usage, but this also means that containers within a `Pod` must coordinate port usage, but this
is no different from processes in a VM. This is called the "IP-per-pod" model. is no different from processes in a VM. This is called the "IP-per-pod" model.
--> -->
这个模型不仅不复杂,而且还和 Kubernetes 希望以较低成本实现应用从虚拟机向容器迁移的初衷相兼容,
如果你的工作开始是在虚拟机中运行的,你的虚拟机有一个 IP,
这样就可以和其他的虚拟机进行通信,这是基本相同的模型。

Kubernetes 的 IP 地址存在于 `Pod` 范围内 - 容器分享它们的网络命名空间 - 包括它们的 IP 地址。
这就意味着 `Pod` 内的容器都可以通过 `localhost` 到达各个端口。
这也意味着 `Pod` 内的容器都需要相互协调端口的使用,但是这和虚拟机中的进程似乎没有什么不同,
这也被称为“一个 Pod 一个 IP”模型。
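To make the IP-per-pod model above concrete, here is an illustrative sketch only (not part of the original page); the pod names are placeholders and `<pod-ip>` stands for the address reported in the `IP` column:

```shell
# Look up a Pod's cluster IP, then reach it directly from another Pod:
# no NAT and no host-port mapping is involved.
kubectl get pod some-pod -o wide
kubectl exec other-pod -- curl http://<pod-ip>
```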
<!--
How this is implemented is a detail of the particular container runtime in use.
@ -108,7 +119,9 @@ blind to the existence or non-existence of host ports.
-->
如何实现这一点是所使用的具体容器运行时的实现细节。

也可以在 `node` 本身通过端口去请求你的 `Pod`(称之为主机端口),
但这是一个很特殊的操作。转发方式如何实现也是容器运行时的细节。
`Pod` 自己并不知道这些主机端口是否存在。
<!--
## How to implement the Kubernetes networking model
@ -122,9 +135,10 @@ imply any preferential status.
-->
## 如何实现 Kubernetes 的网络模型

有很多种方式可以实现这种网络模型,本文档并不是对各种实现技术的详细研究,
但是希望可以作为对各种技术的详细介绍,并且成为你研究的起点。
接下来的网络技术是按照首字母排序,顺序本身并无其他意义。
<!--
### ACI
@ -132,7 +146,10 @@ imply any preferential status.
[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers. [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf).
-->
### ACI

[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html)
提供了一个集成覆盖网络和底层 SDN 的解决方案来支持容器、虚拟机和其他裸机服务器。
[ACI](https://www.github.com/noironetworks/aci-containers) 为 ACI 提供了容器网络集成。
点击[这里](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf)查看概述。
<!-- <!--
### Antrea ### Antrea
@ -142,12 +159,16 @@ Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to i
--> -->
### Antrea ### Antrea
[Antrea](https://github.com/vmware-tanzu/antrea) 项目是一个开源的,旨在成为 Kubernetes 原生的网络解决方案。它利用 Open vSwitch 作为网络数据平面。Open vSwitch 是一个高性能可编程的虚拟交换机,支持 Linux 和 Windows 平台。Open vSwitch 使 Antrea 能够以高性能和高效的方式实现 Kubernetes 的网络策略。借助 Open vSwitch 可编程的特性, Antrea 能够在 Open vSwitch 之上实现广泛的网络,安全功能和服务。 [Antrea](https://github.com/vmware-tanzu/antrea) 项目是一个开源的联网解决方案,旨在成为
Kubernetes 原生的网络解决方案。它利用 Open vSwitch 作为网络数据平面。
Open vSwitch 是一个高性能可编程的虚拟交换机,支持 Linux 和 Windows 平台。
Open vSwitch 使 Antrea 能够以高性能和高效的方式实现 Kubernetes 的网络策略。
借助 Open vSwitch 可编程的特性Antrea 能够在 Open vSwitch 之上实现广泛的联网、安全功能和服务。
<!-- <!--
### AOS from Apstra ### AOS from Apstra
[AOS](http://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs. [AOS](https://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment. The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment.
@ -155,19 +176,27 @@ AOS has a rich set of REST API endpoints that enable Kubernetes to quickly chang
AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux. AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux.
Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/ Details on how the AOS system works can be accessed here: https://www.apstra.com/products/how-it-works/
--> -->
### Apstra 的 AOS ### Apstra 的 AOS
[AOS](http://www.apstra.com/products/aos/) 是一个基于意图的网络系统可以通过一个简单的集成平台创建和管理复杂的数据中心环境。AOS 利用高度可扩展的分布式设计来消除网络中断,同时将成本降至最低。 [AOS](https://www.apstra.com/products/aos/) 是一个基于意图的网络系统,
可以通过一个简单的集成平台创建和管理复杂的数据中心环境。
AOS 利用高度可扩展的分布式设计来消除网络中断,同时将成本降至最低。
AOS 参考设计当前支持三层连接的主机,这些主机消除了旧的两层连接的交换问题。这些三层连接的主机可以是 LinuxDebian、Ubuntu、CentOS系统它们直接在机架式交换机TOR的顶部创建 BGP 邻居关系。AOS 自动执行路由邻接,然后提供对 Kubernetes 部署中常见的路由运行状况注入RHI的精细控制。 AOS 参考设计当前支持三层连接的主机,这些主机消除了旧的两层连接的交换问题。
这些三层连接的主机可以是 LinuxDebian、Ubuntu、CentOS系统
它们直接在机架式交换机TOR的顶部创建 BGP 邻居关系。
AOS 自动执行路由邻接,然后提供对 Kubernetes 部署中常见的路由运行状况注入RHI的精细控制。
AOS 具有一组丰富的 REST API 端点,这些端点使 Kubernetes 能够根据应用程序需求快速更改网络策略。进一步的增强功能将用于网络设计的 AOS Graph 模型与工作负载供应集成在一起,从而为私有云和公共云提供端到端管理系统。 AOS 具有一组丰富的 REST API 端点,这些端点使 Kubernetes 能够根据应用程序需求快速更改网络策略。
进一步的增强功能将用于网络设计的 AOS Graph 模型与工作负载供应集成在一起,
从而为私有云和公共云提供端到端管理系统。
AOS 支持使用包括 Cisco、Arista、Dell、Mellanox、HPE 在内的制造商提供的通用供应商设备,以及大量白盒系统和开放网络操作系统,例如 Microsoft SONiC、Dell OPX 和 Cumulus Linux 。 AOS 支持使用包括 Cisco、Arista、Dell、Mellanox、HPE 在内的制造商提供的通用供应商设备,
以及大量白盒系统和开放网络操作系统,例如 Microsoft SONiC、Dell OPX 和 Cumulus Linux。
想要更详细地了解 AOS 系统是如何工作的可以点击这里: http://www.apstra.com/products/how-it-works/ 想要更详细地了解 AOS 系统是如何工作的可以点击这里https://www.apstra.com/products/how-it-works/
<!-- <!--
### AWS VPC CNI for Kubernetes ### AWS VPC CNI for Kubernetes
@ -180,11 +209,18 @@ Additionally, the CNI can be run alongside [Calico for network policy enforcemen
--> -->
### Kubernetes 的 AWS VPC CNI ### Kubernetes 的 AWS VPC CNI
[AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) 为 Kubernetes 集群提供了集成的 AWS 虚拟私有云VPC网络。该 CNI 插件提供了高吞吐量和可用性,低延迟以及最小的网络抖动。此外,用户可以使用现有的 AWS VPC 网络和安全最佳实践来构建 Kubernetes 集群。这包括使用 VPC 流日志VPC 路由策略和安全组进行网络流量隔离的功能。 [AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) 为 Kubernetes 集群提供了集成的
AWS 虚拟私有云VPC网络。该 CNI 插件提供了高吞吐量和可用性,低延迟以及最小的网络抖动。
此外,用户可以使用现有的 AWS VPC 网络和安全最佳实践来构建 Kubernetes 集群。
这包括使用 VPC 流日志、VPC 路由策略和安全组进行网络流量隔离的功能。
使用该 CNI 插件,可使 Kubernetes Pods 在 Pod 中拥有与在 VPC 网络上相同的 IP 地址。CNI 将 AWS 弹性网络接口ENI分配给每个 Kubernetes 节点,并将每个 ENI 的辅助 IP 范围用于该节点上的 Pod 。CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的启动时间并且能够支持多达2000个节点的大型集群。 使用该 CNI 插件,可使 Kubernetes Pod 拥有与在 VPC 网络上相同的 IP 地址。
CNI 将 AWS 弹性网络接口ENI分配给每个 Kubernetes 节点,并将每个 ENI 的辅助 IP 范围用于该节点上的 Pod 。
CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的启动时间并且能够支持多达2000个节点的大型集群。
此外CNI可以与[用于执行网络策略的 Calico](https://docs.aws.amazon.com/eks/latest/userguide/calico.html)一起运行。 AWS VPC CNI项目是开源的查看 [GitHub 上的文档](https://github.com/aws/amazon-vpc-cni-k8s)。 此外CNI 可以与
[用于执行网络策略的 Calico](https://docs.aws.amazon.com/eks/latest/userguide/calico.html)一起运行。
AWS VPC CNI 项目是开源的,请查看 [GitHub 上的文档](https://github.com/aws/amazon-vpc-cni-k8s)。
<!-- <!--
### Azure CNI for Kubernetes ### Azure CNI for Kubernetes
@ -194,9 +230,16 @@ Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https:/
--> -->
### Kubernetes 的 Azure CNI ### Kubernetes 的 Azure CNI
[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) 是一个[开源插件](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md),将 Kubernetes Pods 和 Azure 虚拟网络(也称为 VNet集成在一起可提供与 VN 相当的网络性能。Pod 可以通过 Express Route 或者 站点到站点的 VPN 来连接到对等的 VNet ,也可以从这些网络来直接访问 Pod。Pod 可以访问受服务端点或者受保护链接的 Azure 服务,比如存储和 SQL。你可以使用 VNet 安全策略和路由来筛选 Pod 流量。该插件通过利用在 Kubernetes 节点的网络接口上预分配的辅助 IP 池将 VNet 分配给 Pod 。 [Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview)
是一个[开源插件](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md)
将 Kubernetes Pods 和 Azure 虚拟网络(也称为 VNet集成在一起可提供与 VM 相当的网络性能。
Pod 可以通过 Express Route 或者 站点到站点的 VPN 来连接到对等的 VNet
也可以从这些网络来直接访问 Pod。Pod 可以访问受服务端点或者受保护链接的 Azure 服务,比如存储和 SQL。
你可以使用 VNet 安全策略和路由来筛选 Pod 流量。
该插件通过利用在 Kubernetes 节点的网络接口上预分配的辅助 IP 池将 VNet 分配给 Pod 。
Azure CNI 可以在 [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni) 中获得。 Azure CNI 可以在
[Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni) 中获得。
<!-- <!--
### Big Cloud Fabric from Big Switch Networks ### Big Cloud Fabric from Big Switch Networks
@ -205,15 +248,24 @@ Azure CNI 可以在 [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/
With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed. With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/). BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
--> -->
### Big Switch Networks 的 Big Cloud Fabric ### Big Switch Networks 的 Big Cloud Fabric
[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) 是一个基于云原生的网络架构,旨在在私有云或者本地环境中运行 Kubernetes。它使用统一的物理和虚拟 SDNBig Cloud Fabric 解决了固有的容器网络问题,比如负载均衡、可见性、故障排除、安全策略和容器流量监控。 [Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) 是一个基于云原生的网络架构,
旨在在私有云或者本地环境中运行 Kubernetes。
它使用统一的物理和虚拟 SDNBig Cloud Fabric 解决了固有的容器网络问题,
比如负载均衡、可见性、故障排除、安全策略和容器流量监控。
在 Big Cloud Fabric 的虚拟 Pod 多租户架构的帮助下,容器编排系统(比如 Kubernetes、RedHat OpenShift、Mesosphere DC/OS 和 Docker Swarm将于VM本地编排系统比如 VMware、OpenStack 和 Nutanix进行本地集成。客户将能够安全地互联任意数量的这些集群并且在需要时启用他们之间的租户间通信。 在 Big Cloud Fabric 的虚拟 Pod 多租户架构的帮助下,容器编排系统
(比如 Kubernetes、RedHat OpenShift、Mesosphere DC/OS 和 Docker Swarm
将与 VM 本地编排系统(比如 VMware、OpenStack 和 Nutanix进行本地集成。
客户将能够安全地互联任意数量的这些集群,并且在需要时启用他们之间的租户间通信。
在最新的 [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html) 上BCF 被 Gartner 认为是非常有远见的。而 BCF 的一条关于 Kubernetes 的本地部署(其中包括 Kubernetes、DC/OS 和在不同地理区域的多个 DC 上运行的 VMware也在[这里](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/)被引用。 在最新的 [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html) 上,
BCF 被 Gartner 认为是非常有远见的。
而 BCF 的一个 Kubernetes 本地部署实例(其中包括在不同地理区域的多个数据中心上
运行的 Kubernetes、DC/OS 和 VMware也在[这里](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/)被引用。
<!-- <!--
### Cilium ### Cilium
@ -226,20 +278,31 @@ addressing, and it can be used in combination with other CNI plugins.
--> -->
### Cilium ### Cilium
[Cilium](https://github.com/cilium/cilium) 是一个开源软件用于提供并透明保护应用容器间的网络连接。Cilium 支持 L7/HTTP ,可以在 L3-L7 上通过使用与网络分离的基于身份的安全模型寻址来实施网络策略,并且可以与其他 CNI 插件结合使用。 [Cilium](https://github.com/cilium/cilium) 是一个开源软件,用于提供并透明保护应用容器间的网络连接。
Cilium 支持 L7/HTTP可以在 L3-L7 上通过使用与网络分离的基于身份的安全模型寻址来实施网络策略,
并且可以与其他 CNI 插件结合使用。
<!-- <!--
### CNI-Genie from Huawei ### CNI-Genie from Huawei
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/). [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Romana](https://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin. CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
--> -->
### 华为的 CNI-Genie ### 华为的 CNI-Genie
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 是一个 CNI 插件,可以让 Kubernetes 在运行时允许不同的 [Kubernetes 的网络模型](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model)的[实现同时被访问](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables)。这包括以 [CNI 插件](https://github.com/containernetworking/cni#3rd-party-plugins)运行的任何实现,比如 [Flannel](https://github.com/coreos/flannel#flannel)、[Calico](http://docs.projectcalico.org/)、[Romana](http://romana.io)、[Weave-net](https://www.weave.works/products/weave-net/)。 [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 是一个 CNI 插件,
可以让 Kubernetes 在运行时[同时访问](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables)
不同的 [Kubernetes 网络模型](#the-kubernetes-network-model)实现。
这包括以
[CNI 插件](https://github.com/containernetworking/cni#3rd-party-plugins)运行的任何实现,比如
[Flannel](https://github.com/coreos/flannel#flannel)、
[Calico](https://docs.projectcalico.org/)、
[Romana](https://romana.io)、
[Weave-net](https://www.weave.works/products/weave-net/)。
CNI-Genie 还支持[将多个 IP 地址分配给 Pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multi-ip-addresses-per-pod),每个都来自不同的 CNI 插件。 CNI-Genie 还支持[将多个 IP 地址分配给 Pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multi-ip-addresses-per-pod)
每个都来自不同的 CNI 插件。
<!-- <!--
### cni-ipvlan-vpc-k8s ### cni-ipvlan-vpc-k8s
@ -260,28 +323,41 @@ network complexity required to deploy Kubernetes at scale within AWS.
--> -->
### cni-ipvlan-vpc-k8s ### cni-ipvlan-vpc-k8s
[cni-ipvlan-vpc-k8s](https://github.com/lyft/cni-ipvlan-vpc-k8s) 包含了一组 CNI 和 IPAM 插件来提供一个简单的、本地主机、低延迟、高吞吐量以及通过使用 Amazon 弹性网络接口ENI并使用 Linux 内核的 IPv2 驱动程序以 L2 模式将 AWS 管理的 IP 绑定到 Pod 中,在 Amazon Virtual Private CloudVPC环境中为 Kubernetes 兼容的网络堆栈。 [cni-ipvlan-vpc-k8s](https://github.com/lyft/cni-ipvlan-vpc-k8s)
包含了一组 CNI 和 IPAM 插件,它们通过使用 Amazon 弹性网络接口ENI
并借助 Linux 内核的 IPvlan 驱动程序以 L2 模式将 AWS 管理的 IP 地址绑定到 Pod 中,
从而在 Amazon Virtual Private CloudVPC环境中为 Kubernetes
提供简单、主机本地host-local、低延迟、高吞吐量且合规的网络堆栈。
这些插件旨在直接在 VPC 中进行配置和部署Kubelets 先启动,然后根据需要进行自我配置和扩展它们的 IP 使用率,而无需经常建议复杂的管理覆盖网络, BGP ,禁用源/目标检查,或调整 VPC 路由表以向每个主机提供每个实例子网的复杂性(每个 VPC 限制为50-100个条目。简而言之 cni-ipvlan-vpc-k8s 大大降低了在 AWS 中大规模部署 Kubernetes 所需的网络复杂性。 这些插件旨在直接在 VPC 中进行配置和部署Kubelets 先启动,
然后根据需要自行配置并扩展其 IP 用量,而无需引入通常所建议的那些复杂管理操作,
例如运维覆盖网络、BGP、禁用源/目标检查,或者调整 VPC 路由表
以便为每个主机提供实例子网(每个 VPC 限制为 50-100 个条目)
简而言之cni-ipvlan-vpc-k8s 大大降低了在 AWS 中大规模部署 Kubernetes 所需的网络复杂性。
<!-- <!--
### Contiv ### Contiv
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced. [Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](https://contiv.io) is all open sourced.
--> -->
### Contiv ### Contiv
[Contiv](https://github.com/contiv/netplugin) 为各种使用情况提供了一个可配置网络(使用了 BGP 的本地 l3 ,使用 vxlan 的覆盖,经典 l2 或 Cisco-SDN/ACI。[Contiv](http://contiv.io) 是完全开源的。 [Contiv](https://github.com/contiv/netplugin)
为各种使用场景提供了可配置的网络(使用 BGP 的原生 L3 网络、使用 vxlan 的覆盖网络、
经典 L2 网络或 Cisco-SDN/ACI。
[Contiv](https://contiv.io) 是完全开源的。
<!-- <!--
### Contrail / Tungsten Fabric ### Contrail/Tungsten Fabric
[Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads. [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
--> -->
### Contrail/Tungsten Fabric
### Contrail / Tungsten Fabric [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)
是基于 [Tungsten Fabric](https://tungsten.io) 的,真正开放的多云网络虚拟化和策略管理平台。
[Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/) 是基于 [Tungsten Fabric](https://tungsten.io) 的真正开放的多云网络虚拟化和策略管理平台。Contrail 和 Tungsten Fabric 与各种编排系统集成在一起,例如 KubernetesOpenShiftOpenStack 和 Mesos并为虚拟机、容器或 Pods 以及裸机工作负载提供了不同的隔离模式。 Contrail 和 Tungsten Fabric 与各种编排系统集成在一起,例如 Kubernetes、OpenShift、OpenStack 和 Mesos
并为虚拟机、容器或 Pods 以及裸机工作负载提供了不同的隔离模式。
<!-- <!--
### DANM ### DANM
@ -298,15 +374,16 @@ With this toolset DANM is able to provide multiple separated network interfaces,
--> -->
### DANM ### DANM
[DANM](https://github.com/nokia/danm) 是一个针对在 Kubernetes 集群中运行的电信工作负载的网络解决方案。它由以下几个组件构成: [DANM](https://github.com/nokia/danm) 是一个针对在 Kubernetes 集群中运行的电信工作负载的网络解决方案。
它由以下几个组件构成:
* 能够配置具有高级功能的 IPVLAN 接口的 CNI 插件 * 能够配置具有高级功能的 IPVLAN 接口的 CNI 插件
* 一个内置的 IPAM 模块,能够管理多个、群集内的、不连续的 L3 网络,并按请求提供动态、静态或无 IP 分配方案 * 一个内置的 IPAM 模块,能够管理多个、群集内的、不连续的 L3 网络,并按请求提供动态、静态或无 IP 分配方案
* CNI 元插件能够通过自己的 CNI 或通过将任务委托给其他任何流行的 CNI 解决方案(例如 SR-IOV 或 Flannel来实现将多个网络接口连接到容器 * CNI 元插件能够通过自己的 CNI 或通过将任务委托给其他任何流行的 CNI 解决方案(例如 SR-IOV 或 Flannel来实现将多个网络接口连接到容器
* Kubernetes 控制器能够集中管理所有 Kubernetes 主机的 VxLAN 和 VLAN 接口 * Kubernetes 控制器能够集中管理所有 Kubernetes 主机的 VxLAN 和 VLAN 接口
* 另一个 Kubernetes 控制器扩展了 Kubernetes 的基于服务的服务发现概念,以在 Pod 的所有网络接口上工作 * 另一个 Kubernetes 控制器扩展了 Kubernetes 的基于服务的服务发现概念,以在 Pod 的所有网络接口上工作
通过这个工具集DANM 可以提供多个分离的网络接口,可以为 pods 使用不同的网络后端和高级 IPAM 功能。 通过这个工具集DANM 可以提供多个分离的网络接口,可以为 Pod 使用不同的网络后端和高级 IPAM 功能。
<!-- <!--
### Flannel ### Flannel
@ -317,7 +394,8 @@ people have reported success with Flannel and Kubernetes.
--> -->
### Flannel ### Flannel
[Flannel](https://github.com/coreos/flannel#flannel) 是一个非常简单的能够满足 Kubernetes 所需要的重叠网络。已经有许多人报告了使用 Flannel 和 Kubernetes 的成功案例。 [Flannel](https://github.com/coreos/flannel#flannel) 是一个非常简单的能够满足
Kubernetes 所需要的覆盖网络。已经有许多人报告了使用 Flannel 和 Kubernetes 的成功案例。
<!-- <!--
### Google Compute Engine (GCE) ### Google Compute Engine (GCE)
@ -328,7 +406,7 @@ assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for tha
subnet will be routed directly to the VM by the GCE network fabric. This is in subnet will be routed directly to the VM by the GCE network fabric. This is in
addition to the "main" IP address assigned to the VM, which is NAT'ed for addition to the "main" IP address assigned to the VM, which is NAT'ed for
outbound internet access. A linux bridge (called `cbr0`) is configured to exist outbound internet access. A linux bridge (called `cbr0`) is configured to exist
on that subnet, and is passed to docker's `--bridge` flag. on that subnet, and is passed to docker's `--bridge` flag.
Docker is started with: Docker is started with:
@ -365,7 +443,11 @@ traffic to the internet.
--> -->
### Google Compute Engine (GCE) ### Google Compute Engine (GCE)
对于 Google Compute Engine 的集群配置脚本,[advanced routing](https://cloud.google.com/vpc/docs/routes) 用于为每个虚机分配一个子网(默认是 `/24` - 254个 IP绑定到该子网的任何流量都将通过 GCE 网络结构直接路由到虚机。这是除了分配给虚机的“主要” IP 地址之外的一个补充,该 IP 地址经过 NAT 转换以用于访问外网。linux网桥称为“cbr0”被配置为存在于该子网中并被传递到 docker 的 --bridge 参数上。 对于 Google Compute Engine 的集群配置脚本,
使用[高级路由](https://cloud.google.com/vpc/docs/routes)为每个虚机分配一个子网(默认是 `/24`,即 254 个 IP 地址)。
绑定到该子网的任何流量都将通过 GCE 网络结构直接路由到虚机。
这是除了分配给虚机的“主” IP 地址之外的一个补充,该 IP 地址经过 NAT 转换以用于访问外网。
Linux 网桥称为“cbr0”被配置为存在于该子网中并被传递到 Docker 的 --bridge 参数上。
Docker 会以这样的参数启动: Docker 会以这样的参数启动:
@ -373,11 +455,14 @@ Docker 会以这样的参数启动:
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false" DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
``` ```
这个网桥是由 Kubelet由 --network-plugin=kubenet 参数控制)根据节点的 .spec.podCIDR 参数创建的。 这个网桥是由 Kubelet由 --network-plugin=kubenet 参数控制)根据节点的 `.spec.podCIDR` 参数创建的。
Docker 将会从 `cbr-cidr` 块分配 IP 。容器之间可以通过 cbr0 网桥相互访问,也可以访问节点。这些 IP 都可以在 GCE 的网络中被路由。 Docker 将会从 `cbr-cidr` 块分配 IP。
容器之间可以通过 `cbr0` 网桥相互访问,也可以访问节点。
而 GCE 本身并不知道这些 IP所以不会对访问外网的流量进行 NAT为了实现此目的使用了 iptables 规则来伪装(又称为 SNAT使数据包看起来好像是来自“节点”本身将通信绑定到 GCE 项目网络10.0.0.0/8之外的 IP。 这些 IP 都可以在 GCE 的网络中被路由。
而 GCE 本身并不知道这些 IP所以不会对访问外网的流量进行 NAT。
为了实现此目的,对于目标地址位于 GCE 项目网络10.0.0.0/8之外的流量
使用了 `iptables` 规则来进行伪装(又称为 SNAT使数据包看起来好像是来自“节点”本身。
```shell ```shell
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
@ -389,7 +474,7 @@ iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
sysctl net.ipv4.ip_forward=1 sysctl net.ipv4.ip_forward=1
``` ```
所有这些的结果是所有 `Pods` 都可以互相访问,并且可以将流量发送到互联网。 所有这些的结果是所有 Pod 都可以互相访问,并且可以将流量发送到互联网。
<!-- <!--
### Jaguar ### Jaguar
@ -402,11 +487,14 @@ sysctl net.ipv4.ip_forward=1
--> -->
### Jaguar ### Jaguar
[Jaguar](https://gitlab.com/sdnlab/jaguar) 是一个基于 OpenDaylight 的 Kubernetes 网络开源解决方案。Jaguar 使用 vxlan 提供覆盖网络,而 Jaguar CNIPlugin 为每个 Pod 提供一个 IP 地址。 [Jaguar](https://gitlab.com/sdnlab/jaguar) 是一个基于 OpenDaylight 的 Kubernetes 网络开源解决方案。
Jaguar 使用 vxlan 提供覆盖网络,而 Jaguar CNIPlugin 为每个 Pod 提供一个 IP 地址。
### k-vswitch ### k-vswitch
[k-vswitch](https://github.com/k-vswitch/k-vswitch) 是一个基于 [Open vSwitch](https://www.openvswitch.org/) 的简易 Kubernetes 网络插件。它利用 Open vSwitch 中现有的功能来提供强大的网络插件,该插件易于操作,高效且安全。 [k-vswitch](https://github.com/k-vswitch/k-vswitch) 是一个基于
[Open vSwitch](https://www.openvswitch.org/) 的简易 Kubernetes 网络插件。
它利用 Open vSwitch 中现有的功能来提供强大的网络插件,该插件易于操作,高效且安全。
<!-- <!--
### Knitter ### Knitter
@ -417,23 +505,29 @@ sysctl net.ipv4.ip_forward=1
[Kube-OVN](https://github.com/alauda/kube-ovn) is an OVN-based kubernetes network fabric for enterprises. With the help of OVN/OVS, it provides some advanced overlay network features like subnet, QoS, static IP allocation, traffic mirroring, gateway, openflow-based network policy and service proxy. [Kube-OVN](https://github.com/alauda/kube-ovn) is an OVN-based kubernetes network fabric for enterprises. With the help of OVN/OVS, it provides some advanced overlay network features like subnet, QoS, static IP allocation, traffic mirroring, gateway, openflow-based network policy and service proxy.
--> -->
### Knitter ### Knitter
[Knitter](https://github.com/ZTE/Knitter/) 是一个支持 Kubernetes 中实现多个网络系统的解决方案。它提供了租户管理和网络管理的功能。除了多个网络平面外Knitter 还包括一组端到端的 NFV 容器网络解决方案,例如为应用程序保留 IP 地址IP 地址迁移等。 [Knitter](https://github.com/ZTE/Knitter/) 是一个支持 Kubernetes 中实现多个网络系统的解决方案。
它提供了租户管理和网络管理的功能。除了多个网络平面外Knitter 还包括一组端到端的 NFV 容器网络解决方案,
例如为应用程序保留 IP 地址、IP 地址迁移等。
### Kube-OVN ### Kube-OVN
[Kube-OVN](https://github.com/alauda/kube-ovn) 是一个基于 OVN 的用于企业的 Kubernetes 网络架构。借助于 OVN/OVS 它提供了一些高级覆盖网络功能例如子网、QoS、静态 IP 分配、流量镜像、网关、基于开放流的网络策略和服务代理。 [Kube-OVN](https://github.com/alauda/kube-ovn) 是一个基于 OVN 的用于企业的 Kubernetes 网络架构。
借助于 OVN/OVS 它提供了一些高级覆盖网络功能例如子网、QoS、静态 IP 分配、流量镜像、网关、
基于 openflow 的网络策略和服务代理。
<!-- <!--
### Kube-router ### Kube-router
[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer. [Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
--> -->
### Kube-router ### Kube-router
[Kube-router](https://github.com/cloudnativelabs/kube-router) 是 Kubernetes 的专用网络解决方案,旨在提供高性能和易操作性。 Kube-router 提供了一个基于 Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html) 的服务代理,一个基于 Linux 内核转发的无覆盖 Pod-to-Pod 网络解决方案,和基于 iptables/ipset 的网络策略执行器。 [Kube-router](https://github.com/cloudnativelabs/kube-router) 是 Kubernetes 的专用网络解决方案,
旨在提供高性能和易操作性。
Kube-router 提供了一个基于 Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)
的服务代理、一个基于 Linux 内核转发的无覆盖 Pod-to-Pod 网络解决方案和基于 iptables/ipset 的网络策略执行器。
<!-- <!--
### L2 networks and linux bridging ### L2 networks and linux bridging
@ -445,14 +539,17 @@ work, but has not been thoroughly tested. If you use this technique and
perfect the process, please let us know. perfect the process, please let us know.
Follow the "With Linux Bridge devices" section of [this very nice Follow the "With Linux Bridge devices" section of [this very nice
tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from tutorial](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
Lars Kellogg-Stedman. Lars Kellogg-Stedman.
--> -->
### L2 networks and linux bridging ### L2 networks and linux bridging
如果你具有一个“哑”的L2网络例如“裸机”环境中的简单交换机则应该能够执行与上述 GCE 设置类似的操作。请注意,这些说明仅是非常简单的尝试过-似乎可行,但尚未经过全面测试。如果您使用此技术并完善了流程,请告诉我们。 如果你具有一个“哑”的L2网络例如“裸机”环境中的简单交换机则应该能够执行与上述 GCE 设置类似的操作。
请注意,这些说明仅经过简单的尝试,看起来可行,但尚未经过全面测试。
如果你使用此技术并完善了流程,请告诉我们。
根据 Lars Kellogg-Stedman 的这份非常不错的“Linux 网桥设备”[使用说明](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)来进行操作。 根据 Lars Kellogg-Stedman 的这份非常不错的“Linux 网桥设备”
[使用说明](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)来进行操作。
<!-- <!--
### Multus (a Multi Network plugin) ### Multus (a Multi Network plugin)
@ -463,9 +560,23 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p
--> -->
### Multus (a Multi Network plugin) ### Multus (a Multi Network plugin)
[Multus](https://github.com/Intel-Corp/multus-cni) 是一个多 CNI 插件,使用 Kubernetes 中基于 CRD 的网络对象来支持实现 Kubernetes 多网络系统。 [Multus](https://github.com/Intel-Corp/multus-cni) 是一个多 CNI 插件,
使用 Kubernetes 中基于 CRD 的网络对象来支持实现 Kubernetes 多网络系统。
Multus 支持所有[参考插件]https://github.com/containernetworking/plugins比如 [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel)、[DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp)、[Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan) ),来实现 CNI 规范和第三方插件(比如: [Calico](https://github.com/projectcalico/cni-plugin)、[Weave](https://github.com/weaveworks/weave)、[Cilium](https://github.com/cilium/cilium)、[Contiv](https://github.com/contiv/netplugin))。除此之外, Multus 还支持 [SRIOV](https://github.com/hustcat/sriov-cni)、[DPDK](https://github.com/Intel-Corp/sriov-cni)、[OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) 的工作负载,以及 Kubernetes 中基于云的本机应用程序和基于 NFV 的应用程序。 Multus 支持所有[参考插件](https://github.com/containernetworking/plugins)(比如:
[Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel)、
[DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp)、
[Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)
来实现 CNI 规范和第三方插件(比如:
[Calico](https://github.com/projectcalico/cni-plugin)、
[Weave](https://github.com/weaveworks/weave)、
[Cilium](https://github.com/cilium/cilium)、
[Contiv](https://github.com/contiv/netplugin))。
除此之外, Multus 还支持
[SRIOV](https://github.com/hustcat/sriov-cni)、
[DPDK](https://github.com/Intel-Corp/sriov-cni)、
[OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) 的工作负载,
以及 Kubernetes 中的云原生应用程序和基于 NFV 的应用程序。
<!-- <!--
### NSX-T ### NSX-T
@ -476,22 +587,29 @@ Multus 支持所有[参考插件]https://github.com/containernetworking/plugi
--> -->
### NSX-T ### NSX-T
[VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) 是一个网络虚拟化的安全平台。 NSX-T 可以为多云及多系统管理程序环境提供网络虚拟化,并专注于具有异构端点和技术堆栈的新兴应用程序框架和体系结构。除了 vSphere 管理程序之外,这些环境还包括其他虚拟机管理程序,例如 KVM容器和裸机。 [VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) 是一个网络虚拟化和安全平台。
NSX-T 可以为多云及多系统管理程序环境提供网络虚拟化,并专注于具有异构端点和技术堆栈的新兴应用程序框架和体系结构。
除了 vSphere 管理程序之外,这些环境还包括其他虚拟机管理程序,例如 KVM、容器和裸机。
[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) 提供了 NSX-T 与容器协调器(例如 Kubernetes之间的结合 以及 NSX-T 与基于容器的 CaaS/PaaS 平台(例如 Pivotal Container ServicePKS 和 OpenShift )之间的集成。 [NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf)
提供了 NSX-T 与容器协调器(例如 Kubernetes之间的结合
以及 NSX-T 与基于容器的 CaaS/PaaS 平台(例如 Pivotal Container ServicePKS和 OpenShift之间的集成。
<!-- <!--
### Nuage Networks VCS (Virtualized Cloud Services) ### Nuage Networks VCS (Virtualized Cloud Services)
[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards. [Nuage](https://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications. The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
--> -->
### Nuage Networks VCS (Virtualized Cloud Services) ### Nuage Networks VCS (Virtualized Cloud Services)
[Nuage](http://www.nuagenetworks.net) 提供了一个高度可扩展的基于策略的软件定义网络SDN平台Nuage 使用开源的 Open vSwitch 作为数据平面,以及基于开放标准构建具有丰富功能的 SDN 控制器。 [Nuage](https://www.nuagenetworks.net) 提供了一个高度可扩展的基于策略的软件定义网络SDN平台。
Nuage 使用开源的 Open vSwitch 作为数据平面,以及基于开放标准构建具有丰富功能的 SDN 控制器。
Nuage 平台使用覆盖层在 Kubernetes Pod 和非 Kubernetes 环境VM 和裸机服务器之间提供基于策略的无缝联网。Nuage 的策略抽象模型在设计时就考虑到了应用程序,并且可以轻松声明应用程序的细粒度策略。该平台的实时分析引擎可为 Kubernetes 应用程序提供可见性和安全性监控。 Nuage 平台使用覆盖层在 Kubernetes Pod 和非 Kubernetes 环境VM 和裸机服务器)之间提供基于策略的无缝联网。
Nuage 的策略抽象模型在设计时就考虑到了应用程序,并且可以轻松声明应用程序的细粒度策略。
该平台的实时分析引擎可为 Kubernetes 应用程序提供可见性和安全性监控。
<!-- <!--
### OpenVSwitch ### OpenVSwitch
@ -510,37 +628,47 @@ at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
--> -->
### OpenVSwitch ### OpenVSwitch
[OpenVSwitch](https://www.openvswitch.org/) 是一个较为成熟的解决方案,但同时也增加了构建覆盖网络的复杂性,这也得到了几个网络系统的“大商店”的拥护。 [OpenVSwitch](https://www.openvswitch.org/) 是一个较为成熟的解决方案,但同时也增加了构建覆盖网络的复杂性。
它也得到了网络领域若干“大厂”的支持。
### OVN (开放式虚拟网络) ### OVN (开放式虚拟网络)
OVN 是一个由 Open vSwitch 社区开发的开源的网络虚拟化解决方案。它允许创建逻辑交换器,逻辑路由,状态 ACL负载均衡等等来建立不同的虚拟网络拓扑。该项目有一个特定的Kubernetes插件和文档 [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)。 OVN 是一个由 Open vSwitch 社区开发的开源的网络虚拟化解决方案。
它允许创建逻辑交换机、逻辑路由器、有状态 ACL、负载均衡器等等来建立不同的虚拟网络拓扑。
该项目有一个特定的 Kubernetes 插件和文档:[ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)。
<!-- <!--
### Project Calico ### Project Calico
[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine. [Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall. Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS or Azure networking. Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS or Azure networking.
--> -->
### Project Calico ### Calico 项目 {#project-calico}
[Project Calico](http://docs.projectcalico.org/) 是一个开源的容器网络提供者和网络策略引擎。 [Calico 项目](https://docs.projectcalico.org/) 是一个开源的容器网络提供者和网络策略引擎。
Calico 提供了高度可扩展的网络和网络策略解决方案,使用与 Internet 相同的 IP 网络原理来连接 Kubernetes Pod
适用于 Linux开放源代码和 Windows专有可从 [Tigera](https://www.tigera.io/essentials/) 获得)。
Calico 可以在不使用封装或覆盖网络的情况下部署,以提供高性能、高可扩展性的数据中心网络。
Calico 还通过其分布式防火墙为 Kubernetes Pod 提供了基于意图的细粒度网络安全策略。
Calico 还可以和其他的网络解决方案(比如 Flannel、[canal](https://github.com/tigera/canal) 或本机 GCE、AWS、Azure 等)一起以策略实施模式运行。 Calico 还可以和其他的网络解决方案(比如 Flannel、[canal](https://github.com/tigera/canal)
或原生 GCE、AWS、Azure 网络等)一起以策略实施模式运行。
<!-- <!--
### Romana ### Romana
[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces. [Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
--> -->
### Romana ### Romana
[Romana](http://romana.io) 是一个开源网络和安全自动化解决方案。它可以让你在没有覆盖网络的情况下部署 Kubernetes。Romana 支持 Kubernetes [网络策略](/docs/concepts/services-networking/network-policies/),来提供跨网络命名空间的隔离。 [Romana](https://romana.io) 是一个开源网络和安全自动化解决方案。
它可以让你在没有覆盖网络的情况下部署 Kubernetes。
Romana 支持 Kubernetes [网络策略](/zh/docs/concepts/services-networking/network-policies/)
来提供跨网络命名空间的隔离。
<!-- <!--
### Weave Net from Weaveworks ### Weave Net from Weaveworks
@ -553,19 +681,20 @@ to run, and in both cases, the network provides one IP address per pod - as is s
--> -->
### Weaveworks 的 Weave Net ### Weaveworks 的 Weave Net
[Weave Net](https://www.weave.works/products/weave-net/) 是 Kubernetes 及其托管应用程序的弹性和易于使用的网络系统。Weave Net 可以作为 [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/) 运行或者独立运行。在这两种运行方式里,都不需要任何配置或额外的代码即可运行,并且在两种情况下,网络都为每个 Pod 提供一个 IP 地址-这是 Kubernetes 的标准配置。 [Weave Net](https://www.weave.works/products/weave-net/) 是 Kubernetes 及其
托管应用程序的弹性且易于使用的网络系统。
Weave Net 可以作为 [CNI 插件](https://www.weave.works/docs/net/latest/cni-plugin/) 运行或者独立运行。
在这两种运行方式里,都不需要任何配置或额外的代码即可运行,并且在两种情况下,
网络都为每个 Pod 提供一个 IP 地址 -- 这是 Kubernetes 的标准配置。
## {{% heading "whatsnext" %}} ## {{% heading "whatsnext" %}}
<!-- <!--
The early design of the networking model and its rationale, and some future The early design of the networking model and its rationale, and some future
plans are described in more detail in the [networking design plans are described in more detail in the [networking design
document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
--> -->
网络模型的早期设计、运行原理以及未来的一些计划,都在 [networking design 网络模型的早期设计、运行原理以及未来的一些计划,都在
document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md) 文档里进行了更详细的描述。 [联网设计文档](https://git.k8s.io/community/contributors/design-proposals/network/networking.md)
里有更详细的描述。
View File
@ -1,19 +1,42 @@
--- ---
title: Kubernetes 中的代理 title: Kubernetes 中的代理
content_type: concept content_type: concept
weight: 90
--- ---
<!--
title: Proxies in Kubernetes
content_type: concept
weight: 90
-->
<!-- overview --> <!-- overview -->
<!--
This page explains proxies used with Kubernetes.
-->
本文讲述了 Kubernetes 中所使用的代理。 本文讲述了 Kubernetes 中所使用的代理。
<!-- body --> <!-- body -->
## 代理 <!--
## Proxies
There are several different proxies you may encounter when using Kubernetes:
-->
## 代理 {#proxies}
用户在使用 Kubernetes 的过程中可能遇到几种不同的代理proxy 用户在使用 Kubernetes 的过程中可能遇到几种不同的代理proxy
1. [kubectl proxy](/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api) <!--
1. The [kubectl proxy](/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api):
- runs on a user's desktop or in a pod
- proxies from a localhost address to the Kubernetes apiserver
- client to proxy uses HTTP
- proxy to apiserver uses HTTPS
- locates apiserver
- adds authentication headers
-->
1. [kubectl proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api)
- 运行在用户的桌面或 pod 中 - 运行在用户的桌面或 pod 中
- 从本机地址到 Kubernetes apiserver 的代理 - 从本机地址到 Kubernetes apiserver 的代理
@ -22,7 +45,18 @@ content_type: concept
- 指向 apiserver - 指向 apiserver
- 添加认证头信息 - 添加认证头信息
1. [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services) <!--
1. The [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services):
- is a bastion built into the apiserver
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
- runs in the apiserver processes
- client to proxy uses HTTPS (or http if apiserver so configured)
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
- can be used to reach a Node, Pod, or Service
- does load balancing when used to reach a Service
-->
2. [apiserver proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services)
- 是一个建立在 apiserver 内部的“堡垒” - 是一个建立在 apiserver 内部的“堡垒”
- 将集群外部的用户与群集 IP 相连接这些IP是无法通过其他方式访问的 - 将集群外部的用户与群集 IP 相连接这些IP是无法通过其他方式访问的
@ -32,31 +66,66 @@ content_type: concept
- 可以用来访问 Node、 Pod 或 Service - 可以用来访问 Node、 Pod 或 Service
- 当用来访问 Service 时,会进行负载均衡 - 当用来访问 Service 时,会进行负载均衡
1. [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips) <!--
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):
- runs on each node
- proxies UDP, TCP and SCTP
- does not understand HTTP
- provides load balancing
- is just used to reach services
-->
3. [kube proxy](/zh/docs/concepts/services-networking/service/#ips-and-vips)
- 在每个节点上运行 - 在每个节点上运行
- 代理 UDP 和 TCP - 代理 UDP、TCP 和 SCTP
- 不支持 HTTP - 不支持 HTTP
- 提供负载均衡能力 - 提供负载均衡能力
- 只用来访问 Service - 只用来访问 Service
1. apiserver 之前的代理/负载均衡器: <!--
1. A Proxy/Load-balancer in front of apiserver(s):
- 在不同集群间的存在形式和实现不同 (如 nginx) - existence and implementation varies from cluster to cluster (e.g. nginx)
- 位于所有客户端和一个或多个 apiserver 之间 - sits between all clients and one or more apiservers
- 存在多个 apiserver 时,扮演负载均衡器的角色 - acts as load balancer if there are several apiservers.
-->
4. apiserver 之前的代理/负载均衡器:
1. 外部服务的云负载均衡器: - 在不同集群中的存在形式和实现不同 (如 nginx)
- 位于所有客户端和一个或多个 API 服务器之间
- 存在多个 API 服务器时,扮演负载均衡器的角色
- 由一些云供应商提供 (如AWS ELB、 Google Cloud Load Balancer) <!--
- Kubernetes service 为 `LoadBalancer` 类型时自动创建 1. Cloud Load Balancers on external services:
- 只使用 UDP/TCP 协议
- 不同云供应商的实现不同。 - are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
- are created automatically when the Kubernetes service has type `LoadBalancer`
- usually supports UDP/TCP only
- SCTP support is up to the load balancer implementation of the cloud provider
- implementation varies by cloud provider.
-->
5. 外部服务的云负载均衡器:
- 由一些云供应商提供 (如 AWS ELB、Google Cloud Load Balancer)
- Kubernetes 服务类型为 `LoadBalancer` 时自动创建
- 通常仅支持 UDP/TCP 协议
- SCTP 支持取决于云供应商的负载均衡器实现
- 不同云供应商的云负载均衡器实现不同
<!--
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are setup correctly.
-->
Kubernetes 用户通常只需要关心前两种类型的代理,集群管理员通常需要确保后面几种类型的代理设置正确。 Kubernetes 用户通常只需要关心前两种类型的代理,集群管理员通常需要确保后面几种类型的代理设置正确。
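下面给出一个简单的示意(端口、名字空间与服务名均为假设值),演示前两种代理的典型用法:

```shell
# kubectl proxy在本机 8001 端口打开到 apiserver 的代理;
# 客户端到代理使用 HTTP由代理负责定位 apiserver 并附加认证头信息
kubectl proxy --port=8001 &

# 通过本地代理直接访问 Kubernetes API
curl http://localhost:8001/api/

# 借助 apiserver 代理访问集群内的 Service假设 default 名字空间中
# 存在名为 my-service、端口名为 http 的服务由 apiserver 转发请求
# 并在多个后端之间进行负载均衡
curl http://localhost:8001/api/v1/namespaces/default/services/my-service:http/proxy/
```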
<!--
## Requesting redirects
Proxies have replaced redirect capabilities. Redirects have been deprecated.
-->
## 请求重定向 ## 请求重定向
代理已经取代重定向功能,重定向已被弃用。 代理已经取代重定向功能,重定向功能已被弃用。
View File
@ -55,7 +55,7 @@ a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the contai
more RAM. more RAM.
--> -->
## 请求和约束 ## 请求和约束 {#requests-and-limits}
如果 Pod 运行所在的节点具有足够的可用资源,容器可能(且可以)使用超出对应资源 如果 Pod 运行所在的节点具有足够的可用资源,容器可能(且可以)使用超出对应资源
`request` 属性所设置的资源量。不过,容器不可以使用超出其资源 `limit` `request` 属性所设置的资源量。不过,容器不可以使用超出其资源 `limit`
@ -90,6 +90,7 @@ runtimes can have different ways to implement the same restrictions.
<!-- <!--
## Resource types ## Resource types
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. *CPU* and *memory* are each a *resource type*. A resource type has a base unit.
CPU represents compute processing and is specified in units of [Kubernetes CPUs](#meaning-of-cpu). CPU represents compute processing and is specified in units of [Kubernetes CPUs](#meaning-of-cpu).
Memory is specified in units of bytes. Memory is specified in units of bytes.
@ -101,8 +102,7 @@ For example, on a system where the default page size is 4KiB, you could specify
`hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a `hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a
total of 80 MiB), that allocation fails. total of 80 MiB), that allocation fails.
--> -->
## 资源类型 {#resource-types}
## 资源类型
*CPU* 和*内存*都是*资源类型*。每种资源类型具有其基本单位。 *CPU* 和*内存*都是*资源类型*。每种资源类型具有其基本单位。
CPU 表达的是计算处理能力,其单位是 [Kubernetes CPUs](#meaning-of-cpu)。 CPU 表达的是计算处理能力,其单位是 [Kubernetes CPUs](#meaning-of-cpu)。
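下面是一个最小的示意清单Pod 名称、镜像与具体数值均为示例),演示如何用这些单位为容器设置 CPU 和内存的请求与约束:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m        # 请求 0.25 个 CPU
        memory: 64Mi     # 请求 64 MiB 内存
      limits:
        cpu: 500m        # 最多使用 0.5 个 CPU
        memory: 128Mi    # 最多使用 128 MiB 内存
EOF
```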
@ -210,13 +210,13 @@ CPU 总是按绝对数量来请求的,不可以使用相对数量;
<!-- <!--
## Meaning of memory ## Meaning of memory
Limits and requests for `memory` are measured in bytes. You can express memory as Limits and requests for `memory` are measured in bytes. You can express memory as
a plain integer or as a fixed-point integer using one of these suffixes: a plain integer or as a fixed-point integer using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following represent roughly the same value: Mi, Ki. For example, the following represent roughly the same value:
--> -->
## 内存的含义 {#meaning-of-memory}
## 内存的含义
内存的约束和请求以字节为单位。你可以使用以下后缀之一以一般整数或定点整数形式来表示内存: 内存的约束和请求以字节为单位。你可以使用以下后缀之一以一般整数或定点整数形式来表示内存:
E、P、T、G、M、K。你也可以使用对应的 2 的幂数Ei、Pi、Ti、Gi、Mi、Ki。 E、P、T、G、M、K。你也可以使用对应的 2 的幂数Ei、Pi、Ti、Gi、Mi、Ki。
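例如,下面几种写法所表示的内存量大致相同:

```
128974848, 129e6, 129M, 123Mi
```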
@ -403,8 +403,7 @@ The kubelet can provide scratch space to Pods using local ephemeral storage to
mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
{{< glossary_tooltip term_id="volume" text="volumes" >}} into containers. {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
--> -->
## 本地临时存储 {#local-ephemeral-storage}
## 本地临时存储
<!-- feature gate LocalStorageCapacityIsolation --> <!-- feature gate LocalStorageCapacityIsolation -->
{{< feature-state for_k8s_version="v1.10" state="beta" >}} {{< feature-state for_k8s_version="v1.10" state="beta" >}}
@ -862,7 +861,7 @@ operator must advertise an Extended Resource. Second, users must request the
Extended Resource in Pods. Extended Resource in Pods.
--> -->
## 扩展资源Extended Resources ## 扩展资源Extended Resources {#extended-resources}
扩展资源是 `kubernetes.io` 域名之外的标准资源名称。 扩展资源是 `kubernetes.io` 域名之外的标准资源名称。
它们使得集群管理员能够发布非 Kubernetes 内置的资源,而用户可以使用这些资源。
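下面是一个示意(假设已在另一个终端通过 `kubectl proxy` 在本地 8001 端口打开了到 API 服务器的代理;节点名 `k8s-node-1` 与资源名 `example.com/dongle` 均为假设值),演示集群管理员如何在某个节点上发布扩展资源:

```shell
# 通过 JSON-Patch 在节点状态中登记 4 个 example.com/dongle 资源;
# 路径中的 "~1" 是 JSON-Patch 对 "/" 的转义写法
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/k8s-node-1/status
```

之后Pod 就可以像请求 CPU 或内存一样,在 `resources.requests` 中请求 `example.com/dongle`。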
View File
@ -7,17 +7,15 @@ weight: 10
<!-- overview --> <!-- overview -->
<!-- <!--
By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster. By default, containers run with unbounded [compute resources](/docs/concepts/configuration/manage-resources-containers/) on a Kubernetes cluster.
With resource quotas, cluster administrators can restrict resource consumption and creation on a namespace basis. With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis.
Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
--> -->
默认情况下, Kubernetes 集群上的容器运行使用的[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/)没有限制。
默认情况下, Kubernetes 集群上的容器运行使用的[计算资源](/docs/user-guide/compute-resources) 没有限制。 使用资源配额,集群管理员可以以{{< glossary_tooltip text="名字空间" term_id="namespace" >}}为单位,限制其资源的使用与创建。
使用资源配额,集群管理员可以以命名空间为单位,限制其资源的使用与创建。 在命名空间中,一个 Pod 或 Container 最多能够使用命名空间的资源配额所定义的 CPU 和内存用量。
在命名空间中,一个 Pod 或 Container 最多能够使用命名空间的资源配额所定义的 CPU 和内存用量。有人担心,一个 Pod 或 Container 会垄断所有可用的资源。LimitRange 是在命名空间内限制资源分配(给多个 Pod 或 Container的策略对象。 有人担心,一个 Pod 或 Container 会垄断所有可用的资源。
LimitRange 是在命名空间内限制资源分配(给多个 Pod 或 Container的策略对象。
<!-- body --> <!-- body -->
@ -30,7 +28,7 @@ A _LimitRange_ provides constraints that can:
- Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime. - Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
--> -->
一个 _LimitRange_ 对象提供的限制能够做到: 一个 _LimitRange(限制范围)_ 对象提供的限制能够做到:
- 在一个命名空间中实施对每个 Pod 或 Container 最小和最大的资源使用量的限制。 - 在一个命名空间中实施对每个 Pod 或 Container 最小和最大的资源使用量的限制。
- 在一个命名空间中实施对每个 PersistentVolumeClaim 能申请的最小和最大的存储空间大小的限制。 - 在一个命名空间中实施对每个 PersistentVolumeClaim 能申请的最小和最大的存储空间大小的限制。
@ -39,39 +37,27 @@ A _LimitRange_ provides constraints that can:
<!-- <!--
## Enabling LimitRange ## Enabling LimitRange
-->
LimitRange support has been enabled by default since Kubernetes 1.10.
LimitRange support is enabled by default for many Kubernetes distributions.
-->
## 启用 LimitRange ## 启用 LimitRange
<!-- 对 LimitRange 的支持自 Kubernetes 1.10 版本默认启用。
LimitRange support is enabled by default for many Kubernetes distributions. It is
enabled when the apiserver `--enable-admission-plugins=` flag has `LimitRanger` admission controller as
one of its arguments.
-->
对 LimitRange 的支持默认在多数 Kubernetes 发行版中启用。当 apiserver 的 `--enable-admission-plugins` 标志的参数包含 `LimitRanger` 准入控制器时即启用。 LimitRange 支持在很多 Kubernetes 发行版本中也是默认启用的。
<!--
A LimitRange is enforced in a particular namespace when there is a
LimitRange object in that namespace.
-->
当某个命名空间中存在 LimitRange 对象时,将在该命名空间中实施其所定义的限制。
<!-- <!--
The name of a LimitRange object must be a valid The name of a LimitRange object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
--> -->
LimitRange 的名称必须是合法的
LimitRange 的名称必须是合法的 [DNS 子域名](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
<!-- <!--
### Overview of Limit Range ### Overview of Limit Range
-->
### 限制范围总览
<!--
- The administrator creates one `LimitRange` in one namespace. - The administrator creates one `LimitRange` in one namespace.
- Users create resources like Pods, Containers, and PersistentVolumeClaims in the namespace. - Users create resources like Pods, Containers, and PersistentVolumeClaims in the namespace.
- The `LimitRanger` admission controller enforces defaults and limits for all Pods and Containers that do not set compute resource requirements and tracks usage to ensure it does not exceed resource minimum, maximum and ratio defined in any LimitRange present in the namespace. - The `LimitRanger` admission controller enforces defaults and limits for all Pods and Containers that do not set compute resource requirements and tracks usage to ensure it does not exceed resource minimum, maximum and ratio defined in any LimitRange present in the namespace.
@ -80,12 +66,16 @@ LimitRange 的名称必须是合法的 [DNS 子域名](/docs/concepts/overview/w
requests or limits for those values. Otherwise, the system may reject Pod creation. requests or limits for those values. Otherwise, the system may reject Pod creation.
- LimitRange validations occurs only at Pod Admission stage, not on Running Pods. - LimitRange validations occurs only at Pod Admission stage, not on Running Pods.
--> -->
### 限制范围总览
- 管理员在一个命名空间内创建一个 `LimitRange` 对象。 - 管理员在一个命名空间内创建一个 `LimitRange` 对象。
- 用户在命名空间内创建 Pod、Container 和 PersistentVolumeClaim 等资源。 - 用户在命名空间内创建 Pod、Container 和 PersistentVolumeClaim 等资源。
- `LimitRanger` 准入控制器对所有没有设置计算资源需求的 Pod 和 Container 设置默认值与限制值,并跟踪其使用量以保证没有超出命名空间中存在的任意 LimitRange 对象中的最小、最大资源使用量以及使用量比值。 - `LimitRanger` 准入控制器对所有没有设置计算资源需求的 Pod 和 Container 设置默认值与限制值,
- 若创建或更新资源Pod, Container, PersistentVolumeClaim违反了 LimitRange 的约束,向 API 服务器的请求会失败,并返回 HTTP 状态码 `403 FORBIDDEN` 与描述哪一项约束被违反的消息。 并跟踪其使用量以保证没有超出命名空间中存在的任意 LimitRange 对象中的最小、最大资源使用量以及使用量比值。
- 若命名空间中的 LimitRange 启用了对 `cpu``memory` 的限制,用户必须指定这些值的需求使用量与限制使用量。否则,系统将会拒绝创建 Pod。 - 若创建或更新资源Pod、 Container、PersistentVolumeClaim违反了 LimitRange 的约束,
向 API 服务器的请求会失败,并返回 HTTP 状态码 `403 FORBIDDEN` 与描述哪一项约束被违反的消息。
- 若命名空间中的 LimitRange 启用了对 `cpu``memory` 的限制,
用户必须指定这些值的需求使用量与限制使用量。否则,系统将会拒绝创建 Pod。
- LimitRange 的验证仅在 Pod 准入阶段进行,不对正在运行的 Pod 进行验证。 - LimitRange 的验证仅在 Pod 准入阶段进行,不对正在运行的 Pod 进行验证。
<!-- <!--
@ -94,32 +84,35 @@ Examples of policies that could be created using limit range are:
- In a 2 node cluster with a capacity of 8 GiB RAM and 16 cores, constrain Pods in a namespace to request 100m of CPU with a max limit of 500m for CPU and request 200Mi for Memory with a max limit of 600Mi for Memory. - In a 2 node cluster with a capacity of 8 GiB RAM and 16 cores, constrain Pods in a namespace to request 100m of CPU with a max limit of 500m for CPU and request 200Mi for Memory with a max limit of 600Mi for Memory.
- Define default CPU limit and request to 150m and memory default request to 300Mi for Containers started with no cpu and memory requests in their specs. - Define default CPU limit and request to 150m and memory default request to 300Mi for Containers started with no cpu and memory requests in their specs.
--> -->
能够使用限制范围创建的策略示例有:
能够使用限制范围创建策略的例子有: - 在一个有两个节点8 GiB 内存与16个核的集群中限制一个命名空间的 Pod 申请
100m 单位,最大 500m 单位的 CPU以及申请 200Mi最大 600Mi 的内存。
- 在一个有两个节点8 GiB 内存与16个核的集群中限制一个命名空间的 Pod 申请 100m 单位,最大 500m 单位的 CPU以及申请 200Mi最大 600Mi 的内存。 - 为 spec 中没有 cpu 和内存需求值的 Container 定义默认 CPU 限制值与需求值
- 为 spec 中没有 cpu 和内存需求值的 Container 定义默认 CPU 限制值与需求值 150m内存默认需求值 300Mi。 150m内存默认需求值 300Mi。
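例如,下面是实现上述第二条策略的一个 LimitRange 示意清单(对象名称为示例值):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-defaults
spec:
  limits:
  - type: Container
    defaultRequest:      # 容器未显式设置 requests 时注入的默认请求值
      cpu: 150m
      memory: 300Mi
    default:             # 容器未显式设置 limits 时注入的默认限制值
      cpu: 150m
EOF
```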
<!-- <!--
In the case where the total limits of the namespace is less than the sum of the limits of the Pods/Containers, In the case where the total limits of the namespace is less than the sum of the limits of the Pods/Containers,
there may be contention for resources. In this case, the Containers or Pods will not be created. there may be contention for resources. In this case, the Containers or Pods will not be created.
--> -->
在命名空间的总限制值小于 Pod 或 Container 的限制值的总和的情况下,可能会产生资源竞争。
在命名空间的总限制值小于 Pod 或 Container 的限制值的总和的情况下,可能会产生资源竞争。在这种情况下,将不会创建 Container 或 Pod。 在这种情况下,将不会创建 Container 或 Pod。
<!-- <!--
Neither contention nor changes to a LimitRange will affect already created resources. Neither contention nor changes to a LimitRange will affect already created resources.
--> -->
竞争和对 LimitRange 的改变都不会影响任何已经创建了的资源。 竞争和对 LimitRange 的改变都不会影响任何已经创建了的资源。
## {{% heading "whatsnext" %}}
<!-- <!--
## Examples See [LimitRanger design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
--> -->
参阅 [LimitRanger 设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md)获取更多信息。
## 示例
<!-- <!--
For examples on using limits, see:
- See [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/). - See [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- See [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/). - See [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- See [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/). - See [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
@ -127,23 +120,12 @@ Neither contention nor changes to a LimitRange will affect already created resou
- Check [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- See a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/).
-->
关于使用限值的例子,可参看:

- [如何配置每个命名空间最小和最大的 CPU 约束](/zh/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)。
- [如何配置每个命名空间最小和最大的内存约束](/zh/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)。
- [如何配置每个命名空间默认的 CPU 申请值和限制值](/zh/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)。
- [如何配置每个命名空间默认的内存申请值和限制值](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)。
- [如何配置每个命名空间最小和最大存储使用量](/zh/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage)。
- [配置每个命名空间的配额的详细例子](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)。
View File
@ -1,19 +1,15 @@
---
title: 资源配额
content_type: concept
weight: 10
---

<!--
reviewers:
- derekwaynecarr
title: Resource Quotas
content_type: concept
weight: 10
-->

<!-- overview -->
@ -21,17 +17,13 @@ weight: 10
<!--
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.

Resource quotas are a tool for administrators to address this concern.
-->
当多个用户或团队共享具有固定节点数目的集群时,人们会担心有人使用超过其基于公平原则所分配到的资源量。

资源配额是帮助管理员解决这一问题的工具。

<!-- body -->

<!--
@ -40,7 +32,8 @@ aggregate resource consumption per namespace. It can limit the quantity of obje
be created in a namespace by type, as well as the total amount of compute resources that may
be consumed by resources in that project.
-->
资源配额,通过 `ResourceQuota` 对象来定义,对每个命名空间的资源消耗总量提供限制。
它可以限制命名空间中某种类型的对象的总数目上限,也可以限制命名空间中的 Pod 可以使用的计算资源的总上限。

<!--
Resource quotas work like this:
@ -60,13 +53,26 @@ Resource quotas work like this:
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
-->
- 不同的团队可以在不同的命名空间下工作,目前这是非约束性的,在未来的版本中可能会通过
  ACLAccess Control List访问控制列表来实现强制性约束。
- 集群管理员可以为每个命名空间创建一个或多个资源配额对象。
- 当用户在命名空间下创建资源(如 Pod、Service 等Kubernetes 的配额系统会
  跟踪集群的资源使用情况,以确保使用的资源用量不超过资源配额中定义的硬性资源限额。
- 如果资源创建或者更新请求违反了配额约束那么该请求会报错HTTP 403 FORBIDDEN
  并在消息中给出有可能违反的约束。
- 如果命名空间下的计算资源(如 `cpu``memory`)的配额被启用,则用户必须为
  这些资源设定请求值request和约束值limit否则配额系统将拒绝 Pod 的创建。
  提示:可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。

  若想避免这类问题,请参考
  [演练](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。
<!--
The name of a `ResourceQuota` object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
-->
ResourceQuota 对象的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
<!--
Examples of policies that could be created using namespaces and quotas are:
@ -79,31 +85,31 @@ Examples of policies that could be created using namespaces and quotas are:
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
  use any amount.
-->
- 在具有 32 GiB 内存和 16 核 CPU 资源的集群中,允许 A 团队使用 20 GiB 内存和 10 核的 CPU 资源,
  允许 B 团队使用 10 GiB 内存和 4 核的 CPU 资源,并且预留 2 GiB 内存和 2 核的 CPU 资源供将来分配。
- 限制 "testing" 命名空间使用 1 核 CPU 资源和 1GiB 内存。允许 "production" 命名空间使用任意数量。
  (示意清单见下文。)
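例如,针对上面 "testing" 命名空间的策略,可以创建类似下面的 ResourceQuota 示意清单(对象名称为假设,这里将 1 核 CPU 与 1GiB 内存同时体现在 requests 与 limits 上,仅作示意):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: testing-compute-quota   # 假设的名称
  namespace: testing
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "1"
    limits.memory: 1Gi
```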
<!--
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.

Neither contention nor changes to quota will affect already created resources.
-->
在集群容量小于各命名空间配额总和的情况下可能存在资源竞争。资源竞争时Kubernetes 系统会遵循先到先得的原则。

不管是资源竞争还是配额的修改,都不会影响已经创建的资源使用对象。
<!--
## Enabling Resource Quota

Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the apiserver `--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
-->
## 启用资源配额

资源配额的支持在很多 Kubernetes 版本中是默认开启的。当 apiserver 的 `--enable-admission-plugins=`
参数中包含 `ResourceQuota` 时,资源配额会被启用。
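以 kubeadm 部署的集群为例(下面的文件路径与清单结构是基于 kubeadm 约定的假设,其他发行版可能不同),可以在 kube-apiserver 的静态 Pod 清单中检查或补充该参数,示意如下:

```yaml
# 假设路径:/etc/kubernetes/manifests/kube-apiserver.yaml仅节选相关字段
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.18.0   # 镜像与版本仅作示意
    command:
    - kube-apiserver
    # 确保 ResourceQuota 出现在启用的准入插件列表中
    - --enable-admission-plugins=NodeRestriction,ResourceQuota
    # ……其余参数从略
```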
<!--
A resource quota is enforced in a particular namespace when there is a
@ -113,13 +119,14 @@ A resource quota is enforced in a particular namespace when there is a
<!--
## Compute Resource Quota

You can limit the total sum of [compute resources](/docs/concepts/configuration/manage-resources-containers/) that can be requested in a given namespace.
-->
## 计算资源配额

用户可以对给定命名空间下的可被请求的
[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/)
总量进行限制。

<!--
The following resource types are supported:
@ -128,14 +135,14 @@ The following resource types are supported:
<!--
| Resource Name | Description |
| --------------------- | --------------------------------------------------------- |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
-->
| 资源名称 | 描述 |
| --------------------- | --------------------------------------------- |
| `limits.cpu` | 所有非终止状态的 Pod其 CPU 限额总量不能超过该值。 |
| `limits.memory` | 所有非终止状态的 Pod其内存限额总量不能超过该值。 |
| `requests.cpu` | 所有非终止状态的 Pod其 CPU 需求总量不能超过该值。 |
@ -143,21 +150,23 @@ The following resource types are supported:
<!--
### Resource Quota For Extended Resources

In addition to the resources mentioned above, in release 1.10, quota support for
[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) is added.
-->
### 扩展资源的资源配额

除上述资源外,在 Kubernetes 1.10 版本中,还添加了对
[扩展资源](/zh/docs/concepts/configuration/manage-resources-containers/#extended-resources)
的支持。

<!--
As overcommit is not allowed for extended resources, it makes no sense to specify both `requests`
and `limits` for the same extended resource in a quota. So for extended resources, only quota items
with prefix `requests.` is allowed for now.
-->
由于扩展资源不可超量分配,因此没有必要在配额中为同一扩展资源同时指定 `requests``limits`
对于扩展资源而言,目前仅允许使用前缀为 `requests.` 的配额项。
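例如,下面的示意清单仅使用 `requests.` 前缀为一种扩展资源设置配额GPU 资源名 `nvidia.com/gpu` 沿用下文的例子,对象名称、命名空间与数值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota        # 假设的名称
  namespace: gpu-team    # 假设的命名空间
spec:
  hard:
    requests.nvidia.com/gpu: 4   # 扩展资源仅支持 requests. 前缀的配额项
```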
<!--
Take the GPU resource as an example, if the resource name is `nvidia.com/gpu`, and you want to
@ -174,22 +183,20 @@ See [Viewing and Setting Quotas](#viewing-and-setting-quotas) for more detail in
<!--
## Storage Resource Quota

You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested in a given namespace.

In addition, you can limit consumption of storage resources based on associated storage-class.
-->
## 存储资源配额

用户可以对给定命名空间下的[存储资源](/zh/docs/concepts/storage/persistent-volumes/)总量进行限制。

此外还可以根据相关的存储类Storage Class来限制存储资源的消耗。
<!--
| Resource Name | Description |
| --------------------- | --------------------------------------------------------- |
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the storage-class-name, the sum of storage requests cannot exceed this value. |
@ -198,9 +205,9 @@ In addition, you can limit consumption of storage resources based on associated
| 资源名称 | 描述 |
| --------------------- | ----------------------------------------------------------- |
| `requests.storage` | 所有 PVC存储资源的需求总量不能超过该值。 |
| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | 在所有与 storage-class-name 相关的持久卷申领中,存储请求的总和不能超过该值。 |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |
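下面是一个按存储类划分配额的示意清单(`gold` 存储类沿用下文示例,对象名称、命名空间与数值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota    # 假设的名称
  namespace: myspace     # 假设的命名空间
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: 500Gi
    gold.storageclass.storage.k8s.io/requests.storage: 300Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
```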
<!--
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
@ -229,12 +236,11 @@ In release 1.8, quota support for local ephemeral storage is added as an alpha f
<!--
## Object Count Quota

The 1.9 release added support to quota all standard namespaced resource types using the following syntax:
-->
## 对象数量配额

Kubernetes 1.9 版本增加了使用以下语法对所有标准的、命名空间域的资源类型进行配额设置的支持:

* `count/<resource>.<group>`
@ -263,7 +269,6 @@ For example, to create a quota on a `widgets` custom resource in the `example.co
Kubernetes 1.15 版本增加了对使用相同语法来约束自定义资源的支持。
例如,要对 `example.com` API 组中的自定义资源 `widgets` 设置配额,请使用 `count/widgets.example.com`
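下面的示意清单使用该语法同时限制内置资源与自定义资源的对象个数(`count/widgets.example.com` 沿用上文示例,对象名称与数值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-count-quota   # 假设的名称
spec:
  hard:
    count/deployments.apps: "2"
    count/cronjobs.batch: "1"
    count/widgets.example.com: "5"
```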
<!--
When using `count/*` resource quota, an object is charged against the quota if it exists in server storage.
These types of quotas are useful to protect against exhaustion of storage resources. For example, you may
@ -278,18 +283,17 @@ a poorly configured cronjob creating too many jobs in a namespace causing a deni
<!--
Prior to the 1.9 release, it was possible to do generic object count quota on a limited set of resources.
In addition, it is possible to further constrain quota for particular resources by their type.

The following types are supported:
-->
在 Kubernetes 1.9 版本之前,可以在有限的一组资源上实施一般性的对象数量配额。
此外,还可以进一步按资源的类型设置其配额。

支持以下类型:
<!--
| Resource Name | Description |
| ----------------------------|--------------------------------------------- |
| `configmaps` | The total number of config maps that can exist in the namespace. |
| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `pods` | The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `.status.phase in (Failed, Succeeded)` is true. |
@ -303,10 +307,10 @@ The following types are supported:
| 资源名称 | 描述 |
| ------------------------------- | ------------------------------------------------- |
| `configmaps` | 在该命名空间中允许存在的 ConfigMap 总数上限。 |
| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 |
| `pods` | 在该命名空间中允许存在的非终止状态的 Pod 总数上限。Pod 终止状态等价于 Pod 的 `.status.phase in (Failed, Succeeded)` = true |
| `replicationcontrollers` | 在该命名空间中允许存在的 RC 总数上限。 |
| `resourcequotas` | 在该命名空间中允许存在的资源配额总数上限。 |
| `services` | 在该命名空间中允许存在的 Service 总数上限。 |
| `services.loadbalancers` | 在该命名空间中允许存在的 LoadBalancer 类型的服务总数上限。 |
| `services.nodeports` | 在该命名空间中允许存在的 NodePort 类型的服务总数上限。 |
@ -318,18 +322,19 @@ created in a single namespace that are not terminal. You might want to set a `po
quota on a namespace to avoid the case where a user creates many small pods and
exhausts the cluster's supply of Pod IPs.
-->
例如,`pods` 配额统计某个命名空间中所创建的、非终止状态的 `Pod` 个数并确保其不超过某上限值。
用户可能希望在某命名空间中设置 `pods` 配额,以避免有用户创建很多小的 Pod从而耗尽集群所能提供的 Pod IP 地址。
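例如,可以使用类似下面的示意清单来限制命名空间中非终止状态 Pod 的个数(对象名称与数值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-count-demo   # 假设的名称
spec:
  hard:
    pods: "10"           # 非终止状态 Pod 的数量上限
```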
<!--
## Quota Scopes

Each quota can have an associated set of scopes. A quota will only measure usage for a resource if it matches
the intersection of enumerated scopes.
-->
## 配额作用域 {#quota-scopes}

每个配额都有一组相关的作用域scope配额只会对作用域内的资源生效。
配额机制仅统计所列举的作用域的交集中的资源用量。
<!--
When a scope is added to the quota, it limits the number of resources it supports to those that pertain to the scope.
@ -340,7 +345,7 @@ Resources specified on the quota outside of the allowed set results in a validat
<!--
| Scope | Description |
| ----- | ------------ |
| `Terminating` | Match pods where `.spec.activeDeadlineSeconds >= 0` |
| `NotTerminating` | Match pods where `.spec.activeDeadlineSeconds is nil` |
| `BestEffort` | Match pods that have best effort quality of service. |
@ -355,12 +360,11 @@ Resources specified on the quota outside of the allowed set results in a validat
<!--
The `BestEffort` scope restricts a quota to tracking the following resource: `pods`

The `Terminating`, `NotTerminating`, and `NotBestEffort` scopes restrict a quota to tracking the following resources:
-->
`BestEffort` 作用域限制配额跟踪以下资源:`pods`

`Terminating`、`NotTerminating` 和 `NotBestEffort` 这三种作用域限制配额跟踪以下资源:

* `cpu`
@ -383,18 +387,17 @@ Pods can be created at a specific [priority](/docs/concepts/configuration/pod-pr
You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
-->
Pod 可以创建为特定的[优先级](/zh/docs/concepts/configuration/pod-priority-preemption/#pod-priority)。
通过使用配额规约中的 `scopeSelector` 字段,用户可以根据 Pod 的优先级控制其系统资源消耗。

<!--
A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod.

This example creates a quota object and matches it with pods at specific priorities. The example
works as follows:
-->
仅当配额规约中的 `scopeSelector` 字段选择到某 Pod 时,配额机制才会匹配和计量 Pod 的资源消耗。

本示例创建一个配额对象,并将其与具有特定优先级的 Pod 进行匹配。
该示例的工作方式如下:
@ -405,9 +408,7 @@ works as follows:
- 集群中的 Pod 可取三个优先级类之一,即 "low"、"medium"、"high"。
- 为每个优先级创建一个配额对象。

<!-- Save the following YAML to a file `quota.yml`. -->
将以下 YAML 保存到文件 `quota.yml` 中。

```yaml
@ -467,7 +468,7 @@ Apply the YAML using `kubectl create`.
kubectl create -f ./quota.yml
```

```
resourcequota/pods-high created
resourcequota/pods-medium created
resourcequota/pods-low created
@ -482,7 +483,7 @@ Verify that `Used` quota is `0` using `kubectl describe quota`.
kubectl describe quota
```

```
Name:       pods-high
Namespace:  default
Resource    Used  Hard
@ -557,7 +558,7 @@ the other two quotas are unchanged.
kubectl describe quota
```

```
Name:       pods-high
Namespace:  default
Resource    Used  Hard
@ -597,13 +598,12 @@ pods 0 10
<!--
## Requests vs Limits

When allocating compute resources, each container may specify a request and a limit value for either CPU or memory.
The quota can be configured to quota either value.
-->
## 请求与限制 {#requests-vs-limits}

分配计算资源时,每个容器可以为 CPU 或内存指定请求和约束。
配额可以针对二者之一进行设置。
@ -612,16 +612,16 @@ If the quota has a value specified for `requests.cpu` or `requests.memory`, then
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
then it requires that every incoming container specifies an explicit limit for those resources.
-->
如果配额中指定了 `requests.cpu``requests.memory` 的值,则它要求每个容器都显式给出对这些资源的请求。
同理,如果配额中指定了 `limits.cpu``limits.memory` 的值,那么它要求每个容器都显式设定对应资源的限制。
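例如,在同时对 `requests.*``limits.*` 设置了配额的命名空间中Pod 中的每个容器都需要像下面这样显式给出两类取值(对象名称、镜像与数值为假设):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo-pod   # 假设的名称
spec:
  containers:
  - name: app
    image: nginx         # 镜像仅作示意
    resources:
      requests:          # 配额包含 requests.* 时必须显式给出
        cpu: 100m
        memory: 128Mi
      limits:            # 配额包含 limits.* 时必须显式给出
        cpu: 200m
        memory: 256Mi
```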
<!--
## Viewing and Setting Quotas

Kubectl supports creating, updating, and viewing quotas:
-->
## 查看和设置配额 {#viewing-and-setting-quotas}

Kubectl 支持创建、更新和查看配额:

```shell
@ -674,7 +674,7 @@ kubectl create -f ./object-counts.yaml --namespace=myspace
kubectl get quota --namespace=myspace
```

```
NAME                AGE
compute-resources   30s
object-counts       32s
@ -684,7 +684,7 @@ object-counts 32s
kubectl describe quota compute-resources --namespace=myspace
```

```
Name:         compute-resources
Namespace:    myspace
Resource      Used  Hard
@ -700,7 +700,7 @@ requests.nvidia.com/gpu 0 4
kubectl describe quota object-counts --namespace=myspace
```

```
Name:         object-counts
Namespace:    myspace
Resource      Used  Hard
@ -736,7 +736,7 @@ kubectl create deployment nginx --image=nginx --namespace=myspace
kubectl describe quota --namespace=myspace
```

```
Name:       test
Namespace:  myspace
Resource    Used  Hard
@ -749,27 +749,26 @@ count/secrets 1 4
<!--
## Quota and Cluster Capacity

`ResourceQuotas` are independent of the cluster capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
-->
## 配额和集群容量 {#quota-and-cluster-capacity}

资源配额与集群资源总量是完全独立的。它们通过绝对的单位来配置。
所以,为集群添加节点时,资源配额*不会*自动赋予每个命名空间消耗更多资源的能力。
<!--
Sometimes more complex policies may be desired, such as:

- Proportionally divide total cluster resources among several teams.
- Allow each tenant to grow resource usage as needed, but have a generous
  limit to prevent accidental resource exhaustion.
- Detect demand from one namespace, add nodes, and increase quota.
-->
有时可能需要资源配额支持更复杂的策略,比如:

- 在几个团队中按比例划分总的集群资源。
- 允许每个租户根据需要增加资源使用量,但要有足够的限制以防止资源意外耗尽。
- 探测某个命名空间的需求,添加物理节点并扩大资源配额值。
@ -779,7 +778,8 @@ Such policies could be implemented using `ResourceQuotas` as building blocks, by
writing a "controller" that watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.
-->
这些策略可以通过将资源配额作为一个组成模块、手动编写一个控制器来监控资源使用情况,
并结合其他信号调整命名空间上的硬性资源配额来实现。
<!--
Note that resource quota divides up aggregate cluster resources, but it creates no
@ -789,21 +789,22 @@ restrictions around nodes: pods from several namespaces may run on the same node
<!--
## Limit Priority Class consumption by default

It may be desired that pods at a particular priority, eg. "cluster-services", should be allowed in a namespace, if and only if, a matching quota object exists.
-->
## 默认情况下限制特定优先级的资源消耗

有时候可能希望当且仅当某命名空间中存在匹配的配额对象时,才可以创建特定优先级
(例如 "cluster-services")的 Pod。

<!--
With this mechanism, operators will be able to restrict usage of certain high priority classes to a limited number of namespaces and not every namespace will be able to consume these priority classes by default.
-->
通过这种机制,操作人员能够将某些高优先级类的使用限制在有限数量的命名空间中,
而并非每个命名空间默认情况下都能够使用这些优先级类。

<!--
To enforce this, kube-apiserver flag `--admission-control-config-file` should be used to pass path to the following configuration file:
-->
要实现此目的,应使用 kube-apiserver 标志 `--admission-control-config-file` 传递如下配置文件的路径:
@ -827,13 +828,13 @@ plugins:
{{% /tab %}}
{{% tab name="apiserver.k8s.io/v1alpha1" %}}
```yaml
# 在 Kubernetes 1.17 中已不推荐使用,请使用 apiserver.config.k8s.io/v1
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    # 在 Kubernetes 1.17 中已不推荐使用,请使用 apiserver.config.k8s.io/v1, ResourceQuotaConfiguration
    apiVersion: resourcequota.admission.k8s.io/v1beta1
    kind: Configuration
    limitedResources:
@ -848,12 +849,11 @@ plugins:
<!--
Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present.

For example:
-->
现在,仅当命名空间中存在匹配的 `scopeSelector` 的配额对象时,才允许使用 "cluster-services" Pod。

示例:

```yaml
@ -867,24 +867,22 @@ For example:
<!--
See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and [Quota support for priority class design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md) for more information.
-->
有关更多信息,请参见 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) 和
[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)。

<!--
## Example

See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
-->
## 示例

查看[如何使用资源配额的详细示例](/zh/docs/tasks/administer-cluster/quota-api-object/)。

## {{% heading "whatsnext" %}}

<!--
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
-->
- 查看[资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)
  了解更多信息。