[zh] Tidy up and fix links in tasks section (9/10)
parent fb6364da0a
commit dabb6d6896

@@ -2,25 +2,36 @@
title: 同 Pod 内的容器使用共享卷通信
content_type: task
---

<!--
title: Communicate Between Containers in the Same Pod Using a Shared Volume
content_type: task
weight: 110
-->

<!-- overview -->

<!--
This page shows how to use a Volume to communicate between two Containers running
in the same Pod. See also how to allow processes to communicate by
[sharing process namespace](/docs/tasks/configure-pod-container/share-process-namespace/)
between containers.
-->
本文旨在说明如何让一个 Pod 内的两个容器使用一个卷(Volume)进行通信。
参阅如何让两个进程跨容器通过
[共享进程名字空间](/zh/docs/tasks/configure-pod-container/share-process-namespace/)进行通信。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

<!--
## Creating a Pod that runs two Containers

In this exercise, you create a Pod that runs two Containers. The two containers
share a Volume that they can use to communicate. Here is the configuration file
for the Pod:
-->
## 创建一个包含两个容器的 Pod

在这个练习中,你会创建一个包含两个容器的 Pod。两个容器共享一个卷用于它们之间的通信。

@@ -28,142 +39,175 @@ Pod 的配置文件如下:

{{< codenew file="pods/two-container-pod.yaml" >}}
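
以下是该清单大致内容的一个示例(依据上文描述整理,仅作参考,实际内容以 `pods/two-container-pod.yaml` 文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}          # 供两个容器共享的临时卷
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
```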

<!--
In the configuration file, you can see that the Pod has a Volume named
`shared-data`.

The first container listed in the configuration file runs an nginx server. The
mount path for the shared Volume is `/usr/share/nginx/html`.
The second container is based on the debian image, and has a mount path of
`/pod-data`. The second container runs the following command and then terminates.
-->
在配置文件中,你可以看到 Pod 有一个共享卷,名为 `shared-data`。

配置文件中的第一个容器运行了一个 nginx 服务器。共享卷的挂载路径是 `/usr/share/nginx/html`。
第二个容器基于 debian 镜像,挂载路径是 `/pod-data`。该容器运行以下命令后终止:

```shell
echo Hello from the debian container > /pod-data/index.html
```

<!--
Notice that the second container writes the `index.html` file in the root
directory of the nginx server.

Create the Pod and the two Containers:
-->
注意,第二个容器在 nginx 服务器的根目录下写了 `index.html` 文件。

创建这个包含两个容器的 Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/two-container-pod.yaml
```

<!--
View information about the Pod and the Containers:
-->
查看 Pod 和容器的信息:

```shell
kubectl get pod two-containers --output=yaml
```

<!--
Here is a portion of the output:
-->
这是输出的一部分:

```yaml
apiVersion: v1
kind: Pod
metadata:
  ...
  name: two-containers
  namespace: default
  ...
spec:
  ...
  containerStatuses:

  - containerID: docker://c1d8abd1 ...
    image: debian
    ...
    lastState:
      terminated:
        ...
    name: debian-container
    ...

  - containerID: docker://96c1ff2c5bb ...
    image: nginx
    ...
    name: nginx-container
    ...
    state:
      running:
    ...
```
<!--
You can see that the debian Container has terminated, and the nginx Container
is still running.

Get a shell to nginx Container:
-->
你可以看到 debian 容器已经终止了,而 nginx 容器依然在运行。

进入 nginx 容器的 shell:

```shell
kubectl exec -it two-containers -c nginx-container -- /bin/bash
```

<!--
In your shell, verify that nginx is running:
-->
在 shell 中,确认 nginx 还在运行:

```
root@two-containers:/# ps aux
```

<!--
The output is similar to this:
-->
输出类似于这样:

```
USER PID ... STAT START TIME COMMAND
root 1 ... Ss 21:12 0:00 nginx: master process nginx -g daemon off;
```

<!--
Recall that the debian Container created the `index.html` file in the nginx root
directory. Use `curl` to send a GET request to the nginx server:
-->
回忆一下,debian 容器在 nginx 的根目录下创建了 `index.html` 文件。
使用 `curl` 向 nginx 服务器发送一个 GET 请求:

```
root@two-containers:/# apt-get update
root@two-containers:/# apt-get install curl
root@two-containers:/# curl localhost
```

输出表明,nginx 返回了由 debian 容器写入的页面:

```
Hello from the debian container
```

<!-- discussion -->

<!--
## Discussion

The primary reason that Pods can have multiple containers is to support
helper applications that assist a primary application. Typical examples of
helper applications are data pullers, data pushers, and proxies.
Helper and primary applications often need to communicate with each other.
Typically this is done through a shared filesystem, as shown in this exercise,
or through the loopback network interface, localhost. An example of this pattern is a
web server along with a helper program that polls a Git repository for new updates.
-->
## 讨论

Pod 能有多个容器的主要原因是为了支持辅助应用(helper application),以协助主应用(primary application)。
辅助应用的典型例子是数据拉取器、数据推送器和代理。辅助应用和主应用经常需要相互通信。
就如这个练习所示,这种通信通常通过共享文件系统完成,或者通过回环网络接口 localhost 完成。
这种模式的一个例子是:Web 服务器带有一个辅助程序,负责轮询 Git 仓库以获取更新。
<!--
The Volume in this exercise provides a way for Containers to communicate during
the life of the Pod. If the Pod is deleted and recreated, any data stored in
the shared Volume is lost.
-->
本练习中的卷为 Pod 生命周期内的容器间通信提供了一种方式。
如果 Pod 被删除并重建,共享卷中存储的任何数据都会丢失。

## {{% heading "whatsnext" %}}

<!--
* Learn more about [patterns for composite containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns).
* Learn about [composite containers for modular architecture](https://www.slideshare.net/Docker/slideshare-burns).
* See [Configuring a Pod to Use a Volume for Storage](/docs/tasks/configure-pod-container/configure-volume-storage/).
* See [Configure a Pod to share process namespace between containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/)
* See [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core).
* See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
-->
* 进一步了解[复合容器的模式](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
* 学习[模块化架构中的复合容器](https://www.slideshare.net/Docker/slideshare-burns)
* 参见[配置 Pod 使用卷来存储数据](/zh/docs/tasks/configure-pod-container/configure-volume-storage/)
* 参见[配置 Pod 以在容器间共享进程名字空间](/zh/docs/tasks/configure-pod-container/share-process-namespace/)
* 参考 [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
* 参考 [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
@@ -1,14 +1,9 @@
---
title: 配置聚合层
reviewers:
- lavalamp
- cheftako
- chenopis
content_type: task
weight: 10
---
<!--
---
title: Configure the Aggregation Layer
reviewers:
- lavalamp

@@ -16,7 +11,6 @@ reviewers:
- chenopis
content_type: task
weight: 10
---
-->

<!-- overview -->

@@ -24,35 +18,29 @@ weight: 10
<!--
Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs.
-->
配置[聚合层](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)允许
Kubernetes apiserver 使用其它 API 扩展,这些 API 不是核心 Kubernetes API 的一部分。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{< note >}}
<!--
There are a few setup requirements for getting the aggregation layer working in your environment to support mutual TLS auth between the proxy and extension apiservers. Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is signed by the aggregation layer CA and not by something else, like the master CA.
-->
要使聚合层在您的环境中正常工作以支持代理服务器和扩展 apiserver 之间的相互 TLS 身份验证,
需要满足一些设置要求。Kubernetes 和 kube-apiserver 具有多个 CA,
因此请确保代理是由聚合层 CA 签名的,而不是由主 CA 签名的。
{{< /note >}}

<!--
Reusing the same CA for different client types can negatively impact the cluster's ability to function. For more information, see [CA Reusage and Conflicts](#ca-reusage-and-conflicts).
-->
{{< caution >}}
对不同的客户端类型重复使用相同的 CA 会对集群的功能产生负面影响。有关更多信息,请参见
[CA 重用和冲突](#ca-reusage-and-conflicts)。
{{< /caution >}}

<!-- steps -->

@@ -62,7 +50,14 @@ Reusing the same CA for different client types can negatively impact the cluster
<!--
Unlike Custom Resource Definitions (CRDs), the Aggregation API involves another server - your Extension apiserver - in addition to the standard Kubernetes apiserver. The Kubernetes apiserver will need to communicate with your extension apiserver, and your extension apiserver will need to communicate with the Kubernetes apiserver. In order for this communication to be secured, the Kubernetes apiserver uses x509 certificates to authenticate itself to the extension apiserver.

This section describes how the authentication and authorization flows work, and how to configure them.
-->
## 身份认证流程

与自定义资源定义(CRD)不同,除标准的 Kubernetes apiserver 外,Aggregation API 还涉及另一个服务器:扩展 apiserver。
Kubernetes apiserver 将需要与您的扩展 apiserver 通信,并且您的扩展 apiserver 也需要与 Kubernetes apiserver 通信。
为了确保此通信的安全,Kubernetes apiserver 使用 x509 证书向扩展 apiserver 认证。

本节介绍身份认证和鉴权流程的工作方式以及如何配置它们。

<!--
The high-level flow is as follows:

1. Kubernetes apiserver: authenticate the requesting user and authorize their rights to the requested API path.

@@ -79,13 +74,6 @@ The flow can be seen in the following diagram.

The source for the above swimlanes can be found in the source of this document.
-->
大致流程如下:

1. Kubernetes apiserver:对发出请求的用户身份认证,并对请求的 API 路径执行鉴权。

@@ -264,20 +252,7 @@ The Kubernetes apiserver connects to the extension apiserver over TLS, authentic
<!--
* signed client certificate file via `--proxy-client-cert-file`
* certificate of the CA that signed the client certificate file via `--requestheader-client-ca-file`
* valid Common Names (CN) in the signed client certificate via `--requestheader-allowed-names`
-->
#### Kubernetes Apiserver 客户端认证

Kubernetes apiserver 通过 TLS 连接到扩展 apiserver,并使用客户端证书认证。
您必须在启动时使用提供的标志向 Kubernetes apiserver 提供以下内容:

@@ -287,12 +262,29 @@ Kubernetes apiserver 通过 TLS 连接到扩展 apiserver,并使用客户端
* 通过 `--requestheader-client-ca-file` 签署客户端证书文件的 CA 证书
* 通过 `--requestheader-allowed-names` 在签署的客户证书中有效的公用名(CN)

<!--
The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file` to authenticate to the extension apiserver. In order for the request to be considered valid by a compliant extension apiserver, the following conditions must be met:

1. The connection must be made using a client certificate that is signed by the CA whose certificate is in `--requestheader-client-ca-file`.
2. The connection must be made using a client certificate whose CN is one of those listed in `--requestheader-allowed-names`. **Note:** You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable.
-->
Kubernetes apiserver 将使用由 `--proxy-client-*-file` 指示的文件来向扩展 apiserver 认证。
为了使合规的扩展 apiserver 能够将该请求视为有效,必须满足以下条件:

1. 连接必须使用由 CA 签署的客户端证书,该 CA 的证书位于 `--requestheader-client-ca-file` 中。
2. 连接必须使用客户端证书,该客户端证书的 CN 是 `--requestheader-allowed-names` 中列出的名称之一。

{{< note >}}
您可以将此选项设置为空白,即 `--requestheader-allowed-names=""`。这将向扩展 apiserver 表明任何 CN 都是可接受的。
{{< /note >}}

<!--
When started with these options, the Kubernetes apiserver will:

1. Use them to authenticate to the extension apiserver.
2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`, in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved by extension apiservers to validate requests.

Note that the same client certificate is used by the Kubernetes apiserver to authenticate against _all_ extension apiservers. It does not create a client certificate per extension apiserver, but rather a single one to authenticate as the Kubernetes apiserver. This same one is reused for all extension apiserver requests.
-->
使用这些选项启动时,Kubernetes apiserver 将:

1. 使用它们向扩展 apiserver 认证。

@@ -305,9 +297,9 @@ Kubernetes apiserver 将使用由 --proxy-client-*-file 指示的文件来验证
<!--
When the Kubernetes apiserver proxies the request to the extension apiserver, it informs the extension apiserver of the username and group with which the original request successfully authenticated. It provides these in http headers of its proxied request. You must inform the Kubernetes apiserver of the names of the headers to be used.

* the header in which to store the username via `--requestheader-username-headers`
* the header in which to store the group via `--requestheader-group-headers`
* the prefix to append to all extra headers via `--requestheader-extra-headers-prefix`

These header names are also placed in the `extension-apiserver-authentication` configmap, so they can be retrieved and used by extension apiservers.
-->

@@ -336,14 +328,7 @@ The extension apiserver, upon receiving a proxied request from the Kubernetes ap
<!--
* Was signed by the CA whose certificate matches the retrieved CA certificate.
* Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed.
* Extract the username and group from the appropriate headers
-->
### 扩展 Apiserver 认证

扩展 apiserver 在收到来自 Kubernetes apiserver 的代理请求后,必须验证该请求确实来自有效的身份验证代理,
该认证代理的角色由 Kubernetes apiserver 履行。扩展 apiserver 通过以下方式对其认证:

@@ -358,6 +343,13 @@ In order to have permission to retrieve the configmap, an extension apiserver re
* 在允许的 CN 列表中有一个 CN,除非列表为空,在这种情况下允许所有 CN。
* 从适当的头部中提取用户名和组

<!--
If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver.

Note that it is the responsibility of the extension apiserver implementation to provide the above. Many do it by default, leveraging the `k8s.io/apiserver/` package. Others may provide options to override it using command-line options.

In order to have permission to retrieve the configmap, an extension apiserver requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader` in the `kube-system` namespace which can be assigned.
-->
如果以上均通过,则该请求是来自合法认证代理(在本例中为 Kubernetes apiserver)的有效代理请求。

请注意,扩展 apiserver 实现负责提供上述内容。默认情况下,许多扩展 apiserver 实现利用
`k8s.io/apiserver/` 软件包来做到这一点。也有一些实现可能支持使用命令行选项来覆盖这些配置。
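
作为参考,扩展 apiserver 所读取的这个 ConfigMap 可以用类似下面的命令查看:

```shell
kubectl get configmap extension-apiserver-authentication -n kube-system -o yaml
```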

@@ -371,10 +363,11 @@ The extension apiserver now can validate that the user/group retrieved from the
<!--
In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account.
-->
### 扩展 Apiserver 对请求鉴权

扩展 apiserver 现在可以验证从标头检索的 `user/group` 是否有权执行给定请求。
这是通过向 Kubernetes apiserver 发送标准
[SubjectAccessReview](/zh/docs/reference/access-authn-authz/authorization/) 请求来实现的。

为了使扩展 apiserver 本身有权向 Kubernetes apiserver 提交 `SubjectAccessReview` 请求,它需要正确的权限。
Kubernetes 包含一个名为 `system:auth-delegator` 的默认 `ClusterRole`,具有相应的权限,
可以将其授予扩展 apiserver 的服务账户。
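
下面是一个将 `system:auth-delegator` 授予扩展 apiserver 服务账户的 ClusterRoleBinding 示意
(其中的名称与名字空间均为假设值,请按实际情况替换):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-extension-apiserver:auth-delegator   # 假设的名称
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: my-extension-apiserver    # 假设的服务账户
  namespace: my-namespace         # 假设的名字空间
```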

@@ -383,21 +376,19 @@ In order for the extension apiserver to be authorized itself to submit the `Subj
<!--
If the `SubjectAccessReview` passes, the extension apiserver executes the request.

## Enable Kubernetes Apiserver flags

Enable the aggregation layer via the following kube-apiserver flags. They may have already been taken care of by your provider.
-->
### 扩展 Apiserver 执行

如果 `SubjectAccessReview` 通过,则扩展 apiserver 执行请求。

## 启用 Kubernetes Apiserver 标志

通过以下 kube-apiserver 标志启用聚合层。您的服务提供商可能已经为您完成了这些工作:

```
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
```

@@ -405,13 +396,14 @@ Enable the aggregation layer via the following kube-apiserver flags. They may ha

```
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
```

<!--
### CA Reusage and Conflicts

The Kubernetes apiserver has two client CA options:
-->
### CA 重用和冲突 {#ca-reusage-and-conflicts}

Kubernetes apiserver 有两个客户端 CA 选项:

@@ -423,40 +415,38 @@ Each of these functions independently and can conflict with each other, if not u
<!--
* `--client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file referenced by `--client-ca-file`, then the request is treated as a legitimate request, and the user is the value of the common name `CN=`, while the group is the organization `O=`. See the [documentation on TLS authentication](/docs/reference/access-authn-authz/authentication/#x509-client-certs).
* `--requestheader-client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file reference by `--requestheader-client-ca-file`, then the request is treated as a potentially legitimate request. The Kubernetes apiserver then checks if the common name `CN=` is one of the names in the list provided by `--requestheader-allowed-names`. If the name is allowed, the request is approved; if it is not, the request is not.
-->
这些功能中的每个功能都是独立的;如果使用不正确,可能彼此冲突。

* `--client-ca-file`:当请求到达 Kubernetes apiserver 时,如果启用了此选项,则 Kubernetes apiserver 会检查请求的证书。
  如果它是由 `--client-ca-file` 引用的文件中的 CA 证书之一签名的,则该请求被视为合法请求,
  用户是公用名 `CN=` 的值,而组是组织 `O=` 的取值。
  请参阅[关于 TLS 身份验证的文档](/docs/reference/access-authn-authz/authentication/#x509-client-certs)。
* `--requestheader-client-ca-file`:当请求到达 Kubernetes apiserver 时,如果启用此选项,则 Kubernetes apiserver 会检查请求的证书。
  如果它是由 `--requestheader-client-ca-file` 所引用文件中的 CA 证书之一签名的,则该请求将被视为潜在的合法请求。
  然后,Kubernetes apiserver 检查通用名称 `CN=` 是否是 `--requestheader-allowed-names` 所提供列表中的名称之一。
  如果名称被允许,则请求被批准;如果不是,则请求被拒绝。

<!--
If _both_ `--client-ca-file` and `--requestheader-client-ca-file` are provided, then the request first checks the `--requestheader-client-ca-file` CA and then the `--client-ca-file`. Normally, different CAs, either root CAs or intermediate CAs, are used for each of these options; regular client requests match against `--client-ca-file`, while aggregation requests match against `--requestheader-client-ca-file`. However, if both use the _same_ CA, then client requests that normally would pass via `--client-ca-file` will fail, because the CA will match the CA in `--requestheader-client-ca-file`, but the common name `CN=` will **not** match one of the acceptable common names in `--requestheader-allowed-names`. This can cause your kubelets and other control plane components, as well as end-users, to be unable to authenticate to the Kubernetes apiserver.

For this reason, use different CA certs for the `--client-ca-file` option - to authorize control plane components and end-users - and the `--requestheader-client-ca-file` option - to authorize aggregation apiserver requests.
-->
如果同时提供了 `--client-ca-file` 和 `--requestheader-client-ca-file`,则首先检查 `--requestheader-client-ca-file` CA,
然后再检查 `--client-ca-file`。通常,这些选项各自使用不同的 CA(根 CA 或中间 CA);
常规客户端请求与 `--client-ca-file` 相匹配,而聚合请求与 `--requestheader-client-ca-file` 相匹配。
但是,如果两者使用了同一个 CA,则通常应通过 `--client-ca-file` 验证的客户端请求将失败,
因为 CA 会与 `--requestheader-client-ca-file` 中的 CA 匹配,
但通用名称 `CN=` 不会匹配 `--requestheader-allowed-names` 中可接受的通用名称之一。
这可能导致您的 kubelet、其他控制平面组件以及最终用户无法向 Kubernetes apiserver 认证。

因此,请对用于控制平面组件和最终用户鉴权的 `--client-ca-file` 选项和用于聚合 apiserver 鉴权的
`--requestheader-client-ca-file` 选项使用不同的 CA 证书。

{{< warning >}}
<!--
Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage.
-->
除非您了解风险和保护 CA 用法的机制,否则 *不要* 重用在不同上下文中使用的 CA。
{{< /warning >}}
<!--
If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following `kube-apiserver` flag:
-->
如果您未在运行 API 服务器的主机上运行 kube-proxy,则必须确保使用以下 `kube-apiserver` 标志启用系统:

```
--enable-aggregator-routing=true
```

<!--
### Register APIService objects

@@ -464,13 +454,11 @@ If you are not running kube-proxy on a host running the API server, then you mus
You can dynamically configure what client requests are proxied to extension
apiserver. The following is an example registration:
-->
### 注册 APIService 对象

您可以动态配置将哪些客户端请求代理到扩展 apiserver。以下是注册示例:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
```

@@ -485,6 +473,14 @@ spec:

```yaml
    name: < 拓展 Apiserver 服务的 name >
  caBundle: < PEM 编码的 CA 证书,用于对 Webhook 服务器的证书签名 >
```

<!--
The name of an APIService object must be a valid
[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
-->
APIService 对象的名称必须是合法的
[路径片段名称](/zh/docs/concepts/overview/working-with-objects/names#path-segment-names)。

<!--
#### Contacting the extension apiserver

@@ -499,8 +495,6 @@ Here is an example of an extension apiserver that is configured to be called on
at the subpath "/my-path", and to verify the TLS connection against the ServerName
`my-service-name.my-service-namespace.svc` using a custom CA bundle.
-->
#### 调用扩展 apiserver

一旦 Kubernetes apiserver 确定应将请求发送到扩展 apiserver,它需要知道如何调用它。

@@ -536,8 +530,8 @@ spec:
<!--
* Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
-->
* 使用聚合层[安装扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/)。
* 有关高级概述,请参阅[使用聚合层扩展 Kubernetes API](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)。
* 了解如何[使用自定义资源扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。
@@ -1,21 +1,17 @@
---
title: 用户自定义资源的版本
content_type: task
weight: 30
min-kubernetes-server-version: v1.16
---
<!--
---
title: Versions in CustomResourceDefinitions
reviewers:
- sttts
- liggitt
content_type: task
weight: 30
min-kubernetes-server-version: v1.16
---
-->

<!-- overview -->

@@ -24,28 +20,22 @@ This page explains how to add versioning information to
<!--
[CustomResourceDefinitions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1beta1-apiextensions), to indicate the stability
level of your CustomResourceDefinitions or advance your API to a new version with conversion between API representations. It also describes how to upgrade an object from one version to another.
-->
本页介绍如何添加版本信息到
[CustomResourceDefinitions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1beta1-apiextensions),
如何表明 CustomResourceDefinition 的稳定级别,以及如何借助 API 表示形式之间的转换将你的 API 升级到新的版本。
本页还描述如何将对象从一个版本升级到另一个版本。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!--
You should have an initial understanding of [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
-->
你应该对[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
有一些初步了解。

{{< version-check >}}

<!-- steps -->

@@ -54,8 +44,6 @@ level of your CustomResourceDefinitions or advance your API to a new version wit
## 概览

{{< feature-state state="stable" for_kubernetes_version="1.16" >}}

<!--
The CustomResourceDefinition API provides a workflow for introducing and upgrading
to new versions of a CustomResourceDefinition.

@@ -70,7 +58,16 @@ Once the CustomResourceDefinition is created, clients may begin using the
`v1beta1` API.

Later it might be necessary to add new version such as `v1`.
-->
CustomResourceDefinition API 提供了引入新版本以及升级到新版本的工作流程。

创建 CustomResourceDefinition 时,会在 CustomResourceDefinition 的 `spec.versions`
列表中设置适当的稳定级别和版本号。例如,`v1beta1` 表示第一个版本尚未稳定。
所有自定义资源对象都将首先存储在这个版本中。

创建 CustomResourceDefinition 后,客户端可以开始使用 `v1beta1` API。

稍后可能需要添加新版本,例如 `v1`。
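
作为示意,`spec.versions` 列表中一个版本条目大致如下(字段名取自 CustomResourceDefinition API,具体取值仅为示例):

```yaml
spec:
  versions:
  - name: v1beta1
    served: true     # 是否通过 REST API 对外提供该版本
    storage: true    # 是否作为存储版本(有且仅有一个版本可以为 true)
    schema:
      openAPIV3Schema:
        type: object
```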

<!--
Adding a new version:

1. Pick a conversion strategy. Since custom resource objects need to be able to

@@ -88,7 +85,17 @@ Adding a new version:
`spec.versions` list with `served:true`. Also, set `spec.conversion` field
to the selected conversion strategy. If using a conversion webhook, configure
`spec.conversion.webhookClientConfig` field to call the webhook.
-->
增加一个新版本:

1. 选择一种转换策略。由于自定义资源对象需要能够以两个版本对外提供服务,
   这意味着它们有时会以不同于存储版本的版本对外提供。为了做到这一点,
   有时必须在所存储的版本与所提供的版本之间进行转换。
   如果转换涉及结构变更并且需要自定义逻辑,则应该使用转换 webhook。
   如果没有结构变更,则可使用默认的 `None` 转换策略,不同版本之间只有 `apiVersion` 字段会被改变。
2. 如果使用转换 Webhook,请创建并部署转换 Webhook。更多详细信息请参见 [Webhook 转换](#webhook转换)。
3. 更新 CustomResourceDefinition,将新版本添加到 `spec.versions` 列表中并设置 `served:true`。
   另外,将 `spec.conversion` 字段设置为所选的转换策略。
   如果使用转换 Webhook,请配置 `spec.conversion.webhookClientConfig` 字段来调用该 webhook。

<!--
Once the new version is added, clients may incrementally migrate to the new
version. It is perfectly safe for some clients to use the old version while
others use the new version.

@@ -99,7 +106,16 @@ Migrate stored objects to the new version:

It is safe for clients to use both the old and new version before, during and
after upgrading the objects to a new stored version.
-->
添加新版本后,客户端可以逐步迁移到新版本。
让某些客户端使用旧版本而另一些客户端使用新版本是完全安全的。

将存储的对象迁移到新版本:

1. 请参阅[将现有对象升级到新的存储版本](#将现有对象升级到新的存储版本)章节。

对于客户端来说,在将对象升级到新的存储版本之前、期间和之后,使用旧版本和新版本都是安全的。

<!--
Removing an old version:

1. Ensure all clients are fully migrated to the new version. The kube-apiserver

@@ -115,39 +131,7 @@ Removing an old version:
1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`.
1. Remove the old version from the CustomResourceDefinition `spec.versions` list.
1. Drop conversion support for the old version in conversion webhooks.
-->
删除旧版本:

1. 确保所有客户端都已完全迁移到新版本。可以查看 kube-apiserver 的日志,
   以帮助识别仍在通过旧版本进行访问的所有客户端。

@@ -159,17 +143,28 @@ CustomResourceDefinition API 提供了用于引入和升级的工作流程到 Cu
4. 从 CustomResourceDefinition `spec.versions` 列表中删除旧版本。
5. 在转换 webhooks 中放弃对旧版本的转换支持。
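
作为参考,可以用类似下面的命令检查 CustomResourceDefinition 的 `status.storedVersions`
中是否仍列有旧版本(其中 `<CRD 名称>` 为占位符):

```shell
kubectl get crd <CRD 名称> -o jsonpath='{.status.storedVersions}'
```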

<!--
## Specify multiple versions

The CustomResourceDefinition API `versions` field can be used to support multiple versions of custom resources that you
have developed. Versions can have different schemas, and conversion webhooks can convert custom resources between versions.
Webhook conversions should follow the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md) wherever applicable.
Specifically, See the [API change documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md) for a set of useful gotchas and suggestions.
-->
## 指定多个版本 {#specify-multiple-versions}

CustomResourceDefinition API 的 `versions` 字段可用于支持你所开发的自定义资源的多个版本。
版本可以有不同的模式(schema),转换 Webhook 可以在不同版本之间转换自定义资源。
在适用的情况下,Webhook 转换应遵循 [Kubernetes API 约定](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md)。
具体来说,请参阅 [API 变更文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md),了解一些有用的注意事项和建议。

{{< note >}}
<!--
In `apiextensions.k8s.io/v1beta1`, there was a `version` field instead of `versions`. The
`version` field is deprecated and optional, but if it is not empty, it must
match the first item in the `versions` field.
-->
在 `apiextensions.k8s.io/v1beta1` 中,使用的是 `version` 字段而不是 `versions` 字段。
`version` 字段已被废弃,成为可选项;不过如果该字段不为空,则必须与
`versions` 字段中的第一个条目匹配。
{{< /note >}}

<!--
@@ -177,13 +172,13 @@ This example shows a CustomResourceDefinition with two versions. For the first
example, the assumption is all versions share the same schema with no conversion
between them. The comments in the YAML provide more context.
-->
此示例显示了两个版本的 CustomResourceDefinition。在第一个例子中,
假设所有的版本共享相同的模式,且版本之间没有转换。YAML 中的注释提供了更多背景信息。

{{< tabs name="CustomResourceDefinition_versioning_example_1" >}}
{{% tab name="apiextensions.k8s.io/v1" %}}
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
```

@@ -304,21 +299,19 @@ After creation, the API server starts to serve each enabled version at an HTTP
<!--
REST endpoint. In the above example, the API versions are available at
`/apis/example.com/v1beta1` and `/apis/example.com/v1`.
-->
在创建之后,apiserver 开始在 HTTP REST 端点上为每个启用的版本提供服务。
在上面的示例中,API 版本可以在 `/apis/example.com/v1beta1` 和 `/apis/example.com/v1` 中获得。

<!--
### Version priority
-->
### 版本优先级

<!--
Regardless of the order in which versions are defined in a
CustomResourceDefinition, the version with the highest priority is used by
kubectl as the default version to access objects. The priority is determined
by parsing the _name_ field to determine the version number, the stability
(GA, Beta, or Alpha), and the sequence within that stability level.
-->
不考虑 CustomResourceDefinition 中版本被定义的顺序,kubectl 使用具有最高优先级的版本作为访问对象的默认版本。
优先级是通过解析 _name_ 字段来确定的,依据是版本号、稳定性(GA、Beta 或 Alpha)
以及该稳定性级别内的序号。

@@ -352,9 +345,11 @@ like `v2` or `v2beta1`. Versions are sorted using the following algorithm:
- 遵循 Kubernetes 版本模式的条目在不符合条件的条目之前进行排序。
- 对于遵循 Kubernetes 版本模式的条目,版本字符串的数字部分从最大到最小排序。
- 如果字符串 `beta` 或 `alpha` 跟随第一数字部分,它们按顺序排序,排在没有 `beta` 或 `alpha`
  后缀(假定为 GA 版本)的等效字符串后面。
- 如果另一个数字跟在 `beta` 或 `alpha` 之后,那么这些数字也是从最大到最小排序。
- 不符合上述格式的字符串按字母顺序排序,数字部分不经过特殊处理。
  请注意,在下面的示例中,`foo1` 排在 `foo10` 之前。
  这与遵循 Kubernetes 版本模式的条目的数字部分排序不同。

<!--
This might make sense if you look at the following sorted version list:

@@ -380,24 +375,24 @@ version sort order is `v1`, followed by `v1beta1`. This causes the kubectl
command to use `v1` as the default version unless the provided object specifies
the version.
-->
对于[指定多个版本](#specify-multiple-versions)中的示例,版本排序顺序为 `v1`,后跟着 `v1beta1`。
这导致 kubectl 命令使用 `v1` 作为默认版本,除非所提供的对象指定了版本。
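
作为补充示例,客户端也可以在 kubectl 中通过“资源.版本.API 组”的完整形式显式指定版本
(下面的资源名 `crontabs` 仅为假设,请替换为你的自定义资源):

```shell
kubectl get crontabs.v1beta1.example.com
```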

<!--
## Webhook conversion

Webhook conversion is available as beta since 1.15, and as alpha since Kubernetes 1.13. The
`CustomResourceWebhookConversion` feature should be enabled. Please refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
-->
## Webhook转换

{{< feature-state state="stable" for_k8s_version="v1.16" >}}

{{< note >}}
Webhook 转换在 Kubernetes 1.15 中作为 beta 功能提供。
要使用它,应启用 `CustomResourceWebhookConversion` 特性。
在大多数集群上,这类 beta 特性应该是自动启用的。
请参阅[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)文档以获得更多信息。
{{< /note >}}

@@ -423,10 +418,7 @@ To cover all of these cases and to optimize conversion by the API server, the co

<!--
### Write a conversion webhook server
-->
### 编写一个转换 Webhook 服务器

<!--
Please refer to the implementation of the [custom resource conversion webhook
server](https://github.com/kubernetes/kubernetes/tree/v1.15.0/test/images/crd-conversion-webhook/main.go)
that is validated in a Kubernetes e2e test. The webhook handles the

@@ -436,8 +428,9 @@ contains a list of custom resources that need to be converted independently with
changing the order of objects.
The example server is organized in a way to be reused for other conversions. Most of the common code are located in the [framework file](https://github.com/kubernetes/kubernetes/tree/v1.15.0/test/images/crd-conversion-webhook/converter/framework.go) that leaves only [one function](https://github.com/kubernetes/kubernetes/blob/v1.15.0/test/images/crd-conversion-webhook/converter/example_converter.go#L29-L80) to be implemented for different conversions.
-->
请参考[自定义资源转换 webhook 服务](https://github.com/kubernetes/kubernetes/tree/v1.15.0/test/images/crd-conversion-webhook/main.go)的实现,
这一实现在 Kubernetes e2e 测试中得到验证。
webhook 处理由 apiserver 发送的 `ConversionReview` 请求,并把转换结果包装在 `ConversionResponse` 中发送回去。
请注意,请求中包含需要独立转换的自定义资源列表,转换时不能改变对象的顺序。
示例服务器的组织方式使其可以重用于其他转换。
大多数公共代码都位于[框架文件](https://github.com/kubernetes/kubernetes/tree/v1.14.0/test/images/crd-conversion-webhook/converter/framework.go)中,
只留下[一个函数](https://github.com/kubernetes/kubernetes/blob/v1.13.0/test/images/crd-conversion-webhook/converter/example_converter.go#L29-L80)用于实现不同的转换。

{{< note >}}
<!--
The example conversion webhook server leaves the `ClientAuth` field

@@ -447,27 +440,27 @@ authenticate the identity of the clients, supposedly API servers. If you need
mutual TLS or other ways to authenticate the clients, see
how to [authenticate API servers](/docs/reference/access-authn-authz/extensible-admission-controllers/#authenticate-apiservers).
-->
示例转换 webhook 服务器将 `ClientAuth` 字段留为
[空](https://github.com/kubernetes/kubernetes/tree/v1.13.0/test/images/crd-conversion-webhook/config.go#L47-L48),
默认为 `NoClientCert`。
这意味着 webhook 服务器不会验证客户端(按理应是 API 服务器)的身份。

如果您需要双向 TLS 或者其他方式来验证客户端,请参阅如何
[认证 API 服务](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#authenticate-apiservers)。
{{< /note >}}

<!--
### Deploy the conversion webhook service
-->
### 部署转换 Webhook 服务

<!--
Documentation for deploying the conversion webhook is the same as for the [admission webhook example service](/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service).
The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace and serving traffic on path `/crdconvert`.
-->
用于部署转换 webhook 的文档与[准入 webhook 示例服务](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service)的文档相同。
下面几节假设转换 webhook 服务器部署在 `default` 命名空间中名为 `example-conversion-webhook-server` 的服务上,并在路径 `/crdconvert` 上提供流量。
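
基于上述假设,CustomResourceDefinition 中指向该服务的 webhook 客户端配置大致如下
(以 `apiextensions.k8s.io/v1beta1` 的 `spec.conversion.webhookClientConfig` 形式示意,仅供参考):

```yaml
spec:
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: default
        name: example-conversion-webhook-server
        path: /crdconvert
      # caBundle: <PEM 编码且 base64 处理过的 CA 证书,用于校验服务端证书>
```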

@@ -484,14 +477,13 @@ if a different port is used for the service.

<!--
### Configure CustomResourceDefinition to use conversion webhooks
-->
### 配置 CustomResourceDefinition 以使用转换 Webhook

<!--
The `None` conversion example can be extended to use the conversion webhook by modifying `conversion`
section of the `spec`:
-->
通过修改 `spec` 中的 `conversion` 部分,可以扩展 `None` 转换示例来使用转换 webhook。

{{< tabs name="CustomResourceDefinition_versioning_example_2" >}}
{{% tab name="apiextensions.k8s.io/v1" %}}

@@ -641,11 +633,7 @@ Make sure the conversion service is up and running before applying new changes.

<!--
### Contacting the webhook
-->
### 调用 Webhook

<!--
Once the API server has determined a request should be sent to a conversion webhook,
it needs to know how to contact the webhook. This is specified in the `webhookClientConfig`
stanza of the webhook configuration.

@@ -653,10 +641,12 @@ stanza of the webhook configuration.
Conversion webhooks can either be called via a URL or a service reference,
and can optionally include a custom CA bundle to use to verify the TLS connection.
-->
apiserver 一旦确定请求应发送到转换 webhook,它就需要知道如何调用该 webhook。
这是在 webhook 配置的 `webhookClientConfig` 节中指定的。

转换 webhook 可以通过 URL 或服务引用来调用,并且可以选择包含自定义 CA 包,用于验证 TLS 连接。

### URL

@@ -744,8 +734,6 @@ Here is an example of a webhook that is configured to call a service on port "12
<!--
at the subpath "/my-path", and to verify the TLS connection against the ServerName
`my-service-name.my-service-namespace.svc` using a custom CA bundle.
-->
### 服务引用

`webhookClientConfig` 内部的 `service` 段是对转换 webhook 服务的引用。
如果 Webhook 在集群中运行,则应使用 `service` 而不是 `url`。

@@ -796,6 +784,7 @@ spec:
{{% /tab %}}
{{< /tabs >}}

<!--
## Webhook request and response

@@ -808,8 +797,7 @@ serialized to JSON as the body.
Webhooks can specify what versions of `ConversionReview` objects they accept
with the `conversionReviewVersions` field in their CustomResourceDefinition:
-->
## Webhook 请求和响应

### 请求

@@ -1112,7 +1100,6 @@ should be avoided whenever possible, and should not be used to enforce validatio
<!--
Example of a response from a webhook indicating a conversion request failed, with an optional message:
-->
来自 Webhook 的响应示例,指示转换请求失败,并带有可选消息:

{{< tabs name="ConversionReview_response_failure" >}}
@ -1,15 +1,10 @@
|
|||
---
|
||||
title: 设置一个扩展的 API server
|
||||
reviewers:
|
||||
- lavalamp
|
||||
- cheftako
|
||||
- chenopis
|
||||
title: 安装一个扩展的 API server
|
||||
content_type: task
|
||||
weight: 15
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Setup an extension API server
|
||||
reviewers:
|
||||
- lavalamp
|
||||
|
@ -17,7 +12,6 @@ reviewers:
|
|||
- chenopis
|
||||
content_type: task
|
||||
weight: 15
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -25,28 +19,18 @@ weight: 15
|
|||
<!--
|
||||
Setting up an extension API server to work the aggregation layer allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs.
|
||||
-->
|
||||
设置一个扩展的 API server 来使用聚合层以让 Kubernetes apiserver 使用其它 API 进行扩展,这些 API 不是核心 Kubernetes API 的一部分。
|
||||
|
||||
|
||||
安装扩展的 API 服务器来使用聚合层以让 Kubernetes API 服务器使用其它 API 进行扩展,
|
||||
这些 API 不是核心 Kubernetes API 的一部分。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
<!--
|
||||
* You need to have a Kubernetes cluster running.
|
||||
* You must [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) and enable the apiserver flags.
|
||||
-->
|
||||
* 您需要拥有一个运行的 Kubernetes 集群。
|
||||
* 您必须 [配置聚合层](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 并且启用 apiserver 的相关参数。
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
<!--
|
||||
* You must [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) and enable the apiserver flags.
|
||||
-->
|
||||
* 你必须[配置聚合层](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
|
||||
并且启用 API 服务器的相关参数。
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -57,55 +41,76 @@ The following steps describe how to set up an extension-apiserver *at a high lev
|
|||
|
||||
Alternatively, you can use an existing 3rd party solution, such as [apiserver-builder](https://github.com/Kubernetes-incubator/apiserver-builder/blob/master/README.md), which should generate a skeleton and automate all of the following steps for you.
|
||||
-->
|
||||
## 设置一个扩展的 api-server 来使用聚合层
|
||||
## 安装一个扩展的 API 服务器来使用聚合层
|
||||
|
||||
以下步骤描述如何 *在一个高层次* 设置一个扩展的 apiserver。无论您使用的是 YAML 配置还是使用 API,这些步骤都适用。目前我们正在尝试区分出两者的区别。有关使用 YAML 配置的具体示例,您可以在 Kubernetes 库中查看 [sample-apiserver](https://github.com/kubernetes/sample-apiserver/blob/master/README.md)。
|
||||
以下步骤描述如何 *在一个高层次* 设置一个扩展的 apiserver。无论你使用的是 YAML 配置还是使用 API,这些步骤都适用。
|
||||
目前我们正在尝试区分出两者的区别。有关使用 YAML 配置的具体示例,你可以在 Kubernetes 库中查看
|
||||
[sample-apiserver](https://github.com/kubernetes/sample-apiserver/blob/master/README.md)。
|
||||
|
||||
或者,您可以使用现有的第三方解决方案,例如 [apiserver-builder](https://github.com/Kubernetes-incubator/apiserver-builder/blob/master/README.md),它将生成框架并自动执行以下所有步骤。
|
||||
或者,你可以使用现有的第三方解决方案,例如
|
||||
[apiserver-builder](https://github.com/Kubernetes-incubator/apiserver-builder/blob/master/README.md),
|
||||
它将生成框架并自动执行以下所有步骤。
|
||||
|
||||
<!--
|
||||
1. Make sure the APIService API is enabled (check `--runtime-config`). It should be on by default, unless it's been deliberately turned off in your cluster.
|
||||
1. Make sure the APIService API is enabled (check `-runtime-config`). It should be on by default, unless it's been deliberately turned off in your cluster.
|
||||
1. You may need to make an RBAC rule allowing you to add APIService objects, or get your cluster administrator to make one. (Since API extensions affect the entire cluster, it is not recommended to do testing/development/debug of an API extension in a live cluster.)
|
||||
1. Create the Kubernetes namespace you want to run your extension api-service in.
|
||||
1. Create/get a CA cert to be used to sign the server cert the extension api-server uses for HTTPS.
|
||||
1. Create a server cert/key for the api-server to use for HTTPS. This cert should be signed by the above CA. It should also have a CN of the Kube DNS name. This is derived from the Kubernetes service and be of the form `<service name>.<service name namespace>.svc`
|
||||
1. Create a Kubernetes secret with the server cert/key in your namespace.
|
||||
1. Create a Kubernetes deployment for the extension api-server and make sure you are loading the secret as a volume. It should contain a reference to a working image of your extension api-server. The deployment should also be in your namespace.
|
||||
-->
|
||||
1. 确保启用了 APIService API(检查 `--runtime-config`)。默认应该是启用的,除非被特意关闭了。
|
||||
2. 你可能需要制定一个 RBAC 规则,以允许你添加 APIService 对象,或让你的集群管理员创建一个。
|
||||
(由于 API 扩展会影响整个集群,因此不建议在实时集群中对 API 扩展进行测试/开发/调试)
|
||||
3. 创建 Kubernetes 命名空间,扩展的 api-service 将运行在该命名空间中。
|
||||
4. 创建(或获取)用来签署服务器证书的 CA 证书,扩展 api-server 中将使用该证书做 HTTPS 连接。
|
||||
5. 为 api-server 创建用于 HTTPS 的服务端证书和密钥。这个证书应该由上述的 CA 签署。
|
||||
同时,证书的 CN 应该是 Kube DNS 名称,该名称从 Kubernetes 服务派生而来,
|
||||
格式为 `<service name>.<service name namespace>.svc`。
|
||||
6. 在你的命名空间中,使用服务端证书和密钥创建一个 Kubernetes Secret(参见此列表之后的示意命令)。
|
||||
7. 为扩展 api-server 创建一个 Kubernetes Deployment,并确保以卷的方式挂载了 Secret。
|
||||
它应该包含对扩展 api-server 镜像的引用。Deployment 也应该在同一个命名空间中。
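
例如,第 3 到第 6 步可以用类似下面的命令完成(命名空间、文件名和 Secret 名称均为假设的示意值,请按你的环境调整;签发 `server.crt` 的具体方式取决于你所用的 CA 工具):

```shell
# 创建运行扩展 api-server 的命名空间(名称为假设值)
kubectl create namespace my-extension

# 生成服务端私钥,并以 Kube DNS 名称作为 CN 生成证书签名请求
openssl genrsa -out server.key 2048
openssl req -new -key server.key \
  -subj "/CN=my-extension-apiserver.my-extension.svc" \
  -out server.csr
# 用上面准备好的 CA 对 server.csr 签名,得到 server.crt(此步骤因 CA 工具而异)

# 用服务端证书和私钥创建 Secret
kubectl create secret tls server-certs \
  --cert=server.crt --key=server.key \
  -n my-extension
```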
|
||||
|
||||
<!--
|
||||
1. Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake.
|
||||
1. Create a Kubernetes service account in your namespace.
|
||||
1. Create a Kubernetes cluster role for the operations you want to allow on your resources.
|
||||
1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you just created.
|
||||
1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server.
|
||||
1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap.
|
||||
-->
|
||||
8. 确保你的扩展 apiserver 从该卷中加载了那些证书,并在 HTTPS 握手过程中使用它们。
|
||||
9. 在你的命名空间中创建一个 Kubernetes 服务账号。
|
||||
10. 为你希望对资源执行的操作创建一个 Kubernetes 集群角色。
|
||||
11. 用你命名空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到你刚创建的集群角色上(相关命令的示意见本列表之后)。
|
||||
12. 用你命名空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到 `system:auth-delegator`
|
||||
集群角色,以将 auth 决策委派给 Kubernetes 核心 API 服务器。
|
||||
13. 用你命名空间中的服务账号创建一个 Kubernetes 角色绑定,绑定到
|
||||
`extension-apiserver-authentication-reader` 角色。
|
||||
这将让你的扩展 api-server 能够访问 `extension-apiserver-authentication` configmap。
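
第 11 到第 13 步中的几个绑定也可以用命令行创建,示意如下(服务账号与绑定名称均为假设值;`extension-apiserver-authentication-reader` 的角色绑定一般创建在 `kube-system` 命名空间中):

```shell
# 将自定义集群角色绑定到扩展 api-server 的服务账号(名称为假设值)
kubectl create clusterrolebinding my-extension-apiserver \
  --clusterrole=my-extension-apiserver-role \
  --serviceaccount=my-extension:my-extension-apiserver-sa

# 将 auth 决策委派给核心 API 服务器
kubectl create clusterrolebinding my-extension-apiserver:auth-delegator \
  --clusterrole=system:auth-delegator \
  --serviceaccount=my-extension:my-extension-apiserver-sa

# 允许读取 extension-apiserver-authentication ConfigMap
kubectl create rolebinding my-extension-apiserver:auth-reader \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=my-extension:my-extension-apiserver-sa \
  -n kube-system
```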
|
||||
|
||||
<!--
|
||||
1. Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you.
|
||||
1. Use kubectl to get your resource. When run, kubectl should return "No resources found.". This message indicates that everything worked but you currently have no objects of that resource type created.
|
||||
-->
|
||||
1. 确保启用了 APIService API(检查 `--runtime-config`)。默认应该是启用的,除非被特意关闭了。
|
||||
1. 您可能需要制定一个 RBAC 规则,以允许您添加 APIService 对象,或让您的集群管理员创建一个。(由于 API 扩展会影响整个集群,因此不建议在实时集群中对 API 扩展进行测试/开发/调试)
|
||||
1. 创建 Kubernetes 命名空间,扩展的 api-service 将运行在该命名空间中。
|
||||
1. 创建(或获取)用来签署服务器证书的 CA 证书,扩展 api-server 中将使用该证书做 HTTPS 连接。
|
||||
1. 为 api-server 创建一个服务端的证书(或秘钥)以使用 HTTPS。这个证书应该由上述的 CA 签署。同时应该还要有一个 Kube DNS 名称的 CN,这是从 Kubernetes 服务派生而来的,格式为 `<service name>.<service name namespace>.svc`。
|
||||
1. 使用命名空间中的证书(或秘钥)创建一个 Kubernetes secret。
|
||||
1. 为扩展 api-server 创建一个 Kubernetes deployment,并确保以卷的方式挂载了 secret。它应该包含对扩展 api-server 镜像的引用。Deployment 也应该在同一个命名空间中。
|
||||
1. 确保您的扩展 apiserver 从该卷中加载了那些证书,并在 HTTPS 握手过程中使用它们。
|
||||
1. 在您的命令空间中创建一个 Kubernetes service account。
|
||||
1. 为资源允许的操作创建 Kubernetes 集群角色。
|
||||
1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到您刚创建的角色上。
|
||||
1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到 `system:auth-delegator` 集群角色,以将 auth 决策委派给 Kubernetes 核心 API 服务器。
|
||||
1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到 `extension-apiserver-authentication-reader` 角色。这将让您的扩展 api-server 能够访问 `extension-apiserver-authentication` configmap。
|
||||
1. 创建一个 Kubernetes apiservice。上述的 CA 证书应该使用 base64 编码,剥离新行并用作 apiservice 中的 spec.caBundle。这不应该是命名空间化的。如果使用了 [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/),那么只需要传入 PEM 编码的 CA 绑定,因为 base 64 编码已经完成了。
|
||||
1. 使用 kubectl 来获得您的资源。它应该返回 "找不到资源"。此消息表示一切正常,但您目前还没有创建该资源类型的对象。
|
||||
|
||||
|
||||
14. 创建一个 Kubernetes apiservice。
|
||||
上述的 CA 证书应该先做 base64 编码、去掉换行符,然后用作 apiservice 中的 `spec.caBundle`(参见下面的示意清单)。
|
||||
该资源不应放到任何名字空间。如果使用了
|
||||
[kube-aggregator API](https://github.com/kubernetes/kube-aggregator/),那么只需要传入
|
||||
PEM 编码的 CA 证书包(bundle)即可,base64 编码会自动完成。
|
||||
15. 使用 kubectl 来获得你的资源。
|
||||
它应该返回 "找不到资源"。此消息表示一切正常,但你目前还没有创建该资源类型的对象。
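
下面是一个 APIService 对象的示意清单(API 组、版本、服务名称与命名空间均为假设值;`caBundle` 的取值可以用类似 `base64 -w0 ca.crt` 的命令生成):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    # 指向扩展 api-server 的 Service(名称与命名空间为假设值)
    name: my-extension-apiserver
    namespace: my-extension
  # base64 编码、去掉换行符的 CA 证书
  caBundle: <base64-编码的-CA-证书>
```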
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
<!--
|
||||
* If you haven't already, [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) and enable the apiserver flags.
|
||||
* For a high level overview, see [Extending the Kubernetes API with the aggregation layer](/docs/concepts/api-extension/apiserver-aggregation).
|
||||
* Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
|
||||
-->
|
||||
* 如果你还未配置,请 [配置聚合层](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 并启用 apiserver 的相关参数。
|
||||
* 高级概述,请参阅 [使用聚合层扩展 Kubernetes API](/docs/concepts/api-extension/apiserver-aggregation)。
|
||||
* 了解如何 [使用 Custom Resource Definition 扩展 Kubernetes API](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)。
|
||||
* 如果你还未配置,请[配置聚合层](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
|
||||
并启用 apiserver 的相关参数。
|
||||
* 欲了解更高层面的概述,请参阅[使用聚合层扩展 Kubernetes API](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation)。
|
||||
* 了解如何[使用 Custom Resource Definition 扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。
|
||||
|
||||
|
|
|
@ -1,20 +1,17 @@
|
|||
---
|
||||
reviewers:
|
||||
- chenopis
|
||||
|
||||
title: 使用 CronJob 运行自动化任务
|
||||
content_type: task
|
||||
weight: 10
|
||||
min-kubernetes-server-version: v1.8
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Running Automated Tasks with a CronJob
|
||||
reviewers:
|
||||
- chenopis
|
||||
content_type: task
|
||||
weight: 10
|
||||
---
|
||||
min-kubernetes-server-version: v1.8
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -31,14 +28,15 @@ Cron jobs can also schedule individual tasks for a specific time, such as if you
|
|||
|
||||
CronJobs 在创建周期性以及重复性的任务时很有帮助,例如执行备份操作或者发送邮件。CronJobs 也可以在特定时间调度单个任务,例如在系统活动较少的时段调度某个任务。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
CronJob resource in `batch/v2alpha1` API group has been deprecated starting from cluster version 1.8.
|
||||
You should switch to using `batch/v1beta1`, instead, which is enabled by default in the API server.
|
||||
Examples in this document use `batch/v1beta1` in all examples.
|
||||
-->
|
||||
从集群版本1.8开始,`batch/v2alpha1` API 组中的 CronJob 资源已经被废弃。
|
||||
你应该切换到 API 服务器默认启用的 `batch/v1beta1` API 组。本文中的所有示例使用了`batch/v1beta1`。
|
||||
{{< note >}}
|
||||
从集群版本 1.8 开始,`batch/v2alpha1` API 组中的 CronJob 资源已经被废弃。
|
||||
你应该切换到 API 服务器默认启用的 `batch/v1beta1` API 组。
|
||||
本文中的所有示例使用了 `batch/v1beta1`。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -47,30 +45,15 @@ For example, in certain circumstances, a single cron job can create multiple job
|
|||
Therefore, jobs should be idempotent.
|
||||
For more limitations, see [CronJobs](/docs/concepts/workloads/controllers/cron-jobs).
|
||||
-->
|
||||
|
||||
CronJobs 有一些限制和特点。
|
||||
例如,在特定状况下,同一个 CronJob 可以创建多个任务。
|
||||
因此,任务应该是幂等的。
|
||||
查看更多限制,请参考 [CronJobs](/zh/docs/concepts/workloads/controllers/cron-jobs)。
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
<!--
|
||||
* You need a working Kubernetes cluster at version >= 1.8 (for CronJob). For previous versions of cluster (< 1.8)
|
||||
you need to explicitly enable `batch/v2alpha1` API by passing `--runtime-config=batch/v2alpha1=true` to
|
||||
the API server (see [Turn on or off an API version for your cluster](/docs/admin/cluster-management/#turn-on-or-off-an-api-version-for-your-cluster)
|
||||
for more), and then restart both the API server and the controller manager
|
||||
component.
|
||||
-->
|
||||
|
||||
* 你需要一个版本 >= 1.8 且工作正常的 Kubernetes 集群。对于更早的版本(< 1.8),你需要对 API 服务器设置 `--runtime-config=batch/v2alpha1=true` 来启用 `batch/v2alpha1` API(更多信息请查看[为你的集群开启或关闭 API 版本](/zh/docs/tasks/administer-cluster/cluster-management/#打开或关闭集群的-api-版本)
|
||||
),然后重启 API 服务器和控制器管理器组件。
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -80,7 +63,6 @@ component.
|
|||
Cron jobs require a config file.
|
||||
This example cron job config `.spec` file prints the current time and a hello message every minute:
|
||||
-->
|
||||
|
||||
## 创建 CronJob
|
||||
|
||||
CronJob 需要一个配置文件。
|
||||
|
@ -91,61 +73,69 @@ CronJob 需要一个配置文件。
|
|||
<!--
|
||||
Run the example cron job by downloading the example file and then running this command:
|
||||
-->
|
||||
|
||||
想要运行示例的 CronJob,可以下载示例文件并执行命令:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./cronjob.yaml
|
||||
cronjob "hello" created
|
||||
kubectl create -f ./cronjob.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
Alternatively, you can use `kubectl run` to create a cron job without writing a full config:
|
||||
-->
|
||||
|
||||
或者你也可以使用 `kubectl run` 来创建一个 CronJob 而不需要编写完整的配置:
|
||||
|
||||
```shell
|
||||
$ kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
|
||||
cronjob "hello" created
|
||||
```
|
||||
cronjob.batch/hello created
|
||||
```
|
||||
|
||||
<!--
|
||||
After creating the cron job, get its status using this command:
|
||||
-->
|
||||
|
||||
创建好 CronJob 后,使用下面的命令来获取其状态:
|
||||
|
||||
```shell
|
||||
$ kubectl get cronjob hello
|
||||
NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE
|
||||
hello */1 * * * * False 0 <none>
|
||||
kubectl get cronjob hello
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
|
||||
hello */1 * * * * False 0 50s 75s
|
||||
```
|
||||
<!--
|
||||
As you can see from the results of the command, the cron job has not scheduled or run any jobs yet.
|
||||
Watch for the job to be created in around one minute:
|
||||
-->
|
||||
|
||||
就像你从命令返回结果看到的那样,CronJob 还没有调度或执行任何任务。等待大约一分钟,观察任务被创建出来:
|
||||
|
||||
```shell
|
||||
$ kubectl get jobs --watch
|
||||
NAME DESIRED SUCCESSFUL AGE
|
||||
hello-4111706356 1 1 2s
|
||||
kubectl get jobs --watch
|
||||
```
|
||||
|
||||
```
|
||||
NAME COMPLETIONS DURATION AGE
|
||||
hello-4111706356 0/1 0s
|
||||
hello-4111706356 0/1 0s 0s
|
||||
hello-4111706356 1/1 5s 5s
|
||||
```
|
||||
|
||||
<!--
|
||||
Now you've seen one running job scheduled by the "hello" cron job.
|
||||
You can stop watching the job and view the cron job again to see that it scheduled the job:
|
||||
-->
|
||||
|
||||
现在你已经看到了一个运行中的任务被 “hello” CronJob 调度。你可以停止监视这个任务,然后再次查看 CronJob 就能看到它调度任务:
|
||||
现在你已经看到了一个运行中的任务被 “hello” CronJob 调度。
|
||||
你可以停止监视这个任务,然后再次查看 CronJob 就能看到它调度任务:
|
||||
|
||||
```shell
|
||||
$ kubectl get cronjob hello
|
||||
NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE
|
||||
hello */1 * * * * False 0 Mon, 29 Aug 2016 14:34:00 -0700
|
||||
kubectl get cronjob hello
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
|
||||
hello */1 * * * * False 0 50s 75s
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -155,21 +145,34 @@ There are currently 0 active jobs, meaning that the job has completed or failed.
|
|||
Now, find the pods that the last scheduled job created and view the standard output of one of the pods.
|
||||
Note that the job name and pod name are different.
|
||||
-->
|
||||
|
||||
你应该能看到 “hello” CronJob 在 `LAST-SCHEDULE` 声明的时间点成功的调度了一次任务。有0个活跃的任务意味着任务执行完毕或者执行失败。
|
||||
你应该能看到 “hello” CronJob 在 `LAST SCHEDULE` 所示的时间点成功地调度了一次任务。
|
||||
有 0 个活跃的任务意味着任务执行完毕或者执行失败。
|
||||
|
||||
现在,找到最后一次调度任务创建的 Pod 并查看一个 Pod 的标准输出。请注意任务名称和 Pod 名称是不同的。
|
||||
|
||||
<!--
|
||||
The job name and pod name are different.
|
||||
-->
|
||||
{{< note >}}
|
||||
Job 名称和 Pod 名称不同。
|
||||
{{< /note >}}
|
||||
|
||||
```shell
|
||||
# Replace "hello-4111706356" with the job name in your system
|
||||
$ pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items..metadata.name})
|
||||
# 在你的系统上将 "hello-4111706356" 替换为 Job 名称
|
||||
pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items..metadata.name})
|
||||
```
|
||||
|
||||
$ echo $pods
|
||||
hello-4111706356-o9qcm
|
||||
<!--
|
||||
Show pod log:
|
||||
-->
|
||||
查看 Pod 日志:
|
||||
|
||||
$ kubectl logs $pods
|
||||
Mon Aug 29 21:34:09 UTC 2016
|
||||
```shell
|
||||
kubectl logs $pods
|
||||
```
|
||||
|
||||
```
|
||||
Fri Feb 22 11:02:09 UTC 2019
|
||||
Hello from the Kubernetes cluster
|
||||
```
|
||||
|
||||
|
@ -179,41 +182,42 @@ Hello from the Kubernetes cluster
|
|||
When you don't need a cron job any more, delete it with `kubectl delete cronjob`:
|
||||
-->
|
||||
|
||||
## 删除 CronJob
|
||||
|
||||
当你不再需要 CronJob 时,可以用 `kubectl delete cronjob` 删掉它:
|
||||
|
||||
```shell
|
||||
$ kubectl delete cronjob hello
|
||||
cronjob "hello" deleted
|
||||
kubectl delete cronjob hello
|
||||
```
|
||||
|
||||
<!--
|
||||
Deleting the cron job removes all the jobs and pods it created and stops it from creating additional jobs.
|
||||
You can read more about removing jobs in [garbage collection](/docs/concepts/workloads/controllers/garbage-collection/).
|
||||
-->
|
||||
|
||||
删除 CronJob 会清除它创建的所有任务和 Pod,并阻止它创建额外的任务。你可以查阅 [垃圾收集](/zh/docs/concepts/workloads/controllers/garbage-collection/)。
|
||||
删除 CronJob 会清除它创建的所有任务和 Pod,并阻止它创建额外的任务。你可以查阅
|
||||
[垃圾收集](/zh/docs/concepts/workloads/controllers/garbage-collection/)。
|
||||
|
||||
<!--
|
||||
## Writing a Cron Job Spec
|
||||
|
||||
As with all other Kubernetes configs, a cron job needs `apiVersion`, `kind`, and `metadata` fields. For general
|
||||
information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications),
|
||||
and [using kubectl to manage resources](/docs/user-guide/working-with-resources) documents.
|
||||
information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/),
|
||||
and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
|
||||
|
||||
A cron job config also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
|
||||
-->
|
||||
|
||||
## 编写 CronJob 声明信息
|
||||
|
||||
像 Kubernetes 的其他配置一样,CronJob 需要 `apiVersion`、 `kind`、 和 `metadata` 域。配置文件的一般信息,请参考 [部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/) 和 [使用 kubectl 管理资源](/zh/docs/concepts/overview/working-with-objects/object-management/).
|
||||
像 Kubernetes 的其他配置一样,CronJob 需要 `apiVersion`、`kind` 和 `metadata` 域。
|
||||
配置文件的一般信息,请参考
|
||||
[部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/) 和
|
||||
[使用 kubectl 管理资源](/zh/docs/concepts/overview/working-with-objects/object-management/)。
|
||||
|
||||
CronJob 配置也需要包括[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
|
||||
CronJob 配置也需要包括
|
||||
[`.spec` 节](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
All modifications to a cron job, especially its `.spec`, are applied only to the following runs.
|
||||
-->
|
||||
{{< note >}}
|
||||
对 CronJob 的所有改动,特别是它的 `.spec`,只会影响将来的运行实例。
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -223,16 +227,16 @@ All modifications to a cron job, especially its `.spec`, are applied only to the
|
|||
The `.spec.schedule` is a required field of the `.spec`.
|
||||
It takes a [Cron](https://en.wikipedia.org/wiki/Cron) format string, such as `0 * * * *` or `@hourly`, as schedule time of its jobs to be created and executed.
|
||||
-->
|
||||
|
||||
### 时间安排
|
||||
|
||||
`.spec.schedule` 是 `.spec` 需要的域。它使用了 [Cron](https://en.wikipedia.org/wiki/Cron) 格式串,例如 `0 * * * *` or `@hourly` ,做为它的任务被创建和执行的调度时间。
|
||||
`.spec.schedule` 是 `.spec` 中必需的域。它接受 [Cron](https://en.wikipedia.org/wiki/Cron)
|
||||
格式的字符串,例如 `0 * * * *` 或 `@hourly`,作为其任务被创建和执行的调度时间。
|
||||
|
||||
<!--
|
||||
The format also includes extended `vixie cron` step values. As explained in the [FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29):
|
||||
-->
|
||||
|
||||
该格式也包含了扩展的 `vixie cron` 步长值。[FreeBSD 手册](https://www.freebsd.org/cgi/man.cgi?crontab%285%29)中解释如下:
|
||||
该格式也包含了扩展的 `vixie cron` 步长值。
|
||||
[FreeBSD 手册](https://www.freebsd.org/cgi/man.cgi?crontab%285%29)中解释如下:
|
||||
|
||||
<!--
|
||||
> Step values can be used in conjunction with ranges. Following a range
|
||||
|
@ -244,13 +248,14 @@ The format also includes extended `vixie cron` step values. As explained in the
|
|||
-->
|
||||
|
||||
> 步长可被用于范围组合。范围后面带有 `/<数字>` 可以声明范围内的步幅数值。
|
||||
> 例如,`0-23/2` 可被用在小时域来声明命令在其他数值的小时数执行( V7 标准中对应的方法是`0,2,4,6,8,10,12,14,16,18,20,22`)。
|
||||
> 例如,`0-23/2` 可被用在小时域,表示命令每隔一个小时执行一次
|
||||
> (V7 标准中对应的写法是 `0,2,4,6,8,10,12,14,16,18,20,22`)。
|
||||
> 步长也可以放在通配符后面,因此如果你想表达“每两小时”,就用 `*/2`。
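
下面是几个调度表达式的示例(字段顺序为“分 时 日 月 周”,右侧注释仅作说明):

```
*/2 * * * *      # 每两分钟执行一次
0 * * * *        # 每小时整点执行一次
0 0-23/2 * * *   # 每隔一个小时(偶数小时)的整点执行一次
@hourly          # 每小时执行一次
```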
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
A question mark (`?`) in the schedule has the same meaning as an asterisk `*`, that is, it stands for any of available value for a given field.
|
||||
-->
|
||||
{{< note >}}
|
||||
调度中的问号 (`?`) 和星号 `*` 含义相同,表示给定域的任何可用值。
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -258,14 +263,18 @@ A question mark (`?`) in the schedule has the same meaning as an asterisk `*`, t
|
|||
### Job Template
|
||||
|
||||
The `.spec.jobTemplate` is the template for the job, and it is required.
|
||||
It has exactly the same schema as a [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/), except that it is nested and does not have an `apiVersion` or `kind`.
|
||||
For information about writing a job `.spec`, see [Writing a Job Spec](/docs/concepts/workloads/controllers/jobs-run-to-completion/#writing-a-job-spec).
|
||||
It has exactly the same schema as a [Job](/docs/concepts/workloads/controllers/job/),
|
||||
except that it is nested and does not have an `apiVersion` or `kind`.
|
||||
For information about writing a job `.spec`, see
|
||||
[Writing a Job Spec](/docs/concepts/workloads/controllers/job/#writing-a-job-spec).
|
||||
-->
|
||||
|
||||
### 任务模板
|
||||
|
||||
`.spec.jobTemplate`是任务的模版,它是必须的。它和 [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)的语法完全一样,除了它是嵌套的没有 `apiVersion` 和 `kind`。
|
||||
编写任务的 `.spec` ,请参考 [编写任务的Spec](/docs/concepts/workloads/controllers/jobs-run-to-completion/#writing-a-job-spec)。
|
||||
`.spec.jobTemplate` 是任务的模板,它是必需的。它和
|
||||
[Job](/zh/docs/concepts/workloads/controllers/job/)的语法完全一样,
|
||||
只是它是嵌套形式,并且没有 `apiVersion` 和 `kind`。
|
||||
关于如何编写任务的 `.spec`,请参考
|
||||
[编写 Job 的 Spec](/zh/docs/concepts/workloads/controllers/job/#writing-a-job-spec)。
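
下面是一个只包含 `schedule` 和嵌套的 `jobTemplate` 的片段示意(容器镜像与命令为假设值):

```yaml
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure
```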
|
||||
|
||||
<!--
|
||||
### Starting Deadline
|
||||
|
@ -276,8 +285,7 @@ After the deadline, the cron job does not start the job.
|
|||
Jobs that do not meet their deadline in this way count as failed jobs.
|
||||
If this field is not specified, the jobs have no deadline.
|
||||
-->
|
||||
|
||||
### 开始的最后期限
|
||||
### 开始的最后期限 {#starting-deadline}
|
||||
|
||||
`.spec.startingDeadlineSeconds` 域是可选的。
|
||||
它表示任务如果由于某种原因错过了调度时间,开始该任务的截止时间的秒数。过了截止时间,CronJob 就不会开始任务。
|
||||
|
@ -294,11 +302,15 @@ field is set (not null), the CronJob controller counts how many missed jobs occu
|
|||
schedules occurred in the last 200 seconds. In that case, if there were more than 100 missed schedules in the
|
||||
last 200 seconds, the cron job is no longer scheduled.
|
||||
-->
|
||||
|
||||
CronJob 控制器会统计错过了多少次调度。如果错过了100次以上的调度,CronJob 就不再调度了。当没有设置 `.spec.startingDeadlineSeconds` 时,CronJob 控制器统计从`status.lastScheduleTime`到当前的调度错过次数。
|
||||
例如一个 CronJob 期望每分钟执行一次,`status.lastScheduleTime`是 5:00am,但现在是 7:00am。那意味着120次调度被错过了,所以 CronJob 将不再被调度。
|
||||
如果设置了 `.spec.startingDeadlineSeconds` 域(非空),CronJob 控制器统计从 `.spec.startingDeadlineSeconds` 到当前时间错过了多少次任务。
|
||||
例如设置了 `200`,它会统计过去200秒内错过了多少次调度。在那种情况下,如果过去200秒内错过了超过100次的调度,CronJob 就不再调度。
|
||||
CronJob 控制器会统计错过了多少次调度。如果错过了100次以上的调度,CronJob 就不再调度了。
|
||||
当没有设置 `.spec.startingDeadlineSeconds` 时,CronJob 控制器统计从
|
||||
`status.lastScheduleTime` 到当前的调度错过次数。
|
||||
例如一个 CronJob 期望每分钟执行一次,`status.lastScheduleTime`是 `5:00am`,
|
||||
但现在是 `7:00am`。那意味着 120 次调度被错过了,所以 CronJob 将不再被调度。
|
||||
如果设置了 `.spec.startingDeadlineSeconds` 域(非空),CronJob 控制器统计从
|
||||
`.spec.startingDeadlineSeconds` 到当前时间错过了多少次任务。
|
||||
例如设置了 `200`,它会统计过去 200 秒内错过了多少次调度。
|
||||
在那种情况下,如果过去 200 秒内错过了超过 100 次的调度,CronJob 就不再调度。
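
例如,下面的片段将开始任务的最后期限设置为 200 秒(仅为片段示意):

```yaml
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 200
```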
|
||||
|
||||
<!--
|
||||
### Concurrency Policy
|
||||
|
@ -314,10 +326,10 @@ the spec may specify only one of the following concurrency policies:
|
|||
Note that concurrency policy only applies to the jobs created by the same cron job.
|
||||
If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.
|
||||
-->
|
||||
|
||||
### 并发性规则
|
||||
|
||||
`.spec.concurrencyPolicy` 也是可选的。它声明了 CronJob 创建的任务执行时发生重叠如何处理。spec 仅能声明下列规则中的一种:
|
||||
`.spec.concurrencyPolicy` 也是可选的。它声明了 CronJob 创建的任务执行时发生重叠如何处理。
|
||||
spec 仅能声明下列规则中的一种:
|
||||
|
||||
* `Allow`(默认):CronJob 允许并发任务执行。
|
||||
* `Forbid`: CronJob 不允许并发任务执行;如果新任务的执行时间到了而老任务没有执行完,CronJob 会忽略新任务的执行。
|
||||
|
@ -333,17 +345,18 @@ If it is set to `true`, all subsequent executions are suspended.
|
|||
This setting does not apply to already started executions.
|
||||
Defaults to false.
|
||||
-->
|
||||
|
||||
### 挂起
|
||||
|
||||
`.spec.suspend`域也是可选的。如果设置为 `true` ,后续发生的执行都会挂起。这个设置对已经开始的执行不起作用。默认是关闭的。
|
||||
`.spec.suspend` 域也是可选的。如果设置为 `true`,后续发生的执行都会挂起(示意命令见下文)。
|
||||
这个设置对已经开始的执行不起作用。默认是关闭的。
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Executions that are suspended during their scheduled time count as missed jobs.
|
||||
When `.spec.suspend` changes from `true` to `false` on an existing cron job without a [starting deadline](#starting-deadline), the missed jobs are scheduled immediately.
|
||||
-->
|
||||
在调度时间内挂起的执行都会被统计为错过的任务。当 `.spec.suspend` 从 `true` 改为 `false` 时,且没有 [开始的最后期限](#starting-deadline),错过的任务会被立即调度。
|
||||
{{< caution >}}
|
||||
在调度时间内挂起的执行都会被统计为错过的任务。当 `.spec.suspend` 从 `true` 改为 `false` 时,
|
||||
且没有 [开始的最后期限](#starting-deadline),错过的任务会被立即调度。
|
||||
{{< /caution >}}
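
例如,可以用类似下面的命令将已有的 CronJob 挂起或恢复(以前文创建的 `hello` 为例,仅为示意):

```shell
# 挂起 CronJob,后续调度不再创建新任务
kubectl patch cronjob hello -p '{"spec":{"suspend":true}}'

# 恢复调度
kubectl patch cronjob hello -p '{"spec":{"suspend":false}}'
```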
|
||||
|
||||
<!--
|
||||
|
@ -353,7 +366,6 @@ The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields
|
|||
These fields specify how many completed and failed jobs should be kept.
|
||||
By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping none of the corresponding kind of jobs after they finish.
|
||||
-->
|
||||
|
||||
### 任务历史限制
|
||||
|
||||
`.spec.successfulJobsHistoryLimit` 和 `.spec.failedJobsHistoryLimit` 是可选的。
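
例如,下面的片段保留最近 3 个成功的 Job 和 1 个失败的 Job(与默认值相同,仅为片段示意):

```yaml
spec:
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
```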
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: 使用工作队列进行粗粒度并行处理
|
||||
min-kubernetes-server-version: v1.8
|
||||
content_type: task
|
||||
weight: 30
|
||||
---
|
||||
|
@ -7,6 +8,7 @@ weight: 30
|
|||
<!--
|
||||
---
|
||||
title: Coarse Parallel Processing Using a Work Queue
|
||||
min-kubernetes-server-version: v1.8
|
||||
content_type: task
|
||||
weight: 30
|
||||
---
|
||||
|
@ -30,7 +32,6 @@ Here is an overview of the steps in this example:
|
|||
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
|
||||
one task from the message queue, processes it, and repeats until the end of the queue is reached.
|
||||
-->
|
||||
|
||||
本例中,我们会运行包含多个并行工作进程的 Kubernetes Job。
|
||||
|
||||
本例中,每个 Pod 一旦被创建,会立即从任务队列中取走一个工作单元并完成它,然后将工作单元从队列中删除后再退出。
|
||||
|
@ -44,8 +45,6 @@ Here is an overview of the steps in this example:
|
|||
1. **启动一个在队列中执行这些任务的 Job**。该 Job 启动多个 Pod。每个 Pod 从消息队列中取走一个任务,处理它,然后重复执行,直到队列的队尾。
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
|
@ -54,12 +53,11 @@ Be familiar with the basic,
|
|||
non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/).
|
||||
-->
|
||||
|
||||
要熟悉 Job 基本用法(非并行的),请参考 [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/)。
|
||||
要熟悉 Job 基本用法(非并行的),请参考
|
||||
[Job](/zh/docs/concepts/workloads/controllers/job/)。
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
|
@ -72,7 +70,6 @@ cluster and reuse it for many jobs, as well as for long-running services.
|
|||
|
||||
Start RabbitMQ as follows:
|
||||
-->
|
||||
|
||||
## 启动消息队列服务
|
||||
|
||||
本例使用了 RabbitMQ,不过改用其他 AMQP 类型的消息服务应该也比较容易。
|
||||
|
@ -82,9 +79,17 @@ Start RabbitMQ as follows:
|
|||
按下面的方法启动 RabbitMQ:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.3/examples/celery-rabbitmq/rabbitmq-service.yaml
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.3/examples/celery-rabbitmq/rabbitmq-service.yaml
|
||||
```
|
||||
```
|
||||
service "rabbitmq-service" created
|
||||
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.3/examples/celery-rabbitmq/rabbitmq-controller.yaml
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.3/examples/celery-rabbitmq/rabbitmq-controller.yaml
|
||||
```
|
||||
|
||||
```
|
||||
replicationcontroller "rabbitmq-controller" created
|
||||
```
|
||||
|
||||
|
@ -103,7 +108,6 @@ and experiment with queues.
|
|||
|
||||
First create a temporary interactive Pod.
|
||||
-->
|
||||
|
||||
## 测试消息队列服务
|
||||
|
||||
现在,我们可以试着访问消息队列。我们将会创建一个临时的可交互的 Pod,在它上面安装一些工具,然后用队列做实验。
|
||||
|
@ -112,7 +116,9 @@ First create a temporary interactive Pod.
|
|||
|
||||
```shell
|
||||
# 创建一个临时的可交互的 Pod
|
||||
$ kubectl run -i --tty temp --image ubuntu:14.04
|
||||
kubectl run -i --tty temp --image ubuntu:14.04
|
||||
```
|
||||
```
|
||||
Waiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false
|
||||
... [ previous line repeats several times .. hit return when it stops ] ...
|
||||
```
|
||||
|
@ -122,7 +128,6 @@ Note that your pod name and command prompt will be different.
|
|||
|
||||
Next install the `amqp-tools` so we can work with message queues.
|
||||
-->
|
||||
|
||||
请注意你的 Pod 名称和命令提示符将会不同。
|
||||
|
||||
接下来安装 `amqp-tools`,这样我们就能用消息队列了。
|
||||
|
@ -148,10 +153,6 @@ Next, we will check that we can discover the rabbitmq service:
|
|||
<!--
|
||||
# Note the rabbitmq-service has a DNS name, provided by Kubernetes:
|
||||
-->
|
||||
<!--
|
||||
# Your address will vary.
|
||||
-->
|
||||
|
||||
```
|
||||
# 请注意 rabbitmq-service 有 Kubernetes 提供的 DNS 名称,
|
||||
|
||||
|
@ -162,54 +163,43 @@ Address: 10.0.0.10#53
|
|||
Name: rabbitmq-service.default.svc.cluster.local
|
||||
Address: 10.0.147.152
|
||||
|
||||
# 你的 IP 地址将会发生变化。
|
||||
# 你的 IP 地址会不同
|
||||
```
|
||||
|
||||
<!--
|
||||
If Kube-DNS is not setup correctly, the previous step may not work for you.
|
||||
You can also find the service IP in an env var:
|
||||
-->
|
||||
|
||||
如果 Kube-DNS 没有正确安装,上一步可能会出错。
|
||||
你也可以在环境变量中找到服务 IP。
|
||||
|
||||
<!--
|
||||
# Your address will vary.
|
||||
-->
|
||||
|
||||
```
|
||||
# env | grep RABBIT | grep HOST
|
||||
RABBITMQ_SERVICE_SERVICE_HOST=10.0.147.152
|
||||
|
||||
# 你的 IP 地址将会发生变化。
|
||||
# 你的 IP 地址会有所不同
|
||||
```
|
||||
|
||||
<!--
|
||||
Next we will verify we can create a queue, and publish and consume messages.
|
||||
-->
|
||||
|
||||
接着我们将要确认可以创建队列,并能发布消息和消费消息。
|
||||
|
||||
<!--
|
||||
# In the next line, rabbitmq-service is the hostname where the rabbitmq-service
|
||||
# can be reached. 5672 is the standard port for rabbitmq.
|
||||
-->
|
||||
<!--
|
||||
|
||||
# If you could not resolve "rabbitmq-service" in the previous step,
|
||||
# then use this command instead:
|
||||
# root@temp-loe07:/# BROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672
|
||||
|
||||
# Now create a queue:
|
||||
|
||||
-->
|
||||
<!--
|
||||
# and publish a message to it:
|
||||
-->
|
||||
<!--
|
||||
# and get it back.
|
||||
-->
|
||||
|
||||
|
||||
```shell
|
||||
# 下一行中,rabbitmq-service 是可以访问到 RabbitMQ 服务的主机名,5672 是 RabbitMQ 的标准端口。
|
||||
|
||||
|
@ -232,6 +222,7 @@ root@temp-loe07:/# /usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && ec
|
|||
Hello
|
||||
root@temp-loe07:/#
|
||||
```
|
||||
|
||||
<!--
|
||||
In the last command, the `amqp-consume` tool takes one message (`-c 1`)
|
||||
from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` is just printing
|
||||
|
@ -255,7 +246,6 @@ In a practice, the content of the messages might be:
|
|||
- configuration parameters to a simulation
|
||||
- frame numbers of a scene to be rendered
|
||||
-->
|
||||
|
||||
## 为队列增加任务
|
||||
|
||||
现在让我们给队列增加一些任务。在我们的示例中,任务是多个待打印的字符串。
|
||||
|
@ -283,9 +273,9 @@ In practice, you might write a program to fill the queue using an amqp client li
|
|||
例如,我们创建队列并使用 amqp 命令行工具向队列中填充消息。实践中,你可以写个程序来利用 amqp 客户端库来填充这些队列。
|
||||
|
||||
```shell
|
||||
$ /usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d job1
|
||||
$ for f in apple banana cherry date fig grape lemon melon
|
||||
/usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d job1
|
||||
|
||||
for f in apple banana cherry date fig grape lemon melon
|
||||
do
|
||||
/usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f
|
||||
done
|
||||
|
@ -302,7 +292,6 @@ We will use the `amqp-consume` utility to read the message
|
|||
from the queue and run our actual program. Here is a very simple
|
||||
example program:
|
||||
-->
|
||||
|
||||
这样,我们给队列中填充了 8 个消息。
|
||||
|
||||
## 创建镜像
|
||||
|
@ -322,10 +311,14 @@ and [worker.py](/examples/application/job/rabbitmq/worker.py). In either case,
|
|||
build the image with this command:
|
||||
-->
|
||||
|
||||
现在,编译镜像。如果你在用源代码树,那么切换到目录 `examples/job/work-queue-1`。否则的话,创建一个临时目录,切换到这个目录。下载 [Dockerfile](/examples/application/job/rabbitmq/Dockerfile),和 [worker.py](/examples/application/job/rabbitmq/worker.py)。无论哪种情况,都可以用下面的命令编译镜像
|
||||
现在,编译镜像。如果你在用源代码树,那么切换到目录 `examples/job/work-queue-1`。
|
||||
否则的话,创建一个临时目录,切换到这个目录。下载
|
||||
[Dockerfile](/examples/application/job/rabbitmq/Dockerfile),和
|
||||
[worker.py](/examples/application/job/rabbitmq/worker.py)。
|
||||
无论哪种情况,都可以用下面的命令编译镜像:
|
||||
|
||||
```shell
|
||||
$ docker build -t job-wq-1 .
|
||||
docker build -t job-wq-1 .
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -333,8 +326,8 @@ For the [Docker Hub](https://hub.docker.com/), tag your app image with
|
|||
your username and push to the Hub with the below commands. Replace
|
||||
`<username>` with your Hub username.
|
||||
-->
|
||||
|
||||
对于 [Docker Hub](https://hub.docker.com/), 给你的应用镜像打上标签,标签为你的用户名,然后用下面的命令推送到 Hub。用你的 Hub 用户名替换 `<username>`。
|
||||
对于 [Docker Hub](https://hub.docker.com/), 给你的应用镜像打上标签,
|
||||
标签为你的用户名,然后用下面的命令推送到 Hub。用你的 Hub 用户名替换 `<username>`。
|
||||
|
||||
```shell
|
||||
docker tag job-wq-1 <username>/job-wq-1
|
||||
|
@ -347,8 +340,9 @@ Registry](https://cloud.google.com/tools/container-registry/), tag
|
|||
your app image with your project ID, and push to GCR. Replace
|
||||
`<project>` with your project ID.
|
||||
-->
|
||||
|
||||
如果你在用[谷歌容器仓库](https://cloud.google.com/tools/container-registry/),用你的项目 ID 作为标签打到你的应用镜像上,然后推送到 GCR。用你的项目 ID 替换 `<project>`。
|
||||
如果你在用[谷歌容器仓库](https://cloud.google.com/tools/container-registry/),
|
||||
用你的项目 ID 作为标签打到你的应用镜像上,然后推送到 GCR。
|
||||
用你的项目 ID 替换 `<project>`。
|
||||
|
||||
```shell
|
||||
docker tag job-wq-1 gcr.io/<project>/job-wq-1
|
||||
|
@ -361,7 +355,6 @@ gcloud docker -- push gcr.io/<project>/job-wq-1
|
|||
Here is a job definition. You'll need to make a copy of the Job and edit the
|
||||
image to match the name you used, and call it `./job.yaml`.
|
||||
-->
|
||||
|
||||
## 定义 Job
|
||||
|
||||
这里给出一个 Job 的 YAML 定义文件。你需要复制一份,编辑其中的镜像以匹配你使用的名称,并将其保存为 `./job.yaml`。
|
||||
|
@ -377,7 +370,6 @@ done. So we set, `.spec.completions: 8` for the example, since we put 8 items i
|
|||
|
||||
So, now run the Job:
|
||||
-->
|
||||
|
||||
本例中,每个 Pod 处理队列中的一个消息后就退出。这样,Job 的完成计数就代表了已完成的工作项的数量。因为我们在队列中放入了 8 个工作项,所以本例设置 `.spec.completions: 8`。
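
为便于理解,下面给出该 Job 定义的大致结构(片段示意,镜像地址为占位值,实际内容请以你保存的 `./job.yaml` 为准):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-1
spec:
  completions: 8      # 队列中放入了 8 个工作项
  parallelism: 2      # 并行运行的 Pod 数(示意值)
  template:
    spec:
      containers:
      - name: c
        image: gcr.io/<project>/job-wq-1   # 替换为你推送的镜像
      restartPolicy: OnFailure
```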
|
||||
|
||||
## 运行 Job
|
||||
|
@ -395,7 +387,10 @@ Now wait a bit, then check on the job.
|
|||
稍等片刻,然后检查 Job。
|
||||
|
||||
```shell
|
||||
$ kubectl describe jobs/job-wq-1
|
||||
kubectl describe jobs/job-wq-1
|
||||
```
|
||||
|
||||
```
|
||||
Name: job-wq-1
|
||||
Namespace: default
|
||||
Selector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f
|
||||
|
@ -434,11 +429,8 @@ Events:
|
|||
<!--
|
||||
All our pods succeeded. Yay.
|
||||
-->
|
||||
|
||||
我们所有的 Pod 都成功了。耶!
|
||||
|
||||
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
<!--
|
||||
|
@ -451,12 +443,12 @@ It does require that you run a message queue service.
|
|||
If running a queue service is inconvenient, you may
|
||||
want to consider one of the other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns).
|
||||
-->
|
||||
|
||||
## 替代方案
|
||||
|
||||
本文所讲述的处理方法的好处是你不需要修改你的 "worker" 程序使其知道工作队列的存在。
|
||||
|
||||
本文所描述的方法需要你运行一个消息队列服务。如果不方便运行消息队列服务,你也许会考虑另外一种[任务模式](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns)。
|
||||
本文所描述的方法需要你运行一个消息队列服务。如果不方便运行消息队列服务,你也许会考虑另外一种
|
||||
[任务模式](/zh/docs/concepts/workloads/controllers/job/#job-patterns)。
|
||||
|
||||
<!--
|
||||
This approach creates a pod for every work item. If your work items only take a few seconds,
|
||||
|
@ -470,9 +462,15 @@ A [different example](/docs/tasks/job/fine-parallel-processing-work-queue/), sho
|
|||
communicate with the work queue using a client library.
|
||||
-->
|
||||
|
||||
本文所述的方法为每个工作项创建了一个 Pod。如果你的工作项仅需数秒钟,为每个工作项创建 Pod会增加很多的常规消耗。可以考虑另外的方案请参考[示例](/docs/tasks/job/fine-parallel-processing-work-queue/),这种方案可以实现每个 Pod 执行多个工作项。
|
||||
本文所述的方法为每个工作项创建了一个 Pod。
|
||||
如果你的工作项仅需数秒钟即可完成,为每个工作项创建一个 Pod 会带来很大的额外开销。
|
||||
替代方案可以参考[另一个示例](/zh/docs/tasks/job/fine-parallel-processing-work-queue/),
|
||||
这种方案可以实现每个 Pod 执行多个工作项。
|
||||
|
||||
示例中,我们使用 `amqp-consume` 从消息队列读取消息并执行我们真正的程序。这样的好处是你不需要修改你的程序使其知道队列的存在。要了解怎样使用客户端库和工作队列通信,请参考[不同的示例](/docs/tasks/job/fine-parallel-processing-work-queue/)。
|
||||
示例中,我们使用 `amqp-consume` 从消息队列读取消息并执行我们真正的程序。
|
||||
这样的好处是你不需要修改你的程序使其知道队列的存在。
|
||||
要了解怎样使用客户端库和工作队列通信,请参考
|
||||
[不同的示例](/zh/docs/tasks/job/fine-parallel-processing-work-queue/)。
|
||||
|
||||
<!--
|
||||
## Caveats
|
||||
|
@ -491,15 +489,14 @@ exits with success, or if the node crashes before the kubelet is able to post th
|
|||
back to the api-server, then the Job will not appear to be complete, even though all items
|
||||
in the queue have been processed.
|
||||
-->
|
||||
|
||||
## 友情提醒
|
||||
|
||||
如果设置的完成数量小于队列中的消息数量,会导致一部分消息项不会被执行。
|
||||
|
||||
如果设置的完成数量大于队列中的消息数量,当队列中所有的消息都处理完成后,Job 也会显示为未完成。Job 将创建 Pod 并阻塞等待消息输入。
|
||||
如果设置的完成数量大于队列中的消息数量,当队列中所有的消息都处理完成后,
|
||||
Job 也会显示为未完成。Job 将创建 Pod 并阻塞等待消息输入。
|
||||
|
||||
当发生下面两种情况时,即使队列中所有的消息都处理完了,Job 也不会显示为完成状态:
|
||||
* 在 amqp-consume 命令拿到消息和容器成功退出之间的时间段内,执行杀死容器操作;
|
||||
* 在 kubelet 向 api-server 传回 Pod 成功运行之前,发生节点崩溃。
|
||||
|
||||
|
||||
|
|
|
@ -1,16 +1,14 @@
|
|||
---
|
||||
cn-approvers:
|
||||
- linyouchong
|
||||
title: 使用工作队列进行精细的并行处理
|
||||
content_type: task
|
||||
min-kubernetes-server-version: v1.8
|
||||
weight: 40
|
||||
---
|
||||
<!--
|
||||
---
|
||||
title: Fine Parallel Processing Using a Work Queue
|
||||
content_type: task
|
||||
weight: 40
|
||||
---
|
||||
min-kubernetes-server-version: v1.8
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -24,14 +22,12 @@ worker processes in a given pod.
|
|||
<!--
|
||||
In this example, as each pod is created, it picks up one unit of work
|
||||
from a task queue, processes it, and repeats until the end of the queue is reached.
|
||||
|
||||
Here is an overview of the steps in this example:
|
||||
-->
|
||||
在这个例子中,每个 Pod 被创建时,会从任务队列中取走一个工作单元,处理它,然后重复,直到到达队列尾部。
|
||||
|
||||
|
||||
<!--
|
||||
Here is an overview of the steps in this example:
|
||||
-->
|
||||
下面是这个示例的步骤概述
|
||||
下面是这个示例的步骤概述:
|
||||
|
||||
<!--
|
||||
1. **Start a storage service to hold the work queue.** In this example, we use Redis to store
|
||||
|
@ -41,77 +37,55 @@ Here is an overview of the steps in this example:
|
|||
as Redis once and reuse it for the work queues of many jobs, and other things.
|
||||
-->
|
||||
|
||||
1. **启动存储服务用于保存工作队列。** 在这个例子中,我们使用 Redis 来存储工作项。在上一个例子中,我们使用了 RabbitMQ。在这个例子中,由于 AMQP 不能为客户端提供一个良好的方法来检测一个有限长度的工作队列是否为空,我们使用了 Redis 和一个自定义的工作队列客户端库。在实践中,您可能会设置一个类似于 Redis 的存储库,并将其同时用于多项任务或其他事务的工作队列。
|
||||
1. **启动存储服务用于保存工作队列。** 在这个例子中,我们使用 Redis 来存储工作项。
|
||||
在上一个例子中,我们使用了 RabbitMQ。
|
||||
在这个例子中,由于 AMQP 不能为客户端提供一个良好的方法来检测一个有限长度的工作队列是否为空,
|
||||
我们使用了 Redis 和一个自定义的工作队列客户端库。
|
||||
在实践中,你可能会设置一个类似于 Redis 的存储库,并将其同时用于多项任务或其他事务的工作队列。
|
||||
|
||||
<!--
|
||||
1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In
|
||||
this example, a message is just an integer that we will do a lengthy computation on.
|
||||
-->
|
||||
|
||||
2. **创建一个队列,然后向其中填充消息。** 每个消息表示一个将要被处理的工作任务。在这个例子中,消息只是一个我们将用于进行长度计算的整数。
|
||||
2. **创建一个队列,然后向其中填充消息。** 每个消息表示一个将要被处理的工作任务。
|
||||
在这个例子中,消息只是一个我们将用于进行长度计算的整数。
|
||||
|
||||
<!--
|
||||
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
|
||||
one task from the message queue, processes it, and repeats until the end of the queue is reached.
|
||||
-->
|
||||
|
||||
3. **启动一个 Job 对队列中的任务进行处理**。这个 Job 启动了若干个 Pod 。每个 Pod 从消息队列中取出一个工作任务,处理它,然后重复,直到到达队列的尾部。
|
||||
|
||||
|
||||
|
||||
{{< toc >}}
|
||||
3. **启动一个 Job 对队列中的任务进行处理**。这个 Job 启动了若干个 Pod 。
|
||||
每个 Pod 从消息队列中取出一个工作任务,处理它,然后重复,直到到达队列的尾部。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
Be familiar with the basic,
|
||||
non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/).
|
||||
non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/).
|
||||
-->
|
||||
熟秋基础知识,非并行方式运行 [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/)。
|
||||
|
||||
|
||||
熟悉基本的、非并行的 [Job](/zh/docs/concepts/workloads/controllers/job/)。
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
## Starting Redis
|
||||
-->
|
||||
## 启动 Redis
|
||||
|
||||
<!--
|
||||
For this example, for simplicity, we will start a single instance of Redis.
|
||||
See the [Redis Example](https://github.com/kubernetes/examples/tree/master/guestbook) for an example
|
||||
of deploying Redis scalably and redundantly.
|
||||
-->
|
||||
## 启动 Redis
|
||||
|
||||
对于这个例子,为了简单起见,我们将启动一个单实例的 Redis。
|
||||
了解如何部署一个可伸缩、高可用的 Redis 例子,请查看 [Redis 样例](https://github.com/kubernetes/examples/tree/master/guestbook)
|
||||
了解如何部署一个可伸缩、高可用的 Redis 例子,请查看
|
||||
[Redis 示例](https://github.com/kubernetes/examples/tree/master/guestbook)。
|
||||
|
||||
<!--
|
||||
If you are working from the website source tree, you can go to the following
|
||||
directory and start a temporary Pod running Redis and a service so we can find it.
|
||||
You could also download the following files directly:
|
||||
-->
|
||||
如果你在使用本文档库的源代码目录,可以进入如下目录,然后启动一个临时的 Pod 来运行 Redis,以及一个临时的 Service 以便我们能够找到这个 Pod:
|
||||
|
||||
```shell
|
||||
$ cd content/en/examples/application/job/redis
|
||||
$ kubectl create -f ./redis-pod.yaml
|
||||
pod/redis-master created
|
||||
$ kubectl create -f ./redis-service.yaml
|
||||
service/redis created
|
||||
```
|
||||
|
||||
<!--
|
||||
If you're not working from the source tree, you could also download the following
|
||||
files directly:
|
||||
-->
|
||||
如果您没有使用本文档库的源代码目录,您可以直接下载如下文件:
|
||||
你可以直接下载如下文件:
|
||||
|
||||
- [`redis-pod.yaml`](/examples/application/job/redis/redis-pod.yaml)
|
||||
- [`redis-service.yaml`](/examples/application/job/redis/redis-service.yaml)
|
||||
|
@ -122,22 +96,22 @@ files directly:
|
|||
|
||||
<!--
|
||||
## Filling the Queue with tasks
|
||||
|
||||
Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be
|
||||
printed.
|
||||
|
||||
Start a temporary interactive pod for running the Redis CLI.
|
||||
-->
|
||||
## 使用任务填充队列
|
||||
|
||||
<!--
|
||||
Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be
|
||||
printed.
|
||||
-->
|
||||
现在,让我们往队列里添加一些“任务”。在这个例子中,我们的任务只是一些将被打印出来的字符串。
|
||||
|
||||
<!--
|
||||
Start a temporary interactive pod for running the Redis CLI.
|
||||
-->
|
||||
启动一个临时的可交互的 pod 用于运行 Redis 命令行界面。
|
||||
|
||||
```shell
|
||||
$ kubectl run -i --tty temp --image redis --command "/bin/sh"
|
||||
kubectl run -i --tty temp --image redis --command "/bin/sh"
|
||||
```
|
||||
```
|
||||
Waiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false
|
||||
Hit enter for command prompt
|
||||
```
|
||||
|
@ -188,29 +162,26 @@ So, the list with key `job2` will be our work queue.
|
|||
Note: if you do not have Kube DNS setup correctly, you may need to change
|
||||
the first step of the above block to `redis-cli -h $REDIS_SERVICE_HOST`.
|
||||
-->
|
||||
注意:如果您还没有正确地配置 Kube DNS,您可能需要将上面的第一步改为 `redis-cli -h $REDIS_SERVICE_HOST`。
|
||||
|
||||
注意:如果你还没有正确地配置 Kube DNS,你可能需要将上面的第一步改为
|
||||
`redis-cli -h $REDIS_SERVICE_HOST`。
|
||||
|
||||
<!--
|
||||
## Create an Image
|
||||
-->
|
||||
创建镜像
|
||||
|
||||
<!--
|
||||
Now we are ready to create an image that we will run.
|
||||
-->
|
||||
现在我们已经准备好创建一个我们要运行的镜像
|
||||
|
||||
<!--
|
||||
We will use a python worker program with a redis client to read
|
||||
the messages from the message queue.
|
||||
-->
|
||||
我们会使用一个带有 redis 客户端的 python 工作程序从消息队列中读出消息。
|
||||
|
||||
<!--
|
||||
A simple Redis work queue client library is provided,
|
||||
called rediswq.py ([Download](/examples/application/job/redis/rediswq.py)).
|
||||
-->
|
||||
## 创建镜像
|
||||
|
||||
现在我们已经准备好创建要运行的镜像了。
|
||||
|
||||
我们会使用一个带有 redis 客户端的 python 工作程序从消息队列中读出消息。
|
||||
|
||||
这里提供了一个简单的 Redis 工作队列客户端库,叫 rediswq.py ([下载](/examples/application/job/redis/rediswq.py))。
|
||||
|
||||
<!--
|
||||
|
@ -222,12 +193,12 @@ Job 中每个 Pod 内的 “工作程序” 使用工作队列客户端库获取
|
|||
{{< codenew language="python" file="application/job/redis/worker.py" >}}
|
||||
|
||||
<!--
|
||||
If you are working from the source tree,
|
||||
change directory to the `content/en/examples/application/job/redis/` directory.
|
||||
Otherwise, download [`worker.py`](/examples/application/job/redis/worker.py), [`rediswq.py`](/examples/application/job/redis/rediswq.py), and [`Dockerfile`](/examples/application/job/redis/Dockerfile)
|
||||
You could download [`worker.py`](/examples/application/job/redis/worker.py), [`rediswq.py`](/examples/application/job/redis/rediswq.py), and [`Dockerfile`](/examples/application/job/redis/Dockerfile)
|
||||
using above links. Then build the image:
|
||||
-->
|
||||
如果您在使用本文档库的源代码目录,请将当前目录切换到 `content/en/examples/application/job/redis/`。否则,请点击链接下载 [`worker.py`](/examples/application/job/redis/worker.py)、 [`rediswq.py`](/examples/application/job/redis/rediswq.py) 和 [`Dockerfile`](/examples/application/job/redis/Dockerfile)。然后构建镜像:
|
||||
你也可以下载 [`worker.py`](/examples/application/job/redis/worker.py)、
|
||||
[`rediswq.py`](/examples/application/job/redis/rediswq.py) 和
|
||||
[`Dockerfile`](/examples/application/job/redis/Dockerfile)。然后构建镜像:
|
||||
|
||||
```shell
|
||||
docker build -t job-wq-2 .
|
||||
|
@ -235,15 +206,15 @@ docker build -t job-wq-2 .
|
|||
|
||||
<!--
|
||||
### Push the image
|
||||
-->
|
||||
### Push 镜像
|
||||
|
||||
<!--
|
||||
For the [Docker Hub](https://hub.docker.com/), tag your app image with
|
||||
your username and push to the Hub with the below commands. Replace
|
||||
`<username>` with your Hub username.
|
||||
-->
|
||||
对于 [Docker Hub](https://hub.docker.com/),请先用您的用户名给镜像打上标签,然后使用下面的命令 push 您的镜像到仓库。请将 `<username>` 替换为您自己的用户名。
|
||||
### Push 镜像
|
||||
|
||||
对于 [Docker Hub](https://hub.docker.com/),请先用你的用户名给镜像打上标签,
|
||||
然后使用下面的命令 push 你的镜像到仓库。请将 `<username>` 替换为你自己的用户名。
|
||||
|
||||
```shell
|
||||
docker tag job-wq-2 <username>/job-wq-2
|
||||
|
@ -254,7 +225,8 @@ docker push <username>/job-wq-2
|
|||
You need to push to a public repository or [configure your cluster to be able to access
|
||||
your private repository](/docs/concepts/containers/images/).
|
||||
-->
|
||||
您需要将镜像 push 到一个公共仓库或者 [配置集群访问您的私有仓库](/docs/concepts/containers/images/)。
|
||||
你需要将镜像 push 到一个公共仓库或者
|
||||
[配置集群访问你的私有仓库](/zh/docs/concepts/containers/images/)。
|
||||
|
||||
<!--
|
||||
If you are using [Google Container
|
||||
|
@ -262,8 +234,8 @@ Registry](https://cloud.google.com/tools/container-registry/), tag
|
|||
your app image with your project ID, and push to GCR. Replace
|
||||
`<project>` with your project ID.
|
||||
-->
|
||||
如果您使用的是 [Google Container
|
||||
Registry](https://cloud.google.com/tools/container-registry/),请先用您的 project ID 给您的镜像打上标签,然后 push 到 GCR 。请将 `<project>` 替换为您自己的 project ID
|
||||
如果你使用的是 [Google Container Registry](https://cloud.google.com/tools/container-registry/),
|
||||
请先用你的 project ID 给你的镜像打上标签,然后 push 到 GCR。请将 `<project>` 替换为你自己的 project ID:
|
||||
|
||||
```shell
|
||||
docker tag job-wq-2 gcr.io/<project>/job-wq-2
|
||||
|
@ -272,12 +244,11 @@ gcloud docker -- push gcr.io/<project>/job-wq-2
|
|||
|
||||
<!--
|
||||
## Defining a Job
|
||||
|
||||
Here is the job definition:
|
||||
-->
|
||||
## 定义一个 Job
|
||||
|
||||
<!--
|
||||
Here is the job definition:
|
||||
-->
|
||||
这是 job 定义:
|
||||
|
||||
{{< codenew file="application/job/redis/job.yaml" >}}
|
||||
|
@ -286,7 +257,7 @@ Here is the job definition:
|
|||
Be sure to edit the job template to
|
||||
change `gcr.io/myproject` to your own path.
|
||||
-->
|
||||
请确保将 job 模板中的 `gcr.io/myproject` 更改为您自己的路径。
|
||||
请确保将 job 模板中的 `gcr.io/myproject` 更改为你自己的路径。
|
||||
|
||||
<!--
|
||||
In this example, each pod works on several items from the queue and then exits when there are no more items.
|
||||
|
@ -297,21 +268,25 @@ exits with success, the controller knows the work is done, and the Pods will exi
|
|||
So, we set the completion count of the Job to 1. The job controller will wait for the other pods to complete
|
||||
too.
|
||||
-->
|
||||
在这个例子中,每个 pod 处理了队列中的多个项目,直到队列中没有项目时便退出。因为是由工作程序自行检测工作队列是否为空,并且 Job 控制器不知道工作队列的存在,所以依赖于工作程序在完成工作时发出信号。工作程序以成功退出的形式发出信号表示工作队列已经为空。所以,只要有任意一个工作程序成功退出,控制器就知道工作已经完成了,所有的 Pod 将很快会退出。因此,我们将 Job 的 completion count 设置为 1 。尽管如此,Job 控制器还是会等待其它 Pod 完成。
|
||||
|
||||
在这个例子中,每个 pod 处理了队列中的多个项目,直到队列中没有项目时便退出。
|
||||
因为是由工作程序自行检测工作队列是否为空,并且 Job 控制器不知道工作队列的存在,
|
||||
所以依赖于工作程序在完成工作时发出信号。
|
||||
工作程序以成功退出的形式发出信号表示工作队列已经为空。
|
||||
所以,只要有任意一个工作程序成功退出,控制器就知道工作已经完成了,所有的 Pod 将很快会退出。
|
||||
因此,我们将 Job 的完成计数(Completion Count)设置为 1 。
|
||||
尽管如此,Job 控制器还是会等待其它 Pod 完成。
|
||||
|
||||
<!--
|
||||
## Running the Job
|
||||
|
||||
So, now run the Job:
|
||||
-->
|
||||
## 运行 Job
|
||||
|
||||
<!--
|
||||
So, now run the Job:
|
||||
-->
|
||||
现在运行这个 Job:
|
||||
|
||||
```shell
|
||||
kubectl create -f ./job.yaml
|
||||
kubectl apply -f ./job.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -320,7 +295,10 @@ Now wait a bit, then check on the job.
|
|||
稍等片刻,然后检查这个 Job。
|
||||
|
||||
```shell
|
||||
$ kubectl describe jobs/job-wq-2
|
||||
kubectl describe jobs/job-wq-2
|
||||
```
|
||||
|
||||
```
|
||||
Name: job-wq-2
|
||||
Namespace: default
|
||||
Selector: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
|
||||
|
@ -345,9 +323,15 @@ Events:
|
|||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8
|
||||
```
|
||||
|
||||
查看日志:
|
||||
|
||||
$ kubectl logs pods/job-wq-2-7r7b2
|
||||
```shell
|
||||
kubectl logs pods/job-wq-2-7r7b2
|
||||
```
|
||||
|
||||
```
|
||||
Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f
|
||||
Initial queue state: empty=False
|
||||
Working on banana
|
||||
|
@ -358,22 +342,21 @@ Working on lemon
|
|||
<!--
|
||||
As you can see, one of our pods worked on several work units.
|
||||
-->
|
||||
您可以看到,其中的一个 pod 处理了若干个工作单元。
|
||||
|
||||
|
||||
你可以看到,其中的一个 pod 处理了若干个工作单元。
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
<!--
|
||||
## Alternatives
|
||||
-->
|
||||
## 其它
|
||||
## 替代方案
|
||||
|
||||
<!--
|
||||
If running a queue service or modifying your containers to use a work queue is inconvenient, you may
|
||||
want to consider one of the other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns).
|
||||
-->
|
||||
如果您不方便运行一个队列服务或者修改您的容器用于运行一个工作队列,您可以考虑其它的 [job 模式](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns)。
|
||||
如果你不方便运行一个队列服务或者修改你的容器用于运行一个工作队列,你可以考虑其它的
|
||||
[Job 模式](/zh/docs/concepts/workloads/controllers/job/#job-patterns)。
|
||||
|
||||
<!--
|
||||
If you have a continuous stream of background processing work to run, then
|
||||
|
@ -381,6 +364,7 @@ consider running your background workers with a `replicationController` instead,
|
|||
and consider running a background processing library such as
|
||||
[https://github.com/resque/resque](https://github.com/resque/resque).
|
||||
-->
|
||||
如果您有连续的后台处理业务,那么可以考虑使用 `replicationController` 来运行您的后台业务,和运行一个类似 [https://github.com/resque/resque](https://github.com/resque/resque) 的后台处理库。
|
||||
|
||||
如果你有连续的后台处理业务,那么可以考虑使用 `replicationController` 来运行你的后台业务,
|
||||
并考虑运行一个类似 [https://github.com/resque/resque](https://github.com/resque/resque)
|
||||
的后台处理库。
|
||||
|
||||
|
|
|
@ -1,76 +1,69 @@
|
|||
---
|
||||
reviewers:
|
||||
- vishh
|
||||
content_type: concept
|
||||
title: 调度 GPUs
|
||||
description: 配置和调度 GPU 成一类资源以供集群中节点使用
|
||||
---
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- vishh
|
||||
content_type: concept
|
||||
title: Schedule GPUs
|
||||
---
|
||||
--->
|
||||
description: Configure and schedule GPUs for use as a resource by nodes in a cluster.
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state state="beta" for_k8s_version="v1.10" >}}
|
||||
|
||||
<!--
|
||||
Kubernetes includes **experimental** support for managing AMD and NVIDIA GPUs spread
|
||||
across nodes. The support for NVIDIA GPUs was added in v1.6 and has gone through
|
||||
multiple backwards incompatible iterations. The support for AMD GPUs was added in
|
||||
v1.9 via [device plugin](#deploying-amd-gpu-device-plugin).
|
||||
Kubernetes includes **experimental** support for managing AMD and NVIDIA GPUs
|
||||
(graphical processing units) across several nodes.
|
||||
|
||||
This page describes how users can consume GPUs across different Kubernetes versions
|
||||
and the current limitations.
|
||||
--->
|
||||
Kubernetes 支持对节点上的 AMD 和 NVIDA GPU 进行管理,目前处于**实验**状态。对 NVIDIA GPU 的支持在 v1.6 中加入,已经经历了多次不向后兼容的迭代。而对 AMD GPU 的支持则在 v1.9 中通过 [设备插件](#deploying-amd-gpu-device-plugin) 加入。
|
||||
|
||||
这个页面介绍了用户如何在不同的 Kubernetes 版本中使用 GPU,以及当前存在的一些限制。
|
||||
|
||||
|
||||
-->
|
||||
Kubernetes 支持对节点上的 AMD 和 NVIDIA GPU(图形处理单元)进行管理,目前处于**实验**状态。
|
||||
|
||||
本页介绍用户如何在不同的 Kubernetes 版本中使用 GPU,以及当前存在的一些限制。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
## v1.8 onwards
|
||||
## Using device plugins
|
||||
|
||||
**From 1.8 onwards, the recommended way to consume GPUs is to use [device
|
||||
plugins](/docs/concepts/cluster-administration/device-plugins).**
|
||||
Kubernetes implements {{< glossary_tooltip text="Device Plugins" term_id="device-plugin" >}}
|
||||
to let Pods access specialized hardware features such as GPUs.
|
||||
|
||||
To enable GPU support through device plugins before 1.10, the `DevicePlugins`
|
||||
feature gate has to be explicitly set to true across the system:
|
||||
`--feature-gates="DevicePlugins=true"`. This is no longer required starting
|
||||
from 1.10.
|
||||
--->
|
||||
## 从 v1.8 起
|
||||
As an administrator, you have to install GPU drivers from the corresponding
|
||||
hardware vendor on the nodes and run the corresponding device plugin from the
|
||||
GPU vendor:
|
||||
-->
|
||||
## 使用设备插件 {#using-device-plugins}
|
||||
|
||||
**从 1.8 版本开始,我们推荐通过 [设备插件](/docs/concepts/cluster-administration/device-plugins) 的方式来使用 GPU。**
|
||||
Kubernetes 实现了{{< glossary_tooltip text="设备插件(Device Plugins)" term_id="device-plugin" >}}
|
||||
以允许 Pod 访问类似 GPU 这类特殊的硬件功能特性。
|
||||
|
||||
在 1.10 版本之前,为了通过设备插件开启 GPU 的支持,我们需要在系统中将 `DevicePlugins` 这一特性开关显式地设置为 true:`--feature-gates="DevicePlugins=true"`。不过,
|
||||
从 1.10 版本开始,我们就不需要这一步骤了。
|
||||
作为集群管理员,你要在节点上安装来自对应硬件厂商的 GPU 驱动程序,并运行
|
||||
来自 GPU 厂商的对应的设备插件。
|
||||
|
||||
* [AMD](#deploying-amd-gpu-device-plugin)
|
||||
* [NVIDIA](#deploying-nvidia-gpu-device-plugin)
|
||||
|
||||
<!--
|
||||
Then you have to install GPU drivers from the corresponding vendor on the nodes
|
||||
and run the corresponding device plugin from the GPU vendor
|
||||
([AMD](#deploying-amd-gpu-device-plugin), [NVIDIA](#deploying-nvidia-gpu-device-plugin)).
|
||||
When the above conditions are true, Kubernetes will expose `amd.com/gpu` or
|
||||
`nvidia.com/gpu` as a schedulable resource.
|
||||
|
||||
When the above conditions are true, Kubernetes will expose `nvidia.com/gpu` or
|
||||
`amd.com/gpu` as a schedulable resource.
|
||||
--->
|
||||
接着你需要在主机节点上安装对应厂商的 GPU 驱动并运行对应厂商的设备插件 ([AMD](#deploying-amd-gpu-device-plugin)、[NVIDIA](#deploying-nvidia-gpu-device-plugin))。
|
||||
|
||||
当上面的条件都满足,Kubernetes 将会暴露 `nvidia.com/gpu` 或 `amd.com/gpu` 来作为
|
||||
一种可调度的资源。
|
||||
|
||||
<!--
|
||||
You can consume these GPUs from your containers by requesting
|
||||
`<vendor>.com/gpu` just like you request `cpu` or `memory`.
|
||||
However, there are some limitations in how you specify the resource requirements
|
||||
when using GPUs:
|
||||
--->
|
||||
你也能通过像请求 `cpu` 或 `memory` 一样请求 `<vendor>.com/gpu` 来在容器中使用 GPU。然而,当你要通过指定资源请求来使用 GPU 时,存在着以下几点限制:
|
||||
-->
|
||||
当以上条件满足时,Kubernetes 将暴露 `amd.com/gpu` 或 `nvidia.com/gpu` 为
|
||||
可调度的资源。
|
||||
|
||||
你可以通过请求 `<vendor>.com/gpu` 资源来使用 GPU 设备,就像你为 CPU
|
||||
和内存所做的那样。
|
||||
不过,使用 GPU 时,在如何指定资源需求这个方面还是有一些限制的:
|
||||
|
||||
<!--
|
||||
- GPUs are only supposed to be specified in the `limits` section, which means:
|
||||
|
@ -79,21 +72,21 @@ when using GPUs:
|
|||
* You can specify GPU in both `limits` and `requests` but these two values
|
||||
must be equal.
|
||||
* You cannot specify GPU `requests` without specifying `limits`.
|
||||
- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
|
||||
- Containers (and Pods) do not share GPUs. There's no overcommitting of GPUs.
|
||||
- Each container can request one or more GPUs. It is not possible to request a
|
||||
fraction of a GPU.
|
||||
-->
|
||||
- GPUs 只能设置在 `limits` 部分,这意味着:
|
||||
* 你可以指定 GPU 的 `limits` 而不指定其 `requests`,Kubernetes 将使用限制
|
||||
值作为默认的请求值;
|
||||
* 你可以同时指定 `limits` 和 `requests`,不过这两个值必须相等。
|
||||
* 你不可以仅指定 `requests` 而不指定 `limits`。
|
||||
- 容器(以及 Pod)之间是不共享 GPU 的。GPU 也不可以过量分配(Overcommitting)。
|
||||
- 每个容器可以请求一个或者多个 GPU,但是用小数值来请求部分 GPU 是不允许的。
|
||||
|
||||
<!--
|
||||
Here's an example:
|
||||
--->
|
||||
- GPU 仅仅支持在 `limits` 部分被指定,这表明:
|
||||
* 你可以仅仅指定 GPU 的 `limits` 字段而不必须指定 `requests` 字段,因为 Kubernetes 会默认使用 limit 字段的值来作为 request 字段的默认值。
|
||||
* 你能同时指定 GPU 的 `limits` 和 `requests` 字段,但这两个值必须相等。
|
||||
* 你不能仅仅指定 GPU 的 `request` 字段而不指定 `limits`。
|
||||
- 容器(以及 pod)并不会共享 GPU,也不存在对 GPU 的过量使用。
|
||||
- 每一个容器能够请求一个或多个 GPU。然而只请求一个 GPU 的一部分是不允许的。
|
||||
|
||||
下面是一个例子:
|
||||
|
||||
-->
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
|
@ -115,8 +108,8 @@ spec:
|
|||
|

The [official AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
has the following requirements:
--->
### 部署 AMD GPU 设备插件
-->

### 部署 AMD GPU 设备插件 {#deploying-amd-gpu-device-plugin}

[官方的 AMD GPU 设备插件](https://github.com/RadeonOpenCompute/k8s-device-plugin)有以下要求:

@ -132,37 +125,37 @@ kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device
# For Kubernetes v1.10
kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/r1.10/k8s-ds-amdgpu-dp.yaml
```
--->
-->
- Kubernetes 节点必须预先安装 AMD GPU 的 Linux 驱动。

如果你的集群已经启动并且满足上述要求的话,可以这样部署 AMD 设备插件:
```
# 针对 Kubernetes v1.9
kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/r1.9/k8s-ds-amdgpu-dp.yaml

# 针对 Kubernetes v1.10
```shell
kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/r1.10/k8s-ds-amdgpu-dp.yaml
```
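
部署完成后,可以检查对应节点是否已经声明了 `amd.com/gpu` 资源
(`<amd-gpu-node>` 为占位符,下面只是一种简单的检查方式):

```shell
kubectl describe node <amd-gpu-node> | grep amd.com/gpu
```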

<!--
Report issues with this device plugin to [RadeonOpenCompute/k8s-device-plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin).
--->
请到 [RadeonOpenCompute/k8s-device-plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin) 报告有关此设备插件的问题。
You can report issues with this third-party device plugin by logging an issue in
[RadeonOpenCompute/k8s-device-plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin).
-->
你可以到 [RadeonOpenCompute/k8s-device-plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
项目报告有关此设备插件的问题。

<!--
### Deploying NVIDIA GPU device plugin

There are currently two device plugin implementations for NVIDIA GPUs:
-->
### 部署 NVIDIA GPU 设备插件 {#deploying-nvidia-gpu-device-plugin}

对于 NVIDIA GPU,目前存在两种设备插件的实现:

<!--
#### Official NVIDIA GPU device plugin

The [official NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin)
has the following requirements:
--->
### 部署 NVIDIA GPU 设备插件

对于 NVIDIA GPU,目前存在两种设备插件的实现:

-->
#### 官方的 NVIDIA GPU 设备插件

[官方的 NVIDIA GPU 设备插件](https://github.com/NVIDIA/k8s-device-plugin)有以下要求:

@ -176,34 +169,18 @@ has the following requirements:

To deploy the NVIDIA device plugin once your cluster is running and the above
requirements are satisfied:
--->
-->
- Kubernetes 的节点必须预先安装了 NVIDIA 驱动
- Kubernetes 的节点必须预先安装 [nvidia-docker 2.0](https://github.com/NVIDIA/nvidia-docker)
- Docker 的[默认运行时](https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes)必须设置为
  nvidia-container-runtime,而不是 runc
- NVIDIA 驱动版本 ~= 361.93
- NVIDIA 驱动版本 ~= 384.81

如果你的集群已经启动并且满足上述要求的话,可以这样部署 NVIDIA 设备插件:

<!--
```shell
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml
```
# For Kubernetes v1.8
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.8/nvidia-device-plugin.yml

# For Kubernetes v1.9
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.9/nvidia-device-plugin.yml
```

Report issues with this device plugin to [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin).
--->
```
# 针对 Kubernetes v1.8
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.8/nvidia-device-plugin.yml

# 针对 Kubernetes v1.9
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.9/nvidia-device-plugin.yml
```

请到 [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin) 报告有关此设备插件的问题。
请到 [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin) 项目报告有关此设备插件的问题。
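
插件运行起来之后,可以用一个简单的测试 Pod 来验证 GPU 是否能够被调度和使用
(镜像与标签仅为示意,实际可用的 CUDA 镜像版本请以 NVIDIA 官方说明为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-container
      image: nvidia/cuda:10.0-base   # 假设的镜像标签
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```

如果一切正常,`kubectl logs gpu-smoke-test` 应当能看到 `nvidia-smi` 列出的 GPU 信息。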

<!--
#### NVIDIA GPU device plugin used by GCE

@ -213,29 +190,15 @@ doesn't require using nvidia-docker and should work with any container runtime
that is compatible with the Kubernetes Container Runtime Interface (CRI). It's tested
on [Container-Optimized OS](https://cloud.google.com/container-optimized-os/)
and has experimental code for Ubuntu from 1.9 onwards.
--->
-->
#### GCE 中使用的 NVIDIA GPU 设备插件

[GCE 使用的 NVIDIA GPU 设备插件](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
并不要求使用 nvidia-docker,并且对于任何实现了 Kubernetes CRI 的容器运行时,都应该能够使用。
这一实现已经在 [Container-Optimized OS](https://cloud.google.com/container-optimized-os/) 上进行了测试,
并且在 1.9 版本之后会有对于 Ubuntu 的实验性代码。

<!--
On your 1.12 cluster, you can use the following commands to install the NVIDIA drivers and device plugin:
你可以使用下面的命令来安装 NVIDIA 驱动以及设备插件:

```
# Install NVIDIA drivers on Container-Optimized OS:
kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/stable/daemonset.yaml

# Install NVIDIA drivers on Ubuntu (experimental):
kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/stable/nvidia-driver-installer/ubuntu/daemonset.yaml

# Install the device plugin:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.12/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
```
--->
在你 1.12 版本的集群上,你能使用下面的命令来安装 NVIDIA 驱动以及设备插件:

```
# 在容器优化的操作系统上安装 NVIDIA 驱动:
# 在 Container-Optimized OS 上安装 NVIDIA 驱动:
kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/stable/daemonset.yaml

# 在 Ubuntu 上安装 NVIDIA 驱动(实验性质):
@ -248,11 +211,12 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/releas
<!--
Report issues with this device plugin and installation method to [GoogleCloudPlatform/container-engine-accelerators](https://github.com/GoogleCloudPlatform/container-engine-accelerators).

Instructions for using NVIDIA GPUs on GKE are
[here](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus)
--->
Google publishes its own [instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus) for using NVIDIA GPUs on GKE.
-->
请到 [GoogleCloudPlatform/container-engine-accelerators](https://github.com/GoogleCloudPlatform/container-engine-accelerators)
报告有关此设备插件以及安装方法的问题。

关于如何在 GKE 上使用 NVIDIA GPU,Google 也提供了自己的[说明](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus)。
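
驱动安装器与设备插件都是以 DaemonSet 方式运行的,可以大致通过下面的方式确认相关 Pod 已经就绪
(Pod 名称前缀取决于各 DaemonSet 的命名,此处仅为一种粗略的检查手段):

```shell
kubectl get pods -n kube-system | grep nvidia
```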

<!--
## Clusters containing different types of GPUs

@ -261,20 +225,15 @@ can use [Node Labels and Node Selectors](/docs/tasks/configure-pod-container/ass
to schedule pods to appropriate nodes.

For example:
--->
## 集群内存在不同类型的 NVIDIA GPU
-->
## 集群内存在不同类型的 GPU

如果集群内部的不同节点上有不同类型的 NVIDIA GPU,那么你可以使用 [节点标签和节点选择器](/docs/tasks/configure-pod-container/assign-pods-nodes/) 来将 pod 调度到合适的节点上。
如果集群内部的不同节点上有不同类型的 NVIDIA GPU,那么你可以使用
[节点标签和节点选择器](/zh/docs/tasks/configure-pod-container/assign-pods-nodes/)
来将 Pod 调度到合适的节点上。

例如:

<!--
```shell
# Label your nodes with the accelerator type they have.
kubectl label nodes <node-with-k80> accelerator=nvidia-tesla-k80
kubectl label nodes <node-with-p100> accelerator=nvidia-tesla-p100
```
--->
```shell
# 为你的节点加上它们所拥有的加速器类型的标签
kubectl label nodes <node-with-k80> accelerator=nvidia-tesla-k80
@ -282,9 +241,22 @@ kubectl label nodes <node-with-p100> accelerator=nvidia-tesla-p100
```
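
打好标签之后,就可以在 Pod 中结合 `nodeSelector` 和 GPU 资源请求,把工作负载调度到带有特定加速器的节点上。
下面是一个示意(Pod 名称与镜像均为假设值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-p100
spec:
  containers:
    - name: gpu-container
      # 镜像仅为示意,请替换为实际的 GPU 工作负载镜像
      image: registry.example.com/gpu-app:latest
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100   # 对应上面为节点打的标签
```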

<!--
For AMD GPUs, you can deploy [Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller), which automatically labels your nodes with GPU properties. Currently supported properties:
--->
对于 AMD GPUs,您可以部署 [节点标签器](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller),它会自动给节点打上 GPU 属性标签。目前支持的属性:
## Automatic node labelling {#node-labeller}
-->
## 自动节点标签 {#node-labeller}

<!--
If you're using AMD GPU devices, you can deploy
[Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller).
Node Labeller is a {{< glossary_tooltip text="controller" term_id="controller" >}} that automatically
labels your nodes with GPU properties.

At the moment, that controller can add labels for:
-->
如果你在使用 AMD GPU,你可以部署
[Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller),
它是一个 {{< glossary_tooltip text="控制器" term_id="controller" >}},
会自动给节点打上 GPU 属性标签。目前支持的属性:

<!--
* Device ID (-device-id)
@ -300,7 +272,6 @@ For AMD GPUs, you can deploy [Node Labeller](https://github.com/RadeonOpenComput
  * CZ - Carrizo
  * AI - Arctic Islands
  * RV - Raven

Example result:
--->
* 设备 ID (-device-id)
@ -319,7 +290,11 @@ Example result:

示例:

$ kubectl describe node cluster-node-23
```shell
kubectl describe node cluster-node-23
```

```
Name:   cluster-node-23
Roles:  <none>
Labels: beta.amd.com/gpu.cu-count.64=1
@ -333,11 +308,12 @@ Example result:
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
             node.alpha.kubernetes.io/ttl: 0
......
```
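
有了这些属性标签之后,也可以直接按标签筛选节点,例如(标签键值取自上面的示例输出):

```shell
kubectl get nodes -l beta.amd.com/gpu.cu-count.64=1
```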

<!--
Specify the GPU type in the pod spec:
--->
在 pod 的 spec 字段中指定 GPU 的类型:
With the Node Labeller in use, you can specify the GPU type in the Pod spec:
-->
使用了 Node Labeller 的时候,你可以在 Pod 的规约中指定 GPU 的类型:

```yaml
apiVersion: v1
@ -360,5 +336,6 @@ spec:
<!--
This will ensure that the pod will be scheduled to a node that has the GPU type
you specified.
--->
这能够保证 pod 能够被调度到你所指定类型的 GPU 的节点上去。
-->
这能够保证 Pod 能够被调度到你所指定类型的 GPU 的节点上去。

@ -1,45 +1,89 @@
---
approvers:
- jcbsmpsn
- mikedanese
title: 证书轮换
title: 为 kubelet 配置证书轮换
content_type: task
---
<!--
reviewers:
- jcbsmpsn
- mikedanese
title: Configure Certificate Rotation for the Kubelet
content_type: task
-->

<!-- overview -->
<!--
This page shows how to enable and configure certificate rotation for the kubelet.
-->
本文展示如何在 kubelet 中启用并配置证书轮换。

{{< feature-state for_k8s_version="v1.8" state="beta" >}}

## {{% heading "prerequisites" %}}

<!--
* Kubernetes version 1.8.0 or later is required
-->
* 要求 Kubernetes 1.8.0 或更高的版本

* Kubelet 证书轮换在 1.8.0 版本中处于 beta 阶段,这意味着该特性可能在没有通知的情况下发生变化。

<!-- steps -->

<!--
## Overview

The kubelet uses certificates for authenticating to the Kubernetes API. By
default, these certificates are issued with one year expiration so that they do
not need to be renewed too frequently.
-->
## 概述

kubelet 使用证书向 Kubernetes API 执行身份认证。
默认情况下,这些证书的签发期限为一年,所以不需要太频繁地进行更新。

Kubernetes 1.8 版本中包含 beta 特性 [kubelet 证书轮换](/docs/tasks/administer-cluster/certificate-rotation/),
<!--
Kubernetes 1.8 contains [kubelet certificate
rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/), a beta feature
that will automatically generate a new key and request a new certificate from
the Kubernetes API as the current certificate approaches expiration. Once the
new certificate is available, it will be used for authenticating connections to
the Kubernetes API.
-->
Kubernetes 1.8 版本中包含 beta 特性
[kubelet 证书轮换](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/),
在当前证书即将过期时,该特性会自动生成新的密钥,并从 Kubernetes API 申请新的证书。
一旦新的证书可用,它将被用于与 Kubernetes API 间的连接认证。

<!--
## Enabling client certificate rotation

The `kubelet` process accepts an argument `--rotate-certificates` that controls
if the kubelet will automatically request a new certificate as the expiration of
the certificate currently in use approaches. Since certificate rotation is a
beta feature, the feature flag must also be enabled with
`--feature-gates=RotateKubeletClientCertificate=true`.
-->
## 启用客户端证书轮换

`kubelet` 进程接收 `--rotate-certificates` 参数,该参数决定 kubelet 在当前使用的证书即将到期时,
是否会自动申请新的证书。由于证书轮换是 beta 特性,必须通过参数 `--feature-gates=RotateKubeletClientCertificate=true` 进行启用。

是否会自动申请新的证书。由于证书轮换是 beta 特性,必须通过参数
`--feature-gates=RotateKubeletClientCertificate=true` 进行启用。
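
下面是一个启用了证书轮换的 kubelet 启动参数示意
(证书目录、kubeconfig 路径等都取决于你的部署方式,此处仅为假设值):

```shell
kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --cert-dir=/var/lib/kubelet/pki \
  --rotate-certificates \
  --feature-gates=RotateKubeletClientCertificate=true
```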

<!--
The `kube-controller-manager` process accepts an argument
`--experimental-cluster-signing-duration` that controls how long certificates
will be issued for.
-->
`kube-controller-manager` 进程接收
`--experimental-cluster-signing-duration` 参数,该参数控制证书签发的有效期限。
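
例如,让签发的证书有效期为一年(取值仅为示意,请按照你的安全策略调整):

```shell
kube-controller-manager --experimental-cluster-signing-duration=8760h0m0s
```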

<!--
## Understanding the certificate rotation configuration

When a kubelet starts up, if it is configured to bootstrap (using the
`--bootstrap-kubeconfig` flag), it will use its initial certificate to connect
to the Kubernetes API and issue a certificate signing request. You can view the
status of certificate signing requests using:
-->
## 理解证书轮换配置

当 kubelet 启动时,如被配置为自举(使用 `--bootstrap-kubeconfig` 参数),kubelet 会使用其初始证书连接到
@ -49,18 +93,38 @@ Kubernetes API,并发送证书签名的请求。可以通过以下方式查
kubectl get csr
```
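
输出类似于下面这样(名称、时长与请求者均为示例值,实际内容因集群而异):

```
NAME        AGE   REQUESTOR              CONDITION
csr-9wvgt   19s   system:node:worker-1   Pending
csr-n2b7k   2m    system:node:worker-2   Approved,Issued
```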

<!--
Initially a certificate signing request from the kubelet on a node will have a
status of `Pending`. If the certificate signing request meets specific
criteria, it will be auto approved by the controller manager, then it will have
a status of `Approved`. Next, the controller manager will sign a certificate,
issued for the duration specified by the
`--experimental-cluster-signing-duration` parameter, and the signed certificate
will be attached to the certificate signing request.
-->
最初,来自节点上 kubelet 的证书签名请求处于 `Pending` 状态。如果证书签名请求满足特定条件,
控制器管理器会自动批准,此时请求会处于 `Approved` 状态。接下来,控制器管理器会签署证书,
证书的有效期限由 `--experimental-cluster-signing-duration` 参数指定,签署的证书会被附加到证书签名请求中。
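
如果你的环境中没有启用自动批准,也可以手动批准某个请求
(CSR 名称以 `kubectl get csr` 的实际输出为准,下面的名称仅为示意):

```shell
kubectl certificate approve csr-9wvgt
```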

<!--
The kubelet will retrieve the signed certificate from the Kubernetes API and
write that to disk, in the location specified by `--cert-dir`. Then the kubelet
will use the new certificate to connect to the Kubernetes API.
-->
Kubelet 会从 Kubernetes API 取回签署的证书,并将其写入磁盘,存储位置通过 `--cert-dir` 参数指定。
然后 kubelet 会使用新的证书连接到 Kubernetes API。
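
在使用默认证书目录的节点上,可以大致这样查看轮换生成的客户端证书
(目录与文件名取决于 `--cert-dir` 的设置与 kubelet 版本,下面仅为示意):

```shell
ls /var/lib/kubelet/pki/
# kubelet-client-2021-01-01-00-00-00.pem  kubelet-client-current.pem  ...
```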

<!--
As the expiration of the signed certificate approaches, the kubelet will
automatically issue a new certificate signing request, using the Kubernetes
API. Again, the controller manager will automatically approve the certificate
request and attach a signed certificate to the certificate signing request. The
kubelet will retrieve the new signed certificate from the Kubernetes API and
write that to disk. Then it will update the connections it has to the
Kubernetes API to reconnect using the new certificate.
-->
当签署的证书即将到期时,kubelet 会使用 Kubernetes API 发起新的证书签名请求。
同样地,控制器管理器会自动批准证书请求,并将签署的证书附加到证书签名请求中。
kubelet 会从 Kubernetes API 取回签署的证书,并将其写入磁盘。
然后它会更新与 Kubernetes API 的连接,使用新的证书重新连接到 Kubernetes API。