[zh] Tidy up and fix links in tasks section (5/10)
parent bdd04151a8
commit 68abcb9638
---
reviewers:
- mtaufen
- dawnchen
title: 通过配置文件设置 Kubelet 参数
content_type: task
---
<!--
---
reviewers:
- mtaufen
- dawnchen
title: Set Kubelet parameters via a config file
content_type: task
---
-->

<!-- overview -->

{{< feature-state for_k8s_version="v1.10" state="beta" >}}

<!--
A subset of the Kubelet's configuration parameters may be
set via an on-disk config file, as a substitute for command-line flags.
This functionality is considered beta in v1.10.
-->
通过保存在硬盘的配置文件设置 kubelet 的部分配置参数,这可以作为命令行参数的替代。
此功能在 v1.10 中为 beta 版。

<!--
Providing parameters via a config file is the recommended approach because
it simplifies node deployment and configuration management.
-->
建议通过配置文件的方式提供参数,因为这样可以简化节点部署和配置管理。

## {{% heading "prerequisites" %}}

<!--
- A v1.10 or higher Kubelet binary must be installed for beta functionality.
-->
- 需要安装 1.10 或更高版本的 kubelet 可执行文件,才能使用此 beta 功能。

<!-- steps -->

<!--
## Create the config file

The subset of the Kubelet's configuration that can be configured via a file
is defined by the `KubeletConfiguration` struct
[here (v1beta1)](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-->
## 创建配置文件

`KubeletConfiguration` 结构体定义了可以通过文件配置的 kubelet 配置子集,
该结构体可以在[这里(v1beta1)](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)找到。

<!--
The configuration file must be a JSON or YAML representation of the parameters
in this struct. Make sure the Kubelet has read permissions on the file.

Here is an example of what this file might look like:
-->
配置文件必须是这个结构体中参数的 JSON 或 YAML 表现形式。
确保 kubelet 可以读取该文件。

下面是一个 kubelet 配置文件示例:

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  memory.available: "200Mi"
```
<!--
In the example, the Kubelet is configured to evict Pods when available memory drops below 200Mi.
All other Kubelet configuration values are left at their built-in defaults, unless overridden
by flags. Command line flags which target the same value as a config file will override that value.

For a trick to generate a configuration file from a live node, see
[Reconfigure a Node's Kubelet in a Live Cluster](/docs/tasks/administer-cluster/reconfigure-kubelet).
-->
在这个示例中,当可用内存低于 200Mi 时,kubelet 将会开始驱逐 Pods。
没有声明的其余配置项都将使用默认值,除非使用命令行参数来重载。
命令行中的参数将会覆盖配置文件中的对应值。

作为一个小技巧,你可以从活动节点生成配置文件,相关方法请查看
[重新配置活动集群节点的 kubelet](/zh/docs/tasks/administer-cluster/reconfigure-kubelet)。

<!--
## Start a Kubelet process configured via the config file

Start the Kubelet with the `--config` flag set to the path of the Kubelet's config file.
The Kubelet will then load its config from this file.
-->
## 启动通过配置文件配置的 kubelet 进程

启动 kubelet 时,将 `--config` 参数设置为 kubelet 配置文件的路径,kubelet 将从此文件加载其配置。
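下面用一个最小的 shell 草图演示这一流程(其中由 `mktemp` 生成的临时目录路径只是演示用的假设值,真实节点上通常会使用类似 `/var/lib/kubelet/config.yaml` 的固定路径):

```shell
# 把上文示例中的 KubeletConfiguration 写入一个配置文件
# (目录由 mktemp 生成,仅作演示)
CONFIG_DIR="$(mktemp -d)"
cat <<EOF > "${CONFIG_DIR}/kubelet-config.yaml"
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  memory.available: "200Mi"
EOF

# 在真实节点上,用 --config 指向该文件启动 kubelet
# (此处只展示命令,不在脚本中执行):
# kubelet --config="${CONFIG_DIR}/kubelet-config.yaml"
echo "config written to ${CONFIG_DIR}/kubelet-config.yaml"
```

kubelet 随后会从该文件加载配置;未在文件中声明的字段仍取 `kubelet.config.k8s.io/v1beta1` 版本的默认值。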
<!--
Note that command line flags which target the same value as a config file will override that value.
This helps ensure backwards compatibility with the command-line API.
-->
请注意,命令行参数与配置文件有相同的值时,就会覆盖配置文件中的该值。
这有助于确保命令行 API 的向后兼容性。

<!--
Note that relative file paths in the Kubelet config file are resolved relative to the
location of the Kubelet config file, whereas relative paths in command line flags are resolved
relative to the Kubelet's current working directory.
-->
请注意,kubelet 配置文件中的相对文件路径是相对于 kubelet 配置文件的位置解析的,
而命令行参数中的相对路径是相对于 kubelet 的当前工作目录解析的。

<!--
Note that some default values differ between command-line flags and the Kubelet config file.
If `--config` is provided and the values are not specified via the command line, the
defaults for the `KubeletConfiguration` version apply.
In the above example, this version is `kubelet.config.k8s.io/v1beta1`.
-->
请注意,命令行参数和 kubelet 配置文件的某些默认值不同。
如果设置了 `--config`,并且没有通过命令行指定值,则相应 `KubeletConfiguration`
版本的默认值生效。在上面的例子中,此版本是 `kubelet.config.k8s.io/v1beta1`。

<!-- discussion -->

<!--
## Relationship to Dynamic Kubelet Config
-->
## 与动态 kubelet 配置的关系

<!--
If you are using the [Dynamic Kubelet Configuration](/docs/tasks/administer-cluster/reconfigure-kubelet)
feature, the combination of configuration provided via `--config` and any flags which override these values
is considered the default "last known good" configuration by the automatic rollback mechanism.
-->
如果你正在使用[动态 kubelet 配置](/zh/docs/tasks/administer-cluster/reconfigure-kubelet)特性,
那么自动回滚机制将把通过 `--config` 提供的配置与覆盖这些值的任何参数的组合,
视为默认的"最后已知正常(last known good)"配置。
content_type: task
weight: 120
---
<!--
---
title: Assign Pods to Nodes
content_type: task
weight: 120
---
-->

<!-- overview -->

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->
<!--
## Add a label to a node

1. List the nodes in your cluster:
-->
## 给节点添加标签

1. 列出集群中的节点:

   ```shell
   kubectl get nodes
   ```

   <!-- The output is similar to this: -->
   输出类似如下:

   ```
   NAME      STATUS    AGE     VERSION
   worker0   Ready     1d      v1.6.0+fff5156
   worker1   Ready     1d      v1.6.0+fff5156
   worker2   Ready     1d      v1.6.0+fff5156
   ```

<!--
1. Choose one of your nodes, and add a label to it:
-->
2. 选择其中一个节点,为它添加标签:

   ```shell
   kubectl label nodes <your-node-name> disktype=ssd
   ```

   <!--
   where `<your-node-name>` is the name of your chosen node.
   -->
   `<your-node-name>` 是你选择的节点的名称。

<!--
1. Verify that your chosen node has a `disktype=ssd` label:
-->
3. 验证你选择的节点是否有 `disktype=ssd` 标签:

   ```shell
   kubectl get nodes --show-labels
   ```

   <!--
   The output is similar to this:
   -->
   输出类似如下:

   ```
   NAME      STATUS    AGE     VERSION            LABELS
   worker0   Ready     1d      v1.6.0+fff5156     ...,disktype=ssd,kubernetes.io/hostname=worker0
   worker1   Ready     1d      v1.6.0+fff5156     ...,kubernetes.io/hostname=worker1
   worker2   Ready     1d      v1.6.0+fff5156     ...,kubernetes.io/hostname=worker2
   ```

   <!--
   In the preceding output, you can see that the `worker0` node has a
   `disktype=ssd` label.
   -->
   在前面的输出中,你可以看到 `worker0` 节点有 `disktype=ssd` 标签。

<!--
## Create a pod that gets scheduled to your chosen node

This pod configuration file describes a pod that has a node selector,
`disktype: ssd`. This means that the pod will get scheduled on a node that has
a `disktype=ssd` label.
-->
## 创建一个调度到你选择的节点的 Pod

此 Pod 配置文件描述了一个拥有节点选择器 `disktype: ssd` 的 Pod。
这表明该 Pod 将被调度到有 `disktype=ssd` 标签的节点。

{{< codenew file="pods/pod-nginx.yaml" >}}
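上面的 codenew 短代码会嵌入仓库中的 `pods/pod-nginx.yaml` 文件。作为参考,这类带节点选择器的 Pod 配置大致形如下面的草图(镜像等细节以仓库中的实际文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
```

`spec.nodeSelector` 中的键值对必须与节点标签(本例中为 `disktype=ssd`)完全匹配,Pod 才会被调度到该节点。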
1. 使用该配置文件创建一个 Pod,该 Pod 将被调度到你选择的节点上:

   ```shell
   kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
   ```

<!--
1. Verify that the pod is running on your chosen node:
-->
2. 验证 Pod 是不是运行在你选择的节点上:

   ```shell
   kubectl get pods --output=wide
   ```

   <!-- The output is similar to this: -->
   输出类似如下:

   ```
   NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
   nginx    1/1       Running   0          13s    10.200.0.4   worker0
   ```

## {{% heading "whatsnext" %}}

<!--
Learn more about
[labels and selectors](/docs/concepts/overview/working-with-objects/labels/).
-->
进一步了解[标签和选择器](/zh/docs/concepts/overview/working-with-objects/labels/)。
content_type: task
weight: 140
---
<!--
---
title: Attach Handlers to Container Lifecycle Events
content_type: task
weight: 140
---
-->

<!-- overview -->

当一个容器启动后,Kubernetes 将立即发送 postStart 事件;在容器被终结之前,
Kubernetes 将发送一个 preStop 事件。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

<!--
## Define postStart and preStop handlers

In this exercise, you create a Pod that has one Container. The Container has handlers
for the postStart and preStop events.
-->
## 定义 postStart 和 preStop 处理函数

在本练习中,你将创建一个包含一个容器的 Pod,该容器为 postStart 和 preStop 事件提供对应的处理函数。

<!--
Here is the configuration file for the Pod:
-->
下面是对应 Pod 的配置文件:

{{< codenew file="pods/lifecycle-events.yaml" >}}
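上面的 codenew 短代码会嵌入仓库中的 `pods/lifecycle-events.yaml` 文件。作为参考,这类配置大致形如下面的草图(处理函数中的具体命令以仓库中的实际文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```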
在上述配置文件中,你可以看到 postStart 命令在容器的 `/usr/share` 目录下写入文件 `message`。
命令 preStop 负责优雅地终止 nginx 服务。当因为失效而导致容器终止时,这一处理方式很有用。

<!--
Create the Pod:
-->
创建 Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/lifecycle-events.yaml
```

<!--
Verify that the Container in the Pod is running:
-->
验证 Pod 中的容器已经运行:

```shell
kubectl get pod lifecycle-demo
```

<!--
Get a shell into the Container running in your Pod:
-->
使用 shell 连接到你的 Pod 里的容器:

```shell
kubectl exec -it lifecycle-demo -- /bin/bash
```

<!--
In your shell, verify that the `postStart` handler created the `message` file:
-->
在 shell 中,验证 `postStart` 处理函数创建了 `message` 文件:

```
root@lifecycle-demo:/# cat /usr/share/message
```

<!--
The output shows the text written by the postStart handler:
-->
命令行输出的是 `postStart` 处理函数所写入的文本:

```
Hello from the postStart handler
```

<!-- discussion -->

<!--
## Discussion
-->
## 讨论

<!--
Kubernetes sends the postStart event immediately after the Container is created.
There is no guarantee, however, that the postStart handler is called before
the Container's entrypoint is called. The postStart handler runs asynchronously
relative to the Container's code, but Kubernetes' management of the container
blocks until the postStart handler completes. The Container's status is not
set to RUNNING until the postStart handler completes.
-->
Kubernetes 在容器创建后立即发送 postStart 事件。
然而,postStart 处理函数的调用不保证早于容器入口点(entrypoint)的执行。
postStart 处理函数与容器的代码是异步执行的,但 Kubernetes
的容器管理逻辑会一直阻塞等待 postStart 处理函数执行完毕。
只有 postStart 处理函数执行完毕,容器的状态才会变成 RUNNING。
Kubernetes 在容器结束前立即发送 preStop 事件。除非 Pod 宽限期限超时,
Kubernetes 的容器管理逻辑会一直阻塞等待 preStop 处理函数执行完毕。
更多的相关细节,可以参阅
[Pod 的结束](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。

<!--
{{< note >}}
Kubernetes only sends the preStop event when a Pod is *terminated*.
This means that the preStop hook is not invoked when the Pod is *completed*.
This limitation is tracked in [issue #55087](https://github.com/kubernetes/kubernetes/issues/55807).
{{< /note >}}
-->
{{< note >}}
Kubernetes 只有在 Pod *结束(Terminated)* 的时候才会发送 preStop 事件,
这意味着在 Pod *完成(Completed)* 时 preStop 的事件处理逻辑不会被触发。
这个限制在 [issue #55087](https://github.com/kubernetes/kubernetes/issues/55807) 中被追踪。
{{< /note >}}

## {{% heading "whatsnext" %}}

<!--
* Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
* Learn more about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
-->
* 进一步了解[容器生命周期回调](/zh/docs/concepts/containers/container-lifecycle-hooks/)。
* 进一步了解 [Pod 的生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/)。

<!--
### Reference
-->
### 参考 {#reference}

* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* 参阅 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) 中关于 `terminationGracePeriodSeconds` 的部分
这篇文章介绍如何给容器配置存活、就绪和启动探测器。

[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/)
使用存活探测器来知道什么时候要重启容器。
例如,存活探测器可以捕捉到死锁(应用程序在运行,但是无法继续执行后面的步骤)。
这样的情况下重启容器有助于让应用程序在有问题的情况下更可用。

kubelet 使用就绪探测器可以知道容器什么时候准备好了并可以开始接受请求流量,
当一个 Pod 内的所有容器都准备好了,才能把这个 Pod 看作就绪了。
这种信号的一个用途就是控制哪个 Pod 作为 Service 的后端。
在 Pod 还没有准备好的时候,会从 Service 的负载均衡器中被剔除。

kubelet 使用启动探测器可以知道应用程序容器什么时候启动了。
如果配置了这类探测器,就可以控制容器在启动成功后再进行存活性和就绪检查,
确保这些存活、就绪探测器不会影响应用程序的启动。
这可以用于对慢启动容器进行存活性检测,避免它们在启动运行之前就被杀掉。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## 定义存活命令 {#define-a-liveness-command}

许多长时间运行的应用程序最终会过渡到断开的状态,除非重新启动,否则无法恢复。
Kubernetes 提供了存活探测器来发现并补救这种情况。

在这篇练习中,你会创建一个 Pod,其中运行一个基于 `k8s.gcr.io/busybox` 镜像的容器。
下面是这个 Pod 的配置文件。

{{< codenew file="pods/probe/exec-liveness.yaml" >}}
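上面的 codenew 短代码会嵌入仓库中的 `pods/probe/exec-liveness.yaml` 文件。作为参考,这类基于 exec 命令的存活探测配置大致形如下面的草图(细节以仓库中的实际文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```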
在这个配置文件中,可以看到 Pod 中只有一个容器。
`periodSeconds` 字段指定了 kubelet 应该每 5 秒执行一次存活探测。
`initialDelaySeconds` 字段告诉 kubelet 在执行第一次探测前应该等待 5 秒。
kubelet 在容器内执行命令 `cat /tmp/healthy` 来进行探测。
如果命令执行成功并且返回值为 0,kubelet 就会认为这个容器是健康存活的。
如果这个命令返回非 0 值,kubelet 会杀死这个容器并重新启动它。

这个容器生命的前 30 秒,`/tmp/healthy` 文件是存在的。
所以在这最开始的 30 秒内,执行命令 `cat /tmp/healthy` 会返回成功代码。
30 秒之后,执行命令 `cat /tmp/healthy` 就会返回失败代码。

创建 Pod(与前面的示例一样,使用 `kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml`),
然后查看 Pod 的事件:

```shell
kubectl describe pod liveness-exec
```

<!--
The output indicates that no liveness probes have failed yet:
-->
输出结果表明还没有存活探测器失败:

```
FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
24s       24s         1       {default-scheduler }                    Normal      Scheduled   Successfully assigned liveness-exec to worker0
```

在输出结果的最下面,有信息显示存活探测器失败了,这个容器被杀死并且被重建了:

```
FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
37s       37s         1       {default-scheduler }                    Normal      Scheduled   Successfully assigned liveness-exec to worker0
```

输出结果显示 `RESTARTS` 的值增加了 1:

```
NAME            READY     STATUS    RESTARTS   AGE
liveness-exec   1/1       Running   1          1m
```

## 定义一个存活态 HTTP 请求接口 {#define-a-liveness-HTTP-request}

另外一种类型的存活探测方式是使用 HTTP GET 请求。
下面是一个 Pod 的配置文件,其中运行一个基于 `k8s.gcr.io/liveness` 镜像的容器。

{{< codenew file="pods/probe/http-liveness.yaml" >}}
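上面的 codenew 短代码会嵌入仓库中的 `pods/probe/http-liveness.yaml` 文件。作为参考,这类 HTTP 存活探测配置大致形如下面的草图(自定义 HTTP 头等细节以仓库中的实际文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```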
在这个配置文件中,可以看到 Pod 也只有一个容器。
`periodSeconds` 字段指定了 kubelet 每隔 3 秒执行一次存活探测。
`initialDelaySeconds` 字段告诉 kubelet 在执行第一次探测前应该等待 3 秒。
kubelet 会向容器内运行的服务(服务监听 8080 端口)发送一个 HTTP GET 请求来执行探测。
如果服务器上 `/healthz` 路径下的处理程序返回成功代码,则 kubelet 认为容器是健康存活的。
如果处理程序返回失败代码,则 kubelet 会杀死这个容器并且重新启动它。

任何大于或等于 200 并且小于 400 的返回代码标示成功,其它返回代码都标示失败。

服务的源码可以在 [server.go](https://github.com/kubernetes/kubernetes/blob/master/test/images/agnhost/liveness/server.go) 中查看。

kubelet 在容器启动之后 3 秒开始执行健康检测,所以前几次健康检查都是成功的。
但是 10 秒之后,健康检查会失败,并且 kubelet 会杀死容器再重新启动容器。

创建一个 Pod 来测试 HTTP 的存活检测(`kubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml`)。

在 1.13 之前(包括 1.13)的版本中,如果在 Pod 运行的节点上设置了环境变量
`http_proxy`(或者 `HTTP_PROXY`),HTTP 的存活探测会使用这个代理。
在 1.13 之后的版本中,设置本地的 HTTP 代理环境变量不会影响 HTTP 的存活探测。

<!--
## Define a TCP liveness probe
-->
## 定义 TCP 的存活探测 {#define-a-TCP-liveness-probe}

第三种类型的存活探测是使用 TCP 套接字。
通过这个配置,kubelet 会尝试在指定端口上与容器建立套接字连接。
如果能建立连接,这个容器就被看作是健康的;如果不能,这个容器就被看作是有问题的。

{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
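上面的 codenew 短代码会嵌入仓库中的 `pods/probe/tcp-liveness-readiness.yaml` 文件。作为参考,这类同时配置就绪与 TCP 存活探测的清单大致形如下面的草图(细节以仓库中的实际文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```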
如你所见,TCP 检测的配置和 HTTP 检测非常相似。
下面这个例子同时使用就绪和存活探测器。kubelet 会在容器启动 5 秒后发送第一个就绪探测。
这会尝试连接 `goproxy` 容器的 8080 端口。
如果探测成功,这个 Pod 会被标记为就绪状态,kubelet 将继续每隔 10 秒运行一次检测。

除了就绪探测,这个配置包括了一个存活探测。
kubelet 会在容器启动 15 秒后进行第一次存活探测。
就像就绪探测一样,它会尝试连接 `goproxy` 容器的 8080 端口。
如果存活探测失败,这个容器会被重新启动。

要测试 TCP 的存活检测,创建对应的 Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml
```

## 使用命名端口 {#use-a-named-port}

对于 HTTP 或者 TCP 存活检测可以使用命名的
[ContainerPort](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerport-v1-core)。

```yaml
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
```

## 使用启动探测器保护慢启动容器 {#define-startup-probes}

有时候,会有一些现有的应用程序在启动时需要较多的初始化时间。
在这种情况下,要在不影响对触发探测死锁的快速响应的前提下设置存活探测参数,是需要技巧的。
技巧就是使用同一个命令来设置启动探测。针对 HTTP 或者 TCP 检测,可以通过把
`failureThreshold * periodSeconds` 设置得足够长,来应对糟糕情况下的启动时间。

所以,前面的例子就变成了:
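下面是一个符合上述描述的示意配置(`failureThreshold`、`periodSeconds` 的具体取值是演示用的假设值):

```yaml
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
```

在这种写法下,启动探测最多允许 `30 * 10 = 300` 秒(failureThreshold × periodSeconds)来完成启动;一旦启动探测成功一次,存活探测就会接管,对容器死锁做出快速响应。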
|
||||
|
||||
|
@ -392,12 +426,16 @@ Services.
|
|||
-->
|
||||
## 定义就绪探测器 {#define-readiness-probes}
|
||||
|
||||
有时候,应用程序会暂时性的不能提供通信服务。例如,应用程序在启动时可能需要加载很大的数据或配置文件,或是启动后要依赖等待外部服务。在这种情况下,既不想杀死应用程序,也不想给它发送请求。Kubernetes 提供了就绪探测器来发现并缓解这些情况。容器所在 Pod 上报还未就绪的信息,并且不接受通过 Kubernetes Service 的流量。
|
||||
有时候,应用程序会暂时性的不能提供通信服务。
|
||||
例如,应用程序在启动时可能需要加载很大的数据或配置文件,或是启动后要依赖等待外部服务。
|
||||
在这种情况下,既不想杀死应用程序,也不想给它发送请求。
|
||||
Kubernetes 提供了就绪探测器来发现并缓解这些情况。
|
||||
容器所在 Pod 上报还未就绪的信息,并且不接受通过 Kubernetes Service 的流量。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Readiness probes runs on the container during its whole lifecycle.
|
||||
-->
|
||||
{{< note >}}
|
||||
就绪探测器在容器的整个生命周期中保持运行状态。
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -405,7 +443,8 @@ Readiness probes runs on the container during its whole lifecycle.
|
|||
Readiness probes are configured similarly to liveness probes. The only difference
|
||||
is that you use the `readinessProbe` field instead of the `livenessProbe` field.
|
||||
-->
|
||||
就绪探测器的配置和存活探测器的配置相似。唯一区别就是要使用 `readinessProbe` 字段,而不是 `livenessProbe` 字段。
|
||||
就绪探测器的配置和存活探测器的配置相似。
|
||||
唯一区别就是要使用 `readinessProbe` 字段,而不是 `livenessProbe` 字段。
|
||||
|
||||
```yaml
|
||||
readinessProbe:
|
||||
|
@ -426,17 +465,18 @@ for it, and that containers are restarted when they fail.
|
|||
-->
|
||||
HTTP 和 TCP 的就绪探测器配置也和存活探测器的配置一样的。
|
||||
|
||||
就绪和存活探测可以在同一个容器上并行使用。两者都用可以确保流量不会发给还没有准备好的容器,并且容器会在它们失败的时候被重新启动。
|
||||
就绪和存活探测可以在同一个容器上并行使用。
|
||||
两者都用可以确保流量不会发给还没有准备好的容器,并且容器会在它们失败的时候被重新启动。
|
||||
|
||||
<!--
|
||||
## Configure Probes
|
||||
-->
|
||||
## 配置探测器 {#configure-probes}
|
||||
|
||||
{{< comment >}}
|
||||
<!--
|
||||
Eventually, some of this section could be moved to a concept topic.
|
||||
-->
|
||||
{{< comment >}}
|
||||
最后,本节的一些内容可以放到某个概念主题里。
|
||||
{{< /comment >}}
|
||||
|
||||
|
@ -445,7 +485,8 @@ Eventually, some of this section could be moved to a concept topic.
|
|||
you can use to more precisely control the behavior of liveness and readiness
|
||||
checks:
|
||||
-->
|
||||
[探测器](/zh/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)有很多配置字段,可以使用这些字段精确的控制存活和就绪检测的行为:
|
||||
[Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
|
||||
有很多配置字段,可以使用这些字段精确的控制存活和就绪检测的行为:
|
||||
|
||||
<!--
|
||||
* `initialDelaySeconds`: Number of seconds after the container has started
|
||||
|
@ -464,8 +505,11 @@ Defaults to 3. Minimum value is 1.
|
|||
* `initialDelaySeconds`:容器启动后要等待多少秒后存活和就绪探测器才被初始化,默认是 0 秒,最小值是 0。
|
||||
* `periodSeconds`:执行探测的时间间隔(单位是秒)。默认是 10 秒。最小值是 1。
|
||||
* `timeoutSeconds`:探测的超时后等待多少秒。默认值是 1 秒。最小值是 1。
|
||||
* `successThreshold`:探测器在失败后,被视为成功的最小连续成功数。默认值是 1。存活探测的这个值必须是 1。最小值是 1。
|
||||
* `failureThreshold`:当探测失败时,Kubernetes 的重试次数。存活探测情况下的放弃就意味着重新启动容器。就绪探测情况下的放弃 Pod 会被打上未就绪的标签。默认值是 3。最小值是 1。
|
||||
* `successThreshold`:探测器在失败后,被视为成功的最小连续成功数。默认值是 1。
|
||||
存活探测的这个值必须是 1。最小值是 1。
|
||||
* `failureThreshold`:当探测失败时,Kubernetes 的重试次数。
|
||||
存活探测情况下的放弃就意味着重新启动容器。
|
||||
就绪探测情况下的放弃 Pod 会被打上未就绪的标签。默认值是 3。最小值是 1。
|
||||
|
||||
<!--
[HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)

@@ -479,7 +523,8 @@ set "Host" in httpHeaders instead.

* `port`: Name or number of the port to access on the container. Number must be
  in the range 1 to 65535.
-->
[HTTP Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
可以在 `httpGet` 上配置额外的字段:

* `host`:连接使用的主机名,默认是 Pod 的 IP。也可以在 HTTP 头中设置 “Host” 来代替。
* `scheme`:用于设置连接主机的方式(HTTP 还是 HTTPS)。默认是 HTTP。

@@ -502,20 +547,25 @@ For a TCP probe, the kubelet makes the probe connection at the node, not in the
means that you can not use a service name in the `host` parameter since the kubelet is unable
to resolve it.
-->
对于 HTTP 探测,kubelet 发送一个 HTTP 请求到指定的路径和端口来执行检测。
除非 `httpGet` 中的 `host` 字段设置了,否则 kubelet 默认是给 Pod 的 IP 地址发送探测。
如果 `scheme` 字段设置为了 `HTTPS`,kubelet 会跳过证书验证发送 HTTPS 请求。
大多数情况下,不需要设置 `host` 字段。
这里有个需要设置 `host` 字段的场景:假设容器监听 127.0.0.1,并且 Pod 的 `hostNetwork`
字段设置为了 `true`,那么 `httpGet` 中的 `host` 字段应该设置为 127.0.0.1。
可能更常见的情况是如果 Pod 依赖虚拟主机,你不应该设置 `host` 字段,而是应该在
`httpHeaders` 中设置 `Host`。

对于一次 TCP 探测,kubelet 在节点上(不是在 Pod 里面)建立探测连接,
这意味着你不能在 `host` 参数上配置服务名称,因为 kubelet 不能解析服务名称。
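下面的示意片段演示上文所述的场景:容器监听 127.0.0.1 且 Pod 的 `hostNetwork` 为 `true`(名称、镜像与端口均为假设的示例值):

```yaml
# 示意:hostNetwork 为 true 时将探测的 host 设置为 127.0.0.1
apiVersion: v1
kind: Pod
metadata:
  name: host-probe-demo     # 名称仅为示例
spec:
  hostNetwork: true
  containers:
  - name: app
    image: nginx            # 镜像仅为示例
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 8080
```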
## {{% heading "whatsnext" %}}

<!--
* Learn more about
  [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
-->
* 进一步了解[容器探测器](/zh/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)。
<!--
### Reference

@@ -525,8 +575,9 @@ to resolve it.

* [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
-->
### 参考 {#reference}

* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)

@@ -5,11 +5,9 @@ weight: 130
---

<!--
---
title: Configure Pod Initialization
content_type: task
weight: 130
---
-->

<!-- overview -->

@@ -36,7 +34,7 @@ Here is the configuration file for the Pod:
-->
|
||||
## 创建一个包含 Init 容器的 Pod {#creating-a-pod-that-has-an-init-container}

本例中你将创建一个包含一个应用容器和一个 Init 容器的 Pod。Init 容器在应用容器启动前运行完成。

下面是 Pod 的配置文件:
|
||||
|
||||
|
@ -51,13 +49,14 @@ shared Volume at `/work-dir`, and the application container mounts the shared
|
|||
Volume at `/usr/share/nginx/html`. The init container runs the following command
|
||||
and then terminates:
|
||||
-->
|
||||
|
||||
配置文件中,你可以看到应用容器和 Init 容器共享了一个卷。

Init 容器将共享卷挂载到了 `/work-dir` 目录,应用容器将共享卷挂载到了 `/usr/share/nginx/html` 目录。
Init 容器执行完下面的命令就终止:

```shell
wget -O /work-dir/index.html http://kubernetes.io
```
|
||||
|
||||
<!--
|
||||
Notice that the init container writes the `index.html` file in the root directory
|
||||
|
@ -65,37 +64,41 @@ of the nginx server.
|
|||
|
||||
Create the Pod:
|
||||
-->
|
||||
|
||||
请注意 Init 容器在 nginx 服务器的根目录写入 `index.html`。
|
||||
|
||||
创建 Pod:

```shell
kubectl create -f https://k8s.io/examples/pods/init-containers.yaml
```
|
||||
|
||||
<!--
|
||||
Verify that the nginx container is running:
|
||||
-->
|
||||
|
||||
检查 nginx 容器运行正常:

```shell
kubectl get pod init-demo
```
|
||||
|
||||
<!--
|
||||
The output shows that the nginx container is running:
|
||||
-->
|
||||
|
||||
结果表明 nginx 容器运行正常:

```
NAME        READY     STATUS    RESTARTS   AGE
init-demo   1/1       Running   0          1m
```
|
||||
|
||||
<!--
|
||||
Get a shell into the nginx container running in the init-demo Pod:
|
||||
-->
|
||||
|
||||
通过 shell 进入 init-demo Pod 中的 nginx 容器:

```shell
kubectl exec -it init-demo -- /bin/bash
```
|
||||
|
||||
<!--
|
||||
In your shell, send a GET request to the nginx server:
|
||||
|
@ -103,33 +106,33 @@ In your shell, send a GET request to the nginx server:
|
|||
|
||||
在 shell 中,发送一个 GET 请求到 nginx 服务器:

```
root@nginx:~# apt-get update
root@nginx:~# apt-get install curl
root@nginx:~# curl localhost
```
|
||||
|
||||
<!--
|
||||
The output shows that nginx is serving the web page that was written by the init container:
|
||||
-->
|
||||
|
||||
输出结果表明 nginx 正在提供 Init 容器所写入的 web 页面:

```
<!Doctype html>
<html id="home">

<head>
...
"url": "http://kubernetes.io/"}</script>
</head>
<body>
...
<p>Kubernetes is open source giving you the freedom to take advantage ...</p>
...
```
|
||||
|
||||
## {{% heading "whatsnext" %}}

<!--
* Learn more about
  [communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/).

@@ -138,11 +141,8 @@ The output shows that nginx is serving the web page that was written by the init
* Learn more about [Debugging Init Containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
-->
* 进一步了解[同一 Pod 中的容器间的通信](/zh/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/)。
* 进一步了解 [Init 容器](/zh/docs/concepts/workloads/pods/init-containers/)。
* 进一步了解[卷](/zh/docs/concepts/storage/volumes/)。
* 进一步了解 [Init 容器排错](/zh/docs/tasks/debug-application-cluster/debug-init-containers/)。
|
|
|
@@ -1,15 +1,13 @@
---
title: 为 Windows 的 Pod 和容器配置 RunAsUserName
content_type: task
weight: 20
---

<!--
---
title: Configure RunAsUserName for Windows pods and containers
content_type: task
weight: 20
---
-->

<!-- overview -->
|
||||
|
@@ -19,36 +17,38 @@ weight: 20
<!--
This page shows how to enable and use the `RunAsUserName` feature for pods and containers that will run on Windows nodes. This feature is meant to be the Windows equivalent of the Linux-specific `runAsUser` feature, allowing users to run the container entrypoints with a different username than their default ones.
-->
本页展示如何为运行在 Windows 节点上的 pod 和容器启用并使用 `RunAsUserName` 功能。
此功能旨在成为 Windows 版的 `runAsUser`(Linux),允许用户使用与默认用户名不同的
用户名运行容器 entrypoint。

{{< note >}}
<!--
This feature is in beta. The overall functionality for `RunAsUserName` will not change, but there may be some changes regarding the username validation.
-->
此功能目前处于 Beta 状态。`RunAsUserName` 的整体功能不会出现变更,
但是关于用户名验证的部分可能会有所更改。
{{< /note >}}
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
<!--
|
||||
You need to have a Kubernetes cluster and the kubectl command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes where pods with containers running Windows workloads will get scheduled.
|
||||
-->
|
||||
|
||||
你必须有一个 Kubernetes 集群,并且 kubectl 必须能和集群通信。
集群应该要有 Windows 工作节点,将在其中调度运行 Windows 工作负载的 pod 和容器。
|
||||
|
||||
<!--
|
||||
## Set the Username for a Pod
|
||||
|
||||
To specify the username with which to execute the Pod's container processes, include the `securityContext` field ([PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core) in the Pod specification, and within it, the `windowsOptions` ([WindowsSecurityContextOptions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#windowssecuritycontextoptions-v1-core) field containing the `runAsUserName` field.
|
||||
-->
|
||||
|
||||
## 为 Pod 设置 Username
|
||||
|
||||
要指定运行 Pod 容器时所使用的用户名,请在 Pod 声明中包含 `securityContext`
([PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core))字段,
并在其内部包含 `windowsOptions`
([WindowsSecurityContextOptions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#windowssecuritycontextoptions-v1-core))
字段的 `runAsUserName` 字段。
|
||||
|
||||
<!--
|
||||
The Windows security context options that you specify for a Pod apply to all Containers and init Containers in the Pod.
|
||||
|
@ -56,7 +56,7 @@ The Windows security context options that you specify for a Pod apply to all Con
|
|||
Here is a configuration file for a Windows Pod that has the `runAsUserName` field set:
|
||||
-->
|
||||
|
||||
你为 Pod 指定的 Windows SecurityContext 选项适用于该 Pod 中(包括 init 容器)的所有容器。
|
||||
|
||||
这儿有一个已经设置了 `runAsUserName` 字段的 Windows Pod 的配置文件:
|
||||
|
||||
|
@ -127,17 +127,15 @@ The Windows security context options that you specify for a Container apply only
|
|||
|
||||
Here is the configuration file for a Pod that has one Container, and the `runAsUserName` field is set at the Pod level and the Container level:
|
||||
-->
|
||||
你为容器指定的 Windows SecurityContext 选项仅适用于该容器,并且它会覆盖 Pod 级别设置。

这里有一个 Pod 的配置文件,其中只有一个容器,并且在 Pod 级别和容器级别都设置了 `runAsUserName`:
|
||||
|
||||
{{< codenew file="windows/run-as-username-container.yaml" >}}
|
||||
|
||||
<!--
|
||||
Create the Pod:
|
||||
-->
|
||||
|
||||
创建 Pod:
|
||||
|
||||
```shell
|
||||
|
@ -147,7 +145,6 @@ kubectl apply -f https://k8s.io/examples/windows/run-as-username-container.yaml
|
|||
<!--
|
||||
Verify that the Pod's Container is running:
|
||||
-->
|
||||
|
||||
验证 Pod 容器是否在运行:
|
||||
|
||||
```shell
|
||||
|
@ -157,7 +154,6 @@ kubectl get pod run-as-username-container-demo
|
|||
<!--
|
||||
Get a shell to the running Container:
|
||||
-->
|
||||
|
||||
获取该容器的 shell:
|
||||
|
||||
```shell
|
||||
|
@ -167,7 +163,6 @@ kubectl exec -it run-as-username-container-demo -- powershell
|
|||
<!--
|
||||
Check that the shell is running user the correct username (the one set at the Container level):
|
||||
-->
|
||||
|
||||
检查运行 shell 的用户的用户名是否正确(应该是容器级别设置的那个):
|
||||
|
||||
```powershell
|
||||
|
@ -177,10 +172,9 @@ echo $env:USERNAME
|
|||
<!--
|
||||
The output should be:
|
||||
-->
|
||||
|
||||
输出结果应该是这样:
|
||||
|
||||
```
ContainerAdministrator
```
|
||||
|
||||
|
@ -189,10 +183,11 @@ ContainerAdministrator
|
|||
|
||||
In order to use this feature, the value set in the `runAsUserName` field must be a valid username. It must have the following format: `DOMAIN\USER`, where `DOMAIN\` is optional. Windows user names are case insensitive. Additionally, there are some restrictions regarding the `DOMAIN` and `USER`:
|
||||
-->
|
||||
|
||||
## Windows Username 的局限性
|
||||
|
||||
想要使用此功能,在 `runAsUserName` 字段中设置的值必须是有效的用户名。
它必须是 `DOMAIN\USER` 这种格式,其中 `DOMAIN\` 是可选的。
Windows 用户名不区分大小写。此外,关于 `DOMAIN` 和 `USER` 还有一些限制:
|
||||
|
||||
<!--
|
||||
- The `runAsUserName` field cannot be empty, and it cannot contain control characters (ASCII values: `0x00-0x1F`, `0x7F`)
|
||||
|
@ -201,7 +196,6 @@ In order to use this feature, the value set in the `runAsUserName` field must be
|
|||
- DNS names: maximum 255 characters, contains only alphanumeric characters, dots, and dashes, and it cannot start or end with a `.` (dot) or `-` (dash).
|
||||
- The `USER` must have at most 20 characters, it cannot contain *only* dots or spaces, and it cannot contain the following characters: `" / \ [ ] : ; | = , + * ? < > @`.
|
||||
-->
|
||||
|
||||
- `runAsUserName` 字段不能为空,并且不能包含控制字符(ASCII 值:`0x00-0x1F`、`0x7F`)
|
||||
- `DOMAIN` 必须是 NetBios 名称或 DNS 名称,每种名称都有各自的局限性:
|
||||
- NetBios 名称:最多 15 个字符,不能以 `.`(点)开头,并且不能包含以下字符:`\ / : * ? " < > |`
|
||||
|
@ -213,24 +207,19 @@ Examples of acceptable values for the `runAsUserName` field: `ContainerAdministr
|
|||
|
||||
For more information about these limitations, check [here](https://support.microsoft.com/en-us/help/909264/naming-conventions-in-active-directory-for-computers-domains-sites-and) and [here](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.localaccounts/new-localuser?view=powershell-5.1).
|
||||
-->
|
||||
|
||||
`runAsUserName` 字段接受的值的一些示例:`ContainerAdministrator`、`ContainerUser`、
`NT AUTHORITY\NETWORK SERVICE`、`NT AUTHORITY\LOCAL SERVICE`。
|
||||
|
||||
关于这些限制的更多信息,可以查看[这里](https://support.microsoft.com/en-us/help/909264/naming-conventions-in-active-directory-for-computers-domains-sites-and)和[这里](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.localaccounts/new-localuser?view=powershell-5.1)。
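作为示意,下面的片段演示 `DOMAIN\USER` 形式的取值(域名与用户名均为假设的示例值,注意 YAML 双引号字符串中反斜线需要转义):

```yaml
# 示意:runAsUserName 使用 DOMAIN\USER 格式,取值仅为示例
securityContext:
  windowsOptions:
    runAsUserName: "EXAMPLEDOMAIN\\winuser-1"
```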
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
<!--
|
||||
* [Guide for scheduling Windows containers in Kubernetes](/docs/setup/production-environment/windows/user-guide-windows-containers/)
|
||||
* [Managing Workload Identity with Group Managed Service Accounts (GMSA)](/docs/setup/production-environment/windows/user-guide-windows-containers/#managing-workload-identity-with-group-managed-service-accounts)
|
||||
* [Configure GMSA for Windows pods and containers](/docs/tasks/configure-pod-container/configure-gmsa/)
|
||||
-->
|
||||
|
||||
* [Kubernetes 中调度 Windows 容器的指南](/zh/docs/setup/production-environment/windows/user-guide-windows-containers/)
* [使用组托管服务帐户(GMSA)管理工作负载身份](/zh/docs/setup/production-environment/windows/user-guide-windows-containers/#managing-workload-identity-with-group-managed-service-accounts)
* [Windows 下 pod 和容器的 GMSA 配置](/zh/docs/tasks/configure-pod-container/configure-gmsa/)
|
||||
|
||||
|
|
|
@ -5,11 +5,9 @@ weight: 40
|
|||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Assign Extended Resources to a Container
|
||||
content_type: task
|
||||
weight: 40
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -21,14 +19,8 @@ This page shows how to assign extended resources to a Container.
|
|||
-->
|
||||
本文介绍如何为容器指定扩展资源。
|
||||
|
||||
{{< feature-state state="stable" >}}
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
<!--
|
||||
|
@ -36,11 +28,9 @@ Before you do this exercise, do the exercise in
|
|||
[Advertise Extended Resources for a Node](/docs/tasks/administer-cluster/extended-resource-node/).
|
||||
That will configure one of your Nodes to advertise a dongle resource.
|
||||
-->
|
||||
在你开始此练习前,请先练习
[为节点广播扩展资源](/zh/docs/tasks/administer-cluster/extended-resource-node/)。
在那个练习中将配置你的一个节点来广播 dongle 资源。
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -57,9 +47,10 @@ Here is the configuration file for a Pod that has one Container:
|
|||
-->
|
||||
## 给 Pod 分派扩展资源
|
||||
|
||||
要请求扩展资源,需要在你的容器清单中包括 `resources:requests` 字段。
扩展资源可以使用任何完全限定名称,只是不能使用 `*.kubernetes.io/`。
有效的扩展资源名的格式为 `example.com/foo`,其中 `example.com` 应被替换为
你的组织的域名,而 `foo` 则是描述性的资源名称。
|
||||
|
||||
下面是包含一个容器的 Pod 配置文件:
|
||||
|
||||
|
@ -70,7 +61,7 @@ In the configuration file, you can see that the Container requests 3 dongles.
|
|||
|
||||
Create a Pod:
|
||||
-->
|
||||
在配置文件中,你可以看到容器请求了 3 个 dongles。
|
||||
|
||||
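上述请求部分对应于清单中如下片段(`example.com/dongle` 是本练习中广播的示例资源名):

```yaml
# 容器请求 3 个 dongle 扩展资源
resources:
  requests:
    example.com/dongle: 3
```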
创建 Pod:
|
||||
|
||||
|
@ -137,7 +128,7 @@ kubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod-2.y
|
|||
<!--
|
||||
Describe the Pod
|
||||
-->
|
||||
描述 Pod:
|
||||
|
||||
```shell
|
||||
kubectl describe pod extended-resource-demo-2
|
||||
|
@ -175,7 +166,7 @@ It has a status of Pending:
|
|||
-->
|
||||
输出结果表明 Pod 虽然被创建了,但没有被调度到节点上正常运行。Pod 的状态为 Pending:
|
||||
|
||||
```
NAME                       READY     STATUS    RESTARTS   AGE
extended-resource-demo-2   0/1       Pending   0          6m
```
|
||||
|
@ -185,7 +176,7 @@ extended-resource-demo-2 0/1 Pending 0 6m
|
|||
|
||||
Delete the Pods that you created for this exercise:
|
||||
-->
|
||||
## 清理
|
||||
|
||||
删除本练习中创建的 Pod:
|
||||
|
||||
|
@ -194,11 +185,8 @@ kubectl delete pod extended-resource-demo
|
|||
kubectl delete pod extended-resource-demo-2
|
||||
```
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
<!--
|
||||
### For application developers
|
||||
|
||||
|
@ -207,8 +195,8 @@ kubectl delete pod extended-resource-demo-2
|
|||
-->
|
||||
### 应用开发者参考
|
||||
|
||||
* [为容器和 Pod 分配内存资源](/zh/docs/tasks/configure-pod-container/assign-memory-resource/)
* [为容器和 Pod 分配 CPU 资源](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/)
|
||||
|
||||
<!--
|
||||
### For cluster administrators
|
||||
|
@ -217,7 +205,5 @@ kubectl delete pod extended-resource-demo-2
|
|||
-->
|
||||
### 集群管理员参考
|
||||
|
||||
* [为节点广播扩展资源](/zh/docs/tasks/administer-cluster/extended-resource-node/)
|
||||
|
||||
|
|
|
@ -1,6 +1,4 @@
|
|||
---
|
||||
reviewers:
|
||||
- jsafrane
|
||||
title: 创建静态 Pod
|
||||
weight: 170
|
||||
content_type: task
|
||||
|
@ -17,8 +15,10 @@ Unlike Pods that are managed by the control plane (for example, a
|
|||
instead, the kubelet watches each static Pod (and restarts it if it crashes).
|
||||
-->
|
||||
|
||||
*静态 Pod* 在指定的节点上由 kubelet 守护进程直接管理,不需要
{{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}} 监管。
与由控制面管理的 Pod(例如,{{< glossary_tooltip text="Deployment" term_id="deployment" >}})
不同,kubelet 监视每个静态 Pod(在它崩溃之后重新启动)。
|
||||
|
||||
<!--
|
||||
Static Pods are always bound to one {{< glossary_tooltip term_id="kubelet" >}} on a specific node.
|
||||
|
@ -35,18 +35,17 @@ Pods to run a Pod on every node, you should probably be using a
|
|||
instead.
|
||||
{{< /note >}}
|
||||
-->
|
||||
|
||||
静态 Pod 永远都会绑定到一个指定节点上的 {{< glossary_tooltip term_id="kubelet" >}}。
|
||||
|
||||
kubelet 会尝试通过 Kubernetes API 服务器为每个静态 Pod 自动创建一个
{{< glossary_tooltip text="镜像 Pod" term_id="mirror-pod" >}}。
这意味着节点上运行的静态 Pod 对 API 服务器来说是可见的,但是不能通过 API 服务器来控制。
|
||||
|
||||
{{< note >}}
|
||||
如果你在运行一个 Kubernetes 集群,并且在每个节点上都运行一个静态 Pod,
就可能需要考虑使用 {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}} 替代这种方式。
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
|
@ -57,12 +56,10 @@ This page assumes you're using {{< glossary_tooltip term_id="docker" >}} to run
|
|||
and that your nodes are running the Fedora operating system.
|
||||
Instructions for other distributions or Kubernetes installations may vary.
|
||||
-->
|
||||
|
||||
本文假定你在使用 {{< glossary_tooltip term_id="docker" >}} 来运行 Pod,
并且你的节点运行着 Fedora 操作系统。
|
||||
其它发行版或者 Kubernetes 部署版本上操作方式可能不一样。
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
|
@ -70,10 +67,11 @@ Instructions for other distributions or Kubernetes installations may vary.
|
|||
|
||||
You can configure a static Pod with either a [file system hosted configuration file](/docs/tasks/configure-pod-container/static-pod/#configuration-files) or a [web hosted configuration file](/docs/tasks/configure-pod-container/static-pod/#pods-created-via-http).
|
||||
-->
|
||||
|
||||
## 创建静态 Pod {#static-pod-creation}
|
||||
|
||||
可以通过[文件系统上的配置文件](/zh/docs/tasks/configure-pod-container/static-pod/#configuration-files)
或者 [web 网络上的配置文件](/zh/docs/tasks/configure-pod-container/static-pod/#pods-created-via-http)
来配置静态 Pod。
|
||||
|
||||
<!--
|
||||
### Filesystem-hosted static Pod manifest {#configuration-files}
|
||||
|
@ -83,10 +81,12 @@ Note that the kubelet will ignore files starting with dots when scanning the spe
|
|||
|
||||
For example, this is how to start a simple web server as a static Pod:
|
||||
-->
|
||||
|
||||
### 文件系统上的静态 Pod 声明文件 {#configuration-files}
|
||||
|
||||
声明文件是标准的 Pod 定义文件,以 JSON 或者 YAML 格式存储在指定目录。路径设置在
[kubelet 配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/)
的 `staticPodPath: <目录>` 字段,kubelet 会定期扫描这个目录下的 YAML/JSON
文件来创建/删除静态 Pod。
|
||||
注意 kubelet 扫描目录的时候会忽略以点开头的文件。
|
||||
|
||||
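上面提到的 `staticPodPath` 设置,大致对应如下的 kubelet 配置文件片段(目录路径仅为示例):

```yaml
# kubelet 配置文件(KubeletConfiguration)片段,路径仅为示例
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubelet.d
```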
例如:下面是如何以静态 Pod 的方式启动一个简单 web 服务:
|
||||
|
@ -97,9 +97,9 @@ For example, this is how to start a simple web server as a static Pod:
|
|||
|
||||
1. 选择一个要运行静态 Pod 的节点。在这个例子中选择 `my-node1`。
|
||||
|
||||
```shell
|
||||
ssh my-node1
|
||||
```
|
||||
```shell
|
||||
ssh my-node1
|
||||
```
|
||||
<!--
|
||||
2. Choose a directory, say `/etc/kubelet.d` and place a web server Pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
|
||||
|
||||
|
@ -123,29 +123,30 @@ For example, this is how to start a simple web server as a static Pod:
|
|||
protocol: TCP
|
||||
EOF
|
||||
-->
|
||||
1. 选择一个目录,比如在 `/etc/kubelet.d` 目录来保存 web 服务 Pod 的定义文件,
   `/etc/kubelet.d/static-web.yaml`:

   ```shell
   # 在 kubelet 运行的节点上执行以下命令
   mkdir /etc/kubelet.d/
   cat <<EOF >/etc/kubelet.d/static-web.yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: static-web
     labels:
       role: myrole
   spec:
     containers:
       - name: web
         image: nginx
         ports:
           - name: web
             containerPort: 80
             protocol: TCP
   EOF
   ```
|
||||
<!--
|
||||
1. Configure your kubelet on the node to use this directory by running it with `--pod-manifest-path=/etc/kubelet.d/` argument. On Fedora edit `/etc/kubernetes/kubelet` to include this line:
|
||||
|
||||
|
@ -154,13 +155,13 @@ For example, this is how to start a simple web server as a static Pod:
|
|||
```
|
||||
or add the `staticPodPath: <the directory>` field in the [KubeletConfiguration file](/docs/tasks/administer-cluster/kubelet-config-file).
|
||||
-->
|
||||
|
||||
1. 配置这个节点上的 kubelet,为其添加 `--pod-manifest-path=/etc/kubelet.d/` 参数。
   在 Fedora 上,编辑 `/etc/kubernetes/kubelet` 文件,加入下面这行:

   ```
   KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
   ```

   或者在 [kubelet 配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/)
   中添加 `staticPodPath: <目录>` 字段。
|
||||
|
||||
<!--
|
||||
1. Restart the kubelet. On Fedora, you would run:
|
||||
|
@ -170,13 +171,13 @@ For example, this is how to start a simple web server as a static Pod:
|
|||
systemctl restart kubelet
|
||||
```
|
||||
-->
|
||||
1. 重启 kubelet。在 Fedora 上使用下面的命令:

   ```shell
   # 在 kubelet 运行的节点上执行以下命令
   systemctl restart kubelet
   ```
|
||||
<!--
|
||||
### Web-hosted static pod manifest {#pods-created-via-http}
|
||||
|
||||
|
@ -188,46 +189,46 @@ Pods, the kubelet applies them.
|
|||
|
||||
To use this approach:
|
||||
-->
|
||||
|
||||
### Web 网络上的静态 Pod 声明文件 {#pods-created-via-http}
|
||||
|
||||
kubelet 根据 `--manifest-url=<URL>` 参数的配置定期下载指定文件,并且转换成
JSON/YAML 格式的 Pod 定义文件。
与[文件系统上的清单文件](#configuration-files)使用方式类似,kubelet 会按周期重新获取清单文件。
如果静态 Pod 的清单文件有改变,kubelet 会应用这些改变。

按照下面的方式来:
|
||||
|
||||
<!--
|
||||
1. Create a YAML file and store it on a web server so that you can pass the URL of that file to the kubelet.
|
||||
-->
|
||||
|
||||
1. 创建一个 YAML 文件,并将其保存在 web 服务器上,这样就可以将该文件的 URL 传递给 kubelet。

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: static-web
     labels:
       role: myrole
   spec:
     containers:
       - name: web
         image: nginx
         ports:
           - name: web
             containerPort: 80
             protocol: TCP
   ```
|
||||
|
||||
<!--
|
||||
2. Configure the kubelet on your selected node to use this web manifest by running it with `--manifest-url=<manifest-url>`. On Fedora, edit `/etc/kubernetes/kubelet` to include this line:
|
||||
-->
|
||||
1. 通过在选择的节点上使用 `--manifest-url=<manifest-url>` 参数运行 kubelet。
   在 Fedora 上,添加下面这行到 `/etc/kubernetes/kubelet`:

   ```
   KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=<manifest-url>"
   ```
|
||||
<!--
|
||||
3. Restart the kubelet. On Fedora, you would run:
|
||||
|
||||
|
@ -236,13 +237,13 @@ Kubelet 根据 `--manifest-url=<URL>` 参数的配置定期的下载指定文件
|
|||
systemctl restart kubelet
|
||||
```
|
||||
-->
|
||||
1. 重启 kubelet。在 Fedora 上运行如下命令:

   ```shell
   # 在 kubelet 运行的节点上执行以下命令
   systemctl restart kubelet
   ```
|
||||
<!--
|
||||
## Observe static pod behavior {#behavior-of-static-pods}
|
||||
|
||||
|
@ -258,48 +259,49 @@ docker ps
|
|||
|
||||
The output might be something like:
|
||||
-->
|
||||
|
||||
## 观察静态 pod 的行为 {#behavior-of-static-pods}
|
||||
|
||||
当 kubelet 启动时,会自动启动所有定义的静态 Pod。
当定义了一个静态 Pod 并重新启动 kubelet 时,新的静态 Pod 就应该已经在运行了。
|
||||
|
||||
可以在节点上运行下面的命令来查看正在运行的容器(包括静态 Pod):
|
||||
|
||||
```shell
|
||||
# 在 kubelet 运行的节点上执行以下命令
|
||||
docker ps
|
||||
```
|
||||
|
||||
<!--
|
||||
The output might be something like:
|
||||
-->
|
||||
|
||||
输出可能会像这样:
|
||||
|
||||
```
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
|
||||
```
|
||||
|
||||
<!--
|
||||
You can see the mirror Pod on the API server:
|
||||
-->
|
||||
|
||||
可以在 API 服务器上看到镜像 Pod:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
static-web-my-node1 1/1 Running 0 2m
|
||||
```
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
Make sure the kubelet has permission to create the mirror Pod in the API server. If not, the creation request is rejected by the API server. See
|
||||
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/).
|
||||
{{< /note >}}
|
||||
-->
|
||||
|
||||
{{< note >}}
|
||||
要确保 kubelet 在 API 服务器上有创建镜像 Pod 的权限。如果没有,创建请求会被 API 服务器拒绝。
请参阅 [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -307,7 +309,8 @@ Make sure the kubelet has permission to create the mirror Pod in the API server.
|
|||
propagated into the mirror Pod. You can use those labels as normal via
|
||||
{{< glossary_tooltip term_id="selector" text="selectors" >}}, etc.
|
||||
-->
|
||||
静态 Pod 上的{{< glossary_tooltip term_id="label" text="标签" >}}会被传播到镜像 Pod。
你可以通过 {{< glossary_tooltip term_id="selector" text="选择算符" >}} 等方式正常使用这些标签。
|
||||
|
||||
<!--
If you try to use `kubectl` to delete the mirror Pod from the API server,
the kubelet doesn't remove the static Pod:
-->
如果你尝试使用 `kubectl` 从 API 服务器上删除镜像 Pod,kubelet 并不会移除静态 Pod:

```shell
kubectl delete pod static-web-my-node1
```

```
pod "static-web-my-node1" deleted
```

<!--
You can see that the Pod is still running:
-->
可以看到 Pod 还在运行:

```shell
kubectl get pods
```

```
NAME                  READY   STATUS    RESTARTS   AGE
static-web-my-node1   1/1     Running   0          12s
```
<!--
Back on your node where the kubelet is running, you can try to stop the Docker
container manually.
-->
回到 kubelet 运行的节点上,可以手工停止 Docker 容器。
可以看到过了一段时间后,kubelet 会发现容器停止了并且会自动重启 Pod:

```shell
# 在 kubelet 运行的节点上执行以下命令
# 把 ID 换为你的容器的 ID
docker stop f6d05272b57e
sleep 20
docker ps
```

```
CONTAINER ID   IMAGE          COMMAND                 CREATED         ...
5b920cbaf8b1   nginx:latest   "nginx -g 'daemon of    2 seconds ago   ...
```
<!--
## Dynamic addition and removal of static pods
-->
## 动态增加和删除静态 Pod

运行中的 kubelet 会定期扫描配置的目录(比如例子中的 `/etc/kubelet.d` 目录)中的变化,
并且根据文件中出现/消失的 Pod 来添加/删除 Pod。
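作为参考,`/etc/kubelet.d/static-web.yaml` 中的静态 Pod 定义可能类似下面这样
(这里的镜像、标签和端口只是示意,应与你实际创建静态 Pod 时使用的清单一致):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
```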
```shell
# 前提是你在用主机文件系统上的静态 Pod 配置文件
# 在 kubelet 运行的节点上执行以下命令
mv /etc/kubelet.d/static-web.yaml /tmp
sleep 20
docker ps
mv /tmp/static-web.yaml /etc/kubelet.d/
sleep 20
docker ps
```

```
CONTAINER ID   IMAGE          COMMAND                 CREATED          ...
e7a62e3427f1   nginx:latest   "nginx -g 'daemon of    27 seconds ago   ...
```
---

<!--
reviewers:
- cdrage
title: Translate a Docker Compose File to Kubernetes Resources
content_type: task
weight: 170
-->
<!-- overview -->

<!--
What's Kompose? It's a conversion tool for all things compose (namely Docker Compose) to container orchestrators (Kubernetes or OpenShift).
-->
Kompose 是什么?它是个转换工具,可将 compose(即 Docker Compose)所组装的所有内容
转换成容器编排器(Kubernetes 或 OpenShift)可识别的形式。
<!--
More information can be found on the Kompose website at [http://kompose.io](http://kompose.io).
-->
更多信息请参考 Kompose 官网 [http://kompose.io](http://kompose.io)。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->
<!--
## Install Kompose
-->
## 安装 Kompose

<!--
We have multiple ways to install Kompose. Our preferred method is downloading the binary from the latest GitHub release.
-->
我们有很多种方式安装 Kompose。首选方式是从最新的 GitHub 发布页面下载二进制文件。

<!--
## GitHub release
-->
## GitHub 发布版本

<!--
Kompose is released via GitHub on a three-week cycle, you can see all current releases on the [GitHub release page](https://github.com/kubernetes/kompose/releases).
-->
Kompose 通过 GitHub 发布版本,发布周期为三星期。
你可以在 [GitHub 发布页面](https://github.com/kubernetes/kompose/releases)
上看到所有当前版本。
```shell
# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-linux-amd64 -o kompose

chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
```
<!--
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
-->
或者,你可以下载 [tarball](https://github.com/kubernetes/kompose/releases)。

## Go

<!--
Installing using `go get` pulls from the master branch with the latest development changes.
-->
用 `go get` 命令从主分支拉取最新的开发变更来安装 Kompose。
```shell
go get -u github.com/kubernetes/kompose
```
<!--
Kompose is in [EPEL](https://fedoraproject.org/wiki/EPEL) CentOS repository.
If you don't have [EPEL](https://fedoraproject.org/wiki/EPEL) repository already installed and enabled you can do it by running `sudo yum install epel-release`
-->
Kompose 位于 [EPEL](https://fedoraproject.org/wiki/EPEL) CentOS 代码仓库。
如果你还没有安装并启用 [EPEL](https://fedoraproject.org/wiki/EPEL) 代码仓库,
请运行命令 `sudo yum install epel-release`。

<!--
If you have [EPEL](https://fedoraproject.org/wiki/EPEL) enabled in your system, you can install Kompose like any other package.
-->
如果你的系统中已经启用了 [EPEL](https://fedoraproject.org/wiki/EPEL),
你就可以像安装其他软件包一样安装 Kompose。

```shell
sudo yum -y install kompose
```
<!--
Kompose is in Fedora 24, 25 and 26 repositories. You can install it just like any other package.
-->
Kompose 位于 Fedora 24、25 和 26 的代码仓库。你可以像安装其他软件包一样安装 Kompose。

```shell
sudo dnf -y install kompose
```
<!--
On macOS you can install latest release via [Homebrew](https://brew.sh):
-->
在 macOS 上你可以通过 [Homebrew](https://brew.sh) 安装 Kompose 的最新版本:

```shell
brew install kompose
```
<!--
## Use Kompose
-->
## 使用 Kompose

<!--
In just a few steps, we'll take you from Docker Compose to Kubernetes. All
you need is an existing `docker-compose.yml` file.
-->
再需几步,我们就能把你从 Docker Compose 带到 Kubernetes。
你只需要一个现有的 `docker-compose.yml` 文件。
1. <!--Go to the directory containing your `docker-compose.yml` file. If you don't
   have one, test using this one.-->
   进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个文件进行测试。

   ```yaml
   version: "2"

   services:

     redis-master:
       image: k8s.gcr.io/redis:e2e
       ports:
         - "6379"

     redis-slave:
       image: gcr.io/google_samples/gb-redisslave:v3
       ports:
         - "6379"
       environment:
         - GET_HOSTS_FROM=dns

     frontend:
       image: gcr.io/google-samples/gb-frontend:v4
       ports:
         - "80:80"
       environment:
         - GET_HOSTS_FROM=dns
       labels:
         kompose.service.type: LoadBalancer
   ```
2. <!--Run the `kompose up` command to deploy to Kubernetes directly, or skip to
   the next step instead to generate a file to use with `kubectl`.-->
   运行 `kompose up` 命令直接部署到 Kubernetes,或者跳到下一步,生成 `kubectl` 使用的文件。

   ```shell
   kompose up
   ```

   ```
   We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
   If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

   INFO Successfully created Service: redis
   INFO Successfully created Service: web
   INFO Successfully created Deployment: redis
   INFO Successfully created Deployment: web

   Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
   ```
3. <!--To convert the `docker-compose.yml` file to files that you can use with
   `kubectl`, run `kompose convert` and then `kubectl create -f <output file>`.-->
   要将 `docker-compose.yml` 转换为 `kubectl` 可用的文件,请运行 `kompose convert` 命令进行转换,
   然后运行 `kubectl create -f <output file>` 进行创建。

   ```shell
   kompose convert
   ```

   ```
   INFO Kubernetes file "frontend-service.yaml" created
   INFO Kubernetes file "redis-master-service.yaml" created
   INFO Kubernetes file "redis-slave-service.yaml" created
   INFO Kubernetes file "frontend-deployment.yaml" created
   INFO Kubernetes file "redis-master-deployment.yaml" created
   INFO Kubernetes file "redis-slave-deployment.yaml" created
   ```

   ```shell
   kubectl create -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
   ```

   ```
   service/frontend created
   service/redis-master created
   service/redis-slave created
   deployment.apps/frontend created
   deployment.apps/redis-master created
   deployment.apps/redis-slave created
   ```

   <!--
   Your deployments are running in Kubernetes.
   -->
   你部署的应用在 Kubernetes 中运行起来了。
4. <!--Access your application.-->
   访问你的应用。

   <!--If you're already using `minikube` for your development process:-->
   如果你在开发过程中使用 `minikube`,请执行:

   ```shell
   minikube service frontend
   ```

   <!--Otherwise, let's look up what IP your service is using!-->
   否则,我们来查看一下你的服务使用的 IP!

   ```shell
   kubectl describe svc frontend
   ```

   ```
   Name:                   frontend
   Namespace:              default
   Labels:                 service=frontend
   Selector:               service=frontend
   Type:                   LoadBalancer
   IP:                     10.0.0.183
   LoadBalancer Ingress:   192.0.2.89
   Port:                   80      80/TCP
   NodePort:               80      31144/TCP
   Endpoints:              172.17.0.4:80
   Session Affinity:       None
   No events.
   ```

   <!--If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.-->
   如果你使用的是云提供商,你的 IP 将在 `LoadBalancer Ingress` 字段给出。

   ```shell
   curl http://192.0.2.89
   ```
<!-- discussion -->

<!--
## User Guide
-->
## 用户指南 {#user-guide}
<!--
- CLI
  - [Restart](#restart)
  - [Docker Compose Versions](#docker-compose-versions)
-->
- CLI
  - [`kompose convert`](#kompose-convert)
  - [`kompose up`](#kompose-up)
  - [`kompose down`](#kompose-down)
- 文档
  - [构建和推送 Docker 镜像](#构建和推送-docker-镜像)
  - [其他转换方式](#其他转换方式)
<!--
Kompose has support for two providers: OpenShift and Kubernetes.
You can choose a targeted provider using global option `--provider`. If no provider is specified, Kubernetes is set by default.
-->
Kompose 支持两种驱动:OpenShift 和 Kubernetes。
你可以通过全局选项 `--provider` 选择驱动方式。如果没有指定,会将 Kubernetes 作为默认驱动。

## `kompose convert`
Kompose 支持将 V1、V2 和 V3 版本的 Docker Compose 文件转换为 Kubernetes 和 OpenShift 资源对象。

### Kubernetes

```shell
kompose --file docker-voting.yml convert
```

```
WARN Unsupported key networks - ignoring
WARN Unsupported key build - ignoring
INFO Kubernetes file "worker-svc.yaml" created
INFO Kubernetes file "result-deployment.yaml" created
INFO Kubernetes file "vote-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
```

```shell
ls
```

```
db-deployment.yaml    docker-compose.yml   docker-gitlab.yml   redis-deployment.yaml   result-deployment.yaml   vote-deployment.yaml   worker-deployment.yaml
db-svc.yaml           docker-voting.yml    redis-svc.yaml      result-svc.yaml         vote-svc.yaml            worker-svc.yaml
```
<!--
You can also provide multiple docker-compose files at the same time:
-->
你也可以同时提供多个 docker-compose 文件进行转换:

```shell
kompose -f docker-compose.yml -f docker-guestbook.yml convert
```

```
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "mlbparks-service.yaml" created
INFO Kubernetes file "mongodb-service.yaml" created
INFO Kubernetes file "mongodb-deployment.yaml" created
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
```

```shell
ls
```

```
mlbparks-deployment.yaml     mongodb-service.yaml                        redis-slave-service.jsonmlbparks-service.yaml
frontend-deployment.yaml     mongodb-claim0-persistentvolumeclaim.yaml   redis-master-service.yaml
frontend-service.yaml        mongodb-deployment.yaml                     redis-slave-deployment.yaml
redis-master-deployment.yaml
```
<!--
When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be over ridden by subsequent file.
-->
当提供多个 docker-compose 文件时,配置将会合并。多个文件中相同的配置项会被后面的文件覆盖。
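举例来说(下面两个文件的内容是为了说明覆盖语义而假设的):若先提供的文件把 `web` 服务的镜像设为
`nginx:1.14`,而后提供的文件把同一服务的 `image` 改为 `nginx:1.16`,则转换时最终使用 `nginx:1.16`:

```yaml
# docker-compose.yml(先提供)
version: "2"
services:
  web:
    image: nginx:1.14
    ports:
      - "80:80"
```

```yaml
# docker-override.yml(后提供,其中的 image 覆盖前者)
version: "2"
services:
  web:
    image: nginx:1.16
```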
### OpenShift

```shell
kompose --provider openshift --file docker-voting.yml convert
```

```
WARN [worker] Service cannot be created because of missing port.
INFO OpenShift file "vote-service.yaml" created
INFO OpenShift file "db-service.yaml" created
INFO OpenShift file "result-imagestream.yaml" created
```
<!--
It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch for the build. You can specify a different source repo and branch using ``--build-repo`` and ``--build-branch`` options respectively.
-->
kompose 还支持为服务中的构建指令创建 buildconfig。
默认情况下,它使用当前 git 分支的 remote 仓库作为源仓库,使用当前分支作为构建的源分支。
你可以分别使用 ``--build-repo`` 和 ``--build-branch`` 选项指定不同的源仓库和分支。

```shell
kompose --provider openshift --file buildconfig/docker-compose.yml convert
```

```
WARN [foo] Service cannot be created because of missing port.
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
INFO OpenShift file "foo-deploymentconfig.yaml" created
INFO OpenShift file "foo-imagestream.yaml" created
INFO OpenShift file "foo-buildconfig.yaml" created
```

{{< note >}}
<!--
If you are manually pushing the Openshift artifacts using ``oc create -f``, you need to ensure that you push the imagestream artifact before the buildconfig artifact, to workaround this Openshift issue: https://github.com/openshift/origin/issues/4518 .
-->
如果使用 ``oc create -f`` 手动推送 Openshift 工件,则需要确保在推送 buildconfig 工件之前
先推送 imagestream 工件,以规避 Openshift 的这个问题:
https://github.com/openshift/origin/issues/4518 。
{{< /note >}}

## `kompose up`
<!--
Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`.
-->
Kompose 支持通过 `kompose up` 直接将你的"复合的(composed)"应用程序部署到 Kubernetes 或 OpenShift。

### Kubernetes

```shell
kompose --file ./examples/docker-guestbook.yml up
```

```
We are going to create Kubernetes deployments and services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Successfully created deployment: redis-slave
INFO Successfully created deployment: frontend

Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details.
```

```shell
kubectl get deployment,svc,pods
```

```
NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/frontend       1         1         1            1           4m
deployment.extensions/redis-master   1         1         1            1           4m
```
**注意**:

- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。
- 此操作仅生成 Deployment 和 Service 对象并将其部署到 Kubernetes。
  如果需要部署其他不同类型的资源,请使用 `kompose convert` 和 `kubectl create -f` 命令。
### OpenShift

```shell
kompose --file ./examples/docker-guestbook.yml --provider openshift up
```

```
We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.

INFO Successfully created deployment: redis-master
INFO Successfully created ImageStream: redis-master

Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details.
```

```shell
oc get dc,svc,is
```

```
NAME              REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/frontend       0          1         0         config,image(frontend:v4)
dc/redis-master   0          1         0         config,image(redis-master:e2e)
```
<!--
**Note**:
- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
-->
**注意**:

- 你必须有一个运行正常的 OpenShift 集群,该集群具有预先配置的 `oc` 上下文(`oc login`)。

## `kompose down`
<!--
Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
-->
你一旦将"复合(composed)"应用部署到 Kubernetes,`kompose down`
命令将能帮你通过删除 Deployment 和 Service 对象来删除应用。
如果需要删除其他资源,请使用 'kubectl' 命令。

```shell
kompose --file docker-guestbook.yml down
```

```
INFO Successfully deleted service: redis-master
INFO Successfully deleted deployment: redis-master
INFO Successfully deleted service: redis-slave
INFO Successfully deleted deployment: frontend
```
<!--
**Note**:

- You must have a running Kubernetes cluster with a pre-configured kubectl context.

## Build and Push Docker Images

Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
- Automatically be built with Docker using the `image` key specified within your file
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
-->
**注意**:

- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。

## 构建和推送 Docker 镜像

Kompose 支持构建和推送 Docker 镜像。如果 Docker Compose 文件中使用了 `build` 关键字,你的镜像将会:

- 使用文件中指定的 `image` 键自动构建 Docker 镜像
- 使用本地凭据(位于 `.docker/config`)推送到正确的 Docker 仓库

使用 [Docker Compose 文件示例](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
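该示例文件的内容大致如下(这里的服务名、`build` 目录和镜像名是依据下文 `kompose up`
输出推测的示意,具体内容以上面链接中的文件为准):

```yaml
version: "2"

services:
  foo:
    build: "./build"
    image: docker.io/foo/bar
```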

<!--
Using `kompose up` with a `build` key:
-->
使用带有 `build` 键的 `kompose up` 命令:

```shell
kompose up
```

```
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
INFO Building image 'docker.io/foo/bar' from directory 'build'
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully

Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
```
<!--
In order to disable the functionality, or choose to use BuildConfig generation (with OpenShift) `--build (local|build-config|none)` can be passed.
-->
要想禁用该功能,或者选择使用 BuildConfig 生成方式(适用于 OpenShift),
可以通过传递 `--build (local|build-config|none)` 参数来实现。

```shell
# Disable building/pushing Docker images
kompose up --build none

# Generate Build Config artifacts for OpenShift
kompose up --provider openshift --build build-config
```
<!--
## Alternative Conversions

The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts.
-->
## 其他转换方式

默认的 `kompose` 转换会生成 yaml 格式的 Kubernetes
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 和
[Service](/zh/docs/concepts/services-networking/service/) 对象。
你可以选择通过 `-j` 参数生成 json 格式的对象。
你也可以改为生成 [Replication Controllers](/zh/docs/concepts/workloads/controllers/replicationcontroller/) 对象、
[Daemon Sets](/zh/docs/concepts/workloads/controllers/daemonset/) 或
[Helm](https://github.com/helm/helm) charts。
```shell
kompose convert -j
```

```
INFO Kubernetes file "redis-svc.json" created
INFO Kubernetes file "web-svc.json" created
INFO Kubernetes file "redis-deployment.json" created
INFO Kubernetes file "web-deployment.json" created
```

<!--
The `*-deployment.json` files contain the Deployment objects.
-->
`*-deployment.json` 文件中包含 Deployment 对象。

```shell
kompose convert --replication-controller
```

```
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-replicationcontroller.yaml" created
INFO Kubernetes file "web-replicationcontroller.yaml" created
```
<!--
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3`
-->
`*-replicationcontroller.yaml` 文件包含 Replication Controller 对象。
如果你想指定副本数(默认为 1),可以使用 `--replicas` 参数:
`kompose convert --replication-controller --replicas 3`

```shell
kompose convert --daemon-set
```

```
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-daemonset.yaml" created
INFO Kubernetes file "web-daemonset.yaml" created
```
<!--
The `*-daemonset.yaml` files contain the Daemon Set objects
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
-->
`*-daemonset.yaml` 文件包含 Daemon Set 对象。

如果你想生成 [Helm](https://github.com/kubernetes/helm) 可用的 Chart,只需执行下面的命令:

```shell
kompose convert -c
```

```
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
chart created in "./docker-compose/"
```

```shell
tree docker-compose/
```

```
docker-compose
├── Chart.yaml
├── README.md
```
@ -679,35 +723,36 @@ docker-compose
|
|||
|
||||
<!--
|
||||
The chart structure is aimed at providing a skeleton for building your Helm charts.
|
||||
-->
|
||||
这个 Chart 结构旨在为构建 Helm Chart 提供框架。
|
||||
|
||||
<!--
|
||||
## Labels
|
||||
|
||||
`kompose` supports Kompose-specific labels within the `docker-compose.yml` file in order to explicitly define a service's behavior upon conversion.
|
||||
- `kompose.service.type` defines the type of service to be created.
|
||||
|
||||
For example:
|
||||
-->
|
||||
|
||||
这个 Chart 结构旨在为构建 Helm Chart 提供框架。
|
||||
|
||||
## 标签
|
||||
|
||||
`kompose` 支持 `docker-compose.yml` 文件中用于 Kompose 的标签,以便在转换时明确定义 Service 的行为。
|
||||
|
||||
- `kompose.service.type` 定义要创建的 Service 类型。
|
||||
- `kompose.service.type` 定义要创建的 Service 类型。例如:
|
||||
|
||||
|
||||
|
||||
```yaml
|
||||
version: "2"
|
||||
services:
|
||||
nginx:
|
||||
image: nginx
|
||||
dockerfile: foobar
|
||||
build: ./foobar
|
||||
cap_add:
|
||||
- ALL
|
||||
container_name: foobar
|
||||
labels:
|
||||
kompose.service.type: nodeport
|
||||
```
|
||||
```yaml
|
||||
version: "2"
|
||||
services:
|
||||
nginx:
|
||||
image: nginx
|
||||
dockerfile: foobar
|
||||
build: ./foobar
|
||||
cap_add:
|
||||
- ALL
|
||||
container_name: foobar
|
||||
labels:
|
||||
kompose.service.type: nodeport
|
||||
```
|
||||
|
||||
<!--
|
||||
- `kompose.service.expose` defines if the service needs to be made accessible from outside the cluster or not. If the value is set to "true", the provider sets the endpoint automatically, and for any other value, the value is set as the hostname. If multiple ports are defined in a service, the first one is chosen to be the exposed.
|
||||
|
@ -715,29 +760,31 @@ services:
|
|||
- For the OpenShift provider, a route is created.
|
||||
For example:
|
||||
-->
|
||||
- `kompose.service.expose` 定义是否允许从集群外部访问 Service。
|
||||
如果该值被设置为 "true",提供程序将自动设置端点,对于任何其他值,该值将被设置为主机名。
|
||||
如果在 Service 中定义了多个端口,则选择第一个端口作为公开端口。
|
||||
|
||||
- `kompose.service.expose` 定义是否允许从集群外部访问 Service。如果该值被设置为 "true",提供程序将自动设置端点,对于任何其他值,该值将被设置为主机名。如果在 Service 中定义了多个端口,则选择第一个端口作为公开端口。
|
||||
- 对于 Kubernetes 驱动程序,创建了一个 Ingress 资源,并且假定已经配置了相应的 Ingress 控制器。
|
||||
- 对于 OpenShift 驱动程序,将创建一个 route。
|
||||
|
||||
例如:
|
||||
例如:
|
||||
|
||||
```yaml
|
||||
version: "2"
|
||||
services:
|
||||
web:
|
||||
image: tuna/docker-counter23
|
||||
ports:
|
||||
- "5000:5000"
|
||||
links:
|
||||
- redis
|
||||
labels:
|
||||
kompose.service.expose: "counter.example.com"
|
||||
redis:
|
||||
image: redis:3.0
|
||||
ports:
|
||||
- "6379"
|
||||
```
|
||||
```yaml
|
||||
version: "2"
|
||||
services:
|
||||
web:
|
||||
image: tuna/docker-counter23
|
||||
ports:
|
||||
- "5000:5000"
|
||||
links:
|
||||
- redis
|
||||
labels:
|
||||
kompose.service.expose: "counter.example.com"
|
||||
redis:
|
||||
image: redis:3.0
|
||||
ports:
|
||||
- "6379"
|
||||
```
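按照上例中的 `kompose.service.expose: "counter.example.com"` 标签,Kubernetes 提供程序会创建一个以该值为主机名的 Ingress。下面是一个假想的结果片段,仅用于示意该标签的效果;实际由 kompose 生成的清单(API 版本、资源名称等字段)可能随版本而不同:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web        # 假想的名称,与 compose 文件中的服务名对应
spec:
  rules:
  - host: counter.example.com   # 来自 kompose.service.expose 标签的值
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 5000
```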
|
||||
|
||||
<!--
|
||||
The currently supported options are:
|
||||
|
@ -749,27 +796,27 @@ The currently supported options are:
|
|||
|
||||
当前支持的选项有:
|
||||
|
||||
| 键 | 值 |
|
||||
| 键 | 值 |
|
||||
|----------------------|-------------------------------------|
|
||||
| kompose.service.type | nodeport / clusterip / loadbalancer |
|
||||
| kompose.service.expose| true / hostname |
|
||||
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `kompose.service.type` label should be defined with `ports` only, otherwise `kompose` will fail.
|
||||
-->
|
||||
{{< note >}}
|
||||
`kompose.service.type` 标签只能与 `ports` 一起定义,否则 `kompose` 会失败。
|
||||
{{< /note >}}
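结合上面的提示,下面给出一个同时声明了 `ports` 和 `kompose.service.type` 标签的最小 docker-compose 片段(仅作示意,端口取值为假设):

```yaml
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"   # 必须声明 ports,kompose.service.type 标签才会生效
    labels:
      kompose.service.type: nodeport
```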
|
||||
|
||||
<!--
|
||||
## Restart
|
||||
|
||||
If you want to create normal pods without controllers you can use `restart` construct of docker-compose to define that. Follow table below to see what happens on the `restart` value.
|
||||
-->
|
||||
|
||||
## 重启
|
||||
|
||||
如果你想创建没有控制器的普通 Pod,可以使用 docker-compose 的 `restart` 结构来定义它。请参考下表了解 `restart` 的不同参数。
|
||||
如果你想创建没有控制器的普通 Pod,可以使用 docker-compose 的 `restart` 结构来定义它。
|
||||
请参考下表了解 `restart` 的不同参数。
|
||||
|
||||
<!--
|
||||
| `docker-compose` `restart` | object created | Pod `restartPolicy` |
|
||||
|
@ -788,18 +835,16 @@ If you want to create normal pods without controllers you can use `restart` cons
|
|||
| `no` | Pod | `Never` |
|
||||
|
||||
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The controller object could be `deployment` or `replicationcontroller`, etc.
|
||||
-->
|
||||
{{< note >}}
|
||||
控制器对象可以是 `deployment` 或 `replicationcontroller` 等。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
For e.g. `pival` service will become pod down here. This container calculated value of `pi`.
|
||||
-->
|
||||
|
||||
例如,下面的 `pival` Service 将会转换为 Pod。这个容器用于计算 `pi` 的值。
|
||||
|
||||
```yaml
|
||||
|
@ -814,32 +859,36 @@ services:
|
|||
|
||||
<!--
|
||||
### Warning about Deployment Config's
|
||||
|
||||
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to avoid multiple instances of a service from accessing a volume at the same time.
|
||||
-->
|
||||
|
||||
### 关于 Deployment Config 的提醒
|
||||
|
||||
如果 Docker Compose 文件中为服务声明了卷,Deployment (Kubernetes) 或 DeploymentConfig (OpenShift) 的策略会从 "RollingUpdate" (默认) 变为 "Recreate"。
|
||||
如果 Docker Compose 文件中为服务声明了卷,Deployment (Kubernetes) 或 DeploymentConfig (OpenShift)
|
||||
的策略会从 "RollingUpdate" (默认) 变为 "Recreate"。
|
||||
这样做的目的是为了避免服务的多个实例同时访问卷。
|
||||
|
||||
<!--
|
||||
If the Docker Compose file has service name with `_` in it (eg.`web_service`), then it will be replaced by `-` and the service name will be renamed accordingly (eg.`web-service`). Kompose does this because "Kubernetes" doesn't allow `_` in object name.
|
||||
Please note that changing service name might break some `docker-compose` files.
|
||||
-->
|
||||
|
||||
如果 Docker Compose 文件中的服务名包含 `_` (例如 `web_service`),那么将会被替换为 `-`,服务也相应的会重命名(例如 `web-service`)。
|
||||
如果 Docker Compose 文件中的服务名包含 `_` (例如 `web_service`),
|
||||
那么将会被替换为 `-`,服务也会相应地重命名(例如 `web-service`)。
|
||||
Kompose 这样做的原因是 "Kubernetes" 不允许对象名称中包含 `_`。请注意,更改服务名称可能会破坏一些 `docker-compose` 文件。
|
||||
|
||||
<!--
|
||||
## Docker Compose Versions
|
||||
|
||||
Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support on versions 2.1 and 3.2 due to their experimental nature.
|
||||
A full list on compatibility between all three versions is listed in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md) including a list of all incompatible Docker Compose keys.
|
||||
-->
|
||||
|
||||
## Docker Compose 版本
|
||||
|
||||
Kompose 支持的 Docker Compose 版本包括:1、2 和 3。有限支持 2.1 和 3.2 版本,因为它们还在实验阶段。
|
||||
|
||||
所有三个版本的兼容性列表请查看我们的 [转换文档](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md),文档中列出了所有不兼容的 Docker Compose 关键字。
|
||||
所有三个版本的兼容性列表请查看我们的
|
||||
[转换文档](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md),
|
||||
文档中列出了所有不兼容的 Docker Compose 关键字。
|
||||
|
||||
|
||||
|
|
|
@ -1,64 +1,65 @@
|
|||
---
|
||||
title: 配置多个调度器
|
||||
content_type: task
|
||||
weight: 20
|
||||
---
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- davidopp
|
||||
- madhusudancs
|
||||
title: Configure Multiple Schedulers
|
||||
content_type: task
|
||||
---
|
||||
weight: 20
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Kubernetes ships with a default scheduler that is described [here](/docs/admin/kube-scheduler/).
|
||||
-->
|
||||
Kubernetes 自带了一个默认调度器,其详细描述请查阅[这里](/docs/reference/command-line-tools-reference/kube-scheduler/)。
|
||||
<!--
|
||||
If the default scheduler does not suit your needs you can implement your own scheduler.
|
||||
Not just that, you can even run multiple schedulers simultaneously alongside
|
||||
the default scheduler and instruct Kubernetes what scheduler to use for each
|
||||
of your pods. Let's learn how to run multiple schedulers in Kubernetes with an
|
||||
example.
|
||||
-->
|
||||
如果默认调度器不适合您的需求,您可以实现自己的调度器。
|
||||
<!--
|
||||
Not just that, you can even run multiple schedulers simultaneously alongside the default scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's learn how to run multiple schedulers in Kubernetes with an example.
|
||||
-->
|
||||
不仅如此,您甚至可以伴随着默认调度器同时运行多个调度器,并告诉 Kubernetes 为每个 pod 使用什么调度器。
|
||||
Kubernetes 自带了一个默认调度器,其详细描述请查阅
|
||||
[这里](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)。
|
||||
如果默认调度器不适合你的需求,你可以实现自己的调度器。
|
||||
不仅如此,你甚至可以和默认调度器一起同时运行多个调度器,并告诉 Kubernetes 为每个
|
||||
Pod 使用哪个调度器。
|
||||
让我们通过一个例子讲述如何在 Kubernetes 中运行多个调度器。
|
||||
|
||||
<!--
|
||||
A detailed description of how to implement a scheduler is outside the scope of this document. Please refer to the kube-scheduler implementation in[pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/pkg/scheduler)in the Kubernetes source directory for a canonical example.
|
||||
A detailed description of how to implement a scheduler is outside the scope of
|
||||
this document. Please refer to the kube-scheduler implementation
|
||||
in[pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/{{< param
|
||||
"githubbranch" >}}/pkg/scheduler)in the Kubernetes source directory for a
|
||||
canonical example.
|
||||
-->
|
||||
关于实现调度器的具体细节描述超出了本文范围。
|
||||
请参考 kube-scheduler 的实现,规范示例代码位于 [pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/pkg/scheduler)。
|
||||
|
||||
|
||||
|
||||
请参考 kube-scheduler 的实现,规范示例代码位于
|
||||
[pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/pkg/scheduler)。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
## Package the scheduler
|
||||
|
||||
Package your scheduler binary into a container image. For the purposes of this example,
|
||||
let's just use the default scheduler (kube-scheduler) as our second scheduler as well.
|
||||
Clone the [Kubernetes source code from Github](https://github.com/kubernetes/kubernetes)
|
||||
and build the source.
|
||||
-->
|
||||
## 打包调度器
|
||||
|
||||
<!--
|
||||
Package your scheduler binary into a container image. For the purposes of this example,let's just use the default scheduler (kube-scheduler) as our second scheduler as well.
|
||||
-->
|
||||
将调度器二进制文件打包到容器镜像中。出于示例目的,我们就使用默认调度器(kube-scheduler)作为我们的第二个调度器。
|
||||
<!--
|
||||
Clone the [Kubernetes source code from Github](https://github.com/kubernetes/kubernetes)and build the source.
|
||||
-->
|
||||
从 Github 克隆 [Kubernetes 源代码](https://github.com/kubernetes/kubernetes),并编译构建源代码。
|
||||
将调度器可执行文件打包到容器镜像中。出于示例目的,我们就使用默认调度器
|
||||
(kube-scheduler)作为我们的第二个调度器。
|
||||
克隆 [Github 上 Kubernetes 源代码](https://github.com/kubernetes/kubernetes),
|
||||
并编译构建源代码。
|
||||
|
||||
```shell
|
||||
git clone https://github.com/kubernetes/kubernetes.git
|
||||
|
@ -77,13 +78,14 @@ ADD ./_output/dockerized/bin/linux/amd64/kube-scheduler /usr/local/bin/kube-sche
|
|||
```
|
||||
|
||||
<!--
|
||||
Save the file as `Dockerfile`, build the image and push it to a registry. This example pushes the image to[Google Container Registry (GCR)](https://cloud.google.com/container-registry/).
|
||||
Save the file as `Dockerfile`, build the image and push it to a registry. This example
|
||||
pushes the image to
|
||||
[Google Container Registry (GCR)](https://cloud.google.com/container-registry/).
|
||||
For more details, please read the GCR
|
||||
[documentation](https://cloud.google.com/container-registry/docs/).
|
||||
-->
|
||||
将文件保存为 `Dockerfile`,构建镜像并将其推送到镜像仓库。
|
||||
此示例将镜像推送到 [Google 容器镜像仓库(GCR)](https://cloud.google.com/container-registry/)。
|
||||
<!--
|
||||
For more details, please read the GCR[documentation](https://cloud.google.com/container-registry/docs/).
|
||||
-->
|
||||
有关详细信息,请阅读 GCR [文档](https://cloud.google.com/container-registry/docs/)。
|
||||
|
||||
```shell
|
||||
|
@ -93,29 +95,43 @@ gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0
|
|||
|
||||
<!--
|
||||
## Define a Kubernetes Deployment for the scheduler
|
||||
|
||||
Now that we have our scheduler in a container image, we can just create a pod
|
||||
config for it and run it in our Kubernetes cluster. But instead of creating a pod
|
||||
directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/)
|
||||
for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a
|
||||
[Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods,
|
||||
thereby making the scheduler resilient to failures. Here is the deployment
|
||||
config. Save it as `my-scheduler.yaml`:
|
||||
-->
|
||||
## 为调度器定义 Kubernetes Deployment
|
||||
|
||||
<!--
|
||||
Now that we have our scheduler in a container image, we can just create a pod config for it and run it in our Kubernetes cluster. But instead of creating a pod directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/)for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a[Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods,thereby making the scheduler resilient to failures. Here is the deployment config. Save it as `my-scheduler.yaml`:
|
||||
-->
|
||||
现在我们将调度器放在容器镜像中,我们可以为它创建一个 pod 配置,并在我们的 Kubernetes 集群中运行它。
|
||||
但是与其在集群中直接创建一个 pod,不如使用 [Deployment](/docs/concepts/workloads/controllers/deployment/)。
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/) 管理一个 [Replica Set](/docs/concepts/workloads/controllers/replicaset/),Replica Set 再管理 pod,从而使调度器能够适应故障。
|
||||
以下是 Deployment 配置,被保存为 `my-scheduler.yaml`:
|
||||
现在调度器已打包到容器镜像中,我们可以为它创建一个 Pod 配置,并在 Kubernetes 集群中
|
||||
运行它。但是与其在集群中直接创建一个 Pod,不如使用
|
||||
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/)。
|
||||
Deployment 管理一个 [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/),
|
||||
ReplicaSet 再管理 Pod,从而使调度器能够免受一些故障的影响。
|
||||
以下是 Deployment 配置,将其保存为 `my-scheduler.yaml`:
|
||||
|
||||
{{< codenew file="admin/sched/my-scheduler.yaml" >}}
|
||||
|
||||
<!--
|
||||
An important thing to note here is that the name of the scheduler specified as an argument to the scheduler command in the container spec should be unique. This is the name that is matched against the value of the optional `spec.schedulerName` on pods, to determine whether this scheduler is responsible for scheduling a particular pod.
|
||||
An important thing to note here is that the name of the scheduler specified as an
|
||||
argument to the scheduler command in the container spec should be unique.
|
||||
This is the name that is matched against the value of the optional `spec.schedulerName`
|
||||
on pods, to determine whether this scheduler is responsible for scheduling a particular pod.
|
||||
-->
|
||||
这里需要注意的是,在该部署文件中 Container 的 spec 配置的调度器启动命令参数(--scheduler-name)指定的调度器名称应该是惟一的。
|
||||
这个名称应该与 pods 上的可选参数 `spec.schedulerName` 的值相匹配,也就是说调度器名称的匹配关系决定了 pods 的调度任务由哪个调度器负责。
|
||||
这里需要注意的是,在容器规约中配置的调度器启动命令参数(--scheduler-name)所指定的
|
||||
调度器名称应该是唯一的。
|
||||
这个名称应该与 Pod 上的可选参数 `spec.schedulerName` 的值相匹配,也就是说调度器名称的匹配
|
||||
关系决定了 Pod 的调度任务由哪个调度器负责。
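作为示意,下面是一个最小的 Pod 规约草图,其中 `my-scheduler` 假定为上面 Deployment 里通过调度器启动参数所配置的调度器名称:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-my-scheduler   # 假想的 Pod 名称,仅作示例
spec:
  schedulerName: my-scheduler       # 必须与调度器启动参数中指定的名称一致
  containers:
  - name: app
    image: nginx
```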
|
||||
|
||||
<!--
|
||||
Note also that we created a dedicated service account `my-scheduler` and bind the cluster role`system:kube-scheduler` to it so that it can acquire the same privileges as `kube-scheduler`.
|
||||
Note also that we created a dedicated service account `my-scheduler` and bind the cluster role
|
||||
`system:kube-scheduler` to it so that it can acquire the same privileges as `kube-scheduler`.
|
||||
-->
|
||||
还要注意,我们创建了一个专用服务帐户 `my-scheduler` 并将集群角色 `system:kube-scheduler` 绑定到它,以便它可以获得与 `kube-scheduler` 相同的权限。
|
||||
还要注意,我们创建了一个专用服务账号 `my-scheduler` 并将集群角色 `system:kube-scheduler`
|
||||
绑定到它,以便它可以获得与 `kube-scheduler` 相同的权限。
|
||||
|
||||
<!--
|
||||
Please see the[kube-scheduler documentation](/docs/admin/kube-scheduler/) for detailed description of other command line arguments.
|
||||
|
@ -124,12 +140,11 @@ Please see the[kube-scheduler documentation](/docs/admin/kube-scheduler/) for de
|
|||
|
||||
<!--
|
||||
## Run the second scheduler in the cluster
|
||||
|
||||
In order to run your scheduler in a Kubernetes cluster, just create the deployment specified in the config above in a Kubernetes cluster:
|
||||
-->
|
||||
## 在集群中运行第二个调度器
|
||||
|
||||
<!--
|
||||
In order to run your scheduler in a Kubernetes cluster, just create the deployment specified in the config above in a Kubernetes cluster:
|
||||
-->
|
||||
为了在 Kubernetes 集群中运行我们的第二个调度器,只需在 Kubernetes 集群中创建上面配置中指定的 Deployment:
|
||||
|
||||
```shell
|
||||
|
@ -139,10 +154,15 @@ kubectl create -f my-scheduler.yaml
|
|||
<!--
|
||||
Verify that the scheduler pod is running:
|
||||
-->
|
||||
验证调度器 pod 正在运行:
|
||||
验证调度器 Pod 正在运行:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=kube-system
|
||||
kubectl get pods --namespace=kube-system
|
||||
```
|
||||
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
....
|
||||
my-scheduler-lnf4s-4744f 1/1 Running 0 2m
|
||||
|
@ -152,16 +172,21 @@ my-scheduler-lnf4s-4744f 1/1 Running 0 2m
|
|||
<!--
|
||||
You should see a "Running" my-scheduler pod, in addition to the default kube-scheduler pod in this list.
|
||||
-->
|
||||
此列表中,除了默认的 kube-scheduler pod 之外,您应该还能看到处于 “Running” 状态的 my-scheduler pod。
|
||||
此列表中,除了默认的 `kube-scheduler` Pod 之外,你应该还能看到处于 “Running” 状态的
|
||||
`my-scheduler` Pod。
|
||||
|
||||
|
||||
<!--
|
||||
### Enable leader election
|
||||
|
||||
To run multiple-scheduler with leader election enabled, you must do the following:
|
||||
-->
|
||||
要在启用了 leader 选举的情况下运行多调度器,您必须执行以下操作:
|
||||
|
||||
<!--
|
||||
First, update the following fields in your YAML file:
|
||||
-->
|
||||
### 启用领导者选举
|
||||
|
||||
要在启用了领导者选举的情况下运行多调度器,你必须执行以下操作:
|
||||
|
||||
首先,更新上述 Deployment YAML(my-scheduler.yaml)文件中的以下字段:
|
||||
|
||||
* `--leader-elect=true`
|
||||
|
@ -171,110 +196,98 @@ First, update the following fields in your YAML file:
|
|||
<!--
|
||||
If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. Add you scheduler name to the resourceNames of the rule applied for endpoints resources, as in the following example:
|
||||
-->
|
||||
如果在集群上启用了 RBAC,则必须更新 `system:kube-scheduler` 集群角色。将调度器名称添加到应用于端点资源的规则的 resourceNames,如以下示例所示:
|
||||
```
|
||||
$ kubectl edit clusterrole system:kube-scheduler
|
||||
- apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
annotations:
|
||||
rbac.authorization.kubernetes.io/autoupdate: "true"
|
||||
labels:
|
||||
kubernetes.io/bootstrapping: rbac-defaults
|
||||
name: system:kube-scheduler
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resourceNames:
|
||||
- kube-scheduler
|
||||
- my-scheduler
|
||||
resources:
|
||||
- endpoints
|
||||
verbs:
|
||||
- delete
|
||||
- get
|
||||
- patch
|
||||
- update
|
||||
如果在集群上启用了 RBAC,则必须更新 `system:kube-scheduler` 集群角色。
|
||||
将调度器名称添加到应用于 `endpoints` 资源的规则的 `resourceNames` 列表中,如以下示例所示:
|
||||
|
||||
```shell
|
||||
kubectl edit clusterrole system:kube-scheduler
|
||||
```
|
||||
|
||||
{{< codenew file="admin/sched/clusterrole.yaml" >}}
|
||||
|
||||
<!--
|
||||
## Specify schedulers for pods
|
||||
-->
|
||||
## 指定 pod 的调度器
|
||||
## 为 Pod 指定调度器
|
||||
|
||||
<!--
|
||||
Now that our second scheduler is running, let's create some pods, and direct them to be scheduled by either the default scheduler or the one we just deployed. In order to schedule a given pod using a specific scheduler, we specify the name of the scheduler in that pod spec. Let's look at three examples.
|
||||
-->
|
||||
现在我们的第二个调度器正在运行,让我们创建一些 pod,并指定它们由默认调度器或我们刚部署的调度器进行调度。
|
||||
为了使用特定的调度器调度给定的 pod,我们在那个 pod 的 spec 中指定调度器的名称。让我们看看三个例子。
|
||||
|
||||
现在我们的第二个调度器正在运行,让我们创建一些 Pod,并指定它们由默认调度器或我们刚部署的
|
||||
调度器进行调度。
|
||||
为了使用特定的调度器调度给定的 Pod,我们在那个 Pod 的规约中指定调度器的名称。让我们看看三个例子。
|
||||
|
||||
<!--
|
||||
- Pod spec without any scheduler name
|
||||
-->
|
||||
- Pod spec 没有任何调度器名称
|
||||
- Pod spec 没有任何调度器名称
|
||||
|
||||
{{< codenew file="admin/sched/pod1.yaml" >}}
|
||||
|
||||
<!--
|
||||
<!--
|
||||
When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.
|
||||
-->
|
||||
如果未提供调度器名称,则会使用 default-scheduler 自动调度 pod。
|
||||
-->
|
||||
如果未提供调度器名称,则会使用 default-scheduler 自动调度 Pod。
|
||||
|
||||
<!--
|
||||
<!--
|
||||
Save this file as `pod1.yaml` and submit it to the Kubernetes cluster.
|
||||
-->
|
||||
将此文件另存为 `pod1.yaml`,并将其提交给 Kubernetes 集群。
|
||||
-->
|
||||
将此文件另存为 `pod1.yaml`,并将其提交给 Kubernetes 集群。
|
||||
|
||||
```shell
|
||||
kubectl create -f pod1.yaml
|
||||
```
|
||||
```shell
|
||||
kubectl create -f pod1.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
- Pod spec with `default-scheduler`
|
||||
-->
|
||||
- Pod spec 设置为 `default-scheduler`
|
||||
- Pod spec 设置为 `default-scheduler`
|
||||
|
||||
{{< codenew file="admin/sched/pod2.yaml" >}}
|
||||
|
||||
<!--
|
||||
<!--
|
||||
A scheduler is specified by supplying the scheduler name as a value to `spec.schedulerName`. In this case, we supply the name of the default scheduler which is `default-scheduler`.
|
||||
-->
|
||||
通过将调度器名称作为 `spec.schedulerName` 参数的值来指定调度器。在这种情况下,我们提供默认调度器的名称,即 `default-scheduler`。
|
||||
-->
|
||||
通过将调度器名称作为 `spec.schedulerName` 参数的值来指定调度器。
|
||||
在这种情况下,我们提供默认调度器的名称,即 `default-scheduler`。
|
||||
|
||||
<!--
|
||||
<!--
|
||||
Save this file as `pod2.yaml` and submit it to the Kubernetes cluster.
|
||||
-->
|
||||
将此文件另存为 `pod2.yaml`,并将其提交给 Kubernetes 集群。
|
||||
-->
|
||||
将此文件另存为 `pod2.yaml`,并将其提交给 Kubernetes 集群。
|
||||
|
||||
```shell
|
||||
kubectl create -f pod2.yaml
|
||||
```
|
||||
```shell
|
||||
kubectl create -f pod2.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
- Pod spec with `my-scheduler`
|
||||
-->
|
||||
- Pod spec 设置为 `my-scheduler`
|
||||
- Pod spec 设置为 `my-scheduler`
|
||||
|
||||
{{< codenew file="admin/sched/pod3.yaml" >}}
|
||||
|
||||
<!--
|
||||
In this case, we specify that this pod should be scheduled using the scheduler that we deployed - `my-scheduler`. Note that the value of `spec.schedulerName` should match the name supplied to the scheduler command as an argument in the deployment config for the scheduler.
|
||||
-->
|
||||
在这种情况下,我们指定此 pod 使用我们部署的 `my-scheduler` 来调度。
|
||||
请注意,`spec.schedulerName` 参数的值应该与 Deployment 中配置的提供给 scheduler 命令的参数名称匹配。
|
||||
<!--
|
||||
In this case, we specify that this pod should be scheduled using the
|
||||
scheduler that we deployed - `my-scheduler`. Note that the value of
|
||||
`spec.schedulerName` should match the name supplied to the scheduler command
|
||||
as an argument in the deployment config for the scheduler.
|
||||
-->
|
||||
在这种情况下,我们指定此 pod 使用我们部署的 `my-scheduler` 来调度。
|
||||
请注意,`spec.schedulerName` 参数的值应该与 Deployment 中配置的提供给
|
||||
scheduler 命令的参数名称匹配。
|
||||
|
||||
<!--
|
||||
<!--
|
||||
Save this file as `pod3.yaml` and submit it to the Kubernetes cluster.
|
||||
-->
|
||||
将此文件另存为 `pod3.yaml`,并将其提交给 Kubernetes 集群。
|
||||
-->
|
||||
将此文件另存为 `pod3.yaml`,并将其提交给 Kubernetes 集群。
|
||||
|
||||
```shell
|
||||
kubectl create -f pod3.yaml
|
||||
```
|
||||
```shell
|
||||
kubectl create -f pod3.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
Verify that all three pods are running.
|
||||
Verify that all three pods are running.
|
||||
-->
|
||||
确认所有三个 Pod 都在运行。
|
||||
|
||||
|
@ -282,8 +295,6 @@ kubectl create -f pod3.yaml
|
|||
kubectl get pods
|
||||
```
|
||||
|
||||
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
<!--
|
||||
|
@ -294,18 +305,19 @@ kubectl get pods
|
|||
<!--
|
||||
In order to make it easier to work through these examples, we did not verify that the pods were actually scheduled using the desired schedulers. We can verify that by changing the order of pod and deployment config submissions above. If we submit all the pod configs to a Kubernetes cluster before submitting the scheduler deployment config,we see that the pod `annotation-second-scheduler` remains in "Pending" state forever while the other two pods get scheduled. Once we submit the scheduler deployment config and our new scheduler starts running, the `annotation-second-scheduler` pod gets scheduled as well.
|
||||
-->
|
||||
为了更容易地完成这些示例,我们没有验证 pod 实际上是使用所需的调度程序调度的。
|
||||
我们可以通过更改 pod 的顺序和上面的部署配置提交来验证这一点。
|
||||
如果我们在提交调度器部署配置之前将所有 pod 配置提交给 Kubernetes 集群,我们将看到注解了 `annotation-second-scheduler` 的 pod 始终处于 “Pending” 状态,而其他两个 pod 被调度。
|
||||
一旦我们提交调度器部署配置并且我们的新调度器开始运行,注解了 `annotation-second-scheduler` 的 pod 就能被调度。
|
||||
|
||||
为了更容易地完成这些示例,我们没有验证 Pod 实际上是使用所需的调度器调度的。
|
||||
我们可以通过更改上述 Pod 配置与调度器 Deployment 配置的提交顺序来验证这一点。
|
||||
如果我们在提交调度器部署配置之前将所有 Pod 配置提交给 Kubernetes 集群,
|
||||
我们将看到注解了 `annotation-second-scheduler` 的 Pod 始终处于 “Pending” 状态,
|
||||
而其他两个 Pod 被调度。
|
||||
一旦我们提交调度器部署配置并且我们的新调度器开始运行,注解了
|
||||
`annotation-second-scheduler` 的 pod 就能被调度。
|
||||
<!--
|
||||
Alternatively, one could just look at the "Scheduled" entries in the event logs to verify that the pods were scheduled by the desired schedulers.
|
||||
-->
|
||||
或者,可以查看事件日志中的 “Scheduled” 条目,以验证是否由所需的调度器调度了 pod。
|
||||
或者,可以查看事件日志中的 “Scheduled” 条目,以验证是否由所需的调度器调度了 Pod。
|
||||
|
||||
```shell
|
||||
kubectl get events
|
||||
```
|
||||
|
||||
|
|
@ -0,0 +1,37 @@
|
|||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
annotations:
|
||||
rbac.authorization.kubernetes.io/autoupdate: "true"
|
||||
labels:
|
||||
kubernetes.io/bootstrapping: rbac-defaults
|
||||
name: system:kube-scheduler
|
||||
rules:
|
||||
- apiGroups:
|
||||
- coordination.k8s.io
|
||||
resources:
|
||||
- leases
|
||||
verbs:
|
||||
- create
|
||||
- apiGroups:
|
||||
- coordination.k8s.io
|
||||
resourceNames:
|
||||
- kube-scheduler
|
||||
- my-scheduler
|
||||
resources:
|
||||
- leases
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
- apiGroups:
|
||||
- ""
|
||||
resourceNames:
|
||||
- kube-scheduler
|
||||
- my-scheduler
|
||||
resources:
|
||||
- endpoints
|
||||
verbs:
|
||||
- delete
|
||||
- get
|
||||
- patch
|
||||
- update
|