Fix typos

pull/24037/head
Jorge Vallecillo 2020-09-22 14:27:09 -06:00
commit 2ff48e6f42
9 changed files with 959 additions and 689 deletions


@@ -154,6 +154,7 @@ aliases:
- hanjiayao
- lichuqiang
- SataQiu
- tanjunchen
- tengqm
- xiangpengzhao
- xichengliudui


@@ -30,13 +30,23 @@ node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB
Then the cluster is logically viewed as below:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.
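For instance, on many managed clusters each node already carries labels similar to the following (a hypothetical excerpt; the exact keys and values depend on your provider):
```yaml
apiVersion: v1
kind: Node
metadata:
  name: node1
  labels:
    # Populated automatically by the kubelet or cloud provider on most clusters;
    # these can serve as topologyKey values instead of hand-applied labels.
    topology.kubernetes.io/zone: zoneA
    kubernetes.io/hostname: node1
```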
@@ -80,17 +90,25 @@ You can read more about this field by running `kubectl explain Pod.spec.topology
### Example: One TopologySpreadConstraint
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
| P | P | P | |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:
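A minimal sketch of such a manifest (assuming the incoming Pod is named `mypod` and that nodes carry the `zone` label shown above; the exact example file in the docs may differ):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                        # tolerate at most 1 Pod of difference between zones
    topologyKey: zone                 # spread over the node label "zone"
    whenUnsatisfiable: DoNotSchedule  # treat the constraint as a hard requirement
    labelSelector:
      matchLabels:
        foo: bar                      # count only Pods carrying this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```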
@@ -100,15 +118,46 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones,
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
```
+---------------+---------------+ +---------------+---------------+
| zoneA | zoneB | | zoneA | zoneB |
+-------+-------+-------+-------+ +-------+-------+-------+-------+
| node1 | node2 | node3 | node4 | OR | node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+ +-------+-------+-------+-------+
| P | P | P | P | | P | P | P P | |
+-------+-------+-------+-------+ +-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
OR
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n3
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can tweak the Pod spec to meet various kinds of requirements:
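For example, changing `whenUnsatisfiable` from `DoNotSchedule` to `ScheduleAnyway` relaxes the constraint into a soft preference: the scheduler still places the Pod somewhere, but favors nodes that reduce the skew. A sketch reusing the fields above:
```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: ScheduleAnyway   # prefer, but do not require, an even spread
  labelSelector:
    matchLabels:
      foo: bar
```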
@@ -118,17 +167,26 @@ You can tweak the Pod spec to meet various kinds of requirements:
### Example: Multiple TopologySpreadConstraints
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
| P | P | P | |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node:
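A sketch consistent with the "two-constraints.yaml" referenced below (assuming nodes are labeled with both `zone` and `node`, as in the cluster shown earlier):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone                 # constraint 1: spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  - maxSkew: 1
    topologyKey: node                 # constraint 2: spread across individual nodes
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```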
@@ -138,15 +196,24 @@ In this case, to match the first constraint, the incoming Pod can only be placed
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
```
+---------------+-------+
| zoneA | zoneB |
+-------+-------+-------+
| node1 | node2 | node3 |
+-------+-------+-------+
| P P | P | P P |
+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p4(Pod) --> n3(Node3)
p5(Pod) --> n3
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n1
p3(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
@@ -169,15 +236,37 @@ There are some implicit conventions worth noting here:
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
```
+---------------+---------------+-------+
| zoneA | zoneB | zoneC |
+-------+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 | node5 |
+-------+-------+-------+-------+-------+
| P | P | P | | |
+-------+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
{{<mermaid>}}
graph BT
subgraph "zoneC"
n5(Node5)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n5 k8s;
class zoneC cluster;
{{< /mermaid >}}
and you know that "zoneC" must be excluded. In this case, you can compose the YAML as below, so that "mypod" will be placed in "zoneB" instead of "zoneC". Similarly, `spec.nodeSelector` is also respected.
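A minimal sketch of that composition (assuming the same `zone` node label; the `NotIn` expression keeps "mypod" out of zoneC while the spread constraint chooses among the remaining zones):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn           # exclude zoneC from consideration
            values:
            - zoneC
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```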


@@ -43,7 +43,6 @@ Kubernetes sebagai <i>open source</i> memberikan kamu kebebasan untuk menggunaka
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019" button id="desktopKCButton">Hadiri KubeCon di Barcelona tanggal 20-23 Mei 2019</a>
<br>
<br>


@@ -0,0 +1,13 @@
---
title: Referensi
linkTitle: "Referensi"
main_menu: true
weight: 70
content_type: concept
---
<!-- overview -->
Bagian dari dokumentasi Kubernetes ini berisi referensi-referensi.
<!-- TODO: translate the rest -->


@@ -0,0 +1,4 @@
---
title: Mengakses API
weight: 20
---

File diff suppressed because it is too large.


@@ -379,7 +379,7 @@ There are a few reasons for using proxying for Services:
### 为什么不使用 DNS 轮询?
时不时会有人问,就是为什么 Kubernetes 依赖代理将入站流量转发到后端。 那其他方法呢?
时不时会有人问,就是为什么 Kubernetes 依赖代理将入站流量转发到后端。 那其他方法呢?
例如,是否可以配置具有多个 A 值(或 IPv6 的 AAAA)的 DNS 记录,并依靠轮询名称解析?
使用服务代理有以下几个原因:


@@ -1,341 +1,474 @@
---
title: 使用扩展进行并行处理
content_type: concept
min-kubernetes-server-version: v1.8
weight: 20
---
<!--
---
title: Parallel Processing using Expansions
content_type: concept
min-kubernetes-server-version: v1.8
weight: 20
---
-->
<!-- overview -->
<!--
In this example, we will run multiple Kubernetes Jobs created from
a common template. You may want to be familiar with the basic,
non-parallel, use of [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) first.
-->
在这个示例中,我们将运行从一个公共模板创建的多个 Kubernetes Job。您可能需要先熟悉 [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) 的基本的、非并行的用法。
<!-- body -->
<!--
## Basic Template Expansion
-->
## 基本模板扩展
<!--
First, download the following template of a job to a file called `job-tmpl.yaml`
-->
首先,将以下作业模板下载到名为 `job-tmpl.yaml` 的文件中。
{{< codenew file="application/job/job-tmpl.yaml" >}}
<!--
Unlike a *pod template*, our *job template* is not a Kubernetes API type. It is just
a yaml representation of a Job object that has some placeholders that need to be filled
in before it can be used. The `$ITEM` syntax is not meaningful to Kubernetes.
-->
与 *pod 模板* 不同,我们的 *job 模板* 不是 Kubernetes API 类型。它只是 Job 对象的 yaml 表示,
其中有一些占位符,在使用它之前需要先填充。`$ITEM` 语法对 Kubernetes 没有意义。
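作为参考,下面是该模板的一个草稿(根据本页后续的输出推测;实际内容以下载的 `job-tmpl.yaml` 文件为准):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  # $ITEM 是占位符,使用前需要替换为具体条目
  name: process-item-$ITEM
  labels:
    jobgroup: jobexample
spec:
  template:
    metadata:
      name: jobexample
      labels:
        jobgroup: jobexample
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"]
      restartPolicy: Never
```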
<!--
In this example, the only processing the container does is to `echo` a string and sleep for a bit.
In a real use case, the processing would be some substantial computation, such as rendering a frame
of a movie, or processing a range of rows in a database. The `$ITEM` parameter would specify for
example, the frame number or the row range.
-->
在这个例子中,容器所做的唯一处理是 `echo` 一个字符串并睡眠一段时间。
在真实的用例中,处理将是一些重要的计算,例如渲染电影的一帧,或者处理数据库中的若干行。这时,`$ITEM` 参数将指定帧号或行范围。
<!--
This Job and its Pod template have a label: `jobgroup=jobexample`. There is nothing special
to the system about this label. This label
makes it convenient to operate on all the jobs in this group at once.
We also put the same label on the pod template so that we can check on all Pods of these Jobs
with a single command.
After the job is created, the system will add more labels that distinguish one Job's pods
from another Job's pods.
Note that the label key `jobgroup` is not special to Kubernetes. You can pick your own label scheme.
-->
这个 Job 及其 Pod 模板有一个标签: `jobgroup=jobexample`。这个标签在系统中没有什么特别之处。
这个标签使得我们可以方便地同时操作组中的所有作业。
我们还将相同的标签放在 pod 模板上,这样我们就可以用一个命令检查这些 Job 的所有 pod。
创建作业之后,系统将添加更多的标签来区分一个 Job 的 pod 和另一个 Job 的 pod。
注意,标签键 `jobgroup` 对 Kubernetes 并无特殊含义。您可以选择自己的标签方案。
<!--
Next, expand the template into multiple files, one for each item to be processed.
-->
下一步,将模板展开到多个文件中,每个文件对应要处理的项。
```shell
# 下载 job-tmpl.yaml
curl -L -s -O https://k8s.io/examples/application/job/job-tmpl.yaml
# 创建临时目录,并且在目录中创建 job yaml 文件
mkdir ./jobs
for i in apple banana cherry
do
cat job-tmpl.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
done
```
<!--
Check if it worked:
-->
检查是否工作正常:
```shell
ls jobs/
```
<!--
The output is similar to this:
-->
输出类似以下内容:
```
job-apple.yaml
job-banana.yaml
job-cherry.yaml
```
<!--
Here, we used `sed` to replace the string `$ITEM` with the loop variable.
You could use any type of template language (jinja2, erb) or write a program
to generate the Job objects.
-->
在这里,我们使用 `sed` 将字符串 `$ITEM` 替换为循环变量。
您可以使用任何类型的模板语言(jinja2, erb) 或编写程序来生成 Job 对象。
<!--
Next, create all the jobs with one kubectl command:
-->
接下来,使用 kubectl 命令创建所有作业:
```shell
kubectl create -f ./jobs
```
<!--
The output is similar to this:
-->
输出类似以下内容:
```
job.batch/process-item-apple created
job.batch/process-item-banana created
job.batch/process-item-cherry created
```
<!--
Now, check on the jobs:
-->
现在,检查这些作业:
```shell
kubectl get jobs -l jobgroup=jobexample
```
<!--
The output is similar to this:
-->
输出类似以下内容:
```
NAME COMPLETIONS DURATION AGE
process-item-apple 1/1 14s 20s
process-item-banana 1/1 12s 20s
process-item-cherry 1/1 12s 20s
```
<!--
Here we use the `-l` option to select all jobs that are part of this
group of jobs. (There might be other unrelated jobs in the system that we
do not care to see.)
-->
在这里,我们使用 `-l` 选项选择属于这组作业的所有作业。(系统中可能还有其他不相关的工作,我们不想看到。)
<!--
We can check on the pods as well using the same label selector:
-->
使用同样的标签选择器,我们还可以检查 pods
```shell
kubectl get pods -l jobgroup=jobexample
```
<!--
The output is similar to this:
-->
输出类似以下内容:
```
NAME READY STATUS RESTARTS AGE
process-item-apple-kixwv 0/1 Completed 0 4m
process-item-banana-wrsf7 0/1 Completed 0 4m
process-item-cherry-dnfu9 0/1 Completed 0 4m
```
<!--
We can use this single command to check on the output of all jobs at once:
-->
我们可以使用以下操作命令一次性地检查所有作业的输出:
```shell
kubectl logs -f -l jobgroup=jobexample
```
<!--
The output is:
-->
输出内容为:
```
Processing item apple
Processing item banana
Processing item cherry
```
<!--
## Multiple Template Parameters
-->
## 多个模板参数
<!--
In the first example, each instance of the template had one parameter, and that parameter was also
used as a label. However label keys are limited in [what characters they can
contain](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
-->
在第一个示例中,模板的每个实例都有一个参数,该参数也用作标签。
但是标签的键名在[可包含的字符](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)方面有一定的约束。
<!--
This slightly more complex example uses the jinja2 template language to generate our objects.
We will use a one-line python script to convert the template to a file.
-->
这个稍微复杂一点的示例使用 jinja2 模板语言来生成我们的对象。
我们将使用一行 python 脚本将模板转换为文件。
<!--
First, copy and paste the following template of a Job object, into a file called `job.yaml.jinja2`:
-->
首先,粘贴 Job 对象的以下模板到一个名为 `job.yaml.jinja2` 的文件中:
```liquid
{%- set params = [{ "name": "apple", "url": "https://www.orangepippin.com/varieties/apples", },
{ "name": "banana", "url": "https://en.wikipedia.org/wiki/Banana", },
{ "name": "raspberry", "url": "https://www.raspberrypi.org/" }]
%}
{%- for p in params %}
{%- set name = p["name"] %}
{%- set url = p["url"] %}
apiVersion: batch/v1
kind: Job
metadata:
name: jobexample-{{ name }}
labels:
jobgroup: jobexample
spec:
template:
metadata:
name: jobexample
labels:
jobgroup: jobexample
spec:
containers:
- name: c
image: busybox
command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"]
restartPolicy: Never
---
{%- endfor %}
```
<!--
The above template defines parameters for each job object using a list of
python dicts (lines 1-4). Then a for loop emits one job yaml object
for each set of parameters (remaining lines).
We take advantage of the fact that multiple yaml documents can be concatenated
with the `---` separator (second to last line).
We can pipe the output directly to kubectl to
create the objects.
-->
上面的模板使用 python 字典列表(第 1-4 行)定义每个作业对象的参数。
然后使用 for 循环为每组参数(剩余行)生成一个作业 yaml 对象。
我们利用了多个 yaml 文档可以与 `---` 分隔符连接的事实(倒数第二行)。
我们可以将输出直接传递给 kubectl 来创建对象。
<!--
You will need the jinja2 package if you do not already have it: `pip install --user jinja2`.
Now, use this one-line python program to expand the template:
-->
如果您还没有 jinja2 包,则需要安装它:`pip install --user jinja2`。
现在,使用这个一行 python 程序来展开模板:
```shell
alias render_template='python -c "from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());"'
```
<!--
The output can be saved to a file, like this:
-->
输出可以保存到一个文件,像这样:
```shell
cat job.yaml.jinja2 | render_template > jobs.yaml
```
<!--
Or sent directly to kubectl, like this:
-->
或直接发送到 kubectl,如下所示:
```shell
cat job.yaml.jinja2 | render_template | kubectl apply -f -
```
<!--
## Alternatives
-->
## 替代方案
<!--
If you have a large number of job objects, you may find that:
-->
如果您有大量作业对象,您可能会发现:
<!--
- Even using labels, managing so many Job objects is cumbersome.
- You exceed resource quota when creating all the Jobs at once,
and do not want to wait to create them incrementally.
- Very large numbers of jobs created at once overload the
Kubernetes apiserver, controller, or scheduler.
-->
- 即使使用标签,管理这么多 Job 对象也很麻烦。
- 一次性创建所有作业时会超出资源配额,而您又不希望以增量方式逐批创建 Job 并等待。
- 同时创建大量作业会使 Kubernetes apiserver、控制器或者调度器压力过大。
<!--
In this case, you can consider one of the
other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns).
-->
在这种情况下,您可以考虑其他的[作业模式](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns)。
---
title: 使用展开的方式进行并行处理
content_type: concept
min-kubernetes-server-version: v1.8
weight: 20
---
<!--
title: Parallel Processing using Expansions
content_type: concept
min-kubernetes-server-version: v1.8
weight: 20
-->
<!-- overview -->
<!--
This task demonstrates running multiple {{< glossary_tooltip text="Jobs" term_id="job" >}}
based on a common template. You can use this approach to process batches of work in
parallel.
For this example there are only three items: _apple_, _banana_, and _cherry_.
The sample Jobs process each item simply by printing a string then pausing.
See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how
this pattern fits more realistic use cases.
-->
本任务展示基于一个公共的模板运行多个{{< glossary_tooltip text="Jobs" term_id="job" >}}。
你可以用这种方法来并行执行批处理任务。
在本任务示例中只有三个工作条目:_apple_、_banana_ 和 _cherry_。
示例任务处理每个条目时仅仅是打印一个字符串之后结束。
参考[在真实负载中使用 Job](#using-jobs-in-real-workloads)了解更适用于真实使用场景的模式。
## {{% heading "prerequisites" %}}
<!--
You should be familiar with the basic,
non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/).
-->
你应先熟悉基本的、非并行的 [Job](/zh/docs/concepts/workloads/controllers/job/)
的用法。
{{< include "task-tutorial-prereqs.md" >}}
<!--
For basic templating you need the command-line utility `sed`.
To follow the advanced templating example, you need a working installation of
[Python](https://www.python.org/), and the Jinja2 template
library for Python.
Once you have Python set up, you can install Jinja2 by running:
-->
任务中的基本模板示例要求安装命令行工具 `sed`
要使用较高级的模板示例,你需要安装 [Python](https://www.python.org/)
并且要安装 Jinja2 模板库。
一旦 Python 已经安装好,你可以运行下面的命令安装 Jinja2
```shell
pip install --user jinja2
```
<!-- steps -->
<!--
## Create Jobs based on a template
-->
## 基于模板创建 Job {#create-jobs-based-on-a-template}
<!--
First, download the following template of a job to a file called `job-tmpl.yaml`
-->
首先,将以下作业模板下载到名为 `job-tmpl.yaml` 的文件中。
{{< codenew file="application/job/job-tmpl.yaml" >}}
```shell
# 使用 curl 下载 job-tmpl.yaml
curl -L -s -O https://k8s.io/examples/application/job/job-tmpl.yaml
```
<!--
The file you downloaded is not yet a valid Kubernetes
{{< glossary_tooltip text="manifest" term_id="manifest" >}}.
Instead that template is a YAML representation of a Job object with some placeholders
that need to be filled in before it can be used. The `$ITEM` syntax is not meaningful to Kubernetes.
-->
你所下载的文件不是一个合法的 Kubernetes {{< glossary_tooltip text="清单" term_id="manifest" >}}。
这里的模板只是 Job 对象的 yaml 表示,其中包含一些占位符,在使用它之前需要被填充。
`$ITEM` 语法对 Kubernetes 没有意义。
<!--
### Create manifests from the template
The following shell snippet uses `sed` to replace the string `$ITEM` with the loop
variable, writing into a temporary directory named `jobs`. Run this now:
-->
### 基于模板创建清单
下面的 Shell 代码片段使用 `sed` 将字符串 `$ITEM` 替换为循环变量,并将结果
写入到一个名为 `jobs` 的临时目录。现在运行它:
```shell
# 展开模板文件到多个文件中,每个文件对应一个要处理的条目
mkdir ./jobs
for i in apple banana cherry
do
cat job-tmpl.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
done
```
<!--
Check if it worked:
-->
检查上述脚本的输出:
```shell
ls jobs/
```
<!--
The output is similar to this:
-->
输出类似于:
```
job-apple.yaml
job-banana.yaml
job-cherry.yaml
```
<!--
You could use any type of template language (for example: Jinja2; ERB), or
write a program to generate the Job manifests.
-->
你可以使用任何一种模板语言例如Jinja2、ERB或者编写一个程序来
生成 Job 清单。
<!--
### Create Jobs from the manifests
Next, create all the Jobs with one kubectl command:
-->
### 基于清单创建 Job
接下来用一个 kubectl 命令创建所有的 Job
```shell
kubectl create -f ./jobs
```
<!--
The output is similar to this:
-->
输出类似于:
```
job.batch/process-item-apple created
job.batch/process-item-banana created
job.batch/process-item-cherry created
```
<!--
Now, check on the jobs:
-->
现在检查这些 Job:
```shell
kubectl get jobs -l jobgroup=jobexample
```
<!--
The output is similar to this:
-->
输出类似于:
```
NAME COMPLETIONS DURATION AGE
process-item-apple 1/1 14s 22s
process-item-banana 1/1 12s 21s
process-item-cherry 1/1 12s 20s
```
<!--
Using the `-l` option to kubectl selects only the Jobs that are part
of this group of jobs (there might be other unrelated jobs in the system).
You can check on the Pods as well using the same
{{< glossary_tooltip text="label selector" term_id="selector" >}}:
-->
使用 kubectl 的 `-l` 选项可以仅选择属于当前 Job 组的对象
(系统中可能存在其他不相关的 Job)。
你可以使用相同的 {{< glossary_tooltip text="标签选择算符" term_id="selector" >}}
来过滤 Pods:
```shell
kubectl get pods -l jobgroup=jobexample
```
<!--
The output is similar to:
-->
输出类似于:
```
NAME READY STATUS RESTARTS AGE
process-item-apple-kixwv 0/1 Completed 0 4m
process-item-banana-wrsf7 0/1 Completed 0 4m
process-item-cherry-dnfu9 0/1 Completed 0 4m
```
<!--
We can use this single command to check on the output of all jobs at once:
-->
我们可以用下面的命令查看所有 Job 的输出:
```shell
kubectl logs -f -l jobgroup=jobexample
```
<!--
The output should be:
-->
输出类似于:
```
Processing item apple
Processing item banana
Processing item cherry
```
<!--
### Clean up {#cleanup-1}
-->
### 清理 {#cleanup-1}
```shell
# 删除所创建的 Job
# 集群会自动清理 Job 对应的 Pod
kubectl delete job -l jobgroup=jobexample
```
<!--
## Use advanced template parameters
In the [first example](#create-jobs-based-on-a-template), each instance of the template had one
parameter, and that parameter was also used in the Job's name. However,
[names](/docs/concepts/overview/working-with-objects/names/#names) are restricted
to contain only certain characters.
-->
## 使用高级模板参数
在[第一个例子](#create-jobs-based-on-a-template)中,模板的每个实例都有一个参数,
而该参数也用在 Job 名称中。不过,对象
[名称](/zh/docs/concepts/overview/working-with-objects/names/#names)
被限制只能使用某些字符。
<!--
This slightly more complex example uses the
[Jinja template language](https://palletsprojects.com/p/jinja/) to generate manifests
and then objects from those manifests, with multiple parameters for each Job.
For this part of the task, you are going to use a one-line Python script to
convert the template to a set of manifests.
First, copy and paste the following template of a Job object, into a file called `job.yaml.jinja2`:
-->
这里的略微复杂的例子使用 [Jinja 模板语言](https://palletsprojects.com/p/jinja/)
来生成清单,并基于清单来生成对象,每个 Job 都有多个参数。
在本任务中,你将会使用一个一行的 Python 脚本,将模板转换为一组清单文件。
首先,复制下面的 Job 对象模板到一个名为 `job.yaml.jinja2` 的文件。
```liquid
{%- set params = [{ "name": "apple", "url": "http://dbpedia.org/resource/Apple", },
{ "name": "banana", "url": "http://dbpedia.org/resource/Banana", },
{ "name": "cherry", "url": "http://dbpedia.org/resource/Cherry" }]
%}
{%- for p in params %}
{%- set name = p["name"] %}
{%- set url = p["url"] %}
---
apiVersion: batch/v1
kind: Job
metadata:
name: jobexample-{{ name }}
labels:
jobgroup: jobexample
spec:
template:
metadata:
name: jobexample
labels:
jobgroup: jobexample
spec:
containers:
- name: c
image: busybox
command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"]
restartPolicy: Never
{%- endfor %}
```
<!--
The above template defines two parameters for each Job object using a list of
python dicts (lines 1-4). A `for` loop emits one Job manifest for each
set of parameters (remaining lines).
This example relies on a feature of YAML. One YAML file can contain multiple
documents (Kubernetes manifests, in this case), separated by `---` on a line
by itself.
You can pipe the output directly to `kubectl` to create the Jobs.
Next, use this one-line Python program to expand the template:
-->
上面的模板使用 python 字典列表(第 1-4 行)为每个 Job 对象定义两个参数。
`for` 循环为每组参数(剩余行)生成一个 Job 清单。
这里利用了 YAML 的一个特性:一个 YAML 文件可以包含多个文档(这里是 Kubernetes 清单),
文档之间以独立成行的 `---` 分隔。
你可以将输出直接传递给 `kubectl` 来创建 Job。
接下来,用下面的单行 Python 程序展开模板:
```shell
alias render_template='python -c "from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());"'
```
<!--
Use `render_template` to convert the parameters and template into a single
YAML file containing Kubernetes manifests:
-->
使用 `render_template` 将参数和模板转换成一个 YAML 文件,其中包含 Kubernetes
资源清单:
```shell
# 此命令需要之前定义的别名
cat job.yaml.jinja2 | render_template > jobs.yaml
```
<!--
You can view `jobs.yaml` to verify that the `render_template` script worked
correctly.
Once you are happy that `render_template` is working how you intend,
you can pipe its output into `kubectl`:
-->
你可以查看 `jobs.yaml` 以验证 `render_template` 脚本是否正常工作。
当你对输出结果比较满意时,可以用管道将其发送给 kubectl,如下所示:
```shell
cat job.yaml.jinja2 | render_template | kubectl apply -f -
```
<!--
Kubernetes accepts and runs the Jobs you created.
-->
Kubernetes 接收清单文件并执行你所创建的 Job。
<!-- discussion -->
<!--
## Using Jobs in real workloads
In a real use case, each Job performs some substantial computation, such as rendering a frame
of a movie, or processing a range of rows in a database. If you were rendering a movie
you would set `$ITEM` to the frame number. If you were processing rows from a database
table, you would set `$ITEM` to represent the range of database rows to process.
In the task, you ran a command to collect the output from Pods by fetching
their logs. In a real use case, each Pod for a Job writes its output to
durable storage before completing. You can use a PersistentVolume for each Job,
or an external storage service. For example, if you are rendering frames for a movie,
use HTTP to `PUT` the rendered frame data to a URL, using a different URL for each
frame.
-->
## 在真实负载中使用 Job
在真实的负载中,每个 Job 都会执行一些重要的计算,例如渲染电影的一帧,
或者处理数据库中的若干行。这时,`$ITEM` 参数将指定帧号或行范围。
在此任务中,你运行一个命令通过取回 Pod 的日志来收集其输出。
在真实应用场景中Job 的每个 Pod 都会在结束之前将其输出写入到某持久性存储中。
你可以为每个 Job 指定 PersistentVolume 卷,或者使用其他外部存储服务。
例如,如果你在渲染视频帧,你可能会使用 HTTP 协议将渲染完的帧数据
用 `PUT` 请求发送到某 URL,每个帧使用不同的 URL。
<!--
## Labels on Jobs and Pods
After you create a Job, Kubernetes automatically adds additional
{{< glossary_tooltip text="labels" term_id="label" >}} that
distinguish one Job's pods from another Job's pods.
In this example, each Job and its Pod template have a label:
`jobgroup=jobexample`.
Kubernetes itself pays no attention to labels named `jobgroup`. Setting a label
for all the Jobs you create from a template makes it convenient to operate on all
those Jobs at once.
In the [first example](#create-jobs-based-on-a-template) you used a template to
create several Jobs. The template ensures that each Pod also gets the same label, so
you can check on all Pods for these templated Jobs with a single command.
-->
## Job 和 Pod 上的标签
你创建了 Job 之后,Kubernetes 自动为 Job 的 Pod 添加
{{< glossary_tooltip text="标签" term_id="label" >}},以便能够将一个 Job
的 Pod 与另一个 Job 的 Pod 区分开来。
在本例中,每个 Job 及其 Pod 模板有一个标签: `jobgroup=jobexample`
Kubernetes 自身对标签名 `jobgroup` 没有什么要求。
为基于同一模板创建的所有 Job 设置同一标签,可以方便地一次性操作这一组 Job。
在[第一个例子](#create-jobs-based-on-a-template)中,你使用模板来创建了若干 Job。
模板确保每个 Pod 都能够获得相同的标签,这样你可以用一条命令检查这些模板化
Job 所生成的全部 Pod。
<!--
The label key `jobgroup` is not special or reserved.
You can pick your own labelling scheme.
There are [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels)
that you can use if you wish.
-->
{{< note >}}
标签键 `jobgroup` 没什么特殊的,也不是保留字。 你可以选择你自己的标签方案。
如果愿意,有一些[建议的标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)
可供使用。
{{< /note >}}
<!--
## Alternatives
If you plan to create a large number of Job objects, you may find that:
-->
## 替代方案
如果你有计划创建大量 Job 对象,你可能会发现:
<!--
- Even using labels, managing so many Job objects is cumbersome.
- If you create many Jobs in a batch, you might place high load
on the Kubernetes control plane. Alternatively, the Kubernetes API
server could rate limit you, temporarily rejecting your requests with a 429 status.
- You are limited by a {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}}
on Jobs: the API server permanently rejects some of your requests
when you create a great deal of work in one batch.
-->
- 即使使用标签,管理这么多 Job 对象也很麻烦。
- 如果你一次性创建很多 Job,很可能会给 Kubernetes 控制面带来很大压力;
  或者,Kubernetes API 服务器可能对请求施加速率限制,以 429 状态
  临时拒绝你的请求。
- 你可能会受到 Job 相关的{{< glossary_tooltip text="资源配额" term_id="resource-quota" >}}
限制:如果你在一个批量请求中创建了太多任务,API 服务器会永久性地拒绝你的某些请求。
<!--
There are other [job patterns](/docs/concepts/workloads/controllers/job/#job-patterns)
that you can use to process large amounts of work without creating very many Job
objects.
You could also consider writing your own [controller](/docs/concepts/architecture/controller/)
to manage Job objects automatically.
-->
还有一些其他[作业模式](/zh/docs/concepts/workloads/controllers/job/#job-patterns)
可供选择,这些模式都能用来处理大量任务而又不会创建过多的 Job 对象。
你也可以考虑编写自己的[控制器](/zh/docs/concepts/architecture/controller/)
来自动管理 Job 对象。


@@ -7,7 +7,7 @@
<ul id="error-sections">
{{ $sections := slice "docs" "blog" "training" "partners" "community" "case-studies" }}
{{ range $sections }}
{{ with site.GetPage "section" . }}<li><a href="{{ .RelPermalink }}" data-proofer-ignore>{{ .Title }}</a></li>{{ end }}
{{ with site.GetPage "section" . }}<li><a href="{{ .RelPermalink }}">{{ .Title }}</a></li>{{ end }}
{{ end }}
</ul>
</section>