Merge pull request #27729 from CaoDonghui123/fix4

[zh]Resync Tutorials (1)
pull/27773/head^2
Kubernetes Prow Robot 2021-04-29 19:35:58 -07:00 committed by GitHub
commit 43fcebde32
6 changed files with 281 additions and 88 deletions

View File

@ -51,10 +51,14 @@ Kubernetes 文档的这一部分包含教程。每个教程展示了如何完成
<!--
## Configuration
* [Example: Configuring a Java Microservice](/docs/tutorials/configuration/configure-java-microservice/)
* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
-->
## Configuration
* [Example: Configuring a Java Microservice](/zh/docs/tutorials/configuration/configure-java-microservice/)
* [Configuring Redis Using a ConfigMap](/zh/docs/tutorials/configuration/configure-redis-using-configmap/)
<!--

View File

@ -1,12 +1,14 @@
---
title: AppArmor
title: Restrict a Container's Access to Resources with AppArmor
content_type: tutorial
weight: 10
---
<!-- ---
reviewers:
- stclair
title: AppArmor
title: Restrict a Container's Access to Resources with AppArmor
content_type: tutorial
weight: 10
--- -->
<!-- overview -->
@ -17,11 +19,15 @@ content_type: tutorial
<!-- AppArmor is a Linux kernel security module that supplements the standard Linux user and group based
permissions to confine programs to a limited set of resources. AppArmor can be configured for any
application to reduce its potential attack surface and provide greater in-depth defense. It is
configured through profiles tuned to whitelist the access needed by a specific program or container,
configured through profiles tuned to allow the access needed by a specific program or container,
such as Linux capabilities, network access, file permissions, etc. Each profile can be run in either
*enforcing* mode, which blocks access to disallowed resources, or *complain* mode, which only reports
violations. -->
AppArmor is a Linux kernel security module that supplements the standard Linux user and group based permissions to confine programs to a limited set of resources. AppArmor can be configured for any application to reduce its potential attack surface and provide greater in-depth defense. It is configured through profiles tuned to whitelist the access needed by a specific program or container, such as Linux capabilities, network access, file permissions, etc. Each profile can be run in either *enforcing* mode (which blocks access to disallowed resources) or *complain* mode (which only reports violations).
AppArmor is a Linux kernel security module that supplements the standard Linux user and group based permissions to confine programs to a limited set of resources.
AppArmor can be configured for any application to reduce its potential attack surface and provide greater in-depth defense.
It is configured through profiles tuned to allow the access needed by a specific program or container, such as Linux capabilities, network access, file permissions, etc.
Each profile can be run in either *enforcing* mode (which blocks access to disallowed resources) or *complain* mode
(which only reports violations).
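A loaded profile is referenced from a Pod through the `container.apparmor.security.beta.kubernetes.io/<container_name>` annotation. As a minimal sketch, assuming the `k8s-apparmor-example-deny-write` profile used later in this tutorial has already been loaded on the node that will run the Pod:

```shell
# Minimal sketch: confine the "hello" container with a profile that is
# assumed to be loaded on the node already.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
EOF
```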
@ -244,9 +250,8 @@ k8s-apparmor-example-deny-write (enforce)
<!-- *This example assumes you have already set up a cluster with AppArmor support.* -->
*This example assumes you have already set up a cluster with AppArmor support.*
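One way to confirm this on a node (an illustrative check, run on the node itself) is to verify that the AppArmor kernel module is enabled:

```shell
# On the node: prints "Y" when the AppArmor kernel module is enabled.
cat /sys/module/apparmor/parameters/enabled
```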
<!-- First, we need to load the profile we want to use onto our nodes. The profile we'll use simply
denies all file writes: -->
First, we need to load the profile we want to use onto our nodes. The profile we'll use simply denies all file writes:
<!-- First, we need to load the profile we want to use onto our nodes. This profile denies all file writes: -->
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
```shell
#include <tunables/global>
@ -259,9 +264,12 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
```
<!-- Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
nodes. For this example we'll just use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles). -->
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our nodes. For this example we'll just use SSH to install the profiles, but other approaches are discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
nodes. For this example we'll use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
-->
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our nodes.
For this example we'll use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
```shell
NODES=(
@ -403,9 +411,9 @@ Events:
23s 23s 1 {kubelet e2e-test-stclair-node-pool-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
```
<!-- Note the pod status is Failed, with a helpful error message: `Pod Cannot enforce AppArmor: profile
<!-- Note the pod status is Pending, with a helpful error message: `Pod Cannot enforce AppArmor: profile
"k8s-apparmor-example-allow-write" is not loaded`. An event was also recorded with the same message. -->
Note the pod status is Failed, with a helpful error message: `Pod Cannot enforce AppArmor: profile
Note the pod status is Pending, with a helpful error message: `Pod Cannot enforce AppArmor: profile
"k8s-apparmor-example-allow-write" is not loaded`. An event was also recorded with the same message.
<!-- ## Administration -->

View File

@ -52,14 +52,14 @@ Kubernetes 允许你将加载到节点上的 seccomp 配置文件自动应用于
<!--
In order to complete all steps in this tutorial, you must install
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and
[kubectl](/docs/tasks/tools/install-kubectl/). This tutorial will show examples
[kubectl](/docs/tasks/tools/). This tutorial will show examples
with both alpha (pre-v1.19) and generally available seccomp functionality, so
make sure that your cluster is [configured
correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)
for the version you are using.
-->
In order to complete all steps in this tutorial, you must install [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)
and [kubectl](/zh/docs/tasks/tools/install-kubectl/). This tutorial will show examples with both alpha (pre-v1.19)
and [kubectl](/zh/docs/tasks/tools/). This tutorial will show examples with both alpha (pre-v1.19)
and generally available seccomp functionality, so make sure that your cluster is [configured correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version) for the version you are using.
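For example, kind can pin the cluster to a particular Kubernetes version by selecting a node image; the tag below is only an illustration, pick one that matches the version you want to test:

```shell
# Create a kind cluster from a specific node image to control the Kubernetes version.
kind create cluster --image=kindest/node:v1.19.1
```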
<!-- steps -->
@ -91,8 +91,8 @@ into the cluster.
For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single
node cluster with the seccomp profiles loaded. Kind runs Kubernetes in Docker,
so each node of the cluster is actually just a container. This allows for files
to be mounted in the filesystem of each container just as one might load files
so each node of the cluster is a container. This allows for files
to be mounted in the filesystem of each container similar to loading files
onto a node.
Download the example above, and save it to a file named `kind.yaml`. Then create
@ -101,8 +101,8 @@ the cluster with the configuration.
## Create a local Kubernetes cluster with kind
For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single node cluster with the seccomp profiles loaded.
Kind runs Kubernetes in Docker, so each node of the cluster is actually just a container. This allows for files to be mounted in the filesystem of each container
just as one might load files onto a node.
Kind runs Kubernetes in Docker, so each node of the cluster is a container. This allows for files to be mounted in the filesystem of each container,
similar to loading files onto a node.
{{< codenew file="pods/security/seccomp/kind.yaml" >}}
<br>
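With the example above saved as `kind.yaml`, the cluster is then created from that configuration, for example:

```shell
# Create the kind cluster using the configuration that mounts the seccomp profiles.
kind create cluster --config=kind.yaml
```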

View File

@ -112,7 +112,7 @@ This tutorial provides a container image that uses NGINX to echo back all the re
<!--
The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service.
If you are running in an environment as root, see [Open Dashboard with URL](/docs/tutorials/hello-minikube#open-dashboard-with-url).
If you are running in an environment as root, see [Open Dashboard with URL](#open-dashboard-with-url).
To stop the proxy, run `Ctrl+C` to exit the process. The dashboard remains running.
-->
@ -120,7 +120,7 @@ To stop the proxy, run `Ctrl+C` to exit the process. The dashboard remains runni
The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service.
If you are running in an environment as root,
see [Open Dashboard with URL](/zh/docs/tutorials/hello-minikube#open-dashboard-with-url).
see [Open Dashboard with URL](#open-dashboard-with-url).
To stop the proxy, run `Ctrl+C` to exit the process. The dashboard remains running.
{{< /note >}}
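If no browser can be opened in your environment, the proxy URL can be printed instead; an illustrative invocation:

```shell
# Print the dashboard URL instead of opening it in a browser.
minikube dashboard --url
```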
@ -273,9 +273,9 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/).
If you exposed a different port with `kubectl expose`, clients would not be able to connect to that other port.
<!--
2. View the Service you just created:
2. View the Service you created:
-->
2. View the Service you just created:
2. View the Service you created:
```shell
kubectl get services
@ -391,9 +391,9 @@ Minikube 有一组内置的 {{< glossary_tooltip text="插件" term_id="addons"
```
<!--
3. View the Pod and Service you just created:
3. View the Pod and Service you created:
-->
3. View the Pod and Service you just created:
3. View the Pod and Service you created:
```shell
kubectl get pod,svc -n kube-system

View File

@ -120,6 +120,8 @@ Headless Service and StatefulSet defined in `web.yaml`.
```shell
kubectl apply -f web.yaml
```
```
service/nginx created
statefulset.apps/web created
```
@ -134,10 +136,19 @@ The command above creates two Pods, each running an
```shell
kubectl get service nginx
```
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP None <none> 80/TCP 12s
```
<!--
...then get the `web` StatefulSet, to verify that both were created successfully:
-->
...then get the `web` StatefulSet, to verify that both were created successfully:
```shell
kubectl get statefulset web
```
```
NAME DESIRED CURRENT AGE
web 2 1 20s
```
@ -159,6 +170,8 @@ look like the example below.
```shell
kubectl get pods -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 0/1 Pending 0 0s
web-0 0/1 Pending 0 0s
@ -200,10 +213,11 @@ StatefulSet 中的 Pod 拥有一个唯一的顺序索引和稳定的网络身份
```shell
kubectl get pods -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
```
<!--
@ -228,7 +242,9 @@ Each Pod has a stable hostname based on its ordinal index. Use
Each Pod has a stable hostname based on its ordinal index. Use [`kubectl exec`](/zh/docs/reference/generated/kubectl/kubectl-commands/#exec) to execute `hostname` in each Pod.
```shell
for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
for i in 0 1; do kubectl exec "web-$i" -- sh -c 'hostname'; done
```
```
web-0
web-1
```
@ -244,7 +260,20 @@ addresses.
```shell
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
```
<!--
which starts a new shell. In that new shell, run:
-->
which starts a new shell. In that new shell, run:
```shell
# Run this in the dns-test container shell
nslookup web-0.nginx
```
<!--
The output is similar to:
-->
The output is similar to:
```
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@ -284,6 +313,8 @@ the Pods in the StatefulSet.
```shell
kubectl delete pod -l app=nginx
```
```
pod "web-0" deleted
pod "web-1" deleted
```
@ -297,6 +328,8 @@ Running and Ready.
```shell
kubectl get pod -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 0/1 ContainerCreating 0 0s
NAME READY STATUS RESTARTS AGE
@ -316,11 +349,32 @@ DNS entries.
```shell
for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
```
```
web-0
web-1
```
<!--
then, run:
-->
then, run:
```shell
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh
```
<!--
which starts a new shell.
In that new shell, run:
-->
which starts a new shell. In that new shell, run:
```shell
# Run this in the dns-test container shell
nslookup web-0.nginx
```
<!--
The output is similar to:
-->
The output is similar to:
```
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@ -377,6 +431,12 @@ Get the PersistentVolumeClaims for `web-0` and `web-1`.
```shell
kubectl get pvc -l app=nginx
```
<!--
The output is similar to:
-->
The output is similar to:
```
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 48s
www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s
@ -405,30 +465,35 @@ NGINX web 服务器默认会加载位于 `/usr/share/nginx/html/index.html` 的
Write the Pods' hostnames to their `index.html` files and verify that the NGINX webservers serve the hostnames.
```shell
for i in 0 1; do kubectl exec web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done
for i in 0 1; do kubectl exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'; done
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```
```
web-0
web-1
```
{{< note >}}
<!--
If you instead see 403 Forbidden responses for the above curl command,
If you instead see **403 Forbidden** responses for the above curl command,
you will need to fix the permissions of the directory mounted by the `volumeMounts`
(due to a [bug when using hostPath volumes](https://github.com/kubernetes/kubernetes/issues/2630)) with:
(due to a [bug when using hostPath volumes](https://github.com/kubernetes/kubernetes/issues/2630)),
by running:
-->
Note that if you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the `volumeMounts` (due to a [bug when using hostPath volumes](https://github.com/kubernetes/kubernetes/issues/2630)) with:
Note that if you instead see **403 Forbidden** responses for the above curl command, you will need to fix the permissions of the directory mounted by the `volumeMounts`
(due to a [bug when using hostPath volumes](https://github.com/kubernetes/kubernetes/issues/2630)),
by running:
`for i in 0 1; do kubectl exec web-$i -- chmod 755 /usr/share/nginx/html; done`
```shell
for i in 0 1; do kubectl exec web-$i -- chmod 755 /usr/share/nginx/html; done
```
<!--
before retrying the curl command above.
before retrying the `curl` command above.
-->
before retrying the curl command above.
before retrying the `curl` command above.
{{< /note >}}
<!--
@ -449,6 +514,8 @@ In a second terminal, delete all of the StatefulSet's Pods.
```shell
kubectl delete pod -l app=nginx
```
```
pod "web-0" deleted
pod "web-1" deleted
```
@ -461,6 +528,8 @@ for all of the Pods to transition to Running and Ready.
```shell
kubectl get pod -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 0/1 ContainerCreating 0 0s
NAME READY STATUS RESTARTS AGE
@ -478,7 +547,9 @@ Verify the web servers continue to serve their hostnames.
Verify the web servers continue to serve their hostnames.
```
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```
```
web-0
web-1
```
@ -526,6 +597,8 @@ to 5.-->
```shell
kubectl scale sts web --replicas=5
```
```
statefulset.apps/web scaled
```
<!--
@ -537,6 +610,8 @@ for the three additional Pods to transition to Running and Ready.
```shell
kubectl get pods -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 2h
web-1 1/1 Running 0 2h
@ -587,8 +662,9 @@ three replicas.
```shell
kubectl patch sts web -p '{"spec":{"replicas":3}}'
```
```
statefulset.apps/web patched
```
<!--
@ -597,8 +673,10 @@ Wait for `web-4` and `web-3` to transition to Terminating.
Wait for `web-4` and `web-3` to transition to Terminating.
```
```shell
kubectl get pods -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3h
web-1 1/1 Running 0 3h
@ -633,6 +711,8 @@ Get the StatefulSet's PersistentVolumeClaims.
```shell
kubectl get pvc -l app=nginx
```
```
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 13h
www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 13h
@ -684,6 +764,8 @@ Patch `web` StatefulSet 来执行 `RollingUpdate` 更新策略。
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
```
```
statefulset.apps/web patched
```
<!--
@ -695,8 +777,9 @@ image again.
```shell
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
```
```
statefulset.apps/web patched
```
<!--
@ -707,6 +790,12 @@ In another terminal, watch the Pods in the StatefulSet.
```shell
kubectl get po -l app=nginx -w
```
<!--
The output is similar to:
-->
The output is similar to:
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 7m
web-1 1/1 Running 0 7m
@ -761,7 +850,9 @@ StatefulSet 里的 Pod 采用和序号相反的顺序更新。在更新下一个
Get the Pods to view their container images.
```shell
for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
for p in 0 1 2; do kubectl get pod "web-$p" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
```
```
k8s.gcr.io/nginx-slim:0.8
k8s.gcr.io/nginx-slim:0.8
k8s.gcr.io/nginx-slim:0.8
@ -798,6 +889,8 @@ Patch `web` StatefulSet 来对 `updateStrategy` 字段添加一个分区。
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
```
```
statefulset.apps/web patched
```
@ -809,6 +902,8 @@ Patch the StatefulSet again to change the container's image.
```shell
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.7"}]'
```
```
statefulset.apps/web patched
```
@ -819,7 +914,9 @@ Delete a Pod in the StatefulSet.
Delete a Pod in the StatefulSet.
```shell
kubectl delete po web-2
kubectl delete pod web-2
```
```
pod "web-2" deleted
```
@ -830,7 +927,9 @@ Wait for the Pod to be Running and Ready.
Wait for the Pod to be Running and Ready.
```shell
kubectl get po -lapp=nginx -w
kubectl get pod -l app=nginx -w
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 4m
web-1 1/1 Running 0 4m
@ -845,10 +944,10 @@ Get the Pod's container.
Get the Pod's container.
```shell
kubectl get po web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
kubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
```
```
k8s.gcr.io/nginx-slim:0.8
```
<!--
@ -876,6 +975,8 @@ Patch the StatefulSet to decrement the partition.
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
```
```
statefulset.apps/web patched
```
@ -886,7 +987,9 @@ Wait for `web-2` to be Running and Ready.
Wait for `web-2` to be Running and Ready.
```shell
kubectl get po -lapp=nginx -w
kubectl get pod -l app=nginx -w
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 4m
web-1 1/1 Running 0 4m
@ -901,7 +1004,9 @@ Get the Pod's container.
Get the Pod's container.
```shell
kubectl get po web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
kubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
```
```
k8s.gcr.io/nginx-slim:0.7
```
@ -920,7 +1025,9 @@ Delete the `web-1` Pod.
Delete the `web-1` Pod.
```shell
kubectl delete po web-1
kubectl delete pod web-1
```
```
pod "web-1" deleted
```
@ -931,7 +1038,13 @@ Wait for the `web-1` Pod to be Running and Ready.
Wait for the `web-1` Pod to be Running and Ready.
```shell
kubectl get po -lapp=nginx -w
kubectl get pod -l app=nginx -w
```
<!--
The output is similar to:
-->
The output is similar to:
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 6m
web-1 0/1 Terminating 0 6m
@ -952,7 +1065,9 @@ Get the `web-1` Pods container.
Get the `web-1` Pod's container.
```shell
kubectl get po web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
kubectl get pod web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
```
```
k8s.gcr.io/nginx-slim:0.8
```
@ -986,6 +1101,8 @@ The partition is currently set to `2`. Set the partition to `0`.
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
```
```
statefulset.apps/web patched
```
@ -996,7 +1113,13 @@ Wait for all of the Pods in the StatefulSet to become Running and Ready.
Wait for all of the Pods in the StatefulSet to become Running and Ready.
```shell
kubectl get po -lapp=nginx -w
kubectl get pod -l app=nginx -w
```
<!--
The output is similar to:
-->
The output is similar to:
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m
web-1 0/1 ContainerCreating 0 11s
@ -1020,11 +1143,12 @@ Get the Pod's containers.
Get the Pods' containers.
```shell
for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
for p in 0 1 2; do kubectl get pod "web-$p" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
```
```
k8s.gcr.io/nginx-slim:0.7
k8s.gcr.io/nginx-slim:0.7
k8s.gcr.io/nginx-slim:0.7
```
<!--
@ -1083,6 +1207,8 @@ not delete any of its Pods.
```shell
kubectl delete statefulset web --cascade=false
```
```
statefulset.apps "web" deleted
```
@ -1094,6 +1220,8 @@ Get the Pods to examine their status.
```shell
kubectl get pods -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 6m
web-1 1/1 Running 0 7m
@ -1110,6 +1238,8 @@ Delete `web-0`.
```shell
kubectl delete pod web-0
```
```
pod "web-0" deleted
```
@ -1121,6 +1251,8 @@ Get the StatefulSet's Pods.
```shell
kubectl get pods -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-1 1/1 Running 0 10m
web-2 1/1 Running 0 7m
@ -1150,9 +1282,10 @@ an error indicating that the Service already exists.
```shell
kubectl apply -f web.yaml
```
```
statefulset.apps/web created
service/nginx unchanged
```
<!--
Ignore the error. It only indicates that an attempt was made to create the nginx
@ -1168,6 +1301,8 @@ Examine the output of the `kubectl get` command running in the first terminal.
```shell
kubectl get pods -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-1 1/1 Running 0 16m
web-2 1/1 Running 0 2m
@ -1185,22 +1320,28 @@ web-2 0/1 Terminating 0 3m
<!--
When the `web` StatefulSet was recreated, it first relaunched `web-0`.
Since `web-1` was already Running and Ready, when `web-0` transitioned to
Running and Ready, it simply adopted this Pod. Since you recreated the StatefulSet
with `replicas` equal to 2, once `web-0` had been recreated, and once
`web-1` had been determined to already be Running and Ready, `web-2` was
terminated.
Running and Ready, it adopted this Pod. Since you recreated the StatefulSet
with `replicas` equal to 2, once `web-0` had been recreated, and once
`web-1` had been determined to already be Running and Ready, `web-2` was
terminated.
Let's take another look at the contents of the `index.html` file served by the
Pods' webservers.
Pods' webservers:
-->
When the `web` StatefulSet was recreated, `web-0` was relaunched first. Since `web-1` was already Running and Ready, when `web-0` transitioned to Running and Ready, the StatefulSet simply adopted this Pod. Since you recreated the StatefulSet with `replicas` equal to 2, once `web-0` had been recreated, and once `web-1` had been determined to already be Running and Ready, `web-2` was terminated.
When the `web` StatefulSet was recreated, `web-0` was relaunched first.
Since `web-1` was already Running and Ready, when `web-0` transitioned to Running and Ready,
the StatefulSet adopted this Pod. Since you recreated the StatefulSet with `replicas` equal to 2,
once `web-0` had been recreated, and once `web-1` had been determined to already be Running and Ready, `web-2` was terminated.
Let's take another look at the contents of the `index.html` file served by the Pods' webservers.
Let's take another look at the contents of the `index.html` file served by the Pods' webservers:
```shell
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```
```
web-0
web-1
```
@ -1229,11 +1370,17 @@ In one terminal window, watch the Pods in the StatefulSet.
kubectl get pods -w -l app=nginx
```
<!--
In another terminal, delete the StatefulSet again. This time, omit the
-->
In another terminal, delete the StatefulSet again. This time, omit the `--cascade=false` parameter.
```shell
kubectl delete statefulset web
```
```
statefulset.apps "web" deleted
```
@ -1246,6 +1393,9 @@ and wait for all of the Pods to transition to Terminating.
```shell
kubectl get pods -w -l app=nginx
```
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 11m
web-1 1/1 Running 0 27m
@ -1279,6 +1429,9 @@ must delete the `nginx` Service manually.
```shell
kubectl delete service nginx
```
```
service "nginx" deleted
```
@ -1290,9 +1443,11 @@ Recreate the StatefulSet and Headless Service one more time.
```shell
kubectl apply -f web.yaml
```
```
service/nginx created
statefulset.apps/web created
```
<!--
@ -1303,7 +1458,10 @@ the contents of their `index.html` files.
When all of the StatefulSet's Pods transition to Running and Ready, get the contents of their `index.html` files.
```shell
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```
```
web-0
web-1
```
@ -1324,13 +1482,17 @@ Finally delete the `web` StatefulSet and the `nginx` service.
```shell
kubectl delete service nginx
```
```
service "nginx" deleted
```
... and delete the `web` StatefulSet:
```shell
kubectl delete statefulset web
```
```
statefulset "web" deleted
```
@ -1354,14 +1516,16 @@ above.
`Parallel` pod management tells the StatefulSet controller to launch or
terminate all Pods in parallel, and not to wait for Pods to become Running
and Ready or completely terminated prior to launching or terminating another
Pod.
Pod. This option only affects the behavior for scaling operations. Updates are not affected.
-->
## Pod Management Policy
For some distributed systems, the StatefulSet ordering guarantees are unnecessary and/or undesirable. These systems require only uniqueness and identity. To address this, in Kubernetes 1.7 we introduced `.spec.podManagementPolicy` to the StatefulSet API Object.
For some distributed systems, the StatefulSet ordering guarantees are unnecessary and/or undesirable.
These systems require only uniqueness and identity. To address this, in Kubernetes 1.7
we introduced `.spec.podManagementPolicy` to the StatefulSet API object.
This option only affects the behavior for scaling operations. Updates are not affected.
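The policy is set on the StatefulSet spec itself; as an illustration (assuming a StatefulSet named `web` exists), the current value can be read back with:

```shell
# Print the pod management policy of the web StatefulSet (defaults to OrderedReady).
kubectl get statefulset web -o jsonpath='{.spec.podManagementPolicy}'
```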
### OrderedReady Pod Management
@ -1372,7 +1536,8 @@ Pod.
### Parallel Pod Management
`Parallel` pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
`Parallel` pod management tells the StatefulSet controller to launch or terminate all Pods in parallel,
and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
{{< codenew file="application/web/web-parallel.yaml" >}}
@ -1398,16 +1563,17 @@ kubectl get po -lapp=nginx -w
```
<!--
In another terminal, create the StatefulSet and Service in the manifest.
In another terminal, create the StatefulSet and Service in the manifest:
-->
In another terminal, create the StatefulSet and Service in the manifest.
In another terminal, create the StatefulSet and Service in the manifest:
```shell
kubectl apply -f web-parallel.yaml
```
```
service/nginx created
statefulset.apps/web created
```
<!--
@ -1417,7 +1583,9 @@ Examine the output of the `kubectl get` command that you executed in the first t
Examine the output of the `kubectl get` command that you executed in the first terminal.
```shell
kubectl get po -lapp=nginx -w
kubectl get pod -l app=nginx -w
```
```
NAME READY STATUS RESTARTS AGE
web-0 0/1 Pending 0 0s
web-0 0/1 Pending 0 0s
@ -1442,6 +1610,8 @@ StatefulSet 控制器同时启动了 `web-0` 和 `web-1`。
```shell
kubectl scale statefulset/web --replicas=4
```
```
statefulset.apps/web scaled
```
@ -1451,7 +1621,7 @@ Examine the output of the terminal where the `kubectl get` command is running.
`kubectl get` 命令运行的终端里检查它的输出。
```shell
```
web-3 0/1 Pending 0 0s
web-3 0/1 Pending 0 0s
web-3 0/1 Pending 0 7s
@ -1461,27 +1631,36 @@ web-3 1/1 Running 0 26s
```
<!--
The StatefulSet controller launched two new Pods, and it did not wait for
The StatefulSet launched two new Pods, and it did not wait for
the first to become Running and Ready prior to launching the second.
Keep this terminal open, and in another terminal delete the `web` StatefulSet.
## {{% heading "cleanup" %}}
You should have two terminals open, ready for you to run `kubectl` commands as
part of cleanup.
-->
The StatefulSet controller launched two new Pods, and it did not wait for the first to become Running and Ready prior to launching the second.
The StatefulSet launched two new Pods, and it did not wait for the first to become Running and Ready prior to launching the second.
Keep this terminal open, and in another terminal delete the `web` StatefulSet.
## {{% heading "cleanup" %}}
You should have two terminals open, ready for you to run `kubectl` commands as part of cleanup.
```shell
kubectl delete sts web
# sts is an abbreviation for statefulset
```
<!--
Again, examine the output of the `kubectl get` command running in the other terminal.
You can watch `kubectl get` to see those Pods being deleted.
-->
Again, examine the output of the `kubectl get` command running in the other terminal.
You can watch `kubectl get` to see those Pods being deleted:
```shell
kubectl get pod -l app=nginx -w
```
```
web-3 1/1 Terminating 0 9m
web-2 1/1 Terminating 0 9m
web-3 1/1 Terminating 0 9m
@ -1530,10 +1709,12 @@ kubectl delete svc nginx
<!--
You will need to delete the persistent storage media for the PersistentVolumes
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.
You also need to delete the persistent storage media for the PersistentVolumes
used in this tutorial.
Follow the necessary steps, based on your environment, storage configuration,
and provisioning method, to ensure that all storage is reclaimed.
-->
You need to delete the persistent storage media for the PersistentVolumes used in this tutorial. Follow the necessary steps, based on your environment, storage configuration, and provisioning method, to ensure that all storage is reclaimed.
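As a rough sketch, and depending on your StorageClass and its reclaim policy, deleting the tutorial's PersistentVolumeClaims is usually the first step:

```shell
# Delete the PersistentVolumeClaims created for the web StatefulSet.
# Whether the backing volumes and media are reclaimed depends on the reclaim policy.
kubectl delete pvc -l app=nginx
```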

View File

@ -20,7 +20,7 @@ external IP address.
## {{% heading "prerequisites" %}}
<!--
* Install [kubectl](/docs/tasks/tools/install-kubectl/).
* Install [kubectl](/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
@ -28,7 +28,7 @@ external IP address.
* Configure `kubectl` to communicate with your Kubernetes API server. For instructions, see the
documentation for your cloud provider.
-->
* Install [kubectl](/zh/docs/tasks/tools/install-kubectl/).
* Install [kubectl](/zh/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to create a Kubernetes cluster.
  This tutorial creates an [external load balancer](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/),
  which requires a cloud provider.