replace zh with /zh in the content/zh/docs/tutorials/ directory

pull/20169/head
tanjunchen 2020-04-10 10:50:57 +08:00
parent 5888082a2f
commit cb5a3cd3f2
11 changed files with 374 additions and 382 deletions


@ -16,17 +16,17 @@ content_template: templates/concept
{{% capture overview %}}
Kubernetes 文档的这一部分包含教程。一个教程展示了如何完成一个比单个[任务](zh/docs/tasks/)更大的目标。
Kubernetes 文档的这一部分包含教程。一个教程展示了如何完成一个比单个[任务](/zh/docs/tasks/)更大的目标。
通常一个教程有几个部分,每个部分都有一系列步骤。在浏览每个教程之前,
您可能希望将[标准化术语表](zh/docs/reference/glossary/)页面添加到书签,供以后参考。
您可能希望将[标准化术语表](/zh/docs/reference/glossary/)页面添加到书签,供以后参考。
<!--
This section of the Kubernetes documentation contains tutorials.
A tutorial shows how to accomplish a goal that is larger than a single
[task](zh/docs/tasks/). Typically a tutorial has several sections,
[task](/docs/tasks/). Typically a tutorial has several sections,
each of which has a sequence of steps.
Before walking through each tutorial, you may want to bookmark the
[Standardized Glossary](zh/docs/reference/glossary/) page for later references.
[Standardized Glossary](/docs/reference/glossary/) page for later references.
-->
{{% /capture %}}
@ -39,10 +39,10 @@ Before walking through each tutorial, you may want to bookmark the
## Basics
-->
* [Kubernetes 基础知识](zh/docs/tutorials/kubernetes-basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。
* [Kubernetes 基础知识](/zh/docs/tutorials/kubernetes-basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。
<!--
* [Kubernetes Basics](zh/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.
-->
* [使用 Kubernetes (Udacity) 的可伸缩微服务](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
@ -57,10 +57,10 @@ Before walking through each tutorial, you may want to bookmark the
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
-->
* [你好 Minikube](zh/docs/tutorials/hello-minikube/)
* [你好 Minikube](/zh/docs/tutorials/hello-minikube/)
<!--
* [Hello Minikube](zh/docs/tutorials/hello-minikube/)
* [Hello Minikube](/docs/tutorials/hello-minikube/)
-->
## 配置
@ -69,10 +69,10 @@ Before walking through each tutorial, you may want to bookmark the
## Configuration
-->
* [使用一个 ConfigMap 配置 Redis](zh/docs/tutorials/configuration/configure-redis-using-configmap/)
* [使用一个 ConfigMap 配置 Redis](/zh/docs/tutorials/configuration/configure-redis-using-configmap/)
<!--
* [Configuring Redis Using a ConfigMap](zh/docs/tutorials/configuration/configure-redis-using-configmap/)
* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
-->
## 无状态应用程序
@ -81,16 +81,16 @@ Before walking through each tutorial, you may want to bookmark the
## Stateless Applications
-->
* [公开外部 IP 地址访问集群中的应用程序](zh/docs/tutorials/stateless-application/expose-external-ip-address/)
* [公开外部 IP 地址访问集群中的应用程序](/zh/docs/tutorials/stateless-application/expose-external-ip-address/)
<!--
* [Exposing an External IP Address to Access an Application in a Cluster](zh/docs/tutorials/stateless-application/expose-external-ip-address/)
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
-->
* [示例:使用 Redis 部署 PHP 留言板应用程序](zh/docs/tutorials/stateless-application/guestbook/)
* [示例:使用 Redis 部署 PHP 留言板应用程序](/zh/docs/tutorials/stateless-application/guestbook/)
<!--
* [Example: Deploying PHP Guestbook application with Redis](zh/docs/tutorials/stateless-application/guestbook/)
* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
-->
## 有状态应用程序
@ -99,28 +99,28 @@ Before walking through each tutorial, you may want to bookmark the
## Stateful Applications
-->
* [StatefulSet 基础](zh/docs/tutorials/stateful-application/basic-stateful-set/)
* [StatefulSet 基础](/zh/docs/tutorials/stateful-application/basic-stateful-set/)
<!--
* [StatefulSet Basics](zh/docs/tutorials/stateful-application/basic-stateful-set/)
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
-->
* [示例:WordPress 和 MySQL 使用持久卷](zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
* [示例:WordPress 和 MySQL 使用持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
<!--
* [Example: WordPress and MySQL with Persistent Volumes](zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
-->
* [示例:使用有状态集部署 Cassandra](zh/docs/tutorials/stateful-application/cassandra/)
* [示例:使用有状态集部署 Cassandra](/zh/docs/tutorials/stateful-application/cassandra/)
<!--
* [Example: Deploying Cassandra with Stateful Sets](zh/docs/tutorials/stateful-application/cassandra/)
* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
-->
* [运行 ZooKeeper一个 CP 分布式系统](zh/docs/tutorials/stateful-application/zookeeper/)
* [运行 ZooKeeper一个 CP 分布式系统](/zh/docs/tutorials/stateful-application/zookeeper/)
<!--
* [Running ZooKeeper, A CP Distributed System](zh/docs/tutorials/stateful-application/zookeeper/)
* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
-->
## CI/CD 管道
@ -155,33 +155,33 @@ Before walking through each tutorial, you may want to bookmark the
## 集群
* [AppArmor](zh/docs/tutorials/clusters/apparmor/)
* [AppArmor](/zh/docs/tutorials/clusters/apparmor/)
<!--
## Clusters
* [AppArmor](zh/docs/tutorials/clusters/apparmor/)
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-->
## 服务
* [使用源 IP](zh/docs/tutorials/services/source-ip/)
* [使用源 IP](/zh/docs/tutorials/services/source-ip/)
<!--
## Services
* [Using Source IP](zh/docs/tutorials/services/source-ip/)
* [Using Source IP](/docs/tutorials/services/source-ip/)
-->
{{% /capture %}}
{{% capture whatsnext %}}
如果您想编写教程,请参阅[使用页面模板](zh/docs/home/contribute/page-templates/)
如果您想编写教程,请参阅[使用页面模板](/zh/docs/home/contribute/page-templates/)
以获取有关教程页面类型和教程模板的信息。
<!--
If you would like to write a tutorial, see
[Using Page Templates](zh/docs/home/contribute/page-templates/)
[Using Page Templates](/docs/home/contribute/page-templates/)
for information about the tutorial page type and the tutorial template.
-->


@ -88,7 +88,7 @@ Apparmor 是一个 Linux 内核安全模块,它补充了标准的基于 Linux
kernel, including patches that add additional hooks and features. Kubernetes has only been
tested with the upstream version, and does not promise support for other features.
{{< /note >}} -->
2. AppArmor 内核模块已启用 -- 要使 Linux 内核强制执行 AppArmor 配置文件,必须安装并且启动 AppArmor 内核模块。默认情况下,有几个发行版支持该模块,如 Ubuntu 和 SUSE,还有许多发行版提供可选支持。要检查模块是否已启用,请检查
`/sys/module/apparmor/parameters/enabled` 文件:
```shell
cat /sys/module/apparmor/parameters/enabled
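# 模块已启用时,上述命令应输出 Y若未启用该文件可能不存在示意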
@ -416,7 +416,7 @@ Events:
nodes. There are lots of ways to setup the profiles though, such as: -->
Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载到节点上。有很多方法可以设置配置文件,例如:
<!-- * Through a [DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
<!-- * Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
ensure the correct profiles are loaded. An example implementation can be found
[here](https://git.k8s.io/kubernetes/test/images/apparmor-loader).
* At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or
@ -430,7 +430,7 @@ Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载
<!-- The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
must be loaded onto every node. An alternative approach is to add a node label for each profile (or
class of profiles) on the node, and use a
[node selector](/zh/docs/concepts/configuration/assign-pod-node/) to ensure the Pod is run on a
[node selector](/docs/concepts/configuration/assign-pod-node/) to ensure the Pod is run on a
node with the required profile. -->
调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,并使用[节点选择器](/zh/docs/concepts/configuration/assign-pod-node/)确保 Pod 在具有所需配置文件的节点上运行。
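下面是一个极简示意(非本文原有示例,配置文件名 `k8s-apparmor-example-deny-write` 与节点标签 `apparmor-profile=deny-write` 均为假设值):通过注解引用节点上已加载的配置文件,并用节点选择器把 Pod 调度到带有相应标签的节点。

```shell
# 假设目标节点已加载该 AppArmor 配置文件,并打上了对应的标签
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    # 注解键的格式为 container.apparmor.security.beta.kubernetes.io/<容器名>
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  nodeSelector:
    apparmor-profile: deny-write
  containers:
  - name: hello
    image: busybox
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
EOF
```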
@ -525,7 +525,7 @@ logs or through `journalctl`. More information is provided in
想要调试 AppArmor 的问题,您可以检查系统日志,查看具体拒绝了什么。AppArmor 将详细消息记录到 `dmesg`,错误通常可以在系统日志中或通过 `journalctl` 找到。更多详细信息见 [AppArmor 失败](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Failures)。
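例如,可以在节点上这样检索相关记录(示意,日志位置与格式因发行版而异):

```shell
# 从内核日志中过滤 AppArmor 拒绝消息
dmesg | grep -i apparmor
# 或通过 systemd 日志查看内核消息
journalctl -k | grep -i apparmor
```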
<!-- ## API Reference -->
## API 参考
<!-- ### Pod Annotation -->
### Pod 注释


@ -9,7 +9,7 @@ content_template: templates/tutorial
{{% capture overview %}}
<!--
This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the [Configure Containers Using a ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) task.
This page provides a real world example of how to configure Redis using a ConfigMap and builds upon the [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) task.
-->
这篇文档基于[使用 ConfigMap 来配置 Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 这个任务,提供了一个使用 ConfigMap 来配置 Redis 的真实案例。
@ -40,7 +40,7 @@ This page provides a real world example of how to configure Redis using a Config
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!--
* The example shown on this page works with `kubectl` 1.14 and above.
* Understand [Configure Containers Using a ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Understand [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/).
-->
* 此页面上显示的示例适用于 `kubectl` 1.14 及以上版本。
* 理解[使用 ConfigMap 来配置 Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
@ -77,8 +77,8 @@ configMapGenerator:
EOF
```
<!--
Add the pod resource config to the `kustomization.yaml`:
-->
将 pod 的资源配置添加到 `kustomization.yaml` 文件中:
@ -93,8 +93,8 @@ resources:
EOF
```
<!--
Apply the kustomization directory to create both the ConfigMap and Pod objects:
-->
应用整个 kustomization 文件夹以创建 ConfigMap 和 Pod 对象:
@ -102,8 +102,8 @@ Apply the kustomization directory to create both the ConfigMap and Pod objects:
kubectl apply -k .
```
<!--
Examine the created objects by
-->
使用以下命令检查创建的对象
@ -116,20 +116,20 @@ NAME READY STATUS RESTARTS AGE
pod/redis 1/1 Running 0 52s
```
<!--
In the example, the config volume is mounted at `/redis-master`.
It uses `path` to add the `redis-config` key to a file named `redis.conf`.
The file path for the redis config, therefore, is `/redis-master/redis.conf`.
This is where the image will look for the config file for the redis master.
-->
在示例中,配置卷挂载在 `/redis-master` 下。
它使用 `path``redis-config` 键添加到名为 `redis.conf` 的文件中。
因此Redis 配置的文件路径为 `/redis-master/redis.conf`
这是镜像将在其中查找 redis master 的配置文件的位置。
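可以直接验证这一点(示意,沿用上文创建的名为 `redis` 的 Pod

```shell
# 查看挂载进容器的配置文件内容
kubectl exec redis -- cat /redis-master/redis.conf
```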
<!--
Use `kubectl exec` to enter the pod and run the `redis-cli` tool to verify that
the configuration was correctly applied:
-->
使用 `kubectl exec` 进入 pod 并运行 `redis-cli` 工具来验证配置已正确应用:
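一个最小示意(键名与输出以实际集群为准):

```shell
kubectl exec -it redis redis-cli
# 在 redis-cli 提示符下执行,确认 ConfigMap 中的设置已生效
CONFIG GET maxmemory-policy
```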
@ -143,8 +143,8 @@ kubectl exec -it redis redis-cli
2) "allkeys-lru"
```
<!--
Delete the created pod:
-->
删除创建的 pod
```shell
@ -155,8 +155,8 @@ kubectl delete pod redis
{{% capture whatsnext %}}
<!--
* Learn more about [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/).
<!--
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
-->
* 了解有关 [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)的更多信息。


@ -33,16 +33,16 @@ card:
<!--
This tutorial shows you how to run a simple Hello World Node.js app
on Kubernetes using [Minikube](zh/docs/setup/learning-environment/minikube) and Katacoda.
on Kubernetes using [Minikube](/docs/setup/learning-environment/minikube) and Katacoda.
Katacoda provides a free, in-browser Kubernetes environment.
-->
本教程向您展示如何使用 [Minikube](zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。
本教程向您展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。
{{< note >}}
<!--
You can also follow this tutorial if you've installed [Minikube locally](zh/docs/tasks/tools/install-minikube/).
You can also follow this tutorial if you've installed [Minikube locally](/docs/tasks/tools/install-minikube/).
-->
如果您已在本地安装 [Minikube](zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。
如果您已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。
{{< /note >}}
@ -117,17 +117,17 @@ For more information on the `docker build` command, read the [Docker documentati
## Create a Deployment
A Kubernetes [*Pod*](zh/docs/concepts/workloads/pods/pod/) is a group of one or more Containers,
A Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) is a group of one or more Containers,
tied together for the purposes of administration and networking. The Pod in this
tutorial has only one Container. A Kubernetes
[*Deployment*](zh/docs/concepts/workloads/controllers/deployment/) checks on the health of your
[*Deployment*](/docs/concepts/workloads/controllers/deployment/) checks on the health of your
Pod and restarts the Pod's Container if it terminates. Deployments are the
recommended way to manage the creation and scaling of Pods.
-->
## 创建 Deployment
Kubernetes [*Pod*](zh/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](zh/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。
Kubernetes [*Pod*](/zh/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](/zh/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。
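一个最小示意(镜像名是占位符,请替换为教程实际使用的镜像):

```shell
# 创建一个管理单个 Pod 的 Deployment
kubectl create deployment hello-node --image=<镜像名>
# 查看 Deployment 与 Pod 的状态
kubectl get deployments
kubectl get pods
```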
<!--
1. Use the `kubectl create` command to create a Deployment that manages a Pod. The
@ -203,9 +203,9 @@ Pod runs a Container based on the provided Docker image.
```
<!--
{{< note >}}For more information about `kubectl`commands, see the [kubectl overview](zh/docs/user-guide/kubectl-overview/).{{< /note >}}
{{< note >}}For more information about `kubectl`commands, see the [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}}
-->
{{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](zh/docs/user-guide/kubectl-overview/)。{{< /note >}}
{{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](/zh/docs/user-guide/kubectl-overview/)。{{< /note >}}
<!--
## Create a Service
@ -213,12 +213,12 @@ Pod runs a Container based on the provided Docker image.
By default, the Pod is only accessible by its internal IP address within the
Kubernetes cluster. To make the `hello-node` Container accessible from outside the
Kubernetes virtual network, you have to expose the Pod as a
Kubernetes [*Service*](zh/docs/concepts/services-networking/service/).
Kubernetes [*Service*](/docs/concepts/services-networking/service/).
-->
## 创建 Service
默认情况下Pod 只能通过 Kubernetes 集群中的内部 IP 地址访问。要使得 `hello-node` 容器可以从 Kubernetes 虚拟网络的外部访问,您必须将 Pod 暴露为 Kubernetes [*Service*](zh/docs/concepts/services-networking/service/)。
默认情况下Pod 只能通过 Kubernetes 集群中的内部 IP 地址访问。要使得 `hello-node` 容器可以从 Kubernetes 虚拟网络的外部访问,您必须将 Pod 暴露为 Kubernetes [*Service*](/zh/docs/concepts/services-networking/service/)。
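一个最小示意(假设应用监听 8080 端口,与本教程一致):

```shell
# 将 Deployment 暴露为 LoadBalancer 类型的 Service
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
kubectl get services
```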
<!--
1. Expose the Pod to the public internet using the `kubectl expose` command:
@ -447,12 +447,12 @@ minikube delete
{{% capture whatsnext %}}
<!--
* Learn more about [Deployment objects](zh/docs/concepts/workloads/controllers/deployment/).
* Learn more about [Deploying applications](zh/docs/tasks/run-application/run-stateless-application-deployment/).
* Learn more about [Service objects](zh/docs/concepts/services-networking/service/).
* Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
* Learn more about [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/).
* Learn more about [Service objects](/docs/concepts/services-networking/service/).
-->
* 进一步了解 [Deployment 对象](zh/docs/concepts/workloads/controllers/deployment/)。
* 学习更多关于 [部署应用](zh/docs/tasks/run-application/run-stateless-application-deployment/)。
* 学习更多关于 [Service 对象](zh/docs/concepts/services-networking/service/)。
* 进一步了解 [Deployment 对象](/zh/docs/concepts/workloads/controllers/deployment/)。
* 学习更多关于 [部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)。
* 学习更多关于 [Service 对象](/zh/docs/concepts/services-networking/service/)。
{{% /capture %}}


@ -24,8 +24,8 @@ Kubernetes 集群中运行的应用通过 Service 抽象来互相查找、通信
* [NAT](https://en.wikipedia.org/wiki/Network_address_translation): 网络地址转换
* [Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT): 替换数据包的源 IP, 通常为节点的 IP
* [Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT): 替换数据包的目的 IP, 通常为 Pod 的 IP
* [VIP](zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP
* [Kube-proxy](zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理
* [VIP](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP
* [Kube-proxy](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理
## 准备工作
@ -59,7 +59,7 @@ deployment.apps/source-ip-app created
## Type=ClusterIP 类型 Services 的 Source IP
如果你的 kube-proxy 运行在 [iptables 模式](zh/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。
如果你的 kube-proxy 运行在 [iptables 模式](/zh/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。
```console
kubectl get nodes
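# 示意:在节点本地查询 kube-proxy 的代理模式iptables 模式下应返回 iptables
curl localhost:10249/proxyMode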
@ -136,7 +136,7 @@ command=GET
## Type=NodePort 类型 Services 的 Source IP
从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT。你可以通过创建一个 `NodePort` Service 来进行测试:
从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](/zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT。你可以通过创建一个 `NodePort` Service 来进行测试:
```console
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
@ -189,7 +189,7 @@ client_address=10.240.0.3
```
为了防止这种情况发生Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。
为了防止这种情况发生Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。
设置 `service.spec.externalTrafficPolicy` 字段如下:
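一个最小示意(沿用上文创建的名为 `nodeport` 的 Service

```console
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```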
@ -244,7 +244,7 @@ client_address=104.132.1.79
## Type=LoadBalancer 类型 Services 的 Source IP
从 Kubernetes 1.5 开始,发送给类型为 [Type=LoadBalancer](zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP如前面章节所述
从 Kubernetes 1.5 开始,发送给类型为 [Type=LoadBalancer](/zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP如前面章节所述
你可以通过在一个 loadbalancer 上暴露这个 source-ip-app 来进行测试。
@ -390,6 +390,6 @@ $ kubectl delete deployment source-ip-app
{{% capture whatsnext %}}
* 学习更多关于 [通过 services 连接应用](zh/docs/concepts/services-networking/connect-applications-service/)
* 学习更多关于 [负载均衡](zh/docs/user-guide/load-balancer)
* 学习更多关于 [通过 services 连接应用](/zh/docs/concepts/services-networking/connect-applications-service/)
* 学习更多关于 [负载均衡](/zh/docs/user-guide/load-balancer)
{{% /capture %}}


@ -13,37 +13,37 @@ approvers:
{{% capture overview %}}
<!--
This tutorial provides an introduction to managing applications with
[StatefulSets](zh/docs/concepts/workloads/controllers/statefulset/). It
[StatefulSets](/docs/concepts/workloads/controllers/statefulset/). It
demonstrates how to create, delete, scale, and update the Pods of StatefulSets.
-->
本教程介绍如何使用 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用,演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。
本教程介绍如何使用 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用,演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。
{{% /capture %}}
{{% capture prerequisites %}}
<!--
Before you begin this tutorial, you should familiarize yourself with the
following Kubernetes concepts.
-->
在开始本教程之前,你应该熟悉以下 Kubernetes 的概念:
* [Pods](zh/docs/user-guide/pods/single-container/)
* [Cluster DNS](zh/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](zh/docs/concepts/services-networking/service/#headless-services)
* [PersistentVolumes](zh/docs/concepts/storage/persistent-volumes/)
* [Pods](/zh/docs/user-guide/pods/single-container/)
* [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services)
* [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/)
* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
* [StatefulSets](zh/docs/concepts/workloads/controllers/statefulset/)
* [kubectl CLI](zh/docs/user-guide/kubectl/)
* [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/)
* [kubectl CLI](/zh/docs/user-guide/kubectl/)
<!--
This tutorial assumes that your cluster is configured to dynamically provision
PersistentVolumes. If your cluster is not configured to do so, you
will have to manually provision two 1 GiB volumes prior to starting this
tutorial.
-->
@ -53,11 +53,11 @@ tutorial.
{{% capture objectives %}}
<!--
StatefulSets are intended to be used with stateful applications and distributed
systems. However, the administration of stateful applications and
distributed systems on Kubernetes is a broad, complex topic. In order to
demonstrate the basic features of a StatefulSet, and not to conflate the former
topic with the latter, you will deploy a simple web application using a StatefulSet.
After this tutorial, you will be familiar with the following.
@ -87,34 +87,34 @@ StatefulSets 旨在与有状态的应用及分布式系统一起使用。然而
## 创建 StatefulSet
作为开始,使用如下示例创建一个 StatefulSet。它和 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中的示例相似。它创建了一个 [Headless Service](zh/docs/user-guide/services/#headless-services) `nginx` 用来发布 StatefulSet `web` 中的 Pod 的 IP 地址。
作为开始,使用如下示例创建一个 StatefulSet。它和 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中的示例相似。它创建了一个 [Headless Service](/zh/docs/user-guide/services/#headless-services) `nginx` 用来发布 StatefulSet `web` 中的 Pod 的 IP 地址。
{{< codenew file="application/web/web.yaml" >}}
<!--
Download the example above, and save it to a file named `web.yaml`
You will need to use two terminal windows. In the first terminal, use
[`kubectl get`](zh/docs/reference/generated/kubectl/kubectl-commands/#get) to watch the creation
You will need to use two terminal windows. In the first terminal, use
[`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to watch the creation
of the StatefulSet's Pods.
-->
下载上面的例子并保存为文件 `web.yaml`
你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。
你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。
```shell
kubectl get pods -w -l app=nginx
```
<!--
In the second terminal, use
[`kubectl apply`](zh/docs/reference/generated/kubectl/kubectl-commands/#apply) to create the
In the second terminal, use
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to create the
Headless Service and StatefulSet defined in `web.yaml`.
-->
在另一个终端中,使用 [`kubectl apply`](zh/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。
在另一个终端中,使用 [`kubectl apply`](/zh/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。
```shell
kubectl apply -f web.yaml
@ -123,8 +123,8 @@ statefulset.apps/web created
```
<!--
The command above creates two Pods, each running an
[NGINX](https://www.nginx.com) webserver. Get the `nginx` Service and the
`web` StatefulSet to verify that they were created successfully.
-->
@ -144,9 +144,9 @@ web 2 1 20s
### Ordered Pod Creation
For a StatefulSet with N replicas, when Pods are being deployed, they are
created sequentially, in order from {0..N-1}. Examine the output of the
`kubectl get` command in the first terminal. Eventually, the output will
look like the example below.
-->
@ -165,14 +165,14 @@ web-0 1/1 Running 0 19s
web-1 0/1 Pending 0 0s
web-1 0/1 Pending 0 0s
web-1 0/1 ContainerCreating 0 0s
web-1 1/1 Running 0 18s
```
<!--
Notice that the `web-1` Pod is not launched until the `web-0` Pod is
[Running and Ready](zh/docs/user-guide/pod-states).
Notice that the `web-1` Pod is not launched until the `web-0` Pod is
[Running and Ready](/docs/user-guide/pod-states).
-->
请注意在 `web-0` Pod 处于 [Running和Ready](zh/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。
请注意在 `web-0` Pod 处于 [Running和Ready](/zh/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。
<!--
## Pods in a StatefulSet
@ -205,25 +205,25 @@ web-1 1/1 Running 0 1m
```
<!--
As mentioned in the [StatefulSets](zh/docs/concepts/workloads/controllers/statefulset/)
concept, the Pods in a StatefulSet have a sticky, unique identity. This identity
is based on a unique ordinal index that is assigned to each Pod by the
StatefulSet controller. The Pods' names take the form
`<statefulset name>-<ordinal index>`. Since the `web` StatefulSet has two
As mentioned in the [StatefulSets](/docs/concepts/workloads/controllers/statefulset/)
concept, the Pods in a StatefulSet have a sticky, unique identity. This identity
is based on a unique ordinal index that is assigned to each Pod by the
StatefulSet controller. The Pods' names take the form
`<statefulset name>-<ordinal index>`. Since the `web` StatefulSet has two
replicas, it creates two Pods, `web-0` and `web-1`.
### Using Stable Network Identities
Each Pod has a stable hostname based on its ordinal index. Use
[`kubectl exec`](zh/docs/reference/generated/kubectl/kubectl-commands/#exec) to execute the
`hostname` command in each Pod.
[`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec) to execute the
`hostname` command in each Pod.
-->
如同 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`<statefulset name>-<ordinal index>`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod`web-0`和`web-1`。
如同 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`<statefulset name>-<ordinal index>`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod`web-0`和`web-1`。
### 使用稳定的网络身份标识
每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](zh/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。
每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](/zh/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。
```shell
for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
@ -231,17 +231,17 @@ web-0
web-1
```
<!--
Use [`kubectl run`](zh/docs/reference/generated/kubectl/kubectl-commands/#run) to execute
a container that provides the `nslookup` command from the `dnsutils` package.
Using `nslookup` on the Pods' hostnames, you can examine their in-cluster DNS
<!--
Use [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) to execute
a container that provides the `nslookup` command from the `dnsutils` package.
Using `nslookup` on the Pods' hostnames, you can examine their in-cluster DNS
addresses.
-->
使用 [`kubectl run`](zh/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。
使用 [`kubectl run`](/zh/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。
```shell
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
nslookup web-0.nginx
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@ -258,11 +258,11 @@ Address 1: 10.244.2.6
```
<!--
The CNAME of the headless service points to SRV records (one for each Pod that
is Running and Ready). The SRV records point to A record entries that
contain the Pods' IP addresses.
In one terminal, watch the StatefulSet's Pods.
-->
headless service 的 CNAME 指向 SRV 记录(记录每个 Running 和 Ready 状态的 Pod。SRV 记录指向一个包含 Pod IP 地址的记录表项。
@ -274,11 +274,11 @@ kubectl get pod -w -l app=nginx
```
<!--
In a second terminal, use
[`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) to delete all
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) to delete all
the Pods in the StatefulSet.
-->
在另一个终端中使用 [`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。
在另一个终端中使用 [`kubectl delete`](/zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。
```shell
kubectl delete pod -l app=nginx
@ -287,7 +287,7 @@ pod "web-1" deleted
```
<!--
Wait for the StatefulSet to restart them, and for both Pods to transition to
Running and Ready.
-->
@ -306,7 +306,7 @@ web-1 1/1 Running 0 34s
```
<!--
Use `kubectl exec` and `kubectl run` to view the Pods' hostnames and in-cluster
DNS entries.
-->
@ -317,7 +317,7 @@ for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
web-0
web-1
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh
nslookup web-0.nginx
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@ -333,23 +333,23 @@ Name: web-1.nginx
Address 1: 10.244.2.8
```
<!--
The Pods' ordinals, hostnames, SRV records, and A record names have not changed,
but the IP addresses associated with the Pods may have changed. In the cluster
used for this tutorial, they have. This is why it is important not to configure
other applications to connect to Pods in a StatefulSet by IP address.
If you need to find and connect to the active members of a StatefulSet, you
should query the CNAME of the Headless Service
(`nginx.default.svc.cluster.local`). The SRV records associated with the
CNAME will contain only the Pods in the StatefulSet that are Running and
Ready.
If your application already implements connection logic that tests for
liveness and readiness, you can use the SRV records of the Pods (
`web-0.nginx.default.svc.cluster.local`,
`web-1.nginx.default.svc.cluster.local`), as they are stable, and your
application will be able to discover the Pods' addresses when they transition
to Running and Ready.
-->
@ -381,20 +381,20 @@ www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO
```
<!--
The StatefulSet controller created two PersistentVolumeClaims that are
bound to two [PersistentVolumes](zh/docs/concepts/storage/persistent-volumes/). As the cluster used in this tutorial is configured to dynamically provision
The StatefulSet controller created two PersistentVolumeClaims that are
bound to two [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). As the cluster used in this tutorial is configured to dynamically provision
PersistentVolumes, the PersistentVolumes were created and bound automatically.
The NGINX webservers, by default, will serve an index file at
`/usr/share/nginx/html/index.html`. The `volumeMounts` field in the
StatefulSets `spec` ensures that the `/usr/share/nginx/html` directory is
backed by a PersistentVolume.
Write the Pods' hostnames to their `index.html` files and verify that the NGINX
webservers serve the hostnames.
webservers serve the hostnames.
-->
StatefulSet 控制器创建了两个 PersistentVolumeClaims绑定到两个 [PersistentVolumes](zh/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume所有的 PersistentVolume 都是自动创建和绑定的。
StatefulSet 控制器创建了两个 PersistentVolumeClaims绑定到两个 [PersistentVolumes](/zh/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume所有的 PersistentVolume 都是自动创建和绑定的。
NGINX web 服务器默认会加载位于 `/usr/share/nginx/html/index.html` 的 index 文件。StatefulSets `spec` 中的 `volumeMounts` 字段保证了 `/usr/share/nginx/html` 文件夹由一个 PersistentVolume 支持。
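下面的做法与原教程一致(示意):把主机名写入各 Pod 由 PersistentVolume 支持的 `index.html`,再验证 NGINX 能返回它。

```shell
for i in 0 1; do
  kubectl exec web-$i -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'
done
for i in 0 1; do kubectl exec -i -t web-$i -- curl http://localhost/; done
```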
@ -451,7 +451,7 @@ pod "web-0" deleted
pod "web-1" deleted
```
<!--
Examine the output of the `kubectl get` command in the first terminal, and wait
for all of the Pods to transition to Running and Ready.
-->
@ -482,17 +482,17 @@ web-1
```
<!--
Even though `web-0` and `web-1` were rescheduled, they continue to serve their
hostnames because the PersistentVolumes associated with their
PersistentVolumeClaims are remounted to their `volumeMounts`. No matter what
node `web-0` and `web-1` are scheduled on, their PersistentVolumes will be
mounted to the appropriate mount points.
## Scaling a StatefulSet
Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
This is accomplished by updating the `replicas` field. You can use either
[`kubectl scale`](zh/docs/reference/generated/kubectl/kubectl-commands/#scale) or
[`kubectl patch`](zh/docs/reference/generated/kubectl/kubectl-commands/#patch) to scale a StatefulSet.
[`kubectl scale`](/docs/reference/generated/kubectl/kubectl-commands/#scale) or
[`kubectl patch`](/docs/reference/generated/kubectl/kubectl-commands/#patch) to scale a StatefulSet.
### Scaling Up
@ -504,7 +504,7 @@ In one terminal window, watch the Pods in the StatefulSet.
## 扩容/缩容 StatefulSet
扩容/缩容 StatefulSet 指增加或减少它的副本数。这通过更新 `replicas` 字段完成。你可以使用[`kubectl scale`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#scale) 或者[`kubectl patch`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#patch)来扩容/缩容一个 StatefulSet。
扩容/缩容 StatefulSet 指增加或减少它的副本数。这通过更新 `replicas` 字段完成。你可以使用[`kubectl scale`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#scale) 或者[`kubectl patch`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#patch)来扩容/缩容一个 StatefulSet。
### 扩容
@ -517,7 +517,7 @@ kubectl get pods -w -l app=nginx
```
<!--
In another terminal window, use `kubectl scale` to scale the number of replicas
to 5.-->
在另一个终端窗口使用 `kubectl scale` 扩展副本数为 5。
@ -527,7 +527,7 @@ kubectl scale sts web --replicas=5
statefulset.apps/web scaled
```
<!--
Examine the output of the `kubectl get` command in the first terminal, and wait
for the three additional Pods to transition to Running and Ready.
-->
@ -556,8 +556,8 @@ web-4 1/1 Running 0 19s
<!--
The StatefulSet controller scaled the number of replicas. As with
[StatefulSet creation](#ordered-pod-creation), the StatefulSet controller
created each Pod sequentially with respect to its ordinal index, and it
waited for each Pod's predecessor to be Running and Ready before launching the
subsequent Pod.
### Scaling Down
@ -577,8 +577,8 @@ kubectl get pods -w -l app=nginx
```
<!--
In another terminal, use `kubectl patch` to scale the StatefulSet back down to
three replicas.
-->
在另一个终端使用 `kubectl patch` 将 StatefulSet 缩容回三个副本。
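一个最小示意:

```shell
kubectl patch sts web -p '{"spec":{"replicas":3}}'
```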
@ -614,11 +614,11 @@ web-3 1/1 Terminating 0 42s
<!--
### Ordered Pod Termination
The controller deleted one Pod at a time, in reverse order with respect to its
ordinal index, and it waited for each to be completely shutdown before
deleting the next.
Get the StatefulSet's PersistentVolumeClaims.
-->
### 顺序终止 Pod
@ -641,16 +641,16 @@ www-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO
```
<!--
There are still five PersistentVolumeClaims and five PersistentVolumes.
When exploring a Pod's [stable storage](#writing-to-stable-storage), we saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted when the StatefulSet's Pods are deleted. This is still true when Pod deletion is caused by scaling the StatefulSet down.
## Updating StatefulSets
In Kubernetes 1.7 and later, the StatefulSet controller supports automated updates. The
strategy used is determined by the `spec.updateStrategy` field of the
StatefulSet API Object. This feature can be used to upgrade the container
images, resource requests and/or limits, labels, and annotations of the Pods in a
StatefulSet. There are two valid update strategies, `RollingUpdate` and
`OnDelete`.
`RollingUpdate` update strategy is the default for StatefulSets.
@ -666,7 +666,7 @@ Kubernetes 1.7 版本的 StatefulSet 控制器支持自动更新。更新策略
<!--
The `RollingUpdate` update strategy will update all Pods in a StatefulSet, in
reverse ordinal order, while respecting the StatefulSet guarantees.
Patch the `web` StatefulSet to apply the `RollingUpdate` update strategy.
@ -685,7 +685,7 @@ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpda
statefulset.apps/web patched
```
<!--
In one terminal window, patch the `web` StatefulSet to change the container
image again.
-->
@ -740,15 +740,15 @@ web-0 1/1 Running 0 10s
```
<!--
The Pods in the StatefulSet are updated in reverse ordinal order. The
StatefulSet controller terminates each Pod, and waits for it to transition to Running and
Ready prior to updating the next Pod. Note that, even though the StatefulSet
controller will not proceed to update the next Pod until its ordinal successor
is Running and Ready, it will restore any Pod that fails during the update to
its current version. Pods that have already received the update will be
restored to the updated version, and Pods that have not yet received the
update will be restored to the previous version. In this way, the controller
attempts to continue to keep the application healthy and the update consistent
in the presence of intermittent failures.
Get the Pods to view their container images.
@ -769,13 +769,13 @@ k8s.gcr.io/nginx-slim:0.8
<!--
All the Pods in the StatefulSet are now running the previous container image.
**Tip** You can also use `kubectl rollout status sts/<name>` to view
the status of a rolling update.
#### Staging an Update
You can stage an update to a StatefulSet by using the `partition` parameter of
the `RollingUpdate` update strategy. A staged update will keep all of the Pods
in the StatefulSet at the current version while allowing mutations to the
StatefulSet's `.spec.template`.
Patch the `web` StatefulSet to add a partition to the `updateStrategy` field.
@ -850,13 +850,13 @@ k8s.gcr.io/nginx-slim:0.8
```
<!--
Notice that, even though the update strategy is `RollingUpdate` the StatefulSet
controller restored the Pod with its original container. This is because the
ordinal of the Pod is less than the `partition` specified by the
`updateStrategy`.
#### Rolling Out a Canary
You can roll out a canary to test a modification by decrementing the `partition`
you specified [above](#staging-an-update).
Patch the StatefulSet to decrement the partition.
@ -905,8 +905,8 @@ k8s.gcr.io/nginx-slim:0.7
```
<!--
When you changed the `partition`, the StatefulSet controller automatically
updated the `web-2` Pod because the Pod's ordinal was greater than or equal to
the `partition`.
Delete the `web-1` Pod.
@ -955,19 +955,19 @@ k8s.gcr.io/nginx-slim:0.8
```
<!--
`web-1` was restored to its original configuration because the Pod's ordinal
was less than the partition. When a partition is specified, all Pods with an
ordinal that is greater than or equal to the partition will be updated when the
StatefulSet's `.spec.template` is updated. If a Pod that has an ordinal less
than the partition is deleted or otherwise terminated, it will be restored to
its original configuration.
#### Phased Roll Outs
You can perform a phased roll out (e.g. a linear, geometric, or exponential
roll out) using a partitioned rolling update in a similar manner to how you
rolled out a [canary](#rolling-out-a-canary). To perform a phased roll out, set
the `partition` to the ordinal at which you want the controller to pause the
update.
The partition is currently set to `2`. Set the partition to `0`.
-->
@ -1026,22 +1026,22 @@ k8s.gcr.io/nginx-slim:0.7
```
<!--
By moving the `partition` to `0`, you allowed the StatefulSet controller to
continue the update process.
### On Delete
The `OnDelete` update strategy implements the legacy (1.6 and prior) behavior.
When you select this update strategy, the StatefulSet controller will not
automatically update Pods when a modification is made to the StatefulSet's
`.spec.template` field. This strategy can be selected by setting the
`.spec.template.updateStrategy.type` to `OnDelete`.
## Deleting StatefulSets
StatefulSet supports both Non-Cascading and Cascading deletion. In a
Non-Cascading Delete, the StatefulSet's Pods are not deleted when the StatefulSet is deleted. In a Cascading Delete, both the StatefulSet and its Pods are
deleted.
### Non-Cascading Delete
@ -1071,13 +1071,13 @@ kubectl get pods -w -l app=nginx
```
<!--
Use [`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) to delete the
StatefulSet. Make sure to supply the `--cascade=false` parameter to the
command. This parameter tells Kubernetes to only delete the StatefulSet, and to
Use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) to delete the
StatefulSet. Make sure to supply the `--cascade=false` parameter to the
command. This parameter tells Kubernetes to only delete the StatefulSet, and to
not delete any of its Pods.
-->
使用 [`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。
使用 [`kubectl delete`](/zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。
```shell
kubectl delete statefulset web --cascade=false
@ -1141,7 +1141,7 @@ kubectl get pods -w -l app=nginx
<!--
In a second terminal, recreate the StatefulSet. Note that, unless
you deleted the `nginx` Service ( which you should not have ), you will see
an error indicating that the Service already exists.
-->
在另一个终端里重新创建 StatefulSet。请注意除非你删除了 `nginx` Service (你不应该这样做),你将会看到一个错误,提示 Service 已经存在。
@ -1154,7 +1154,7 @@ service/nginx unchanged
```
<!--
Ignore the error. It only indicates that an attempt was made to create the nginx
Headless Service even though that Service already exists.
Examine the output of the `kubectl get` command running in the first terminal.
-->
@ -1181,14 +1181,14 @@ web-2 0/1 Terminating 0 3m
```
<!--
When the `web` StatefulSet was recreated, it first relaunched `web-0`.
Since `web-1` was already Running and Ready, when `web-0` transitioned to
Running and Ready, it simply adopted this Pod. Since you recreated the StatefulSet
with `replicas` equal to 2, once `web-0` had been recreated, and once
`web-1` had been determined to already be Running and Ready, `web-2` was
terminated.
Let's take another look at the contents of the `index.html` file served by the
Pods' webservers.
-->
@ -1204,10 +1204,10 @@ web-1
```
<!--
Even though you deleted both the StatefulSet and the `web-0` Pod, it still
serves the hostname originally entered into its `index.html` file. This is
because the StatefulSet never deletes the PersistentVolumes associated with a
Pod. When you recreated the StatefulSet and it relaunched `web-0`, its original
PersistentVolume was remounted.
### Cascading Delete
@ -1236,7 +1236,7 @@ statefulset.apps "web" deleted
```
<!--
Examine the output of the `kubectl get` command running in the first terminal,
and wait for all of the Pods to transition to Terminating.
-->
@ -1260,12 +1260,12 @@ web-1 0/1 Terminating 0 29m
```
<!--
As you saw in the [Scaling Down](#scaling-down) section, the Pods
are terminated one at a time, with respect to the reverse order of their ordinal
indices. Before terminating a Pod, the StatefulSet controller waits for
the Pod's successor to be completely terminated.
Note that, while a cascading delete will delete the StatefulSet and its Pods,
it will not delete the Headless Service associated with the StatefulSet. You
must delete the `nginx` Service manually.
-->
@ -1294,7 +1294,7 @@ statefulset.apps/web created
```
<!--
When all of the StatefulSet's Pods transition to Running and Ready, retrieve
the contents of their `index.html` files.
-->
@ -1307,8 +1307,8 @@ web-1
```
<!--
Even though you completely deleted the StatefulSet, and all of its Pods, the
Pods are recreated with their PersistentVolumes mounted, and `web-0` and
`web-1` will still serve their hostnames.
Finally delete the `web` StatefulSet and the `nginx` service.
@ -1330,22 +1330,22 @@ statefulset "web" deleted
<!--
## Pod Management Policy
For some distributed systems, the StatefulSet ordering guarantees are
unnecessary and/or undesirable. These systems require only uniqueness and
identity. To address this, in Kubernetes 1.7, we introduced
`.spec.podManagementPolicy` to the StatefulSet API Object.
### OrderedReady Pod Management
`OrderedReady` pod management is the default for StatefulSets. It tells the
StatefulSet controller to respect the ordering guarantees demonstrated
above.
### Parallel Pod Management
`Parallel` pod management tells the StatefulSet controller to launch or
terminate all Pods in parallel, and not to wait for Pods to become Running
and Ready or completely terminated prior to launching or terminating another
Pod.
-->
@ -1371,7 +1371,7 @@ Pod.
<!--
Download the example above, and save it to a file named `web-parallel.yaml`
This manifest is identical to the one you downloaded above except that the `.spec.podManagementPolicy`
of the `web` StatefulSet is set to `Parallel`.
In one terminal, watch the Pods in the StatefulSet.
@ -1424,7 +1424,7 @@ web-1 1/1 Running 0 10s
<!--
The StatefulSet controller launched both `web-0` and `web-1` at the same time.
Keep the second terminal open, and, in another terminal window scale the
StatefulSet.
-->
@ -1453,7 +1453,7 @@ web-3 1/1 Running 0 26s
```
<!--
The StatefulSet controller launched two new Pods, and it did not wait for
the first to become Running and Ready prior to launching the second.
Keep this terminal open, and in another terminal delete the `web` StatefulSet.
@ -1500,10 +1500,10 @@ web-3 0/1 Terminating 0 9m
```
<!--
The StatefulSet controller deletes all Pods concurrently; it does not wait for
a Pod's ordinal successor to terminate prior to deleting that Pod.
Close the terminal where the `kubectl get` command is running and delete the `nginx`
Service.
-->
@ -1522,8 +1522,8 @@ kubectl delete svc nginx
<!--
You will need to delete the persistent storage media for the PersistentVolumes
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.
-->

View File

@ -28,18 +28,18 @@ title: "Example: Deploying Cassandra with Stateful Sets"
This example also uses some of Kubernetes' core components:
- [_Pods_](/zh/docs/user-guide/pods)
- [_Services_](/zh/docs/user-guide/services)
- [_Replication Controllers_](/zh/docs/user-guide/replication-controller)
- [_Stateful Sets_](/zh/docs/concepts/workloads/controllers/statefulset/)
- [_Daemon Sets_](/zh/docs/admin/daemons)
## Prerequisites
This example assumes that you have installed and are running a Kubernetes cluster (version >= 1.2), and that you have installed the [`kubectl`](/zh/docs/tasks/tools/install-kubectl/) command line tool somewhere in your path. Please see the [getting started guides](/zh/docs/getting-started-guides/) for installation instructions for your platform.
This example also requires some code and configuration files. To avoid typing them by hand, you can `git clone` the Kubernetes source to your local machine.
@ -133,7 +133,7 @@ kubectl delete daemonset cassandra
## Step 1: Create a Cassandra Headless Service
A Kubernetes _[Service](/zh/docs/user-guide/services)_ describes a set of [_Pods_](/zh/docs/user-guide/pods) that perform the same task. In Kubernetes, the atomic unit of scheduling for an application is a Pod: one or more containers that _must_ be scheduled onto the same host.
This Service is used for DNS lookups between Cassandra clients and Cassandra Pods within the Kubernetes cluster.
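As a sketch of what such a headless Service looks like (the authoritative manifest ships with the example; the port and label values here are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None    # headless: DNS resolves to the Pod IPs directly, no virtual IP
  ports:
  - port: 9042       # CQL port (assumed)
  selector:
    app: cassandra   # the set of Cassandra Pods this Service resolves to
```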
@ -354,7 +354,7 @@ $ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces'
system_traces system_schema system_auth system system_distributed
```
You need to use `kubectl edit` to increase or decrease the size of the Cassandra StatefulSet. You can find more information about the `edit` command in the [documentation](/zh/docs/user-guide/kubectl/kubectl_edit).
Edit the StatefulSet with the following command.
@ -429,7 +429,7 @@ $ grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodS
## Step 5: Use a Replication Controller to create Cassandra node pods
A Kubernetes _[Replication Controller](/zh/docs/user-guide/replication-controller)_ is responsible for replicating a set of identical pods. Like a Service, it has a selector query that identifies the members of its set. Unlike a Service, it also has a desired number of replicas, and it will create or delete Pods to ensure that the number of Pods matches its desired state.
Together with the Service we just defined, the Replication Controller lets us easily build a replicated, scalable Cassandra cluster.
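A Replication Controller for this purpose might look roughly like the sketch below; the replica count, labels, and image tag are assumptions, not the example's exact manifest:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
spec:
  replicas: 2          # desired state; the controller creates or deletes Pods to match
  selector:
    app: cassandra     # the selector query identifying members of the set
  template:            # Pod template used to stamp out new replicas
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13   # image tag assumed
        ports:
        - containerPort: 9042   # CQL
        - containerPort: 9160   # Thrift
```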
@ -639,7 +639,7 @@ $ kubectl delete rc cassandra
## Step 8: Replace the Replication Controller with a DaemonSet
In Kubernetes, a [_DaemonSet_](/zh/docs/admin/daemons) distributes pods onto Kubernetes nodes, one-to-one. Like a _ReplicationController_, it has a selector query that identifies the members of its set. Unlike a _ReplicationController_, it has a node selector that limits which nodes the pods based on its template may be scheduled on; and instead of replicating pods to a configured count, it allocates one pod to each node.
Example use case: when deploying to a cloud platform, the expectation is that instances are ephemeral and may terminate at any time. Cassandra is built to replicate data across nodes for data redundancy, so that even if an instance terminates, the data stored on it does not disappear, and the cluster responds by re-replicating the data to other running nodes.
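Structurally, a DaemonSet differs from the Replication Controller above mainly in having no replica count and an optional node selector; a hedged sketch (the API version and the node label are assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cassandra
spec:
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      nodeSelector:
        app: cassandra-node   # only nodes carrying this label run a replica (label assumed)
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13   # image tag assumed
```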
@ -802,6 +802,6 @@ $ kubectl delete daemonset cassandra
See the [image](https://github.com/kubernetes/examples/tree/master/cassandra/image) directory of this example to learn how the docker image for the container was built, and what it contains.
You may also note that we are setting some Cassandra parameters (`MAX_HEAP_SIZE` and `HEAP_NEWSIZE`), and adding information about the [namespace](/zh/docs/user-guide/namespaces). We also tell Kubernetes that the container exposes both the `CQL` and `Thrift` API ports. Finally, we tell the cluster manager that we need 0.1 cpu (0.1 core).

View File

@ -12,23 +12,23 @@ card:
{{% capture overview %}}
<!--
This tutorial shows you how to deploy a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
-->
This tutorial shows you how to deploy a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
<!--
A [PersistentVolume](/docs/concepts/storage/persistent-volumes/)(PV)is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a [StorageClass](/docs/concepts/storage/storage-classes). A [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.
-->
A [PersistentVolume](/zh/docs/concepts/storage/persistent-volumes/) (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a [StorageClass](/zh/docs/concepts/storage/storage-classes).
A [PersistentVolumeClaim](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC) is a request for storage that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent of Pod lifecycles and preserve data through Pod restarts, rescheduling, and even deletion.
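For orientation, here is a sketch of the kind of claim this tutorial uses for MySQL; the name and size follow the tutorial's manifest:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim      # claim name referenced by the MySQL Deployment
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce           # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 20Gi         # the default StorageClass provisions a matching PV
```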
{{< warning >}}
<!--
This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) to deploy WordPress in production.
-->
This deployment is not suitable for production use cases, as it uses single-instance WordPress and MySQL Pods. Consider using the [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) to deploy WordPress in production.
@ -53,7 +53,7 @@ deployment 在生产场景中并不适合,它使用单实例 WordPress 和 MyS
* MySQL resource configs
* WordPress resource configs
* Apply the kustomization directory by `kubectl apply -k ./`
* Clean up
-->
* Create PersistentVolumeClaims and PersistentVolumes
@ -77,7 +77,7 @@ Download the following configuration files:
1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml)
1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml)
-->
This example works with `kubectl` 1.14 and above.
@ -86,14 +86,14 @@ Download the following configuration files:
1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml)
2. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml)
{{% /capture %}}
{{% capture lessoncontent %}}
<!--
## Create PersistentVolumeClaims and PersistentVolumes
-->
## Create PersistentVolumeClaims and PersistentVolumes
@ -113,15 +113,15 @@ MySQL 和 Wordpress 都需要一个 PersistentVolume 来存储数据。他们的
{{< warning >}}
<!--
In local clusters, the default StorageClass uses the `hostPath` provisioner. `hostPath` volumes are only suitable for development and testing. With `hostPath` volumes, your data lives in `/tmp` on the node the Pod is scheduled onto and does not move between nodes. If a Pod dies and gets scheduled to another node in the cluster, or the node is rebooted, the data is lost.
-->
In local clusters, the default StorageClass uses the `hostPath` provisioner. `hostPath` volumes are only suitable for development and testing. With `hostPath` volumes, your data lives in `/tmp` on the node the Pod is scheduled onto and does not move between nodes. If a Pod dies and gets scheduled to another node in the cluster, or the node is rebooted, the data is lost.
{{< /warning >}}
{{< note >}}
<!--
If you are bringing up a cluster that needs to use the `hostPath` provisioner, the `--enable-hostpath-provisioner` flag must be set in the `controller-manager` component.
-->
If you are bringing up a cluster that needs to use the `hostPath` provisioner, the `--enable-hostpath-provisioner` flag must be set in the `controller-manager` component.
@ -133,25 +133,25 @@ If you are bringing up a cluster that needs to use the `hostPath` provisioner, t
If you have a Kubernetes cluster running on Google Kubernetes Engine, please follow [this guide](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk).
{{< /note >}}
<!--
## Create a kustomization.yaml
-->
## Create a kustomization.yaml
<!--
### Add a Secret generator
-->
### Add a Secret generator
<!--
A [Secret](/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a password or key. Since 1.14, `kubectl` supports the management of Kubernetes objects using a kustomization file. You can create a Secret by generators in `kustomization.yaml`.
Add a Secret generator in `kustomization.yaml` from the following command. You will need to replace `YOUR_PASSWORD` with the password you want to use.
-->
A [Secret](/zh/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a password or key. Since 1.14, `kubectl` supports the management of Kubernetes objects using a kustomization file. You can create a Secret by generators in `kustomization.yaml`.
Add a Secret generator in `kustomization.yaml` from the following command. You will need to replace `YOUR_PASSWORD` with the password you want to use.
@ -164,13 +164,13 @@ secretGenerator:
EOF
```
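The resulting `kustomization.yaml` stanza looks roughly like this (a sketch; the `mysql-pass` name and the `YOUR_PASSWORD` literal follow the tutorial's placeholders, and kustomize appends a content hash to the generated Secret's name):

```yaml
# kustomization.yaml (sketch)
secretGenerator:
- name: mysql-pass          # base name; kustomize suffixes it with a hash, e.g. mysql-pass-c57bb4t7mf
  literals:
  - password=YOUR_PASSWORD  # replace with the password you want to use
```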
<!--
## Add resource configs for MySQL and WordPress
-->
## Add resource configs for MySQL and WordPress
<!--
The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The `MYSQL_ROOT_PASSWORD` environment variable sets the database password from the Secret.
-->
@ -178,11 +178,11 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c
{{< codenew file="application/wordpress/mysql-deployment.yaml" >}}
<!--
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the
PersistentVolume at `/var/www/html` for website data files. The `WORDPRESS_DB_HOST` environment variable sets
the name of the MySQL Service defined above, and WordPress will access the database by Service. The
`WORDPRESS_DB_PASSWORD` environment variable sets the database password from the Secret kustomize generated.
-->
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at `/var/www/html` for website data files. The `WORDPRESS_DB_HOST` environment variable sets the name of the MySQL Service defined above, and WordPress accesses the database through that Service. The `WORDPRESS_DB_PASSWORD` environment variable sets the database password from the Secret that kustomize generated.
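The relevant fragment of that container spec looks roughly like this (a sketch; the image tag is an assumption, and the Secret name is the one generated above, which kustomize rewrites to the hashed name):

```yaml
# Fragment of the WordPress Deployment's Pod spec (sketch)
containers:
- name: wordpress
  image: wordpress:4.8-apache          # image tag assumed
  env:
  - name: WORDPRESS_DB_HOST
    value: wordpress-mysql             # the MySQL Service name defined above
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-pass               # kustomize rewrites this to the hashed Secret name
        key: password
  volumeMounts:
  - name: wordpress-persistent-storage
    mountPath: /var/www/html           # website data files live on the PersistentVolume
```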
@ -234,8 +234,8 @@ the name of the MySQL Service defined above, and WordPress will access the datab
```
<!--
## Apply and Verify
-->
## Apply and Verify
@ -348,7 +348,7 @@ kubectl apply -k ./
```
The response should look like this:
```shell
NAME                    TYPE     DATA   AGE
mysql-pass-c57bb4t7mf   Opaque   1      9s
@ -419,7 +419,7 @@ kubectl apply -k ./
```
6. Copy the IP address, and load the page in your browser to view your site.
You should see the WordPress setup page, similar to the following screenshot.
![wordpress-init](https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/WordPress.png)
@ -427,8 +427,8 @@ kubectl apply -k ./
{{% /capture %}}
{{< warning >}}
<!--
Do not leave your WordPress installation on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content. <br/><br/>Either install WordPress by creating a username and password or delete your instance.
-->
Do not leave your WordPress installation on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content.<br/><br/>Either install WordPress by creating a username and password or delete your instance.
@ -453,24 +453,16 @@ Do not leave your WordPress installation on this page. If another user finds it,
{{% capture whatsnext %}}
<!--
* Learn more about [Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/)
* Learn more about [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
* Learn more about [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* Learn how to [Get a Shell to a Container](/docs/tasks/debug-application-cluster/get-shell-running-container/)
-->
1. Run the following command to delete your Secret, Deployments, Services and PersistentVolumeClaims:
```shell
kubectl delete -k ./
```
{{% /capture %}}
{{% capture whatsnext %}}
* Learn more about [Introspection and Debugging](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/)
* Learn more about [Jobs](/zh/docs/concepts/workloads/controllers/jobs-run-to-completion/)
* Learn more about [Port Forwarding](/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* Learn how to [Get a Shell to a Container](/zh/docs/tasks/debug-application-cluster/get-shell-running-container/)
{{% /capture %}}

View File

@ -14,23 +14,23 @@ content_template: templates/tutorial
{{% capture overview %}}
This tutorial demonstrates running [Apache Zookeeper](https://zookeeper.apache.org) on Kubernetes using the [PodDisruptionBudgets](/zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) and [PodAntiAffinity](/zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) features.
{{% /capture %}}
{{% capture prerequisites %}}
Before starting this tutorial, you should be familiar with the following Kubernetes concepts.
* [Pods](/zh/docs/user-guide/pods/single-container/)
* [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services)
* [PersistentVolumes](/zh/docs/concepts/storage/volumes/)
* [PersistentVolume Provisioning](http://releases.k8s.io/{{< param "githubbranch" >}}/examples/persistent-volume-provisioning/)
* [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
* [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/)
* [PodDisruptionBudgets](/zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget)
* [PodAntiAffinity](/zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)
* [kubectl CLI](/zh/docs/user-guide/kubectl)
@ -69,14 +69,14 @@ ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被
The manifest below contains a
[Headless Service](/zh/docs/concepts/services-networking/service/#headless-services),
a [Service](/zh/docs/concepts/services-networking/service/),
a [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget),
and a [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/).
{{< codenew file="application/zookeeper/zookeeper.yaml" >}}
Open a terminal, and use the [`kubectl apply`](/zh/docs/reference/generated/kubectl/kubectl-commands/#apply) command to create the manifest.
```shell
@ -92,7 +92,7 @@ poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
```
Use [`kubectl get`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) to watch the StatefulSet controller create the Pods.
```shell
kubectl get pods -w -l app=zk
@ -130,7 +130,7 @@ StatefulSet 控制器创建了3个 Pods每个 Pod 包含一个 [ZooKeeper 3.4
Because there is no terminating algorithm for electing a leader in an anonymous network, Zab requires explicit membership configuration in order to perform leader election. Each server in the ensemble needs a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address.
Use [`kubectl exec`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#exec) to get the hostnames of the Pods in the `zk` StatefulSet.
```shell
for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
@ -184,7 +184,7 @@ zk-2.zk-headless.default.svc.cluster.local
```
The A records in [Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/) resolve the FQDNs to the Pods' IP addresses. If the Pods are rescheduled, the A records will be updated with the Pods' new IP addresses, but the A records' names will not change.
ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use `kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod.
@ -320,7 +320,7 @@ numChildren = 0
As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots of its in-memory state to storage media. Using WALs to provide durability is a common technique for applications that use consensus protocols to implement a replicated state machine, and for storage applications in general.
Use [`kubectl delete`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#delete) to delete the `zk` StatefulSet.
```shell
kubectl delete statefulset zk
@ -641,7 +641,7 @@ log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-
This is the simplest possible way to safely log inside a container. Because the application writes its logs to standard out, Kubernetes handles log rotation for you. Kubernetes also implements a sane retention policy that ensures application logs written to standard out and standard error do not exhaust local storage media.
Use [`kubectl logs`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#logs) to retrieve the last few log lines from one of the Pods.
```shell
kubectl logs zk-0 --tail 20
@ -679,7 +679,7 @@ kubectl logs zk-0 --tail 20
### Configuring a Non-Privileged User
The best practice of allowing an application to run as a privileged user inside of a container is a matter of debate. If your organization requires that applications run as a non-privileged user, you can use a [SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) to control the user that the entry point runs as.
The `zk` StatefulSet's Pod `template` contains a SecurityContext.
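The stanza in question looks like this (values as used by the tutorial's manifest; shown here as a sketch):

```yaml
securityContext:
  runAsUser: 1000   # the entry point runs as the zookeeper (non-privileged) user
  fsGroup: 1000     # volumes are group-owned by GID 1000 so the data directory is writable
```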
@ -736,7 +736,7 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
### Handling Process Failure
[Restart Policies](/zh/docs/user-guide/pod-states/#restartpolicy) control how Kubernetes handles process failures for the entry point of the container in a Pod. For Pods in a StatefulSet, Always is the only appropriate RestartPolicy, and it is the default value. You should **never** override the default policy for stateful applications.
Examine the process tree of the ZooKeeper server running in the `zk-0` Pod.
@ -947,7 +947,7 @@ kubectl get nodes
```
Use [`kubectl cordon`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) to cordon all but four of the nodes in your cluster.
```shell
kubectl cordon <node-name>
@ -987,7 +987,7 @@ kubernetes-minion-group-i4c4
```
Use [`kubectl drain`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#drain) to cordon and drain the node on which the `zk-0` Pod is scheduled.
```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
@ -1102,7 +1102,7 @@ numChildren = 0
```
Use [`kubectl uncordon`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) to uncordon the first node.
```shell
kubectl uncordon kubernetes-minion-group-pb41

View File

@ -26,21 +26,21 @@ external IP address.
{{% capture prerequisites %}}
<!--
* Install [kubectl](/docs/tasks/tools/install-kubectl/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
which requires a cloud provider.
* Configure `kubectl` to communicate with your Kubernetes API server. For
instructions, see the documentation for your cloud provider.
-->
* Install [kubectl](/zh/docs/tasks/tools/install-kubectl/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to create a Kubernetes cluster.
This tutorial creates an [external load balancer](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/), which requires a cloud provider.
* Configure `kubectl` to communicate with your Kubernetes API server. For instructions, see the documentation for your cloud provider.
@ -79,16 +79,16 @@ external IP address.
<!--
The preceding command creates a
[Deployment](/docs/concepts/workloads/controllers/deployment/)
object and an associated
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
object. The ReplicaSet has five
[Pods](/docs/concepts/workloads/pods/pod/),
each of which runs the Hello World application.
-->
The preceding command creates a [Deployment](/zh/docs/concepts/workloads/controllers/deployment/)
object and an associated [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) object.
The ReplicaSet has five [Pods](/zh/docs/concepts/workloads/pods/pod/), each of which runs the Hello World application.
<!--
1. Display information about the Deployment:
@ -169,8 +169,8 @@ external IP address.
Make a note of the external IP address (`LoadBalancer Ingress`) exposed by your service.
In this example, the external IP address is 104.198.205.71. Also note the values of `Port` and `NodePort`.
In this example, the `Port` is 8080 and the `NodePort` is 32377.
<!--
1. In the preceding output, you can see that the service has several endpoints:
10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal
@ -214,8 +214,8 @@ external IP address.
where `<external-ip>` is the external IP address (`LoadBalancer Ingress`) of your Service,
and `<port>` is the value of `Port` in your Service description.
If you are using minikube, typing `minikube service my-service` will automatically open the Hello World application in a browser.
<!--
The response to a successful request is a hello message:
-->
The response to a successful request is a hello message:
@ -249,9 +249,9 @@ the Hello World application, enter this command:
<!--
Learn more about
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
-->
Learn more about [connecting applications with services](/zh/docs/concepts/services-networking/connect-applications-service/).
{{% /capture %}}

View File

@ -25,8 +25,8 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica
a simple, multi-tier web application. This example consists of the following components:
<!--
* A single-instance [Redis](https://redis.io/) master to store guestbook entries
* Multiple [replicated Redis](https://redis.io/topics/replication) instances to serve reads
* Multiple web frontend instances
-->
@ -143,14 +143,14 @@ Replace POD-NAME with the name of your Pod.
### Creating the Redis Master Service
<!--
The guestbook application needs to communicate to the Redis master to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
-->
The guestbook application needs to communicate with the Redis master to write its data. You therefore need to apply a [Service](/zh/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
{{< codenew file="application/guestbook/redis-master-service.yaml" >}}
<!--
1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
-->
1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
@ -181,7 +181,7 @@ The guestbook applications needs to communicate to the Redis master to write its
{{< note >}}
<!--
This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
-->
This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
@ -205,12 +205,12 @@ Although the Redis master is a single pod, you can make it highly available to m
### Creating the Redis Slave Deployment
<!--
Deployments scale based on the configuration set in the manifest file. In this case, the Deployment object specifies two replicas.
-->
Deployments scale based on the configuration set in the manifest file. In this case, the Deployment object specifies two replicas.
<!--
If there are not any replicas running, this Deployment would start the two replicas on your container cluster. Conversely, if there are more than two replicas running, it would scale down until two replicas are running.
-->
If there are not any replicas running, this Deployment would start the two replicas on your container cluster.
Conversely, if there are more than two replicas running, it would scale down until two replicas are running.
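In manifest terms, the convergence target is just the `replicas` field; a hedged sketch of such a Deployment (labels and image are assumptions in the spirit of the guestbook example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2              # the controller converges the running Pod count to this value
  selector:
    matchLabels:
      app: redis
      role: slave
  template:
    metadata:
      labels:
        app: redis
        role: slave
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1   # image tag assumed
        ports:
        - containerPort: 6379
```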
@ -349,20 +349,20 @@ The guestbook application has a web frontend serving the HTTP requests written i
### Creating the Frontend Service
<!--
The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
-->
The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster, because the default type for a Service is
[ClusterIP](/zh/docs/concepts/Services-networking/Service/#publishingservices-Service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
<!--
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.
-->
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.
{{< note >}}
<!--
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
-->
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use one,
simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
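Concretely, the switch is a one-line change in the Service spec; a sketch (labels and port per the guestbook's frontend Service, shown here for orientation):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # On Minikube, expose the Service via NodePort; on a cloud provider with
  # load-balancer support, comment out NodePort and uncomment LoadBalancer.
  type: NodePort
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```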
@ -386,14 +386,14 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
2. Query the list of Services to verify that the frontend Service is running:
```shell
kubectl get services
```
<!--
The response should be similar to this:
-->
The response should be similar to this:
```
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
frontend   ClusterIP   10.0.0.112   <none>        80:31323/TCP   6s
@ -472,7 +472,7 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y
2. Copy the external IP address, and load the page in your browser to view your guestbook.
<!--
## Scale the Web Frontend
-->
## Scale the Web Frontend
@ -501,7 +501,7 @@ Scaling up or down is easy because your servers are defined as a Service that us
```
<!--
The response should look similar to this:
-->
The response should look similar to this:
@ -548,7 +548,7 @@ Scaling up or down is easy because your servers are defined as a Service that us
redis-slave-2005841000-fpvqc   1/1       Running   0          1h
redis-slave-2005841000-phfv9   1/1       Running   0          1h
```
{{% /capture %}}
{{% capture cleanup %}}
@ -580,7 +580,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
deployment.apps "redis-slave" deleted
service "redis-master" deleted
service "redis-slave" deleted
deployment.apps "frontend" deleted
deployment.apps "frontend" deleted
service "frontend" deleted
```
@ -594,9 +594,9 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
```
<!--
The response should be this:
-->
The response should be this:
```
No resources found.
@ -607,16 +607,16 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
{{% capture whatsnext %}}
<!--
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
-->
* Complete the [Kubernetes Basics](/zh/docs/tutorials/kubernetes-basics/) interactive tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/zh/docs/concepts/services-networking/connect-applications-service/)
* Read more about [Managing Resources](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
{{% /capture %}}