[zh-cn] sync tutorials/stateful-application/*

Signed-off-by: xin.li <xin.li@daocloud.io>

pull/43163/head
parent 98ea70d0eb
commit 7a86c9de71
@@ -114,7 +114,7 @@ It creates a [headless Service](/docs/concepts/services-networking/service/#head
 它创建了一个 [Headless Service](/zh-cn/docs/concepts/services-networking/service/#headless-services)
 `nginx` 用来发布 StatefulSet `web` 中的 Pod 的 IP 地址。
 
-{{< codenew file="application/web/web.yaml" >}}
+{{% code_sample file="application/web/web.yaml" %}}
 
 <!--
 Download the example above, and save it to a file named `web.yaml`
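For orientation, the headless Service at the top of `web.yaml` looks roughly like the sketch below; the `nginx` name and `app: nginx` selector come from this tutorial, but treat the snippet as illustrative rather than a verbatim copy of the file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  # clusterIP: None makes the Service headless: instead of a
  # load-balanced virtual IP, DNS returns the addresses of the
  # individual Pods backing the Service.
  clusterIP: None
  selector:
    app: nginx
```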
@@ -237,7 +237,7 @@ Pods in a StatefulSet have a unique ordinal index and a stable network identity.
 StatefulSet 中的每个 Pod 拥有一个唯一的顺序索引和稳定的网络身份标识。
 
 <!--
-### Examining the Pod's Ordinal Index
+### Examining the Pod's ordinal Index
 -->
 ### 检查 Pod 的顺序索引 {#examining-the-pod-s-ordinal-index}
 
@@ -272,7 +272,7 @@ Pod 名称的格式为 `<statefulset 名称>-<序号索引>`。
 `web` StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0` 和 `web-1`。
 
 <!--
-### Using Stable Network Identities
+### Using Stable network Identities
 -->
 ### 使用稳定的网络身份标识 {#using-stable-network-identities}
 
@@ -347,7 +347,7 @@ The CNAME of the headless service points to SRV records (one for each Pod that
 is Running and Ready). The SRV records point to A record entries that
 contain the Pods' IP addresses.
 -->
-headless service 的 CNAME 指向 SRV 记录(记录每个 Running 和 Ready 状态的 Pod)。
+Headless service 的 CNAME 指向 SRV 记录(记录每个 Running 和 Ready 状态的 Pod)。
 SRV 记录指向一个包含 Pod IP 地址的记录表项。
 
 <!--
@@ -466,6 +466,11 @@ Pod 的序号、主机名、SRV 条目和记录名称没有改变,但和 Pod
 在本教程中使用的集群中它们就改变了。这就是为什么不要在其他应用中使用
 StatefulSet 中 Pod 的 IP 地址进行连接,这点很重要。
 
+<!--
+#### Discovery for specific Pods in a StatefulSet
+-->
+#### 发现 StatefulSet 中特定的 Pod {#discovery-for-specific-pods-in-a-statefulset}
+
 <!--
 If you need to find and connect to the active members of a StatefulSet, you
 should query the CNAME of the headless Service
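To see these DNS records for yourself, you can run a short-lived Pod with DNS tooling and query the headless Service. A sketch, assuming the `busybox:1.28` image and the `nginx`/`web-0` names used throughout this tutorial:

```shell
# Start a throwaway Pod that has nslookup available.
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm

# Inside the container: the Service CNAME resolves to the Pod addresses...
nslookup nginx
# ...and each Pod has a stable DNS entry of the form <pod>.<service>.
nslookup web-0.nginx
```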
@@ -490,7 +495,7 @@ to Running and Ready.
 Pod 的状态变为 Running 和 Ready 时,你的应用就能够发现它们的地址。
 
 <!--
-### Writing to Stable Storage
+### Writing to stable Storage
 -->
 ### 写入稳定的存储 {#writing-to-stable-storage}
 
@@ -659,7 +664,7 @@ This is accomplished by updating the `replicas` field. You can use either
 或者 [`kubectl patch`](/docs/reference/generated/kubectl/kubectl-commands/#patch) 来扩容/缩容一个 StatefulSet。
 
 <!--
-### Scaling Up
+### Scaling up
 -->
 ### 扩容 {#scaling-up}
 
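Both approaches mentioned above boil down to changing `.spec.replicas`; a sketch using the `web` StatefulSet from this tutorial (the replica counts are illustrative):

```shell
# Scale the StatefulSet up to 5 replicas with kubectl scale...
kubectl scale sts web --replicas=5

# ...or achieve the same effect by patching the replicas field directly.
kubectl patch sts web -p '{"spec":{"replicas":3}}'
```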
@@ -775,7 +780,7 @@ web-3 1/1 Terminating 0 42s
 ```
 
 <!--
-### Ordered Pod Termination
+### Ordered Pod termination
 -->
 ### 顺序终止 Pod {#ordered-pod-termination}
 
@@ -805,7 +810,9 @@ www-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO
 
 <!--
 There are still five PersistentVolumeClaims and five PersistentVolumes.
-When exploring a Pod's [stable storage](#writing-to-stable-storage), we saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted when the StatefulSet's Pods are deleted. This is still true when Pod deletion is caused by scaling the StatefulSet down.
+When exploring a Pod's [stable storage](#writing-to-stable-storage), we saw that the PersistentVolumes
+mounted to the Pods of a StatefulSet are not deleted when the StatefulSet's Pods are deleted.
+This is still true when Pod deletion is caused by scaling the StatefulSet down.
 -->
 五个 PersistentVolumeClaims 和五个 PersistentVolume 卷仍然存在。
 查看 Pod 的[稳定存储](#stable-storage),我们发现当删除 StatefulSet 的
@@ -835,7 +842,7 @@ StatefulSet 中 Pod 的的容器镜像、资源请求和限制、标签和注解
 `RollingUpdate` 更新策略是 StatefulSet 默认策略。
 
 <!--
-### Rolling Update
+### RollingUpdate {#rolling-update}
 -->
 ### 滚动更新 {#rolling-update}
 
@@ -971,7 +978,7 @@ StatefulSet 的滚动更新状态。
 {{< /note >}}
 
 <!--
-#### Staging an Update
+#### Staging an update
 -->
 #### 分段更新 {#staging-an-update}
 
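Staging works by setting a `partition` on the `RollingUpdate` strategy: Pods whose ordinal is below the partition keep the old template even after `.spec.template` changes. A sketch of the patch (the partition value `3` is illustrative):

```shell
# Stage an update: Pods with ordinal < 3 are not updated
# when .spec.template changes.
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
```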
@@ -1059,7 +1066,7 @@ ordinal of the Pod is less than the `partition` specified by the
 这是因为 Pod 的序号比 `updateStrategy` 指定的 `partition` 更小。
 
 <!--
-#### Rolling Out a Canary
+#### Rolling Out a canary
 -->
 #### 金丝雀发布 {#rolling-out-a-canary}
 
@@ -1184,7 +1191,7 @@ StatefulSet 的 `.spec.template`,则所有序号大于或等于分区的 Pod
 如果一个序号小于分区的 Pod 被删除或者终止,它将被按照原来的配置恢复。
 
 <!--
-#### Phased Roll Outs
+#### Phased Roll outs
 -->
 #### 分阶段的发布 {#phased-roll-outs}
 
@@ -1264,7 +1271,7 @@ continue the update process.
 将 `partition` 改变为 `0` 以允许 StatefulSet 继续更新过程。
 
 <!--
-### On Delete
+### OnDelete {#on-delete}
 -->
 ### OnDelete 策略 {#on-delete}
 
@@ -1292,7 +1299,7 @@ StatefulSet 同时支持级联和非级联删除。使用非级联方式删除 S
 的 Pod 不会被删除。使用级联删除时,StatefulSet 和它的 Pod 都会被删除。
 
 <!--
-### Non-Cascading Delete
+### Non-cascading delete
 -->
 ### 非级联删除 {#non-cascading-delete}
 
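The difference between the two modes is the `--cascade` flag passed to `kubectl delete`; for example:

```shell
# Non-cascading delete: the StatefulSet object is removed,
# but its Pods are orphaned and keep running.
kubectl delete statefulset web --cascade=orphan

# Cascading delete (the default): the Pods are deleted
# along with the StatefulSet.
kubectl delete statefulset web
```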
@@ -1399,7 +1406,7 @@ service/nginx unchanged
 Ignore the error. It only indicates that an attempt was made to create the _nginx_
 headless Service even though that Service already exists.
 -->
-请忽略这个错误。它仅表示 kubernetes 进行了一次创建 **nginx** headless Service
+请忽略这个错误。它仅表示 kubernetes 进行了一次创建 **nginx** Headless Service
 的尝试,尽管那个 Service 已经存在。
 
 <!--
@@ -1465,7 +1472,7 @@ PersistentVolume was remounted.
 当你重建这个 StatefulSet 并且重新启动了 `web-0` 时,它原本的 PersistentVolume 卷会被重新挂载。
 
 <!--
-### Cascading Delete
+### Cascading delete
 -->
 ### 级联删除 {#cascading-delete}
 
@@ -1482,7 +1489,7 @@ kubectl get pods -w -l app=nginx
 In another terminal, delete the StatefulSet again. This time, omit the
 `--cascade=orphan` parameter.
 -->
-在另一个窗口中再次删除这个 StatefulSet。这次省略 `--cascade=orphan` 参数。
+在另一个窗口中再次删除这个 StatefulSet,这次省略 `--cascade=orphan` 参数。
 
 ```shell
 kubectl delete statefulset web
@@ -1547,7 +1554,7 @@ service "nginx" deleted
 <!--
 Recreate the StatefulSet and headless Service one more time:
 -->
-再一次重新创建 StatefulSet 和 headless Service:
+再一次重新创建 StatefulSet 和 Headless Service:
 
 ```shell
 kubectl apply -f web.yaml
@@ -1584,7 +1591,7 @@ PersistentVolume 卷,并且 `web-0` 和 `web-1` 将继续使用它的主机名
 <!--
 Finally, delete the `nginx` Service...
 -->
-最后删除 `nginx` service
+最后删除 `nginx` Service:
 
 ```shell
 kubectl delete service nginx
@@ -1597,7 +1604,7 @@ service "nginx" deleted
 <!--
 ...and the `web` StatefulSet:
 -->
-并且删除 `web` StatefulSet:
+并且删除 `web` StatefulSet:
 
 ```shell
 kubectl delete statefulset web
@@ -1608,7 +1615,7 @@ statefulset "web" deleted
 ```
 
 <!--
-## Pod Management Policy
+## Pod Management policy
 -->
 ## Pod 管理策略 {#pod-management-policy}
 
@@ -1619,12 +1626,12 @@ identity. To address this, in Kubernetes 1.7, we introduced
 `.spec.podManagementPolicy` to the StatefulSet API Object.
 -->
 对于某些分布式系统来说,StatefulSet 的顺序性保证是不必要和/或者不应该的。
-这些系统仅仅要求唯一性和身份标志。为了解决这个问题,在 Kubernetes 1.7 中
-我们针对 StatefulSet API 对象引入了 `.spec.podManagementPolicy`。
+这些系统仅仅要求唯一性和身份标志。为了解决这个问题,在 Kubernetes 1.7
+中我们针对 StatefulSet API 对象引入了 `.spec.podManagementPolicy`。
+此选项仅影响扩缩操作的行为。更新不受影响。
 
 <!--
-### OrderedReady Pod Management
+### OrderedReady Pod management
 -->
 ### OrderedReady Pod 管理策略 {#orderedready-pod-management}
 
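In the manifest, this is a single top-level field of the StatefulSet spec. A minimal sketch in the shape of this tutorial's `web-parallel.yaml` (the image tag is illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  # OrderedReady (the default) starts and stops Pods one at a time,
  # in ordinal order; Parallel launches or terminates them all at once.
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.24
        ports:
        - containerPort: 80
          name: web
```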
@@ -1637,7 +1644,7 @@ above.
 StatefulSet 控制器遵循上文展示的顺序性保证。
 
 <!--
-### Parallel Pod Management
+### Parallel Pod management
 -->
 ### Parallel Pod 管理策略 {#parallel-pod-management}
 
@@ -1650,7 +1657,7 @@ Pod. This option only affects the behavior for scaling operations. Updates are n
 `Parallel` Pod 管理策略告诉 StatefulSet 控制器并行的终止所有 Pod,
 在启动或终止另一个 Pod 前,不必等待这些 Pod 变成 Running 和 Ready 或者完全终止状态。
 
-{{< codenew file="application/web/web-parallel.yaml" >}}
+{{% code_sample file="application/web/web-parallel.yaml" %}}
 
 <!--
 Download the example above, and save it to a file named `web-parallel.yaml`
@@ -1760,7 +1767,7 @@ kubectl delete sts web
 <!--
 You can watch `kubectl get` to see those Pods being deleted.
 -->
-你可以监视 `kubectl get` 来查看那些 Pod 被删除
+你可以监视 `kubectl get` 来查看那些 Pod 被删除:
 
 ```shell
 kubectl get pod -l app=nginx -w
@@ -1812,7 +1819,8 @@ kubectl delete svc nginx
 Delete the persistent storage media for the PersistentVolumes used in this tutorial.
 -->
+
-删除本教程中用到的 PersistentVolume 卷的持久化存储介质。
+删除本教程中用到的 PersistentVolume 卷的持久化存储介质:
 
 ```shell
 kubectl get pvc
 ```
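`kubectl get pvc` only lists the claims; actually reclaiming storage means deleting them, and whether the backing media is reclaimed automatically depends on the PersistentVolume's reclaim policy. A sketch, assuming the `www-web-*` claim names this tutorial creates:

```shell
# List the claims left behind by the StatefulSet...
kubectl get pvc -l app=nginx

# ...and delete them; the backing volumes are reclaimed
# according to each PersistentVolume's reclaim policy.
kubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4
```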
@@ -123,7 +123,7 @@ Create a Service to track all Cassandra StatefulSet members from the `cassandra-
 
 以下 Service 用于在 Cassandra Pod 和集群中的客户端之间进行 DNS 查找:
 
-{{< codenew file="application/cassandra/cassandra-service.yaml" >}}
+{{% code_sample file="application/cassandra/cassandra-service.yaml" %}}
 
 创建一个 Service 来跟踪 `cassandra-service.yaml` 文件中的所有 Cassandra StatefulSet:
 
@@ -181,7 +181,7 @@ Please update the following StatefulSet for the cloud you are working with.
 请为正在使用的云更新以下 StatefulSet。
 {{< /note >}}
 
-{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}}
+{{% code_sample file="application/cassandra/cassandra-statefulset.yaml" %}}
 
 <!--
 Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file:
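A sketch of the commands that typically follow (the file path is assumed to be a local copy of the manifest):

```shell
# Create the StatefulSet from the manifest...
kubectl apply -f cassandra-statefulset.yaml

# ...and, once the Pods are Running, ask Cassandra itself whether
# all ring members have joined (UN = Up/Normal).
kubectl exec -it cassandra-0 -- nodetool status
```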
@@ -438,6 +438,6 @@ By using environment variables you can change values that are inserted into `cas
 * See more custom [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md)
 -->
 * 了解如何[扩缩 StatefulSet](/docs/tasks/run-application/scale-stateful-set/)。
-* 了解有关 [*KubernetesSeedProvider*](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java) 的更多信息
+* 了解有关 [**KubernetesSeedProvider**](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java) 的更多信息
 * 查看更多自定义 [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md)
 
|
@ -211,7 +211,7 @@ environment variable sets the database password from the Secret.
|
|||
以下清单文件描述的是一个单实例的 MySQL Deployment。MySQL 容器将 PersistentVolume 挂载在 `/var/lib/mysql`。
|
||||
`MYSQL_ROOT_PASSWORD` 环境变量根据 Secret 设置数据库密码。
|
||||
|
||||
{{< codenew file="application/wordpress/mysql-deployment.yaml" >}}
|
||||
{{% code_sample file="application/wordpress/mysql-deployment.yaml" %}}
|
||||
|
||||
<!--
|
||||
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the
|
||||
|
@@ -224,7 +224,7 @@ the name of the MySQL Service defined above, and WordPress will access the datab
 `WORDPRESS_DB_HOST` 环境变量设置上面定义的 MySQL Service 的名称,WordPress 将通过 Service 访问数据库。
 `WORDPRESS_DB_PASSWORD` 环境变量根据使用 kustomize 生成的 Secret 设置数据库密码。
 
-{{< codenew file="application/wordpress/wordpress-deployment.yaml" >}}
+{{% code_sample file="application/wordpress/wordpress-deployment.yaml" %}}
 
 <!--
 1. Download the MySQL deployment configuration file.
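The password wiring described above is the standard `valueFrom`/`secretKeyRef` pattern. A minimal sketch of the relevant container fragment — the Secret name `mysql-pass` matches this tutorial's kustomize generator, while the image tag and key name should be treated as illustrative:

```yaml
containers:
- name: wordpress
  image: wordpress:6.2.1-apache
  env:
  # Point WordPress at the MySQL Service by its DNS name.
  - name: WORDPRESS_DB_HOST
    value: wordpress-mysql
  # Pull the database password out of the generated Secret
  # instead of hard-coding it in the manifest.
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-pass
        key: password
```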
@@ -50,7 +50,10 @@ Kubernetes concepts.
 - [kubectl CLI](/zh-cn/docs/reference/kubectl/kubectl/)
 
 <!--
-You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster's nodes. **This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.** You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.
+You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory.
+In this tutorial you will cordon and drain the cluster's nodes. **This means that the cluster will terminate
+and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.** You should use a dedicated
+cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.
 -->
 你需要一个至少包含四个节点的集群,每个节点至少 2 个 CPU 和 4 GiB 内存。
 在本教程中你将会隔离(Cordon)和腾空(Drain)集群的节点。
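Cordoning and draining are plain kubectl operations; a sketch (the node name placeholder and drain flags are illustrative — the flags you need vary with what runs on the node):

```shell
# Mark the node unschedulable so no new Pods land on it...
kubectl cordon <node-name>

# ...evict the Pods it is currently running...
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# ...and make it schedulable again when you are done.
kubectl uncordon <node-name>
```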
@@ -109,7 +112,11 @@ ZooKeeper 通过
 一致性协议在 ensemble 的所有服务器之间复制一个状态机来确保这个特性。
 
 <!--
-The ensemble uses the Zab protocol to elect a leader, and the ensemble cannot write data until that election is complete. Once complete, the ensemble uses Zab to ensure that it replicates all writes to a quorum before it acknowledges and makes them visible to clients. Without respect to weighted quorums, a quorum is a majority component of the ensemble containing the current leader. For instance, if the ensemble has three servers, a component that contains the leader and one other server constitutes a quorum. If the ensemble can not achieve a quorum, the ensemble cannot write data.
+The ensemble uses the Zab protocol to elect a leader, and the ensemble cannot write data until that election is complete.
+Once complete, the ensemble uses Zab to ensure that it replicates all writes to a quorum before it acknowledges and makes
+them visible to clients. Without respect to weighted quorums, a quorum is a majority component of the ensemble containing
+the current leader. For instance, if the ensemble has three servers, a component that contains the leader and one other
+server constitutes a quorum. If the ensemble can not achieve a quorum, the ensemble cannot write data.
 -->
 Ensemble 使用 Zab 协议选举一个领导者,在选举出领导者前不能写入数据。
 一旦选举出了领导者,ensemble 使用 Zab 保证所有写入被复制到一个 quorum,
@@ -144,7 +151,7 @@ and a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
 一个 [PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget)
 和一个 [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/)。
 
-{{< codenew file="application/zookeeper/zookeeper.yaml" >}}
+{{% code_sample file="application/zookeeper/zookeeper.yaml" %}}
 
 <!--
 Open a terminal, and use the
@@ -219,7 +226,9 @@ StatefulSet 控制器创建 3 个 Pod,每个 Pod 包含一个
 <!--
 ### Facilitating Leader Election
 
-Because there is no terminating algorithm for electing a leader in an anonymous network, Zab requires explicit membership configuration to perform leader election. Each server in the ensemble needs to have a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address.
+Because there is no terminating algorithm for electing a leader in an anonymous network, Zab requires
+explicit membership configuration to perform leader election. Each server in the ensemble needs to have
+a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address.
 
 Use [`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec) to get the hostnames
 of the Pods in the `zk` StatefulSet.
@@ -239,8 +248,10 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
 ```
 
 <!--
-The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of `<statefulset name>-<ordinal index>`. Because the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and
-`zk-2`.
+The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index.
+The hostnames take the form of `<statefulset name>-<ordinal index>`. Because the `replicas`
+field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their
+hostnames set to `zk-0`, `zk-1`, and `zk-2`.
 -->
 StatefulSet 控制器基于每个 Pod 的序号索引为它们各自提供一个唯一的主机名。
 主机名采用 `<statefulset 名称>-<序数索引>` 的形式。
@@ -303,7 +314,9 @@ zk-2.zk-hs.default.svc.cluster.local
 ```
 
 <!--
-The A records in [Kubernetes DNS](/docs/concepts/services-networking/dns-pod-service/) resolve the FQDNs to the Pods' IP addresses. If Kubernetes reschedules the Pods, it will update the A records with the Pods' new IP addresses, but the A records names will not change.
+The A records in [Kubernetes DNS](/docs/concepts/services-networking/dns-pod-service/) resolve
+the FQDNs to the Pods' IP addresses. If Kubernetes reschedules the Pods, it will update the
+A records with the Pods' new IP addresses, but the A records names will not change.
 
 ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use `kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod.
 -->
@@ -349,7 +362,10 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
 <!--
 ### Achieving consensus
 
-Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processes in the system to agree on which processes have committed which data. If two Pods are launched with the same ordinal, two ZooKeeper servers would both identify themselves as the same server.
+Consensus protocols require that the identifiers of each participant be unique. No two participants
+in the Zab protocol should claim the same unique identifier. This is necessary to allow the processes
+in the system to agree on which processes have committed which data. If two Pods are launched with
+the same ordinal, two ZooKeeper servers would both identify themselves as the same server.
 -->
 ### 达成共识 {#achieving-consensus}
 
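In this deployment, each server's unique identifier is its `myid` file, derived from the Pod's ordinal. You can verify the mapping with a loop like the one used for the hostnames (a sketch assuming the data directory used by this tutorial's manifest):

```shell
# Print each server's myid; it should be the Pod's ordinal + 1.
for i in 0 1 2; do
  echo "myid zk-$i"
  kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid
done
```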
@@ -412,7 +428,9 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
 ```
 
 <!--
-When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running and Ready), or they will fail to do so (if either of the conditions are not met). No state will arise where one server acknowledges a write on behalf of another.
+When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the
+value (if leader election has succeeded and at least two of the Pods are Running and Ready), or they will fail to
+do so (if either of the conditions are not met). No state will arise where one server acknowledges a write on behalf of another.
 -->
 当服务器使用 Zab 协议尝试提交一个值的时候,它们会达成一致并成功提交这个值
 (如果领导者选举成功并且至少有两个 Pod 处于 Running 和 Ready 状态),
@@ -734,7 +752,8 @@ kubectl get sts zk -o yaml
 ```
 
 <!--
-The command used to start the ZooKeeper servers passed the configuration as command line parameter. You can also use environment variables to pass configuration to the ensemble.
+The command used to start the ZooKeeper servers passed the configuration as command line parameter.
+You can also use environment variables to pass configuration to the ensemble.
 -->
 用于启动 ZooKeeper 服务器的命令将这些配置作为命令行参数传给了 ensemble。
 你也可以通过环境变量来传入这些配置。
@@ -889,7 +908,8 @@ F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
 ```
 
 <!--
-By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server's data directory, it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing to its WAL and storing its snapshots.
+By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server's data directory,
+it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing to its WAL and storing its snapshots.
 
 Use the command below to get the file permissions of the ZooKeeper data directory on the `zk-0` Pod.
 -->
@@ -903,7 +923,8 @@ kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data
 ```
 
 <!--
-Because the `fsGroup` field of the `securityContext` object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write its data.
+Because the `fsGroup` field of the `securityContext` object is set to 1000, the ownership of the Pods'
+PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write its data.
 -->
 由于 `securityContext` 对象的 `fsGroup` 字段设置为 1000,
 Pod 的 PersistentVolume 的所有权属于 zookeeper 用户组,
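The relevant fragment of the Pod template is a plain `securityContext` block; a minimal sketch (UID/GID 1000 is what this tutorial's manifest uses for the zookeeper user and group):

```yaml
spec:
  # Run the container as the zookeeper user (UID 1000) and, via fsGroup,
  # make mounted volumes group-owned by GID 1000 so the process can
  # write its WAL and snapshots to the PersistentVolume.
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
```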
@@ -1698,4 +1719,3 @@ You should always allocate additional capacity for critical services so that the
 * 使用 `kubectl uncordon` 解除你集群中所有节点的隔离。
 * 你需要删除在本教程中使用的 PersistentVolume 的持久存储介质。
   请遵循必须的步骤,基于你的环境、存储配置和制备方法,保证回收所有的存储。
-