fix whitespace problems introduced by the IDE

Signed-off-by: tim-zju <21651152@zju.edu.cn>
reviewable/pr2036/r4
tim-zju 2016-12-22 19:47:40 +08:00
parent 8edcfc05b8
commit 1f7b3148f2
4 changed files with 182 additions and 182 deletions

View File

@ -20,9 +20,9 @@ Thankfully, there is a system we can use to manage our containers in a cluster e
Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes.
* Containers: These are the Docker, rkt, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly.
* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database (see the sketch after this list).
* Deployments: A Deployment provides declarative updates for Pods. You only need to describe the desired state in a Deployment object, and the Deployment controller will change the actual state to the desired state at a controlled rate for you. You can use Deployments to create new Pods, or to replace existing Pods with new ones.
* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services.
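To make the Pod idea concrete, here is a minimal sketch of a two-container Pod; the Pod name, images, and shared log volume are illustrative, not taken from this article:
```shell
# Create a hypothetical Pod in which a web server and a log processor
# share the same log directory through an emptyDir volume.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-processor
    image: busybox
    command: ["sh", "-c", "touch /var/log/nginx/access.log; tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
EOF
```
Both containers are scheduled onto the same node, share the `logs` volume, and live and die as a unit.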
## Step 1: Creating the Container
@ -371,7 +371,7 @@ At this point, the local directory looks like this
```shell
$ ls
Dockerfile
app
db-deployment.yml
db-service.yml

View File

@ -15,7 +15,7 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version)
## Networking
Networking is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
### Linux
The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.
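As a concrete sketch of this approach, the static routes on a Linux node might look like the following; all subnets and addresses here are example values, not taken from this guide:
```shell
# Pods on this node use 10.244.1.0/24 (example). Route traffic destined for
# another node's pod subnet via that node's address on the "public" NIC.
ip route add 10.244.2.0/24 via 192.168.0.12 dev eth0
ip route add 10.244.3.0/24 via 192.168.0.13 dev eth0
```
On the Windows Server side, analogous routes can be created with PowerShell's `New-NetRoute` cmdlet.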

View File

@ -11,15 +11,15 @@ title: Running ZooKeeper, A CP Distributed System
---
{% capture overview %}
This tutorial demonstrates [Apache ZooKeeper](https://zookeeper.apache.org) on
Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/),
[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
and [PodAntiAffinity](/docs/user-guide/node-selection/).
{% endcapture %}
{% capture prerequisites %}
Before starting this tutorial, you should be familiar with the following
Kubernetes concepts.
* [Pods](/docs/user-guide/pods/single-container/)
@ -34,16 +34,16 @@ Kubernetes concepts.
* [kubectl CLI](/docs/user-guide/kubectl)
You will require a cluster with at least four nodes, and each node will require
at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and
drain the cluster's nodes. **This means that all Pods on the cluster's nodes
will be terminated and evicted, and the nodes will, temporarily, become
unschedulable.** You should use a dedicated cluster for this tutorial, or you
should ensure that the disruption you cause will not interfere with other
tenants.
This tutorial assumes that your cluster is configured to dynamically provision
PersistentVolumes. If your cluster is not configured to do so, you
will have to manually provision three 20 GiB volumes prior to starting this
tutorial.
{% endcapture %}
@ -60,51 +60,51 @@ After this tutorial, you will know the following.
#### ZooKeeper Basics
[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a
distributed, open-source coordination service for distributed applications.
ZooKeeper allows you to read, write, and observe updates to data. Data are
organized in a file-system-like hierarchy and replicated to all ZooKeeper
servers in the ensemble (a set of ZooKeeper servers). All operations on data
are atomic and sequentially consistent. ZooKeeper ensures this by using the
[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf)
consensus protocol to replicate a state machine across all servers in the ensemble.
The ensemble uses the Zab protocol to elect a leader, and
data cannot be written until a leader is elected. Once a leader is
elected, the ensemble uses Zab to ensure that all writes are replicated to a
quorum before they are acknowledged and made visible to clients. Without respect
to weighted quorums, a quorum is a majority component of the ensemble containing
the current leader. For instance, if the ensemble has three servers, a component
that contains the leader and one other server constitutes a quorum. If the
ensemble cannot achieve a quorum, data cannot be written.
ZooKeeper servers keep their entire state machine in memory, but every mutation
is written to a durable WAL (Write Ahead Log) on storage media. When a server
crashes, it can recover its previous state by replaying the WAL. In order to
prevent the WAL from growing without bound, ZooKeeper servers will periodically
snapshot their in-memory state to storage media. These snapshots can be loaded
directly into memory, and all WAL entries that preceded the snapshot may be
safely discarded.
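Once the ensemble created in the next section is running, you can see both kinds of artifacts on storage media; the `version-2` subdirectory is ZooKeeper's default on-disk layout, and the data directory path matches this tutorial's manifest:
```shell
# List the snapshot files (snapshot.*) and WAL segments (log.*) of one server.
kubectl exec zk-0 -- ls -l /var/lib/zookeeper/data/version-2
```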
### Creating a ZooKeeper Ensemble
The manifest below contains a
[Headless Service](/docs/user-guide/services/#headless-services),
a [ConfigMap](/docs/user-guide/configmap/),
a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/).
{% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %}
Open a command terminal, and use
[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the
manifest.
```shell
kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
```
This creates the `zk-headless` Headless Service, the `zk-config` ConfigMap,
the `zk-budget` PodDisruptionBudget, and the `zk` StatefulSet.
```shell
@ -142,29 +142,29 @@ zk-2 0/1 Running 0 19s
zk-2 1/1 Running 0 40s
```
The StatefulSet controller creates three Pods, and each Pod has a container with
a [ZooKeeper 3.4.9](http://www-us.apache.org/dist/zookeeper/zookeeper-3.4.9/) server.
#### Facilitating Leader Election
As there is no terminating algorithm for electing a leader in an anonymous
network, Zab requires explicit membership configuration in order to perform
leader election. Each server in the ensemble needs to have a unique
identifier, all servers need to know the global set of identifiers, and each
identifier needs to be associated with a network address.
Use [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to get the hostnames
of the Pods in the `zk` StatefulSet.
```shell
for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
```
The StatefulSet controller provides each Pod with a unique hostname based on its
ordinal index. The hostnames take the form `<statefulset name>-<ordinal index>`.
As the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's
controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and
`zk-2`.
```shell
zk-0
@ -172,9 +172,9 @@ zk-1
zk-2
```
The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and
each server's identifier is stored in a file called `myid` in the server's
data directory.
Examine the contents of the `myid` file for each server.
@ -182,7 +182,7 @@ Examine the contents of the `myid` file for each server.
for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
```
As the identifiers are natural numbers and the ordinal indices are non-negative
integers, you can generate an identifier by adding one to the ordinal.
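A minimal sketch of how a startup script could perform that conversion; the variable names are illustrative:
```shell
# Derive the ZooKeeper id from a hostname like zk-0: take the ordinal after
# the last '-' and add one.
HOST=$(hostname -s)
ORD=${HOST##*-}
echo $((ORD + 1)) > /var/lib/zookeeper/data/myid
```
The output below, from the earlier `cat` loop, shows the resulting identifiers.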
```shell
@ -200,7 +200,7 @@ Get the FQDN (Fully Qualified Domain Name) of each Pod in the `zk` StatefulSet.
for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
```
The `zk-headless` Service creates a domain for all of the Pods,
`zk-headless.default.svc.cluster.local`.
```shell
@ -209,11 +209,11 @@ zk-1.zk-headless.default.svc.cluster.local
zk-2.zk-headless.default.svc.cluster.local
```
The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses.
If the Pods are rescheduled, the A records will be updated with the Pods' new IP
addresses, but the A record's names will not change.
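You can check the resolution yourself; this sketch assumes cluster DNS is running and that the `busybox` image can be pulled:
```shell
# Resolve one Pod's FQDN from a throwaway Pod.
kubectl run -i --tty dns-test --image=busybox --restart=Never -- \
  nslookup zk-0.zk-headless.default.svc.cluster.local
```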
ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use
`kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod.
```
@ -222,8 +222,8 @@ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
For the `server.1`, `server.2`, and `server.3` properties at the bottom of
the file, the `1`, `2`, and `3` correspond to the identifiers in the
ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in
the `zk` StatefulSet.
```shell
clientPort=2181
@ -244,16 +244,16 @@ server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
#### Achieving Consensus
Consensus protocols require that the identifiers of each participant be
unique. No two participants in the Zab protocol should claim the same unique
identifier. This is necessary to allow the processes in the system to agree on
which processes have committed which data. If two Pods were launched with the
same ordinal, two ZooKeeper servers would both identify themselves as the same
server.
When you created the `zk` StatefulSet, the StatefulSet's controller created
each Pod sequentially, in the order defined by the Pods' ordinal indices, and it
waited for each Pod to be Running and Ready before creating the next Pod.
```shell
kubectl get pods -w -l app=zk
@ -277,7 +277,7 @@ zk-2 1/1 Running 0 40s
The A records for each Pod are only entered when the Pod becomes Ready. Therefore,
the FQDNs of the ZooKeeper servers will only resolve to a single endpoint, and that
endpoint will be the unique ZooKeeper server claiming the identity configured
in its `myid` file.
```shell
@ -286,7 +286,7 @@ zk-1.zk-headless.default.svc.cluster.local
zk-2.zk-headless.default.svc.cluster.local
```
This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files
represent a correctly configured ensemble.
```shell
@ -295,16 +295,16 @@ server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
```
When the servers use the Zab protocol to attempt to commit a value, they will
either achieve consensus and commit the value (if leader election has succeeded
and at least two of the Pods are Running and Ready), or they will fail to do so
(if either of the aforementioned conditions are not met). No state will arise
where one server acknowledges a write on behalf of another.
#### Sanity Testing the Ensemble
The most basic sanity test is to write some data to one ZooKeeper server and
to read the data from another.
Use the `zkCli.sh` script to write `world` to the path `/hello` on the `zk-0` Pod.
@ -327,7 +327,7 @@ Get the data from the `zk-1` Pod.
kubectl exec zk-1 zkCli.sh get /hello
```
The data that you created on `zk-0` is available on all of the servers in the
ensemble.
```shell
@ -351,12 +351,12 @@ numChildren = 0
#### Providing Durable Storage
As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section,
ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots
of its in-memory state to storage media. Using WALs to provide durability is a common
technique for applications that use consensus protocols to achieve a replicated
state machine and for storage applications in general.
Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the
`zk` StatefulSet.
```shell
@ -392,7 +392,7 @@ Reapply the manifest in `zookeeper.yaml`.
kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
```
The `zk` StatefulSet will be created, but, as they already exist, the other API
Objects in the manifest will not be modified.
```shell
@ -429,14 +429,14 @@ zk-2 0/1 Running 0 19s
zk-2 1/1 Running 0 40s
```
Get the value you entered during the [sanity test](#sanity-testing-the-ensemble),
from the `zk-2` Pod.
```shell
kubectl exec zk-2 zkCli.sh get /hello
```
Even though all of the Pods in the `zk` StatefulSet have been terminated and
recreated, the ensemble still serves the original value.
```shell
@ -457,8 +457,8 @@ dataLength = 5
numChildren = 0
```
The `volumeClaimTemplates` field of the `zk` StatefulSet's `spec` specifies a
PersistentVolume that will be provisioned for each Pod.
```yaml
volumeClaimTemplates:
@ -474,8 +474,8 @@ volumeClaimTemplates:
```
The StatefulSet controller generates a PersistentVolumeClaim for each Pod in
the StatefulSet.
Get the StatefulSet's PersistentVolumeClaims.
@ -483,7 +483,7 @@ Get the StatefulSet's PersistentVolumeClaims.
kubectl get pvc -l app=zk
```
When the StatefulSet recreated its Pods, the Pods' PersistentVolumes were
remounted.
```shell
@ -502,19 +502,19 @@ volumeMounts:
mountPath: /var/lib/zookeeper
```
When a Pod in the `zk` StatefulSet is (re)scheduled, it will always have the
same PersistentVolume mounted to the ZooKeeper server's data directory.
Even when the Pods are rescheduled, all of the writes made to the ZooKeeper
servers' WALs, and all of their snapshots, remain durable.
### Ensuring Consistent Configuration
As noted in the [Facilitating Leader Election](#facilitating-leader-election) and
[Achieving Consensus](#achieving-consensus) sections, the servers in a
ZooKeeper ensemble require consistent configuration in order to elect a leader
and form a quorum. They also require consistent configuration of the Zab protocol
in order for the protocol to work correctly over a network. You can use
ConfigMaps to achieve this.
Get the `zk-config` ConfigMap.
@ -532,8 +532,8 @@ data:
tick: "2000"
```
The `env` field of the `zk` StatefulSet's Pod `template` reads the ConfigMap
into environment variables. These variables are injected into the containers'
environment.
```yaml
@ -581,7 +581,7 @@ env:
```
The entry point of the container invokes a bash script, `zkGenConfig.sh`, prior to
launching the ZooKeeper server process. This bash script generates the
ZooKeeper configuration files from the supplied environment variables.
```yaml
@ -597,8 +597,8 @@ Examine the environment of all of the Pods in the `zk` StatefulSet.
for i in 0 1 2; do kubectl exec zk-$i env | grep ZK_; echo ""; done
```
All of the variables populated from `zk-config` contain identical values. This
allows the `zkGenConfig.sh` script to create consistent configurations for all
of the ZooKeeper servers in the ensemble.
```shell
@ -653,16 +653,16 @@ ZK_LOG_DIR=/var/log/zookeeper
#### Configuring Logging
One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default,
it uses a time and size based rolling file appender for its logging configuration.
Get the logging configuration from one of Pods in the `zk` StatefulSet.
```shell
kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties
```
The logging configuration below will cause the ZooKeeper process to write all
of its logs to the standard output file stream.
```shell
@ -675,20 +675,20 @@ log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
```
This is the simplest possible way to safely log inside the container. As the
application's logs are being written to standard out, Kubernetes will handle
log rotation for you. Kubernetes also implements a sane retention policy that
ensures application logs written to standard out and standard error do not
exhaust local storage media.
Use [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs/) to retrieve the last
few log lines from one of the Pods.
```shell
kubectl logs zk-0 --tail 20
```
Application logs that are written to standard out or standard error are viewable
using `kubectl logs` and from the Kubernetes Dashboard.
```shell
@ -714,19 +714,19 @@ using `kubectl logs` and from the Kubernetes Dashboard.
2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)
```
Kubernetes also supports more powerful, but more complex, logging integrations
with [Google Cloud Logging](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md)
and [ELK](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-es/README.md).
For cluster level log shipping and aggregation, you should consider deploying a
[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html)
container to rotate and ship your logs.
#### Configuring a Non-Privileged User
The best practices with respect to allowing an application to run as a privileged
user inside of a container are a matter of debate. If your organization requires
that applications be run as a non-privileged user, you can use a
[SecurityContext](/docs/user-guide/security-context/) to control the user that
the entry point runs as.
The `zk` StatefulSet's Pod `template` contains a SecurityContext.
@ -737,7 +737,7 @@ securityContext:
fsGroup: 1000
```
In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000
corresponds to the zookeeper group.
Get the ZooKeeper process information from the `zk-0` Pod.
@ -746,7 +746,7 @@ Get the ZooKeeper process information from the `zk-0` Pod.
kubectl exec zk-0 -- ps -elf
```
As the `runAsUser` field of the `securityContext` object is set to 1000,
instead of running as root, the ZooKeeper process runs as the zookeeper user.
```shell
@ -755,8 +755,8 @@ F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
```
By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's
data directory, it is only accessible by the root user. This configuration
prevents the ZooKeeper process from writing to its WAL and storing its snapshots.
Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod.
@ -765,8 +765,8 @@ Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod.
kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data
```
As the `fsGroup` field of the `securityContext` object is set to 1000,
the ownership of the Pods' PersistentVolumes is set to the zookeeper group,
and the ZooKeeper process is able to successfully read and write its data.
```shell
@ -775,21 +775,21 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
### Managing the ZooKeeper Process
The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision)
indicates that "You will want to have a supervisory process that
manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog
(supervisory process) to restart failed processes in a distributed system is a
common pattern. When deploying an application in Kubernetes, rather than using
an external utility as a supervisory process, you should use Kubernetes as the
watchdog for your application.
#### Handling Process Failure
[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how
Kubernetes handles process failures for the entry point of the container in a Pod.
For Pods in a StatefulSet, the only appropriate RestartPolicy is Always, and this
is the default value. For stateful applications you should **never** override
the default policy.
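You can confirm the policy on a running Pod with a JSONPath query:
```shell
# Prints "Always" for the zk-0 Pod.
kubectl get pod zk-0 -o jsonpath='{.spec.restartPolicy}'
```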
@ -799,7 +799,7 @@ Examine the process tree for the ZooKeeper server running in the `zk-0` Pod.
kubectl exec zk-0 -- ps -ef
```
The command used as the container's entry point has PID 1, and the
ZooKeeper process, a child of the entry point, has PID 23.
@ -824,8 +824,8 @@ In another terminal, kill the ZooKeeper process in Pod `zk-0`.
```
The death of the ZooKeeper process caused its parent process to terminate. As
the RestartPolicy of the container is Always, the parent process was relaunched.
```shell
@ -840,19 +840,19 @@ zk-0 1/1 Running 1 29m
```
If your application uses a script (such as zkServer.sh) to launch the process
that implements the application's business logic, the script must terminate with the
child process. This ensures that Kubernetes will restart the application's
container when the process implementing the application's business logic fails.
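A common way to achieve this is to have the launcher `exec` the server, so the script's process is replaced by, and shares the fate of, the server process. A sketch (the use of `exec` here is illustrative, not necessarily how the tutorial's image is built):
```shell
#!/usr/bin/env bash
# Generate the configuration, then replace this shell with the server process
# so the container's entry point exits exactly when the server exits.
zkGenConfig.sh && exec zkServer.sh start-foreground
```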
#### Testing for Liveness
Configuring your application to restart failed processes is not sufficient to
keep a distributed system healthy. There are many scenarios where
a system's processes can be both alive and unresponsive, or otherwise
unhealthy. You should use liveness probes in order to notify Kubernetes
that your application's processes are unhealthy and should be restarted.
@ -869,7 +869,7 @@ The Pod `template` for the `zk` StatefulSet specifies a liveness probe.
```
The probe calls a simple bash script that uses the ZooKeeper `ruok` four letter
word to test the server's health.
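A sketch of what such a script could look like; the actual `zkOk.sh` in the tutorial's image may differ:
```shell
#!/usr/bin/env bash
# Send the "ruok" four letter word to the local server; healthy servers
# respond with "imok".
OK=$(echo ruok | nc 127.0.0.1 ${1:-2181})
if [ "$OK" = "imok" ]; then
  exit 0
fi
exit 1
```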
@ -900,7 +900,7 @@ kubectl exec zk-0 -- rm /opt/zookeeper/bin/zkOk.sh
```
When the liveness probe for the ZooKeeper process fails, Kubernetes will
automatically restart the process for you, ensuring that unhealthy processes in
the ensemble are restarted.
@ -921,10 +921,10 @@ zk-0 1/1 Running 1 1h
#### Testing for Readiness
Readiness is not the same as liveness. If a process is alive, it is scheduled
and healthy. If a process is ready, it is able to process input. Liveness is
a necessary, but not sufficient, condition for readiness. There are many cases,
particularly during initialization and termination, when a process can be
alive but not ready.
@ -932,8 +932,8 @@ If you specify a readiness probe, Kubernetes will ensure that your application's
processes will not receive network traffic until their readiness checks pass.
For a ZooKeeper server, liveness implies readiness. Therefore, the readiness
probe from the `zookeeper.yaml` manifest is identical to the liveness probe.
```yaml
@ -946,28 +946,28 @@ probe from the `zookeeper.yaml` manifest is identical to the liveness probe.
```
Even though the liveness and readiness probes are identical, it is important
to specify both. This ensures that only healthy servers in the ZooKeeper
ensemble receive network traffic.
### Tolerating Node Failure
ZooKeeper needs a quorum of servers in order to successfully commit mutations
to data. For a three server ensemble, two servers must be healthy in order for
writes to succeed. In quorum based systems, members are deployed across failure
domains to ensure availability. In order to avoid an outage due to the loss of an
individual machine, best practices preclude co-locating multiple instances of the
application on the same machine.
By default, Kubernetes may co-locate Pods in a StatefulSet on the same node.
For the three server ensemble you created, if two servers reside on the same
node, and that node fails, the clients of your ZooKeeper service will experience
an outage until at least one of the Pods can be rescheduled.
You should always provision additional capacity to allow the processes of critical
systems to be rescheduled in the event of node failures. If you do so, then the
outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper
servers. However, if you want your service to tolerate node failures with no downtime,
you should use a `PodAntiAffinity` annotation.
@ -985,7 +985,7 @@ kubernetes-minion-group-a5aq
kubernetes-minion-group-2g2d
```
This is because the Pods in the `zk` StatefulSet contain a
[PodAntiAffinity](/docs/user-guide/node-selection/) annotation.
```yaml
@ -1006,11 +1006,11 @@ scheduler.alpha.kubernetes.io/affinity: >
}
```
The `requiredDuringSchedulingRequiredDuringExecution` field tells the
Kubernetes Scheduler that it should never co-locate two Pods from the `zk-headless`
Service in the domain defined by the `topologyKey`. The `topologyKey`
`kubernetes.io/hostname` indicates that the domain is an individual node. Using
different rules, labels, and selectors, you can extend this technique to spread
your ensemble across physical, network, and power failure domains.
### Surviving Maintenance
@ -1018,8 +1018,8 @@ your ensemble across physical, network, and power failure domains.
**In this section you will cordon and drain nodes. If you are using this tutorial
on a shared cluster, be sure that this will not adversely affect other tenants.**
The previous section showed you how to spread your Pods across nodes to survive
unplanned node failures, but you also need to plan for temporary node failures
that occur due to planned maintenance.
Get the nodes in your cluster.
@ -1028,7 +1028,7 @@ Get the nodes in your cluster.
kubectl get nodes
```
Use [`kubectl cordon`](/docs/user-guide/kubectl/kubectl_cordon/) to
cordon all but four of the nodes in your cluster.
```shell{% raw %}
@ -1041,8 +1041,8 @@ Get the `zk-budget` PodDisruptionBudget.
kubectl get poddisruptionbudget zk-budget
```
The `min-available` field indicates to Kubernetes that at least two Pods from
the `zk` StatefulSet must be available at any time.
```yaml
NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
@ -1065,7 +1065,7 @@ kubernetes-minion-group-ixsl
kubernetes-minion-group-i4c4
{% endraw %}```
Use [`kubectl drain`](/docs/user-guide/kubectl/kubectl_drain/) to cordon and
drain the node on which the `zk-0` Pod is scheduled.
```shell {% raw %}
@ -1075,7 +1075,7 @@ pod "zk-0" deleted
node "kubernetes-minion-group-pb41" drained
{% endraw %}```
As there are four nodes in your cluster, `kubectl drain` succeeds and the
`zk-0` Pod is rescheduled to another node.
```
@ -1095,7 +1095,7 @@ zk-0 0/1 Running 0 51s
zk-0 1/1 Running 0 1m
```
Keep watching the StatefulSet's Pods in the first terminal and drain the node on which
`zk-1` is scheduled.
```shell{% raw %}
@ -1105,8 +1105,8 @@ pod "zk-1" deleted
node "kubernetes-minion-group-ixsl" drained
{% endraw %}```
The `zk-1` Pod cannot be scheduled. As the `zk` StatefulSet contains a
`PodAntiAffinity` annotation preventing co-location of the Pods, and as only
two nodes are schedulable, the Pod will remain in a Pending state.
```shell
@ -1133,7 +1133,7 @@ zk-1 0/1 Pending 0 0s
zk-1 0/1 Pending 0 0s
```
Continue to watch the Pods of the stateful set, and drain the node on which
`zk-2` is scheduled.
```shell{% raw %}
@ -1145,9 +1145,9 @@ There are pending pods when an error occurred: Cannot evict pod as it would viol
pod/zk-2
{% endraw %}```
Use `CTRL-C` to terminate kubectl.
You cannot drain the third node because evicting `zk-2` would violate `zk-budget`. However,
the node will remain cordoned.
Use `zkCli.sh` to retrieve the value you entered during the sanity test from `zk-0`.
@ -1232,9 +1232,9 @@ node "kubernetes-minion-group-ixsl" uncordoned
```
You can use `kubectl drain` in conjunction with PodDisruptionBudgets to ensure that your service
remains available during maintenance. If drain is used to cordon nodes and evict pods prior to
taking the node offline for maintenance, services that express a disruption budget will have that
budget respected. You should always allocate additional capacity for critical services so that
their Pods can be immediately rescheduled.
{% endcapture %}
@ -1242,8 +1242,8 @@ their Pods can be immediately rescheduled.
{% capture cleanup %}
* Use `kubectl uncordon` to uncordon all the nodes in your cluster.
* You will need to delete the persistent storage media for the PersistentVolumes
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.
{% endcapture %}
{% include templates/tutorial.md %}

View File

@ -85,8 +85,8 @@ NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-svc 10.0.0.208 80/TCP 0s
```
With the above commands, we first create resources under docs/user-guide/nginx/ and print the resources created with `-o name` output format
(which prints each resource as resource/name). Then we `grep` only the Service, and print it with `kubectl get`.
If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag.
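For example, assuming a hypothetical layout such as `project/k8s/development/` with `configmaps/`, `deployments/`, and `services/` subdirectories, you could process the whole tree in one pass:
```shell
# Recursively create every resource defined under the directory tree.
kubectl create -f project/k8s/development --recursive -o name
```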