Reduce heading levels by 1.

reviewable/pr2287/r1
steveperry-53 2017-01-18 10:18:37 -08:00
parent 0f6b992771
commit 02d17ade5f
40 changed files with 178 additions and 187 deletions

View File

@@ -24,7 +24,7 @@
 {% if whatsnext %}
-### What's next
+## What's next
 {{ whatsnext }}

View File

@@ -15,7 +15,7 @@
 {% if prerequisites %}
-### Before you begin
+## Before you begin
 {{ prerequisites }}
@@ -48,7 +48,7 @@
 {% if whatsnext %}
-### What's next
+## What's next
 {{ whatsnext }}

View File

@@ -15,7 +15,7 @@
 {% if objectives %}
-### Objectives
+## Objectives
 {{ objectives }}
@@ -28,7 +28,7 @@
 {% if prerequisites %}
-### Before you begin
+## Before you begin
 {{ prerequisites }}
@@ -52,7 +52,7 @@
 {% if cleanup %}
-### Cleaning up
+## Cleaning up
 {{ cleanup }}
@@ -61,7 +61,7 @@
 {% if whatsnext %}
-### What's next
+## What's next
 {{ whatsnext }}

View File

@@ -22,7 +22,7 @@ guarantees about the ordering of deployment and scaling.
 {% capture body %}
-### Using StatefulSets
+## Using StatefulSets
 StatefulSets are valuable for applications that require one or more of the
 following.
@@ -39,7 +39,7 @@ provides a set of stateless replicas. Controllers such as
 [Deployment](/docs/user-guide/deployments/) or
 [ReplicaSet](/docs/user-guide/replicasets/) may be better suited to your stateless needs.
-### Limitations
+## Limitations
 * StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.
 * As with all alpha/beta resources, you can disable StatefulSet through the `--runtime-config` option passed to the apiserver.
 * The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
@@ -47,7 +47,7 @@ provides a set of stateless replicas. Controllers such as
 * StatefulSets currently require a [Headless Service](/docs/user-guide/services/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
 * Updating an existing StatefulSet is currently a manual process.
-### Components
+## Components
 The example below demonstrates the components of a StatefulSet.
 * A Headless Service, named nginx, is used to control the network domain.
@@ -103,17 +103,17 @@ spec:
 storage: 1Gi
 ```
-### Pod Identity
+## Pod Identity
 StatefulSet Pods have a unique identity that is comprised of an ordinal, a
 stable network identity, and stable storage. The identity sticks to the Pod,
 regardless of which node it's (re)scheduled on.
-__Ordinal Index__
+### Ordinal Index
 For a StatefulSet with N replicas, each Pod in the StatefulSet will be
 assigned an integer ordinal, in the range [0,N), that is unique over the Set.
-__Stable Network ID__
+### Stable Network ID
 Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet
 and the ordinal of the Pod. The pattern for the constructed hostname
@@ -139,7 +139,7 @@ Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain
 Note that Cluster Domain will be set to `cluster.local` unless
 [otherwise configured](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md#how-do-i-configure-it).
-__Stable Storage__
+### Stable Storage
 Kubernetes creates one [PersistentVolume](/docs/user-guide/volumes/) for each
 VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
@@ -149,7 +149,7 @@ PersistentVolume Claims. Note that, the PersistentVolumes associated with the
 Pods' PersistentVolume Claims are not deleted when the Pods, or StatefulSet are deleted.
 This must be done manually.
-### Deployment and Scaling Guarantee
+## Deployment and Scaling Guarantee
 * For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
 * When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
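
The stable-identity behavior described in this file can be spot-checked from a shell. A minimal sketch, assuming the StatefulSet behind the `nginx` headless Service is named `web` and runs two replicas (the full manifest is elided from this diff):

```shell
# Each Pod gets a stable hostname of the form <statefulset-name>-<ordinal>.
for i in 0 1; do
  kubectl exec web-$i -- sh -c 'hostname'
done
# Expected output: web-0, then web-1.
```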

View File

@@ -7,7 +7,7 @@ This page explains how Kubernetes objects are represented in the Kubernetes API,
 {% endcapture %}
 {% capture body %}
-### Understanding Kubernetes Objects
+## Understanding Kubernetes Objects
 *Kubernetes Objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
@@ -19,7 +19,7 @@ A Kubernetes object is a "record of intent"--once you create the object, the Kub
 To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you; you can also use the Kubernetes API directly in your own programs. Kubernetes currently provides a `golang` [client library](https://github.com/kubernetes/client-go) for this purpose, and other language libraries (such as [Python](https://github.com/kubernetes-incubator/client-python)) are being developed.
-#### Object Spec and Status
+### Object Spec and Status
 Every Kubernetes object includes two nested object fields that govern the object's configuration: the object *spec* and the object *status*. The *spec*, which you must provide, describes your *desired state* for the object--the characteristics that you want the object to have. The *status* describes the *actual state* for the object, and is supplied and updated by the Kubernetes system. At any given time, the Kubernetes Control Plane actively manages an object's actual state to match the desired state you supplied.
@@ -27,7 +27,7 @@ For example, a Kubernetes Deployment is an object that can represent an applicat
 For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#spec-and-status).
-#### Describing a Kubernetes Object
+### Describing a Kubernetes Object
 When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via `kubectl`), that API request must include that information as JSON in the request body. **Most often, you provide the information to `kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API request.
@@ -47,7 +47,7 @@ The output is similar to this:
 deployment "nginx-deployment" created
 ```
-#### Required Fields
+### Required Fields
 In the `.yaml` file for the Kubernetes object you want to create, you'll need to set values for the following fields:
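
A minimal sketch of the `.yaml`-to-API workflow described above, assuming a 1.5-era cluster (the `apps/v1beta1` API version and the manifest contents are illustrative):

```shell
cat <<'EOF' > nginx-deployment.yaml
apiVersion: apps/v1beta1   # required field: apiVersion (assumed for 1.5)
kind: Deployment           # required field: kind
metadata:
  name: nginx-deployment   # required field: metadata.name
spec:                      # required field: spec, the desired state
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
EOF
kubectl create -f nginx-deployment.yaml
# kubectl converts the YAML to JSON for the API request. Expected output:
#   deployment "nginx-deployment" created
```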

View File

@@ -9,7 +9,7 @@ This page provides an overview of `Pod`, the smallest deployable object in the K
 {:toc}
 {% capture body %}
-### Understanding Pods
+## Understanding Pods
 A *Pod* is the basic building block of Kubernetes--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
@@ -29,7 +29,7 @@ The [Kubernetes Blog](http://blog.kubernetes.io) has some additional information
 Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.
-#### How Pods Manage Multiple Containers
+### How Pods Manage Multiple Containers
 Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.
@@ -39,15 +39,15 @@ Note that grouping multiple co-located and co-managed containers in a single Pod
 Pods provide two kinds of shared resources for their constituent containers: *networking* and *storage*.
-##### Networking
+#### Networking
 Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers *inside a Pod* can communicate with one another using `localhost`. When containers in a Pod communicate with entities *outside the Pod*, they must coordinate how they use the shared network resources (such as ports).
-##### Storage
+#### Storage
 A Pod can specify a set of shared storage *volumes*. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See Volumes for more information on how Kubernetes implements shared storage in a Pod.
-### Working with Pods
+## Working with Pods
 You'll rarely create individual Pods directly in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a Controller), it is scheduled to run on a Node in your cluster. The Pod remains on that Node until the process is terminated, the Pod object is deleted, the Pod is *evicted* for lack of resources, or the Node fails.
@@ -55,7 +55,7 @@ You'll rarely create individual Pods directly in Kubernetes--even singleton Pods
 Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a *Controller*, that handles the work of managing the relatively disposable Pod instances. Thus, while it is possible to use Pods directly, it's far more common in Kubernetes to manage your Pods using a Controller. See [Pods and Controllers](#pods-and-controllers) for more information on how Kubernetes uses Controllers to implement Pod scaling and healing.
-#### Pods and Controllers
+### Pods and Controllers
 A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node.
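
A rough sketch of that controller behavior (names and image are hypothetical):

```shell
# Create a Deployment; its controller manages two replicated Pods.
kubectl run hello --image=gcr.io/google-samples/node-hello:1.0 --replicas=2
kubectl get pods -l run=hello
# Delete one Pod; the controller schedules an identical replacement.
kubectl delete pod <one-of-the-listed-pods>
kubectl get pods -l run=hello   # back to two replicas
```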

View File

@@ -8,7 +8,7 @@ to objects. Clients such as tools and libraries can retrieve this metadata.
 {% endcapture %}
 {% capture body %}
-### Attaching metadata to objects
+## Attaching metadata to objects
 You can use either labels or annotations to attach metadata to Kubernetes
 objects. Labels can be used to select objects and to find
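
For illustration, a sketch of both mechanisms (Pod name and values are hypothetical):

```shell
kubectl label pods my-pod environment=production         # identifying; selectable
kubectl annotate pods my-pod description="my frontend"   # non-identifying metadata
kubectl get pods -l environment=production               # select by label
```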

View File

@@ -28,7 +28,7 @@ load-balanced access to an application running in a cluster.
 {% capture lessoncontent %}
-### Creating a Service for an application running in two pods
+## Creating a Service for an application running in two pods
 1. Run a Hello World application in your cluster:
@@ -98,7 +98,7 @@ load-balanced access to an application running in a cluster.
 where `<minikube-node-ip-address>` is the IP address of your Minikube node,
 and `<service-node-port>` is the NodePort value for your service.
-### Using a service configuration file
+## Using a service configuration file
 As an alternative to using `kubectl expose`, you can use a
 [service configuration file](/docs/user-guide/services/operations)
@@ -108,15 +108,6 @@ to create a Service.
 {% endcapture %}
-{% capture cleanup %}
-If you want to stop the Hello World application, enter these commands:
-TODO
-{% endcapture %}
 {% capture whatsnext %}
 Learn more about
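
A condensed sketch of the steps this tutorial covers (the image, the `hello-world` and `example-service` names, and the port are assumptions, not taken from the hunks above):

```shell
kubectl run hello-world --replicas=2 --port=8080 \
    --image=gcr.io/google-samples/node-hello:1.0
kubectl expose deployment hello-world --type=NodePort --name=example-service
kubectl describe services example-service    # note the NodePort value
curl http://<minikube-node-ip-address>:<service-node-port>
```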

View File

@@ -22,7 +22,7 @@ for database debugging.
 {% capture steps %}
-### Creating a pod to run a Redis server
+## Creating a pod to run a Redis server
 1. Create a pod:
@@ -51,7 +51,7 @@ for database debugging.
 6379
-### Forward a local port to a port on the pod
+## Forward a local port to a port on the pod
 1. Forward port 6379 on the local workstation to port 6379 of the redis-master pod:
@@ -77,7 +77,7 @@ for database debugging.
 {% capture discussion %}
-### Discussion
+## Discussion
 Connections made to local port 6379 are forwarded to port 6379 of the pod that
 is running the Redis server. With this connection in place you can use your
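
A condensed sketch of the forwarding flow documented here (the `redis-master` Pod name comes from the steps above; a local `redis-cli` is assumed):

```shell
kubectl port-forward redis-master 6379:6379 &
redis-cli -p 6379 ping    # the reply comes from the pod; expect PONG
```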

View File

@@ -19,13 +19,13 @@ This page shows how to use an HTTP proxy to access the Kubernetes API.
 {% capture steps %}
-### Using kubectl to start a proxy server
+## Using kubectl to start a proxy server
 This command starts a proxy to the Kubernetes API server:
 kubectl proxy --port=8080
-### Exploring the Kubernetes API
+## Exploring the Kubernetes API
 When the proxy server is running, you can explore the API using `curl`, `wget`,
 or a browser.
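
Putting the two steps together, a minimal sketch:

```shell
kubectl proxy --port=8080 &
curl http://localhost:8080/api/           # list API versions through the proxy
curl http://localhost:8080/api/v1/pods    # list Pods through the proxy
```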

View File

@@ -15,7 +15,7 @@ Kubernetes cluster.
 {% capture steps %}
-### Adding a label to a node
+## Adding a label to a node
 1. List the nodes in your cluster:
@@ -49,7 +49,7 @@ Kubernetes cluster.
 In the preceding output, you can see that the `worker0` node has a
 `disktype=ssd` label.
-### Creating a pod that gets scheduled to your chosen node
+## Creating a pod that gets scheduled to your chosen node
 This pod configuration file describes a pod that has a node selector,
 `disktype: ssd`. This means that the pod will get scheduled on a node that has
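
A minimal sketch of the labeling step (the `worker0` node name and the `disktype=ssd` label come from the steps above):

```shell
kubectl get nodes
kubectl label nodes worker0 disktype=ssd
kubectl get nodes --show-labels    # worker0 should now carry disktype=ssd
```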

View File

@@ -15,7 +15,7 @@ PersistentVolume.
 {% capture steps %}
-### Why change reclaim policy of a PersistentVolume
+## Why change reclaim policy of a PersistentVolume
 `PersistentVolumes` can have various reclaim policies, including "Retain",
 "Recycle", and "Delete". For dynamically provisioned `PersistentVolumes`,
@@ -27,7 +27,7 @@ policy. With the "Retain" policy, if a user deletes a `PersistentVolumeClaim`,
 the corresponding `PersistentVolume` is not deleted. Instead, it is moved to the
 `Released` phase, where all of its data can be manually recovered.
-### Changing the reclaim policy of a PersistentVolume
+## Changing the reclaim policy of a PersistentVolume
 1. List the PersistentVolumes in your cluster:
@@ -70,7 +70,7 @@ the corresponding `PersistentVolume` is not deleted. Instead, it is moved to
 * Learn more about [PersistentVolumes](/docs/user-guide/persistent-volumes/).
 * Learn more about [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims).
-#### Reference
+### Reference
 * [PersistentVolume](/docs/api-reference/v1/definitions/#_v1_persistentvolume)
 * [PersistentVolumeClaim](/docs/api-reference/v1/definitions/#_v1_persistentvolumeclaim)
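
A minimal sketch of the change itself, using `kubectl patch` (the PV name is a placeholder):

```shell
kubectl get pv
kubectl patch pv <your-pv-name> \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv    # the reclaim policy column should now read Retain
```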

View File

@@ -19,7 +19,7 @@ Kubernetes cluster.
 {% capture steps %}
-### Determining whether DNS horizontal autoscaling is already enabled
+## Determining whether DNS horizontal autoscaling is already enabled
 List the Deployments in your cluster in the kube-system namespace:
@@ -36,7 +36,7 @@ If you see "kube-dns-autoscaler" in the output, DNS horizontal autoscaling is
 already enabled, and you can skip to
 [Tuning autoscaling parameters](#tuning-autoscaling-parameters).
-### Getting the name of your DNS Deployment or ReplicationController
+## Getting the name of your DNS Deployment or ReplicationController
 List the Deployments in your cluster in the kube-system namespace:
@@ -63,7 +63,7 @@ The output is similar to this:
 kube-dns-v20 1 1 1 ...
 ...
-### Determining your scale target
+## Determining your scale target
 If you have a DNS Deployment, your scale target is:
@@ -80,7 +80,7 @@ where <your-rc-name> is the name of your DNS ReplicationController. For example,
 if your DNS ReplicationController name is kube-dns-v20, your scale target is
 ReplicationController/kube-dns-v20.
-### Enabling DNS horizontal autoscaling
+## Enabling DNS horizontal autoscaling
 In this section, you create a Deployment. The Pods in the Deployment run a
 container based on the `cluster-proportional-autoscaler-amd64` image.
@@ -102,7 +102,7 @@ The output of a successful command is:
 DNS horizontal autoscaling is now enabled.
-### Tuning autoscaling parameters
+## Tuning autoscaling parameters
 Verify that the kube-dns-autoscaler ConfigMap exists:
@@ -139,12 +139,12 @@ cores, `nodesPerReplica` dominates.
 There are other supported scaling patterns. For details, see
 [cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler).
-### Disable DNS horizontal autoscaling
+## Disable DNS horizontal autoscaling
 There are a few options for turning off DNS horizontal autoscaling. Which option to
 use depends on different conditions.
-#### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas
+### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas
 This option works for all situations. Enter this command:
@@ -165,7 +165,7 @@ The output displays 0 in the DESIRED and CURRENT columns:
 kube-dns-autoscaler 0 0 0 0 ...
 ...
-#### Option 2: Delete the kube-dns-autoscaler deployment
+### Option 2: Delete the kube-dns-autoscaler deployment
 This option works if kube-dns-autoscaler is under your own control, which means
 no one will re-create it:
@@ -176,7 +176,7 @@ The output is:
 deployment "kube-dns-autoscaler" deleted
-#### Option 3: Delete the kube-dns-autoscaler manifest file from the master node
+### Option 3: Delete the kube-dns-autoscaler manifest file from the master node
 This option works if kube-dns-autoscaler is under control of the
 [Addon Manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/README.md)'s
@@ -194,7 +194,7 @@ kube-dns-autoscaler Deployment.
 {% capture discussion %}
-### Understanding how DNS horizontal autoscaling works
+## Understanding how DNS horizontal autoscaling works
 * The cluster-proportional-autoscaler application is deployed separately from
 the DNS service.
@@ -215,7 +215,7 @@ the autoscaler Pod.
 * The autoscaler provides a controller interface to support two control
 patterns: *linear* and *ladder*.
-### Future enhancements
+## Future enhancements
 Control patterns, in addition to linear and ladder, that consider custom metrics
 are under consideration as a future development.
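
A condensed sketch of the enablement check and of Option 1 above:

```shell
kubectl get deployment --namespace=kube-system    # look for kube-dns-autoscaler
kubectl get configmap kube-dns-autoscaler --namespace=kube-system
# Option 1: scale the autoscaler itself down to zero replicas.
kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
```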

View File

@@ -21,7 +21,7 @@ application-level disruption SLOs you want the system to enforce.
 {% capture steps %}
-### Use `kubectl drain` to remove a node from service
+## Use `kubectl drain` to remove a node from service
 You can use `kubectl drain` to safely evict all of your pods from a
 node before you perform maintenance on the node (e.g. kernel upgrade,
@@ -64,7 +64,7 @@ kubectl uncordon <node name>
 ```
 afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
-### Draining multiple nodes in parallel
+## Draining multiple nodes in parallel
 The `kubectl drain` command should only be issued to a single node at a
 time. However, you can run multiple `kubectl drain` commands for
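
A minimal sketch of the drain/uncordon cycle:

```shell
kubectl drain <node name>      # cordon the node and evict its pods
# ...perform maintenance, e.g. a kernel upgrade...
kubectl uncordon <node name>   # resume scheduling new pods onto the node
```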

View File

@@ -19,7 +19,7 @@ in a Kubernetes Pod.
 {% capture steps %}
-### Assigning CPU and RAM resources to a container
+## Assigning CPU and RAM resources to a container
 When you create a Pod, you can request CPU and RAM resources for the containers
 that run in the Pod. You can also set limits for CPU and RAM resources. To
@@ -64,7 +64,7 @@ for the `Pod`:
 cpu: 250m
 memory: 64Mi
-### Understanding CPU and RAM units
+## Understanding CPU and RAM units
 The CPU resource is measured in *cpu*s. Fractional values are allowed. You can
 use the suffix *m* to mean milli. For example, 100m cpu is 100 millicpu, and is
@@ -89,7 +89,7 @@ If you specify a request, a Pod is guaranteed to be able to use that much
 of the resource. See
 [Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resource-qos.md) for the difference between resource limits and requests.
-### If you don't specify limits or requests
+## If you don't specify limits or requests
 If you don't specify a RAM limit, Kubernetes places no upper bound on the
 amount of RAM a Container can use. A Container could use all the RAM
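
A sketch of the resources block in context, using the request values shown above inside an otherwise illustrative Pod manifest:

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo       # illustrative name
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
    resources:
      requests:
        cpu: 250m           # a quarter of a CPU
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
EOF
```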

View File

@@ -21,7 +21,7 @@ Container is terminated.
 {% capture steps %}
-### Defining postStart and preStop handlers
+## Defining postStart and preStop handlers
 In this exercise, you create a Pod that has one Container. The Container has handlers
 for the postStart and preStop events.
@@ -60,7 +60,7 @@ The output shows the text written by the postStart handler:
 {% capture discussion %}
-### Discussion
+## Discussion
 Kubernetes sends the postStart event immediately after the Container is created.
 There is no guarantee, however, that the postStart handler is called before
@@ -83,7 +83,7 @@ unless the Pod's grace period expires. For more details, see
 * Learn more about the [lifecycle of a Pod](https://kubernetes.io/docs/user-guide/pod-states/).
-#### Reference
+### Reference
 * [Lifecycle](https://kubernetes.io/docs/resources-reference/1_5/#lifecycle-v1)
 * [Container](https://kubernetes.io/docs/resources-reference/1_5/#container-v1)
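
A sketch of the handlers in a Pod manifest (name, image, and handler commands are illustrative):

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:            # runs right after the Container is created
        exec:
          command: ["/bin/sh", "-c", "echo started > /usr/share/message"]
      preStop:              # runs before the Container is terminated
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit"]
EOF
```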

View File

@@ -19,7 +19,7 @@ in the same Pod.
 {% capture steps %}
-### Creating a Pod that runs two Containers
+## Creating a Pod that runs two Containers
 In this exercise, you create a Pod that runs two Containers. The two containers
 share a Volume that they can use to communicate. Here is the configuration file
@@ -111,7 +111,7 @@ The output shows that nginx serves a web page written by the debian container:
 {% capture discussion %}
-### Discussion
+## Discussion
 The primary reason that Pods can have multiple containers is to support
 helper applications that assist a primary application. Typical examples of
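
A sketch of the shared-volume pattern (names, images, and paths are illustrative): a debian container writes a file into an `emptyDir` volume that an nginx container serves.

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # shared scratch space for both containers
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh", "-c", "echo Hello from debian > /pod-data/index.html"]
EOF
```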

View File

@@ -30,7 +30,7 @@ When a Pod is not ready, it is removed from Service load balancers.
 {% capture steps %}
-### Defining a liveness command
+## Defining a liveness command
 Many applications running for long periods of time eventually transition to
 broken states, and cannot recover except by being restarted. Kubernetes provides
@@ -117,7 +117,7 @@ NAME READY STATUS RESTARTS AGE
 liveness-exec 1/1 Running 1 1m
 ```
-### Defining a liveness HTTP request
+## Defining a liveness HTTP request
 Another kind of liveness probe uses an HTTP GET request. Here is the configuration
 file for a Pod that runs a container based on the `gcr.io/google_containers/liveness`
@@ -173,7 +173,7 @@ the Container has been restarted:
 kubectl describe pod liveness-http
 ```
-### Using a named port
+## Using a named port
 You can use a named
 [ContainerPort](/docs/api-reference/v1/definitions/#_v1_containerport)
@@ -191,7 +191,7 @@ livenessProbe:
 port: liveness-port
 ```
-### Defining readiness probes
+## Defining readiness probes
 Sometimes, applications are temporarily unable to serve traffic.
 For example, an application might need to load large data or configuration
@@ -219,7 +219,7 @@ readinessProbe:
 {% capture discussion %}
-### Discussion
+## Discussion
 {% comment %}
 Eventually, some of this Discussion section could be moved to a concept topic.
@@ -260,7 +260,7 @@ In addition to command probes and HTTP probes, Kubernetes supports
 * Learn more about
 [Health Checking section](/docs/user-guide/walkthrough/k8s201/#health-checking).
-#### Reference
+### Reference
 * [Pod](http://kubernetes.io/docs/api-reference/v1/definitions#_v1_pod)
 * [Container](/docs/api-reference/v1/definitions/#_v1_container)
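
A sketch of an exec liveness probe like the `liveness-exec` example this page refers to (image and probe parameters are assumptions):

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec    # RESTARTS increments after the probe fails
```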

View File

@@ -16,7 +16,7 @@ application Container runs.
 {% capture steps %}
-### Creating a Pod that has an init Container
+## Creating a Pod that has an init Container
 In this exercise you create a Pod that has one application Container and one
 init Container. The init Container runs to completion before the application

View File

@@ -23,7 +23,7 @@ key-value cache and store.
 {% capture steps %}
-### Configuring a volume for a Pod
+## Configuring a volume for a Pod
 In this exercise, you create a Pod that runs one Container. This Pod has a
 Volume of type
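
A sketch with an `emptyDir` Volume standing in for the volume type named on the original page (which is elided from this diff); names and image are illustrative:

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: redis-storage-demo
spec:
  volumes:
  - name: redis-storage
    emptyDir: {}            # illustrative volume type
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis
EOF
```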

View File

@@ -19,7 +19,7 @@ in a Kubernetes Pod.
 {% capture steps %}
-### Defining a command and arguments when you create a Pod
+## Defining a command and arguments when you create a Pod
 When you create a Pod, you can define a command and arguments for the
 containers that run in the Pod. To define a command, include the `command`
@@ -60,7 +60,7 @@ from the Pod:
 command-demo
 tcp://10.3.240.1:443
-### Using environment variables to define arguments
+## Using environment variables to define arguments
 In the preceding example, you defined the arguments directly by
 providing strings. As an alternative to providing strings directly,
@@ -81,7 +81,7 @@ and
 NOTE: The environment variable appears in parentheses, `"$(VAR)"`. This is
 required for the variable to be expanded in the `command` or `args` field.
-### Running a command in a shell
+## Running a command in a shell
 In some cases, you need your command to run in a shell. For example, your
 command might consist of several commands piped together, or it might be a shell
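
A sketch combining `command`, `args`, and env-var expansion as described above (names and values are illustrative; the heredoc is quoted so the local shell does not expand `$(MESSAGE)` itself):

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: command-demo-container
    image: debian
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]    # parentheses are required for expansion
EOF
```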

View File

@@ -19,7 +19,7 @@ in a Kubernetes Pod.
 {% capture steps %}
-### Defining an environment variable for a container
+## Defining an environment variable for a container
 When you create a Pod, you can set environment variables for the containers
 that run in the Pod. To set environment variables, include the `env` field in

View File

@@ -15,7 +15,7 @@ encryption keys, into Pods.
 {% capture steps %}
-### Converting your secret data to a base-64 representation
+## Converting your secret data to a base-64 representation
 Suppose you want to have two pieces of secret data: a username `my-app` and a password
 `39528$vdg7Jb`. First, use [Base64 encoding](https://www.base64encode.org/) to
@@ -28,7 +28,7 @@ example:
 The output shows that the base-64 representation of your username is `bXktYXBwCg==`,
 and the base-64 representation of your password is `Mzk1MjgkdmRnN0piCg==`.
-### Creating a Secret
+## Creating a Secret
 Here is a configuration file you can use to create a Secret that holds your
 username and password:
@@ -72,7 +72,7 @@ username and password:
 password: 13 bytes
 username: 7 bytes
-### Creating a Pod that has access to the secret data through a Volume
+## Creating a Pod that has access to the secret data through a Volume
 Here is a configuration file you can use to create a Pod:
@@ -119,7 +119,7 @@ is exposed:
 my-app
 39528$vdg7Jb
-### Creating a Pod that has access to the secret data through environment variables
+## Creating a Pod that has access to the secret data through environment variables
 Here is a configuration file you can use to create a Pod:
@@ -160,7 +160,7 @@ Here is a configuration file you can use to create a Pod:
 * Learn more about [Secrets](/docs/user-guide/secrets/).
 * Learn about [Volumes](/docs/user-guide/volumes/).
-#### Reference
+### Reference
 * [Secret](docs/api-reference/v1/definitions/#_v1_secret)
 * [Volume](docs/api-reference/v1/definitions/#_v1_volume)
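
The encoding step above can be reproduced in a shell; the expected outputs come from the surrounding text (note that `echo` appends a newline, which is encoded too):

```shell
echo 'my-app' | base64          # bXktYXBwCg==
echo '39528$vdg7Jb' | base64    # Mzk1MjgkdmRnN0piCg==
```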

View File

@@ -20,7 +20,7 @@ private Docker registry or repository.
 {% capture steps %}
-### Logging in to Docker
+## Logging in to Docker
 docker login
@@ -43,7 +43,7 @@ The output contains a section similar to this:
 }
 }
-### Creating a Secret that holds your authorization token
+## Creating a Secret that holds your authorization token
 Create a Secret named `regsecret`:
@@ -55,7 +55,7 @@ where:
 * `<your-pword>` is your Docker password.
 * `<your-email>` is your Docker email.
-### Understanding your Secret
+## Understanding your Secret
 To understand what's in the Secret you just created, start by viewing the
 Secret in YAML format:
@@ -92,7 +92,7 @@ The output is similar to this:
 Notice that the secret data contains the authorization token from your
 `config.json` file.
-### Creating a Pod that uses your Secret
+## Creating a Pod that uses your Secret
 Here is a configuration file for a Pod that needs access to your secret data:
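
A sketch of creating the `regsecret` Secret named above directly with `kubectl` (the `<your-pword>` and `<your-email>` placeholders match the steps; `<your-name>` is assumed):

```shell
kubectl create secret docker-registry regsecret \
    --docker-username=<your-name> \
    --docker-password=<your-pword> \
    --docker-email=<your-email>
kubectl get secret regsecret --output=yaml    # inspect the stored token
```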

View File

@@ -27,7 +27,7 @@ the general
 {% capture steps %}
-### Writing and reading a termination message
+## Writing and reading a termination message
 In this exercise, you create a Pod that runs one container.
 The configuration file specifies a command that runs when
@@ -75,7 +75,7 @@ only the termination message:
 {% raw %} kubectl get pod termination-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"{% endraw %}
 ```
-### Setting the termination log file
+## Setting the termination log file
 By default Kubernetes retrieves termination messages from
 `/dev/termination-log`. To change this to a different file,
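
A sketch of redirecting the termination log via the `terminationMessagePath` field of the v1 Container spec (path, names, and command are illustrative):

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: msg-path-demo
spec:
  containers:
  - name: msg-path-demo-container
    image: debian
    command: ["/bin/sh", "-c", "sleep 3600"]
    terminationMessagePath: "/tmp/my-log"   # instead of /dev/termination-log
EOF
```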

View File

@@ -26,7 +26,7 @@ This task shows you how to debug a StatefulSet.
 {% capture steps %}
-### Debugging a StatefulSet
+## Debugging a StatefulSet
 To list all the pods that belong to a StatefulSet and have the label `app=myapp` set on them, you can use the following:
@@ -44,7 +44,7 @@ kubectl annotate pods <pod-name> pod.alpha.kubernetes.io/initialized="false" --o
 When the annotation is set to `"false"`, the StatefulSet will not respond to its Pods becoming unhealthy or unavailable. It will not create replacement Pods until the annotation is removed or set to `"true"` on each StatefulSet Pod.
-#### Step-wise Initialization
+### Step-wise Initialization
 You can also use the same annotation to debug race conditions during bootstrapping of the StatefulSet by setting the `pod.alpha.kubernetes.io/initialized` annotation to `"false"` in the `.spec.template.metadata.annotations` field of the StatefulSet prior to creating it.
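
The listing step above, concretely (the `app=myapp` label comes from the text):

```shell
kubectl get pods -l app=myapp
```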

View File

@@ -21,13 +21,13 @@ This page shows how to delete Pods which are part of a stateful set, and explain
 {% capture steps %}
-### StatefulSet considerations
+## StatefulSet considerations
 In normal operation of a StatefulSet, there is **never** a need to force delete a StatefulSet Pod. The StatefulSet controller is responsible for creating, scaling and deleting members of the StatefulSet. It tries to ensure that the specified number of Pods from ordinal 0 through N-1 are alive and ready. StatefulSet ensures that, at any time, there is at most one Pod with a given identity running in a cluster. This is referred to as *at most one* semantics provided by a StatefulSet.
 Manual force deletion should be undertaken with caution, as it has the potential to violate the at most one semantics inherent to StatefulSet. StatefulSets may be used to run distributed and clustered applications which have a need for a stable network identity and stable storage. These applications often have configuration which relies on an ensemble of a fixed number of members with fixed identities. Having multiple members with the same identity can be disastrous and may lead to data loss (e.g. split brain scenario in quorum-based systems).
-### Deleting Pods
+## Deleting Pods
 You can perform a graceful pod deletion with the following command:
@@ -46,7 +46,7 @@ The recommended best practice is to use the first or second approach. If a Node
 Normally, the system completes the deletion once the Pod is no longer running on a Node, or the Node is deleted by an administrator. You may override this by force deleting the Pod.
-#### Force Deletion
+### Force Deletion
 Force deletions **do not** wait for confirmation from the kubelet that the Pod has been terminated. Irrespective of whether a force deletion is successful in killing a Pod, it will immediately free up the name from the apiserver. This would let the StatefulSet controller create a replacement Pod with that same identity; this can lead to the duplication of a still-running Pod, and if said Pod can still communicate with the other members of the StatefulSet, will violate the at most one semantics that StatefulSet is designed to guarantee.
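
A sketch contrasting graceful and force deletion (the Pod name is illustrative; force deletion should remain a last resort, per the discussion above):

```shell
kubectl delete pods web-0                     # graceful deletion
kubectl delete pods web-0 --grace-period=0    # force deletion; use with caution
                                              # (some kubectl versions also require --force)
```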

View File

@@ -22,7 +22,7 @@ This task shows you how to delete a StatefulSet.
 {% capture steps %}
-### Deleting a StatefulSet
+## Deleting a StatefulSet
 You can delete a StatefulSet in the same way you delete other resources in Kubernetes: use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
@@ -52,13 +52,13 @@ By passing `--cascade=false` to `kubectl delete`, the Pods managed by the Statef
 kubectl delete pods -l app=myapp
 ```
-#### Persistent Volumes
+### Persistent Volumes
 Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods have left the [terminating state](/docs/user-guide/pods/index#termination-of-pods) might trigger deletion of the backing Persistent Volumes depending on the storage class and reclaim policy. You should never assume ability to access a volume after claim deletion.
 **Note: Use caution when deleting a PVC, as it may lead to data loss.**
-#### Complete deletion of a StatefulSet
+### Complete deletion of a StatefulSet
 To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
@@ -72,7 +72,7 @@ kubectl delete pvc -l app=myapp
 In the example above, the Pods have the label `app=myapp`; substitute your own label as appropriate.
-#### Force deletion of StatefulSet pods
+### Force deletion of StatefulSet pods
 If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states for an extended period of time, you may need to manually intervene to forcefully delete the pods from the apiserver. This is a potentially dangerous task. Refer to [Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/) for details.
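
A sketch of the deletion sequence described above (the `app=myapp` label comes from the text; the StatefulSet name is a placeholder):

```shell
kubectl delete statefulset <stateful-set-name> --cascade=false   # keep the Pods
kubectl delete pods -l app=myapp                                 # then remove the Pods
kubectl delete pvc -l app=myapp                                  # and their claims; data loss!
```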

View File

@@ -25,13 +25,13 @@ This page shows how to scale a StatefulSet.
 {% capture steps %}
-### Use `kubectl` to scale StatefulSets
+## Use `kubectl` to scale StatefulSets
 Make sure you have `kubectl` upgraded to Kubernetes version 1.5 or later before
 continuing. If you're unsure, run `kubectl version` and check `Client Version`
 for which kubectl you're using.
-#### `kubectl scale`
+### `kubectl scale`
 First, find the StatefulSet you want to scale. Remember, you need to first understand if you can scale it or not.
@@ -45,7 +45,7 @@ Change the number of replicas of your StatefulSet:
 kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
 ```
-#### Alternative: `kubectl apply` / `kubectl edit` / `kubectl patch`
+### Alternative: `kubectl apply` / `kubectl edit` / `kubectl patch`
 Alternatively, you can do [in-place updates](/docs/user-guide/managing-deployments/#in-place-updates-of-resources) on your StatefulSets.
@@ -68,9 +68,9 @@ Or use `kubectl patch`:
 kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
 ```
-### Troubleshooting
-#### Scaling down doesn't work right
+## Troubleshooting
+### Scaling down doesn't work right
 You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place
 after those stateful Pods become running and ready.

View File

@@ -23,7 +23,7 @@ This page shows how to upgrade from PetSets (Kubernetes version 1.3 or 1.4) to *
{% capture steps %} {% capture steps %}
### Differences between alpha PetSets and beta StatefulSets ## Differences between alpha PetSets and beta StatefulSets
PetSet was introduced as an alpha resource in Kubernetes release 1.3, and was renamed to StatefulSet as a beta resource in 1.5. PetSet was introduced as an alpha resource in Kubernetes release 1.3, and was renamed to StatefulSet as a beta resource in 1.5.
Here are some notable changes: Here are some notable changes:
@@ -33,13 +33,13 @@ Here are some notable changes:
* **Flipped debug annotation behavior**: The default value of the debug annotation (`pod.alpha.kubernetes.io/initialized`) is now `true`. The absence of this annotation will pause PetSet operations, but will NOT pause StatefulSet operations. In most cases, you no longer need this annotation in your StatefulSet manifests. * **Flipped debug annotation behavior**: The default value of the debug annotation (`pod.alpha.kubernetes.io/initialized`) is now `true`. The absence of this annotation will pause PetSet operations, but will NOT pause StatefulSet operations. In most cases, you no longer need this annotation in your StatefulSet manifests.
### Upgrading from PetSets to StatefulSets ## Upgrading from PetSets to StatefulSets
Note that these steps need to be done in the specified order. You **should Note that these steps need to be done in the specified order. You **should
NOT upgrade your Kubernetes master, nodes, or `kubectl` to Kubernetes version NOT upgrade your Kubernetes master, nodes, or `kubectl` to Kubernetes version
1.5 or later**, until told to do so. 1.5 or later**, until told to do so.
#### Find all PetSets and their manifests ### Find all PetSets and their manifests
First, find all existing PetSets in your cluster: First, find all existing PetSets in your cluster:
@@ -60,7 +60,7 @@ Here's an example command for you to save all existing PetSets as one file.
kubectl get petsets --all-namespaces -o yaml > all-petsets.yaml kubectl get petsets --all-namespaces -o yaml > all-petsets.yaml
``` ```
#### Prepare StatefulSet manifests ### Prepare StatefulSet manifests
Now, for every PetSet manifest you have, prepare a corresponding StatefulSet manifest: Now, for every PetSet manifest you have, prepare a corresponding StatefulSet manifest:
@@ -71,7 +71,7 @@ Now, for every PetSet manifest you have, prepare a corresponding StatefulSet man
It's recommended that you keep both PetSet manifests and StatefulSet manifests, so that you can safely roll back and recreate your PetSets, It's recommended that you keep both PetSet manifests and StatefulSet manifests, so that you can safely roll back and recreate your PetSets,
if you decide not to upgrade your cluster. if you decide not to upgrade your cluster.
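The manifest changes themselves are small. A minimal sketch, assuming the alpha PetSets used the `apps/v1alpha1` API group:

```
# Before: PetSet manifest (Kubernetes 1.3/1.4)
apiVersion: apps/v1alpha1
kind: PetSet
# After: StatefulSet manifest (Kubernetes 1.5+)
apiVersion: apps/v1beta1
kind: StatefulSet
```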
#### Delete all PetSets without cascading ### Delete all PetSets without cascading
If you find existing PetSets in your cluster in the previous step, you need to delete all PetSets *without cascading*. You can do this from `kubectl` with `--cascade=false`. If you find existing PetSets in your cluster in the previous step, you need to delete all PetSets *without cascading*. You can do this from `kubectl` with `--cascade=false`.
Note that if the flag isn't set, **cascading deletion will be performed by default**, and all Pods managed by your PetSets will be gone. Note that if the flag isn't set, **cascading deletion will be performed by default**, and all Pods managed by your PetSets will be gone.
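A sketch of a non-cascading delete for a single PetSet; the name and namespace are placeholders:

```
kubectl delete petsets <pet-set-name> --cascade=false --namespace=<namespace>
```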
@@ -103,18 +103,18 @@ kubectl get petsets --all-namespaces
At this moment, you've deleted all PetSets in your cluster, but not their Pods, Persistent Volumes, or Persistent Volume Claims. At this moment, you've deleted all PetSets in your cluster, but not their Pods, Persistent Volumes, or Persistent Volume Claims.
However, since the Pods are not managed by PetSets anymore, they will be vulnerable to node failures until you finish the master upgrade and recreate StatefulSets. However, since the Pods are not managed by PetSets anymore, they will be vulnerable to node failures until you finish the master upgrade and recreate StatefulSets.
#### Upgrade your master to Kubernetes version 1.5 or later ### Upgrade your master to Kubernetes version 1.5 or later
Now, you can [upgrade your Kubernetes master](/docs/admin/cluster-management/#upgrading-a-cluster) to Kubernetes version 1.5 or later. Now, you can [upgrade your Kubernetes master](/docs/admin/cluster-management/#upgrading-a-cluster) to Kubernetes version 1.5 or later.
Note that **you should NOT upgrade Nodes at this time**, because the Pods Note that **you should NOT upgrade Nodes at this time**, because the Pods
(that were once managed by PetSets) are now vulnerable to node failures. (that were once managed by PetSets) are now vulnerable to node failures.
#### Upgrade kubectl to Kubernetes version 1.5 or later ### Upgrade kubectl to Kubernetes version 1.5 or later
Upgrade `kubectl` to Kubernetes version 1.5 or later, following [the steps for installing and setting up Upgrade `kubectl` to Kubernetes version 1.5 or later, following [the steps for installing and setting up
kubectl](/docs/user-guide/prereqs/). kubectl](/docs/user-guide/prereqs/).
#### Create StatefulSets ### Create StatefulSets
Make sure you have both master and `kubectl` upgraded to Kubernetes version 1.5 Make sure you have both master and `kubectl` upgraded to Kubernetes version 1.5
or later before continuing: or later before continuing:
@@ -147,7 +147,7 @@ newly-upgraded cluster:
kubectl get statefulsets --all-namespaces kubectl get statefulsets --all-namespaces
``` ```
#### Upgrade nodes to Kubernetes version 1.5 or later (optional) ### Upgrade nodes to Kubernetes version 1.5 or later (optional)
You can now [upgrade Kubernetes nodes](/docs/admin/cluster-management/#upgrading-a-cluster) You can now [upgrade Kubernetes nodes](/docs/admin/cluster-management/#upgrading-a-cluster)
to Kubernetes version 1.5 or later. This step is optional, but needs to be done after all StatefulSets to Kubernetes version 1.5 or later. This step is optional, but needs to be done after all StatefulSets

View File

@@ -30,7 +30,7 @@ Init Containers.
{% capture steps %} {% capture steps %}
### Checking the status of Init Containers ## Checking the status of Init Containers
The Pod status will give you an overview of Init Container execution: The Pod status will give you an overview of Init Container execution:
@@ -49,7 +49,7 @@ NAME READY STATUS RESTARTS AGE
See [Understanding Pod status](#understanding-pod-status) for more examples of See [Understanding Pod status](#understanding-pod-status) for more examples of
status values and their meanings. status values and their meanings.
### Getting details about Init Containers ## Getting details about Init Containers
You can see detailed information about Init Container execution by running: You can see detailed information about Init Container execution by running:
@@ -98,7 +98,7 @@ kubectl get pod <pod-name> --template '{{index .metadata.annotations "pod.beta.k
This will return the same information as above, but in raw JSON format. This will return the same information as above, but in raw JSON format.
### Accessing logs from Init Containers ## Accessing logs from Init Containers
You can access logs for an Init Container by passing its Container name along You can access logs for an Init Container by passing its Container name along
with the Pod name: with the Pod name:
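The command takes the form below (a sketch; substitute the Init Container's name from your Pod spec):

```
kubectl logs <pod-name> -c <init-container-name>
```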
@@ -115,7 +115,7 @@ commands as they're executed. For example, you can do this in Bash by running
{% capture discussion %} {% capture discussion %}
### Understanding Pod status ## Understanding Pod status
A Pod status beginning with `Init:` summarizes the status of Init Container A Pod status beginning with `Init:` summarizes the status of Init Container
execution. The table below describes some example status values that you might execution. The table below describes some example status values that you might

View File

@@ -15,7 +15,7 @@ of Services, and how you can toggle this behavior according to your needs.
{% include task-tutorial-prereqs.md %} {% include task-tutorial-prereqs.md %}
### Terminology ## Terminology
This document makes use of the following terms: This document makes use of the following terms:
@@ -26,7 +26,7 @@ This document makes use of the following terms:
* [Kube-proxy](/docs/user-guide/services/#virtual-ips-and-service-proxies): a network daemon that orchestrates Service VIP management on every node * [Kube-proxy](/docs/user-guide/services/#virtual-ips-and-service-proxies): a network daemon that orchestrates Service VIP management on every node
### Prerequisites ## Prerequisites
You must have a working Kubernetes 1.5 cluster to run the examples in this You must have a working Kubernetes 1.5 cluster to run the examples in this
document. The examples use a small nginx webserver that echoes back the source document. The examples use a small nginx webserver that echoes back the source
@@ -50,7 +50,7 @@ deployment "source-ip-app" created
{% capture lessoncontent %} {% capture lessoncontent %}
### Source IP for Services with Type=ClusterIP ## Source IP for Services with Type=ClusterIP
Packets sent to ClusterIP from within the cluster are never source NAT'd if Packets sent to ClusterIP from within the cluster are never source NAT'd if
you're running kube-proxy in [iptables mode](/docs/user-guide/services/#proxy-mode-iptables), you're running kube-proxy in [iptables mode](/docs/user-guide/services/#proxy-mode-iptables),
@@ -107,7 +107,7 @@ command=GET
... ...
``` ```
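If you're not sure which mode your kube-proxy is running in, one quick check (a sketch, assuming kube-proxy's default status port of 10249) is to query it from a node:

```
# Run this on a cluster node
curl localhost:10249/proxyMode
# Expected output in iptables mode:
# iptables
```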
### Source IP for Services with Type=NodePort ## Source IP for Services with Type=NodePort
As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/user-guide/services/#type-nodeport) As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/user-guide/services/#type-nodeport)
are source NAT'd by default. You can test this by creating a `NodePort` Service: are source NAT'd by default. You can test this by creating a `NodePort` Service:
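A sketch of that step, reusing the `source-ip-app` Deployment created earlier; the port numbers assume the echo server listens on 8080:

```
kubectl expose deployment source-ip-app --name=nodeport \
  --port=80 --target-port=8080 --type=NodePort
```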
@@ -204,7 +204,7 @@ Visually:
### Source IP for Services with Type=LoadBalancer ## Source IP for Services with Type=LoadBalancer
As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer) are As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer) are
source NAT'd by default, because all schedulable Kubernetes nodes in the source NAT'd by default, because all schedulable Kubernetes nodes in the

View File

@@ -51,7 +51,7 @@ After this tutorial, you will be familiar with the following.
{% endcapture %} {% endcapture %}
{% capture lessoncontent %} {% capture lessoncontent %}
### Creating a StatefulSet ## Creating a StatefulSet
Begin by creating a StatefulSet using the example below. It is similar to the Begin by creating a StatefulSet using the example below. It is similar to the
example presented in the example presented in the
@@ -95,7 +95,7 @@ NAME DESIRED CURRENT AGE
web 2 1 20s web 2 1 20s
``` ```
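For orientation, here is a trimmed sketch of the kind of manifest used above; the Headless Service it references, the image, and the storage request are illustrative assumptions:

```
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"   # must match a pre-created Headless Service
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```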
#### Ordered Pod Creation ### Ordered Pod Creation
For a StatefulSet with N replicas, when Pods are being deployed, they are For a StatefulSet with N replicas, when Pods are being deployed, they are
created sequentially, in order from {0..N-1}. Examine the output of the created sequentially, in order from {0..N-1}. Examine the output of the
@@ -120,11 +120,11 @@ Notice that the `web-0` Pod is launched and set to Pending prior to
launching `web-1`. In fact, `web-1` is not launched until `web-0` is launching `web-1`. In fact, `web-1` is not launched until `web-0` is
[Running and Ready](/docs/user-guide/pod-states). [Running and Ready](/docs/user-guide/pod-states).
### Pods in a StatefulSet ## Pods in a StatefulSet
Unlike Pods in other controllers, the Pods in a StatefulSet have a unique Unlike Pods in other controllers, the Pods in a StatefulSet have a unique
ordinal index and a stable network identity. ordinal index and a stable network identity.
#### Examining the Pod's Ordinal Index ### Examining the Pod's Ordinal Index
Get the StatefulSet's Pods. Get the StatefulSet's Pods.
@@ -143,7 +143,7 @@ Set controller. The Pods' names take the form
`<statefulset name>-<ordinal index>`. Since the `web` StatefulSet has two `<statefulset name>-<ordinal index>`. Since the `web` StatefulSet has two
replicas, it creates two Pods, `web-0` and `web-1`. replicas, it creates two Pods, `web-0` and `web-1`.
#### Using Stable Network Identities ### Using Stable Network Identities
Each Pod has a stable hostname based on its ordinal index. Use Each Pod has a stable hostname based on its ordinal index. Use
[`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to execute the [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to execute the
`hostname` command in each Pod. `hostname` command in each Pod.
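A compact way to do this for both Pods (a sketch, assuming the two replicas created above):

```
for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
# web-0
# web-1
```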
@@ -253,7 +253,7 @@ liveness and readiness, you can use the SRV records of the Pods (
application will be able to discover the Pods' addresses when they transition application will be able to discover the Pods' addresses when they transition
to Running and Ready. to Running and Ready.
#### Writing to Stable Storage ### Writing to Stable Storage
Get the PersistentVolumeClaims for `web-0` and `web-1`. Get the PersistentVolumeClaims for `web-0` and `web-1`.
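The elided command is similar to the following, assuming the Pods carry the `app=nginx` label:

```
kubectl get pvc -l app=nginx
```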
@@ -326,14 +326,14 @@ Volume Claims are remounted to their `volumeMount`s. No matter what node `web-0`
and `web-1` are scheduled on, their PersistentVolumes will be mounted to the and `web-1` are scheduled on, their PersistentVolumes will be mounted to the
appropriate mount points. appropriate mount points.
### Scaling a StatefulSet ## Scaling a StatefulSet
Scaling a StatefulSet refers to increasing or decreasing the number of replicas. Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
This is accomplished by updating the `replicas` field. You can use either This is accomplished by updating the `replicas` field. You can use either
[`kubectl scale`](/docs/user-guide/kubectl/kubectl_scale/) or [`kubectl scale`](/docs/user-guide/kubectl/kubectl_scale/) or
[`kubectl patch`](/docs/user-guide/kubectl/kubectl_patch/) to scale a Stateful [`kubectl patch`](/docs/user-guide/kubectl/kubectl_patch/) to scale a Stateful
Set. Set.
#### Scaling Up ### Scaling Up
In one terminal window, watch the Pods in the StatefulSet. In one terminal window, watch the Pods in the StatefulSet.
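Such a watch might look like this sketch (again assuming the `app=nginx` label):

```
kubectl get pods -w -l app=nginx
```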
@@ -378,7 +378,7 @@ created each Pod sequentially with respect to its ordinal index, and it
waited for each Pod's predecessor to be Running and Ready before launching the waited for each Pod's predecessor to be Running and Ready before launching the
subsequent Pod. subsequent Pod.
#### Scaling Down ### Scaling Down
In one terminal, watch the StatefulSet's Pods. In one terminal, watch the StatefulSet's Pods.
@@ -412,7 +412,7 @@ web-3 1/1 Terminating 0 42s
web-3 1/1 Terminating 0 42s web-3 1/1 Terminating 0 42s
``` ```
#### Ordered Pod Termination ### Ordered Pod Termination
The controller deleted one Pod at a time, with respect to its ordinal index, The controller deleted one Pod at a time, with respect to its ordinal index,
in reverse order, and it waited for each to be completely shut down before in reverse order, and it waited for each to be completely shut down before
@@ -438,7 +438,7 @@ the StatefulSet's Pods are deleted. This is still true when Pod deletion is
caused by scaling the StatefulSet down. This feature can be used to facilitate caused by scaling the StatefulSet down. This feature can be used to facilitate
upgrading the container images of Pods in a StatefulSet. upgrading the container images of Pods in a StatefulSet.
### Updating Containers ## Updating Containers
As demonstrated in the [Scaling a StatefulSet](#scaling-a-statefulset) section, As demonstrated in the [Scaling a StatefulSet](#scaling-a-statefulset) section,
the `replicas` field of a StatefulSet is mutable. The only other field of a the `replicas` field of a StatefulSet is mutable. The only other field of a
StatefulSet that can be updated is the `spec.template.containers` field. StatefulSet that can be updated is the `spec.template.containers` field.
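One way to update the image in place is a JSON patch. A sketch, assuming the `web` StatefulSet from this tutorial and the target image shown in the output below:

```
kubectl patch statefulset web --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"gcr.io/google_containers/nginx-slim:0.7"}]'
```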
@@ -530,14 +530,14 @@ gcr.io/google_containers/nginx-slim:0.7
All the Pods in the StatefulSet are now running a new container image. All the Pods in the StatefulSet are now running a new container image.
### Deleting StatefulSets ## Deleting StatefulSets
StatefulSet supports both Non-Cascading and Cascading deletion. In a StatefulSet supports both Non-Cascading and Cascading deletion. In a
Non-Cascading Delete, the StatefulSet's Pods are not deleted when the Stateful Non-Cascading Delete, the StatefulSet's Pods are not deleted when the Stateful
Set is deleted. In a Cascading Delete, both the StatefulSet and its Pods are Set is deleted. In a Cascading Delete, both the StatefulSet and its Pods are
deleted. deleted.
#### Non-Cascading Delete ### Non-Cascading Delete
In one terminal window, watch the Pods in the StatefulSet. In one terminal window, watch the Pods in the StatefulSet.
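In a second terminal, the non-cascading delete itself is a single command (note the `--cascade=false` flag discussed earlier):

```
kubectl delete statefulset web --cascade=false
```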
@@ -643,7 +643,7 @@ because the StatefulSet never deletes the PersistentVolumes associated with a
Pod. When you recreated the StatefulSet and it relaunched `web-0`, its original Pod. When you recreated the StatefulSet and it relaunched `web-0`, its original
PersistentVolume was remounted. PersistentVolume was remounted.
#### Cascading Delete ### Cascading Delete
In one terminal window, watch the Pods in the StatefulSet. In one terminal window, watch the Pods in the StatefulSet.

View File

@@ -49,12 +49,12 @@ on general patterns for running stateful applications in Kubernetes.
{% capture lessoncontent %} {% capture lessoncontent %}
### Deploying MySQL ## Deploying MySQL
The example MySQL deployment consists of a ConfigMap, two Services, The example MySQL deployment consists of a ConfigMap, two Services,
and a StatefulSet. and a StatefulSet.
#### ConfigMap ### ConfigMap
Create the ConfigMap from the following YAML configuration file: Create the ConfigMap from the following YAML configuration file:
@@ -74,7 +74,7 @@ portions to apply to different Pods.
Each Pod decides which portion to look at as it's initializing, Each Pod decides which portion to look at as it's initializing,
based on information provided by the StatefulSet controller. based on information provided by the StatefulSet controller.
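A rough sketch of such a ConfigMap; the two `.cnf` payloads are illustrative:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on the slaves.
    [mysqld]
    super-read-only
```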
#### Services ### Services
Create the Services from the following YAML configuration file: Create the Services from the following YAML configuration file:
@@ -100,7 +100,7 @@ Because there is only one MySQL master, clients should connect directly to the
MySQL master Pod (through its DNS entry within the Headless Service) to execute MySQL master Pod (through its DNS entry within the Headless Service) to execute
writes. writes.
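A sketch of the two Services described above; `clusterIP: None` is what makes the first one Headless:

```
# Headless Service for stable DNS entries of StatefulSet members
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client Service for read-only connections to any MySQL server
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
```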
#### StatefulSet ### StatefulSet
Finally, create the StatefulSet from the following YAML configuration file: Finally, create the StatefulSet from the following YAML configuration file:
@@ -133,7 +133,7 @@ This manifest uses a variety of techniques for managing stateful Pods as part of
a StatefulSet. The next section highlights some of these techniques to explain a StatefulSet. The next section highlights some of these techniques to explain
what happens as the StatefulSet creates Pods. what happens as the StatefulSet creates Pods.
### Understanding stateful Pod initialization ## Understanding stateful Pod initialization
The StatefulSet controller starts Pods one at a time, in order by their The StatefulSet controller starts Pods one at a time, in order by their
ordinal index. ordinal index.
@@ -146,7 +146,7 @@ In this case, that results in Pods named `mysql-0`, `mysql-1`, and `mysql-2`.
The Pod template in the above StatefulSet manifest takes advantage of these The Pod template in the above StatefulSet manifest takes advantage of these
properties to perform orderly startup of MySQL replication. properties to perform orderly startup of MySQL replication.
#### Generating configuration ### Generating configuration
Before starting any of the containers in the Pod spec, the Pod first runs any Before starting any of the containers in the Pod spec, the Pod first runs any
[Init Containers](/docs/user-guide/production-pods/#handling-initialization) [Init Containers](/docs/user-guide/production-pods/#handling-initialization)
@@ -175,7 +175,7 @@ Combined with the StatefulSet controller's
this ensures the MySQL master is Ready before creating slaves, so they can begin this ensures the MySQL master is Ready before creating slaves, so they can begin
replicating. replicating.
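The heart of that Init Container logic can be sketched as follows; the exact file paths are assumptions:

```
# Extract the ordinal index from the Pod's hostname (e.g. mysql-1 -> 1)
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# Derive a unique MySQL server-id; the offset avoids the reserved value 0
echo [mysqld] > /mnt/conf.d/server-id.cnf
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Pod 0 becomes the master; every other Pod is a slave
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/master.cnf /mnt/conf.d/
else
  cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
```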
#### Cloning existing data ### Cloning existing data
In general, when a new Pod joins the set as a slave, it must assume the MySQL In general, when a new Pod joins the set as a slave, it must assume the MySQL
master might already have data on it. It also must assume that the replication master might already have data on it. It also must assume that the replication
@@ -196,7 +196,7 @@ from the Pod whose ordinal index is one lower.
This works because the StatefulSet controller always ensures Pod `N` is This works because the StatefulSet controller always ensures Pod `N` is
Ready before starting Pod `N+1`. Ready before starting Pod `N+1`.
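A sketch of that clone step; the `ncat`/`xbstream` pipeline and port 3307 are modeled on the tutorial's sidecar and should be treated as assumptions:

```
# Skip the clone if data already exists, or on the master (ordinal 0)
[[ -d /var/lib/mysql/mysql ]] && exit 0
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Stream a backup from the previous peer and unpack it
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup so mysqld can use it
xtrabackup --prepare --target-dir=/var/lib/mysql
```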
#### Starting replication ### Starting replication
After the Init Containers complete successfully, the regular containers run. After the Init Containers complete successfully, the regular containers run.
The MySQL Pods consist of a `mysql` container that runs the actual `mysqld` The MySQL Pods consist of a `mysql` container that runs the actual `mysqld`
@@ -220,7 +220,7 @@ connections from other Pods requesting a data clone.
This server remains up indefinitely in case the StatefulSet scales up, or in This server remains up indefinitely in case the StatefulSet scales up, or in
case the next Pod loses its PersistentVolumeClaim and needs to redo the clone. case the next Pod loses its PersistentVolumeClaim and needs to redo the clone.
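Conceptually, the sidecar then points the slave at the master and starts replication with SQL along these lines; the binlog coordinates come from the cloned backup and are placeholders here:

```
CHANGE MASTER TO
  MASTER_HOST='mysql-0.mysql',
  MASTER_USER='root',
  MASTER_LOG_FILE='<file-from-backup>',
  MASTER_LOG_POS=<position-from-backup>,
  MASTER_CONNECT_RETRY=10;
START SLAVE;
```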
### Sending client traffic ## Sending client traffic
You can send test queries to the MySQL master (hostname `mysql-0.mysql`) You can send test queries to the MySQL master (hostname `mysql-0.mysql`)
by running a temporary container with the `mysql:5.7` image and running the by running a temporary container with the `mysql:5.7` image and running the
@@ -287,13 +287,13 @@ endpoint might be selected upon each connection attempt:
You can press **Ctrl+C** when you want to stop the loop, but it's useful to keep You can press **Ctrl+C** when you want to stop the loop, but it's useful to keep
it running in another window so you can see the effects of the following steps. it running in another window so you can see the effects of the following steps.
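The loop referred to above is similar to this sketch, which queries the `mysql-read` Service once per second:

```
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never -- \
  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
```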
### Simulating Pod and Node downtime ## Simulating Pod and Node downtime
To demonstrate the increased availability of reading from the pool of slaves To demonstrate the increased availability of reading from the pool of slaves
instead of a single server, keep the `SELECT @@server_id` loop from above instead of a single server, keep the `SELECT @@server_id` loop from above
running while you force a Pod out of the Ready state. running while you force a Pod out of the Ready state.
#### Break the Readiness Probe ### Break the Readiness Probe
The [readiness probe](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks) The [readiness probe](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks)
for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'` for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'`
@@ -333,7 +333,7 @@ after a few seconds:
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
``` ```
#### Delete Pods ### Delete Pods
The StatefulSet also recreates Pods if they're deleted, similar to what a The StatefulSet also recreates Pods if they're deleted, similar to what a
ReplicaSet does for stateless Pods. ReplicaSet does for stateless Pods.
@@ -348,7 +348,7 @@ PersistentVolumeClaim.
You should see server ID `102` disappear from the loop output for a while You should see server ID `102` disappear from the loop output for a while
and then return on its own. and then return on its own.
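The deletion step is a single command; server ID `102` corresponds to Pod `mysql-2`:

```
kubectl delete pod mysql-2
```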
#### Drain a Node ### Drain a Node
If your Kubernetes cluster has multiple Nodes, you can simulate Node downtime If your Kubernetes cluster has multiple Nodes, you can simulate Node downtime
(such as when Nodes are upgraded) by issuing a (such as when Nodes are upgraded) by issuing a
@@ -407,7 +407,7 @@ Now uncordon the Node to return it to a normal state:
kubectl uncordon <node-name> kubectl uncordon <node-name>
``` ```
### Scaling the number of slaves ## Scaling the number of slaves
With MySQL replication, you can scale your read query capacity by adding slaves. With MySQL replication, you can scale your read query capacity by adding slaves.
With StatefulSet, you can do this with a single command: With StatefulSet, you can do this with a single command:
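The elided command is along these lines:

```
kubectl scale statefulset mysql --replicas=5
```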

View File

@@ -37,7 +37,7 @@ application is MySQL.
{% capture lessoncontent %} {% capture lessoncontent %}
### Set up a disk in your environment ## Set up a disk in your environment
You can use any type of persistent volume for your stateful app. See You can use any type of persistent volume for your stateful app. See
[Types of Persistent Volumes](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes) [Types of Persistent Volumes](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes)
@@ -66,7 +66,7 @@ kubectl create -f http://k8s.io/docs/tutorials/stateful-application/gce-volume.y
``` ```
### Deploy MySQL ## Deploy MySQL
You can run a stateful application by creating a Kubernetes Deployment You can run a stateful application by creating a Kubernetes Deployment
and connecting it to an existing PersistentVolume using a and connecting it to an existing PersistentVolume using a
@@ -146,7 +146,7 @@ for a secure solution.
Access Modes: RWO Access Modes: RWO
No events. No events.
### Accessing the MySQL instance ## Accessing the MySQL instance
The preceding YAML file creates a service that The preceding YAML file creates a service that
allows other Pods in the cluster to access the database. The Service option allows other Pods in the cluster to access the database. The Service option
@@ -171,7 +171,7 @@ If you don't see a command prompt, try pressing enter.
mysql> mysql>
``` ```
### Updating ## Updating
The image or any other part of the Deployment can be updated as usual The image or any other part of the Deployment can be updated as usual
with the `kubectl apply` command. Here are some precautions that are with the `kubectl apply` command. Here are some precautions that are
@@ -187,7 +187,7 @@ specific to stateful apps:
one Pod running at a time. The `Recreate` strategy will stop the one Pod running at a time. The `Recreate` strategy will stop the
first pod before creating a new one with the updated configuration. first pod before creating a new one with the updated configuration.
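In the Deployment's YAML, that strategy is a small fragment:

```
spec:
  strategy:
    type: Recreate
```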
### Deleting a deployment ## Deleting a deployment
Delete the deployed objects by name: Delete the deployed objects by name:

View File

@@ -58,7 +58,7 @@ After this tutorial, you will know the following.
{% capture lessoncontent %} {% capture lessoncontent %}
#### ZooKeeper Basics ### ZooKeeper Basics
[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a [Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a
distributed, open-source coordination service for distributed applications. distributed, open-source coordination service for distributed applications.
@@ -86,7 +86,7 @@ snapshot their in memory state to storage media. These snapshots can be loaded
directly into memory, and all WAL entries that preceded the snapshot may be directly into memory, and all WAL entries that preceded the snapshot may be
safely discarded. safely discarded.
### Creating a ZooKeeper Ensemble ## Creating a ZooKeeper Ensemble
The manifest below contains a The manifest below contains a
[Headless Service](/docs/user-guide/services/#headless-services), [Headless Service](/docs/user-guide/services/#headless-services),
@@ -145,7 +145,7 @@ zk-2 1/1 Running 0 40s
The StatefulSet controller creates three Pods, and each Pod has a container with The StatefulSet controller creates three Pods, and each Pod has a container with
a [ZooKeeper 3.4.9](http://www-us.apache.org/dist/zookeeper/zookeeper-3.4.9/) server. a [ZooKeeper 3.4.9](http://www-us.apache.org/dist/zookeeper/zookeeper-3.4.9/) server.
#### Facilitating Leader Election ### Facilitating Leader Election
As there is no terminating algorithm for electing a leader in an anonymous As there is no terminating algorithm for electing a leader in an anonymous
network, Zab requires explicit membership configuration in order to perform network, Zab requires explicit membership configuration in order to perform
@@ -242,7 +242,7 @@ server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888 server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
``` ```
#### Achieving Consensus ### Achieving Consensus
Consensus protocols require that the identifiers of each participant be Consensus protocols require that the identifiers of each participant be
unique. No two participants in the Zab protocol should claim the same unique unique. No two participants in the Zab protocol should claim the same unique
@@ -301,7 +301,7 @@ and at least two of the Pods are Running and Ready), or they will fail to do so
(if either of the aforementioned conditions are not met). No state will arise (if either of the aforementioned conditions are not met). No state will arise
where one server acknowledges a write on behalf of another. where one server acknowledges a write on behalf of another.
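You can confirm these unique identifiers by reading each server's `myid` file; the data path is the one shown later in this tutorial:

```
for i in 0 1 2; do
  echo "myid zk-$i"
  kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid
done
```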
#### Sanity Testing the Ensemble ### Sanity Testing the Ensemble
The most basic sanity test is to write some data to one ZooKeeper server and The most basic sanity test is to write some data to one ZooKeeper server and
to read the data from another. to read the data from another.
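A sketch of that round trip, writing the five-byte value `world` (which matches the `dataLength = 5` in the output below) on one server and reading it from another:

```
# Write a znode via zk-0
kubectl exec zk-0 -- zkCli.sh create /hello world
# Read it back via zk-1
kubectl exec zk-1 -- zkCli.sh get /hello
```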
@@ -348,7 +348,7 @@ dataLength = 5
numChildren = 0 numChildren = 0
``` ```
#### Providing Durable Storage ### Providing Durable Storage
As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section, As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section,
ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots
@@ -507,7 +507,7 @@ same PersistentVolume mounted to the ZooKeeper server's data directory.
Even when the Pods are rescheduled, all of the writes made to the ZooKeeper Even when the Pods are rescheduled, all of the writes made to the ZooKeeper
servers' WALs, and all of their snapshots, remain durable. servers' WALs, and all of their snapshots, remain durable.
### Ensuring Consistent Configuration ## Ensuring Consistent Configuration
As noted in the [Facilitating Leader Election](#facilitating-leader-election) and As noted in the [Facilitating Leader Election](#facilitating-leader-election) and
[Achieving Consensus](#achieving-consensus) sections, the servers in a [Achieving Consensus](#achieving-consensus) sections, the servers in a
@@ -651,7 +651,7 @@ ZK_DATA_LOG_DIR=/var/lib/zookeeper/log
ZK_LOG_DIR=/var/log/zookeeper ZK_LOG_DIR=/var/log/zookeeper
``` ```
#### Configuring Logging ### Configuring Logging
One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging. One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging.
ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default, ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default,
@@ -721,7 +721,7 @@ For cluster level log shipping and aggregation, you should consider deploying a
[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) [sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html)
container to rotate and ship your logs. container to rotate and ship your logs.
#### Configuring a Non-Privileged User ### Configuring a Non-Privileged User
Best practices for allowing an application to run as a privileged Best practices for allowing an application to run as a privileged
user inside a container are a matter of debate. If your organization requires user inside a container are a matter of debate. If your organization requires
@@ -773,7 +773,7 @@ and the ZooKeeper process is able to successfully read and write its data.
drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
``` ```
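The tutorial accomplishes this with a Pod `securityContext`; a minimal sketch, assuming the `zookeeper` user and group map to UID/GID 1000 in the image:

```
securityContext:
  runAsUser: 1000
  fsGroup: 1000
```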
### Managing the ZooKeeper Process ## Managing the ZooKeeper Process
The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision)
indicates that "You will want to have a supervisory process that indicates that "You will want to have a supervisory process that
@@ -783,7 +783,7 @@ common pattern. When deploying an application in Kubernetes, rather than using
an external utility as a supervisory process, you should use Kubernetes as the an external utility as a supervisory process, you should use Kubernetes as the
watchdog for your application. watchdog for your application.
#### Handling Process Failure ### Handling Process Failure
[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how [Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how
@@ -846,7 +846,7 @@ child process. This ensures that Kubernetes will restart the application's
container when the process implementing the application's business logic fails. container when the process implementing the application's business logic fails.
#### Testing for Liveness ### Testing for Liveness
Configuring your application to restart failed processes is not sufficient to Configuring your application to restart failed processes is not sufficient to
@@ -918,7 +918,7 @@ zk-0 1/1 Running 1 1h
``` ```
#### Testing for Readiness ### Testing for Readiness
Readiness is not the same as liveness. If a process is alive, it is scheduled Readiness is not the same as liveness. If a process is alive, it is scheduled
@@ -951,7 +951,7 @@ to specify both. This ensures that only healthy servers in the ZooKeeper
ensemble receive network traffic. ensemble receive network traffic.
### Tolerating Node Failure ## Tolerating Node Failure
ZooKeeper needs a quorum of servers in order to successfully commit mutations ZooKeeper needs a quorum of servers in order to successfully commit mutations
to data. For a three server ensemble, two servers must be healthy in order for to data. For a three server ensemble, two servers must be healthy in order for
@@ -1013,7 +1013,7 @@ Service in the domain defined by the `topologyKey`. The `topologyKey`
different rules, labels, and selectors, you can extend this technique to spread different rules, labels, and selectors, you can extend this technique to spread
your ensemble across physical, network, and power failure domains. your ensemble across physical, network, and power failure domains.
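The spreading rule itself is a `podAntiAffinity` stanza along these lines; the `app: zk` label value is an assumption:

```
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: "app"
          operator: In
          values:
          - zk
      topologyKey: "kubernetes.io/hostname"
```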
### Surviving Maintenance ## Surviving Maintenance
**In this section you will cordon and drain nodes. If you are using this tutorial **In this section you will cordon and drain nodes. If you are using this tutorial
on a shared cluster, be sure that this will not adversely affect other tenants.** on a shared cluster, be sure that this will not adversely affect other tenants.**

View File

@@ -29,7 +29,7 @@ provides load balancing for an application that has two running instances.
{% capture lessoncontent %} {% capture lessoncontent %}
### Creating a service for an application running in two pods ## Creating a service for an application running in two pods
1. Run a Hello World application in your cluster: 1. Run a Hello World application in your cluster:
@@ -111,7 +111,7 @@ provides load balancing for an application that has two running instances.
Hello Kubernetes! Hello Kubernetes!
### Using a service configuration file ## Using a service configuration file
As an alternative to using `kubectl expose`, you can use a As an alternative to using `kubectl expose`, you can use a
[service configuration file](/docs/user-guide/services/operations) [service configuration file](/docs/user-guide/services/operations)

View File

@@ -36,7 +36,7 @@ external IP address.
{% capture lessoncontent %} {% capture lessoncontent %}
### Creating a service for an application running in five pods ## Creating a service for an application running in five pods
1. Run a Hello World application in your cluster: 1. Run a Hello World application in your cluster:

View File

@@ -38,7 +38,7 @@ driver.
{% capture lessoncontent %} {% capture lessoncontent %}
### Create a Minikube cluster ## Create a Minikube cluster
This tutorial uses [Minikube](https://github.com/kubernetes/minikube) to This tutorial uses [Minikube](https://github.com/kubernetes/minikube) to
create a local cluster. This tutorial also assumes you are using create a local cluster. This tutorial also assumes you are using
@@ -94,7 +94,7 @@ Verify that `kubectl` is configured to communicate with your cluster:
kubectl cluster-info kubectl cluster-info
``` ```
### Create your Node.js application ## Create your Node.js application
The next step is to write the application. Save this code in a folder named `hellonode` The next step is to write the application. Save this code in a folder named `hellonode`
with the filename `server.js`: with the filename `server.js`:
@@ -113,7 +113,7 @@ Stop the running Node.js server by pressing **Ctrl-C**.
The next step is to package your application in a Docker container. The next step is to package your application in a Docker container.
### Create a Docker container image ## Create a Docker container image
Create a file, also in the `hellonode` folder, named `Dockerfile`. A Dockerfile describes Create a file, also in the `hellonode` folder, named `Dockerfile`. A Dockerfile describes
the image that you want to build. You can build a Docker container image by extending an the image that you want to build. You can build a Docker container image by extending an
@@ -145,7 +145,7 @@ docker build -t hello-node:v1 .
Now the Minikube VM can run the image you built. Now the Minikube VM can run the image you built.
### Create a Deployment ## Create a Deployment
A Kubernetes [*Pod*](/docs/user-guide/pods/) is a group of one or more Containers, A Kubernetes [*Pod*](/docs/user-guide/pods/) is a group of one or more Containers,
tied together for the purposes of administration and networking. The Pod in this tied together for the purposes of administration and networking. The Pod in this
@@ -206,7 +206,7 @@ kubectl config view
For more information about `kubectl` commands, see the For more information about `kubectl` commands, see the
[kubectl overview](/docs/user-guide/kubectl-overview/). [kubectl overview](/docs/user-guide/kubectl-overview/).
### Create a Service ## Create a Service
By default, the Pod is only accessible by its internal IP address within the By default, the Pod is only accessible by its internal IP address within the
Kubernetes cluster. To make the `hello-node` Container accessible from outside the Kubernetes cluster. To make the `hello-node` Container accessible from outside the
@@ -254,7 +254,7 @@ you should now be able to see some logs:
kubectl logs <POD-NAME> kubectl logs <POD-NAME>
``` ```
### Update your app ## Update your app
Edit your `server.js` file to return a new message: Edit your `server.js` file to return a new message:
@@ -281,7 +281,7 @@ Run your app again to view the new message:
minikube service hello-node minikube service hello-node
``` ```
### Clean up ## Clean up
Now you can clean up the resources you created in your cluster: Now you can clean up the resources you created in your cluster:

View File

@@ -27,7 +27,7 @@ This page shows how to run an application using a Kubernetes Deployment object.
{% capture lessoncontent %} {% capture lessoncontent %}
### Creating and exploring an nginx deployment ## Creating and exploring an nginx deployment
You can run an application by creating a Kubernetes Deployment object, and you You can run an application by creating a Kubernetes Deployment object, and you
can describe a Deployment in a YAML file. For example, this YAML file describes can describe a Deployment in a YAML file. For example, this YAML file describes
@@ -72,7 +72,7 @@ a Deployment that runs the nginx:1.7.9 Docker image:
where `<pod-name>` is the name of one of your pods. where `<pod-name>` is the name of one of your pods.
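For reference, a minimal manifest matching that description might look like this sketch; the API group and replica count are assumptions appropriate for Kubernetes 1.5:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```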
### Updating the deployment ## Updating the deployment
You can update the deployment by applying a new YAML file. This YAML file You can update the deployment by applying a new YAML file. This YAML file
specifies that the deployment should be updated to use nginx 1.8. specifies that the deployment should be updated to use nginx 1.8.
@@ -87,7 +87,7 @@ specifies that the deployment should be updated to use nginx 1.8.
kubectl get pods -l app=nginx kubectl get pods -l app=nginx
### Scaling the application by increasing the replica count ## Scaling the application by increasing the replica count
You can increase the number of pods in your Deployment by applying a new YAML You can increase the number of pods in your Deployment by applying a new YAML
file. This YAML file sets `replicas` to 4, which specifies that the Deployment file. This YAML file sets `replicas` to 4, which specifies that the Deployment
@@ -111,7 +111,7 @@ should have four pods:
nginx-deployment-148880595-fxcez 1/1 Running 0 2m nginx-deployment-148880595-fxcez 1/1 Running 0 2m
nginx-deployment-148880595-rwovn 1/1 Running 0 2m nginx-deployment-148880595-rwovn 1/1 Running 0 2m
### Deleting a deployment ## Deleting a deployment
Delete the deployment by name: Delete the deployment by name:
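Assuming the `nginx-deployment` name shown in the output above:

```
kubectl delete deployment nginx-deployment
```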