Replace references to Replication Controllers with Deployments in docs/user-guide/walkthrough.

Also:
* Cleanup pasted yaml and instead embed file
* Clarify Service test sections by including busybox exec commands as part of the preformatted section
pull/198/head
Phillip Wittrock 2016-03-18 14:45:11 -07:00
parent 7c8d7542ab
commit 1086e6426c
7 changed files with 156 additions and 123 deletions

View File

@@ -0,0 +1,16 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.8 # Update the version of nginx from 1.7.9 to 1.8
+        ports:
+        - containerPort: 80

View File

@@ -0,0 +1,18 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  replicas: 2 # tells deployment to run 2 pods matching the template
+  template: # create pods using pod definition in this template
+    metadata:
+      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
+      # generated from the deployment name
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.7.9
+        ports:
+        - containerPort: 80

View File

@@ -30,18 +30,7 @@ See [pods](/docs/user-guide/pods/) for more details.
 
 The simplest pod definition describes the deployment of a single container. For example, an nginx web server pod might be defined as such:
 
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-    ports:
-    - containerPort: 80
-```
+{% include code.html language="yaml" file="pod-nginx.yaml" ghlink="/docs/user-guide/walkthrough/pod-nginx.yaml" %}
 
 A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until the Pod is deleted.
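The "drive toward desired state" behavior described above is also visible in the Pod spec itself, through the restart policy. A minimal illustrative fragment (not part of this commit's files; `Always` is the default, which is why the walkthrough's examples omit it):

```yaml
# Illustrative fragment only: restartPolicy expresses the desired
# "keep the containers running" state described in the paragraph above.
spec:
  restartPolicy: Always # the default; failed containers are restarted
  containers:
  - name: nginx
    image: nginx
```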
@@ -64,11 +53,14 @@ $ kubectl get pods
 
+On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/getting-into-containers/) for details.
+
-Provided the pod IP is accessible, you should be able to access its http endpoint with curl on port 80:
+Provided the pod IP is accessible, you should be able to access its http endpoint with wget on port 80:
 
-```shell
-$ curl http://$(kubectl get pod nginx -o go-template={% raw %}{{.status.podIP}}{% endraw %})
-```
+```shell{% raw %}
+$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1 --env "POD_IP=$(kubectl get pod nginx -o go-template={{.status.podIP}})"
+u@busybox$ wget -qO- http://$POD_IP # Run in the busybox container
+u@busybox$ exit # Exit the busybox container
+$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
+{% endraw %}```
 
 Delete the pod by name:

View File

@@ -1,123 +1,154 @@
 ---
 ---
 
-## Labels, Replication Controllers, Services and Health Checking
+## Labels, Deployments, Services and Health Checking
 
-If you went through [Kubernetes 101](/docs/user-guide/walkthrough/), you learned about kubectl, pods, volumes, and multiple containers.
-For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, deployment and
+If you went through [Kubernetes 101](/docs/user-guide/walkthrough/), you learned about kubectl, Pods, Volumes, and multiple containers.
+For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, Deployment and
 scaling.
 
 In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
 
 * TOC
 {:toc}
 
 ## Labels
 
-Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
+Having already learned about Pods and how to create them, you may be struck by an urge to create many, many Pods. Please do! But eventually you will need a system to organize these Pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
 
-To add a label, add a labels section under metadata in the pod definition:
+To add a label, add a labels section under metadata in the Pod definition:
 
 ```yaml
 labels:
   app: nginx
 ```
 
-For example, here is the nginx pod definition with labels ([pod-nginx-with-label.yaml](/docs/user-guide/walkthrough/pod-nginx-with-label.yaml)):
+For example, here is the nginx Pod definition with labels ([pod-nginx-with-label.yaml](/docs/user-guide/walkthrough/pod-nginx-with-label.yaml)):
 
 {% include code.html language="yaml" file="pod-nginx-with-label.yaml" ghlink="/docs/user-guide/walkthrough/pod-nginx-with-label.yaml" %}
 
-Create the labeled pod ([pod-nginx-with-label.yaml](/docs/user-guide/walkthrough/pod-nginx-with-label.yaml)):
+Create the labeled Pod ([pod-nginx-with-label.yaml](/docs/user-guide/walkthrough/pod-nginx-with-label.yaml)):
 
-```shell
-$ kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
-```
+```shell
+kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
+```
 
-List all pods with the label `app=nginx`:
+List all Pods with the label `app=nginx`:
 
-```shell
-$ kubectl get pods -l app=nginx
-```
+```shell
+kubectl get pods -l app=nginx
+```
 
-For more information, see [Labels](/docs/user-guide/labels).
-They are a core concept used by two additional Kubernetes building blocks: Replication Controllers and Services.
+For more information, see [Labels](/docs/user-guide/labels/).
+They are a core concept used by two additional Kubernetes building blocks: Deployments and Services.
 
-## Replication Controllers
+## Deployments
 
-OK, now you know how to make awesome, multi-container, labeled pods and you want to use them to build an application, you might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogeneous?
+Now that you know how to make awesome, multi-container, labeled Pods and want to use them to build an application, you might be tempted to just start building a whole bunch of individual Pods. But if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of Pods up or down? How will you roll out a new release?
 
-Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
+The answer to those questions and more is to use a [_Deployment_](/docs/user-guide/deployment/) to manage maintaining and updating your running _Pods_.
+
+A Deployment object defines a Pod creation template (a "cookie-cutter" if you will) and desired replica count. The Deployment uses a label selector to identify the Pods it manages, and will create or delete Pods as needed to meet the replica count. Deployments are also used to manage safely rolling out changes to your running Pods.
 
-For example, here is a replication controller that instantiates two nginx pods ([replication-controller.yaml](/docs/user-guide/walkthrough/replication-controller.yaml)):
-
-{% include code.html language="yaml" file="replication-controller.yaml" ghlink="/docs/user-guide/walkthrough/replication-controller.yaml" %}
+Here is a Deployment that instantiates two nginx Pods:
+
+{% include code.html language="yaml" file="deployment.yaml" ghlink="/docs/user-guide/walkthrough/deployment.yaml" %}
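The label selector mentioned above can also be spelled out explicitly in the Deployment spec; when omitted (as in the deployment.yaml of this commit), it defaults to the labels in the Pod template. A hypothetical fragment, for illustration only:

```yaml
# Illustrative fragment (not one of this commit's files): an explicit
# selector identifying the Pods this Deployment manages. When specified,
# it must match the labels in .spec.template.metadata.labels.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx # must match the Pod template's labels
```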
-#### Replication Controller Management
+#### Deployment Management
 
-Create an nginx replication controller ([replication-controller.yaml](/docs/user-guide/walkthrough/replication-controller.yaml)):
+Create an nginx Deployment:
 
-```shell
-$ kubectl create -f docs/user-guide/walkthrough/replication-controller.yaml
-```
+Download the `deployment.yaml` above by clicking on the file name, and copy it to your local directory.
+
+```shell
+kubectl create -f ./deployment.yaml
+```
 
-List all replication controllers:
-
-```shell
-$ kubectl get rc
-```
+List all Deployments:
+
+```shell
+kubectl get deployment
+```
 
-Delete the replication controller by name:
-
-```shell
-$ kubectl delete rc nginx-controller
-```
-
-For more information, see [Replication Controllers](/docs/user-guide/replication-controller).
+List the Pods created by the Deployment:
+
+```shell
+kubectl get pods -l app=nginx
+```
+
+Upgrade the nginx container from 1.7.9 to 1.8 by changing the Deployment and calling `apply`. The following config
+contains the desired changes:
+
+{% include code.html language="yaml" file="deployment-update.yaml" ghlink="/docs/user-guide/walkthrough/deployment-update.yaml" %}
+
+Download ./deployment-update.yaml and copy it to your local directory.
+
+```shell
+kubectl apply -f ./deployment-update.yaml
+```
+
+Watch the Deployment create Pods with new names and delete the old Pods:
+
+```shell
+kubectl get pods -l app=nginx
+```
+
+Delete the Deployment by name:
+
+```shell
+kubectl delete deployment nginx-deployment
+```
+
+For more information, such as how to roll back Deployment changes to a previous version, see [_Deployments_](/docs/user-guide/deployment/).
 ## Services
 
-Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
+Once you have a replicated set of Pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a Deployment managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the Pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of Pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
 
-For example, here is a service that balances across the pods created in the previous nginx replication controller example ([service.yaml](/docs/user-guide/walkthrough/service.yaml)):
+For example, here is a service that balances across the Pods created in the previous nginx Deployment example ([service.yaml](/docs/user-guide/walkthrough/service.yaml)):
 
 {% include code.html language="yaml" file="service.yaml" ghlink="/docs/user-guide/walkthrough/service.yaml" %}
 
 #### Service Management
 
 Create an nginx service ([service.yaml](/docs/user-guide/walkthrough/service.yaml)):
 
-```shell
-$ kubectl create -f docs/user-guide/walkthrough/service.yaml
-```
+```shell
+kubectl create -f docs/user-guide/walkthrough/service.yaml
+```
 
 List all services:
 
-```shell
-$ kubectl get services
-```
+```shell
+kubectl get services
+```
 
-On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/kubectl-overview/) for details.
+On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox Pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/kubectl-overview/) for details.
 
-Provided the service IP is accessible, you should be able to access its http endpoint with curl on port 80:
+Provided the service IP is accessible, you should be able to access its http endpoint with wget on the exposed port:
 
-```shell
-$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template={% raw %}{{.spec.clusterIP}}{% endraw %})
-$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{% raw %}{{(index .spec.ports 0).port}}{% endraw %}')
-$ curl http://${SERVICE_IP}:${SERVICE_PORT}
-```
+```shell{% raw %}
+$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}')
+$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
+$ echo "$SERVICE_IP:$SERVICE_PORT"
+$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i --env "SERVICE_IP=$SERVICE_IP,SERVICE_PORT=$SERVICE_PORT"
+u@busybox$ wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run in the busybox container
+u@busybox$ exit # Exit the busybox container
+$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
+{% endraw %}```
 
 To delete the service by name:
 
-```shell
-$ kubectl delete service nginx-service
-```
+```shell
+kubectl delete service nginx-service
+```
 
-When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service.
+When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some Pod that is a member of the set identified by the label selector in the Service.
 
 For more information, see [Services](/docs/user-guide/services/).
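Because the service address is stable, other Pods can simply be configured with it. As a sketch (assuming the cluster's DNS add-on is running, so the service name resolves), a hypothetical client Pod could reach the service by name instead of by a hard-coded Pod IP:

```yaml
# Hypothetical client Pod, for illustration only (not part of this commit):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-client
spec:
  containers:
  - name: client
    image: busybox
    # 'nginx-service' resolves via cluster DNS to the service's stable IP;
    # traffic is load-balanced across the Pods matching the service selector.
    command: ["sh", "-c", "wget -qO- http://nginx-service && sleep 3600"]
```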
@@ -143,7 +174,7 @@ Kubernetes.
 
 However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:
 
 ```go
 lockOne := sync.Mutex{}
 lockTwo := sync.Mutex{}
@@ -155,8 +186,8 @@ go func() {
 	lockTwo.Lock();
 	lockOne.Lock();
 ```
 
 This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
 still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.
@@ -173,9 +204,9 @@ In all cases, if the Kubelet discovers a failure the container is restarted.
 
 The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
 
-Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](/docs/user-guide/walkthrough/pod-with-http-healthcheck.yaml)):
+Here is an example config for a Pod with an HTTP health check ([pod-with-http-healthcheck.yaml](/docs/user-guide/walkthrough/pod-with-http-healthcheck.yaml)):
 
 {% include code.html language="yaml" file="pod-with-http-healthcheck.yaml" ghlink="/docs/user-guide/walkthrough/pod-with-http-healthcheck.yaml" %}
 
 For more information about health checking, see [Container Probes](/docs/user-guide/pod-states/#container-probes).
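As a rough sketch of the fields involved (the linked file is authoritative; the path and timings here are illustrative values):

```yaml
# Illustrative livenessProbe fragment for a container spec:
livenessProbe:
  httpGet:
    path: /_status/healthz # hypothetical health endpoint served by the app
    port: 80
  initialDelaySeconds: 30  # grace period after container start
  timeoutSeconds: 1        # probe must respond within this many seconds
```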
@@ -183,4 +214,4 @@ For more information about health checking, see [Container Probes](/docs/user-gu
 
 ## What's Next?
 
 For a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/).

View File

@@ -5,6 +5,6 @@ metadata:
 spec:
   containers:
   - name: nginx
-    image: nginx
+    image: nginx:1.7.9
     ports:
     - containerPort: 80

View File

@@ -1,24 +0,0 @@
-apiVersion: v1
-kind: ReplicationController
-metadata:
-  name: nginx-controller
-spec:
-  replicas: 2
-  # selector identifies the set of Pods that this
-  # replication controller is responsible for managing
-  selector:
-    app: nginx
-  # podTemplate defines the 'cookie cutter' used for creating
-  # new pods when necessary
-  template:
-    metadata:
-      labels:
-        # Important: these labels need to match the selector above
-        # The api server enforces this constraint.
-        app: nginx
-    spec:
-      containers:
-      - name: nginx
-        image: nginx
-        ports:
-        - containerPort: 80

View File

@@ -9,7 +9,7 @@ spec:
     # (e.g. 'www') or a number (e.g. 80)
     targetPort: 80
     protocol: TCP
-  # just like the selector in the replication controller,
+  # just like the selector in the deployment,
   # but this time it identifies the set of pods to load balance
   # traffic to.
   selector: