---
reviewers:
- janetkuo
- mikedanese
title: Kubernetes 201
---

{{< toc >}}

## Labels, Deployments, Services and Health Checking

If you went through [Kubernetes 101](/docs/tutorials/k8s101/), you learned about kubectl, Pods, Volumes, and multiple containers.
For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes related to running applications in production: Deployments, Services, scaling, and health checking.

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
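
For example, one way to fetch the source tree (a minimal sketch; downloading a release tarball works just as well):

```shell
git clone https://github.com/kubernetes/kubernetes.git
```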


## Labels

Having already learned about Pods and how to create them, you may be struck by an urge to create many, many Pods. Please do! But eventually you will need a system to organize these Pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
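
Under the hood this is just a query parameter on the apiserver's `list` endpoint. A quick way to see that for yourself (a sketch; it assumes the `default` namespace and the `env=test` label used below):

```shell
# List Pods labeled env=test straight from the REST API (URL-encoded selector)
kubectl get --raw "/api/v1/namespaces/default/pods?labelSelector=env%3Dtest"
```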

To add a label, add a labels section under metadata in the Pod definition:

```yaml
  labels:
    env: test
```

For example, here is the nginx Pod definition with labels ([pod-nginx.yaml](/examples/pods/pod-nginx.yaml)):

{{< codenew file="pods/pod-nginx.yaml" >}}

Create the labeled Pod:

```shell
kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
```

List all Pods with the label `env=test`:

```shell
kubectl get pods -l env=test
```

Delete the Pod by label:

```shell
kubectl delete pod -l env=test
```

For more information, see [Labels](/docs/concepts/overview/working-with-objects/labels/).
They are a core concept used by two additional Kubernetes building blocks: Deployments and Services.


## Deployments

Now that you know how to make awesome, multi-container, labeled Pods and you want to use them to build an application, you might be tempted to just start building a whole bunch of individual Pods. But if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of Pods up or down? How will you roll out a new release?

The answer to those questions and more is to use a [Deployment](/docs/concepts/workloads/controllers/deployment/) to manage maintaining and updating your running _Pods_.

A Deployment object defines a Pod creation template (a "cookie-cutter", if you will) and a desired replica count. The Deployment uses a label selector to identify the Pods it manages, and will create or delete Pods as needed to meet the replica count. Deployments are also used to manage safely rolling out changes to your running Pods.

Here is a Deployment that instantiates two nginx Pods:

{{< codenew file="application/deployment.yaml" >}}


### Deployment Management

Create an nginx Deployment:

```shell
kubectl create -f https://k8s.io/examples/application/deployment.yaml
```

List all Deployments:

```shell
kubectl get deployment
```

List the Pods created by the Deployment:

```shell
kubectl get pods -l app=nginx
```

Upgrade the nginx container from 1.7.9 to 1.8 by changing the Deployment and calling `apply`. The following config contains the desired changes:

{{< codenew file="application/deployment-update.yaml" >}}

```shell
kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
```

Watch the Deployment create Pods with new names and delete the old Pods:

```shell
kubectl get pods -l app=nginx
```
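
You can also ask the Deployment itself how the rollout is going (assuming the name `nginx-deployment` used by the example manifests):

```shell
kubectl rollout status deployment/nginx-deployment
```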

Delete the Deployment by name:

```shell
kubectl delete deployment nginx-deployment
```

For more information, such as how to roll back Deployment changes to a previous version, see [_Deployments_](/docs/concepts/workloads/controllers/deployment/).
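
As a quick taste of what that page covers, rolling back to the previous revision is a single command (a sketch; it assumes the Deployment still exists and is named `nginx-deployment`):

```shell
kubectl rollout undo deployment/nginx-deployment
```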


## Services

Once you have a replicated set of Pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a Deployment managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the Pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of Pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.

For example, here is a service that balances across the Pods created in the previous nginx Deployment example ([service.yaml](/examples/service/nginx-service.yaml)):

{{< codenew file="service/nginx-service.yaml" >}}


### Service Management

Create an nginx Service:

```shell
kubectl create -f https://k8s.io/examples/service/nginx-service.yaml
```

List all services:

```shell
kubectl get services
```

On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox Pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/kubectl-overview/) for details.

Provided the service IP is accessible, you should be able to access its HTTP endpoint with wget on the exposed port:

```shell
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}')
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
$ echo "$SERVICE_IP:$SERVICE_PORT"
$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i --env "SERVICE_IP=$SERVICE_IP" --env "SERVICE_PORT=$SERVICE_PORT"
u@busybox$ wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run in the busybox container
u@busybox$ exit # Exit the busybox container
$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
```

The service definition [exposed the Nginx Service](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) as port 8000 (`$SERVICE_PORT`). We can also access the service from a host running Kubernetes using that port:

```shell
wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run on a Kubernetes host
```

(This works on AWS with Weave.)

To delete the service by name:

```shell
kubectl delete service nginx-service
```

When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some Pod that is a member of the set identified by the label selector in the Service.
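
Inside the cluster you normally don't need the raw IP at all. Assuming the cluster DNS add-on is running (a sketch; the Service name and port come from the example above), Pods in the same namespace can simply use the Service name:

```shell
wget -qO- http://nginx-service:8000 # Run from inside any Pod in the same namespace
```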

For more information, see [Services](/docs/concepts/services-networking/service/).


## Health Checking

When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/kubernetes/kubernetes/issues) indicates otherwise...

Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
and repair of your application. That way a system outside of your application itself is responsible for monitoring the
application and taking action to fix it. It's important that the system be outside of the application, since if
your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
In Kubernetes, the health check monitor is the Kubelet agent.

### Process Health Checking

The simplest form of health checking is process-level health checking. The Kubelet constantly asks the Docker daemon
if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
Kubernetes.

### Application Health Checking

However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:

```go
lockOne := sync.Mutex{}
lockTwo := sync.Mutex{}

// This goroutine acquires the locks in the order one, two ...
go func() {
    lockOne.Lock()
    lockTwo.Lock()
    // ...
}()

// ... while the code below acquires them in the order two, one.
lockTwo.Lock()
lockOne.Lock()
```

This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.

To address this problem, Kubernetes supports user-implemented application health checks. These checks are performed by the
Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.

Currently, there are three types of application health checks that you can choose from:

* HTTP Health Checks - The Kubelet will call a web hook. If it returns a status code between 200 and 399, it is considered a success; any other code is considered a failure. See health check examples [here](/docs/user-guide/liveness/).
* Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0, it is considered a success; any other exit status is a failure. See health check examples [here](/docs/user-guide/liveness/).
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.

In all cases, if the Kubelet discovers a failure, the container is restarted.

The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

Here is an example config for a Pod with an HTTP health check
([pod-with-http-healthcheck.yaml](/examples/pods/probe/pod-with-http-healthcheck.yaml)):

{{< codenew file="pods/probe/pod-with-http-healthcheck.yaml" >}}

And here is an example config for a Pod with a TCP Socket health check
([pod-with-tcp-socket-healthcheck.yaml](/examples/pods/probe/pod-with-tcp-socket-healthcheck.yaml)):

{{< codenew file="pods/probe/pod-with-tcp-socket-healthcheck.yaml" >}}
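
Neither file above shows the Container Exec variant. Here is a minimal sketch of one; the Pod name, image, command, and timings are illustrative assumptions rather than part of the official example files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-exec-healthcheck   # hypothetical name, for illustration
spec:
  containers:
  - name: liveness
    image: busybox                  # any image with a shell works
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        # The check passes as long as this command exits with status 0
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5        # grace period before the first check
      periodSeconds: 10
```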

For more information about health checking, see [Container Probes](/docs/user-guide/pod-states/#container-probes).


## What's Next?

For a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/).