Edited Concepts landing page, Objects overview, and Pod concept documentation to address feedback from pwittroc.

reviewable/pr1611/r3
Devin Donnelly 2016-11-18 16:37:23 -08:00
parent a21c80911e
commit 30be38b63f
3 changed files with 49 additions and 17 deletions


@ -14,19 +14,21 @@ This page explains how Kubernetes objects are represented in the Kubernetes API,
* The resources available to those applications
* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
When you create a Kubernetes object, you create a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that the entity exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster to be doing; this is your cluster's **desired state**.
When you create a Kubernetes object, you create a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's **desired state**.
To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you; you can also use the Kubernetes API directly in your own programs.
To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you; you can also use the Kubernetes API directly in your own programs. Kubernetes currently provides a `golang` client library for this purpose, and other language libraries are being developed.
#### Object Spec and Status
Every Kubernetes object has two major nested object fields: the object *spec* and the object *status*. The *spec*, which you must provide, describes your *desired state* for the object--the characteristics that you want the object to have. The *status* describes the *actual state* for the object, and is supplied by the Kubernetes system. At any given time, the [Kubernetes Control Plane](/docs/concepts/control-plane/overview/) actively maintains an object's actual state to match the desired state you supplied.
Every Kubernetes object includes two nested object fields that govern the object's configuration: the object *spec* and the object *status*. The *spec*, which you must provide, describes your *desired state* for the object--the characteristics that you want the object to have. The *status* describes the *actual state* for the object, and is supplied and updated by the Kubernetes system. At any given time, the [Kubernetes Control Plane](/docs/concepts/control-plane/overview/) actively manages an object's actual state to match the desired state you supplied.
For more information on the object spec and status, see the [Kubernetes API Conventions](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#spec-and-status).
For example, a Kubernetes [Deployment]() is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system reacts to the difference between spec and status by making a correction--in this case, starting a replacement instance.
For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#spec-and-status).
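A quick way to see both fields on a live object is to retrieve it in full; for example, for a hypothetical Deployment named `nginx-deployment`:
```shell
$ kubectl get deployment nginx-deployment -o yaml
```
The returned object contains both the `spec` you provided and the `status` that the system fills in and keeps up to date.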
#### Describing a Kubernetes Object
When you create an object in Kubernetes, you need to describe it. Your description must provide some basic information about the object along with the object spec that represents your desired state. The Kubernetes API communicates this information by passing JSON; when you make Kubernetes API calls or use the `kubectl` command-line interface, **you can express that JSON using a `.yaml` file.**
When you create an object in Kubernetes, you must provide the object spec that describes your desired state for it, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via `kubectl`), that API call sends the information to the Kubernetes Master as JSON. **You can express that JSON using a `.yaml` file.**
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes [Deployment](/docs/concepts/abstractions/deployment/):
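A minimal sketch of such a manifest (the names, image, and `apiVersion` shown here are illustrative and may differ from the actual example file):
```yaml
apiVersion: apps/v1            # illustrative; older clusters use a beta API group
kind: Deployment
metadata:
  name: nginx-deployment       # a name is part of the required basic information
spec:
  replicas: 3                  # desired state: run three replicas
  selector:
    matchLabels:
      app: nginx
  template:                    # Pod template the Deployment uses to create its Pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```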
@ -36,6 +38,11 @@ One way to create a Deployment using a `.yaml` file like the one above is to use
```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record
```
The output is similar to this:
```shell
deployment "nginx-deployment" created
```


@ -13,20 +13,29 @@ This page provides an overview of `Pod`, the smallest deployable object in the K
A *Pod* is the basic building block of Kubernetes--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, and options that govern how the container(s) should run. A Pod represents a unit of deployment: *a single workload in Kubernetes*, which might consist of either a single application or a small number of applications that are tightly coupled and that share resources.
A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: *a single instance of an application in Kubernetes*, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
> [Docker](https://www.docker.com) is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.
Pods are employed in a number of ways in a Kubernetes cluster, including:
* **Pods that run a single application container**. The "one-application per Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single application, and Kubernetes manages the Pods rather than the containers directly.
* **Pods that run multiple application containers that need to work together**. Pods can support multiple application containers that are tightly coupled and need to share resources. You can think of these applications as forming a *single cohesive unit of service*. The Pod wraps them together with shared resources as a single manageable entity.
* **Pods that run a single container**. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.
* **Pods that run multiple containers that need to work together**. A Pod might encapsulate an application that relies on multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service--one container serving files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
Pods typically *do not* model multiple instances of the same application container. Instead, you can have Kubernetes maintain separate Pods for each instance you want to run, usually managed by a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.
The [Kubernetes Blog](http://blog.kubernetes.io) has some additional information on Pod use cases. For more information, see:
#### How Pods Manage Containers
* [The Distributed System Toolkit: Patterns for Composite Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html)
* [Container Design Patterns](http://blog.kubernetes.io/2016/06/container-design-patterns.html)
Pods are designed to support multiple cooperating processes (as application containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.
Note that each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, such Pods are usually managed by a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.
#### How Pods Manage Multiple Containers
Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.
Note that grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. For example, you might have a container that acts as a web server for files in a shared volume, and a separate "sidecar" container that updates those files from a remote source, as in the following diagram:
![pod diagram](/images/docs/pod.svg){: style="max-width: 50%" }
Pods provide two kinds of shared resources for their constituent containers: *networking* and *storage*.
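As a sketch of the shared-volume pattern described above (all names, images, and commands here are illustrative rather than part of the documented example):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar           # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                   # scratch volume shared by both containers
  containers:
  - name: web-server               # serves the files in the shared volume
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-refresher        # "sidecar" that refreshes those files
    image: alpine
    command: ["/bin/sh", "-c", "while true; do date > /pod-data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```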
@ -40,13 +49,15 @@ A Pod can specify a set of shared storage *volumes*. All containers in the pod c
### Working with Pods
When a Pod gets created (directly or indirectly), it is scheduled to run on a [node]() in your cluster, and remains on that node until terminated or deleted. Should a node in the cluster fail, the Pods scheduled on that node are deleted after a timeout period. See [Termination](#pod-termination) for more details on how Pods terminate in Kubernetes.
You'll rarely create individual Pods directly in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a Controller), it is scheduled to run on a [node]() in your cluster. The Pod remains on that node until the process is terminated, the pod object is deleted, or the pod is *evicted* for lack of resources.
You'll rarely create or interact directly with individual Pods in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral entities (as opposed to durable ones). A Pod won't survive a scheduling failure, a node failure, or an eviction due to a lack of resources or node maintenance. Thus, while it is possible to use Pods directly, it's far more common in Kubernetes to manage your Pods using a higher-level abstraction called a *Controller*.
> Note: Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run; rather, it is an environment that the containers run in, and it persists until it is deleted.
Pods do not, by themselves, self-heal. If a Pod is scheduled to a node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or node maintenance. Kubernetes uses a higher-level abstraction, called a *Controller*, that handles the work of managing the relatively disposable Pod instances. Thus, while it is possible to use Pods directly, it's far more common in Kubernetes to manage your Pods using a Controller. See [Pods and Controllers](#pods-and-controllers) for more information on how Kubernetes uses Controllers to implement Pod scaling and healing.
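Once a Pod has been scheduled, one way to check where it landed (a sketch; the exact columns depend on your `kubectl` version) is the wide output format:
```shell
$ kubectl get pods -o wide
```
The `NODE` column shows which node each Pod is running on.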
#### <a name="pods-and-controllers"></a> Pods and Controllers
A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope (for example, if a node fails, a Controller might schedule an identical replacement Pod on a different node).
A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different node.
Some examples of Controllers that contain one or more pods include:
@ -58,7 +69,14 @@ In general, Controllers use a [Pod Template]() that you provide to create the Po
#### <a name="pod-termination"></a> Pod Termination
Since Pods represent processes running on your cluster, Kubernetes provides for *graceful termination* when Pods are no longer needed. Kubernetes implements graceful termination by applying a default *grace period* of 30 seconds from the time that you issue a termination request. After the grace period expires, Kubernetes issues a `KILL` signal to the relevant processes and the Pod is deleted from the Kubernetes Master.
Since Pods represent processes running on your cluster, Kubernetes provides for *graceful termination* when Pods are no longer needed. Kubernetes implements graceful termination by applying a default *grace period* of 30 seconds from the time that you issue a termination request. A typical Pod termination in Kubernetes involves the following steps:
1. You send a command or API call to terminate the Pod.
1. Kubernetes updates the Pod status to reflect the time after which the Pod is to be considered "dead" (the time of the termination request plus the grace period).
1. Kubernetes marks the Pod state as "Terminating" and stops sending traffic to the Pod.
1. Kubernetes sends a `TERM` signal to the Pod, indicating that the Pod should shut down.
1. When the grace period expires, Kubernetes issues a `SIGKILL` to any processes still running in the Pod.
1. Kubernetes removes the Pod from the API server on the Kubernetes Master.
> **Note:** The grace period is configurable; you can set your own grace period when interacting with the cluster to request termination, such as using the `kubectl delete` command. See the [Terminating a Pod]() tutorial for more information.
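For example, a sketch of requesting a longer grace period when deleting a Pod (the Pod name here is hypothetical):
```shell
$ kubectl delete pod nginx-pod --grace-period=60
```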


@ -7,7 +7,12 @@ The Concepts section helps you learn about the parts of the Kubernetes system an
To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.
Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of processes running on your cluster: the Kubernetes Master, and kubelet and kube-proxy processes running on your cluster's individual nodes.
Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
* The **Kubernetes Master** is a collection of four processes that run on a single node in your cluster, which is designated as the master node.
* Each individual non-master node in your cluster runs two processes:
* **kubelet**, which communicates with the Kubernetes Master.
* **kube-proxy**, a network proxy which reflects Kubernetes networking services on each node.
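A rough way to see these pieces from the command line (a sketch; output formats vary by version): `kubectl cluster-info` reports the address of the Kubernetes Master, and `kubectl get nodes` lists the nodes whose kubelets have registered with it.
```shell
$ kubectl cluster-info
$ kubectl get nodes
```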
## Kubernetes Objects
@ -30,7 +35,9 @@ In addition, Kubernetes contains a number of higher-level abstractions that buil
## Kubernetes Control Plane
The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and kubelet processes, govern how Kubernetes communicates with your cluster. When you use the Kubernetes API to create a Deployment object, for example, the Kubernetes Control Plane carries out your instructions.
The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and kubelet processes, govern how Kubernetes communicates with your cluster. The Control Plane maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage those objects' state. At any given time, the Control Plane's control loops will attempt to match the actual state of all the objects in the system to the desired state that you provided when you created those objects.
For example, when you use the Kubernetes API to create a Deployment object, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes--thus making the cluster's actual state match the desired state.
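To watch a control loop perform this reconciliation, you can delete one of the Deployment's Pods; the Pod name below is hypothetical, and this is only a sketch:
```shell
# Deleting a Pod leaves the actual state one replica short of the desired state...
$ kubectl delete pod nginx-deployment-1234567890-abcde
# ...so the Deployment's control loop creates a replacement to restore it.
$ kubectl get pods
```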
### Kubernetes Master