Revise Pod concept (#22603)

* Revise Pod concept

  Adapt the existing Pod documentation to suit the Docsy theme, by promoting the
  Pod concept itself to /docs/concepts/workloads/pods/

  Following on from this, update the Pod Lifecycle page to cover the lifecycle of
  a Pod and follow on directly from the Pod concept, for readers keen to
  understand things in detail.

  This change also removes the automatic contents list from the Pod overview
  page. Instead, the new page links to all the pages inside the Pod section.

* Update links to Pod concept

  Link to updated content

* Incorporate Pod concept suggestions

  Co-authored-by: Celeste Horgan <celeste@cncf.io>

* Revise StatefulSet suggestion for Pod concept

  Co-authored-by: Celeste Horgan <celeste@cncf.io>

Co-authored-by: Celeste Horgan <celeste@cncf.io>

pull/22769/head
parent c80c9c40c1
commit 49eee8fd3d

@@ -12,7 +12,7 @@ understand exactly how it is expected to work. There are 4 distinct networking
problems to address:

1. Highly-coupled container-to-container communications: this is solved by
   [pods](/docs/concepts/workloads/pods/pod/) and `localhost` communications.
   {{< glossary_tooltip text="Pods" term_id="pod" >}} and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).

@@ -255,7 +255,7 @@ makes Pod P eligible to preempt Pods on another Node.
#### Graceful termination of preemption victims

When Pods are preempted, the victims get their
[graceful termination period](/docs/concepts/workloads/pods/pod/#termination-of-pods).
[graceful termination period](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).
They have that much time to finish their work and exit. If they don't, they are
killed. This graceful termination period creates a time gap between the point
that the scheduler preempts Pods and the time when the pending Pod (P) can be

@@ -268,7 +268,7 @@ priority Pods to zero or a small number.

#### PodDisruptionBudget is supported, but not guaranteed

A [Pod Disruption Budget (PDB)](/docs/concepts/workloads/pods/disruptions/)
A [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) (PDB)
allows application owners to limit the number of Pods of a replicated application
that are down simultaneously from voluntary disruptions. Kubernetes supports
PDB when preempting Pods, but respecting PDB is best effort. The scheduler tries

@@ -42,7 +42,7 @@ so it must complete before the call to delete the container can be sent.
No parameters are passed to the handler.

A more detailed description of the termination behavior can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod/#termination-of-pods).
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).

### Hook handler implementations

@@ -92,7 +92,7 @@ and the `spec` format for a Deployment can be found in
## {{% heading "whatsnext" %}}

* [Kubernetes API overview](/docs/reference/using-api/api-overview/) explains some more API concepts
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/pod-overview/).
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes

@@ -60,7 +60,7 @@ A DaemonSet also needs a [`.spec`](https://git.k8s.io/community/contributors/dev

The `.spec.template` is one of the required fields in `.spec`.

The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a Pod template in a DaemonSet has to specify appropriate
labels (see [pod selector](#pod-selector)).

@@ -13,8 +13,8 @@ weight: 30

<!-- overview -->

A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
A _Deployment_ provides declarative updates for {{< glossary_tooltip text="Pods" term_id="pod" >}} and
{{< glossary_tooltip term_id="replica-set" text="ReplicaSets" >}}.

You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

@@ -23,8 +23,6 @@ Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in th
{{< /note >}}


<!-- body -->

## Use Case

@@ -1053,8 +1051,7 @@ A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contrib

The `.spec.template` and `.spec.selector` are the only required fields of the `.spec`.

The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an
`apiVersion` or `kind`.
The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).

@@ -122,7 +122,7 @@ A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/d

The `.spec.template` is the only required field of the `.spec`.

The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a pod template in a Job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.

@@ -126,7 +126,7 @@ A ReplicationController also needs a [`.spec` section](https://git.k8s.io/commun

The `.spec.template` is the only required field of the `.spec`.

The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [pod selector](#pod-selector).

@@ -1,5 +1,271 @@
---
title: "Pods"
reviewers:
- erictune
title: Pods
content_type: concept
weight: 10
no_list: true
card:
  name: concepts
  weight: 60
---

<!-- overview -->

_Pods_ are the smallest deployable units of computing that you can create and manage in Kubernetes.

A _Pod_ (as in a pod of whales or pea pod) is a group of one or more
{{< glossary_tooltip text="containers" term_id="container" >}}, with shared storage/network resources, and a specification
for how to run the containers. A Pod's contents are always co-located and
co-scheduled, and run in a shared context. A Pod models an
application-specific "logical host": it contains one or more application
containers which are relatively tightly coupled.
In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.

As well as application containers, a Pod can contain
[init containers](/docs/concepts/workloads/pods/init-containers/) that run
during Pod startup. You can also inject
[ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/)
for debugging if your cluster offers this.

<!-- body -->

## What is a Pod?

{{< note >}}
While Kubernetes supports more
{{< glossary_tooltip text="container runtimes" term_id="container-runtime" >}}
than just Docker, [Docker](https://www.docker.com/) is the most commonly known
runtime, and it helps to describe Pods using some terminology from Docker.
{{< /note >}}

The shared context of a Pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation - the same things that isolate a Docker
container. Within a Pod's context, the individual applications may have
further sub-isolations applied.

In terms of Docker concepts, a Pod is similar to a group of Docker containers
with shared namespaces and shared filesystem volumes.

## Using Pods

Usually you don't need to create Pods directly, even singleton Pods. Instead, create them
using workload resources such as {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
or {{< glossary_tooltip text="Job" term_id="job" >}}.
If your Pods need to track state, consider the
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} resource.

Pods in a Kubernetes cluster are used in two main ways:

* **Pods that run a single container**. The "one-container-per-Pod" model is the
  most common Kubernetes use case; in this case, you can think of a Pod as a
  wrapper around a single container; Kubernetes manages Pods rather than managing
  the containers directly.
* **Pods that run multiple containers that need to work together**. A Pod can
  encapsulate an application composed of multiple co-located containers that are
  tightly coupled and need to share resources. These co-located containers
  form a single cohesive unit of service—for example, one container serving data
  stored in a shared volume to the public, while a separate _sidecar_ container
  refreshes or updates those files.
  The Pod wraps these containers, storage resources, and an ephemeral network
  identity together as a single unit.

{{< note >}}
Grouping multiple co-located and co-managed containers in a single Pod is a
relatively advanced use case. You should use this pattern only in specific
instances in which your containers are tightly coupled.
{{< /note >}}

Each Pod is meant to run a single instance of a given application. If you want to
scale your application horizontally (to provide more overall resources by running
more instances), you should use multiple Pods, one for each instance. In
Kubernetes, this is typically referred to as _replication_.
Replicated Pods are usually created and managed as a group by a workload resource
and its {{< glossary_tooltip text="controller" term_id="controller" >}}.

See [Pods and controllers](#pods-and-controllers) for more information on how
Kubernetes uses workload resources, and their controllers, to implement application
scaling and auto-healing.

### How Pods manage multiple containers

Pods are designed to support multiple cooperating processes (as containers) that form
a cohesive unit of service. The containers in a Pod are automatically co-located and
co-scheduled on the same physical or virtual machine in the cluster. The containers
can share resources and dependencies, communicate with one another, and coordinate
when and how they are terminated.

For example, you might have a container that
acts as a web server for files in a shared volume, and a separate "sidecar" container
that updates those files from a remote source, as in the following diagram:

{{< figure src="/images/docs/pod.svg" alt="example pod diagram" width="50%" >}}

Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}} as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}. Init containers run and complete before the app containers are started.

Pods natively provide two kinds of shared resources for their constituent containers:
[networking](#pod-networking) and [storage](#pod-storage).

## Working with Pods

You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This
is because Pods are designed as relatively ephemeral, disposable entities. When
a Pod gets created (directly by you, or indirectly by a
{{< glossary_tooltip text="controller" term_id="controller" >}}), the new Pod is
scheduled to run on a {{< glossary_tooltip term_id="node" >}} in your cluster.
The Pod remains on that node until the Pod finishes execution, the Pod object is deleted,
the Pod is *evicted* for lack of resources, or the node fails.

{{< note >}}
Restarting a container in a Pod should not be confused with restarting a Pod. A Pod
is not a process, but an environment for running container(s). A Pod persists until
it is deleted.
{{< /note >}}

When you create the manifest for a Pod object, make sure the name specified is a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

### Pods and controllers

You can use workload resources to create and manage multiple Pods for you. A controller
for the resource handles replication and rollout and automatic healing in case of
Pod failure. For example, if a Node fails, a controller notices that Pods on that
Node have stopped working and creates a replacement Pod. The scheduler places the
replacement Pod onto a healthy Node.

Here are some examples of workload resources that manage one or more Pods:

* {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
* {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}
* {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}

### Pod templates

Controllers for {{< glossary_tooltip text="workload" term_id="workload" >}} resources create Pods
from a _pod template_ and manage those Pods on your behalf.

PodTemplates are specifications for creating Pods, and are included in workload resources such as
[Deployments](/docs/concepts/workloads/controllers/deployment/),
[Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/).

Each controller for a workload resource uses the `PodTemplate` inside the workload
object to make actual Pods. The `PodTemplate` is part of the desired state of whatever
workload resource you used to run your app.

The sample below is a manifest for a simple Job with a `template` that starts one
container. The container in that Pod prints a message then pauses.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here
```

Modifying the pod template or switching to a new pod template has no effect on the
Pods that already exist. Pods do not receive template updates directly. Instead,
a new Pod is created to match the revised pod template.

For example, the deployment controller ensures that the running Pods match the current
pod template for each Deployment object. If the template is updated, the Deployment has
to remove the existing Pods and create new Pods based on the updated template. Each workload
resource implements its own rules for handling changes to the Pod template.

On Nodes, the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} does not
directly observe or manage any of the details around pod templates and updates; those
details are abstracted away. That abstraction and separation of concerns simplifies
system semantics, and makes it feasible to extend the cluster's behavior without
changing existing code.

## Resource sharing and communication

Pods enable data sharing and communication among their constituent
containers.

### Storage in Pods {#pod-storage}

A Pod can specify a set of shared storage
{{< glossary_tooltip text="volumes" term_id="volume" >}}. All containers
in the Pod can access the shared volumes, allowing those containers to
share data. Volumes also allow persistent data in a Pod to survive
in case one of the containers within needs to be restarted. See
[Storage](/docs/concepts/storage/) for more information on how
Kubernetes implements shared storage and makes it available to Pods.
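
As a minimal sketch of shared storage, this hypothetical Pod mounts a single `emptyDir` volume into two containers, so a file written by one container is readable by the other:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-example
spec:
  volumes:
  - name: shared-data
    emptyDir: {}             # scratch space that lives as long as the Pod
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'echo hello > /data/hello.txt && sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    # waits briefly, then reads the file the writer created
    command: ['sh', '-c', 'sleep 5 && cat /data/hello.txt && sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
```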

### Pod networking

Each Pod is assigned a unique IP address for each address family. Every
container in a Pod shares the network namespace, including the IP address and
network ports. Inside a Pod (and **only** then), the containers that belong to the Pod
can communicate with one another using `localhost`. When containers in a Pod communicate
with entities *outside the Pod*,
they must coordinate how they use the shared network resources (such as ports).
Within a Pod, containers share an IP address and port space, and
can find each other via `localhost`. The containers in a Pod can also communicate
with each other using standard inter-process communications like SystemV semaphores
or POSIX shared memory. Containers in different Pods have distinct IP addresses
and cannot communicate by IPC without
[special configuration](/docs/concepts/policy/pod-security-policy/).
Containers that want to interact with a container running in a different Pod can
use IP networking to communicate.
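
To illustrate `localhost` communication, this hypothetical two-container Pod runs a web server alongside a sidecar that polls it over the shared loopback interface:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-example
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    ports:
    - containerPort: 80      # both containers share this port space
  - name: poller
    image: busybox
    # reaches the web container via the Pod's shared network namespace
    command: ['sh', '-c', 'while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done']
```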

Containers within the Pod see the system hostname as being the same as the configured
`name` for the Pod. There's more about this in the [networking](/docs/concepts/cluster-administration/networking/)
section.

## Privileged mode for containers

Any container in a Pod can enable privileged mode, using the `privileged` flag on the [security context](/docs/tasks/configure-pod-container/security-context/) of the container spec. This is useful for containers that want to use operating system administrative capabilities such as manipulating the network stack or accessing hardware devices.
Processes within a privileged container get almost the same privileges that are available to processes outside a container.
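
In a manifest, enabling the flag is a one-line change to the container's security context; here is a minimal sketch (only do this for workloads that genuinely need host-level access):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example
spec:
  containers:
  - name: admin-tool
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    securityContext:
      privileged: true       # grants near-host-level privileges to this container
```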

{{< note >}}
Your {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} must support the concept of a privileged container for this setting to be relevant.
{{< /note >}}

## Static Pods

_Static Pods_ are managed directly by the kubelet daemon on a specific node,
without the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
observing them.
Whereas most Pods are managed by the control plane (for example, a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}), for static
Pods, the kubelet directly supervises each static Pod (and restarts it if it fails).

Static Pods are always bound to one {{< glossary_tooltip term_id="kubelet" >}} on a specific node.
The main use for static Pods is to run a self-hosted control plane: in other words,
using the kubelet to supervise the individual [control plane components](/docs/concepts/overview/components/#control-plane-components).

The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Pod" term_id="mirror-pod" >}}
on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.
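
A static Pod is defined by an ordinary Pod manifest that you place in the directory named by the kubelet's `staticPodPath` configuration setting; the path and manifest below are a hypothetical sketch based on a common layout:

```yaml
# Saved as, for example, /etc/kubernetes/manifests/static-web.yaml on the node
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```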

## {{% heading "whatsnext" %}}

* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
* Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/).
* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
  configure different Pods with different container runtime configurations.
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
* Read about [PodDisruptionBudget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.
* Pod is a top-level resource in the Kubernetes REST API.
  The [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
  object definition describes the object in detail.
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container.

To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}), you can read about the prior art, including:
* [Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).

@@ -11,17 +11,15 @@ weight: 60
<!-- overview -->
This guide is for application owners who want to build
highly available applications, and thus need to understand
what types of Disruptions can happen to Pods.
what types of disruptions can happen to Pods.

It is also for Cluster Administrators who want to perform automated
It is also for cluster administrators who want to perform automated
cluster actions, like upgrading and autoscaling clusters.

<!-- body -->

## Voluntary and Involuntary Disruptions
## Voluntary and involuntary disruptions

Pods do not disappear until someone (a person or a controller) destroys them, or
there is an unavoidable hardware or system software error.

@@ -48,7 +46,7 @@ Administrator. Typical application owner actions include:
- updating a deployment's pod template causing a restart
- directly deleting a pod (e.g. by accident)

Cluster Administrator actions include:
Cluster administrator actions include:

- [Draining a node](/docs/tasks/administer-cluster/safely-drain-node/) for repair or upgrade.
- Draining a node from a cluster to scale the cluster down (learn about

@@ -68,7 +66,7 @@ Not all voluntary disruptions are constrained by Pod Disruption Budgets. For exa
deleting deployments or pods bypasses Pod Disruption Budgets.
{{< /caution >}}

## Dealing with Disruptions
## Dealing with disruptions

Here are some ways to mitigate involuntary disruptions:

@@ -90,58 +88,58 @@ of cluster (node) autoscaling may cause voluntary disruptions to defragment and
Your cluster administrator or hosting provider should have documented what level of voluntary
disruptions, if any, to expect.

Kubernetes offers features to help run highly available applications at the same
time as frequent voluntary disruptions. We call this set of features
*Disruption Budgets*.

## How Disruption Budgets Work
## Pod disruption budgets

{{< feature-state for_k8s_version="v1.5" state="beta" >}}

An Application Owner can create a `PodDisruptionBudget` object (PDB) for each application.
A PDB limits the number of pods of a replicated application that are down simultaneously from
voluntary disruptions. For example, a quorum-based application would
Kubernetes offers features to help you run highly available applications even when you
introduce frequent voluntary disruptions.

As an application owner, you can create a PodDisruptionBudget (PDB) for each application.
A PDB limits the number of Pods of a replicated application that are down simultaneously from
voluntary disruptions. For example, a quorum-based application would
like to ensure that the number of replicas running is never brought below the
number needed for a quorum. A web front end might want to
ensure that the number of replicas serving load never falls below a certain
percentage of the total.

Cluster managers and hosting providers should use tools which
respect Pod Disruption Budgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)
instead of directly deleting pods or deployments. Examples are the `kubectl drain` command
and the Kubernetes-on-GCE cluster upgrade script (`cluster/gce/upgrade.sh`).
respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)
instead of directly deleting pods or deployments.

When a cluster administrator wants to drain a node
they use the `kubectl drain` command. That tool tries to evict all
the pods on the machine. The eviction request may be temporarily rejected,
and the tool periodically retries all failed requests until all pods
are terminated, or until a configurable timeout is reached.
For example, the `kubectl drain` subcommand lets you mark a node as going out of
service. When you run `kubectl drain`, the tool tries to evict all of the Pods on
the Node you're taking out of service. The eviction request that `kubectl` submits on
your behalf may be temporarily rejected, so the tool periodically retries all failed
requests until all Pods on the target node are terminated, or until a configurable timeout
is reached.

A PDB specifies the number of replicas that an application can tolerate having, relative to how
many it is intended to have. For example, a Deployment which has a `.spec.replicas: 5` is
supposed to have 5 pods at any given time. If its PDB allows for there to be 4 at a time,
then the Eviction API will allow voluntary disruption of one, but not two pods, at a time.
then the Eviction API will allow voluntary disruption of one (but not two) pods at a time.
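
For that Deployment, a matching PodDisruptionBudget might look like the sketch below (the name and label selector are illustrative; depending on your cluster version, the API group may be `policy/v1beta1` or `policy/v1`):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 4          # voluntary evictions are allowed only while 4 Pods stay up
  selector:
    matchLabels:
      app: my-app          # must match the labels on the Deployment's Pods
```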

The group of pods that comprise the application is specified using a label selector, the same
as the one used by the application's controller (deployment, stateful-set, etc).

The "intended" number of pods is computed from the `.spec.replicas` of the pods controller.
The controller is discovered from the pods using the `.metadata.ownerReferences` of the object.
The "intended" number of pods is computed from the `.spec.replicas` of the workload resource
that is managing those pods. The control plane discovers the owning workload resource by
examining the `.metadata.ownerReferences` of the Pod.

PDBs cannot prevent [involuntary disruptions](#voluntary-and-involuntary-disruptions) from
occurring, but they do count against the budget.

Pods which are deleted or unavailable due to a rolling upgrade to an application do count
against the disruption budget, but controllers (like deployment and stateful-set)
are not limited by PDBs when doing rolling upgrades -- the handling of failures
during application updates is configured in the controller spec.
(Learn about [updating a deployment](/docs/concepts/workloads/controllers/deployment/#updating-a-deployment).)
against the disruption budget, but workload resources (such as Deployment and StatefulSet)
are not limited by PDBs when doing rolling upgrades. Instead, the handling of failures
during application updates is configured in the spec for the specific workload resource.

When a pod is evicted using the eviction API, it is gracefully terminated (see
`terminationGracePeriodSeconds` in [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).)
When a pod is evicted using the eviction API, it is gracefully
[terminated](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination), honoring the
`terminationGracePeriodSeconds` setting in its [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

## PDB Example
## PodDisruptionBudget example {#pdb-example}

Consider a cluster with 3 nodes, `node-1` through `node-3`.
The cluster is running several applications. One of them has 3 replicas initially called

@@ -272,4 +270,6 @@ the nodes in your cluster, such as a node or system software upgrade, here are s

* Learn more about [draining nodes](/docs/tasks/administer-cluster/safely-drain-node/)

* Learn about [updating a deployment](/docs/concepts/workloads/controllers/deployment/#updating-a-deployment)
  including steps to maintain its availability during the rollout.

@@ -6,16 +6,60 @@ weight: 30

<!-- overview -->

{{< comment >}}Updated: 4/14/2015{{< /comment >}}
{{< comment >}}Edited and moved to Concepts section: 2/2/17{{< /comment >}}

This page describes the lifecycle of a Pod.
This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting
in the `Pending` [phase](#pod-phase), moving through `Running` if at least one
of its primary containers starts OK, and then through either the `Succeeded` or
`Failed` phases depending on whether any container in the Pod terminated in failure.

Whilst a Pod is running, the kubelet is able to restart containers to handle some
kinds of faults. Within a Pod, Kubernetes tracks different container
[states](#container-states) and determines what action to take to make the Pod healthy again.

In the Kubernetes API, Pods have both a specification and an actual status. The
status for a Pod object consists of a set of [Pod conditions](#pod-conditions).
You can also inject [custom readiness information](#pod-readiness-gate) into the
condition data for a Pod, if that is useful to your application.

Pods are only [scheduled](/docs/concepts/scheduling-eviction/) once in their lifetime.
Once a Pod is scheduled (assigned) to a Node, the Pod runs on that Node until it stops
or is [terminated](#pod-termination).

<!-- body -->

## Pod lifetime

Like individual application containers, Pods are considered to be relatively
ephemeral (rather than durable) entities. Pods are created, assigned a unique
ID ([UID](/docs/concepts/overview/working-with-objects/names/#uids)), and scheduled
to nodes where they remain until termination (according to restart policy) or
deletion.
If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that node
are [scheduled for deletion](#pod-garbage-collection) after a timeout period.

Pods do not, by themselves, self-heal. If a Pod is scheduled to a
{{< glossary_tooltip text="node" term_id="node" >}} that then fails,
or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't
survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a
higher-level abstraction, called a
{{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of
managing the relatively disposable Pod instances.

A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead,
that Pod can be replaced by a new, near-identical Pod, with even the same name if
desired, but with a different UID.

When something is said to have the same lifetime as a Pod, such as a
{{< glossary_tooltip term_id="volume" text="volume" >}},
that means that the thing exists as long as that specific Pod (with that exact UID)
exists. If that Pod is deleted for any reason, and even if an identical replacement
is created, the related thing (a volume, in this example) is also destroyed and
created anew.

{{< figure src="/images/docs/pod.svg" title="Pod diagram" width="50%" >}}

*A multi-container Pod that contains a file puller and a
web server that uses a persistent volume for shared storage between the containers.*

## Pod phase

A Pod's `status` field is a

@@ -24,7 +68,7 @@ object, which has a `phase` field.

The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle. The phase is not intended to be a comprehensive rollup of observations
of Container or Pod state, nor is it intended to be a comprehensive state machine.
of container or Pod state, nor is it intended to be a comprehensive state machine.

The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that

@@ -34,188 +78,106 @@ Here are the possible values for `phase`:

Value | Description
:-----|:-----------
`Pending` | The Pod has been accepted by the Kubernetes system, but one or more of the Container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
`Running` | The Pod has been bound to a node, and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting.
`Succeeded` | All Containers in the Pod have terminated in success, and will not be restarted.
`Failed` | All Containers in the Pod have terminated, and at least one Container has terminated in failure. That is, the Container either exited with non-zero status or was terminated by the system.
`Unknown` | For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod.
`Pending` | The Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the network.
`Running` | The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
`Succeeded` | All containers in the Pod have terminated in success, and will not be restarted.
`Failed` | All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
`Unknown` | For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.

If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.

## Container states

As well as the [phase](#pod-phase) of the Pod overall, Kubernetes tracks the state of
each container inside a Pod. You can use
[container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/) to
trigger events to run at certain points in a container's lifecycle.

Once the {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}
assigns a Pod to a Node, the kubelet starts creating containers for that Pod
using a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
There are three possible container states: `Waiting`, `Running`, and `Terminated`.

To check the state of a Pod's containers, you can use
`kubectl describe pod <name-of-pod>`. The output shows the state for each container
within that Pod.

Each state has a specific meaning:

### `Waiting` {#container-state-waiting}

If a container is not in either the `Running` or `Terminated` state, it is `Waiting`.
A container in the `Waiting` state is still running the operations it requires in
order to complete start up: for example, pulling the container image from a container
image registry, or applying {{< glossary_tooltip text="Secret" term_id="secret" >}}
data.
When you use `kubectl` to query a Pod with a container that is `Waiting`, you also see
a Reason field to summarize why the container is in that state.

### `Running` {#container-state-running}

The `Running` status indicates that a container is executing without issues. If there
was a `postStart` hook configured, it has already executed and finished. When you use
`kubectl` to query a Pod with a container that is `Running`, you also see information
about when the container entered the `Running` state.

### `Terminated` {#container-state-terminated}

A container in the `Terminated` state has begun execution and has then either run to
completion or has failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, an exit code, and the start and
finish time for that container's period of execution.

If a container has a `preStop` hook configured, that runs before the container enters
the `Terminated` state.

## Container restart policy {#restart-policy}

The `spec` of a Pod has a `restartPolicy` field with possible values `Always`, `OnFailure`,
and `Never`. The default value is `Always`.

The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
refers to restarts of the containers by the kubelet on the same node. After containers
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
40s, …), that is capped at five minutes. Once a container has executed
for 10 minutes without any problems, the kubelet resets the restart backoff timer for
that container.
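
In a manifest, the policy sits at the Pod level, alongside the container list; here is a minimal sketch in which the single container deliberately fails so that the kubelet restarts it with back-off:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-example
spec:
  restartPolicy: OnFailure   # applies to every container in this Pod
  containers:
  - name: worker
    image: busybox
    # exits with an error, so the kubelet restarts it (with increasing delay)
    command: ['sh', '-c', 'echo working; exit 1']
```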

## Pod conditions

A Pod has a PodStatus, which has an array of
[PodConditions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podcondition-v1-core)
through which the Pod has or has not passed. Each element of the PodCondition
array has six possible fields:
through which the Pod has or has not passed:

* The `lastProbeTime` field provides a timestamp for when the Pod condition
  was last probed.
* `PodScheduled`: the Pod has been scheduled to a node.
* `ContainersReady`: all containers in the Pod are ready.
* `Initialized`: all [init containers](/docs/concepts/workloads/pods/init-containers/)
  have started successfully.
* `Ready`: the Pod is able to serve requests and should be added to the load
  balancing pools of all matching Services.

* The `lastTransitionTime` field provides a timestamp for when the Pod
  last transitioned from one status to another.

* The `message` field is a human-readable message indicating details
  about the transition.

* The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.

* The `status` field is a string, with possible values "`True`", "`False`", and "`Unknown`".

* The `type` field is a string with the following possible values:

  * `PodScheduled`: the Pod has been scheduled to a node;
  * `Ready`: the Pod is able to serve requests and should be added to the load
    balancing pools of all matching Services;
  * `Initialized`: all [init containers](/docs/concepts/workloads/pods/init-containers)
    have started successfully;
  * `ContainersReady`: all containers in the Pod are ready.

Field name           | Description
:--------------------|:-----------
`type`               | Name of this Pod condition.
`status`             | Indicates whether that condition is applicable, with possible values "`True`", "`False`", or "`Unknown`".
`lastProbeTime`      | Timestamp of when the Pod condition was last probed.
`lastTransitionTime` | Timestamp for when the Pod last transitioned from one status to another.
`reason`             | Machine-readable, UpperCamelCase text indicating the reason for the condition's last transition.
`message`            | Human-readable message indicating details about the last status transition.
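
For example, the `status.conditions` of a healthy, running Pod (as shown by `kubectl get pod <name> -o yaml`) might contain entries like this illustrative excerpt (the timestamps are made up):

```yaml
status:
  conditions:
  - type: PodScheduled
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2020-07-20T08:59:58Z"
  - type: Initialized
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2020-07-20T09:00:00Z"
  - type: ContainersReady
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2020-07-20T09:00:05Z"
  - type: Ready
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2020-07-20T09:00:05Z"
```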

## Container probes

A [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core) implemented by
the Container. There are three types of handlers:

* [ExecAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#execaction-v1-core):
  Executes a specified command inside the Container. The diagnostic
  is considered successful if the command exits with a status code of 0.

* [TCPSocketAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#tcpsocketaction-v1-core):
  Performs a TCP check against the Container's IP address on
  a specified port. The diagnostic is considered successful if the port is open.

* [HTTPGetAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core):
  Performs an HTTP Get request against the Container's IP
  address on a specified port and path. The diagnostic is considered successful
  if the response has a status code greater than or equal to 200 and less than 400.

Each probe has one of three results:

* Success: The Container passed the diagnostic.
* Failure: The Container failed the diagnostic.
* Unknown: The diagnostic failed, so no action should be taken.

The kubelet can optionally perform and react to three kinds of probes on running
Containers:

* `livenessProbe`: Indicates whether the Container is running. If
  the liveness probe fails, the kubelet kills the Container, and the Container
  is subjected to its [restart policy](#restart-policy). If a Container does not
  provide a liveness probe, the default state is `Success`.

* `readinessProbe`: Indicates whether the Container is ready to service requests.
  If the readiness probe fails, the endpoints controller removes the Pod's IP
  address from the endpoints of all Services that match the Pod. The default
  state of readiness before the initial delay is `Failure`. If a Container does
  not provide a readiness probe, the default state is `Success`.

* `startupProbe`: Indicates whether the application within the Container is started.
  All other probes are disabled if a startup probe is provided, until it succeeds.
  If the startup probe fails, the kubelet kills the Container, and the Container
  is subjected to its [restart policy](#restart-policy). If a Container does not
  provide a startup probe, the default state is `Success`.

### When should you use a liveness probe?

{{< feature-state for_k8s_version="v1.0" state="stable" >}}

If the process in your Container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's `restartPolicy`.

If you'd like your Container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.

### When should you use a readiness probe?

{{< feature-state for_k8s_version="v1.0" state="stable" >}}

If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.
If your Container needs to work on loading large data, configuration files, or migrations during startup, specify a readiness probe.

If you want your Container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.

Note that if you just want to be able to drain requests when the Pod is deleted,
you do not necessarily need a readiness probe; on deletion, the Pod automatically
puts itself into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the Containers in the Pod
to stop.

### When should you use a startup probe?

{{< feature-state for_k8s_version="v1.16" state="alpha" >}}

If your Container usually starts in more than `initialDelaySeconds + failureThreshold × periodSeconds`, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for `periodSeconds` is 30s.
You should then set its `failureThreshold` high enough to allow the Container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.

For more information about how to set up a liveness, readiness, or startup probe, see
[Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).

## Pod and Container status

For detailed information about Pod and Container status, see
[PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)
and
[ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core).
Note that the information reported as Pod status depends on the current
[ContainerState](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core).

## Container States

Once a Pod is assigned to a node by the scheduler, the kubelet starts creating containers using the container runtime. There are three possible states of containers: Waiting, Running and Terminated. To check the state of a container, you can use `kubectl describe pod [POD_NAME]`. State is displayed for each container within that Pod.

* `Waiting`: Default state of a container. If a container is not in either the Running or Terminated state, it is in the Waiting state. A container in the Waiting state still runs its required operations, like pulling images, applying Secrets, etc. Along with this state, a message and reason about the state are displayed to provide more information.

  ```yaml
  ...
  State: Waiting
  Reason: ErrImagePull
  ...
  ```

* `Running`: Indicates that the container is executing without issues. The `postStart` hook (if any) is executed prior to the container entering a Running state. This state also displays the time when the container entered the Running state.

  ```yaml
  ...
  State: Running
  Started: Wed, 30 Jan 2019 16:46:38 +0530
  ...
  ```

* `Terminated`: Indicates that the container completed its execution and has stopped running. A container enters into this when it has successfully completed execution or when it has failed for some reason. Regardless, a reason and exit code is displayed, as well as the container's start and finish time. Before a container enters into Terminated, the `preStop` hook (if any) is executed.

  ```yaml
  ...
  State: Terminated
  Reason: Completed
  Exit Code: 0
  Started: Wed, 30 Jan 2019 11:45:26 +0530
  Finished: Wed, 30 Jan 2019 11:45:26 +0530
  ...
  ```

## Pod readiness {#pod-readiness-gate}
### Pod readiness {#pod-readiness-gate}

{{< feature-state for_k8s_version="v1.14" state="stable" >}}

Your application can inject extra feedback or signals into PodStatus:
_Pod readiness_. To use this, set `readinessGates` in the PodSpec to specify
a list of additional conditions that the kubelet evaluates for Pod readiness.
_Pod readiness_. To use this, set `readinessGates` in the Pod's `spec` to
specify a list of additional conditions that the kubelet evaluates for Pod readiness.

Readiness gates are determined by the current state of `status.condition`
fields for the Pod. If Kubernetes cannot find such a
condition in the `status.conditions` field of a Pod, the status of the condition
fields for the Pod. If Kubernetes cannot find such a condition in the
`status.conditions` field of a Pod, the status of the condition
is defaulted to "`False`".

Here is an example:

@@ -258,152 +220,226 @@ For a Pod that uses custom conditions, that Pod is evaluated to be ready **only**
when both the following statements apply:

* All containers in the Pod are ready.
* All conditions specified in `ReadinessGates` are `True`.
* All conditions specified in `readinessGates` are `True`.

When a Pod's containers are Ready but at least one custom condition is missing or
`False`, the kubelet sets the Pod's condition to `ContainersReady`.
`False`, the kubelet sets the Pod's [condition](#pod-condition) to `ContainersReady`.

## Restart policy
## Container probes

A PodSpec has a `restartPolicy` field with possible values Always, OnFailure,
and Never. The default value is Always.
`restartPolicy` applies to all Containers in the Pod. `restartPolicy` only
refers to restarts of the Containers by the kubelet on the same node. Exited
Containers that are restarted by the kubelet are restarted with an exponential
back-off delay (10s, 20s, 40s ...) capped at five minutes, and is reset after ten
minutes of successful execution. As discussed in the
[Pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof),
once bound to a node, a Pod will never be rebound to another node.
A [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core) implemented by
the container. There are three types of handlers:

* [ExecAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#execaction-v1-core):
  Executes a specified command inside the container. The diagnostic
  is considered successful if the command exits with a status code of 0.

## Pod lifetime
* [TCPSocketAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#tcpsocketaction-v1-core):
  Performs a TCP check against the Pod's IP address on
  a specified port. The diagnostic is considered successful if the port is open.

In general, Pods remain until a human or
* [HTTPGetAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core):
  Performs an HTTP `GET` request against the Pod's IP
  address on a specified port and path. The diagnostic is considered successful
  if the response has a status code greater than or equal to 200 and less than 400.

Each probe has one of three results:

* `Success`: The container passed the diagnostic.
* `Failure`: The container failed the diagnostic.
* `Unknown`: The diagnostic failed, so no action should be taken.

The kubelet can optionally perform and react to three kinds of probes on running
containers:

* `livenessProbe`: Indicates whether the container is running. If
  the liveness probe fails, the kubelet kills the container, and the container
  is subjected to its [restart policy](#restart-policy). If a Container does not
  provide a liveness probe, the default state is `Success`.

* `readinessProbe`: Indicates whether the container is ready to respond to requests.
  If the readiness probe fails, the endpoints controller removes the Pod's IP
  address from the endpoints of all Services that match the Pod. The default
  state of readiness before the initial delay is `Failure`. If a Container does
  not provide a readiness probe, the default state is `Success`.

* `startupProbe`: Indicates whether the application within the container is started.
  All other probes are disabled if a startup probe is provided, until it succeeds.
  If the startup probe fails, the kubelet kills the container, and the container
  is subjected to its [restart policy](#restart-policy). If a Container does not
  provide a startup probe, the default state is `Success`.

For more information about how to set up a liveness, readiness, or startup probe,
see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
|
||||
|
||||
### When should you use a liveness probe?

{{< feature-state for_k8s_version="v1.0" state="stable" >}}

If the process in your container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's `restartPolicy`.

If you'd like your container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.
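For instance, here is a minimal sketch of that combination (the Pod name, image, and the file that the probe checks are illustrative, not from this page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # illustrative name
spec:
  restartPolicy: Always          # the container is restarted when the probe fails
  containers:
  - name: app
    image: busybox               # illustrative image
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        # the probe succeeds as long as /tmp/healthy exists
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```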
### When should you use a readiness probe?

{{< feature-state for_k8s_version="v1.0" state="stable" >}}

If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.
If your container needs to load large data or configuration files, or to run
migrations during startup, specify a readiness probe.

If you want your container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.

{{< note >}}
If you just want to be able to drain requests when the Pod is deleted, you do not
necessarily need a readiness probe; on deletion, the Pod automatically puts itself
into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the containers in the Pod
to stop.
{{< /note >}}
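As a sketch, a readiness probe that checks a dedicated readiness endpoint might look like this (the `/ready` path, port, and names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                 # illustrative image
    readinessProbe:
      httpGet:
        path: /ready             # hypothetical readiness-specific endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```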
### When should you use a startup probe?

{{< feature-state for_k8s_version="v1.16" state="alpha" >}}

Startup probes are useful for Pods that have containers that take a long time to
come into service. Rather than setting a long liveness interval, you can configure
a separate probe for the container as it starts up, allowing a startup time longer
than the liveness interval would allow.

If your container usually takes longer than
`initialDelaySeconds + failureThreshold × periodSeconds` to start, you should specify a
startup probe that checks the same endpoint as the liveness probe. The default for
`periodSeconds` is 30s. You should then set its `failureThreshold` high enough to
allow the container to start, without changing the default values of the liveness
probe. This helps to protect against deadlocks.
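A sketch of that arrangement, assuming the application exposes the same `/healthz` endpoint for both probes (the names, image, and thresholds are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-demo                # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/slow-app   # hypothetical image
    ports:
    - containerPort: 8080
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30             # allows up to 30 × 10s = 300s of startup time
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10                # normal liveness cadence once startup succeeds
```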
## Termination of Pods {#pod-termination}

Because Pods represent processes running on nodes in the cluster, it is important to
allow those processes to gracefully terminate when they are no longer needed (rather
than being abruptly stopped with a `KILL` signal and having no chance to clean up).

The design aim is for you to be able to request deletion and know when processes
terminate, but also be able to ensure that deletes eventually complete.
When you request deletion of a Pod, the cluster records and tracks the intended grace period
before the Pod is allowed to be forcefully killed. With that forceful shutdown tracking in
place, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} attempts graceful
shutdown.

Typically, the container runtime sends a TERM signal to the main process in each
container. Once the grace period has expired, the KILL signal is sent to any remaining
processes, and the Pod is then deleted from the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}. If the kubelet or the
container runtime's management service is restarted while waiting for processes to terminate, the
cluster retries from the start including the full original grace period.
An example flow:

1. You use the `kubectl` tool to manually delete a specific Pod, with the default grace period
   (30 seconds).
1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead"
   along with the grace period.
   If you use `kubectl describe` to check on the Pod you're deleting, that Pod shows up as
   "Terminating".
   On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked
   as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod
   shutdown process.
   1. If one of the Pod's containers has defined a `preStop`
      [hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), the kubelet
      runs that hook inside of the container. If the `preStop` hook is still running after the
      grace period expires, the kubelet requests a small, one-off grace period extension of 2
      seconds.
      {{< note >}}
      If the `preStop` hook needs longer to complete than the default grace period allows,
      you must modify `terminationGracePeriodSeconds` to suit this (see the sketch after this list).
      {{< /note >}}
   1. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each
      container.
      {{< note >}}
      The containers in the Pod receive the TERM signal at different times and in an arbitrary
      order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize.
      {{< /note >}}
1. At the same time as the kubelet is starting graceful shutdown, the control plane removes that
   shutting-down Pod from Endpoints (and, if enabled, EndpointSlice) objects where these represent
   a {{< glossary_tooltip term_id="service" text="Service" >}} with a configured
   {{< glossary_tooltip text="selector" term_id="selector" >}}.
   {{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and other workload resources
   no longer treat the shutting-down Pod as a valid, in-service replica. Pods that shut down slowly
   cannot continue to serve traffic as load balancers (like the service proxy) remove the Pod from
   the list of endpoints as soon as the termination grace period _begins_.
1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends
   `SIGKILL` to any processes still running in any container in the Pod.
   The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
1. The kubelet triggers forcible removal of the Pod object from the API server, by setting the grace
   period to 0 (immediate deletion).
1. The API server deletes the Pod's API object, which is then no longer visible from any client.
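As a minimal sketch of the `preStop` and grace-period interaction described above (the hook command and the 60-second budget are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo        # hypothetical name
spec:
  terminationGracePeriodSeconds: 60   # must cover the preStop hook plus TERM handling
  containers:
  - name: web
    image: nginx                      # illustrative image
    lifecycle:
      preStop:
        exec:
          # runs before the TERM signal is sent to process 1
          command: ["/bin/sh", "-c", "sleep 5 && nginx -s quit"]
```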
### Forced Pod termination {#pod-termination-forced}

{{< caution >}}
Forced deletions can be potentially disruptive for some workloads and their Pods.
{{< /caution >}}

By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports
the `--grace-period=<seconds>` option which allows you to override the default and specify your
own value.

Setting the grace period to `0` forcibly and immediately deletes the Pod from the API
server. If the Pod was still running on a node, that forcible deletion triggers the kubelet to
begin immediate cleanup.

{{< note >}}
You must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions.
{{< /note >}}

When a force deletion is performed, the API server does not wait for confirmation
from the kubelet that the Pod has been terminated on the node it was running on. It
removes the Pod in the API immediately so a new Pod can be created with the same
name. On the node, Pods that are set to terminate immediately will still be given
a small grace period before being force killed.
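For example, a forced deletion looks like this (substitute your own Pod name):

```shell
kubectl delete pod <pod-name> --grace-period=0 --force
```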
If you need to force-delete Pods that are part of a StatefulSet, refer to the task
documentation for
[deleting Pods from a StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/).
### Garbage collection of failed Pods {#pod-garbage-collection}

For failed Pods, the API objects remain in the cluster's API until a human or
{{< glossary_tooltip term_id="controller" text="controller" >}} process
explicitly removes them.

The control plane cleans up terminated Pods (with a phase of `Succeeded` or
`Failed`), when the number of Pods exceeds the configured threshold
(determined by `terminated-pod-gc-threshold` in the kube-controller-manager).
This avoids a resource leak as Pods are created and terminated over time.
There are different kinds of resources for creating Pods:

- Use a {{< glossary_tooltip term_id="deployment" >}},
  {{< glossary_tooltip term_id="replica-set" >}} or {{< glossary_tooltip term_id="statefulset" >}}
  for Pods that are not expected to terminate, for example, web servers.

- Use a {{< glossary_tooltip term_id="job" >}}
  for Pods that are expected to terminate once their work is complete;
  for example, batch computations. Jobs are appropriate only for Pods with
  `restartPolicy` equal to OnFailure or Never.

- Use a {{< glossary_tooltip term_id="daemonset" >}}
  for Pods that need to run one per eligible node.

All workload resources contain a PodSpec. It is recommended to create the
appropriate workload resource and let the resource's controller create Pods
for you, rather than directly create Pods yourself.

If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
## Examples

### Advanced liveness probe example

Liveness probes are executed by the kubelet, so all requests are made in the
kubelet network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: k8s.gcr.io/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
```
### Example states

* Pod is running and has one Container. Container exits with success.
  * Log completion event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Pod `phase` becomes Succeeded.
    * Never: Pod `phase` becomes Succeeded.

* Pod is running and has one Container. Container exits with failure.
  * Log failure event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Pod `phase` becomes Failed.

* Pod is running and has two Containers. Container 1 exits with failure.
  * Log failure event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Do not restart Container; Pod `phase` stays Running.
  * If Container 1 is not running, and Container 2 exits:
    * Log failure event.
    * If `restartPolicy` is:
      * Always: Restart Container; Pod `phase` stays Running.
      * OnFailure: Restart Container; Pod `phase` stays Running.
      * Never: Pod `phase` becomes Failed.

* Pod is running and has one Container. Container runs out of memory.
  * Container terminates in failure.
  * Log OOM event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Log failure event; Pod `phase` becomes Failed.

* Pod is running, and a disk dies.
  * Kill all Containers.
  * Log appropriate event.
  * Pod `phase` becomes Failed.
  * If running under a controller, Pod is recreated elsewhere.

* Pod is running, and its node is segmented out.
  * Node controller waits for timeout.
  * Node controller sets Pod `phase` to Failed.
  * If running under a controller, Pod is recreated elsewhere.
## {{% heading "whatsnext" %}}

* Get hands-on experience
  [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

* Get hands-on experience
  [configuring Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).

* Learn more about [container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).

* For detailed information about Pod / Container status in the API, see [PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)
  and
  [ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core).

@ -1,123 +0,0 @@
---
reviewers:
- erictune
title: Pod Overview
content_type: concept
weight: 10
card:
  name: concepts
  weight: 60
---

<!-- overview -->
This page provides an overview of `Pod`, the smallest deployable object in the Kubernetes object model.

<!-- body -->
## Understanding Pods

A *Pod* is the basic execution unit of a Kubernetes application--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your {{< glossary_tooltip term_id="cluster" text="cluster" >}}.

A Pod encapsulates an application's container (or, in some cases, multiple containers), storage resources, a unique network identity (IP address), as well as options that govern how the container(s) should run. A Pod represents a unit of deployment: *a single instance of an application in Kubernetes*, which might consist of either a single {{< glossary_tooltip text="container" term_id="container" >}} or a small number of containers that are tightly coupled and that share resources.

[Docker](https://www.docker.com) is the most common container runtime used in a Kubernetes Pod, but Pods support other [container runtimes](/docs/setup/production-environment/container-runtimes/) as well.

Pods in a Kubernetes cluster can be used in two main ways:

* **Pods that run a single container**. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.
* **Pods that run multiple containers that need to work together**. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service--one container serving files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.

Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (to provide more overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically referred to as _replication_.
Replicated Pods are usually created and managed as a group by a workload resource and its {{< glossary_tooltip text="_controller_" term_id="controller" >}}.
See [Pods and controllers](#pods-and-controllers) for more information on how Kubernetes uses controllers to implement workload scaling and healing.
### How Pods manage multiple containers

Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.

Note that grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. For example, you might have a container that acts as a web server for files in a shared volume, and a separate "sidecar" container that updates those files from a remote source, as in the following diagram:

{{< figure src="/images/docs/pod.svg" alt="example pod diagram" width="50%" >}}

Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}} as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}. Init containers run and complete before the app containers are started.

Pods provide two kinds of shared resources for their constituent containers: *networking* and *storage*.

#### Networking

Each Pod is assigned a unique IP address for each address family. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers *inside a Pod* can communicate with one another using `localhost`. When containers in a Pod communicate with entities *outside the Pod*, they must coordinate how they use the shared network resources (such as ports).

#### Storage

A Pod can specify a set of shared storage {{< glossary_tooltip text="volumes" term_id="volume" >}}. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See [Volumes](/docs/concepts/storage/volumes/) for more information on how Kubernetes implements shared storage in a Pod.
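As a sketch of both kinds of sharing, the following Pod (all names are illustrative) runs a web server and a sidecar that mount the same `emptyDir` volume, so files written by the sidecar are served by the web server over the Pod's shared network identity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers             # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html   # serves files written by the sidecar
  - name: content-sidecar
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello from the sidecar' > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```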
## Working with Pods

You'll rarely create individual Pods directly in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a {{< glossary_tooltip text="_controller_" term_id="controller" >}}), it is scheduled to run on a {{< glossary_tooltip term_id="node" >}} in your cluster. The Pod remains on that node until the process is terminated, the Pod object is deleted, the Pod is *evicted* for lack of resources, or the node fails.

{{< note >}}
Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running a container. A Pod persists until it is deleted.
{{< /note >}}

Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a controller, that handles the work of managing the relatively disposable Pod instances. Thus, while it is possible to use a Pod directly, it's far more common in Kubernetes to manage your Pods using a controller.

### Pods and controllers

You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles replication and rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller notices that Pods on that Node have stopped working and creates a replacement Pod. The scheduler places the replacement Pod onto a healthy Node.

Here are some examples of workload resources that manage one or more Pods:

* {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
* {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}
* {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}
## Pod templates

Controllers for {{< glossary_tooltip text="workload" term_id="workload" >}} resources create Pods
from a pod template and manage those Pods on your behalf.

PodTemplates are specifications for creating Pods, and are included in workload resources such as
[Deployments](/docs/concepts/workloads/controllers/deployment/),
[Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/).

Each controller for a workload resource uses the PodTemplate inside the workload object to make actual Pods. The PodTemplate is part of the desired state of whatever workload resource you used to run your app.

The sample below is a manifest for a simple Job with a `template` that starts one container. The container in that Pod prints a message then pauses.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here
```
Modifying the pod template or switching to a new pod template has no effect on the Pods that already exist. Pods do not receive template updates directly; instead, a new Pod is created to match the revised pod template.

For example, a Deployment controller ensures that the running Pods match the current pod template. If the template is updated, the controller has to remove the existing Pods and create new Pods based on the updated template. Each workload controller implements its own rules for handling changes to the Pod template.

On Nodes, the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} does not directly observe or manage any of the details around pod templates and updates; those details are abstracted away. That abstraction and separation of concerns simplifies system semantics, and makes it feasible to extend the cluster's behavior without changing existing code.

## {{% heading "whatsnext" %}}

* Learn more about [Pods](/docs/concepts/workloads/pods/pod/)
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container
* Learn more about Pod behavior:
  * [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods)
  * [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/)

@ -1,7 +1,7 @@
---
title: Pod Topology Spread Constraints
content_type: concept
weight: 40
---

<!-- overview -->
@ -1,209 +0,0 @@
---
reviewers:
title: Pods
content_type: concept
weight: 20
---

<!-- overview -->

_Pods_ are the smallest deployable units of computing that can be created and
managed in Kubernetes.

<!-- body -->
## What is a Pod?

A _Pod_ (as in a pod of whales or pea pod) is a group of one or more
{{< glossary_tooltip text="containers" term_id="container" >}} (such as
Docker containers), with shared storage/network, and a specification
for how to run the containers. A Pod's contents are always co-located and
co-scheduled, and run in a shared context. A Pod models an
application-specific "logical host" - it contains one or more application
containers which are relatively tightly coupled — in a pre-container
world, being executed on the same physical or virtual machine would mean being
executed on the same logical host.

While Kubernetes supports more container runtimes than just Docker, Docker is
the most commonly known runtime, and it helps to describe Pods in Docker terms.

The shared context of a Pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation - the same things that isolate a Docker
container. Within a Pod's context, the individual applications may have
further sub-isolations applied.
Containers within a Pod share an IP address and port space, and
can find each other via `localhost`. They can also communicate with each
other using standard inter-process communications like SystemV semaphores or
POSIX shared memory. Containers in different Pods have distinct IP addresses
and cannot communicate by IPC without
[special configuration](/docs/concepts/policy/pod-security-policy/).
These containers usually communicate with each other via Pod IP addresses.

Applications within a Pod also have access to shared {{< glossary_tooltip text="volumes" term_id="volume" >}}, which are defined
as part of a Pod and are made available to be mounted into each application's
filesystem.

In terms of [Docker](https://www.docker.com/) constructs, a Pod is modelled as
a group of Docker containers with shared namespaces and shared filesystem
volumes.
Like individual application containers, Pods are considered to be relatively
ephemeral (rather than durable) entities. As discussed in
[pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), Pods are created, assigned a unique ID (UID), and
scheduled to nodes where they remain until termination (according to restart
policy) or deletion. If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that node are
scheduled for deletion, after a timeout period. A given Pod (as defined by a UID) is not
"rescheduled" to a new node; instead, it can be replaced by an identical Pod,
with even the same name if desired, but with a new UID (see [replication
controller](/docs/concepts/workloads/controllers/replicationcontroller/) for more details).

When something is said to have the same lifetime as a Pod, such as a volume,
that means that it exists as long as that Pod (with that UID) exists. If that
Pod is deleted for any reason, even if an identical replacement is created, the
related thing (e.g. volume) is also destroyed and created anew.

{{< figure src="/images/docs/pod.svg" title="Pod diagram" width="50%" >}}

*A multi-container Pod that contains a file puller and a
web server that uses a persistent volume for shared storage between the containers.*
## Motivation for Pods

### Management

Pods are a model of the pattern of multiple cooperating processes which form a
cohesive unit of service. They simplify application deployment and management
by providing a higher-level abstraction than the set of their constituent
applications. Pods serve as units of deployment, horizontal scaling, and
replication. Colocation (co-scheduling), shared fate (e.g. termination),
coordinated replication, resource sharing, and dependency management are
handled automatically for containers in a Pod.
### Resource sharing and communication

Pods enable data sharing and communication among their constituents.

The applications in a Pod all use the same network namespace (same IP and port
space), and can thus "find" each other and communicate using `localhost`.
Because of this, applications in a Pod must coordinate their usage of ports.
Each Pod has an IP address in a flat shared networking space that has full
communication with other physical computers and Pods across the network.

Containers within the Pod see the system hostname as being the same as the configured
`name` for the Pod. There's more about this in the [networking](/docs/concepts/cluster-administration/networking/)
section.

In addition to defining the application containers that run in the Pod, the Pod
specifies a set of shared storage volumes. Volumes enable data to survive
container restarts and to be shared among the applications within the Pod.
## Uses of pods

Pods can be used to host vertically integrated application stacks (e.g. LAMP),
but their primary motivation is to support co-located, co-managed helper
programs, such as:

* content management systems, file and data loaders, local cache managers, etc.
* log and checkpoint backup, compression, rotation, snapshotting, etc.
* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
* proxies, bridges, and adapters
* controllers, managers, configurators, and updaters

Individual Pods are not intended to run multiple instances of the same
application, in general.

For a longer explanation, see [The Distributed System ToolKit: Patterns for
Composite
Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns).
## Alternatives considered

_Why not just run multiple programs in a single (Docker) container?_

1. Transparency. Making the containers within the Pod visible to the
   infrastructure enables the infrastructure to provide services to those
   containers, such as process management and resource monitoring. This
   facilitates a number of conveniences for users.
1. Decoupling software dependencies. The individual containers may be
   versioned, rebuilt and redeployed independently. Kubernetes may even support
   live updates of individual containers someday.
1. Ease of use. Users don't need to run their own process managers, worry about
   signal and exit-code propagation, etc.
1. Efficiency. Because the infrastructure takes on more responsibility,
   containers can be lighter weight.

_Why not support affinity-based co-scheduling of containers?_

That approach would provide co-location, but would not provide most of the
benefits of Pods, such as resource sharing, IPC, guaranteed fate sharing, and
simplified management.
## Durability of pods (or lack thereof)

Pods aren't intended to be treated as durable entities. They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.

In general, users shouldn't need to create Pods directly. They should almost
always use controllers even for singletons, for example,
[Deployments](/docs/concepts/workloads/controllers/deployment/).
Controllers provide self-healing with a cluster scope, as well as replication
and rollout management.
Controllers like [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
can also provide support to stateful Pods.

The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema), and [Tupperware](https://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).

Pod is exposed as a primitive in order to facilitate:

* scheduler and controller pluggability
* support for pod-level operations without the need to "proxy" them via controller APIs
* decoupling of Pod lifetime from controller lifetime, such as for bootstrapping
* decoupling of controllers and services — the endpoint controller just watches Pods
* clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller"
* high-availability applications, which will expect Pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions or image prefetching.
## Termination of Pods

Because Pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.

An example flow:

1. User sends command to delete Pod, with default grace period (30s)
1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period.
1. Pod shows up as "Terminating" when listed in client commands
1. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the Pod shutdown process.
1. If one of the Pod's containers has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the container. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) one-time extended grace period. You must modify `terminationGracePeriodSeconds` if the `preStop` hook needs longer to complete.
1. The container is sent the TERM signal. Note that not all containers in the Pod will receive the TERM signal at the same time and may each require a `preStop` hook if the order in which they shut down matters.
1. (simultaneous with 3) Pod is removed from the endpoints list for the service, and is no longer considered part of the set of running Pods for replication controllers. Pods that shut down slowly cannot continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
1. When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
1. The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client.

By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=<seconds>` option which allows a user to override the default and specify their own value. The value `0` [force deletes](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) the Pod.
You must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions.
### Force deletion of pods

Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately. When a force deletion is performed, the API server does not wait for confirmation from the kubelet that the Pod has been terminated on the node it was running on. It removes the Pod in the API immediately so a new Pod can be created with the same name. On the node, Pods that are set to terminate immediately will still be given a small grace period before being force killed.

Force deletions can be potentially dangerous for some Pods and should be performed with caution. In case of StatefulSet Pods, please refer to the task documentation for [deleting Pods from a StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/).
## Privileged mode for pod containers

Any container in a Pod can enable privileged mode, using the `privileged` flag on the [security context](/docs/tasks/configure-pod-container/security-context/) of the container spec. This is useful for containers that want to use Linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate Pods that don't need to be compiled into the kubelet.

{{< note >}}
Your container runtime must support the concept of a privileged container for this setting to be relevant.
{{< /note >}}
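A minimal sketch of enabling the flag (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo        # illustrative name
spec:
  containers:
  - name: tool
    image: busybox             # illustrative image
    command: ["sleep", "3600"]
    securityContext:
      privileged: true         # grants near-host-level privileges to this container
```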
## API Object

Pod is a top-level resource in the Kubernetes REST API.
The [Pod API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) definition
describes the object in detail.
When creating the manifest for a Pod object, make sure the name specified is a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
@ -1,7 +1,7 @@
---
reviewers:
- jessfraz
title: Pod Presets
content_type: concept
weight: 50
---
@ -32,20 +32,20 @@ specific service do not need to know all the details about that service.

In order to use Pod presets in your cluster you must ensure the following:

1. You have enabled the API type `settings.k8s.io/v1alpha1/podpreset`. For
   example, this can be done by including `settings.k8s.io/v1alpha1=true` in
   the `--runtime-config` option for the API server. In minikube add this flag
   `--extra-config=apiserver.runtime-config=settings.k8s.io/v1alpha1=true` while
   starting the cluster.
1. You have enabled the admission controller named `PodPreset`. One way of doing this
   is to include `PodPreset` in the `--enable-admission-plugins` option value specified
   for the API server. For example, if you use Minikube, add this flag:

   ```shell
   --extra-config=apiserver.enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset
   ```

   while starting your cluster.
## How it works

@ -64,31 +64,28 @@ When a pod creation request occurs, the system does the following:
modified by a `PodPreset`. The annotation is of the form
`podpreset.admission.kubernetes.io/podpreset-<pod-preset name>: "<resource version>"`.

Each Pod can be matched by zero or more PodPresets; and each PodPreset can be
applied to zero or more Pods. When a PodPreset is applied to one or more
Pods, Kubernetes modifies the Pod Spec. For changes to `env`, `envFrom`, and
`volumeMounts`, Kubernetes modifies the container spec for all containers in
the Pod; for changes to `volumes`, Kubernetes modifies the Pod Spec.

{{< note >}}
A Pod Preset is capable of modifying the following fields in a Pod spec when appropriate:
- The `.spec.containers` field
- The `.spec.initContainers` field
{{< /note >}}
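For reference, a PodPreset object follows this shape (the preset name, selector label, and injected values here are illustrative):

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database        # illustrative name
spec:
  selector:
    matchLabels:
      role: frontend          # applies to Pods carrying this label
  env:
  - name: DB_PORT
    value: "6379"
  volumeMounts:
  - mountPath: /cache
    name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```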
### Disable Pod Preset for a specific pod

There may be instances where you wish for a Pod to not be altered by any Pod
preset mutations. In these cases, you can add an annotation in the Pod's `.spec`
of the form: `podpreset.admission.kubernetes.io/exclude: "true"`.
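For example, in a Pod manifest (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-presets-demo      # illustrative name
  annotations:
    podpreset.admission.kubernetes.io/exclude: "true"
spec:
  containers:
  - name: app
    image: nginx             # illustrative image
```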
## {{% heading "whatsnext" %}}

See [Injecting data into a Pod using PodPreset](/docs/tasks/inject-data-application/podpreset/)

For more information about the background, see the [design proposal for PodPreset](https://git.k8s.io/community/contributors/design-proposals/service-catalog/pod-preset.md).
@ -2,7 +2,7 @@
title: Pod
id: pod
date: 2018-04-12
full_link: /docs/concepts/workloads/pods/
short_description: >
 A Pod represents a set of running containers in your cluster.
@ -17,7 +17,7 @@ Windows applications constitute a large portion of the services and applications

## Windows containers in Kubernetes

To enable the orchestration of Windows containers in Kubernetes, simply include Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is as simple and easy as scheduling Linux-based containers.

In order to run Windows containers, your Kubernetes cluster must include multiple operating systems, with control plane nodes running Linux and workers running either Windows or Linux depending on your workload needs. Windows Server 2019 is the only Windows operating system supported, enabling [Kubernetes Node](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) on Windows (including kubelet, [container runtime](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd), and kube-proxy). For a detailed explanation of Windows distribution channels see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19).
@ -56,7 +56,7 @@ Windows containers with process isolation have strict compatibility rules, [wher

Key Kubernetes elements work the same way in Windows as they do in Linux. In this section, we talk about some of the key workload enablers and how they map to Windows.

* [Pods](/docs/concepts/workloads/pods/)

  A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. You may not deploy Windows and Linux containers in the same Pod. All containers in a Pod are scheduled onto a single Node where each Node represents a specific platform and architecture. The following Pod capabilities, properties and events are supported with Windows containers:
@ -45,13 +45,14 @@ Here is the configuration file for the application Deployment:

kubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml
```

The preceding command creates a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}
and an associated
{{< glossary_tooltip term_id="replica-set" text="ReplicaSet" >}}.
The ReplicaSet has two
{{< glossary_tooltip text="Pods" term_id="pod" >}}
each of which runs the Hello World application.

1. Display information about the Deployment:
   ```shell
   kubectl get deployments hello-world
@ -36,7 +36,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu

This example assumes the following:

1. You have an [existing Kubernetes cluster](/docs/setup/).
2. You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.

## Understand the default namespace
@ -13,7 +13,7 @@ This page shows how to view, work in, and delete {{< glossary_tooltip text="name

## {{% heading "prerequisites" %}}

* Have an [existing Kubernetes cluster](/docs/setup/).
* Have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.

<!-- steps -->
@ -34,7 +34,7 @@ This task assumes that you have met the following prerequisites:

You can use `kubectl drain` to safely evict all of your pods from a
node before you perform maintenance on the node (e.g. kernel upgrade,
hardware maintenance, etc.). Safe evictions allow the pod's containers
to [gracefully terminate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
and will respect the `PodDisruptionBudgets` you have specified.

{{< note >}}
@ -75,7 +75,7 @@ set to RUNNING until the postStart handler completes.

Kubernetes sends the preStop event immediately before the Container is terminated.
Kubernetes' management of the Container blocks until the preStop handler completes,
unless the Pod's grace period expires. For more details, see
[Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/).

{{< note >}}
Kubernetes only sends the preStop event when a Pod is *terminated*.
@ -14,7 +14,7 @@ without the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}

observing them.
Unlike Pods that are managed by the control plane (for example, a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}),
the kubelet watches each static Pod directly (and restarts it if it fails).

Static Pods are always bound to one {{< glossary_tooltip term_id="kubelet" >}} on a specific node.
@ -17,7 +17,8 @@ This page shows how to debug Pods and ReplicationControllers.

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

* You should be familiar with the basics of
  {{< glossary_tooltip text="Pods" term_id="pod" >}} and with
  Pods' [lifecycles](/docs/concepts/workloads/pods/pod-lifecycle/).
@ -58,7 +58,7 @@ kubectl delete pods -l app=myapp

### Persistent Volumes

Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods have terminated might trigger deletion of the backing Persistent Volumes depending on the storage class and reclaim policy. You should never assume ability to access a volume after claim deletion.

{{< note >}}
Use caution when deleting a PVC, as it may lead to data loss.
@ -37,7 +37,7 @@ You can perform a graceful pod deletion with the following command:

kubectl delete pods <pod>
```

For the above to lead to graceful termination, the Pod **must not** specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting a `pod.Spec.TerminationGracePeriodSeconds` of 0 seconds is unsafe and strongly discouraged for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod [shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) before the kubelet deletes the name from the apiserver.

Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/admin/node/#node-condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
@ -65,7 +65,7 @@ This tutorial provides a container image that uses NGINX to echo back all the re

## Create a Deployment

A Kubernetes [*Pod*](/docs/concepts/workloads/pods/) is a group of one or more Containers,
tied together for the purposes of administration and networking. The Pod in this
tutorial has only one Container. A Kubernetes
[*Deployment*](/docs/concepts/workloads/controllers/deployment/) checks on the health of your
@ -20,7 +20,7 @@ weight: 20

<div class="row">
    <div class="col-md-12">
        <p>
            A Pod is the basic execution unit of a Kubernetes application. Each Pod represents a part of a workload that is running on your cluster. <a href="/docs/concepts/workloads/pods/">Learn more about Pods</a>.
        </p>
    </div>
</div>
@ -28,7 +28,7 @@ weight: 10
|
|||
<div class="col-md-8">
|
||||
<h3>Overview of Kubernetes Services</h3>
|
||||
|
||||
<p>Kubernetes <a href="/docs/concepts/workloads/pods/pod-overview/">Pods</a> are mortal. Pods in fact have a <a href="/docs/concepts/workloads/pods/pod-lifecycle/">lifecycle</a>. When a worker node dies, the Pods running on the Node are also lost. A <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> might then dynamically drive the cluster back to desired state via creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are exchangeable; the front-end system should not care about backend replicas or even if a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.</p>
<p>Kubernetes <a href="/docs/concepts/workloads/pods/">Pods</a> are mortal. Pods in fact have a <a href="/docs/concepts/workloads/pods/pod-lifecycle/">lifecycle</a>. When a worker node dies, the Pods running on the Node are also lost. A <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> might then dynamically drive the cluster back to desired state via creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are exchangeable; the front-end system should not care about backend replicas or even if a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.</p>
<p>A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML <a href="/docs/concepts/configuration/overview/#general-configuration-tips">(preferred)</a> or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a <i>LabelSelector</i> (see below for why you might want a Service without including <code>selector</code> in the spec).</p>
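A minimal sketch of such a Service follows, assuming Pods labeled `app: MyApp` that listen on port 9376; the name and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                  # hypothetical name
spec:
  selector:
    app: MyApp                      # the LabelSelector: targets Pods carrying this label
  ports:
    - protocol: TCP
      port: 80                      # port exposed by the Service
      targetPort: 9376              # port the selected Pods listen on
```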
@@ -52,11 +52,11 @@ kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
The preceding command creates a
[Deployment](/docs/concepts/workloads/controllers/deployment/)
object and an associated
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
object. The ReplicaSet has five
[Pods](/docs/concepts/workloads/pods/pod/),
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}
and an associated
{{< glossary_tooltip term_id="replica-set" text="ReplicaSet" >}}.
The ReplicaSet has five
{{< glossary_tooltip text="Pods" term_id="pod" >}}
each of which runs the Hello World application.
1. Display information about the Deployment:
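In the tutorial, that step runs commands along these lines; the Deployment name `hello-world` comes from the example manifest applied above and is an assumption here:

```shell
# Summarize the Deployment, then show its full details and events.
kubectl get deployments hello-world
kubectl describe deployments hello-world
```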
@@ -72,7 +72,7 @@
/docs/concepts/abstractions/controllers/statefulsets/ /docs/concepts/workloads/controllers/statefulset/ 301
/docs/concepts/abstractions/init-containers/ /docs/concepts/workloads/pods/init-containers/ 301
/docs/concepts/abstractions/overview/ /docs/concepts/overview/working-with-objects/kubernetes-objects/ 301
/docs/concepts/abstractions/pod/ /docs/concepts/workloads/pods/pod-overview/ 301
/docs/concepts/abstractions/pod/ /docs/concepts/workloads/pods/ 301
/docs/concepts/api-extension/apiserver-aggregation/ /docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/ 301
/docs/concepts/api-extension/custom-resources/ /docs/concepts/extend-kubernetes/api-extension/custom-resources/ 301
/docs/concepts/containers/overview/ /docs/concepts/containers/ 301
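For readers unfamiliar with the file being changed here: these entries follow the Netlify `_redirects` format, `source-path destination-path status`, where `301` issues a permanent redirect and a trailing `!` (as in the `301!` entry further below) forces the redirect even if content exists at the source path. For example:

```
/docs/concepts/abstractions/pod/   /docs/concepts/workloads/pods/   301
```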
@@ -126,12 +126,14 @@
/docs/concepts/overview/object-management-kubectl/imperative-config/ /docs/tasks/manage-kubernetes-objects/imperative-config/ 301
/docs/concepts/overview/object-management-kubectl/kustomization/ /docs/tasks/manage-kubernetes-objects/kustomization/ 301
/docs/concepts/workloads/controllers/cron-jobs/deployment/ /docs/concepts/workloads/controllers/cron-jobs/ 301
/docs/concepts/workloads/controllers/daemonset/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/pod/ 301
/docs/concepts/workloads/controllers/deployment/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/pod/ 301
/docs/concepts/workloads/controllers/daemonset/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/controllers/deployment/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/controllers/jobs-run-to-completion/ /docs/concepts/workloads/controllers/job/ 301
/docs/concepts/workloads/controllers/statefulsets/ /docs/concepts/workloads/controllers/statefulset/ 301
/docs/concepts/workloads/controllers/statefulset.md /docs/concepts/workloads/controllers/statefulset/ 301!
/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/pods/pod-overview/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/pods/init-containers/Kubernetes/ /docs/concepts/workloads/pods/init-containers/ 301
/docs/consumer-guideline/pod-security-coverage/ /docs/concepts/policy/pod-security-policy/ 301
@@ -383,14 +385,14 @@
/docs/user-guide/persistent-volumes/index /docs/concepts/storage/persistent-volumes/ 301
/docs/user-guide/persistent-volumes/index.md /docs/concepts/storage/persistent-volumes/ 301
/docs/user-guide/persistent-volumes/walkthrough/ /docs/tasks/configure-pod-container/configure-persistent-volume-storage/ 301
/docs/user-guide/pod-preset/ /docs/tasks/inject-data-application/podpreset/ 301
/docs/user-guide/pod-preset/ /docs/concepts/workloads/pods/podpreset/ 301
/docs/user-guide/pod-security-policy/ /docs/concepts/policy/pod-security-policy/ 301
/docs/user-guide/pod-states/ /docs/concepts/workloads/pods/pod-lifecycle/ 301
/docs/user-guide/pod-templates/ /docs/concepts/workloads/pods/pod-overview/ 301
/docs/user-guide/pod-templates/ /docs/concepts/workloads/pods/#pod-templates 301
/docs/user-guide/pods/ /docs/concepts/workloads/pods/pod/ 301
/docs/user-guide/pods/init-container/ /docs/concepts/workloads/pods/init-containers/ 301
/docs/user-guide/pods/multi-container/ /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/ 301
/docs/user-guide/pods/single-container/ /docs/tasks/run-application/run-stateless-application-deployment/ 301
/docs/user-guide/pods/multi-container/ /docs/concepts/workloads/pods/#using-pods 301
/docs/user-guide/pods/single-container/ /docs/concepts/workloads/pods/#using-pods 301
/docs/user-guide/prereqs/ /docs/tasks/tools/install-kubectl/ 301
/docs/user-guide/production-pods/ /docs/tasks/ 301
/docs/user-guide/projected-volume/ /docs/tasks/configure-pod-container/configure-projected-volume-storage/ 301