---
title: Pods
---
* TOC
{:toc}
_Pods_ are the smallest deployable units of computing that can be created and
managed in Kubernetes.
## What is a Pod?
A _pod_ (as in a pod of whales or pea pod) is a group of one or more containers
(such as Docker containers), with shared storage/network, and a specification
for how to run the containers. A pod's contents are always co-located and
co-scheduled, and run in a shared context. A pod models an
application-specific "logical host" - it contains one or more application
containers which are relatively tightly coupled — in a pre-container
world, they would have executed on the same physical or virtual machine.
While Kubernetes supports more container runtimes than just Docker, Docker is
the most commonly known runtime, and it helps to describe pods in Docker terms.
The shared context of a pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation - the same things that isolate a Docker
container. Within a pod's context, the individual applications may have
further sub-isolations applied.
Containers within a pod share an IP address and port space, and
can find each other via `localhost`. They can also communicate with each
other using standard inter-process communications like SystemV semaphores or
POSIX shared memory. Containers in different pods have distinct IP addresses
and cannot communicate by IPC.
Applications within a pod also have access to shared volumes, which are defined
as part of a pod and are made available to be mounted into each application's
filesystem.
In terms of [Docker](https://www.docker.com/) constructs, a pod is modelled as
a group of Docker containers with shared namespaces and shared
[volumes](/docs/concepts/storage/volumes/). PID namespace sharing is not yet implemented in Docker.
Like individual application containers, pods are considered to be relatively
ephemeral (rather than durable) entities. As discussed in [life of a
pod](/docs/concepts/workloads/pods/pod-lifecycle/), pods are created, assigned a unique ID (UID), and
scheduled to nodes where they remain until termination (according to restart
policy) or deletion. If a node dies, the pods scheduled to that node are
scheduled for deletion, after a timeout period. A given pod (as defined by a UID) is not
"rescheduled" to a new node; instead, it can be replaced by an identical pod,
with even the same name if desired, but with a new UID (see [replication
controller](/docs/concepts/workloads/controllers/replicationcontroller/) for more details). (In the future, a
higher-level API may support pod migration.)
When something is said to have the same lifetime as a pod, such as a volume,
that means that it exists as long as that pod (with that UID) exists. If that
pod is deleted for any reason, even if an identical replacement is created, the
related thing (e.g. volume) is also destroyed and created anew.
![pod diagram](/images/docs/pod.svg){: style="max-width: 50%" }
*A multi-container pod that contains a file puller and a
web server that uses a persistent volume for shared storage between the containers.*
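
A manifest for the pod in the diagram might look like the following minimal sketch. The `file-puller` image and the claim name are hypothetical placeholders; only `nginx` is a real public image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-data-claim      # hypothetical pre-existing claim
  containers:
  - name: file-puller
    image: example.com/file-puller      # illustrative image, not a real one
    volumeMounts:
    - name: shared-data
      mountPath: /data                  # the puller writes fetched content here
  - name: web-server
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html  # nginx serves the same volume
```

Because both containers share the pod's network namespace, the file puller could also reach the web server at `localhost:80`.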
## Motivation for pods
### Management
Pods are a model of the pattern of multiple cooperating processes which form a
cohesive unit of service. They simplify application deployment and management
by providing a higher-level abstraction than the set of their constituent
applications. Pods serve as units of deployment, horizontal scaling, and
replication. Colocation (co-scheduling), shared fate (e.g. termination),
coordinated replication, resource sharing, and dependency management are
handled automatically for containers in a pod.
### Resource sharing and communication
Pods enable data sharing and communication among their constituents.
The applications in a pod all use the same network namespace (same IP and port
space), and can thus "find" each other and communicate using `localhost`.
Because of this, applications in a pod must coordinate their usage of ports.
Each pod has an IP address in a flat shared networking space that has full
communication with other physical computers and pods across the network.
The hostname is set to the pod's Name for the application containers within the
pod. [More details on networking](/docs/concepts/cluster-administration/networking/).
In addition to defining the application containers that run in the pod, the pod
specifies a set of shared storage volumes. Volumes enable data to survive
container restarts and to be shared among the applications within the pod.
## Uses of pods
Pods can be used to host vertically integrated application stacks (e.g. LAMP),
but their primary motivation is to support co-located, co-managed helper
programs, such as:
* content management systems, file and data loaders, local cache managers, etc.
* log and checkpoint backup, compression, rotation, snapshotting, etc.
* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
* proxies, bridges, and adapters
* controllers, managers, configurators, and updaters
In general, individual pods are not intended to run multiple instances of the
same application.
For a longer explanation, see [The Distributed System ToolKit: Patterns for
Composite
Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html).
## Alternatives considered
_Why not just run multiple programs in a single (Docker) container?_
1. Transparency. Making the containers within the pod visible to the
infrastructure enables the infrastructure to provide services to those
containers, such as process management and resource monitoring. This
facilitates a number of conveniences for users.
2. Decoupling software dependencies. The individual containers may be
versioned, rebuilt and redeployed independently. Kubernetes may even support
live updates of individual containers someday.
3. Ease of use. Users don't need to run their own process managers, worry about
signal and exit-code propagation, etc.
4. Efficiency. Because the infrastructure takes on more responsibility,
containers can be lighter weight.
_Why not support affinity-based co-scheduling of containers?_
That approach would provide co-location, but would not provide most of the
benefits of pods, such as resource sharing, IPC, guaranteed fate sharing, and
simplified management.
## Durability of pods (or lack thereof)
Pods aren't intended to be treated as durable entities. They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [Deployments](/docs/concepts/workloads/controllers/deployment/)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
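
For example, rather than creating a bare pod, even a singleton is better expressed as a Deployment with `replicas` set to `1`. The following is a minimal sketch (the `apps/v1` API version shown here may differ on older clusters):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-singleton
spec:
  replicas: 1                # a singleton, but still self-healing
  selector:
    matchLabels:
      app: nginx-singleton
  template:
    metadata:
      labels:
        app: nginx-singleton
    spec:
      containers:
      - name: nginx
        image: nginx
```

If the node running the pod dies, the Deployment's controller creates a replacement pod (with a new UID) elsewhere.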
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
Pod is exposed as a primitive in order to facilitate:
* scheduler and controller pluggability
* support for pod-level operations without the need to "proxy" them via controller APIs
* decoupling of pod lifetime from controller lifetime, such as for bootstrapping
* decoupling of controllers and services — the endpoint controller just watches pods
* clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller"
* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949)
There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/workloads/controllers/statefulset.md) controller (currently in beta). The feature was alpha in 1.4 and was called PetSet. For prior versions of Kubernetes, the best practice for stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service; see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/).
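
On clusters without StatefulSet, that pattern might look like the following sketch; the image and labels are illustrative, and details such as persistent storage are omitted:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1          # exactly one stateful pod at a time
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
---
apiVersion: v1
kind: Service
metadata:
  name: mysql          # stable name for clients, whatever the pod's UID
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
```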
## Termination of Pods
Because pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (rather than being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a pod, the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired, the KILL signal is sent to those processes, and the pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.
An example flow:
1. User sends command to delete Pod, with default grace period (30s)
2. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period.
3. Pod shows up as "Terminating" when listed in client commands
4. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.
   1. If the pod has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the pod (see the sketch after this list). If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
   2. The processes in the Pod are sent the TERM signal.
5. (simultaneous with 3) The Pod is removed from the endpoints list for the service, and is no longer considered part of the set of running pods for replication controllers. Pods that shut down slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
6. When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
7. The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client.
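
A pod can lengthen its grace period and hook into the shutdown sequence with a `preStop` handler. The following is a minimal sketch; the pod name and the drain command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-example
spec:
  terminationGracePeriodSeconds: 60     # override the 30 second default
  containers:
  - name: web
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]  # drain connections before TERM arrives
```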
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=<seconds>` option which allows a user to override the default and specify their own value. The value `0` [force deletes](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) the pod. In kubectl version >= 1.5, you must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions.
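
For example (using a hypothetical pod named `mypod`):

```shell
kubectl delete pod mypod                           # graceful, default 30s grace period
kubectl delete pod mypod --grace-period=60         # graceful, custom grace period
kubectl delete pod mypod --grace-period=0 --force  # force deletion (kubectl >= 1.5)
```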
### Force deletion of pods
Force deletion of a pod is defined as deletion of a pod from the cluster state and etcd immediately. When a force deletion is performed, the apiserver does not wait for confirmation from the kubelet that the pod has been terminated on the node it was running on. It removes the pod in the API immediately so a new pod can be created with the same name. On the node, pods that are set to terminate immediately will still be given a small grace period before being force killed.
Force deletions can be potentially dangerous for some pods and should be performed with caution. In case of StatefulSet pods, please refer to the task documentation for [deleting Pods from a StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/).
## Privileged mode for pod containers
From Kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use Linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate pods that don't need to be compiled into the kubelet.
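
Enabling it is a one-line addition to the container's `securityContext`; the pod and container names below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true   # near-host privileges; use sparingly
```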
If the master is running Kubernetes v1.1 or higher, and the nodes are running a version lower than v1.1, then new privileged pods will be accepted by the API server, but will not be launched. They will remain in the pending state.
If the user calls `kubectl describe pod FooPodName`, they can see the reason why the pod is in the pending state. The events table in the describe command output will say:
`Error validating pod "FooPodName"."FooPodNamespace" from api, ignoring: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc2089d3248)true'`
If the master is running a version lower than v1.1, then privileged pods cannot be created. If the user attempts to create a pod that has a privileged container, they will get the following error:
`The Pod "FooPodName" is invalid.
spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20b222db0)true'`
## API Object
Pod is a top-level resource in the Kubernetes REST API. More details about the
API object can be found at: [Pod API
object](/docs/api-reference/{{page.version}}/#pod-v1-core).
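
For reference, a minimal Pod object looks like this (the names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```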