Improve overview for workload APIs

Co-authored-by: Qiming Teng <tengqm@outlook.com>
Co-authored-by: Aldo Culquicondor <acondor@google.com>
pull/37593/head
Tim Bannister 2022-10-29 18:33:53 +01:00
parent 74ee1d9875
commit 50635afc37
9 changed files with 109 additions and 28 deletions


@ -9,16 +9,16 @@ no_list: true
{{< glossary_definition term_id="workload" length="short" >}}
Whether your workload is a single component or several that work together, on Kubernetes you run
it inside a set of [_pods_](/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running
{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.
Kubernetes pods have a [defined lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/).
For example, once a pod is running in your cluster then a critical fault on the
{{< glossary_tooltip text="node" term_id="node" >}} where that pod is running means that
all the pods on that node fail. Kubernetes treats that level of failure as final: you
would need to create a new Pod to recover, even if the node later becomes healthy.
However, to make life considerably easier, you don't need to manage each Pod directly.
Instead, you can use _workload resources_ that manage a set of pods on your behalf.
These resources configure {{< glossary_tooltip term_id="controller" text="controllers" >}}
that make sure the right number of the right kind of pod are running, to match the state
@ -26,44 +26,51 @@ you specified.
Kubernetes provides several built-in workload resources:
* [Deployment](/docs/concepts/workloads/controllers/deployment/) and [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
(replacing the legacy resource
{{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}}).
Deployment is a good fit for managing a stateless application workload on your cluster,
where any Pod in the Deployment is interchangeable and can be replaced if needed
(see the example manifest after this list).
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) lets you
run one or more related Pods that do track state somehow. For example, if your workload
records data persistently, you can run a StatefulSet that matches each Pod with a
[PersistentVolume](/docs/concepts/storage/persistent-volumes/). Your code, running in the
Pods for that StatefulSet, can replicate data to other Pods in the same StatefulSet
to improve overall resilience.
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) defines Pods that provide
facilities that are local to nodes.
Every time you add a node to your cluster that matches the specification in a DaemonSet,
the control plane schedules a Pod for that DaemonSet onto the new node.
Each Pod in a DaemonSet performs a role similar to a system daemon on a classic Unix / POSIX
server. A DaemonSet might be fundamental to the operation of your cluster
(for example, a plugin to run [cluster networking](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model)),
it might help you to manage the node,
or it could provide optional behavior that enhances the container platform you are running.
* [Job](/docs/concepts/workloads/controllers/job/) and
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/) provide different ways to
define tasks that run to completion and then stop.
You can use a [Job](/docs/concepts/workloads/controllers/job/) to
define a task that runs to completion, just once. You can use a
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/) to run
the same Job multiple times according to a schedule.
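To give a concrete sense of what one of these resources looks like, here is a minimal sketch of
a Deployment manifest. The names, labels, and `nginx` image are placeholders for illustration,
not values you need to use.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web            # illustrative name
spec:
  replicas: 3                  # keep three interchangeable Pods running
  selector:
    matchLabels:
      app: example-web
  template:                    # the Pod template that each replica is created from
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # illustrative container image
        ports:
        - containerPort: 80
```

If you apply a manifest like this (for example, with `kubectl apply -f`), the Deployment's
controller creates the Pods for you and replaces them if they fail.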
In the wider Kubernetes ecosystem, you can find third-party workload resources that provide
additional behaviors. Using a
[custom resource definition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/),
you can add in a third-party workload resource if you want a specific behavior that's not part
of Kubernetes' core. For example, if you want to run a group of Pods for your application but
stop work unless _all_ the Pods are available (perhaps for some high-throughput distributed task),
then you can implement or install an extension that does provide that feature.
## {{% heading "whatsnext" %}}
As well as reading about each API kind for workload management, you can read how to
do specific tasks:
* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/)
* Run a stateful application either as a [single instance](/docs/tasks/run-application/run-single-instance-stateful-application/)
or as a [replicated set](/docs/tasks/run-application/run-replicated-stateful-application/)
* [Run automated tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
To learn about Kubernetes' mechanisms for separating code from configuration,
visit [Configuration](/docs/concepts/configuration/).
@ -76,6 +83,6 @@ for applications:
removes Jobs once a defined time has passed since they completed.
Once your application is running, you might want to make it available on the internet as
a [Service](/docs/concepts/services-networking/service/) or, for web applications only,
using an [Ingress](/docs/concepts/services-networking/ingress).


@ -3,3 +3,55 @@ title: "Workload Resources"
weight: 20
---
Kubernetes provides several built-in APIs for declarative management of your
{{< glossary_tooltip text="workloads" term_id="workload" >}}
and the components of those workloads.
Ultimately, your applications run as containers inside
{{< glossary_tooltip term_id="Pod" text="Pods" >}}; however, managing individual
Pods would be a lot of effort. For example, if a Pod fails, you probably want to
run a new Pod to replace it. Kubernetes can do that for you.
You use the Kubernetes API to create a workload
{{< glossary_tooltip text="object" term_id="object" >}} that represents a higher level
abstraction than a Pod, and then the Kubernetes
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} automatically manages
Pod objects on your behalf, based on the specification for the workload object you defined.
The built-in APIs for managing workloads are:
[Deployment](/docs/concepts/workloads/controllers/deployment/) (and, indirectly, [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)),
the most common way to run an application on your cluster.
Deployment is a good fit for managing a stateless application workload on your cluster, where
any Pod in the Deployment is interchangeable and can be replaced if needed.
(Deployments are a replacement for the legacy
{{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}} API).
A [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) lets you
manage one or more Pods all running the same application code where the Pods rely
on having a distinct identity. This is different from a Deployment where the Pods are
expected to be interchangeable.
The most common use for a StatefulSet is to be able to make a link between its Pods and
their persistent storage. For example, you can run a StatefulSet that associates each Pod
with a [PersistentVolume](/docs/concepts/storage/persistent-volumes/). If one of the Pods
in the StatefulSet fails, Kubernetes makes a replacement Pod that is connected to the
same PersistentVolume.
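As a minimal sketch of that pattern, the manifest below runs a StatefulSet where every Pod gets
its own PersistentVolumeClaim (and therefore its own PersistentVolume) created from a template.
The names, image, and storage size are placeholders, and the headless Service named in
`serviceName` is assumed to exist already.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db                       # illustrative name
spec:
  serviceName: example-db                # headless Service assumed to exist; gives Pods stable identities
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: registry.example/db:1.0   # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db         # illustrative mount path
  volumeClaimTemplates:                  # each Pod gets its own PersistentVolumeClaim from this template
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi                   # illustrative size
```

If a Pod such as `example-db-1` fails, its replacement reattaches to the same claim, and so to
the same PersistentVolume.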
A [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) defines Pods that provide
facilities that are local to a specific {{< glossary_tooltip text="node" term_id="node" >}};
for example, a driver that lets containers on that node access a storage system. You use a DaemonSet
when the driver, or other node-level service, has to run on the node where it's useful.
Each Pod in a DaemonSet performs a role similar to a system daemon on a classic Unix / POSIX
server.
A DaemonSet might be fundamental to the operation of your cluster
(for example, a plugin to let that node access
[cluster networking](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model)),
it might help you to manage the node,
or it could provide less essential facilities that enhance the container platform you are running.
You can run DaemonSets (and their pods) across every node in your cluster, or across just a subset (for example,
installing the GPU accelerator driver only on nodes that have a GPU installed).
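As a sketch of that subset case, the manifest below schedules a Pod only onto nodes that carry
an illustrative `gpu: "true"` label; the names, label, and image are placeholders rather than a
real driver.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-gpu-driver                   # illustrative name
spec:
  selector:
    matchLabels:
      name: example-gpu-driver
  template:
    metadata:
      labels:
        name: example-gpu-driver
    spec:
      nodeSelector:
        gpu: "true"                          # illustrative node label; only matching nodes run this Pod
      containers:
      - name: driver
        image: registry.example/gpu-driver:1.0   # illustrative image
```

Whenever a node that matches the selector joins the cluster, the DaemonSet controller adds a Pod
there; without the `nodeSelector`, the DaemonSet covers every eligible node.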
You can use a [Job](/docs/concepts/workloads/controllers/job/) and / or
a [CronJob](/docs/concepts/workloads/controllers/cron-jobs/) to
define tasks that run to completion and then stop. A Job represents a one-off task,
whereas each CronJob repeats according to a schedule.
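For example, a minimal CronJob sketch might look like the manifest below; the schedule, names,
image, and command are placeholders. Each time the schedule fires, the CronJob creates a Job
from the embedded template, and that Job runs to completion.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-report                 # illustrative name
spec:
  schedule: "0 3 * * *"                # every day at 03:00
  jobTemplate:                         # each scheduled run creates a Job from this template
    spec:
      template:
        spec:
          restartPolicy: Never         # Job Pods must use Never or OnFailure, not Always
          containers:
          - name: report
            image: registry.example/report:1.0   # illustrative image
            command: ["/bin/generate-report"]    # illustrative command
```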


@ -5,7 +5,10 @@ reviewers:
- janetkuo
title: CronJob
content_type: concept
description: >-
A CronJob starts one-time Jobs on a repeating schedule.
weight: 80
hide_summary: true # Listed separately in section index
---
<!-- overview -->


@ -6,8 +6,11 @@ reviewers:
- janetkuo
- kow3ns
title: DaemonSet
description: >-
A DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the operation of your cluster, such as a networking helper tool, or be part of an add-on.
content_type: concept
weight: 40
hide_summary: true # Listed separately in section index
---
<!-- overview -->


@ -6,9 +6,11 @@ feature:
title: Automated rollouts and rollbacks
description: >
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
description: >-
A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state.
content_type: concept
weight: 10
hide_summary: true # Listed separately in section index
---
<!-- overview -->


@ -5,11 +5,14 @@ reviewers:
- soltysh
title: Jobs
content_type: concept
description: >-
Jobs represent one-off tasks that run to completion and then stop.
feature:
title: Batch execution
description: >
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
weight: 50
hide_summary: true # Listed separately in section index
---
<!-- overview -->


@ -12,7 +12,11 @@ feature:
kills containers that don't respond to your user-defined health check,
and doesn't advertise them to clients until they are ready to serve.
content_type: concept
description: >-
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
Usually, you define a Deployment and let that Deployment manage ReplicaSets automatically.
weight: 20
hide_summary: true # Listed separately in section index
---
<!-- overview -->


@ -5,6 +5,9 @@ reviewers:
title: ReplicationController
content_type: concept
weight: 90
description: >-
Legacy API for managing workloads that can scale horizontally.
Superseded by the Deployment and ReplicaSet APIs.
---
<!-- overview -->
@ -19,7 +22,7 @@ always up and available.
<!-- body -->
## How a ReplicationController works
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a


@ -8,7 +8,11 @@ reviewers:
- smarterclayton
title: StatefulSets
content_type: concept
description: >-
A StatefulSet runs a group of Pods, and maintains a sticky identity for each of those Pods. This is useful for managing
applications that need persistent storage or a stable, unique network identity.
weight: 30
hide_summary: true # Listed separately in section index
---
<!-- overview -->