Create garbage collection docs

pull/28870/head
Shannon Kularathna 2021-07-08 21:41:31 +00:00
parent 5288513b58
commit b1573ad314
10 changed files with 726 additions and 284 deletions


@ -0,0 +1,164 @@
---
title: Garbage Collection
content_type: concept
weight: 50
---
<!-- overview -->
{{<glossary_definition term_id="garbage-collection" length="short">}} This
allows the cleanup of resources like the following:
* [Failed pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)
* [Completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
* [Objects without owner references](#owners-dependents)
* [Unused containers and container images](#containers-images)
* [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete)
* [Stale or expired CertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
* {{<glossary_tooltip text="Nodes" term_id="node">}} deleted in the following scenarios:
* On a cloud when the cluster uses a [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
* On-premises when the cluster uses an addon similar to a cloud controller
manager
* [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats)
## Owners and dependents {#owners-dependents}
Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
Owner references tell the control plane which objects are dependent on others.
Kubernetes uses owner references to give the control plane, and other API
clients, the opportunity to clean up related resources before deleting an
object. In most cases, Kubernetes manages owner references automatically.
Ownership is different from the [labels and selectors](/docs/concepts/overview/working-with-objects/labels/)
mechanism that some resources also use. For example, consider a
{{<glossary_tooltip text="Service" term_id="service">}} that creates
`EndpointSlice` objects. The Service uses *labels* to allow the control plane to
determine which `EndpointSlice` objects are used for that Service. In addition
to the labels, each `EndpointSlice` that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don't control.
## Cascading deletion {#cascading-deletion}
Kubernetes checks for and deletes objects whose owner no longer exists, like
the Pods left behind when you delete a ReplicaSet. When you
delete an object, you can control whether Kubernetes deletes the object's
dependents automatically, in a process called *cascading deletion*. There are
two types of cascading deletion, as follows:
* Foreground cascading deletion
* Background cascading deletion
You can also control how and when garbage collection deletes resources that have
owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id="finalizer">}}.
### Foreground cascading deletion {#foreground-deletion}
In foreground cascading deletion, the owner object you're deleting first enters
a *deletion in progress* state. In this state, the following happens to the
owner object:
* The Kubernetes API server sets the object's `metadata.deletionTimestamp`
field to the time the object was marked for deletion.
* The Kubernetes API server also sets the `metadata.finalizers` field to
`foregroundDeletion`.
* The object remains visible through the Kubernetes API until the deletion
process is complete.
After the owner object enters the deletion in progress state, the controller
deletes the dependents. After deleting all the dependent objects, the controller
deletes the owner object. At this point, the object is no longer visible in the
Kubernetes API.
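For example, you can trigger foreground cascading deletion with `kubectl` (a
sketch, assuming a ReplicaSet named `my-repset` exists and your cluster runs a
kubectl version where `--cascade` accepts a string value):
```shell
# Delete the ReplicaSet and block until all of its dependent Pods are deleted first
kubectl delete replicaset my-repset --cascade=foreground
```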
During foreground cascading deletion, the only dependents that block owner
deletion are those that have the `ownerReference.blockOwnerDeletion=true` field.
See [Use foreground cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)
to learn more.
### Background cascading deletion {#background-deletion}
In background cascading deletion, the Kubernetes API server deletes the owner
object immediately and the controller cleans up the dependent objects in
the background. By default, Kubernetes uses background cascading deletion unless
you manually use foreground deletion or choose to orphan the dependent objects.
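As a sketch, you can request background cascading deletion explicitly through
the Kubernetes API (this assumes a ReplicaSet named `my-repset` exists and that
`kubectl proxy` is serving the API on local port 8080):
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  -H "Content-Type: application/json"
```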
See [Use background cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-background-cascading-deletion)
to learn more.
### Orphaned dependents
When Kubernetes deletes an owner object, the dependents left behind are called
*orphan* objects. By default, Kubernetes deletes dependent objects. To learn how
to override this behaviour, see [Delete owner objects and orphan dependents](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy).
## Garbage collection of unused containers and images {#containers-images}
The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage
collection on unused images every five minutes and on unused containers every
minute. You should avoid using external garbage collection tools, as these can
break the kubelet behavior and remove containers that should exist.
To configure options for unused container and image garbage collection, tune the
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
and change the parameters related to garbage collection using the
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
resource type.
### Container image lifecycle
Kubernetes manages the lifecycle of all images through its *image manager*,
which is part of the kubelet, with the cooperation of cAdvisor. The kubelet
considers the following disk usage limits when making garbage collection
decisions:
* `HighThresholdPercent`
* `LowThresholdPercent`
Disk usage above the configured `HighThresholdPercent` value triggers garbage
collection, which deletes images in order based on the last time they were used,
starting with the oldest first. The kubelet deletes images
until disk usage reaches the `LowThresholdPercent` value.
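For example, here is a minimal `KubeletConfiguration` sketch that sets these
thresholds (the values shown are the defaults):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Disk usage above this percentage triggers image garbage collection.
imageGCHighThresholdPercent: 85
# Image garbage collection frees disk space until usage falls below this percentage.
imageGCLowThresholdPercent: 80
```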
### Container garbage collection {#container-image-garbage-collection}
The kubelet garbage collects unused containers based on the following variables,
which you can define:
* `MinAge`: the minimum age at which the kubelet can garbage collect a
container. Disable by setting to `0`.
* `MaxPerPodContainer`: the maximum number of dead containers each Pod
(UID, container name) pair can have. Disable by setting to less than `0`.
* `MaxContainers`: the maximum number of dead containers the cluster can have.
Disable by setting to less than `0`.
In addition to these variables, the kubelet garbage collects unidentified and
deleted containers, typically starting with the oldest first.
`MaxPerPodContainer` and `MaxContainers` can conflict with each other
in situations where retaining the maximum number of containers per Pod
(`MaxPerPodContainer`) would go outside the allowable total of global dead
containers (`MaxContainers`). In this situation, the kubelet adjusts
`MaxPerPodContainer` to address the conflict. A worst-case scenario would be to
downgrade `MaxPerPodContainer` to `1` and evict the oldest containers.
Additionally, containers owned by pods that have been deleted are removed once
they are older than `MinAge`.
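These variables correspond to legacy kubelet command-line flags; the following
is an illustrative sketch only, since these flags are deprecated:
```shell
# Keep finished containers for at least one minute, retain at most one dead
# container per (Pod, container) pair, and at most 100 dead containers in total.
kubelet --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=1 \
  --maximum-dead-containers=100
```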
{{<note>}}
The kubelet only garbage collects the containers it manages.
{{</note>}}
## Configuring garbage collection {#configuring-gc}
You can tune garbage collection of resources by configuring options specific to
the controllers managing those resources. The following pages show you how to
configure garbage collection:
* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
<!-- * [Configuring unused container and image garbage collection](/docs/tasks/administer-cluster/reconfigure-kubelet/) -->
## {{% heading "whatsnext" %}}
* Learn more about [ownership of Kubernetes objects](/docs/concepts/overview/working-with-objects/owners-dependents/).
* Learn more about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) (beta) that cleans up finished Jobs.


@ -1,98 +0,0 @@
---
title: Garbage collection for container images
content_type: concept
weight: 70
---
<!-- overview -->
Garbage collection is a helpful function of kubelet that will clean up unused
[images](/docs/concepts/containers/#container-images) and unused
[containers](/docs/concepts/containers/). Kubelet will perform garbage collection
for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially
break the behavior of kubelet by removing containers expected to exist.
<!-- body -->
## Image Collection
Kubernetes manages the lifecycle of all images through imageManager, with the cooperation
of cadvisor.
The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used images until the low
threshold has been met.
## Container Collection
The policy for garbage collecting containers considers three user-defined variables.
`MinAge` is the minimum age at which a container can be garbage collected.
`MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have.
`MaxContainers` is the maximum number of total dead containers.
These variables can be individually disabled by setting `MinAge` to zero and
setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
Kubelet will act on containers that are unidentified, deleted, or outside of
the boundaries set by the previously mentioned flags. The oldest containers
will generally be removed first. `MaxPerPodContainer` and `MaxContainers` may
potentially conflict with each other in situations where retaining the maximum
number of containers per pod (`MaxPerPodContainer`) would go outside the
allowable range of global dead containers (`MaxContainers`).
`MaxPerPodContainer` would be adjusted in this situation: A worst case
scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest
containers. Additionally, containers owned by pods that have been deleted are
removed once they are older than `MinAge`.
Containers that are not managed by kubelet are not subject to container garbage collection.
## User Configuration
You can tune image garbage collection by adjusting the following kubelet flags:
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
Default is 85%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
to free. Default is 80%.
You can customize the garbage collection policy through the following kubelet flags:
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
garbage collected. Default is 0 minutes, which means every finished container will be garbage collected.
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
per container. Default is 1.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
Default is -1, which means there is no global limit.
Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
## Deprecation
Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.
Including:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `--eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
## {{% heading "whatsnext" %}}
See [Configuring Out Of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
for more details.


@ -316,4 +316,5 @@ Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.
## {{% heading "whatsnext" %}}
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).
* Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection).


@ -0,0 +1,80 @@
---
title: Finalizers
content_type: concept
weight: 60
---
<!-- overview -->
{{<glossary_definition term_id="finalizer" length="long">}}
You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
of resources by alerting {{<glossary_tooltip text="controllers" term_id="controller">}} to perform specific cleanup tasks before
deleting the target resource.
Finalizers don't usually specify the code to execute. Instead, they are
typically lists of keys on a specific resource similar to annotations.
Kubernetes specifies some finalizers automatically, but you can also specify
your own.
## How finalizers work
When you create a resource using a manifest file, you can specify finalizers in
the `metadata.finalizers` field. When you attempt to delete the resource, the
controller that manages it notices the values in the `finalizers` field and does
the following:
* Modifies the object to add a `metadata.deletionTimestamp` field with the
time you started the deletion.
* Marks the object as read-only until its `metadata.finalizers` field is empty.
The controller then attempts to satisfy the requirements of the finalizers
specified for that resource. Each time a finalizer condition is satisfied, the
controller removes that key from the resource's `finalizers` field. When the
field is empty, garbage collection continues. You can also use finalizers to
prevent deletion of unmanaged resources.
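For example, here is a minimal sketch of a manifest that specifies a custom
finalizer (the `example.com/cleanup-rule` key is hypothetical; a controller you
run would need to recognize the key and remove it once its cleanup is done):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  # Kubernetes blocks full deletion of this object until a controller
  # removes this key from the list.
  finalizers:
    - example.com/cleanup-rule
data:
  app.mode: production
```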
A common example of a finalizer is `kubernetes.io/pv-protection`, which prevents
accidental deletion of `PersistentVolume` objects. When a `PersistentVolume`
object is in use by a Pod, Kubernetes adds the `pv-protection` finalizer. If you
try to delete the `PersistentVolume`, it enters a `Terminating` status, but the
controller can't delete it because the finalizer exists. When the Pod stops
using the `PersistentVolume`, Kubernetes clears the `pv-protection` finalizer,
and the controller deletes the volume.
## Owner references, labels, and finalizers {#owners-labels-finalizers}
Like {{<glossary_tooltip text="labels" term_id="label">}}, [owner references](/docs/concepts/overview/working-with-objects/owners-dependents/)
describe the relationships between objects in Kubernetes, but are used for a
different purpose. When a
{{<glossary_tooltip text="controller" term_id="controller">}} manages objects
like Pods, it uses labels to track changes to groups of related objects. For
example, when a {{<glossary_tooltip text="Job" term_id="job">}} creates one or
more Pods, the Job controller applies labels to those Pods and tracks changes to
any Pods in the cluster with the same label.
The Job controller also adds *owner references* to those Pods, pointing at the
Job that created the Pods. If you delete the Job while these Pods are running,
Kubernetes uses the owner references (not labels) to determine which Pods in the
cluster need cleanup.
Kubernetes also processes finalizers when it identifies owner references on a
resource targeted for deletion.
In some situations, finalizers can block the deletion of dependent objects,
which can cause the targeted owner object to remain in a read-only state for
longer than expected without being fully deleted. In these situations, you
should check finalizers and owner references on the target owner and dependent
objects to troubleshoot the cause.
{{<note>}}
In cases where objects are stuck in a deleting state, try to avoid manually
removing finalizers to allow deletion to continue. Finalizers are usually added
to resources for a reason, so forcefully removing them can lead to issues in
your cluster.
{{</note>}}
## {{% heading "whatsnext" %}}
* Read [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/)
on the Kubernetes blog.


@ -0,0 +1,71 @@
---
title: Owners and Dependents
content_type: concept
weight: 60
---
<!-- overview -->
In Kubernetes, some objects are *owners* of other objects. For example, a
{{<glossary_tooltip text="ReplicaSet" term_id="replica-set">}} is the owner of a set of Pods. These owned objects are *dependents*
of their owner.
Ownership is different from the [labels and selectors](/docs/concepts/overview/working-with-objects/labels/)
mechanism that some resources also use. For example, consider a Service that
creates `EndpointSlice` objects. The Service uses labels to allow the control plane to
determine which `EndpointSlice` objects are used for that Service. In addition
to the labels, each `EndpointSlice` that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don't control.
## Owner references in object specifications
Dependent objects have a `metadata.ownerReferences` field that references their
owner object. A valid owner reference consists of the object name and a UID
within the same namespace as the dependent object. Kubernetes sets the value of
this field automatically for objects that are dependents of other objects like
ReplicaSets, DaemonSets, Deployments, Jobs, CronJobs, and ReplicationControllers.
You can also configure these relationships manually by changing the value of
this field. However, you usually don't need to and can allow Kubernetes to
automatically manage the relationships.
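For example, if you create a ReplicaSet and then view the metadata of one of
its Pods, you see an `ownerReferences` entry similar to the following (the
ReplicaSet name and UID here are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  # ...
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
    controller: true
    blockOwnerDeletion: true
```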
Dependent objects also have an `ownerReferences.blockOwnerDeletion` field that
takes a boolean value and controls whether specific dependents can block garbage
collection from deleting their owner object. Kubernetes automatically sets this
field to `true` if a {{<glossary_tooltip text="controller" term_id="controller">}}
(for example, the Deployment controller) sets the value of the
`metadata.ownerReferences` field. You can also set the value of the
`blockOwnerDeletion` field manually to control which dependents block garbage
collection.
A Kubernetes admission controller controls user access to change this field for
dependent resources, based on the delete permissions of the owner. This control
prevents unauthorized users from delaying owner object deletion.
## Ownership and finalizers
When you tell Kubernetes to delete a resource, the API server allows the
managing controller to process any [finalizer rules](/docs/concepts/overview/working-with-objects/finalizers/)
for the resource. {{<glossary_tooltip text="Finalizers" term_id="finalizer">}}
prevent accidental deletion of resources your cluster may still need to function
correctly. For example, if you try to delete a `PersistentVolume` that is still
in use by a Pod, the deletion does not happen immediately because the
`PersistentVolume` has the `kubernetes.io/pv-protection` finalizer on it.
Instead, the volume remains in the `Terminating` status until Kubernetes clears
the finalizer, which only happens after the `PersistentVolume` is no longer
bound to a Pod.
Kubernetes also adds finalizers to an owner resource when you use either
[foreground or orphan cascading deletion](/docs/concepts/architecture/garbage-collection/#cascading-deletion).
In foreground deletion, it adds the `foregroundDeletion` finalizer so that the
controller must delete dependent resources that also have
`ownerReferences.blockOwnerDeletion=true` before it deletes the owner. If you
specify an orphan deletion policy, Kubernetes adds the `orphan` finalizer so
that the controller ignores dependent resources after it deletes the owner
object.
## {{% heading "whatsnext" %}}
* Learn more about [Kubernetes finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about [garbage collection](/docs/concepts/architecture/garbage-collection).
* Read the API reference for [object metadata](/docs/reference/kubernetes-api/common-definitions/object-meta/#System).


@ -1,184 +0,0 @@
---
title: Garbage Collection
content_type: concept
weight: 60
---
<!-- overview -->
The role of the Kubernetes garbage collector is to delete certain objects
that once had an owner, but no longer have an owner.
<!-- body -->
## Owners and dependents
Some Kubernetes objects are owners of other objects. For example, a ReplicaSet
is the owner of a set of Pods. The owned objects are called *dependents* of the
owner object. Every dependent object has a `metadata.ownerReferences` field that
points to the owning object.
Sometimes, Kubernetes sets the value of `ownerReference` automatically. For
example, when you create a ReplicaSet, Kubernetes automatically sets the
`ownerReference` field of each Pod in the ReplicaSet. In 1.8, Kubernetes
automatically sets the value of `ownerReference` for objects created or adopted
by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job
and CronJob.
You can also specify relationships between owners and dependents by manually
setting the `ownerReference` field.
Here's a configuration file for a ReplicaSet that has three Pods:
{{< codenew file="controllers/replicaset.yaml" >}}
If you create the ReplicaSet and then view the Pod metadata, you can see
OwnerReferences field:
```shell
kubectl apply -f https://k8s.io/examples/controllers/replicaset.yaml
kubectl get pods --output=yaml
```
The output shows that the Pod owner is a ReplicaSet named `my-repset`:
```yaml
apiVersion: v1
kind: Pod
metadata:
...
ownerReferences:
- apiVersion: apps/v1
controller: true
blockOwnerDeletion: true
kind: ReplicaSet
name: my-repset
uid: d9607e19-f88f-11e6-a518-42010a800195
...
```
{{< note >}}
Cross-namespace owner references are disallowed by design.
Namespaced dependents can specify cluster-scoped or namespaced owners.
A namespaced owner **must** exist in the same namespace as the dependent.
If it does not, the owner reference is treated as absent, and the dependent
is subject to deletion once all owners are verified absent.
Cluster-scoped dependents can only specify cluster-scoped owners.
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
You can check for that kind of Event by running
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
{{< /note >}}
## Controlling how the garbage collector deletes dependents
When you delete an object, you can specify whether the object's dependents are
also deleted automatically. Deleting dependents automatically is called *cascading
deletion*. There are two modes of *cascading deletion*: *background* and *foreground*.
If you delete an object without deleting its dependents
automatically, the dependents are said to be *orphaned*.
### Foreground cascading deletion
In *foreground cascading deletion*, the root object first
enters a "deletion in progress" state. In the "deletion in progress" state,
the following things are true:
* The object is still visible via the REST API
* The object's `deletionTimestamp` is set
* The object's `metadata.finalizers` contains the value "foregroundDeletion".
Once the "deletion in progress" state is set, the garbage
collector deletes the object's dependents. Once the garbage collector has deleted all
"blocking" dependents (objects with `ownerReference.blockOwnerDeletion=true`), it deletes
the owner object.
Note that in the "foregroundDeletion", only dependents with
`ownerReference.blockOwnerDeletion=true` block the deletion of the owner object.
Kubernetes version 1.7 added an [admission controller](/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement) that controls user access to set
`blockOwnerDeletion` to true based on delete permissions on the owner object, so that
unauthorized dependents cannot delay deletion of an owner object.
If an object's `ownerReferences` field is set by a controller (such as Deployment or ReplicaSet),
blockOwnerDeletion is set automatically and you do not need to manually modify this field.
### Background cascading deletion
In *background cascading deletion*, Kubernetes deletes the owner object
immediately and the garbage collector then deletes the dependents in
the background.
### Setting the cascading deletion policy
To control the cascading deletion policy, set the `propagationPolicy`
field on the `deleteOptions` argument when deleting an Object. Possible values include "Orphan",
"Foreground", or "Background".
Here's an example that deletes dependents in background:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
Here's an example that deletes dependents in foreground:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
Here's an example that orphans dependents:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
kubectl also supports cascading deletion.
To delete dependents in the foreground using kubectl, set `--cascade=foreground`. To
orphan dependents, set `--cascade=orphan`.
The default behavior is to delete the dependents in the background which is the
behavior when `--cascade` is omitted or explicitly set to `background`.
Here's an example that orphans the dependents of a ReplicaSet:
```shell
kubectl delete replicaset my-repset --cascade=orphan
```
### Additional note on Deployments
Prior to Kubernetes 1.7, when using cascading deletes with Deployments you *must* use `propagationPolicy: Foreground`
to delete not only the ReplicaSets created, but also their Pods. If this type of _propagationPolicy_
is not used, only the ReplicaSets will be deleted, and the Pods will be orphaned.
See [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment-284766613) for more information.
## Known issues
Tracked at [#26120](https://github.com/kubernetes/kubernetes/issues/26120)
## {{% heading "whatsnext" %}}
[Design Doc 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
[Design Doc 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)


@ -0,0 +1,31 @@
---
title: Finalizer
id: finalizer
date: 2021-07-07
full_link: /docs/concepts/overview/working-with-objects/finalizers/
short_description: >
A namespaced key that tells Kubernetes to wait until specific conditions are met
before it fully deletes an object marked for deletion.
aka:
tags:
- fundamental
- operation
---
Finalizers are namespaced keys that tell Kubernetes to wait until specific
conditions are met before it fully deletes resources marked for deletion.
Finalizers alert {{<glossary_tooltip text="controllers" term_id="controller">}}
to clean up resources the deleted object owned.
<!--more-->
When you tell Kubernetes to delete an object that has finalizers specified for
it, the Kubernetes API marks the object for deletion, putting it into a
read-only state. The target object remains in a terminating state while the
control plane, or other components, take the actions defined by the finalizers.
After these actions are complete, the controller removes the relevant finalizers
from the target object. When the `metadata.finalizers` field is empty,
Kubernetes considers the deletion complete.
You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
of resources. For example, you can define a finalizer to clean up related resources or
infrastructure before the controller deletes the target resource.


@ -0,0 +1,24 @@
---
title: Garbage Collection
id: garbage-collection
date: 2021-07-07
full_link: /docs/concepts/architecture/garbage-collection/
short_description: >
A collective term for the various mechanisms Kubernetes uses to clean up cluster
resources.
aka:
tags:
- fundamental
- operation
---
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up
cluster resources.
<!--more-->
Kubernetes uses garbage collection to clean up resources like [unused containers and images](/docs/concepts/architecture/garbage-collection/#containers-images),
[failed Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection),
[objects owned by the targeted resource](/docs/concepts/overview/working-with-objects/owners-dependents/),
[completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/), and resources
that have expired or failed.


@ -0,0 +1,352 @@
---
title: Use Cascading Deletion in a Cluster
content_type: task
---
<!--overview-->
This page shows you how to specify the type of [cascading deletion](/docs/concepts/architecture/garbage-collection/#cascading-deletion)
to use in your cluster during {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
You also need to [create a sample Deployment](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment)
to experiment with the different types of cascading deletion. You will need to
recreate the Deployment for each type.
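For example (a sketch, using the manifest from the linked task):
```shell
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
```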
## Check owner references on your pods
Check that the `ownerReferences` field is present on your pods:
```shell
kubectl get pods -l app=nginx --output=yaml
```
The output has an `ownerReferences` field similar to this:
```
apiVersion: v1
...
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-6b474476c4
uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7
...
```
## Use foreground cascading deletion {#use-foreground-cascading-deletion}
By default, Kubernetes uses [background cascading deletion](/docs/concepts/architecture/garbage-collection/#background-deletion)
to delete dependents of an object. You can switch to foreground cascading deletion
using either `kubectl` or the Kubernetes API, depending on the Kubernetes
version your cluster runs. {{<version-check>}}
{{<tabs name="foreground_deletion">}}
{{% tab name="Kubernetes 1.20.x and later" %}}
You can delete objects using foreground cascading deletion using `kubectl` or the
Kubernetes API.
**Using kubectl**
Run the following command:
<!--TODO: verify release after which the --cascade flag is switched to a string in https://github.com/kubernetes/kubectl/commit/fd930e3995957b0093ecc4b9fd8b0525d94d3b4e-->
```shell
kubectl delete deployment nginx-deployment --cascade=foreground
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
The output contains a `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}}
like this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "nginx-deployment",
"namespace": "default",
"uid": "d1ce1b02-cae8-4288-8a53-30e84d8fa505",
"resourceVersion": "1363097",
"creationTimestamp": "2021-07-08T20:24:37Z",
"deletionTimestamp": "2021-07-08T20:27:39Z",
"finalizers": [
"foregroundDeletion"
]
...
```
{{% /tab %}}
{{% tab name="Versions prior to Kubernetes 1.20.x" %}}
You can delete objects using foreground cascading deletion by calling the
Kubernetes API.
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
The output contains a `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}}
like this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "nginx-deployment",
"namespace": "default",
"uid": "d1ce1b02-cae8-4288-8a53-30e84d8fa505",
"resourceVersion": "1363097",
"creationTimestamp": "2021-07-08T20:24:37Z",
"deletionTimestamp": "2021-07-08T20:27:39Z",
"finalizers": [
"foregroundDeletion"
]
...
```
{{% /tab %}}
{{</tabs>}}
## Use background cascading deletion {#use-background-cascading-deletion}
1. [Create a sample Deployment](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment).
1. Use either `kubectl` or the Kubernetes API to delete the Deployment,
depending on the Kubernetes version your cluster runs. {{<version-check>}}
{{<tabs name="background_deletion">}}
{{% tab name="Kubernetes version 1.20.x and later" %}}
You can delete objects using background cascading deletion using `kubectl`
or the Kubernetes API.
Kubernetes uses background cascading deletion by default, and does so
even if you run the following commands without the `--cascade` flag or the
`propagationPolicy` argument.
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=background
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
The output is similar to this:
```
"kind": "Status",
"apiVersion": "v1",
...
"status": "Success",
"details": {
"name": "nginx-deployment",
"group": "apps",
"kind": "deployments",
"uid": "cc9eefb9-2d49-4445-b1c1-d261c9396456"
}
```
{{% /tab %}}
{{% tab name="Versions prior to Kubernetes 1.20.x" %}}
Kubernetes uses background cascading deletion by default, and does so
even if you run the following commands without the `--cascade` flag or the
`propagationPolicy: Background` argument.
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=true
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
The output is similar to this:
```
"kind": "Status",
"apiVersion": "v1",
...
"status": "Success",
"details": {
"name": "nginx-deployment",
"group": "apps",
"kind": "deployments",
"uid": "cc9eefb9-2d49-4445-b1c1-d261c9396456"
}
```
{{% /tab %}}
{{</tabs>}}
## Delete owner objects and orphan dependents {#set-orphan-deletion-policy}
By default, when you tell Kubernetes to delete an object, the
{{<glossary_tooltip text="controller" term_id="controller">}} also deletes
dependent objects. You can make Kubernetes *orphan* these dependents using
`kubectl` or the Kubernetes API, depending on the Kubernetes version your
cluster runs. {{<version-check>}}
{{<tabs name="orphan_objects">}}
{{% tab name="Kubernetes version 1.20.x and later" %}}
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=orphan
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
The output contains `orphan` in the `finalizers` field, similar to this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"namespace": "default",
"uid": "6f577034-42a0-479d-be21-78018c466f1f",
"creationTimestamp": "2021-07-09T16:46:37Z",
"deletionTimestamp": "2021-07-09T16:47:08Z",
"deletionGracePeriodSeconds": 0,
"finalizers": [
"orphan"
],
...
```
{{% /tab %}}
{{% tab name="Versions prior to Kubernetes 1.20.x" %}}
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=false
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
The output contains `orphan` in the `finalizers` field, similar to this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"namespace": "default",
"uid": "6f577034-42a0-479d-be21-78018c466f1f",
"creationTimestamp": "2021-07-09T16:46:37Z",
"deletionTimestamp": "2021-07-09T16:47:08Z",
"deletionGracePeriodSeconds": 0,
"finalizers": [
"orphan"
],
...
```
{{% /tab %}}
{{</tabs>}}
You can check that the Pods managed by the Deployment are still running:
```shell
kubectl get pods -l app=nginx
```
## {{% heading "whatsnext" %}}
* Learn about [owners and dependents](/docs/concepts/overview/working-with-objects/owners-dependents/) in Kubernetes.
* Learn about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about [garbage collection](/docs/concepts/architecture/garbage-collection/).


@ -89,6 +89,7 @@
/docs/concepts/cluster-administration/device-plugins/ /docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ 301
/docs/concepts/cluster-administration/etcd-upgrade/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301
/docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods/ /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ 301
/docs/concepts/cluster-administration/kubelet-garbage-collection/ /docs/concepts/architecture/garbage-collection/#containers-images 301
/docs/concepts/cluster-administration/master-node-communication/ /docs/concepts/architecture/master-node-communication/ 301
/docs/concepts/cluster-administration/network-plugins/ /docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/ 301
/docs/concepts/cluster-administration/out-of-resource/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301
@ -145,7 +146,7 @@
/docs/concepts/workloads/controllers/cron-jobs/deployment/ /docs/concepts/workloads/controllers/cron-jobs/ 301
/docs/concepts/workloads/controllers/daemonset/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/controllers/deployment/docs/concepts/workloads/pods/pod/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/controllers/garbage-collection/ /docs/concepts/architecture/garbage-collection/ 301
/docs/concepts/workloads/controllers/jobs-run-to-completion/ /docs/concepts/workloads/controllers/job/ 301
/docs/concepts/workloads/controllers/statefulsets/ /docs/concepts/workloads/controllers/statefulset/ 301
/docs/concepts/workloads/controllers/statefulset.md /docs/concepts/workloads/controllers/statefulset/ 301!