From 2e3da350b4d28deeb855a773371a14f0fc70664b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20G=C3=BCttler?=
Date: Fri, 13 Jan 2023 11:39:39 +0100
Subject: [PATCH] Update
 content/en/docs/tasks/administer-cluster/safely-drain-node.md

Co-authored-by: Tim Bannister
---
 .../docs/tasks/administer-cluster/safely-drain-node.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
index 7542adc7bd..7963898361 100644
--- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
@@ -69,10 +69,12 @@ Next, tell Kubernetes to drain the node:
 kubectl drain --ignore-daemonsets
 ```
 
-If there are daemon set managed pods, drain will not proceed without `--ignore-daemonsets`,
-and regardless it will not delete any daemon set managed pods,
-because those pods would be immediately replaced by the daemon set controller,
-which ignores unschedulable markings.
+If there are DaemonSet managed pods, drain will usually not succeed unless you specify
+`--ignore-daemonsets`. The `kubectl drain` subcommand on its own does not actually drain
+a node of its DaemonSet pods:
+the DaemonSet controller (part of the control plane) immediately replaces missing Pods with
+new equivalent Pods. The DaemonSet controller also creates Pods that ignore unschedulable
+taints, which allows the new Pods to launch onto a node that you are draining.
 
 Once it returns (without giving an error), you can power down the node (or
 equivalently, if on a cloud platform, delete the virtual machine backing the node).