From 08075dc56e4a826f8ade720f2ed5931bb4fe766a Mon Sep 17 00:00:00 2001
From: matheusjunior
Date: Sun, 7 May 2023 16:36:55 -0300
Subject: [PATCH 1/2] Add note about `nodeName` ignoring a drained node

Clarify that `nodeName` still places Pods on nodes with `SchedulingDisabled`
status.

Please check issue
[117843](https://github.com/kubernetes/kubernetes/issues/117843) for more
info.
---
 .../en/docs/tasks/administer-cluster/safely-drain-node.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
index 5afcd3eac1..0479719b3e 100644
--- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
@@ -62,6 +62,11 @@ and respecting the PodDisruptionBudget you have defined). It is then safe to
 bring down the node by powering down its physical machine or, if running on a
 cloud platform, deleting its virtual machine.
 
+{{< note >}}
+[nodeName](/docs/concepts/scheduling-eviction/assign-pod-node/#nodename) bypasses the scheduler,
+thus evicted Pods will still run on a drained node.
+{{< /note >}}
+
 First, identify the name of the node you wish to drain. You can list all of the
 nodes in your cluster with
 
 ```shell

From b230abcfb2f4c5108aa1a083e307cf10fa5398a3 Mon Sep 17 00:00:00 2001
From: matheusjunior
Date: Tue, 9 May 2023 09:45:51 -0300
Subject: [PATCH 2/2] Update
 content/en/docs/tasks/administer-cluster/safely-drain-node.md

Co-authored-by: Tim Bannister
---
 .../docs/tasks/administer-cluster/safely-drain-node.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
index 0479719b3e..6e2eda63fd 100644
--- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
@@ -63,8 +63,13 @@ bring down the node by powering down its physical machine or, if running on a
 cloud platform, deleting its virtual machine.
 
 {{< note >}}
-[nodeName](/docs/concepts/scheduling-eviction/assign-pod-node/#nodename) bypasses the scheduler,
-thus evicted Pods will still run on a drained node.
+If any new Pods tolerate the `node.kubernetes.io/unschedulable` taint, then those Pods
+might be scheduled to the node you have drained. Avoid tolerating that taint other than
+for DaemonSets.
+
+If you or another API user directly set the [`nodeName`](/docs/concepts/scheduling-eviction/assign-pod-node/#nodename)
+field for a Pod (bypassing the scheduler), then the Pod is bound to the specified node
+and will run there, even though you have drained that node and marked it unschedulable.
 {{< /note >}}
 
 First, identify the name of the node you wish to drain. You can list all of the
 nodes in your cluster with
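
Addendum for reviewers: a minimal sketch, as plain Pod manifests, of the two
bypass paths the final note describes. Everything below is illustrative only;
the names `pinned-pod`, `tolerant-pod`, and `node-1` are hypothetical
placeholders that do not appear in the patches.

```yaml
# Path 1: spec.nodeName binds the Pod directly to the kubelet on node-1,
# skipping the scheduler, so the node.kubernetes.io/unschedulable taint
# that cordoning applies is never evaluated and the Pod runs anyway.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod            # hypothetical name
spec:
  nodeName: node-1            # hypothetical drained node
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
---
# Path 2: this Pod still goes through the scheduler, but the toleration
# below matches the taint on a drained node, so the scheduler may place
# the Pod there. The patch advises leaving this toleration to DaemonSets.
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod          # hypothetical name
spec:
  tolerations:
  - key: node.kubernetes.io/unschedulable
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```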