diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index 0a394ad022..56deeb1985 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -415,7 +415,7 @@ and make sure that the node is empty, then deconfigure the node.
 Talking to the control-plane node with the appropriate credentials, run:
 
 ```bash
-kubectl drain --delete-local-data --force --ignore-daemonsets
+kubectl drain --delete-emptydir-data --force --ignore-daemonsets
 ```
 
 Before removing the node, reset the state installed by `kubeadm`:
diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
index 22f929c06f..e98830b9e3 100644
--- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
+++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
@@ -379,7 +379,7 @@ This might impact other applications on the Node, so it's best to
 **only do this in a test cluster**.
 
 ```shell
-kubectl drain --force --delete-local-data --ignore-daemonsets
+kubectl drain --force --delete-emptydir-data --ignore-daemonsets
 ```
 
 Now you can watch as the Pod reschedules on a different Node:
diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md
index 6d517ef229..2844ae6a0e 100644
--- a/content/en/docs/tutorials/stateful-application/zookeeper.md
+++ b/content/en/docs/tutorials/stateful-application/zookeeper.md
@@ -937,7 +937,7 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
 drain the node on which the `zk-0` Pod is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
 ```
@@ -972,7 +972,7 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
 `zk-1` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
 ```
 
 ```
@@ -1015,7 +1015,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which
 `zk-2` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
 ```
@@ -1101,7 +1101,7 @@ zk-1 1/1 Running 0 13m
 Attempt to drain the node on which `zk-2` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
 The output:
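
For reference, `--delete-emptydir-data` is the renamed form of `--delete-local-data`: both tell `kubectl drain` to proceed even when Pods use `emptyDir` volumes, whose contents are deleted when the Pod is evicted. Below is a minimal sketch (not part of the patch above) of the updated invocation, following the tutorial's pattern of resolving the node from the `zk-0` Pod; the `NODE` variable is introduced here purely for illustration.

```shell
# Illustrative sketch only: resolve the node currently hosting zk-0,
# then drain it using the renamed --delete-emptydir-data flag.
NODE="$(kubectl get pod zk-0 --template '{{.spec.nodeName}}')"
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data --force

# When maintenance is finished, make the node schedulable again.
kubectl uncordon "$NODE"
```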