diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md
index fd1c298785..2f85f49a10 100644
--- a/docs/tutorials/stateful-application/zookeeper.md
+++ b/docs/tutorials/stateful-application/zookeeper.md
@@ -1022,13 +1022,6 @@ Get the nodes in your cluster.
 kubectl get nodes
 ```
 
-Use [`kubectl cordon`](/docs/user-guide/kubectl/{{page.version}}/#cordon) to
-cordon all but four of the nodes in your cluster.
-
-```shell{% raw %}
-kubectl cordon < node name >
-```{% endraw %}
-
 Get the `zk-budget` PodDisruptionBudget.
 
 ```shell
@@ -1060,6 +1053,13 @@ kubernetes-minion-group-i4c4
 {% endraw %}
 ```
 
+Use [`kubectl cordon`](/docs/user-guide/kubectl/{{page.version}}/#cordon) to
+cordon the three nodes that the Pods are currently scheduled on.
+
+```shell{% raw %}
+kubectl cordon < node name >
+```{% endraw %}
+
 Use [`kubectl drain`](/docs/user-guide/kubectl/{{page.version}}/#drain) to
 cordon and drain the node on which the `zk-0` Pod is scheduled.
 
@@ -1095,7 +1095,8 @@ Keep watching the StatefulSet's Pods in the first terminal and drain the node
 on which `zk-1` is scheduled.
 
 ```shell{% raw %}
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-minion-group-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+node "kubernetes-minion-group-ixsl" cordoned
 WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-ixsl, kube-proxy-kubernetes-minion-group-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
 pod "zk-1" deleted
 node "kubernetes-minion-group-ixsl" drained
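
A note on the cordon step this patch adds: the three nodes to cordon are the ones hosting the ensemble's Pods. Below is a minimal sketch of one way to look them up and cordon them, reusing the `--template {{.spec.nodeName}}` lookup the tutorial's drain commands already use; the loop and the Pod names `zk-0` through `zk-2` are assumptions based on the three-Pod ensemble the tutorial creates.

```shell
# Sketch: cordon the node that each ZooKeeper Pod is scheduled on.
# Assumes the StatefulSet's Pods are named zk-0, zk-1, and zk-2.
for i in 0 1 2; do
  node=$(kubectl get pod "zk-$i" --template {{.spec.nodeName}})
  kubectl cordon "$node"
done
```

Cordoning only marks the nodes unschedulable; the subsequent `kubectl drain` commands then evict the Pods one at a time, with the `zk-budget` PodDisruptionBudget blocking any eviction that would leave the ensemble without quorum.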