Update deprecated kubectl flag --delete-local-data to --delete-emptydir-data

Signed-off-by: XinYang <xinydev@gmail.com>

revert changes

Signed-off-by: XinYang <xinydev@gmail.com>

update more

Signed-off-by: XinYang <xinydev@gmail.com>
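The change is a mechanical rename: `--delete-local-data` was deprecated in favor of `--delete-emptydir-data`, and both tell `kubectl drain` to evict Pods that use emptyDir volumes. A minimal bash sketch of the substitution this commit applies to each affected command (the placeholder command is taken from the first hunk below):

```shell
# Rename the deprecated kubectl drain flag in a doc line; both flag
# spellings delete Pods using emptyDir volumes, only the name changed.
old='kubectl drain <node name> --delete-local-data --force --ignore-daemonsets'
new="${old/--delete-local-data/--delete-emptydir-data}"
echo "$new"
```

The `${var/pattern/replacement}` expansion is bash-specific; a sed one-liner over the three files would do the same job.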
pull/29076/head
XinYang 2021-07-22 20:27:32 +08:00
parent 11ca5f06dc
commit d095a148f2
3 changed files with 6 additions and 6 deletions


@@ -415,7 +415,7 @@ and make sure that the node is empty, then deconfigure the node.
Talking to the control-plane node with the appropriate credentials, run:
```bash
-kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
+kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
```
Before removing the node, reset the state installed by `kubeadm`:


@@ -379,7 +379,7 @@ This might impact other applications on the Node, so it's best to
**only do this in a test cluster**.
```shell
-kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
+kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets
```
Now you can watch as the Pod reschedules on a different Node:


@@ -937,7 +937,7 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
drain the node on which the `zk-0` Pod is scheduled.
```shell
-kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
```
@@ -972,7 +972,7 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.
```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
```
```
@@ -1015,7 +1015,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which
`zk-2` is scheduled.
```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
```
@@ -1101,7 +1101,7 @@ zk-1 1/1 Running 0 13m
Attempt to drain the node on which `zk-2` is scheduled.
```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
The output: