Outdated terminology: “kubernetes-minion-group” (#14859)

pull/14971/head
mhamdi semah 2019-06-18 22:49:50 +02:00 committed by Kubernetes Prow Robot
parent b473890928
commit 82650353cb
4 changed files with 38 additions and 38 deletions


@@ -85,7 +85,7 @@ If your cluster runs short on resources you can easily add more machines to it i
If you're using GCE or Google Kubernetes Engine it's done by resizing the Instance Group managing your Nodes. It can be accomplished by modifying the number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using the gcloud CLI:
```shell
-gcloud compute instance-groups managed resize kubernetes-minion-group --size=42 --zone=$ZONE
+gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE
```
The Instance Group will take care of putting the appropriate image on new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.
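As a quick sanity check after resizing (a minimal sketch; nothing here is specific to the renamed group), you can watch the new machines register with the API server:
```shell
# Watch the cluster until the newly created instances appear as Ready Nodes
kubectl get nodes --watch
```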


@@ -363,7 +363,7 @@ The Node name should show up in the last column:
```
NAME      READY     STATUS    RESTARTS   AGE   IP            NODE
-mysql-2   2/2       Running   0          15m   10.244.5.27   kubernetes-minion-group-9l2t
+mysql-2   2/2       Running   0          15m   10.244.5.27   kubernetes-node-9l2t
```
Then drain the Node by running the following command, which cordons it so
@@ -387,14 +387,14 @@ It should look something like this:
```
NAME      READY     STATUS            RESTARTS   AGE   IP            NODE
-mysql-2   2/2       Terminating       0          15m   10.244.1.56   kubernetes-minion-group-9l2t
+mysql-2   2/2       Terminating       0          15m   10.244.1.56   kubernetes-node-9l2t
[...]
-mysql-2   0/2       Pending           0          0s    <none>        kubernetes-minion-group-fjlm
-mysql-2   0/2       Init:0/2          0          0s    <none>        kubernetes-minion-group-fjlm
-mysql-2   0/2       Init:1/2          0          20s   10.244.5.32   kubernetes-minion-group-fjlm
-mysql-2   0/2       PodInitializing   0          21s   10.244.5.32   kubernetes-minion-group-fjlm
-mysql-2   1/2       Running           0          22s   10.244.5.32   kubernetes-minion-group-fjlm
-mysql-2   2/2       Running           0          30s   10.244.5.32   kubernetes-minion-group-fjlm
+mysql-2   0/2       Pending           0          0s    <none>        kubernetes-node-fjlm
+mysql-2   0/2       Init:0/2          0          0s    <none>        kubernetes-node-fjlm
+mysql-2   0/2       Init:1/2          0          20s   10.244.5.32   kubernetes-node-fjlm
+mysql-2   0/2       PodInitializing   0          21s   10.244.5.32   kubernetes-node-fjlm
+mysql-2   1/2       Running           0          22s   10.244.5.32   kubernetes-node-fjlm
+mysql-2   2/2       Running           0          30s   10.244.5.32   kubernetes-node-fjlm
```
And again, you should see server ID `102` disappear from the
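The drain command referenced above falls outside these hunks; as a hedged sketch, it typically looks like this (node name taken from the sample output, flags matching those used in the zookeeper hunks later in this commit):
```shell
# Cordon the node hosting mysql-2 and evict its Pods so the StatefulSet
# controller reschedules them elsewhere (node name from the output above)
kubectl drain kubernetes-node-9l2t --force --delete-local-data --ignore-daemonsets
```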


@@ -67,13 +67,13 @@ kubectl get nodes
The output is similar to this:
```
NAME                           STATUS    ROLES     AGE     VERSION
-kubernetes-minion-group-6jst   Ready     <none>    2h      v1.13.0
-kubernetes-minion-group-cx31   Ready     <none>    2h      v1.13.0
-kubernetes-minion-group-jj1t   Ready     <none>    2h      v1.13.0
+kubernetes-node-6jst           Ready     <none>    2h      v1.13.0
+kubernetes-node-cx31           Ready     <none>    2h      v1.13.0
+kubernetes-node-jj1t           Ready     <none>    2h      v1.13.0
```
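The next command runs on the node itself, so you need a shell there first; a minimal sketch, assuming a GCE cluster like the one in this example:
```shell
# Open an SSH session on one of the nodes (GCE; $ZONE assumed to be set)
gcloud compute ssh kubernetes-node-6jst --zone=$ZONE
```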
Get the proxy mode on one of the nodes
```console
-kubernetes-minion-group-6jst $ curl localhost:10249/proxyMode
+kubernetes-node-6jst $ curl localhost:10249/proxyMode
```
The output is:
```
@@ -326,18 +326,18 @@ kubectl get pod -o wide -l run=source-ip-app
The output is similar to this:
```
NAME                            READY     STATUS    RESTARTS   AGE   IP             NODE
-source-ip-app-826191075-qehz4   1/1       Running   0          20h   10.180.1.136   kubernetes-minion-group-6jst
+source-ip-app-826191075-qehz4   1/1       Running   0          20h   10.180.1.136   kubernetes-node-6jst
```
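The `/healthz` checks below target the Service's health-check NodePort (32122 in this example), which is only allocated when `externalTrafficPolicy` is `Local`. A hedged way to look it up on your own cluster (the Service name `loadbalancer` here is an assumption; substitute the LoadBalancer Service you exposed):
```shell
# Print the allocated health-check NodePort for the Service
kubectl get svc loadbalancer -o jsonpath='{.spec.healthCheckNodePort}'
```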
Curl the `/healthz` endpoint on different nodes.
```console
-kubernetes-minion-group-6jst $ curl localhost:32122/healthz
+kubernetes-node-6jst $ curl localhost:32122/healthz
```
The output is similar to this:
```
1 Service Endpoints found
```
```console
-kubernetes-minion-group-jj1t $ curl localhost:32122/healthz
+kubernetes-node-jj1t $ curl localhost:32122/healthz
```
The output is similar to this:
```


@@ -832,9 +832,9 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo "";
All of the Pods in the `zk` `StatefulSet` are deployed on different nodes.
```shell
-kubernetes-minion-group-cxpk
-kubernetes-minion-group-a5aq
-kubernetes-minion-group-2g2d
+kubernetes-node-cxpk
+kubernetes-node-a5aq
+kubernetes-node-2g2d
```
This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` specified.
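A minimal way to inspect that rule on the live object (the field path is standard; output shape varies by kubectl version):
```shell
# Print the podAntiAffinity stanza from the zk StatefulSet's Pod template
kubectl get sts zk -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
```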
@@ -906,9 +906,9 @@ In another terminal, use this command to get the nodes that the Pods are current
```shell
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
-kubernetes-minion-group-pb41
-kubernetes-minion-group-ixsl
-kubernetes-minion-group-i4c4
+kubernetes-node-pb41
+kubernetes-node-ixsl
+kubernetes-node-i4c4
```
Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) to cordon and
@@ -916,11 +916,11 @@ drain the node on which the `zk-0` Pod is scheduled.
```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
-node "kubernetes-minion-group-pb41" cordoned
+node "kubernetes-node-pb41" cordoned
-WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-pb41, kube-proxy-kubernetes-minion-group-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
pod "zk-0" deleted
-node "kubernetes-minion-group-pb41" drained
+node "kubernetes-node-pb41" drained
```
As there are four nodes in your cluster, `kubectl drain` succeeds and the
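Before continuing, you can confirm the cordon took effect; a minimal check (node name from the output above), which should report `SchedulingDisabled`:
```shell
# The drained node should show STATUS Ready,SchedulingDisabled
kubectl get node kubernetes-node-pb41
```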
@@ -947,11 +947,11 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.
```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-minion-group-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
-WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-ixsl, kube-proxy-kubernetes-minion-group-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
-node "kubernetes-minion-group-ixsl" drained
+node "kubernetes-node-ixsl" drained
```
The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
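You can verify this from the other terminal; a minimal check is that `zk-1` reports `Pending` with no assigned node:
```shell
# zk-1 should stay Pending with <none> in the NODE column
kubectl get pod zk-1 -o wide
```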
@@ -986,10 +986,10 @@ Continue to watch the Pods of the stateful set, and drain the node on which
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
-node "kubernetes-minion-group-i4c4" cordoned
+node "kubernetes-node-i4c4" cordoned
-WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
-WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
+WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4
There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.
pod/zk-2
```
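The budget doing the blocking can be inspected directly; the PDB name `zk-pdb` here is an assumption based on this tutorial's manifest conventions:
```shell
# Show how many voluntary disruptions the budget currently allows (expect 0 here)
kubectl get pdb zk-pdb -o wide
```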
@@ -1025,9 +1025,9 @@ numChildren = 0
Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) to uncordon the first node.
```shell
-kubectl uncordon kubernetes-minion-group-pb41
+kubectl uncordon kubernetes-node-pb41
-node "kubernetes-minion-group-pb41" uncordoned
+node "kubernetes-node-pb41" uncordoned
```
`zk-1` is rescheduled on this node. Wait until `zk-1` is Running and Ready.
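One hedged way to block until that happens (`kubectl wait` needs client v1.11+; a plain `kubectl get pod zk-1 --watch` works on older clients):
```shell
# Block until zk-1 passes its readiness probe, or time out after 5 minutes
kubectl wait --for=condition=Ready pod/zk-1 --timeout=300s
```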
@@ -1070,11 +1070,11 @@ kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-dae
The output:
```
-node "kubernetes-minion-group-i4c4" already cordoned
-WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
+node "kubernetes-node-i4c4" already cordoned
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
pod "heapster-v1.2.0-2604621511-wht1r" deleted
pod "zk-2" deleted
-node "kubernetes-minion-group-i4c4" drained
+node "kubernetes-node-i4c4" drained
```
This time `kubectl drain` succeeds.
@@ -1082,11 +1082,11 @@ This time `kubectl drain` succeeds.
Uncordon the second node to allow `zk-2` to be rescheduled.
```shell
-kubectl uncordon kubernetes-minion-group-ixsl
+kubectl uncordon kubernetes-node-ixsl
```
```
-node "kubernetes-minion-group-ixsl" uncordoned
+node "kubernetes-node-ixsl" uncordoned
```
You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure that your services remain available during maintenance. If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance, services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled.
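As a minimal sketch of that pairing (the PDB name and the `app=zk` selector are assumptions; match them to your manifests), a budget like this keeps a quorum of the three-member ensemble up during drains:
```shell
# Require at least two of the three zk Pods to stay up during voluntary evictions
kubectl create poddisruptionbudget zk-pdb --selector=app=zk --min-available=2
```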