Merge branch 'master' into fix-commands
commit 85cf7ae2ff
@@ -54,9 +54,18 @@ toc:
- docs/user-guide/kubectl/kubectl_apply.md
- docs/user-guide/kubectl/kubectl_attach.md
- docs/user-guide/kubectl/kubectl_autoscale.md
- docs/user-guide/kubectl/kubectl_certificate.md
- docs/user-guide/kubectl/kubectl_certificate_approve.md
- docs/user-guide/kubectl/kubectl_certificate_deny.md
- docs/user-guide/kubectl/kubectl_cluster-info.md
- docs/user-guide/kubectl/kubectl_cluster-info_dump.md
- docs/user-guide/kubectl/kubectl_completion.md
- docs/user-guide/kubectl/kubectl_config.md
- docs/user-guide/kubectl/kubectl_config_current-context.md
- docs/user-guide/kubectl/kubectl_config_delete-cluster.md
- docs/user-guide/kubectl/kubectl_config_delete-context.md
- docs/user-guide/kubectl/kubectl_config_get-clusters.md
- docs/user-guide/kubectl/kubectl_config_get-contexts.md
- docs/user-guide/kubectl/kubectl_config_set-cluster.md
- docs/user-guide/kubectl/kubectl_config_set-context.md
- docs/user-guide/kubectl/kubectl_config_set-credentials.md
@@ -66,13 +75,20 @@ toc:
- docs/user-guide/kubectl/kubectl_config_view.md
- docs/user-guide/kubectl/kubectl_convert.md
- docs/user-guide/kubectl/kubectl_cordon.md
- docs/user-guide/kubectl/kubectl_cp.md
- docs/user-guide/kubectl/kubectl_create.md
- docs/user-guide/kubectl/kubectl_create_configmap.md
- docs/user-guide/kubectl/kubectl_create_deployment.md
- docs/user-guide/kubectl/kubectl_create_namespace.md
- docs/user-guide/kubectl/kubectl_create_quota.md
- docs/user-guide/kubectl/kubectl_create_secret_docker-registry.md
- docs/user-guide/kubectl/kubectl_create_secret.md
- docs/user-guide/kubectl/kubectl_create_secret_generic.md
- docs/user-guide/kubectl/kubectl_create_secret_tls.md
- docs/user-guide/kubectl/kubectl_create_serviceaccount.md
- docs/user-guide/kubectl/kubectl_create_service_clusterip.md
- docs/user-guide/kubectl/kubectl_create_service_loadbalancer.md
- docs/user-guide/kubectl/kubectl_create_service_nodeport.md
- docs/user-guide/kubectl/kubectl_delete.md
- docs/user-guide/kubectl/kubectl_describe.md
- docs/user-guide/kubectl/kubectl_drain.md
@@ -83,6 +99,7 @@ toc:
- docs/user-guide/kubectl/kubectl_get.md
- docs/user-guide/kubectl/kubectl_label.md
- docs/user-guide/kubectl/kubectl_logs.md
- docs/user-guide/kubectl/kubectl_options.md
- docs/user-guide/kubectl/kubectl_patch.md
- docs/user-guide/kubectl/kubectl_port-forward.md
- docs/user-guide/kubectl/kubectl_proxy.md
@@ -92,9 +109,17 @@ toc:
- docs/user-guide/kubectl/kubectl_rollout_history.md
- docs/user-guide/kubectl/kubectl_rollout_pause.md
- docs/user-guide/kubectl/kubectl_rollout_resume.md
- docs/user-guide/kubectl/kubectl_rollout_status.md
- docs/user-guide/kubectl/kubectl_rollout_undo.md
- docs/user-guide/kubectl/kubectl_run.md
- docs/user-guide/kubectl/kubectl_scale.md
- docs/user-guide/kubectl/kubectl_set.md
- docs/user-guide/kubectl/kubectl_set_image.md
- docs/user-guide/kubectl/kubectl_set_resources.md
- docs/user-guide/kubectl/kubectl_taint.md
- docs/user-guide/kubectl/kubectl_top.md
- docs/user-guide/kubectl/kubectl_top_node.md
- docs/user-guide/kubectl/kubectl_top_pod.md
- docs/user-guide/kubectl/kubectl_uncordon.md
- docs/user-guide/kubectl/kubectl_version.md
- title: Superseded and Deprecated Commands
@@ -44,11 +44,13 @@ down its physical machine or, if running on a cloud platform, deleting its
virtual machine.

First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with

```shell
kubectl get nodes
```

Next, tell Kubernetes to drain the node:

```shell
kubectl drain <node name>
```

@@ -56,6 +58,7 @@ kubectl drain <node name>
Once it returns (without giving an error), you can power down the node
(or equivalently, if on a cloud platform, delete the virtual machine backing the node).
If you leave the node in the cluster during the maintenance operation, you need to run

```shell
kubectl uncordon <node name>
```
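A quick optional check (a sketch; `<node name>` as above) is to read the node's `unschedulable` flag: it prints `true` while the node is cordoned and nothing once the node is schedulable again.

```shell
# Prints "true" for a cordoned node; empty output means the node is schedulable.
kubectl get node <node name> -o jsonpath='{.spec.unschedulable}'
```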
@@ -43,7 +43,7 @@ $ kubectl get pods -l run=my-nginx -o yaml | grep podIP
podIP: 10.244.2.5
```

You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interfaces, but the need for this is radically diminished because of the networking model.
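For example, from a shell on one of your nodes (a minimal sketch; the first IP comes from the `grep podIP` output above, the second stands in for the other nginx pod):

```shell
# Run on any node in the cluster; pod IPs are routable cluster-wide.
$ curl http://10.244.2.5:80
$ curl http://10.244.3.4:80
```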
You can read more about [how we achieve this](/docs/admin/networking/#how-to-achieve-this) if you're curious.
@@ -7,8 +7,8 @@ title: Debugging Services
---

An issue that comes up rather frequently for new installations of Kubernetes is
that `Services` are not working properly. You've run all your `Pod`s and
`Deployment`s, but you get no response when you try to access them.
that `Services` are not working properly. You've run all your `Pods` and
`Deployments`, but you get no response when you try to access them.
This document will hopefully help you to figure out what's going wrong.

* TOC
@@ -17,7 +17,7 @@ This document will hopefully help you to figure out what's going wrong.
## Conventions

Throughout this doc you will see various commands that you can run. Some
commands need to be run within `Pod`, others on a Kubernetes `Node`, and others
commands need to be run within a `Pod`, others on a Kubernetes `Node`, and others
can run anywhere you have `kubectl` and credentials for the cluster. To make it
clear what is expected, this document will use the following conventions.
@@ -71,7 +71,7 @@ $ kubectl exec -ti <POD-NAME> -c <CONTAINER-NAME> sh

## Setup

For the purposes of this walk-through, let's run some `Pod`s. Since you're
For the purposes of this walk-through, let's run some `Pods`. Since you're
probably debugging your own `Service` you can substitute your own details, or you
can follow along and get a second data point.
@@ -109,7 +109,7 @@ spec:
protocol: TCP
```

Confirm your `Pod`s are running:
Confirm your `Pods` are running:

```shell
$ kubectl get pods -l app=hostnames
@@ -196,7 +196,7 @@ Address: 10.0.1.175
```

If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s, try a namespace-qualified name:
`Namespaces`, try a namespace-qualified name:

```shell
u@pod$ nslookup hostnames.default
@@ -207,7 +207,7 @@ Name: hostnames.default
Address: 10.0.1.175
```

If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
If this works, you'll need to ensure that `Pods` and `Services` run in the same
`Namespace`. If this still fails, try a fully-qualified name:

```shell
@@ -326,18 +326,18 @@ $ kubectl get service hostnames -o json
```

Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
(9376) or a string "9376"? If you meant it to be a named port, do your `Pod`s
correct for your `Pods`? If you meant it to be a numeric port, is it a number
(9376) or a string "9376"? If you meant it to be a named port, do your `Pods`
expose a port with the same name? Is the port's `protocol` the same as the
`Pod`'s?
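One quick way to put the two side by side (a sketch for the single-port `hostnames` example used here; adjust the JSONPath indexes and label selector for your own objects):

```shell
# The Service's targetPort and the port the Pods actually declare should line up.
$ kubectl get service hostnames -o jsonpath='{.spec.ports[0].targetPort}'
$ kubectl get pods -l app=hostnames -o jsonpath='{.items[0].spec.containers[0].ports[0].containerPort}'
```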
## Does the Service have any Endpoints?

If you got this far, we assume that you have confirmed that your `Service`
exists and resolves by DNS. Now let's check that the `Pod`s you ran are
exists and is resolved by DNS. Now let's check that the `Pods` you ran are
actually being selected by the `Service`.

Earlier we saw that the `Pod`s were running. We can re-check that:
Earlier we saw that the `Pods` were running. We can re-check that:

```shell
$ kubectl get pods -l app=hostnames
@@ -347,7 +347,7 @@ hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
```

The "AGE" column says that these `Pod`s are about an hour old, which implies that
The "AGE" column says that these `Pods` are about an hour old, which implies that
they are running fine and not crashing.

The `-l app=hostnames` argument is a label selector - just like our `Service`
@@ -360,16 +360,16 @@ NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```

This confirms that the control loop has found the correct `Pod`s for your
This confirms that the control loop has found the correct `Pods` for your
`Service`. If the `hostnames` row is blank, you should check that the
`spec.selector` field of your `Service` actually selects for `metadata.labels`
values on your `Pod`s.
values on your `Pods`.
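A quick way to compare the two (a sketch using the `hostnames` example; substitute your own `Service` name and label selector):

```shell
# The selector printed here should match labels shown on the Pods below.
$ kubectl get service hostnames -o jsonpath='{.spec.selector}'
$ kubectl get pods -l app=hostnames --show-labels
```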
## Are the Pods working?

At this point, we know that your `Service` exists and has selected your `Pod`s.
Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.
At this point, we know that your `Service` exists and has selected your `Pods`.
Let's check that the `Pods` are actually working - we can bypass the `Service`
mechanism and go straight to the `Pods`.

```shell
u@pod$ wget -qO- 10.244.0.5:9376
@@ -384,19 +384,19 @@ hostnames-yp2kp

We expect each `Pod` in the `Endpoints` list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
`Pod`s), you should investigate what's happening there. You might find
`kubectl logs` to be useful or `kubectl exec` directly to your `Pod`s and check
`Pods`), you should investigate what's happening there. You might find
`kubectl logs` to be useful or `kubectl exec` directly to your `Pods` and check
service from there.
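For example, following the conventions above (generic checks, not specific to this walkthrough):

```shell
# Inspect a Pod's logs, or open a shell inside it to poke at the service directly.
$ kubectl logs <POD-NAME>
$ kubectl exec -ti <POD-NAME> -c <CONTAINER-NAME> sh
```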
## Is the kube-proxy working?

If you get here, your `Service` is running, has `Endpoints`, and your `Pod`s
If you get here, your `Service` is running, has `Endpoints`, and your `Pods`
are actually serving. At this point, the whole `Service` proxy mechanism is
suspect. Let's confirm it, piece by piece.

### Is kube-proxy running?

Confirm that `kube-proxy` is running on your `Node`s. You should get something
Confirm that `kube-proxy` is running on your `Nodes`. You should get something
like the below:

```shell
@@ -429,7 +429,7 @@ should double-check your `Node` configuration and installation steps.
### Is kube-proxy writing iptables rules?

One of the main responsibilities of `kube-proxy` is to write the `iptables`
rules which implement `Service`s. Let's check that those rules are getting
rules which implement `Services`. Let's check that those rules are getting
written.
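A coarse first check is to look for any rules that mention the `Service` at all (run on a `Node`, as root; a sketch assuming the `hostnames` Service from this walkthrough, with an illustrative `u@node$` prompt):

```shell
# No output at all suggests kube-proxy has not programmed rules for this Service.
u@node$ iptables-save | grep hostnames
```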
The kube-proxy can run in either "userspace" mode or "iptables" mode.
@@ -620,7 +620,7 @@ UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
## Seek help

If you get this far, something very strange is happening. Your `Service` is
running, has `Endpoints`, and your `Pod`s are actually serving. You have DNS
running, has `Endpoints`, and your `Pods` are actually serving. You have DNS
working, `iptables` rules installed, and `kube-proxy` does not seem to be
misbehaving. And yet your `Service` is not working. You should probably let
us know, so we can help investigate!
@@ -11,13 +11,13 @@ Drain node in preparation for maintenance

Drain node in preparation for maintenance.

The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviction (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force.
The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports [eviction](http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force.

'drain' waits for graceful termination. You should not operate on the machine until the command completes.

When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.

! http://kubernetes.io/images/docs/kubectl_drain.svg
![Workflow](http://kubernetes.io/images/docs/kubectl_drain.svg)

```
kubectl drain NODE
@@ -1,9 +0,0 @@
---
---
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_top-node.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -1,9 +0,0 @@
---
---
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_top-pod.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -80,7 +80,7 @@ service "my-nginx-svc" deleted
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:

```shell
$ kubectl get $(k create -f docs/user-guide/nginx/ -o name | grep service)
$ kubectl get $(kubectl create -f docs/user-guide/nginx/ -o name | grep service)
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-svc 10.0.0.208 80/TCP 0s
```
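The same chaining works with `xargs` (a sketch under the same assumptions as the example above, run after the resources already exist):

```shell
# Emit resource names for the manifests, keep only the service, then get it.
$ kubectl get -f docs/user-guide/nginx/ -o name | grep service | xargs kubectl get
```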
@@ -14,7 +14,7 @@ From this point onwards, it is assumed that `kubectl` is on your path from one o

The [`kubectl run`](/docs/user-guide/kubectl/kubectl_run) line below will create a [`Deployment`](/docs/user-guide/deployments) named `my-nginx`, and
two [nginx](https://registry.hub.docker.com/_/nginx/) [pods](/docs/user-guide/pods) listening on port 80. The `Deployment` will ensure that there are
always exactly two pod running as specified in its spec.
always exactly two pods running as specified in its spec.

```shell
kubectl run my-nginx --image=nginx --replicas=2 --port=80
@@ -106,7 +106,7 @@ Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) syst
* Kubernetes does not provide nor mandate a comprehensive application configuration language/system (e.g., [jsonnet](https://github.com/google/jsonnet)).
* Kubernetes does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.

On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Gondor](https://gondor.io/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes.
On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Eldarion](http://eldarion.cloud/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes.

Since Kubernetes operates at the application level rather than at just the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, monitoring, etc. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable.