Merge branches 'release-1.6' and 'master' of https://github.com/kubernetes/kubernetes.github.io into release-1.6

* 'release-1.6' of https://github.com/kubernetes/kubernetes.github.io:

* 'master' of https://github.com/kubernetes/kubernetes.github.io: (23 commits)
  Apply changes from PR #2787
  rephrase the sentence to make expression clear (#2789)
  update index.md (#2788)
  Apply changes from PR #2784 (#2950)
  The link URL of  [kube-controller-manager] is wrong
  Added pod name in run command
  Fix grammar in docs/admin/daemon.md
  update index.md (#2755)
  Update garbage-collection.md (#2732)
  Fix typo (#2842)
  Fix typo
  Fix the typos
  Apply typo fixes from #2791 (#2949)
  Fix typo in kubectl_completion.md
  fix typeo (#2856)
  Use kubectl config current-context to simplify the instructions
  fix a typo in /docs/user-guide/configmap/index.md
  Fix monitor-node-health.md
  amend monitor-node-health.md
  Update manage-compute-resources-container.md
  ...

# Conflicts:
#	docs/tools/index.md
pull/2947/head
Andrew Chen 2017-03-21 19:04:37 -07:00
commit 7215af010f
8 changed files with 20 additions and 23 deletions

View File

@ -21,8 +21,8 @@ Some typical uses of a DaemonSet are:
https://github.com/prometheus/node_exporter), `collectd`, New Relic agent, or Ganglia `gmond`.
In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon.
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
different flags and/or different memory and cpu requests for different hardware types.
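For example, one such variant might look like the sketch below (illustrative only; the `hardware-type: highmem` label, the node-exporter image, and the request values are assumptions, not part of this page):

```shell
# Sketch: a DaemonSet variant for nodes labeled hardware-type=highmem.
# Label, image, and resource values are illustrative assumptions.
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter-highmem
spec:
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      nodeSelector:
        hardware-type: highmem
      containers:
      - name: node-exporter
        image: prom/node-exporter
        resources:
          requests:
            cpu: 200m
            memory: 200Mi
EOF
```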
## Writing a DaemonSet Spec

View File

@ -51,7 +51,7 @@ Default is -1, which means there is no global limit.
Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
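For illustration, these thresholds are passed to the kubelet as command-line flags; the values below are assumptions for the example, not recommendations:

```shell
# Illustrative kubelet garbage-collection flags; values are assumptions, not recommendations.
kubelet \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100
```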

View File

@ -61,7 +61,7 @@ The node condition is represented as a JSON object. For example, the following r
]
```
If the Status of the Ready condition is "Unknown" or "False" for longer than the `pod-eviction-timeout`, an argument passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/), all of the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on it. The decision to delete the pods cannot be communicated to the kubelet until it re-establishes communication with the apiserver. In the meantime, the pods which are scheduled for deletion may continue to run on the partitioned node.
In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/user-guide/pods/#force-deletion-of-pods) these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is confirmed that they have stopped running in the cluster. One can see these pods which may be running on an unreachable node as being in the "Terminating" or "Unknown" states. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from Kubernetes causes all the Pod objects running on it to be deleted from the apiserver, freeing up their names.
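For illustration, the timeout is a flag on the controller manager; the sketch below simply restates the five-minute default (setting it explicitly like this is an assumption, not something this page requires):

```shell
# Illustrative only: the eviction timeout is a kube-controller-manager flag.
# 5m0s restates the documented five-minute default; all other flags are omitted here.
kube-controller-manager --pod-eviction-timeout=5m0s
```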

View File

@ -40,7 +40,7 @@ Here's an example `.yaml` file that shows the required fields and object spec fo
{% include code.html language="yaml" file="nginx-deployment.yaml" ghlink="/docs/concepts/overview/working-with-objects/nginx-deployment.yaml" %}
One way to create a Deployment using a `.yaml` file like the one above is to use the [`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) command in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example:
```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record

View File

@ -21,7 +21,7 @@ This guide uses a simple nginx server to demonstrate proof of concept. The same
## Exposing pods to the cluster
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
{% include code.html language="yaml" file="run-my-nginx.yaml" ghlink="/docs/concepts/services-networking/run-my-nginx.yaml" %}
@ -187,7 +187,7 @@ Now modify your nginx replicas to start an https server using the certificate in
Noteworthy points about the nginx-secure-app manifest:
- It contains both Deployment and Service specification in the same file.
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and the nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
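A quick way to confirm that last point is to list the mounted secret inside a running container (the pod name is a placeholder; substitute one of your nginx pods):

```shell
# Replace <nginx-pod-name> with one of your running nginx pods.
# The secret volume should show the certificate and key files under /etc/nginx/ssl.
kubectl exec <nginx-pod-name> -- ls /etc/nginx/ssl
```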
@ -207,7 +207,7 @@ node $ curl -k https://10.244.3.5
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
{% include code.html language="yaml" file="curlpod.yaml" ghlink="/docs/concepts/services-networking/curlpod.yaml" %}
@ -260,7 +260,7 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
<h1>Welcome to nginx!</h1>
```
Let's now recreate the Service to use a cloud load balancer; just change the `Type` of the `my-nginx` Service from `NodePort` to `LoadBalancer`:
```shell
$ kubectl edit svc my-nginx

View File

@ -13,7 +13,7 @@ This page shows how to use an HTTP proxy to access the Kubernetes API.
* If you do not already have an application running in your cluster, start
a Hello world application by entering this command:
kubectl run node-hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080
{% endcapture %}
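With a workload running, a minimal sketch of the proxy step itself looks like this (the local port is an arbitrary choice for the example):

```shell
# Start a local proxy to the Kubernetes API, then query the API through it.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/
```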

View File

@ -10,23 +10,23 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes
Kubernetes contains the following built-in tools:
##### Kubectl
[`kubectl`](/docs/user-guide/kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager.
##### Kubeadm
[`kubeadm`](/docs/getting-started-guides/kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha).
##### Kubefed
[`kubefed`](/docs/admin/federation/kubefed/) is the command line tool
to help you administrate your federated clusters.
##### Dashboard
[Dashboard](/docs/user-guide/ui/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resources itself.
#### Third-Party Tools
@ -37,7 +37,7 @@ Kubernetes supports various third-party tools. These include, but are not limite
[Kubernetes Helm](https://github.com/kubernetes/helm) is a tool for managing packages of pre-configured
Kubernetes resources, aka Kubernetes charts.
Use Helm to:
* Find and use popular software packaged as Kubernetes charts
* Share your own applications as Kubernetes charts
@ -45,10 +45,9 @@ Use Helm to:
* Intelligently manage your Kubernetes manifest files
* Manage releases of Helm packages
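As a rough sketch of that workflow (the chart name is an illustrative assumption, and the commands reflect Helm v2 as current at the time of writing):

```shell
# Typical Helm v2 workflow; the mysql chart is only an example.
helm init                  # install Helm's server-side component (Tiller) into the cluster
helm search mysql          # find charts matching a keyword
helm install stable/mysql  # install a chart as a new release
helm list                  # list releases running in the cluster
```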
##### Kompose
[Kompose](https://github.com/kubernetes-incubator/kompose) is a tool to help Docker Compose users move to Kubernetes.
Use Kompose to:

View File

@ -11,7 +11,7 @@ run on particular nodes. There are several ways to do this, and they all use
[label selectors](/docs/user-guide/labels/) to make the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
(e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.)
but there are some circumstances where you may want more control over the node a pod lands on, e.g. to ensure
that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
services that communicate a lot into the same availability zone.
@ -36,9 +36,7 @@ This example assumes that you have a basic understanding of Kubernetes pods and
### Step One: Attach label to the node
Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to, and then run `kubectl label nodes <node-name> <label-key>=<label-value>` to add a label to the node you've chosen. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/kubernetes/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node.
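For illustration, a minimal pod spec that consumes the label from the example above as a `nodeSelector` (the pod name and image are placeholder choices) might look like this:

```shell
# Sketch: schedule a pod only onto nodes carrying the disktype=ssd label added above.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd
EOF
```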