Merge branch 'master' into 4549-complete-sentence

pull/4576/head
Andrew Chen 2017-08-01 22:55:07 -07:00 committed by GitHub
commit 97b86a6fba
9 changed files with 39 additions and 31 deletions


@@ -14,7 +14,7 @@ This is a lightweight version of a broader Cluster Federation feature (previousl
nickname ["Ubernetes"](https://git.k8s.io/community/contributors/design-proposals/federation.md)).
Full Cluster Federation allows combining separate
Kubernetes clusters running in different regions or cloud providers
-(or on-premise data centers). However, many
+(or on-premises data centers). However, many
users simply want to run a more available Kubernetes cluster in multiple zones
of their single cloud provider, and this is what the multizone support in 1.2 allows
(this previously went by the nickname "Ubernetes Lite").


@@ -7,14 +7,13 @@ title: Deployments
{% capture overview %}
-A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
-[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) (the next-generation ReplicationController).
-You describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to
-create new ReplicaSets, or remove existing Deployments and adopt all their resources with new Deployments.
+A _Deployment_ controller provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
+[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
-**Note:** You should not manage ReplicaSets owned by a Deployment. If you do so, you are racing with the Deployment
-controller! All the use cases should be covered by manipulating the Deployment object. Consider opening
-an issue in the main Kubernetes repository if your use case is not covered below.
+You describe a _desired state_ in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
+**Note:** You should not manage ReplicaSets owned by a Deployment. All the use cases should be covered by manipulating the Deployment object. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
{: .note}
{% endcapture %}
@@ -23,7 +22,7 @@ an issue in the main Kubernetes repository if your use case is not covered below
## Use Case
-A typical use case is:
+The following are typical use cases for Deployments:
* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
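The use cases in this hunk all start from a Deployment object. As a point of reference, a minimal manifest of the kind this page works with might look like the following; this is a sketch, using the `apps/v1beta1` apiVersion current for these docs and the illustrative name `nginx-deployment`:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Applying this manifest creates the Deployment, which in turn creates a ReplicaSet that brings up the three nginx Pods.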
@@ -106,10 +105,12 @@ The created ReplicaSet ensures that there are three nginx Pods at all times.
StatefulSets, etc.). Kubernetes doesn't stop you from overlapping, and if multiple
controllers have overlapping selectors, those controllers may fight with each other and won't behave
correctly.
+{: .note}
### Pod-template-hash label
-**Note:** This label is not meant to be changed by users!
+**Note:** Do not change this label.
+{: .note}
Note the pod-template-hash label in the example output in the pod labels above. This label is added by the
Deployment controller to every ReplicaSet that a Deployment creates or adopts. Its purpose is to make sure that child
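The pod-template-hash behavior described above can be illustrated with a small sketch. This is not the Deployment controller's actual hash function (the controller hashes the serialized PodTemplateSpec internally); the sketch only demonstrates the property the label relies on: a deterministic hash of the template gives each revision a stable value, so child ReplicaSets and their Pods can be selected unambiguously.

```python
import hashlib
import json

def template_hash(pod_template: dict) -> str:
    """Illustrative only: derive a stable label-style value from a pod template.

    Serializing with sorted keys means logically equal templates always
    produce the same hash, while any change to the template (such as a new
    container image) produces a different one.
    """
    serialized = json.dumps(pod_template, sort_keys=True).encode("utf-8")
    return hashlib.sha1(serialized).hexdigest()[:10]

# The same template always hashes to the same value...
template = {"containers": [{"name": "nginx", "image": "nginx:1.7.9"}]}
assert template_hash(template) == template_hash(dict(template))

# ...and changing the image changes the hash, mirroring how a template
# change yields a new ReplicaSet with a new pod-template-hash label.
changed = {"containers": [{"name": "nginx", "image": "nginx:1.9.1"}]}
assert template_hash(template) != template_hash(changed)
```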
@@ -121,6 +122,7 @@ and in any existing Pods that the ReplicaSet may have.
**Note:** A Deployment's rollout is triggered if and only if the Deployment's pod template (that is, `.spec.template`)
is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
+{: .note}
Suppose that we now want to update the nginx Pods to use the `nginx:1.9.1` image
instead of the `nginx:1.7.9` image.
@@ -268,6 +270,7 @@ for example if you update the labels or container images of the template. Other
do not create a Deployment revision, so that we can facilitate simultaneous manual- or auto-scaling.
This means that when you roll back to an earlier revision, only the Deployment's pod template part is
rolled back.
+{: .note}
Suppose that we made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
@@ -313,6 +316,7 @@ ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` speci
Kubernetes by default sets the value to 1 and spec.replicas to 1 so if you haven't cared about setting those
parameters, your Deployment can have 100% unavailability by default! This will be fixed in Kubernetes in a future
version.
+{: .note}
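The rollingUpdate parameters that the note warns about live under the Deployment's `strategy` field. A hedged sketch of how they might be set (field names are the standard ones; the values are illustrative):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod below the desired count during a rollout
      maxSurge: 1         # at most one Pod above the desired count
```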
```shell
$ kubectl describe deployment
@@ -579,6 +583,7 @@ nginx-3926361531 3 3 3 28s
```
**Note:** You cannot rollback a paused Deployment until you resume it.
+{: .note}
## Deployment status
@@ -652,10 +657,12 @@ See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/d
**Note:** Kubernetes will take no action on a stalled Deployment other than to report a status condition with
`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
example, rollback the Deployment to its previous version.
+{: .note}
**Note:** If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can
safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the
deadline.
+{: .note}
You may experience transient errors with your Deployments, either due to a low timeout that you have set or
due to any other kind of error that can be treated as transient. For example, let's suppose you have
@@ -758,7 +765,7 @@ all revision history will be kept. In a future version, it will default to switc
**Note:** Explicitly setting this field to 0, will result in cleaning up all the history of your Deployment
thus that Deployment will not be able to roll back.
{: .note}
## Use Cases
@@ -809,6 +816,7 @@ Pods with `.spec.template` if the number of Pods is less than the desired number
**Note:** You should not create other pods whose labels match this selector, either directly, by creating
another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you
do so, the first Deployment thinks that it created these other pods. Kubernetes does not stop you from doing this.
+{: .note}
If you have multiple controllers that have overlapping selectors, the controllers will fight with each
other and won't behave correctly.


@@ -1,5 +1,5 @@
---
-title: Installing Kubernetes On-premise/Cloud Providers with Kubespray
+title: Installing Kubernetes On-premises/Cloud Providers with Kubespray
---
## Overview


@@ -20,8 +20,8 @@ Other/newer ways to set up a Kubernetes cluster include:
* [Minikube](/docs/getting-started-guides/minikube/): Install a single-node Kubernetes cluster on your local machine for development and testing.
* [Installing Kubernetes on AWS with kops](/docs/getting-started-guides/kops/): Bring up a complete Kubernetes cluster on Amazon Web Services, using a tool called `kops`.
* [Installing Kubernetes on Linux with kubeadm](/docs/getting-started-guides/kubeadm/) (Beta): Install a secure Kubernetes cluster on any pre-existing machines running Linux, using the built-in `kubeadm` tool.
-* [Installing Kubernetes On-premise/Cloud Providers with Kubespray](/docs/getting-started-guides/kubespray/): Deploy a Kubernetes cluster on-premises baremetal or hosted on cloud providers, with Ansible and `kubespray` tools.
-* [Installing Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/): Deploy a Kubernetes cluster on-premise, baremetal, cloud providers, or localhost with Charms and `conjure-up`.
+* [Installing Kubernetes On-premises/Cloud Providers with Kubespray](/docs/getting-started-guides/kubespray/): Deploy a Kubernetes cluster on-premises baremetal or hosted on cloud providers, with Ansible and `kubespray` tools.
+* [Installing Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/): Deploy a Kubernetes cluster on-premises, baremetal, cloud providers, or localhost with Charms and `conjure-up`.
## Concepts, Tasks, and Tutorials


@@ -132,7 +132,7 @@ These solutions provide integration with third-party schedulers, resource manage
* Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters
* [DCOS](/docs/getting-started-guides/dcos)
* Community Edition DCOS uses AWS
-* Enterprise Edition DCOS supports cloud hosting, on-premise VMs, and bare metal
+* Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal
# Table of Solutions


@@ -22,7 +22,7 @@ unresponsive clusters).
Federated Ingress is released as an alpha feature, and supports Google Cloud Platform (GKE,
GCE and hybrid scenarios involving both) in Kubernetes v1.4. Work is under way to support other cloud
providers such as AWS, and other hybrid cloud scenarios (e.g. services
-spanning private on-premise as well as public cloud Kubernetes
+spanning private on-premises as well as public cloud Kubernetes
clusters).
You create Federated Ingresses in much that same way as traditional
@@ -303,7 +303,3 @@ Check that:
[Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation.md).
{% endcapture %}
{% include templates/task.md %}


@@ -35,13 +35,13 @@ annotations["federation.kubernetes.io/replica-set-preferences"] = preferences {
#
# In English, the policy asserts that resources in the "production" namespace
# that are not annotated with "criticality=low" MUST be placed on clusters
-# labelled with "on-premise=true".
+# labelled with "on-premises=true".
annotations["federation.alpha.kubernetes.io/cluster-selector"] = selector {
input.metadata.namespace = "production"
not input.metadata.annotations.criticality = "low"
json.marshal([{
"operator": "=",
-"key": "on-premise",
+"key": "on-premises",
"values": "[true]",
}], selector)
}
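For reference, the selector that this policy marshals would end up on a federated resource roughly as follows; this is an illustrative sketch of the annotation value, mirroring the `json.marshal` call above:

```yaml
metadata:
  annotations:
    federation.alpha.kubernetes.io/cluster-selector: '[{"operator": "=", "key": "on-premises", "values": "[true]"}]'
```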


@@ -29,18 +29,22 @@ Nvidia GPUs can be consumed via container level resource requirements using the
```yaml
apiVersion: v1
-kind: pod
-spec:
-  containers:
-    -
+kind: Pod
+metadata:
+  name: gpu-pod
+spec:
+  containers:
+    -
      name: gpu-container-1
-      resources:
-        limits:
+      image: gcr.io/google_containers/pause:2.0
+      resources:
+        limits:
          alpha.kubernetes.io/nvidia-gpu: 2 # requesting 2 GPUs
    -
      name: gpu-container-2
-      resources:
-        limits:
+      image: gcr.io/google_containers/pause:2.0
+      resources:
+        limits:
          alpha.kubernetes.io/nvidia-gpu: 3 # requesting 3 GPUs
```


@@ -50,7 +50,7 @@ cid: home
<div class="image-wrapper"><img src="images/suitcase.png"></div>
<div class="content">
<h4>Run Anywhere</h4>
-<p>Kubernetes is open source giving you the freedom to take advantage of on-premise, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.</p>
+<p>Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.</p>
</div>
</main>
</section>