From cb85b960e301d5561df89091772464d0d303657c Mon Sep 17 00:00:00 2001
From: Bilgin Ibryam
Date: Fri, 9 Dec 2016 00:16:46 +0000
Subject: [PATCH 01/63] Fixed wrong URL
---
docs/contribute/write-new-topic.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/contribute/write-new-topic.md b/docs/contribute/write-new-topic.md
index c34f3cfde1..d4037e724b 100644
--- a/docs/contribute/write-new-topic.md
+++ b/docs/contribute/write-new-topic.md
@@ -77,7 +77,7 @@ Depending page type, create an entry in one of these files:
{% capture whatsnext %}
* Learn about [using page templates](/docs/contribute/page-templates/).
* Learn about [staging your changes](/docs/contribute/stage-documentation-changes).
-* Learn about [creating a pull request](/docs/contribute/write-new-topic).
+* Learn about [creating a pull request](/docs/contribute/create-pull-request/).
{% endcapture %}
{% include templates/task.md %}
From 3ebef7eeceb9a93f19be6947961c2dd9738d9a5d Mon Sep 17 00:00:00 2001
From: Bilgin Ibryam
Date: Fri, 9 Dec 2016 08:49:26 +0000
Subject: [PATCH 02/63] Fixed typos
---
docs/admin/dns.md | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/docs/admin/dns.md b/docs/admin/dns.md
index d75acfa093..5e9d55f822 100644
--- a/docs/admin/dns.md
+++ b/docs/admin/dns.md
@@ -70,7 +70,7 @@ is no longer supported.
When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local`.
-For example, a pod with ip `1.2.3.4` in the namespace `default` with a dns name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`.
+For example, a pod with ip `1.2.3.4` in the namespace `default` with a DNS name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`.
#### A Records and hostname based on Pod's hostname and subdomain fields
@@ -171,7 +171,7 @@ busybox 1/1 Running 0
Once that pod is running, you can exec nslookup in that environment:
```
-kubectl exec busybox -- nslookup kubernetes.default
+kubectl exec -ti busybox -- nslookup kubernetes.default
```
You should see something like:
@@ -194,10 +194,10 @@ If the nslookup command fails, check the following:
Take a look inside the resolv.conf file. (See "Inheriting DNS from the node" and "Known issues" below for more information)
```
-cat /etc/resolv.conf
+kubectl exec busybox cat /etc/resolv.conf
```
-Verify that the search path and name server are set up like the following (note that seach path may vary for different cloud providers):
+Verify that the search path and name server are set up like the following (note that search path may vary for different cloud providers):
```
search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
@@ -210,7 +210,7 @@ options ndots:5
Errors such as the following indicate a problem with the kube-dns add-on or associated Services:
```
-$ kubectl exec busybox -- nslookup kubernetes.default
+$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
@@ -220,7 +220,7 @@ nslookup: can't resolve 'kubernetes.default'
or
```
-$ kubectl exec busybox -- nslookup kubernetes.default
+$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@@ -244,7 +244,7 @@ kube-dns-v19-ezo1y 3/3 Running 0
...
```
-If you see that no pod is running or that the pod has failed/completed, the dns add-on may not be deployed by default in your current environment and you will have to deploy it manually.
+If you see that no pod is running or that the pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.
#### Check for Errors in the DNS pod
@@ -258,7 +258,7 @@ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system
See if there are any suspicious logs. The letters W, E, and F at the beginning of an entry represent Warning, Error, and Failure. Please search for entries that have these as the logging level and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors.
-#### Is dns service up?
+#### Is DNS service up?
Verify that the DNS service is up by using the `kubectl get service` command.
@@ -277,7 +277,7 @@ kube-dns 10.0.0.10 53/UDP,53/TCP 1h
If you have created the service, or if it should have been created by default but does not appear, see this [debugging services page](http://kubernetes.io/docs/user-guide/debugging-services/) for more information.
-#### Are dns endpoints exposed?
+#### Are DNS endpoints exposed?
You can verify that dns endpoints are exposed by using the `kubectl get endpoints` command.
@@ -348,7 +348,7 @@ some of those settings will be lost. As a partial workaround, the node can run
`dnsmasq` which will provide more `nameserver` entries, but not more `search`
entries. You can also use kubelet's `--resolv-conf` flag.
-If you are using Alpine version 3.3 or earlier as your base image, dns may not
+If you are using Alpine version 3.3 or earlier as your base image, DNS may not
work properly owing to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
for more information.
From 08a199d6d17009a85edc3aaa226845d91079fb36 Mon Sep 17 00:00:00 2001
From: Eric Baum
Date: Fri, 9 Dec 2016 23:03:48 +0000
Subject: [PATCH 03/63] Update header
Updates header to remove the hamburger on desktop, sets a 100px margin, and
adds a 100px margin to the body.
---
_includes/head-header.html | 10 +++++-
_sass/_base.sass | 71 ++++++++++++++++++++++++++++++++++++++
_sass/_desktop.sass | 20 +++++++++--
images/search-icon.svg | 13 +++++++
js/script.js | 18 ++++++++++
5 files changed, 128 insertions(+), 4 deletions(-)
create mode 100644 images/search-icon.svg
diff --git a/_includes/head-header.html b/_includes/head-header.html
index 17a83fa31e..598f6e80fb 100644
--- a/_includes/head-header.html
+++ b/_includes/head-header.html
@@ -20,8 +20,16 @@
+
@@ -198,9 +197,8 @@ title: Production-Grade Container Orchestration
ga('create', 'UA-36037335-10', 'auto');
ga('send', 'pageview');
-
-
From e971f9cb627135e0a5b19740e856343eab705328 Mon Sep 17 00:00:00 2001
From: Ben Balter
Date: Tue, 13 Dec 2016 14:42:51 -0500
Subject: [PATCH 15/63] bump github pages gem to v109 to get workflow
improvements
---
Gemfile.lock | 35 +++++++++++++++++++++++++++--------
1 file changed, 27 insertions(+), 8 deletions(-)
diff --git a/Gemfile.lock b/Gemfile.lock
index c6f7150060..4d72be803d 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -7,7 +7,8 @@ GEM
minitest (~> 5.1)
thread_safe (~> 0.3, >= 0.3.4)
tzinfo (~> 1.1)
- addressable (2.4.0)
+ addressable (2.5.0)
+ public_suffix (~> 2.0, >= 2.0.2)
coffee-script (2.4.1)
coffee-script-source
execjs
@@ -21,22 +22,28 @@ GEM
ffi (1.9.14)
forwardable-extended (2.6.0)
gemoji (2.1.0)
- github-pages (104)
+ github-pages (109)
activesupport (= 4.2.7)
- github-pages-health-check (= 1.2.0)
- jekyll (= 3.3.0)
+ github-pages-health-check (= 1.3.0)
+ jekyll (= 3.3.1)
jekyll-avatar (= 0.4.2)
jekyll-coffeescript (= 1.0.1)
+ jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.8.0)
jekyll-gist (= 1.4.0)
jekyll-github-metadata (= 2.2.0)
jekyll-mentions (= 1.2.0)
+ jekyll-optional-front-matter (= 0.1.2)
jekyll-paginate (= 1.1.0)
+ jekyll-readme-index (= 0.0.3)
jekyll-redirect-from (= 0.11.0)
+ jekyll-relative-links (= 0.2.1)
jekyll-sass-converter (= 1.3.0)
jekyll-seo-tag (= 2.1.0)
jekyll-sitemap (= 0.12.0)
jekyll-swiss (= 0.4.0)
+ jekyll-theme-primer (= 0.1.1)
+ jekyll-titles-from-headings (= 0.1.2)
jemoji (= 0.7.0)
kramdown (= 1.11.1)
liquid (= 3.0.6)
@@ -45,17 +52,17 @@ GEM
minima (= 2.0.0)
rouge (= 1.11.1)
terminal-table (~> 1.4)
- github-pages-health-check (1.2.0)
+ github-pages-health-check (1.3.0)
addressable (~> 2.3)
net-dns (~> 0.8)
octokit (~> 4.0)
- public_suffix (~> 1.4)
+ public_suffix (~> 2.0)
typhoeus (~> 0.7)
html-pipeline (2.4.2)
activesupport (>= 2)
nokogiri (>= 1.4)
i18n (0.7.0)
- jekyll (3.3.0)
+ jekyll (3.3.1)
addressable (~> 2.4)
colorator (~> 1.0)
jekyll-sass-converter (~> 1.0)
@@ -70,6 +77,8 @@ GEM
jekyll (~> 3.0)
jekyll-coffeescript (1.0.1)
coffee-script (~> 2.2)
+ jekyll-default-layout (0.1.4)
+ jekyll (~> 3.0)
jekyll-feed (0.8.0)
jekyll (~> 3.3)
jekyll-gist (1.4.0)
@@ -81,9 +90,15 @@ GEM
activesupport (~> 4.0)
html-pipeline (~> 2.3)
jekyll (~> 3.0)
+ jekyll-optional-front-matter (0.1.2)
+ jekyll (~> 3.0)
jekyll-paginate (1.1.0)
+ jekyll-readme-index (0.0.3)
+ jekyll (~> 3.0)
jekyll-redirect-from (0.11.0)
jekyll (>= 2.0)
+ jekyll-relative-links (0.2.1)
+ jekyll (~> 3.3)
jekyll-sass-converter (1.3.0)
sass (~> 3.2)
jekyll-seo-tag (2.1.0)
@@ -91,6 +106,10 @@ GEM
jekyll-sitemap (0.12.0)
jekyll (~> 3.3)
jekyll-swiss (0.4.0)
+ jekyll-theme-primer (0.1.1)
+ jekyll (~> 3.3)
+ jekyll-titles-from-headings (0.1.2)
+ jekyll (~> 3.3)
jekyll-watch (1.5.0)
listen (~> 3.0, < 3.1)
jemoji (0.7.0)
@@ -116,7 +135,7 @@ GEM
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.14.0)
forwardable-extended (~> 2.6)
- public_suffix (1.5.3)
+ public_suffix (2.0.4)
rb-fsevent (0.9.8)
rb-inotify (0.9.7)
ffi (>= 0.5.0)
From 2fa89315ebfbc30853a2b0c379043bd3485729bb Mon Sep 17 00:00:00 2001
From: steveperry-53
Date: Thu, 8 Dec 2016 14:22:13 -0800
Subject: [PATCH 16/63] Write new Task: Distributing Credentials Securely.
---
_data/tasks.yml | 2 +
.../distribute-credentials-secure.md | 122 ++++++++++++++++++
docs/tasks/administer-cluster/secret-pod.yaml | 16 +++
docs/tasks/administer-cluster/secret.yaml | 7 +
4 files changed, 147 insertions(+)
create mode 100644 docs/tasks/administer-cluster/distribute-credentials-secure.md
create mode 100644 docs/tasks/administer-cluster/secret-pod.yaml
create mode 100644 docs/tasks/administer-cluster/secret.yaml
diff --git a/_data/tasks.yml b/_data/tasks.yml
index ee468f48d6..c720dfb09b 100644
--- a/_data/tasks.yml
+++ b/_data/tasks.yml
@@ -14,6 +14,8 @@ toc:
path: /docs/tasks/configure-pod-container/assign-cpu-ram-container/
- title: Configuring a Pod to Use a Volume for Storage
path: /docs/tasks/configure-pod-container/configure-volume-storage/
+ - title: Distributing Credentials Securely
+ path: /docs/tasks/administer-cluster/distribute-credentials-secure/
- title: Accessing Applications in a Cluster
section:
diff --git a/docs/tasks/administer-cluster/distribute-credentials-secure.md b/docs/tasks/administer-cluster/distribute-credentials-secure.md
new file mode 100644
index 0000000000..017b6aa459
--- /dev/null
+++ b/docs/tasks/administer-cluster/distribute-credentials-secure.md
@@ -0,0 +1,122 @@
+---
+---
+
+{% capture overview %}
+This page shows how to create a Secret and a Pod that has access to the Secret.
+{% endcapture %}
+
+{% capture prerequisites %}
+
+{% include task-tutorial-prereqs.md %}
+
+{% endcapture %}
+
+{% capture steps %}
+
+### Converting your secret data to a base-64 representation
+
+Suppose you want to have two pieces of secret data: a username `my-app` and a password
+`39528$vdg7Jb`. First, use [Base64 encoding](https://www.base64encode.org/) to
+convert your username and password to a base-64 representation. Here's a Linux
+example:
+
+ echo 'my-app' | base64
+ echo '39528$vdg7Jb' | base64
+
+The output shows that the base-64 representation of your username is `bXktYXBwCg==`,
+and the base-64 representation of your password is `Mzk1MjgkdmRnN0piCg==`.
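+
+To sanity-check an encoding, you can reverse it with `base64 --decode` (shown
+here for Linux; the exact flag may differ on other platforms). Note that `echo`
+appends a newline, which is why both encoded values end in `Cg==`:
+
+    echo 'bXktYXBwCg==' | base64 --decode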
+
+### Creating a Secret
+
+Here is a configuration file you can use to create a Secret that holds your
+username and password:
+
+{% include code.html language="yaml" file="secret.yaml" ghlink="/docs/tasks/administer-cluster/secret.yaml" %}
+
+1. Create the Secret
+
+ kubectl create -f http://k8s.io/docs/tasks/administer-cluster/secret.yaml
+
+1. View information about the Secret:
+
+ kubectl get secret test-secret
+
+ Output:
+
+ NAME TYPE DATA AGE
+ test-secret Opaque 2 1m
+
+
+1. View more detailed information about the Secret:
+
+ kubectl describe secret test-secret
+
+ Output:
+
+ Name: test-secret
+ Namespace: default
+ Labels:
+ Annotations:
+
+ Type: Opaque
+
+ Data
+ ====
+ password: 13 bytes
+ username: 7 bytes
+
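+1. To see the stored values in their base-64 form, you can also get the full
+object (this uses standard `kubectl` output formatting, not anything specific
+to this task):
+
+    kubectl get secret test-secret -o yaml
+
+    The `data` field of the output contains the encoded `username` and
+    `password` from the previous steps.
+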
+### Creating a Pod that has access to the secret data
+
+Here is a configuration file you can use to create a Pod:
+
+{% include code.html language="yaml" file="secret-pod.yaml" ghlink="/docs/tasks/administer-cluster/secret-pod.yaml" %}
+
+1. Create the Pod:
+
+ kubectl create -f http://k8s.io/docs/tasks/administer-cluster/secret-pod.yaml
+
+1. Verify that your Pod is running:
+
+ kubectl get pods
+
+ Output:
+
+ NAME READY STATUS RESTARTS AGE
+ secret-test-pod 1/1 Running 0 42m
+
+
+1. Get a shell into the Container that is running in your Pod:
+
+ kubectl exec -it secret-test-pod -- /bin/bash
+
+1. In your shell, go to the directory where the secret data is exposed:
+
+ root@secret-test-pod:/# cd /etc/secret-volume
+
+1. In your shell, list the files in the `/etc/secret-volume` directory:
+
+ root@secret-test-pod:/etc/secret-volume# ls
+
+ The output shows two files, one for each piece of secret data:
+
+ password username
+
+1. In your shell, display the contents of the `username` and `password` files:
+
+ root@secret-test-pod:/etc/secret-volume# cat username password
+
+ The output is your username and password:
+
+ my-app
+ 39528$vdg7Jb
+
+{% endcapture %}
+
+{% capture whatsnext %}
+
+* Learn more about [secrets](/docs/user-guide/secrets/).
+* See [Secret](docs/api-reference/v1/definitions/#_v1_secret).
+
+{% endcapture %}
+
+{% include templates/task.md %}
diff --git a/docs/tasks/administer-cluster/secret-pod.yaml b/docs/tasks/administer-cluster/secret-pod.yaml
new file mode 100644
index 0000000000..abbd6cb1d5
--- /dev/null
+++ b/docs/tasks/administer-cluster/secret-pod.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-test-pod
+spec:
+ containers:
+ - name: test-container
+ image: nginx
+ volumeMounts:
+ # name must match the volume name below
+ - name: secret-volume
+ mountPath: /etc/secret-volume
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: test-secret
diff --git a/docs/tasks/administer-cluster/secret.yaml b/docs/tasks/administer-cluster/secret.yaml
new file mode 100644
index 0000000000..64627d638f
--- /dev/null
+++ b/docs/tasks/administer-cluster/secret.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: test-secret
+data:
+ username: bXktYXBwCg==
+ password: Mzk1MjgkdmRnN0piCg==
From c8d67ab1825cd074eba1e5bd66cf5562173b35b2 Mon Sep 17 00:00:00 2001
From: steveperry-53
Date: Fri, 9 Dec 2016 14:07:15 -0800
Subject: [PATCH 17/63] Addressed comments by pwittrock.
---
_data/tasks.yml | 3 +-
.../distribute-credentials-secure.md | 60 +++++++++++++++++--
.../secret-envars-pod.yaml | 19 ++++++
.../secret-pod.yaml | 1 +
.../secret.yaml | 0
docs/tasks/index.md | 1 +
6 files changed, 77 insertions(+), 7 deletions(-)
rename docs/tasks/{administer-cluster => configure-pod-container}/distribute-credentials-secure.md (58%)
create mode 100644 docs/tasks/configure-pod-container/secret-envars-pod.yaml
rename docs/tasks/{administer-cluster => configure-pod-container}/secret-pod.yaml (82%)
rename docs/tasks/{administer-cluster => configure-pod-container}/secret.yaml (100%)
diff --git a/_data/tasks.yml b/_data/tasks.yml
index c720dfb09b..277efa4886 100644
--- a/_data/tasks.yml
+++ b/_data/tasks.yml
@@ -15,7 +15,7 @@ toc:
- title: Configuring a Pod to Use a Volume for Storage
path: /docs/tasks/configure-pod-container/configure-volume-storage/
- title: Distributing Credentials Securely
- path: /docs/tasks/administer-cluster/distribute-credentials-secure/
+ path: /docs/tasks/configure-pod-container/distribute-credentials-secure/
- title: Accessing Applications in a Cluster
section:
@@ -36,6 +36,7 @@ toc:
section:
- title: Assigning Pods to Nodes
path: /docs/tasks/administer-cluster/assign-pods-nodes/
+
- title: Autoscaling the DNS Service in a Cluster
path: /docs/tasks/administer-cluster/dns-horizontal-autoscaling/
- title: Safely Draining a Node while Respecting Application SLOs
diff --git a/docs/tasks/administer-cluster/distribute-credentials-secure.md b/docs/tasks/configure-pod-container/distribute-credentials-secure.md
similarity index 58%
rename from docs/tasks/administer-cluster/distribute-credentials-secure.md
rename to docs/tasks/configure-pod-container/distribute-credentials-secure.md
index 017b6aa459..c2828315cb 100644
--- a/docs/tasks/administer-cluster/distribute-credentials-secure.md
+++ b/docs/tasks/configure-pod-container/distribute-credentials-secure.md
@@ -2,7 +2,8 @@
---
{% capture overview %}
-This page shows how to create a Secret and a Pod that has access to the Secret.
+This page shows how to securely inject sensitive data, such as passwords and
+encryption keys, into Pods.
{% endcapture %}
{% capture prerequisites %}
@@ -37,6 +38,11 @@ username and password:
kubectl create -f http://k8s.io/docs/tasks/administer-cluster/secret.yaml
+ **Note:** If you want to skip the Base64 encoding step, you can create a Secret
+ by using the `kubectl create secret` command:
+
+    kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
+
1. View information about the Secret:
kubectl get secret test-secret
@@ -65,7 +71,7 @@ username and password:
password: 13 bytes
username: 7 bytes
-### Creating a Pod that has access to the secret data
+### Creating a Pod that has access to the secret data through a Volume
Here is a configuration file you can use to create a Pod:
@@ -77,7 +83,7 @@ Here is a configuration file you can use to create a Pod:
1. Verify that your Pod is running:
- kubectl get pods
+ kubectl get pod secret-test-pod
Output:
@@ -89,7 +95,9 @@ Here is a configuration file you can use to create a Pod:
kubectl exec -it secret-test-pod -- /bin/bash
-1. In your shell, go to the directory where the secret data is exposed:
+1. The secret data is exposed to the Container through a Volume mounted under
+`/etc/secret-volume`. In your shell, go to the directory where the secret data
+is exposed:
root@secret-test-pod:/# cd /etc/secret-volume
@@ -110,12 +118,52 @@ Here is a configuration file you can use to create a Pod:
my-app
39528$vdg7Jb
+### Creating a Pod that has access to the secret data through environment variables
+
+Here is a configuration file you can use to create a Pod:
+
+{% include code.html language="yaml" file="secret-envars-pod.yaml" ghlink="/docs/tasks/configure-pod-container/secret-envars-pod.yaml" %}
+
+1. Create the Pod:
+
+    kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/secret-envars-pod.yaml
+
+1. Verify that your Pod is running:
+
+ kubectl get pod secret-envars-test-pod
+
+ Output:
+
+ NAME READY STATUS RESTARTS AGE
+ secret-envars-test-pod 1/1 Running 0 4m
+
+1. Get a shell into the Container that is running in your Pod:
+
+ kubectl exec -it secret-envars-test-pod -- /bin/bash
+
+1. In your shell, display the environment variables:
+
+ root@secret-envars-test-pod:/# printenv
+
+ The output includes your username and password:
+
+ ...
+ SECRET_USERNAME=my-app
+ ...
+ SECRET_PASSWORD=39528$vdg7Jb
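+
+1. In your shell, display one variable directly (the value shown assumes the
+Secret created earlier in this task):
+
+        root@secret-envars-test-pod:/# echo $SECRET_USERNAME
+
+    The output is your username:
+
+        my-app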
+
{% endcapture %}
{% capture whatsnext %}
-* Learn more about [secrets](/docs/user-guide/secrets/).
-* See [Secret](docs/api-reference/v1/definitions/#_v1_secret).
+* Learn more about [Secrets](/docs/user-guide/secrets/).
+* Learn about [Volumes](/docs/user-guide/volumes/).
+
+#### Reference
+
+* [Secret](/docs/api-reference/v1/definitions/#_v1_secret)
+* [Volume](/docs/api-reference/v1/definitions/#_v1_volume)
+* [Pod](/docs/api-reference/v1/definitions/#_v1_pod)
{% endcapture %}
diff --git a/docs/tasks/configure-pod-container/secret-envars-pod.yaml b/docs/tasks/configure-pod-container/secret-envars-pod.yaml
new file mode 100644
index 0000000000..1637c0eac3
--- /dev/null
+++ b/docs/tasks/configure-pod-container/secret-envars-pod.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-envars-test-pod
+spec:
+ containers:
+ - name: envars-test-container
+ image: nginx
+ env:
+ - name: SECRET_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: test-secret
+ key: username
+ - name: SECRET_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: test-secret
+ key: password
diff --git a/docs/tasks/administer-cluster/secret-pod.yaml b/docs/tasks/configure-pod-container/secret-pod.yaml
similarity index 82%
rename from docs/tasks/administer-cluster/secret-pod.yaml
rename to docs/tasks/configure-pod-container/secret-pod.yaml
index abbd6cb1d5..78633c477c 100644
--- a/docs/tasks/administer-cluster/secret-pod.yaml
+++ b/docs/tasks/configure-pod-container/secret-pod.yaml
@@ -10,6 +10,7 @@ spec:
# name must match the volume name below
- name: secret-volume
mountPath: /etc/secret-volume
+ # The secret data is exposed to Containers in the Pod through a Volume.
volumes:
- name: secret-volume
secret:
diff --git a/docs/tasks/administer-cluster/secret.yaml b/docs/tasks/configure-pod-container/secret.yaml
similarity index 100%
rename from docs/tasks/administer-cluster/secret.yaml
rename to docs/tasks/configure-pod-container/secret.yaml
diff --git a/docs/tasks/index.md b/docs/tasks/index.md
index 4daee756ca..6a2aaee6a4 100644
--- a/docs/tasks/index.md
+++ b/docs/tasks/index.md
@@ -10,6 +10,7 @@ single thing, typically by giving a short sequence of steps.
* [Defining Environment Variables for a Container](/docs/tasks/configure-pod-container/define-environment-variable-container/)
* [Defining a Command and Arguments for a Container](/docs/tasks/configure-pod-container/define-command-argument-container/)
* [Assigning CPU and RAM Resources to a Container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/)
+* [Distributing Credentials Securely](/docs/tasks/configure-pod-container/distribute-credentials-secure/)
#### Accessing Applications in a Cluster
From b43e72124ff656e54321d65ea18e3783c3800f82 Mon Sep 17 00:00:00 2001
From: Eric Baum
Date: Tue, 13 Dec 2016 21:37:53 +0000
Subject: [PATCH 18/63] Updates logo
---
images/nav_logo.svg | 111 ++++++++++++++++++++++++++++++++++++++++++-
images/nav_logo2.svg | 109 +++++++++++++++++++++++++++++++++++++++++-
2 files changed, 218 insertions(+), 2 deletions(-)
diff --git a/images/nav_logo.svg b/images/nav_logo.svg
index 666997a143..982c04f4aa 100644
--- a/images/nav_logo.svg
+++ b/images/nav_logo.svg
@@ -1 +1,110 @@
-
\ No newline at end of file
+
+
+
diff --git a/images/nav_logo2.svg b/images/nav_logo2.svg
index 1c88bd436a..92b8d19ac4 100644
--- a/images/nav_logo2.svg
+++ b/images/nav_logo2.svg
@@ -1 +1,108 @@
-
\ No newline at end of file
+
+
+
From 3921711fd90c17ffb98ba0b3909764a4c9d2b623 Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Tue, 13 Dec 2016 14:07:07 -0800
Subject: [PATCH 19/63] Add left nav for apps API group
---
_data/reference.yml | 7 +++++++
docs/api-reference/README.md | 1 +
docs/reference.md | 5 ++++-
3 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/_data/reference.yml b/_data/reference.yml
index ce0504eed8..6b3351c954 100644
--- a/_data/reference.yml
+++ b/_data/reference.yml
@@ -41,6 +41,13 @@ toc:
- title: Batch API Definitions
path: /docs/api-reference/batch/v1/definitions/
+- title: Apps API
+ section:
+ - title: Apps API Operations
+ path: /docs/api-reference/apps/v1beta1/operations/
+ - title: Apps API Definitions
+ path: /docs/api-reference/apps/v1beta1/definitions/
+
- title: Extensions API
section:
- title: Extensions API Operations
diff --git a/docs/api-reference/README.md b/docs/api-reference/README.md
index c0c1f3620d..a2fae5b001 100644
--- a/docs/api-reference/README.md
+++ b/docs/api-reference/README.md
@@ -8,6 +8,7 @@ Use the following reference docs to understand the kubernetes REST API for vario
* extensions/v1beta1: [operations](/docs/api-reference/extensions/v1beta1/operations.html), [model definitions](/docs/api-reference/extensions/v1beta1/definitions.html)
* batch/v1: [operations](/docs/api-reference/batch/v1/operations.html), [model definitions](/docs/api-reference/batch/v1/definitions.html)
* autoscaling/v1: [operations](/docs/api-reference/autoscaling/v1/operations.html), [model definitions](/docs/api-reference/autoscaling/v1/definitions.html)
+* apps/v1beta1: [operations](/docs/api-reference/apps/v1beta1/operations.html), [model definitions](/docs/api-reference/apps/v1beta1/definitions.html)
diff --git a/docs/reference.md b/docs/reference.md
index 88f35a74f4..dc1cd2f297 100644
--- a/docs/reference.md
+++ b/docs/reference.md
@@ -6,7 +6,10 @@ In the reference section, you can find reference documentation for Kubernetes AP
## API References
* [Kubernetes API](/docs/api/) - The core API for Kubernetes.
-* [Extensions API](/docs/api-reference/extensions/v1beta1/operations/) - Manages extensions resources such as Jobs, Ingress and HorizontalPodAutoscalers.
+* [Autoscaling API](/docs/api-reference/autoscaling/v1/operations/) - Manages autoscaling resources such as HorizontalPodAutoscalers.
+* [Batch API](/docs/api-reference/batch/v1/operations/) - Manages batch resources such as Jobs.
+* [Apps API](/docs/api-reference/apps/v1beta1/operations/) - Manages apps resources such as StatefulSets.
+* [Extensions API](/docs/api-reference/extensions/v1beta1/operations/) - Manages extensions resources such as Ingress, Deployments, and ReplicaSets.
## CLI References
From 93ffad35f27e17ecd0914e7e2031729790401114 Mon Sep 17 00:00:00 2001
From: "Madhusudan.C.S"
Date: Tue, 13 Dec 2016 15:21:12 -0800
Subject: [PATCH 20/63] Fix URL typo and whitespace.
---
_data/guides.yml | 2 +-
docs/admin/federation/kubefed.md | 13 ++++++-------
docs/tools/index.md | 6 ++++++
3 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/_data/guides.yml b/_data/guides.yml
index 583deaeedd..933a50fcbc 100644
--- a/_data/guides.yml
+++ b/_data/guides.yml
@@ -306,6 +306,6 @@ toc:
- title: Administering Federation
section:
- title: Using `kubefed`
- path: /docs/admin/federation/kubfed/
+ path: /docs/admin/federation/kubefed/
- title: Using `federation-up` and `deploy.sh`
path: /docs/admin/federation/
diff --git a/docs/admin/federation/kubefed.md b/docs/admin/federation/kubefed.md
index de40263ecb..52d83d3535 100644
--- a/docs/admin/federation/kubefed.md
+++ b/docs/admin/federation/kubefed.md
@@ -3,6 +3,10 @@ assignees:
- madhusudancs
---
+
+* TOC
+{:toc}
+
Kubernetes version 1.5 includes a new command line tool called
`kubefed` to help you administrate your federated clusters.
`kubefed` helps you to deploy a new Kubernetes cluster federation
@@ -14,11 +18,6 @@ using `kubefed`.
> Note: `kubefed` is an alpha feature in Kubernetes 1.5.
-
-* TOC
-{:toc}
-
-
## Prerequisites
This guide assumes that you have a running Kubernetes cluster. Please
@@ -61,8 +60,8 @@ The output should contain an entry corresponding to your host cluster,
similar to the following:
```
-CURRENT NAME CLUSTER AUTHINFO NAMESPACE
- gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1
+CURRENT NAME CLUSTER AUTHINFO NAMESPACE
+ gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1
```
diff --git a/docs/tools/index.md b/docs/tools/index.md
index 482df866b4..6b79c323da 100644
--- a/docs/tools/index.md
+++ b/docs/tools/index.md
@@ -13,6 +13,12 @@ assignees:
[`kubectl`](/docs/user-guide/kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager.
+### Kubefed
+
+[`kubefed`](/docs/admin/federation/kubefed/) is the command line tool
+that helps you administer your federated clusters.
+
+
### Dashboard
[Dashboard](/docs/user-guide/ui/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
From 47a75ca01181fccaeeb1e4baefb8e39e88e964de Mon Sep 17 00:00:00 2001
From: Eric Baum
Date: Wed, 14 Dec 2016 00:54:21 +0000
Subject: [PATCH 21/63] Minor header change
Change "Try Kubernetes" link point to /docs/tutorials/kubernetes-basics/
instead of "Hello Node"
Reduce font weight in links across the top.
---
_includes/head-header.html | 2 +-
_sass/_base.sass | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/_includes/head-header.html b/_includes/head-header.html
index 598f6e80fb..bb8d1e7f77 100644
--- a/_includes/head-header.html
+++ b/_includes/head-header.html
@@ -30,7 +30,7 @@
diff --git a/_sass/_base.sass b/_sass/_base.sass
index 7ef5103ff8..27d19a0fd3 100644
--- a/_sass/_base.sass
+++ b/_sass/_base.sass
@@ -245,7 +245,7 @@ ul.global-nav
a
color: #fff
- font-weight: bold
+ font-weight: 400
padding: 0
position: relative
From f664e4c65a2e5e20f6f37875d551d3084e23976c Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Wed, 14 Dec 2016 11:21:28 -0800
Subject: [PATCH 22/63] In DaemonSet doc, link to node selection doc instead of
repo
---
docs/admin/daemons.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/admin/daemons.md b/docs/admin/daemons.md
index be3137bc93..bab12268ba 100644
--- a/docs/admin/daemons.md
+++ b/docs/admin/daemons.md
@@ -74,7 +74,7 @@ a node for testing.
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
-selector](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection).
+selector](/docs/user-guide/node-selection/).
If you specify a `scheduler.alpha.kubernetes.io/affinity` annotation in `.spec.template.metadata.annotations`,
then DaemonSet controller will create pods on nodes which match that [node affinity](../../user-guide/node-selection/#alpha-feature-in-kubernetes-v12-node-affinity).
From ae62b9864d9dddbfa51d46a5ad5d252390e1f969 Mon Sep 17 00:00:00 2001
From: Andrew Watson
Date: Wed, 14 Dec 2016 15:12:34 -0500
Subject: [PATCH 23/63] Header displayed twice
It was displaying the header twice
---
docs/user-guide/node-selection/index.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/docs/user-guide/node-selection/index.md b/docs/user-guide/node-selection/index.md
index 725848b544..3152cb4e37 100644
--- a/docs/user-guide/node-selection/index.md
+++ b/docs/user-guide/node-selection/index.md
@@ -5,8 +5,6 @@ assignees:
---
-# Constraining pods to run on particular nodes
-
You can constrain a [pod](/docs/user-guide/pods/) to only be able to run on particular [nodes](/docs/admin/node/) or to prefer to
run on particular nodes. There are several ways to do this, and they all use
[label selectors](/docs/user-guide/labels/) to make the selection.
From e9cf14ffb404d01d3e775e553db395443b201f67 Mon Sep 17 00:00:00 2001
From: Kenneth Owens
Date: Wed, 14 Dec 2016 13:56:21 -0800
Subject: [PATCH 24/63] Adds zookeeper example (#1894)
* Initial commit
* Adds section for cleanup
Corrects some spelling errors
decapitalizes liveness and readiness
* Adds test for zookeeper example
* Address enisoc review
* Remove space between shell and raw annotation
* Remove extraneous inserted text
* Remove fencing statement
* Modify sentence for grammar
* refocus to zookeeper with some loss of generality
* Spelling, Grammar, DNS link
* update to address foxish comments
---
_data/tutorials.yml | 4 +-
docs/tutorials/index.md | 2 +
.../stateful-application/zookeeper.md | 1248 +++++++++++++++++
.../stateful-application/zookeeper.yaml | 164 +++
test/examples_test.go | 8 +
5 files changed, 1425 insertions(+), 1 deletion(-)
create mode 100644 docs/tutorials/stateful-application/zookeeper.md
create mode 100644 docs/tutorials/stateful-application/zookeeper.yaml
diff --git a/_data/tutorials.yml b/_data/tutorials.yml
index 41664efa31..82396ca65a 100644
--- a/_data/tutorials.yml
+++ b/_data/tutorials.yml
@@ -58,4 +58,6 @@ toc:
- title: Running a Single-Instance Stateful Application
path: /docs/tutorials/stateful-application/run-stateful-application/
- title: Running a Replicated Stateful Application
- path: /docs/tutorials/stateful-application/run-replicated-stateful-application/
\ No newline at end of file
+ path: /docs/tutorials/stateful-application/run-replicated-stateful-application/
+ - title: Running ZooKeeper, A CP Distributed System
+ path: /docs/tutorials/stateful-application/zookeeper/
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 507ca6d8e1..1b52a15e1a 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -26,6 +26,8 @@ each of which has a sequence of steps.
* [Running a Replicated Stateful Application](/docs/tutorials/stateful-application/run-replicated-stateful-application/)
+* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
+
### What's next
If you would like to write a tutorial, see
diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md
new file mode 100644
index 0000000000..38315cd37a
--- /dev/null
+++ b/docs/tutorials/stateful-application/zookeeper.md
@@ -0,0 +1,1248 @@
+---
+assignees:
+- bprashanth
+- enisoc
+- erictune
+- foxish
+- janetkuo
+- kow3ns
+- smarterclayton
+---
+
+{% capture overview %}
+This tutorial demonstrates [Apache ZooKeeper](https://zookeeper.apache.org) on
+Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/),
+[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
+and [PodAntiAffinity](/docs/user-guide/node-selection/).
+{% endcapture %}
+
+{% capture prerequisites %}
+
+Before starting this tutorial, you should be familiar with the following
+Kubernetes concepts.
+
+* [Pods](/docs/user-guide/pods/single-container/)
+* [Cluster DNS](/docs/admin/dns/)
+* [Headless Services](/docs/user-guide/services/#headless-services)
+* [PersistentVolumes](/docs/user-guide/volumes/)
+* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/)
+* [ConfigMaps](/docs/user-guide/configmap/)
+* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
+* [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget)
+* [PodAntiAffinity](/docs/user-guide/node-selection/)
+* [kubectl CLI](/docs/user-guide/kubectl)
+
+You will require a cluster with at least four nodes, and each node will require
+at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and
+drain the cluster's nodes. **This means that all Pods on the cluster's nodes
+will be terminated and evicted, and the nodes will, temporarily, become
+unschedulable.** You should use a dedicated cluster for this tutorial, or you
+should ensure that the disruption you cause will not interfere with other
+tenants.
+
+This tutorial assumes that your cluster is configured to dynamically provision
+PersistentVolumes. If your cluster is not configured to do so, you
+will have to manually provision three 20 GiB volumes prior to starting this
+tutorial.
+{% endcapture %}
+
+{% capture objectives %}
+After this tutorial, you will know the following.
+
+* How to deploy a ZooKeeper ensemble using StatefulSet.
+* How to consistently configure the ensemble using ConfigMaps.
+* How to spread the deployment of ZooKeeper servers in the ensemble.
+* How to use PodDisruptionBudgets to ensure service availability during planned maintenance.
+{% endcapture %}
+
+{% capture lessoncontent %}
+
+#### ZooKeeper Basics
+
+[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a
+distributed, open-source coordination service for distributed applications.
+ZooKeeper allows you to read, write, and observe updates to data. Data are
+organized in a file-system-like hierarchy and replicated to all ZooKeeper
+servers in the ensemble (a set of ZooKeeper servers). All operations on data
+are atomic and sequentially consistent. ZooKeeper ensures this by using the
+[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf)
+consensus protocol to replicate a state machine across all servers in the ensemble.
+
+The ensemble uses the Zab protocol to elect a leader, and
+data cannot be written until a leader is elected. Once a leader is
+elected, the ensemble uses Zab to ensure that all writes are replicated to a
+quorum before they are acknowledged and made visible to clients. Setting aside
+weighted quorums, a quorum is a majority component of the ensemble containing
+the current leader. For instance, if the ensemble has three servers, a component
+that contains the leader and one other server constitutes a quorum. If the
+ensemble cannot achieve a quorum, data cannot be written.
+
+ZooKeeper servers keep their entire state machine in memory, but every mutation
+is written to a durable WAL (Write Ahead Log) on storage media. When a server
+crashes, it can recover its previous state by replaying the WAL. In order to
+prevent the WAL from growing without bound, ZooKeeper servers will periodically
+snapshot their in-memory state to storage media. These snapshots can be loaded
+directly into memory, and all WAL entries that preceded the snapshot may be
+safely discarded.
+
+### Creating a ZooKeeper Ensemble
+
+The manifest below contains a
+[Headless Service](/docs/user-guide/services/#headless-services),
+a [ConfigMap](/docs/user-guide/configmap/),
+a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
+and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/).
+
+{% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %}
+
+Open a command terminal, and use
+[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the
+manifest.
+
+```shell
+kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
+```
+
+This creates the `zk-headless` Headless Service, the `zk-config` ConfigMap,
+the `zk-budget` PodDisruptionBudget, and the `zk` StatefulSet.
+
+```shell
+service "zk-headless" created
+configmap "zk-config" created
+poddisruptionbudget "zk-budget" created
+statefulset "zk" created
+```
+
+Use [`kubectl get`](/docs/user-guide/kubectl/kubectl_get/) to watch the
+StatefulSet controller create the StatefulSet's Pods.
+
+```shell
+kubectl get pods -w -l app=zk
+```
+
+Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl.
+
+```shell
+NAME READY STATUS RESTARTS AGE
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 19s
+zk-0 1/1 Running 0 40s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 ContainerCreating 0 0s
+zk-1 0/1 Running 0 18s
+zk-1 1/1 Running 0 40s
+zk-2 0/1 Pending 0 0s
+zk-2 0/1 Pending 0 0s
+zk-2 0/1 ContainerCreating 0 0s
+zk-2 0/1 Running 0 19s
+zk-2 1/1 Running 0 40s
+```
+
+The StatefulSet controller creates three Pods, and each Pod has a container with
+a [ZooKeeper 3.4.9](http://www-us.apache.org/dist/zookeeper/zookeeper-3.4.9/) server.
+
+#### Facilitating Leader Election
+
+As there is no terminating algorithm for electing a leader in an anonymous
+network, Zab requires explicit membership configuration in order to perform
+leader election. Each server in the ensemble needs to have a unique
+identifier, all servers need to know the global set of identifiers, and each
+identifier needs to be associated with a network address.
+
+Use [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to get the hostnames
+of the Pods in the `zk` StatefulSet.
+
+```shell
+for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
+```
+
+The StatefulSet controller provides each Pod with a unique hostname based on its
+ordinal index. The hostnames take the form `<statefulset name>-<ordinal index>`.
+As the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's
+controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and
+`zk-2`.
+
+```shell
+zk-0
+zk-1
+zk-2
+```
+
+The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and
+each server's identifier is stored in a file called `myid` in the server’s
+data directory.
+
+Examine the contents of the `myid` file for each server.
+
+```shell
+for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
+```
+
+As the identifiers are natural numbers and the ordinal indices are non-negative
+integers, you can generate an identifier by adding one to the ordinal.
+
+```shell
+myid zk-0
+1
+myid zk-1
+2
+myid zk-2
+3
+```
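+
+The image's start-up script performs this mapping for you. As a rough sketch of
+the idea (the actual `zkGenConfig.sh` in the image may differ), a script can
+derive the identifier from the ordinal suffix of the Pod's hostname:
+
+```shell
+# Hypothetical sketch: turn the hostname's ordinal into a myid value,
+# e.g. hostname zk-2 -> ordinal 2 -> myid 3.
+HOST=$(hostname -s)
+ORD=${HOST##*-}
+echo $((ORD + 1)) > /var/lib/zookeeper/data/myid
+```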
+
+Get the FQDN (Fully Qualified Domain Name) of each Pod in the `zk` StatefulSet.
+
+```shell
+for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
+```
+
+The `zk-headless` Service creates a domain for all of the Pods,
+`zk-headless.default.svc.cluster.local`.
+
+```shell
+zk-0.zk-headless.default.svc.cluster.local
+zk-1.zk-headless.default.svc.cluster.local
+zk-2.zk-headless.default.svc.cluster.local
+```
+
+The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses.
+If the Pods are rescheduled, the A records will be updated with the Pods' new IP
+addresses, but the A record's names will not change.
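+
+If you want to confirm resolution from inside the cluster, you can run
+`nslookup` from a utility Pod, as in the DNS debugging guide (this assumes you
+have a `busybox` Pod running):
+
+```shell
+kubectl exec -ti busybox -- nslookup zk-0.zk-headless.default.svc.cluster.local
+```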
+
+ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use
+`kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod.
+
+```shell
+kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
+```
+
+For the `server.1`, `server.2`, and `server.3` properties at the bottom of
+the file, the `1`, `2`, and `3` correspond to the identifiers in the
+ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in
+the `zk` StatefulSet.
+
+```shell
+clientPort=2181
+dataDir=/var/lib/zookeeper/data
+dataLogDir=/var/lib/zookeeper/log
+tickTime=2000
+initLimit=10
+syncLimit=2000
+maxClientCnxns=60
+minSessionTimeout= 4000
+maxSessionTimeout= 40000
+autopurge.snapRetainCount=3
+autopurge.purgeInteval=0
+server.1=zk-0.zk-headless.default.svc.cluster.local:2888:3888
+server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
+server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
+```
+
+#### Achieving Consensus
+
+Consensus protocols require that the identifiers of each participant be
+unique. No two participants in the Zab protocol should claim the same unique
+identifier. This is necessary to allow the processes in the system to agree on
+which processes have committed which data. If two Pods were launched with the
+same ordinal, two ZooKeeper servers would both identify themselves as the same
+server.
+
+When you created the `zk` StatefulSet, the StatefulSet's controller created
+each Pod sequentially, in the order defined by the Pods' ordinal indices, and it
+waited for each Pod to be Running and Ready before creating the next Pod.
+
+```shell
+kubectl get pods -w -l app=zk
+NAME READY STATUS RESTARTS AGE
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 19s
+zk-0 1/1 Running 0 40s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 ContainerCreating 0 0s
+zk-1 0/1 Running 0 18s
+zk-1 1/1 Running 0 40s
+zk-2 0/1 Pending 0 0s
+zk-2 0/1 Pending 0 0s
+zk-2 0/1 ContainerCreating 0 0s
+zk-2 0/1 Running 0 19s
+zk-2 1/1 Running 0 40s
+```
+
+The A records for each Pod are only entered when the Pod becomes Ready. Therefore,
+the FQDNs of the ZooKeeper servers will only resolve to a single endpoint, and that
+endpoint will be the unique ZooKeeper server claiming the identity configured
+in its `myid` file.
+
+```shell
+zk-0.zk-headless.default.svc.cluster.local
+zk-1.zk-headless.default.svc.cluster.local
+zk-2.zk-headless.default.svc.cluster.local
+```
+
+This ensures that the `server.N` properties in the ZooKeeper servers' `zoo.cfg`
+files represent a correctly configured ensemble.
+
+```shell
+server.1=zk-0.zk-headless.default.svc.cluster.local:2888:3888
+server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
+server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
+```
+
+When the servers use the Zab protocol to attempt to commit a value, they will
+either achieve consensus and commit the value (if leader election has succeeded
+and at least two of the Pods are Running and Ready), or they will fail to do so
+(if either of the aforementioned conditions are not met). No state will arise
+where one server acknowledges a write on behalf of another.
+
+#### Sanity Testing the Ensemble
+
+The most basic sanity test is to write some data to one ZooKeeper server and
+to read the data from another.
+
+Use the `zkCli.sh` script to write `world` to the path `/hello` on the `zk-0` Pod.
+
+```shell
+kubectl exec zk-0 zkCli.sh create /hello world
+```
+
+This will write `world` to the `/hello` path in the ensemble.
+
+```shell
+WATCHER::
+
+WatchedEvent state:SyncConnected type:None path:null
+Created /hello
+```
+
+Get the data from the `zk-1` Pod.
+
+```shell
+kubectl exec zk-1 zkCli.sh get /hello
+```
+
+The data that you created on `zk-0` is available on all of the servers in the
+ensemble.
+
+```shell
+WATCHER::
+
+WatchedEvent state:SyncConnected type:None path:null
+world
+cZxid = 0x100000002
+ctime = Thu Dec 08 15:13:30 UTC 2016
+mZxid = 0x100000002
+mtime = Thu Dec 08 15:13:30 UTC 2016
+pZxid = 0x100000002
+cversion = 0
+dataVersion = 0
+aclVersion = 0
+ephemeralOwner = 0x0
+dataLength = 5
+numChildren = 0
+```
+
+#### Providing Durable Storage
+
+As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section,
+ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots
+of its in-memory state to storage media. Using WALs to provide durability is a common
+technique for applications that use consensus protocols to achieve a replicated
+state machine and for storage applications in general.
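+
+You can see these files on a server's PersistentVolume; ZooKeeper keeps its
+snapshots and transaction logs in `version-2` subdirectories (the paths here
+assume the image's default layout and the `zoo.cfg` shown above):
+
+```shell
+kubectl exec zk-0 -- ls /var/lib/zookeeper/data/version-2 /var/lib/zookeeper/log/version-2
+```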
+
+Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the
+`zk` StatefulSet.
+
+```shell
+kubectl delete statefulset zk
+statefulset "zk" deleted
+```
+
+Watch the termination of the Pods in the StatefulSet.
+
+```shell
+kubectl get pods -w -l app=zk
+```
+
+When `zk-0` is fully terminated, use `CTRL-C` to terminate kubectl.
+
+```shell
+zk-2 1/1 Terminating 0 9m
+zk-0 1/1 Terminating 0 11m
+zk-1 1/1 Terminating 0 10m
+zk-2 0/1 Terminating 0 9m
+zk-2 0/1 Terminating 0 9m
+zk-2 0/1 Terminating 0 9m
+zk-1 0/1 Terminating 0 10m
+zk-1 0/1 Terminating 0 10m
+zk-1 0/1 Terminating 0 10m
+zk-0 0/1 Terminating 0 11m
+zk-0 0/1 Terminating 0 11m
+zk-0 0/1 Terminating 0 11m
+```
+Reapply the manifest in `zookeeper.yaml`.
+
+```shell
+kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
+```
+
+The `zk` StatefulSet will be created, but, as they already exist, the other API
+Objects in the manifest will not be modified.
+
+```shell
+statefulset "zk" created
+Error from server (AlreadyExists): error when creating "zookeeper.yaml": services "zk-headless" already exists
+Error from server (AlreadyExists): error when creating "zookeeper.yaml": configmaps "zk-config" already exists
+Error from server (AlreadyExists): error when creating "zookeeper.yaml": poddisruptionbudgets.policy "zk-budget" already exists
+```
+
+Watch the StatefulSet controller recreate the StatefulSet's Pods.
+
+```shell
+kubectl get pods -w -l app=zk
+```
+
+Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl.
+
+```shell
+NAME READY STATUS RESTARTS AGE
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 19s
+zk-0 1/1 Running 0 40s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 ContainerCreating 0 0s
+zk-1 0/1 Running 0 18s
+zk-1 1/1 Running 0 40s
+zk-2 0/1 Pending 0 0s
+zk-2 0/1 Pending 0 0s
+zk-2 0/1 ContainerCreating 0 0s
+zk-2 0/1 Running 0 19s
+zk-2 1/1 Running 0 40s
+```
+
+Get the value you entered during the [sanity test](#sanity-testing-the-ensemble)
+from the `zk-2` Pod.
+
+```shell
+kubectl exec zk-2 zkCli.sh get /hello
+```
+
+Even though all of the Pods in the `zk` StatefulSet have been terminated and
+recreated, the ensemble still serves the original value.
+
+```shell
+WATCHER::
+
+WatchedEvent state:SyncConnected type:None path:null
+world
+cZxid = 0x100000002
+ctime = Thu Dec 08 15:13:30 UTC 2016
+mZxid = 0x100000002
+mtime = Thu Dec 08 15:13:30 UTC 2016
+pZxid = 0x100000002
+cversion = 0
+dataVersion = 0
+aclVersion = 0
+ephemeralOwner = 0x0
+dataLength = 5
+numChildren = 0
+```
+
+The `volumeClaimTemplates` field of the `zk` StatefulSet's `spec` specifies a
+PersistentVolume that will be provisioned for each Pod.
+
+```yaml
+volumeClaimTemplates:
+ - metadata:
+ name: datadir
+ annotations:
+ volume.alpha.kubernetes.io/storage-class: anything
+ spec:
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 20Gi
+```
+
+
+The StatefulSet controller generates a PersistentVolumeClaim for each Pod in
+the StatefulSet.
+
+Get the StatefulSet's PersistentVolumeClaims.
+
+```shell
+kubectl get pvc -l app=zk
+```
+
+When the StatefulSet recreated its Pods, the Pods' PersistentVolumes were
+remounted.
+
+```shell
+NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
+datadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
+datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
+datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
+```
+
+The `volumeMounts` section of the StatefulSet's container `template` causes the
+PersistentVolumes to be mounted to the ZooKeeper servers' data directories.
+
+```yaml
+volumeMounts:
+ - name: datadir
+ mountPath: /var/lib/zookeeper
+```
+
+When a Pod in the `zk` StatefulSet is (re)scheduled, it will always have the
+same PersistentVolume mounted to the ZooKeeper server's data directory.
+Even when the Pods are rescheduled, all of the writes made to the ZooKeeper
+servers' WALs, and all of their snapshots, remain durable.
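+
+To see the dynamically provisioned PersistentVolumes that back those claims,
+list them directly:
+
+```shell
+kubectl get pv
+```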
+
+### Ensuring Consistent Configuration
+
+As noted in the [Facilitating Leader Election](#facilitating-leader-election) and
+[Achieving Consensus](#achieving-consensus) sections, the servers in a
+ZooKeeper ensemble require consistent configuration in order to elect a leader
+and form a quorum. They also require consistent configuration of the Zab protocol
+in order for the protocol to work correctly over a network. You can use
+ConfigMaps to achieve this.
+
+Get the `zk-config` ConfigMap.
+
+```shell
+ kubectl get cm zk-config -o yaml
+apiVersion: v1
+data:
+ client.cnxns: "60"
+ ensemble: zk-0;zk-1;zk-2
+ init: "10"
+ jvm.heap: 2G
+ purge.interval: "0"
+ snap.retain: "3"
+ sync: "5"
+ tick: "2000"
+```
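+
+If you only need a single key, `kubectl` can extract it directly (using the
+standard `jsonpath` output format):
+
+```shell
+kubectl get cm zk-config -o jsonpath='{.data.tick}'
+```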
+
+The `env` field of the `zk` StatefulSet's Pod `template` reads the ConfigMap
+into environment variables. These variables are injected into the containers'
+environment.
+
+```yaml
+env:
+ - name : ZK_ENSEMBLE
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: ensemble
+ - name : ZK_HEAP_SIZE
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: jvm.heap
+ - name : ZK_TICK_TIME
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: tick
+ - name : ZK_INIT_LIMIT
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: init
+ - name : ZK_SYNC_LIMIT
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: tick
+ - name : ZK_MAX_CLIENT_CNXNS
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: client.cnxns
+ - name: ZK_SNAP_RETAIN_COUNT
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: snap.retain
+ - name: ZK_PURGE_INTERVAL
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: purge.interval
+```
+
+The entry point of the container invokes a bash script, `zkGenConfig.sh`, prior to
+launching the ZooKeeper server process. This bash script generates the
+ZooKeeper configuration files from the supplied environment variables.
+
+```yaml
+ command:
+ - sh
+ - -c
+ - zkGenConfig.sh && zkServer.sh start-foreground
+```
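+
+To make the flow concrete, here is a simplified, hypothetical sketch of the
+kind of generation `zkGenConfig.sh` performs; the real script in the image is
+more involved and may differ:
+
+```shell
+# Hypothetical sketch: render zoo.cfg from the injected environment variables.
+CONF=/opt/zookeeper/conf/zoo.cfg
+cat > "$CONF" <<EOF
+clientPort=$ZK_CLIENT_PORT
+dataDir=$ZK_DATA_DIR
+dataLogDir=$ZK_DATA_LOG_DIR
+tickTime=$ZK_TICK_TIME
+initLimit=$ZK_INIT_LIMIT
+syncLimit=$ZK_SYNC_LIMIT
+maxClientCnxns=$ZK_MAX_CLIENT_CNXNS
+autopurge.snapRetainCount=$ZK_SNAP_RETAIN_COUNT
+autopurge.purgeInterval=$ZK_PURGE_INTERVAL
+EOF
+
+# Append one server.N entry per member of the ensemble
+# (ZK_ENSEMBLE is semicolon-separated, e.g. zk-0;zk-1;zk-2).
+ID=1
+for SERVER in $(echo "$ZK_ENSEMBLE" | tr ';' ' '); do
+  echo "server.$ID=$SERVER.zk-headless.default.svc.cluster.local:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> "$CONF"
+  ID=$((ID + 1))
+done
+```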
+
+Examine the environment of all of the Pods in the `zk` StatefulSet.
+
+```shell
+for i in 0 1 2; do kubectl exec zk-$i env | grep ZK_; echo ""; done
+```
+
+All of the variables populated from `zk-config` contain identical values. This
+allows the `zkGenConfig.sh` script to create consistent configurations for all
+of the ZooKeeper servers in the ensemble.
+
+```shell
+ZK_ENSEMBLE=zk-0;zk-1;zk-2
+ZK_HEAP_SIZE=2G
+ZK_TICK_TIME=2000
+ZK_INIT_LIMIT=10
+ZK_SYNC_LIMIT=2000
+ZK_MAX_CLIENT_CNXNS=60
+ZK_SNAP_RETAIN_COUNT=3
+ZK_PURGE_INTERVAL=0
+ZK_CLIENT_PORT=2181
+ZK_SERVER_PORT=2888
+ZK_ELECTION_PORT=3888
+ZK_USER=zookeeper
+ZK_DATA_DIR=/var/lib/zookeeper/data
+ZK_DATA_LOG_DIR=/var/lib/zookeeper/log
+ZK_LOG_DIR=/var/log/zookeeper
+
+ZK_ENSEMBLE=zk-0;zk-1;zk-2
+ZK_HEAP_SIZE=2G
+ZK_TICK_TIME=2000
+ZK_INIT_LIMIT=10
+ZK_SYNC_LIMIT=2000
+ZK_MAX_CLIENT_CNXNS=60
+ZK_SNAP_RETAIN_COUNT=3
+ZK_PURGE_INTERVAL=0
+ZK_CLIENT_PORT=2181
+ZK_SERVER_PORT=2888
+ZK_ELECTION_PORT=3888
+ZK_USER=zookeeper
+ZK_DATA_DIR=/var/lib/zookeeper/data
+ZK_DATA_LOG_DIR=/var/lib/zookeeper/log
+ZK_LOG_DIR=/var/log/zookeeper
+
+ZK_ENSEMBLE=zk-0;zk-1;zk-2
+ZK_HEAP_SIZE=2G
+ZK_TICK_TIME=2000
+ZK_INIT_LIMIT=10
+ZK_SYNC_LIMIT=2000
+ZK_MAX_CLIENT_CNXNS=60
+ZK_SNAP_RETAIN_COUNT=3
+ZK_PURGE_INTERVAL=0
+ZK_CLIENT_PORT=2181
+ZK_SERVER_PORT=2888
+ZK_ELECTION_PORT=3888
+ZK_USER=zookeeper
+ZK_DATA_DIR=/var/lib/zookeeper/data
+ZK_DATA_LOG_DIR=/var/lib/zookeeper/log
+ZK_LOG_DIR=/var/log/zookeeper
+```
+
+#### Configuring Logging
+
+One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
+ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default,
+it uses a time and size based rolling file appender for its logging configuration.
+Get the logging configuration from one of Pods in the `zk` StatefulSet.
+
+```shell
+kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties
+```
+
+The logging configuration below will cause the ZooKeeper process to write all
+of its logs to the standard output file stream.
+
+```shell
+zookeeper.root.logger=CONSOLE
+zookeeper.console.threshold=INFO
+log4j.rootLogger=${zookeeper.root.logger}
+log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
+log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
+log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
+log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
+```
+
+This is the simplest possible way to safely log inside the container. As the
+application's logs are being written to standard out, Kubernetes will handle
+log rotation for you. Kubernetes also implements a sane retention policy that
+ensures application logs written to standard out and standard error do not
+exhaust local storage media.
+
+Use [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs/) to retrieve the last
+few log lines from one of the Pods.
+
+```shell
+kubectl logs zk-0 --tail 20
+```
+
+Application logs that are written to standard out or standard error are viewable
+using `kubectl logs` and from the Kubernetes Dashboard.
+
+```shell
+2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740
+2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)
+2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749
+2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749
+2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52749 (no session established for client)
+2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52750
+2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750
+2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52750 (no session established for client)
+2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52760
+2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760
+2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52760 (no session established for client)
+2016-12-06 19:34:36,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52761
+2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761
+2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52761 (no session established for client)
+2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52767
+2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767
+2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52767 (no session established for client)
+2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52768
+2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768
+2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)
+```
+
+Kubernetes also supports more powerful, but more complex, logging integrations
+with [Google Cloud Logging](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md)
+and [ELK](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-es/README.md).
+For cluster-level log shipping and aggregation, you should consider deploying a
+[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html)
+container to rotate and ship your logs.
+
+#### Configuring a Non-Privileged User
+
+The best practices with respect to allowing an application to run as a privileged
+user inside of a container are a matter of debate. If your organization requires
+that applications be run as a non-privileged user, you can use a
+[SecurityContext](/docs/user-guide/security-context/) to control the user that
+the entry point runs as.
+
+The `zk` StatefulSet's Pod `template` contains a SecurityContext.
+
+```yaml
+securityContext:
+ runAsUser: 1000
+ fsGroup: 1000
+```
+
+In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000
+corresponds to the zookeeper group.
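+
+You can verify this mapping directly. This is a quick check, and it assumes the
+`id` utility is present in the container image:
+
+```shell
+kubectl exec zk-0 -- id zookeeper
+```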
+
+Get the ZooKeeper process information from the `zk-0` Pod.
+
+```shell
+kubectl exec zk-0 -- ps -elf
+```
+
+As the `runAsUser` field of the `securityContext` object is set to 1000,
+instead of running as root, the ZooKeeper process runs as the zookeeper user.
+
+```shell
+F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
+4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
+0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
+```
+
+By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's
+data directory, it is only accessible by the root user. This configuration
+would prevent the ZooKeeper process from writing to its WAL and storing its snapshots.
+
+Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod.
+
+```shell
+kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data
+```
+
+As the `fsGroup` field of the `securityContext` object is set to 1000,
+the ownership of the Pods' PersistentVolumes is set to the zookeeper group,
+and the ZooKeeper process is able to successfully read and write its data.
+
+```shell
+drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
+```
+
+### Managing the ZooKeeper Process
+
+The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision)
+indicates that "You will want to have a supervisory process that
+manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog
+(supervisory process) to restart failed processes in a distributed system is a
+common pattern. When deploying an application in Kubernetes, rather than using
+an external utility as a supervisory process, you should use Kubernetes as the
+watchdog for your application.
+
+#### Handling Process Failure
+
+
+[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how
+Kubernetes handles process failures for the entry point of the container in a Pod.
+For Pods in a StatefulSet, the only appropriate RestartPolicy is Always, and this
+is the default value. For stateful applications you should **never** override
+the default policy.
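+
+You can confirm the policy on a running Pod. This sketch reuses the
+`--template` flag that appears later in this tutorial and should print `Always`:
+
+```shell{% raw %}
+kubectl get pod zk-0 --template {{.spec.restartPolicy}}
+```{% endraw %}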
+
+
+Examine the process tree for the ZooKeeper server running in the `zk-0` Pod.
+
+```shell
+kubectl exec zk-0 -- ps -ef
+```
+
+The command used as the container's entry point has PID 1, and the
+ZooKeeper process, a child of the entry point, has PID 27.
+
+
+```
+UID PID PPID C STIME TTY TIME CMD
+zookeep+ 1 0 0 15:03 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
+zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
+```
+
+
+In one terminal watch the Pods in the `zk` StatefulSet.
+
+```shell
+kubectl get pod -w -l app=zk
+```
+
+
+In another terminal, kill the ZooKeeper process in Pod `zk-0`.
+
+```shell
+kubectl exec zk-0 -- pkill java
+```
+
+
+The death of the ZooKeeper process caused its parent process to terminate. As
+the RestartPolicy of the container is Always, the parent process was relaunched.
+
+
+```shell
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 0 21m
+zk-1 1/1 Running 0 20m
+zk-2 1/1 Running 0 19m
+NAME READY STATUS RESTARTS AGE
+zk-0 0/1 Error 0 29m
+zk-0 0/1 Running 1 29m
+zk-0 1/1 Running 1 29m
+```
+
+
+If your application uses a script (such as zkServer.sh) to launch the process
+that implements the application's business logic, the script must terminate with the
+child process. This ensures that Kubernetes will restart the application's
+container when the process implementing the application's business logic fails.
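+
+A minimal sketch of one way to satisfy this requirement is to use `exec`, so
+that the shell is replaced by the server process and the container's entry
+point exits exactly when the server does (the script names follow the entry
+point used by this tutorial's image):
+
+```shell
+#!/usr/bin/env bash
+# Generate the configuration; give up if that fails.
+zkGenConfig.sh || exit 1
+# exec replaces this shell with the server process, so the container's
+# entry point terminates if and only if the server terminates.
+exec zkServer.sh start-foreground
+```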
+
+
+#### Testing for Liveness
+
+
+Configuring your application to restart failed processes is not sufficient to
+keep a distributed system healthy. There are many scenarios where
+a system's processes can be both alive and unresponsive, or otherwise
+unhealthy. You should use liveness probes in order to notify Kubernetes
+that your application's processes are unhealthy and should be restarted.
+
+
+The Pod `template` for the `zk` StatefulSet specifies a liveness probe.
+
+
+```yaml
+ livenessProbe:
+ exec:
+ command:
+ - "zkOk.sh"
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
+```
+
+
+The probe calls a simple bash script that uses the ZooKeeper `ruok` four letter
+word to test the server's health.
+
+
+```bash
+ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181}
+OK=$(echo ruok | nc 127.0.0.1 $ZK_CLIENT_PORT)
+if [ "$OK" == "imok" ]; then
+ exit 0
+else
+ exit 1
+fi
+```
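+
+You can run the same check by hand. The probe definition suggests the script is
+on the container's `PATH`, and `kubectl exec` propagates its exit code:
+
+```shell
+kubectl exec zk-0 -- zkOk.sh
+echo $?    # prints 0 while the server responds with "imok"
+```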
+
+
+In one terminal window, watch the Pods in the `zk` StatefulSet.
+
+
+```shell
+kubectl get pod -w -l app=zk
+```
+
+
+In another window, delete the `zkOk.sh` script from the file system of Pod `zk-0`.
+
+
+```shell
+kubectl exec zk-0 -- rm /opt/zookeeper/bin/zkOk.sh
+```
+
+
+When the liveness probe for the ZooKeeper process fails, Kubernetes will
+automatically restart the container for you, ensuring that unhealthy servers in
+the ensemble are restarted.
+
+
+```shell
+kubectl get pod -w -l app=zk
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 0 1h
+zk-1 1/1 Running 0 1h
+zk-2 1/1 Running 0 1h
+NAME READY STATUS RESTARTS AGE
+zk-0 0/1 Running 0 1h
+zk-0 0/1 Running 1 1h
+zk-0 1/1 Running 1 1h
+```
+
+
+#### Testing for Readiness
+
+
+Readiness is not the same as liveness. If a process is alive, it is scheduled
+and healthy. If a process is ready, it is able to process input. Liveness is
+a necessary, but not sufficient, condition for readiness. There are many cases,
+particularly during initialization and termination, when a process can be
+alive but not ready.
+
+
+If you specify a readiness probe, Kubernetes will ensure that your application's
+processes will not receive network traffic until their readiness checks pass.
+
+
+For a ZooKeeper server, liveness implies readiness. Therefore, the readiness
+probe from the `zookeeper.yaml` manifest is identical to the liveness probe.
+
+
+```yaml
+ readinessProbe:
+ exec:
+ command:
+ - "zkOk.sh"
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
+```
+
+
+Even though the liveness and readiness probes are identical, it is important
+to specify both. This ensures that only healthy servers in the ZooKeeper
+ensemble receive network traffic.
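+
+You can observe readiness gating service membership; only Ready Pods are listed
+as endpoints of the headless Service:
+
+```shell
+kubectl get endpoints zk-headless
+```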
+
+
+### Tolerating Node Failure
+
+ZooKeeper needs a quorum of servers in order to successfully commit mutations
+to data. For a three server ensemble, two servers must be healthy in order for
+writes to succeed. In quorum based systems, members are deployed across failure
+domains to ensure availability. To avoid an outage due to the loss of an
+individual machine, best practices preclude co-locating multiple instances of the
+application on the same machine.
+
+By default, Kubernetes may co-locate Pods in a StatefulSet on the same node.
+For the three server ensemble you created, if two servers reside on the same
+node, and that node fails, the clients of your ZooKeeper service will experience
+an outage until at least one of the Pods can be rescheduled.
+
+You should always provision additional capacity to allow the processes of critical
+systems to be rescheduled in the event of node failures. If you do so, then the
+outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper
+servers. However, if you want your service to tolerate node failures with no downtime,
+you should use a `PodAntiAffinity` annotation.
+
+Get the nodes for Pods in the `zk` StatefulSet.
+
+```shell{% raw %}
+for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
+```{% endraw %}
+
+All of the Pods in the `zk` StatefulSet are deployed on different nodes.
+
+```shell
+kubernetes-minion-group-cxpk
+kubernetes-minion-group-a5aq
+kubernetes-minion-group-2g2d
+```
+
+This is because the Pods in the `zk` StatefulSet contain a
+[PodAntiAffinity](/docs/user-guide/node-selection/) annotation.
+
+```yaml
+scheduler.alpha.kubernetes.io/affinity: >
+ {
+ "podAntiAffinity": {
+ "requiredDuringSchedulingRequiredDuringExecution": [{
+ "labelSelector": {
+ "matchExpressions": [{
+ "key": "app",
+ "operator": "In",
+ "values": ["zk-headless"]
+ }]
+ },
+ "topologyKey": "kubernetes.io/hostname"
+ }]
+ }
+ }
+```
+
+The `requiredDuringSchedulingRequiredDuringExecution` field tells the
+Kubernetes Scheduler that it should never co-locate two Pods from the `zk-headless`
+Service in the domain defined by the `topologyKey`. The `topologyKey`
+`kubernetes.io/hostname` indicates that the domain is an individual node. Using
+different rules, labels, and selectors, you can extend this technique to spread
+your ensemble across physical, network, and power failure domains.
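+
+For example, a hypothetical variant of the annotation above, assuming your nodes
+carry the standard zone label, would spread the servers across zones rather than
+individual machines:
+
+```yaml
+scheduler.alpha.kubernetes.io/affinity: >
+    {
+      "podAntiAffinity": {
+        "requiredDuringSchedulingRequiredDuringExecution": [{
+          "labelSelector": {
+            "matchExpressions": [{
+              "key": "app",
+              "operator": "In",
+              "values": ["zk-headless"]
+            }]
+          },
+          "topologyKey": "failure-domain.beta.kubernetes.io/zone"
+        }]
+      }
+    }
+```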
+
+### Surviving Maintenance
+
+**In this section you will cordon and drain nodes. If you are using this tutorial
+on a shared cluster, be sure that this will not adversely affect other tenants.**
+
+The previous section showed you how to spread your Pods across nodes to survive
+unplanned node failures, but you also need to plan for temporary node outages
+caused by planned maintenance.
+
+Get the nodes in your cluster.
+
+```shell
+kubectl get nodes
+```
+
+Use [`kubectl cordon`](/docs/user-guide/kubectl/kubectl_cordon/) to
+cordon all but four of the nodes in your cluster.
+
+```shell{% raw %}
+kubectl cordon <node-name>
+```{% endraw %}
+
+Get the `zk-budget` PodDisruptionBudget.
+
+```shell
+kubectl get poddisruptionbudget zk-budget
+```
+
+The `minAvailable` field indicates to Kubernetes that at least two Pods from
+the `zk` StatefulSet must be available at any time.
+
+```shell
+NAME        MIN-AVAILABLE   ALLOWED-DISRUPTIONS   AGE
+zk-budget   2               1                     1h
+```
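+
+The budget comes from the following manifest in `zookeeper.yaml`:
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+  name: zk-budget
+spec:
+  selector:
+    matchLabels:
+      app: zk
+  minAvailable: 2
+```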
+
+In one terminal, watch the Pods in the `zk` StatefulSet.
+
+```shell
+kubectl get pods -w -l app=zk
+```
+
+In another terminal, get the nodes that the Pods are currently scheduled on.
+
+```shell{% raw %}
+for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
+kubernetes-minion-group-pb41
+kubernetes-minion-group-ixsl
+kubernetes-minion-group-i4c4
+{% endraw %}```
+
+Use [`kubectl drain`](/docs/user-guide/kubectl/kubectl_drain/) to cordon and
+drain the node on which the `zk-0` Pod is scheduled.
+
+```shell{% raw %}
+kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-pb41, kube-proxy-kubernetes-minion-group-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
+pod "zk-0" deleted
+node "kubernetes-minion-group-pb41" drained
+{% endraw %}```
+
+As there are four nodes in your cluster, `kubectl drain` succeeds and the
+`zk-0` Pod is rescheduled to another node.
+
+```
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 2 1h
+zk-1 1/1 Running 0 1h
+zk-2 1/1 Running 0 1h
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 51s
+zk-0 1/1 Running 0 1m
+```
+
+Keep watching the StatefulSet's Pods in the first terminal and drain the node on which
+`zk-1` is scheduled.
+
+```shell{% raw %}
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+node "kubernetes-minion-group-ixsl" cordoned
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-ixsl, kube-proxy-kubernetes-minion-group-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
+pod "zk-1" deleted
+node "kubernetes-minion-group-ixsl" drained
+{% endraw %}```
+
+The `zk-1` Pod cannot be scheduled. As the `zk` StatefulSet contains a
+`PodAntiAffinity` annotation preventing co-location of the Pods, and as only
+two nodes are schedulable, the Pod will remain in a Pending state.
+
+```shell
+kubectl get pods -w -l app=zk
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 2 1h
+zk-1 1/1 Running 0 1h
+zk-2 1/1 Running 0 1h
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 51s
+zk-0 1/1 Running 0 1m
+zk-1 1/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
+```
+
+Continue to watch the Pods of the stateful set, and drain the node on which
+`zk-2` is scheduled.
+
+```shell{% raw %}
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+node "kubernetes-minion-group-i4c4" cordoned
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
+WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4
+There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.
+pod/zk-2
+{% endraw %}```
+
+Use `CTRL-C` to terminate kubectl.
+
+You cannot drain the third node because evicting `zk-2` would violate `zk-budget`. However,
+the node will remain cordoned.
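+
+If you re-examine the budget at this point, you should expect it to report zero
+allowed disruptions, since only two of the three servers are running:
+
+```shell
+kubectl get poddisruptionbudget zk-budget
+```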
+
+Use `zkCli.sh` to retrieve the value you entered during the sanity test from `zk-0`.
+
+```shell
+kubectl exec zk-0 -- zkCli.sh get /hello
+```
+
+The service is still available because its PodDisruptionBudget is respected.
+
+```
+WatchedEvent state:SyncConnected type:None path:null
+world
+cZxid = 0x200000002
+ctime = Wed Dec 07 00:08:59 UTC 2016
+mZxid = 0x200000002
+mtime = Wed Dec 07 00:08:59 UTC 2016
+pZxid = 0x200000002
+cversion = 0
+dataVersion = 0
+aclVersion = 0
+ephemeralOwner = 0x0
+dataLength = 5
+numChildren = 0
+```
+
+Use [`kubectl uncordon`](/docs/user-guide/kubectl/kubectl_uncordon/) to uncordon the first node.
+
+```shell
+kubectl uncordon kubernetes-minion-group-pb41
+node "kubernetes-minion-group-pb41" uncordoned
+```
+
+`zk-1` is rescheduled on this node. Wait until `zk-1` is Running and Ready.
+
+```shell
+kubectl get pods -w -l app=zk
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 2 1h
+zk-1 1/1 Running 0 1h
+zk-2 1/1 Running 0 1h
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 51s
+zk-0 1/1 Running 0 1m
+zk-1 1/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 12m
+zk-1 0/1 ContainerCreating 0 12m
+zk-1 0/1 Running 0 13m
+zk-1 1/1 Running 0 13m
+```
+
+Attempt to drain the node on which `zk-2` is scheduled.
+
+```shell{% raw %}
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+node "kubernetes-minion-group-i4c4" already cordoned
+WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
+pod "heapster-v1.2.0-2604621511-wht1r" deleted
+pod "zk-2" deleted
+node "kubernetes-minion-group-i4c4" drained
+{% endraw %}```
+
+This time `kubectl drain` succeeds.
+
+Uncordon the second node to allow `zk-2` to be rescheduled.
+
+```shell
+kubectl uncordon kubernetes-minion-group-ixsl
+node "kubernetes-minion-group-ixsl" uncordoned
+```
+
+You can use `kubectl drain` in conjunction with PodDisruptionBudgets to ensure that your service
+remains available during maintenance. If drain is used to cordon nodes and evict pods prior to
+taking the node offline for maintenance, services that express a disruption budget will have that
+budget respected. You should always allocate additional capacity for critical services so that
+their Pods can be immediately rescheduled.
+
+{% endcapture %}
+
+{% capture cleanup %}
+* Use `kubectl uncordon` to uncordon all the nodes in your cluster.
+* You will need to delete the persistent storage media for the PersistentVolumes
+used in this tutorial. Follow the necessary steps, based on your environment,
+storage configuration, and provisioning method, to ensure that all storage is
+reclaimed.
+{% endcapture %}
+{% include templates/tutorial.md %}
diff --git a/docs/tutorials/stateful-application/zookeeper.yaml b/docs/tutorials/stateful-application/zookeeper.yaml
new file mode 100644
index 0000000000..75c4220576
--- /dev/null
+++ b/docs/tutorials/stateful-application/zookeeper.yaml
@@ -0,0 +1,164 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: zk-headless
+ labels:
+ app: zk-headless
+spec:
+ ports:
+ - port: 2888
+ name: server
+ - port: 3888
+ name: leader-election
+ clusterIP: None
+ selector:
+ app: zk
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: zk-config
+data:
+ ensemble: "zk-0;zk-1;zk-2"
+ jvm.heap: "2G"
+ tick: "2000"
+ init: "10"
+ sync: "5"
+ client.cnxns: "60"
+ snap.retain: "3"
+ purge.interval: "1"
+---
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+ name: zk-budget
+spec:
+ selector:
+ matchLabels:
+ app: zk
+ minAvailable: 2
+---
+apiVersion: apps/v1beta1
+kind: StatefulSet
+metadata:
+ name: zk
+spec:
+ serviceName: zk-headless
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: zk
+ annotations:
+ pod.alpha.kubernetes.io/initialized: "true"
+ scheduler.alpha.kubernetes.io/affinity: >
+ {
+ "podAntiAffinity": {
+ "requiredDuringSchedulingRequiredDuringExecution": [{
+ "labelSelector": {
+ "matchExpressions": [{
+ "key": "app",
+ "operator": "In",
+ "values": ["zk-headless"]
+ }]
+ },
+ "topologyKey": "kubernetes.io/hostname"
+ }]
+ }
+ }
+ spec:
+ containers:
+ - name: k8szk
+ imagePullPolicy: Always
+ image: gcr.io/google_samples/k8szk:v1
+ resources:
+ requests:
+ memory: "4Gi"
+ cpu: "1"
+ ports:
+ - containerPort: 2181
+ name: client
+ - containerPort: 2888
+ name: server
+ - containerPort: 3888
+ name: leader-election
+ env:
+ - name : ZK_ENSEMBLE
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: ensemble
+ - name : ZK_HEAP_SIZE
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: jvm.heap
+ - name : ZK_TICK_TIME
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: tick
+ - name : ZK_INIT_LIMIT
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: init
+ - name : ZK_SYNC_LIMIT
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: sync
+ - name : ZK_MAX_CLIENT_CNXNS
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: client.cnxns
+ - name: ZK_SNAP_RETAIN_COUNT
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: snap.retain
+ - name: ZK_PURGE_INTERVAL
+ valueFrom:
+ configMapKeyRef:
+ name: zk-config
+ key: purge.interval
+ - name: ZK_CLIENT_PORT
+ value: "2181"
+ - name: ZK_SERVER_PORT
+ value: "2888"
+ - name: ZK_ELECTION_PORT
+ value: "3888"
+ command:
+ - sh
+ - -c
+ - zkGenConfig.sh && zkServer.sh start-foreground
+ readinessProbe:
+ exec:
+ command:
+ - "zkOk.sh"
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
+ livenessProbe:
+ exec:
+ command:
+ - "zkOk.sh"
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
+ volumeMounts:
+ - name: datadir
+ mountPath: /var/lib/zookeeper
+ securityContext:
+ runAsUser: 1000
+ fsGroup: 1000
+ volumeClaimTemplates:
+ - metadata:
+ name: datadir
+ annotations:
+ volume.alpha.kubernetes.io/storage-class: anything
+ spec:
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 20Gi
diff --git a/test/examples_test.go b/test/examples_test.go
index cb876db9ec..22e71c8bb0 100644
--- a/test/examples_test.go
+++ b/test/examples_test.go
@@ -38,6 +38,8 @@ import (
"k8s.io/kubernetes/pkg/apis/extensions"
expvalidation "k8s.io/kubernetes/pkg/apis/extensions/validation"
"k8s.io/kubernetes/pkg/capabilities"
+ "k8s.io/kubernetes/pkg/apis/policy"
+ policyvalidation "k8s.io/kubernetes/pkg/apis/policy/validation"
"k8s.io/kubernetes/pkg/registry/batch/job"
"k8s.io/kubernetes/pkg/runtime"
"k8s.io/kubernetes/pkg/types"
@@ -147,6 +149,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
t.Namespace = api.NamespaceDefault
}
errors = apps_validation.ValidateStatefulSet(t)
+ case *policy.PodDisruptionBudget:
+ if t.Namespace == "" {
+ t.Namespace = api.NamespaceDefault
+ }
+ errors = policyvalidation.ValidatePodDisruptionBudget(t)
default:
errors = field.ErrorList{}
errors = append(errors, field.InternalError(field.NewPath(""), fmt.Errorf("no validation defined for %#v", obj)))
@@ -323,6 +330,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"mysql-configmap": {&api.ConfigMap{}},
"mysql-statefulset": {&apps.StatefulSet{}},
"web": {&api.Service{}, &apps.StatefulSet{}},
+ "zookeeper": {&api.Service{}, &api.ConfigMap{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
},
}
From a3a3c2fd80ed9f7dd17e40528385b1573856d290 Mon Sep 17 00:00:00 2001
From: "Madhusudan.C.S"
Date: Wed, 14 Dec 2016 14:51:55 -0800
Subject: [PATCH 25/63] Removed backticks from the left nav entries.
---
_data/guides.yml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/_data/guides.yml b/_data/guides.yml
index 933a50fcbc..cd685afd5a 100644
--- a/_data/guides.yml
+++ b/_data/guides.yml
@@ -305,7 +305,7 @@ toc:
- title: Administering Federation
section:
- - title: Using `kubefed`
+ - title: Using kubefed
path: /docs/admin/federation/kubefed/
- - title: Using `federation-up` and `deploy.sh`
+ - title: Using federation-up and deploy.sh
path: /docs/admin/federation/
From a1dededa56d75a1919c6af33377946ef03f48eda Mon Sep 17 00:00:00 2001
From: Jimmy Cuadra
Date: Wed, 14 Dec 2016 15:52:22 -0800
Subject: [PATCH 26/63] Fix the formatting of bullet lists on the kubelet auth
page.
---
.../kubelet-authentication-authorization.md | 36 +++++++++++--------
1 file changed, 21 insertions(+), 15 deletions(-)
diff --git a/docs/admin/kubelet-authentication-authorization.md b/docs/admin/kubelet-authentication-authorization.md
index b0617b8854..509792bf24 100644
--- a/docs/admin/kubelet-authentication-authorization.md
+++ b/docs/admin/kubelet-authentication-authorization.md
@@ -17,35 +17,40 @@ This document describes how to authenticate and authorize access to the kubelet'
## Kubelet authentication
By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured
-authentication methods are treated as anonymous requests, and given a username of `system:anonymous`
+authentication methods are treated as anonymous requests, and given a username of `system:anonymous`
and a group of `system:unauthenticated`.
To disable anonymous access and send `401 Unauthorized` responses to unauthenticated requests:
+
* start the kubelet with the `--anonymous-auth=false` flag
To enable X509 client certificate authentication to the kubelet's HTTPS endpoint:
-* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with
+
+* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with
* start the apiserver with `--kubelet-client-certificate` and `--kubelet-client-key` flags
* see the [apiserver authentication documentation](/docs/admin/authentication/#x509-client-certs) for more details
To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint:
+
* ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server
* start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
-* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens
+* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens
## Kubelet authorization
Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests.
There are many possible reasons to subdivide access to the kubelet API:
+
* anonymous auth is enabled, but anonymous users' ability to call the kubelet API should be limited
* bearer token auth is enabled, but arbitrary API users' (like service accounts) ability to call the kubelet API should be limited
* client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API
To subdivide access to the kubelet API, delegate authorization to the API server:
+
* ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server
* start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
-* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized
+* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized
The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver.
@@ -63,19 +68,20 @@ The resource and subresource is determined from the incoming request's path:
Kubelet API | resource | subresource
-------------|----------|------------
-/stats/* | nodes | stats
-/metrics/* | nodes | metrics
-/logs/* | nodes | log
-/spec/* | nodes | spec
+/stats/\* | nodes | stats
+/metrics/\* | nodes | metrics
+/logs/\* | nodes | log
+/spec/\* | nodes | spec
*all others* | nodes | proxy
-The namespace and API group attributes are always an empty string, and
+The namespace and API group attributes are always an empty string, and
the resource name is always the name of the kubelet's `Node` API object.
-When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key`
+When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key`
flags passed to the apiserver is authorized for the following attributes:
-* verb=*, resource=nodes, subresource=proxy
-* verb=*, resource=nodes, subresource=stats
-* verb=*, resource=nodes, subresource=log
-* verb=*, resource=nodes, subresource=spec
-* verb=*, resource=nodes, subresource=metrics
+
+* verb=\*, resource=nodes, subresource=proxy
+* verb=\*, resource=nodes, subresource=stats
+* verb=\*, resource=nodes, subresource=log
+* verb=\*, resource=nodes, subresource=spec
+* verb=\*, resource=nodes, subresource=metrics
From 8d8c5f9c0a8d6e40c574f7edbcfbc3c124cd8402 Mon Sep 17 00:00:00 2001
From: Alejandro Escobar
Date: Wed, 14 Dec 2016 12:47:36 -0800
Subject: [PATCH 27/63] updated the links to documents that do not exist
locally but remotely in github. These links are broken online.
missed a link.
---
docs/getting-started-guides/minikube.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/getting-started-guides/minikube.md b/docs/getting-started-guides/minikube.md
index 9d0264ccb3..4807e8dec8 100644
--- a/docs/getting-started-guides/minikube.md
+++ b/docs/getting-started-guides/minikube.md
@@ -308,11 +308,11 @@ Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmach
For more information about minikube, see the [proposal](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/local-cluster-ux.md).
## Additional Links:
-* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](./ROADMAP.md).
-* **Development Guide**: See [CONTRIBUTING.md](./CONTRIBUTING.md) for an overview of how to send pull requests.
-* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](./BUILD_GUIDE.md)
-* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube see the [adding dependencies guide](./ADD_DEPENDENCY.md)
-* **Updating Kubernetes**: For instructions on how to add a new dependency to minikube see the [updating kubernetes guide](./UPDATE_KUBERNETES.md)
+* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](https://github.com/kubernetes/minikube/blob/master/ROADMAP.md).
+* **Development Guide**: See [CONTRIBUTING.md](https://github.com/kubernetes/minikube/blob/master/CONTRIBUTING.md) for an overview of how to send pull requests.
+* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](https://github.com/kubernetes/minikube/blob/master/BUILD_GUIDE.md)
+* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube see the [adding dependencies guide](https://github.com/kubernetes/minikube/blob/master/ADD_DEPENDENCY.md)
+* **Updating Kubernetes**: For instructions on how to add a new dependency to minikube see the [updating kubernetes guide](https://github.com/kubernetes/minikube/blob/master/UPDATE_KUBERNETES.md)
## Community
From 061a332ac4e94590a3e9211c23e5af0f06814bec Mon Sep 17 00:00:00 2001
From: dbaumgarten
Date: Thu, 15 Dec 2016 14:58:46 +0100
Subject: [PATCH 28/63] Wrong path for cloud-config in kubeadm.md
The cloud-config file should be located under `/etc/kubernetes/cloud-config` instead of /etc/kubernetes/cloud-config.json.
(See: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/master/manifests.go#L41 )
If the file is not in this location, the controller-manager will fail to start, as it is given the --cloud-provider option without --cloud-config. (--cloud-config will only be used when `/etc/kubernetes/cloud-config` exists https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/master/manifests.go#L367 )
---
docs/admin/kubeadm.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/admin/kubeadm.md b/docs/admin/kubeadm.md
index e1c8537149..bc64c5de84 100644
--- a/docs/admin/kubeadm.md
+++ b/docs/admin/kubeadm.md
@@ -82,7 +82,7 @@ of the box. You can specify a cloud provider using `--cloud-provider`.
Valid values are the ones supported by `controller-manager`, namely `"aws"`,
`"azure"`, `"cloudstack"`, `"gce"`, `"mesos"`, `"openstack"`, `"ovirt"`,
`"rackspace"`, `"vsphere"`. In order to provide additional configuration for
-the cloud provider, you should create a `/etc/kubernetes/cloud-config.json`
+the cloud provider, you should create a `/etc/kubernetes/cloud-config`
file manually, before running `kubeadm init`. `kubeadm` automatically
picks those settings up and ensures other nodes are configured correctly.
You must also set the `--cloud-provider` and `--cloud-config` parameters
From 01bfb7925f788ceb8b3ba112997307708f60d050 Mon Sep 17 00:00:00 2001
From: Joe Rocklin
Date: Thu, 15 Dec 2016 11:47:10 -0500
Subject: [PATCH 29/63] Fix link to nuage
previous markdown resulted in a relative reference, which lead to a 404.
---
docs/admin/networking.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/admin/networking.md b/docs/admin/networking.md
index 9b77b3557a..903bac24f8 100644
--- a/docs/admin/networking.md
+++ b/docs/admin/networking.md
@@ -171,7 +171,7 @@ Lars Kellogg-Stedman.
### Nuage Networks VCS (Virtualized Cloud Services)
-[Nuage](www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
+[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage’s policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
From 7a145852b9aff00dfccfd6aaca61219743acd837 Mon Sep 17 00:00:00 2001
From: Michail Kargakis
Date: Wed, 9 Nov 2016 16:39:48 +0100
Subject: [PATCH 30/63] Proportional scaling in Deployments
---
docs/user-guide/deployments.md | 69 ++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
diff --git a/docs/user-guide/deployments.md b/docs/user-guide/deployments.md
index 84ea561bf4..8f138459d0 100644
--- a/docs/user-guide/deployments.md
+++ b/docs/user-guide/deployments.md
@@ -395,6 +395,75 @@ Events:
You can set `.spec.revisionHistoryLimit` field to specify how much revision history of this deployment you want to keep. By default,
all revision history will be kept; explicitly setting this field to `0` disallows a deployment being rolled back.
+## Scaling a Deployment
+
+You can scale a Deployment by using the following command:
+
+```shell
+$ kubectl scale deployment nginx-deployment --replicas 10
+deployment "nginx-deployment" scaled
+```
+
+Assuming [horizontal pod autoscaling](/docs/user-guide/horizontal-pod-autoscaling/walkthrough.md) is enabled
+in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of
+Pods you want to run based on the CPU utilization of your existing Pods.
+
+```shell
+$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
+deployment "nginx-deployment" autoscaled
+```
+
+RollingUpdate Deployments support running multiple versions of an application at the same time. When you
+or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
+or paused), then the Deployment controller will balance the additional replicas in the existing active
+ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.
+
+For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2.
+
+```shell
+$ kubectl get deploy
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+nginx-deployment 10 10 10 10 50s
+```
+
+You update to a new image which happens to be unresolvable from inside the cluster.
+
+```shell
+$ kubectl set image deploy/nginx-deployment nginx=nginx:sometag
+deployment "nginx-deployment" image updated
+```
+
+The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191 but it's blocked due to the
+maxUnavailable requirement that we mentioned above.
+
+```shell
+$ kubectl get rs
+NAME DESIRED CURRENT READY AGE
+nginx-deployment-1989198191 5 5 0 9s
+nginx-deployment-618515232 8 8 8 1m
+```
+
+Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
+to 15. The Deployment controller needs to decide where to add these new 5 replicas. If we weren't using
+proportional scaling, all 5 of them would be added to the new ReplicaSet. With proportional scaling, we
+spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the
+most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the
+ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.
+
+In our example above, the old ReplicaSet holds 8 of the 13 existing replicas and the new one
+holds 5, so of the 5 additional replicas, 3 (8/13 of 5, rounded) are added to the old ReplicaSet
+and 2 are added to the new ReplicaSet. The rollout process should eventually move all replicas
+to the new ReplicaSet, assuming the new replicas become healthy.
+
+```shell
+$ kubectl get deploy
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+nginx-deployment 15 18 7 8 7m
+$ kubectl get rs
+NAME DESIRED CURRENT READY AGE
+nginx-deployment-1989198191 7 7 0 7m
+nginx-deployment-618515232 11 11 11 7m
+```
+
## Pausing and Resuming a Deployment
You can also pause a Deployment mid-way and then resume it. A use case is to support canary deployment.
From bfa604351ff04bd35c4d5af5cb24adae59ef2bf3 Mon Sep 17 00:00:00 2001
From: Ben Balter
Date: Thu, 15 Dec 2016 15:16:54 -0500
Subject: [PATCH 31/63] add explicit titles to docs
---
docs/admin/accessing-the-api.md | 2 +-
docs/admin/addons.md | 1 +
docs/admin/admission-controllers.md | 2 +-
docs/admin/apparmor/index.md | 2 +-
docs/admin/authentication.md | 3 ++-
docs/admin/authorization.md | 2 +-
docs/admin/cluster-components.md | 2 +-
docs/admin/cluster-large.md | 15 +++++++--------
docs/admin/cluster-management.md | 2 +-
docs/admin/cluster-troubleshooting.md | 2 +-
docs/admin/daemons.md | 2 +-
docs/admin/dns.md | 2 +-
docs/admin/etcd.md | 3 +--
docs/admin/federation-apiserver.md | 2 ++
docs/admin/federation-controller-manager.md | 2 ++
docs/admin/federation/index.md | 3 ++-
docs/admin/garbage-collection.md | 2 +-
docs/admin/high-availability/index.md | 8 ++++----
docs/admin/index.md | 2 +-
docs/admin/kube-apiserver.md | 2 ++
docs/admin/kube-controller-manager.md | 2 ++
docs/admin/kube-proxy.md | 2 ++
docs/admin/kube-scheduler.md | 2 ++
docs/admin/kubeadm.md | 3 +--
.../admin/kubelet-authentication-authorization.md | 2 +-
docs/admin/kubelet-tls-bootstrapping.md | 2 +-
docs/admin/kubelet.md | 2 ++
docs/admin/limitrange/index.md | 2 +-
docs/admin/master-node-communication.md | 2 +-
docs/admin/multi-cluster.md | 2 +-
docs/admin/multiple-schedulers.md | 2 +-
docs/admin/multiple-zones.md | 2 +-
docs/admin/namespaces/index.md | 2 +-
docs/admin/namespaces/walkthrough.md | 2 +-
docs/admin/network-plugins.md | 2 +-
docs/admin/networking.md | 2 +-
docs/admin/node-conformance.md | 2 +-
docs/admin/node-problem.md | 2 +-
docs/admin/node.md | 2 +-
docs/admin/out-of-resource.md | 2 +-
docs/admin/ovs-networking.md | 2 +-
docs/admin/resourcequota/index.md | 2 +-
docs/admin/resourcequota/walkthrough.md | 2 +-
docs/admin/salt.md | 2 +-
docs/admin/service-accounts-admin.md | 2 +-
docs/admin/static-pods.md | 2 +-
.../api-reference/autoscaling/v1/definitions.html | 2 ++
docs/api-reference/autoscaling/v1/operations.html | 2 ++
docs/api-reference/batch/v1/definitions.html | 2 ++
docs/api-reference/batch/v1/operations.html | 2 ++
.../extensions/v1beta1/definitions.html | 2 ++
.../extensions/v1beta1/operations.html | 2 ++
docs/api-reference/v1/definitions.html | 2 ++
docs/api-reference/v1/operations.html | 2 ++
docs/api.md | 2 +-
.../abstractions/controllers/statefulsets.md | 1 +
docs/concepts/index.md | 1 +
docs/concepts/object-metadata/annotations.md | 1 +
docs/contribute/create-pull-request.md | 1 +
docs/contribute/page-templates.md | 5 +++--
docs/contribute/stage-documentation-changes.md | 1 +
docs/contribute/style-guide.md | 1 +
docs/contribute/write-new-topic.md | 1 +
docs/federation/api-reference/README.md | 2 ++
docs/getting-started-guides/alternatives.md | 1 +
docs/getting-started-guides/aws.md | 2 +-
docs/getting-started-guides/azure.md | 2 +-
docs/getting-started-guides/binary_release.md | 2 +-
.../centos/centos_manual_config.md | 2 +-
docs/getting-started-guides/clc.md | 3 ++-
docs/getting-started-guides/cloudstack.md | 2 +-
docs/getting-started-guides/coreos/azure/index.md | 1 +
.../coreos/bare_metal_offline.md | 2 +-
docs/getting-started-guides/coreos/index.md | 2 +-
docs/getting-started-guides/dcos.md | 2 +-
docs/getting-started-guides/docker-multinode.md | 1 +
.../fedora/fedora_ansible_config.md | 2 +-
.../fedora/fedora_manual_config.md | 2 +-
.../fedora/flannel_multi_node_cluster.md | 3 ++-
docs/getting-started-guides/gce.md | 3 +--
docs/getting-started-guides/index.md | 2 +-
docs/getting-started-guides/kops.md | 1 +
docs/getting-started-guides/kubeadm.md | 2 +-
docs/getting-started-guides/kubectl.md | 1 +
docs/getting-started-guides/libvirt-coreos.md | 2 +-
.../logging-elasticsearch.md | 2 +-
docs/getting-started-guides/logging.md | 2 +-
docs/getting-started-guides/meanstack.md | 2 +-
docs/getting-started-guides/mesos-docker.md | 3 +--
docs/getting-started-guides/mesos/index.md | 2 +-
docs/getting-started-guides/minikube.md | 2 +-
.../network-policy/calico.md | 2 +-
.../network-policy/romana.md | 2 +-
.../network-policy/walkthrough.md | 2 +-
docs/getting-started-guides/openstack-heat.md | 2 +-
docs/getting-started-guides/ovirt.md | 2 +-
docs/getting-started-guides/photon-controller.md | 2 +-
docs/getting-started-guides/rackspace.md | 2 +-
docs/getting-started-guides/rkt/index.md | 2 +-
docs/getting-started-guides/rkt/notes.md | 2 +-
docs/getting-started-guides/scratch.md | 2 +-
docs/getting-started-guides/vsphere.md | 2 +-
docs/getting-started-guides/windows/index.md | 2 +-
docs/hellonode.md | 2 +-
docs/index.md | 2 +-
docs/reference.md | 3 ++-
docs/reporting-security-issues.md | 2 +-
docs/samples.md | 3 ++-
.../port-forward-access-application-cluster.md | 1 +
.../http-proxy-access-api.md | 1 +
.../tasks/administer-cluster/assign-pods-nodes.md | 1 +
.../dns-horizontal-autoscaling.md | 1 +
.../tasks/administer-cluster/safely-drain-node.md | 4 ++--
.../assign-cpu-ram-container.md | 1 +
.../configure-volume-storage.md | 1 +
.../define-command-argument-container.md | 1 +
.../define-environment-variable-container.md | 1 +
.../distribute-credentials-secure.md | 1 +
.../determine-reason-pod-failure.md | 1 +
docs/tasks/index.md | 1 +
.../debugging-a-statefulset.md | 2 +-
docs/tasks/manage-stateful-set/delete-pods.md | 2 +-
.../manage-stateful-set/deleting-a-statefulset.md | 2 +-
.../manage-stateful-set/scale-stateful-set.md | 2 +-
.../upgrade-pet-set-to-stateful-set.md | 2 +-
docs/tasks/troubleshoot/debug-init-containers.md | 2 +-
docs/tools/index.md | 2 +-
docs/troubleshooting.md | 2 +-
docs/tutorials/index.md | 1 +
.../kubernetes-basics/cluster-interactive.html | 1 +
.../kubernetes-basics/cluster-intro.html | 5 +++--
.../kubernetes-basics/deploy-interactive.html | 1 +
.../tutorials/kubernetes-basics/deploy-intro.html | 1 +
.../kubernetes-basics/explore-interactive.html | 1 +
.../kubernetes-basics/explore-intro.html | 1 +
.../kubernetes-basics/expose-interactive.html | 1 +
.../tutorials/kubernetes-basics/expose-intro.html | 1 +
docs/tutorials/kubernetes-basics/index.html | 1 +
.../kubernetes-basics/scale-interactive.html | 1 +
docs/tutorials/kubernetes-basics/scale-intro.html | 1 +
.../kubernetes-basics/update-interactive.html | 1 +
.../tutorials/kubernetes-basics/update-intro.html | 1 +
.../stateful-application/basic-stateful-set.md | 1 +
.../run-replicated-stateful-application.md | 2 +-
.../run-stateful-application.md | 1 +
docs/tutorials/stateful-application/zookeeper.md | 1 +
.../expose-external-ip-address-service.md | 1 +
.../expose-external-ip-address.md | 1 +
.../run-stateless-application-deployment.md | 1 +
docs/user-guide/accessing-the-cluster.md | 14 +++++++-------
docs/user-guide/annotations.md | 2 +-
docs/user-guide/application-troubleshooting.md | 2 +-
docs/user-guide/compute-resources.md | 2 +-
docs/user-guide/config-best-practices.md | 2 +-
docs/user-guide/configmap/index.md | 3 ++-
docs/user-guide/configuring-containers.md | 2 +-
docs/user-guide/connecting-applications.md | 2 +-
.../connecting-to-applications-port-forward.md | 2 +-
.../connecting-to-applications-proxy.md | 2 +-
docs/user-guide/container-environment.md | 2 +-
docs/user-guide/containers.md | 2 +-
docs/user-guide/cron-jobs.md | 2 +-
.../debugging-pods-and-replication-controllers.md | 2 +-
docs/user-guide/debugging-services.md | 2 +-
docs/user-guide/deploying-applications.md | 3 +--
docs/user-guide/deployments.md | 2 +-
docs/user-guide/docker-cli-to-kubectl.md | 2 +-
docs/user-guide/downward-api/index.md | 2 +-
docs/user-guide/downward-api/volume/index.md | 1 +
docs/user-guide/environment-guide/index.md | 12 ++++++------
docs/user-guide/federation/configmap.md | 1 +
docs/user-guide/federation/daemonsets.md | 1 +
docs/user-guide/federation/deployment.md | 1 +
docs/user-guide/federation/events.md | 1 +
docs/user-guide/federation/federated-ingress.md | 1 +
docs/user-guide/federation/federated-services.md | 2 +-
docs/user-guide/federation/index.md | 1 +
docs/user-guide/federation/namespaces.md | 1 +
docs/user-guide/federation/replicasets.md | 1 +
docs/user-guide/federation/secrets.md | 1 +
docs/user-guide/garbage-collection.md | 2 +-
docs/user-guide/getting-into-containers.md | 2 +-
.../horizontal-pod-autoscaling/index.md | 2 +-
.../horizontal-pod-autoscaling/walkthrough.md | 2 +-
docs/user-guide/identifiers.md | 2 +-
docs/user-guide/images.md | 2 +-
docs/user-guide/index.md | 2 +-
docs/user-guide/ingress.md | 2 +-
docs/user-guide/introspection-and-debugging.md | 2 +-
docs/user-guide/jobs.md | 2 +-
docs/user-guide/jobs/expansions/index.md | 1 +
docs/user-guide/jobs/work-queue-1/index.md | 1 +
docs/user-guide/jobs/work-queue-2/index.md | 1 +
docs/user-guide/jsonpath.md | 2 +-
docs/user-guide/kubeconfig-file.md | 14 +++++++-------
docs/user-guide/kubectl-cheatsheet.md | 2 +-
docs/user-guide/kubectl-conventions.md | 2 +-
docs/user-guide/kubectl-overview.md | 2 +-
docs/user-guide/kubectl/index.md | 2 ++
docs/user-guide/kubectl/kubectl_annotate.md | 2 ++
docs/user-guide/kubectl/kubectl_api-versions.md | 2 ++
docs/user-guide/kubectl/kubectl_apply.md | 2 ++
docs/user-guide/kubectl/kubectl_attach.md | 2 ++
docs/user-guide/kubectl/kubectl_autoscale.md | 2 ++
docs/user-guide/kubectl/kubectl_cluster-info.md | 2 ++
docs/user-guide/kubectl/kubectl_config.md | 2 ++
.../kubectl/kubectl_config_current-context.md | 2 ++
.../kubectl/kubectl_config_set-cluster.md | 2 ++
.../kubectl/kubectl_config_set-context.md | 2 ++
.../kubectl/kubectl_config_set-credentials.md | 2 ++
docs/user-guide/kubectl/kubectl_config_set.md | 2 ++
docs/user-guide/kubectl/kubectl_config_unset.md | 2 ++
.../kubectl/kubectl_config_use-context.md | 2 ++
docs/user-guide/kubectl/kubectl_config_view.md | 2 ++
docs/user-guide/kubectl/kubectl_convert.md | 2 ++
docs/user-guide/kubectl/kubectl_cordon.md | 2 ++
docs/user-guide/kubectl/kubectl_create.md | 2 ++
.../kubectl/kubectl_create_configmap.md | 2 ++
.../kubectl/kubectl_create_namespace.md | 2 ++
docs/user-guide/kubectl/kubectl_create_secret.md | 2 ++
.../kubectl_create_secret_docker-registry.md | 2 ++
.../kubectl/kubectl_create_secret_generic.md | 2 ++
.../kubectl/kubectl_create_serviceaccount.md | 2 ++
docs/user-guide/kubectl/kubectl_delete.md | 2 ++
docs/user-guide/kubectl/kubectl_describe.md | 2 ++
docs/user-guide/kubectl/kubectl_drain.md | 2 ++
docs/user-guide/kubectl/kubectl_edit.md | 2 ++
docs/user-guide/kubectl/kubectl_exec.md | 2 ++
docs/user-guide/kubectl/kubectl_explain.md | 2 ++
docs/user-guide/kubectl/kubectl_expose.md | 2 ++
docs/user-guide/kubectl/kubectl_get.md | 2 ++
docs/user-guide/kubectl/kubectl_label.md | 2 ++
docs/user-guide/kubectl/kubectl_logs.md | 2 ++
docs/user-guide/kubectl/kubectl_patch.md | 2 ++
docs/user-guide/kubectl/kubectl_port-forward.md | 2 ++
docs/user-guide/kubectl/kubectl_proxy.md | 2 ++
docs/user-guide/kubectl/kubectl_replace.md | 2 ++
docs/user-guide/kubectl/kubectl_rolling-update.md | 2 ++
docs/user-guide/kubectl/kubectl_rollout.md | 2 ++
.../user-guide/kubectl/kubectl_rollout_history.md | 2 ++
docs/user-guide/kubectl/kubectl_rollout_pause.md | 2 ++
docs/user-guide/kubectl/kubectl_rollout_resume.md | 2 ++
docs/user-guide/kubectl/kubectl_rollout_undo.md | 2 ++
docs/user-guide/kubectl/kubectl_run.md | 2 ++
docs/user-guide/kubectl/kubectl_scale.md | 2 ++
docs/user-guide/kubectl/kubectl_stop.md | 2 ++
docs/user-guide/kubectl/kubectl_uncordon.md | 2 ++
docs/user-guide/kubectl/kubectl_version.md | 2 ++
docs/user-guide/labels.md | 2 +-
docs/user-guide/liveness/index.md | 2 +-
docs/user-guide/load-balancer.md | 2 +-
docs/user-guide/logging.md | 2 +-
docs/user-guide/managing-deployments.md | 3 ++-
docs/user-guide/monitoring.md | 2 +-
docs/user-guide/namespaces.md | 2 +-
docs/user-guide/networkpolicies.md | 2 +-
docs/user-guide/node-selection/index.md | 2 +-
docs/user-guide/persistent-volumes/index.md | 2 +-
docs/user-guide/persistent-volumes/walkthrough.md | 2 +-
docs/user-guide/petset.md | 2 +-
docs/user-guide/petset/bootstrapping/index.md | 1 +
docs/user-guide/pod-security-policy/index.md | 2 +-
docs/user-guide/pod-states.md | 2 +-
docs/user-guide/pods/index.md | 4 ++--
docs/user-guide/pods/multi-container.md | 2 +-
docs/user-guide/pods/single-container.md | 2 +-
docs/user-guide/prereqs.md | 2 +-
docs/user-guide/production-pods.md | 2 +-
docs/user-guide/quick-start.md | 2 +-
docs/user-guide/replicasets.md | 2 +-
docs/user-guide/replication-controller/index.md | 2 +-
.../replication-controller/operations.md | 3 ++-
.../resizing-a-replication-controller.md | 2 +-
docs/user-guide/rolling-updates.md | 2 +-
docs/user-guide/secrets/index.md | 2 +-
docs/user-guide/secrets/walkthrough.md | 4 ++--
docs/user-guide/security-context.md | 2 +-
docs/user-guide/service-accounts.md | 2 +-
docs/user-guide/services-firewalls.md | 2 +-
docs/user-guide/services/index.md | 2 +-
docs/user-guide/services/operations.md | 3 ++-
docs/user-guide/sharing-clusters.md | 2 +-
docs/user-guide/simple-nginx.md | 2 +-
docs/user-guide/thirdpartyresources.md | 2 +-
docs/user-guide/ui.md | 3 +--
docs/user-guide/update-demo/index.md | 15 +++++++--------
docs/user-guide/volumes.md | 2 +-
docs/user-guide/working-with-resources.md | 2 +-
docs/whatisk8s.md | 3 +--
editdocs.md | 1 +
kubernetes/third_party/swagger-ui/index.md | 4 ++++
291 files changed, 409 insertions(+), 212 deletions(-)
diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md
index cb3f3d4ce4..0e491ccf0d 100644
--- a/docs/admin/accessing-the-api.md
+++ b/docs/admin/accessing-the-api.md
@@ -3,7 +3,7 @@ assignees:
- bgrant0607
- erictune
- lavalamp
-
+title: Overview
---
This document describes how access to the Kubernetes API is controlled.
diff --git a/docs/admin/addons.md b/docs/admin/addons.md
index 1555f8263c..d387c972b6 100644
--- a/docs/admin/addons.md
+++ b/docs/admin/addons.md
@@ -1,4 +1,5 @@
---
+title: Installing Addons
---
## Overview
diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md
index 24da796163..475f2e4be9 100644
--- a/docs/admin/admission-controllers.md
+++ b/docs/admin/admission-controllers.md
@@ -6,7 +6,7 @@ assignees:
- erictune
- janetkuo
- thockin
-
+title: Using Admission Controllers
---
* TOC
diff --git a/docs/admin/apparmor/index.md b/docs/admin/apparmor/index.md
index 9730c07953..4c2d02d989 100644
--- a/docs/admin/apparmor/index.md
+++ b/docs/admin/apparmor/index.md
@@ -1,7 +1,7 @@
---
assignees:
- stclair
-
+title: AppArmor
---
AppArmor is a Linux kernel enhancement that can reduce the potential attack surface of an
diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md
index 0cb5ea4dbe..ab41a6fd39 100644
--- a/docs/admin/authentication.md
+++ b/docs/admin/authentication.md
@@ -5,8 +5,9 @@ assignees:
- ericchiang
- deads2k
- liggitt
-
+title: Authenticating
---
+
* TOC
{:toc}
diff --git a/docs/admin/authorization.md b/docs/admin/authorization.md
index f1f46985b2..6f76a1c033 100644
--- a/docs/admin/authorization.md
+++ b/docs/admin/authorization.md
@@ -4,7 +4,7 @@ assignees:
- lavalamp
- deads2k
- liggitt
-
+title: Using Authorization Plugins
---
In Kubernetes, authorization happens as a separate step from authentication.
diff --git a/docs/admin/cluster-components.md b/docs/admin/cluster-components.md
index c1bcae8577..0b913d8956 100644
--- a/docs/admin/cluster-components.md
+++ b/docs/admin/cluster-components.md
@@ -1,7 +1,7 @@
---
assignees:
- lavalamp
-
+title: Kubernetes Components
---
This document outlines the various binary components that need to run to
diff --git a/docs/admin/cluster-large.md b/docs/admin/cluster-large.md
index d2285c3346..f41df12689 100644
--- a/docs/admin/cluster-large.md
+++ b/docs/admin/cluster-large.md
@@ -1,11 +1,10 @@
----
-assignees:
-- davidopp
-- lavalamp
-
----
-
-
+---
+assignees:
+- davidopp
+- lavalamp
+title: Building Large Clusters
+---
+
## Support
At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
diff --git a/docs/admin/cluster-management.md b/docs/admin/cluster-management.md
index 97362c4bab..b1c4c340a3 100644
--- a/docs/admin/cluster-management.md
+++ b/docs/admin/cluster-management.md
@@ -2,7 +2,7 @@
assignees:
- lavalamp
- thockin
-
+title: Cluster Management Guide
---
* TOC
diff --git a/docs/admin/cluster-troubleshooting.md b/docs/admin/cluster-troubleshooting.md
index 8bab089ce6..89cd99926b 100644
--- a/docs/admin/cluster-troubleshooting.md
+++ b/docs/admin/cluster-troubleshooting.md
@@ -1,7 +1,7 @@
---
assignees:
- davidopp
-
+title: Troubleshooting Clusters
---
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
diff --git a/docs/admin/daemons.md b/docs/admin/daemons.md
index bab12268ba..fd2bc8afcb 100644
--- a/docs/admin/daemons.md
+++ b/docs/admin/daemons.md
@@ -1,7 +1,7 @@
---
assignees:
- erictune
-
+title: Daemon Sets
---
* TOC
diff --git a/docs/admin/dns.md b/docs/admin/dns.md
index 6115be7363..6ed558859b 100644
--- a/docs/admin/dns.md
+++ b/docs/admin/dns.md
@@ -3,7 +3,7 @@ assignees:
- ArtfulCoder
- davidopp
- lavalamp
-
+title: Using DNS Pods and Services
---
## Introduction
diff --git a/docs/admin/etcd.md b/docs/admin/etcd.md
index 14b36a33be..ea4f6b09b3 100644
--- a/docs/admin/etcd.md
+++ b/docs/admin/etcd.md
@@ -1,10 +1,9 @@
---
assignees:
- lavalamp
-
+title: Configuring Kubernetes Use of etcd
---
-
[etcd](https://coreos.com/etcd/docs/2.2.1/) is a highly-available key value
store which Kubernetes uses for persistent storage of all of its REST API
objects.
diff --git a/docs/admin/federation-apiserver.md b/docs/admin/federation-apiserver.md
index 9236c62d38..77b066854c 100644
--- a/docs/admin/federation-apiserver.md
+++ b/docs/admin/federation-apiserver.md
@@ -1,5 +1,7 @@
---
+title: federation-apiserver
---
+
## federation-apiserver
diff --git a/docs/admin/federation-controller-manager.md b/docs/admin/federation-controller-manager.md
index a65bbef5f3..5e87fce3d0 100644
--- a/docs/admin/federation-controller-manager.md
+++ b/docs/admin/federation-controller-manager.md
@@ -1,5 +1,7 @@
---
+title: federation-controller-manager
---
+
## federation-controller-manager
diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md
index ec40d581bb..478f7563de 100644
--- a/docs/admin/federation/index.md
+++ b/docs/admin/federation/index.md
@@ -3,8 +3,9 @@ assignees:
- madhusudancs
- mml
- nikhiljindal
-
+title: Using `federation-up` and `deploy.sh`
---
+
This guide explains how to set up cluster federation that lets us control multiple Kubernetes clusters.
diff --git a/docs/admin/garbage-collection.md b/docs/admin/garbage-collection.md
index a3112a07f1..0276596f6c 100644
--- a/docs/admin/garbage-collection.md
+++ b/docs/admin/garbage-collection.md
@@ -1,7 +1,7 @@
---
assignees:
- mikedanese
-
+title: Configuring kubelet Garbage Collection
---
* TOC
diff --git a/docs/admin/high-availability/index.md b/docs/admin/high-availability/index.md
index ad78270e4a..42e51d3d51 100644
--- a/docs/admin/high-availability/index.md
+++ b/docs/admin/high-availability/index.md
@@ -1,7 +1,7 @@
----
-
----
-
+---
+title: Building High-Availability Clusters
+---
+
## Introduction
This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
diff --git a/docs/admin/index.md b/docs/admin/index.md
index 3624bb4202..98f38b428a 100644
--- a/docs/admin/index.md
+++ b/docs/admin/index.md
@@ -2,7 +2,7 @@
assignees:
- davidopp
- lavalamp
-
+title: Admin Guide
---
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
diff --git a/docs/admin/kube-apiserver.md b/docs/admin/kube-apiserver.md
index 12d2b76a49..e8142fac4e 100644
--- a/docs/admin/kube-apiserver.md
+++ b/docs/admin/kube-apiserver.md
@@ -1,5 +1,7 @@
---
+title: kube-apiserver
---
+
## kube-apiserver
diff --git a/docs/admin/kube-controller-manager.md b/docs/admin/kube-controller-manager.md
index 68cca731e7..5dab0da7e2 100644
--- a/docs/admin/kube-controller-manager.md
+++ b/docs/admin/kube-controller-manager.md
@@ -1,5 +1,7 @@
---
+title: kube-controller-manager
---
+
## kube-controller-manager
diff --git a/docs/admin/kube-proxy.md b/docs/admin/kube-proxy.md
index 491a91d06e..f643748624 100644
--- a/docs/admin/kube-proxy.md
+++ b/docs/admin/kube-proxy.md
@@ -1,5 +1,7 @@
---
+title: kube-proxy
---
+
## kube-proxy
diff --git a/docs/admin/kube-scheduler.md b/docs/admin/kube-scheduler.md
index 9b4d7264e6..bb6799bb73 100644
--- a/docs/admin/kube-scheduler.md
+++ b/docs/admin/kube-scheduler.md
@@ -1,5 +1,7 @@
---
+title: kube-scheduler
---
+
## kube-scheduler
diff --git a/docs/admin/kubeadm.md b/docs/admin/kubeadm.md
index e1c8537149..3dc59fd2d6 100644
--- a/docs/admin/kubeadm.md
+++ b/docs/admin/kubeadm.md
@@ -4,10 +4,9 @@ assignees:
- luxas
- errordeveloper
- jbeda
-
+title: kubeadm reference
---
-
This document provides information on how to use kubeadm's advanced options.
Running `kubeadm init` bootstraps a Kubernetes cluster. This consists of the
diff --git a/docs/admin/kubelet-authentication-authorization.md b/docs/admin/kubelet-authentication-authorization.md
index b0617b8854..035f3068fa 100644
--- a/docs/admin/kubelet-authentication-authorization.md
+++ b/docs/admin/kubelet-authentication-authorization.md
@@ -1,7 +1,7 @@
---
assignees:
- liggitt
-
+title: Kubelet authentication/authorization
---
* TOC
diff --git a/docs/admin/kubelet-tls-bootstrapping.md b/docs/admin/kubelet-tls-bootstrapping.md
index 3458bdb310..f8d56923ee 100644
--- a/docs/admin/kubelet-tls-bootstrapping.md
+++ b/docs/admin/kubelet-tls-bootstrapping.md
@@ -1,7 +1,7 @@
---
assignees:
- mikedanese
-
+title: TLS bootstrapping
---
* TOC
diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md
index a3004ea1aa..74186eb1ba 100644
--- a/docs/admin/kubelet.md
+++ b/docs/admin/kubelet.md
@@ -1,5 +1,7 @@
---
+title: Overview
---
+
## kubelet
diff --git a/docs/admin/limitrange/index.md b/docs/admin/limitrange/index.md
index 0336264bc3..767513a1a3 100644
--- a/docs/admin/limitrange/index.md
+++ b/docs/admin/limitrange/index.md
@@ -2,7 +2,7 @@
assignees:
- derekwaynecarr
- janetkuo
-
+title: Setting Pod CPU and Memory Limits
---
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
diff --git a/docs/admin/master-node-communication.md b/docs/admin/master-node-communication.md
index 9e8b9cfa9e..91ecff7ef9 100644
--- a/docs/admin/master-node-communication.md
+++ b/docs/admin/master-node-communication.md
@@ -3,7 +3,7 @@ assignees:
- dchen1107
- roberthbailey
- liggitt
-
+title: Master-Node communication
---
* TOC
diff --git a/docs/admin/multi-cluster.md b/docs/admin/multi-cluster.md
index 6359782409..1d238d8e13 100644
--- a/docs/admin/multi-cluster.md
+++ b/docs/admin/multi-cluster.md
@@ -1,7 +1,7 @@
---
assignees:
- davidopp
-
+title: Using Multiple Clusters
---
You may want to set up multiple Kubernetes clusters, both to
diff --git a/docs/admin/multiple-schedulers.md b/docs/admin/multiple-schedulers.md
index 8ba152ac04..eb1c4c44f9 100644
--- a/docs/admin/multiple-schedulers.md
+++ b/docs/admin/multiple-schedulers.md
@@ -2,7 +2,7 @@
assignees:
- davidopp
- madhusudancs
-
+title: Configuring Multiple Schedulers
---
Kubernetes ships with a default scheduler that is described [here](/docs/admin/kube-scheduler/).
diff --git a/docs/admin/multiple-zones.md b/docs/admin/multiple-zones.md
index bfde54213e..e215b31716 100644
--- a/docs/admin/multiple-zones.md
+++ b/docs/admin/multiple-zones.md
@@ -3,7 +3,7 @@ assignees:
- jlowdermilk
- justinsb
- quinton-hoole
-
+title: Running in Multiple Zones
---
## Introduction
diff --git a/docs/admin/namespaces/index.md b/docs/admin/namespaces/index.md
index 574f41b10a..b723a9c361 100644
--- a/docs/admin/namespaces/index.md
+++ b/docs/admin/namespaces/index.md
@@ -2,7 +2,7 @@
assignees:
- derekwaynecarr
- janetkuo
-
+title: Sharing a Cluster with Namespaces
---
A Namespace is a mechanism to partition resources created by users into
diff --git a/docs/admin/namespaces/walkthrough.md b/docs/admin/namespaces/walkthrough.md
index 2a3e6298ea..9faecf89e9 100644
--- a/docs/admin/namespaces/walkthrough.md
+++ b/docs/admin/namespaces/walkthrough.md
@@ -2,7 +2,7 @@
assignees:
- derekwaynecarr
- janetkuo
-
+title: Namespaces Walkthrough
---
Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
diff --git a/docs/admin/network-plugins.md b/docs/admin/network-plugins.md
index 6c5f354423..6b80f1a002 100644
--- a/docs/admin/network-plugins.md
+++ b/docs/admin/network-plugins.md
@@ -3,7 +3,7 @@ assignees:
- dcbw
- freehan
- thockin
-
+title: Network Plugins
---
* TOC
diff --git a/docs/admin/networking.md b/docs/admin/networking.md
index 9b77b3557a..231b7b315d 100644
--- a/docs/admin/networking.md
+++ b/docs/admin/networking.md
@@ -2,7 +2,7 @@
assignees:
- lavalamp
- thockin
-
+title: Networking in Kubernetes
---
Kubernetes approaches networking somewhat differently than Docker does by
diff --git a/docs/admin/node-conformance.md b/docs/admin/node-conformance.md
index 52935bc4cc..f53ba858b1 100644
--- a/docs/admin/node-conformance.md
+++ b/docs/admin/node-conformance.md
@@ -1,7 +1,7 @@
---
assignees:
- Random-Liu
-
+title: Validate Node Setup
---
* TOC
diff --git a/docs/admin/node-problem.md b/docs/admin/node-problem.md
index b6926ba15b..0d7b57005e 100644
--- a/docs/admin/node-problem.md
+++ b/docs/admin/node-problem.md
@@ -2,7 +2,7 @@
assignees:
- Random-Liu
- dchen1107
-
+title: Monitoring Node Health
---
* TOC
diff --git a/docs/admin/node.md b/docs/admin/node.md
index ad0867ffc8..3c3e16178d 100644
--- a/docs/admin/node.md
+++ b/docs/admin/node.md
@@ -3,7 +3,7 @@ assignees:
- caesarxuchao
- dchen1107
- lavalamp
-
+title: Nodes
---
* TOC
diff --git a/docs/admin/out-of-resource.md b/docs/admin/out-of-resource.md
index 8af7114ed6..a663703d9c 100644
--- a/docs/admin/out-of-resource.md
+++ b/docs/admin/out-of-resource.md
@@ -3,7 +3,7 @@ assignees:
- derekwaynecarr
- vishh
- timstclair
-
+title: Configuring Out Of Resource Handling
---
* TOC
diff --git a/docs/admin/ovs-networking.md b/docs/admin/ovs-networking.md
index 7a8f89506c..9370dcec46 100644
--- a/docs/admin/ovs-networking.md
+++ b/docs/admin/ovs-networking.md
@@ -2,7 +2,7 @@
assignees:
- lavalamp
- thockin
-
+title: Kubernetes OpenVSwitch GRE/VxLAN networking
---
This document describes how OpenVSwitch is used to set up networking between pods across nodes.
diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md
index ff76942702..c967975dec 100644
--- a/docs/admin/resourcequota/index.md
+++ b/docs/admin/resourcequota/index.md
@@ -1,7 +1,7 @@
---
assignees:
- derekwaynecarr
-
+title: Resource Quotas
---
When several users or teams share a cluster with a fixed number of nodes,
diff --git a/docs/admin/resourcequota/walkthrough.md b/docs/admin/resourcequota/walkthrough.md
index 7422f2abcf..d5ef21ff6c 100644
--- a/docs/admin/resourcequota/walkthrough.md
+++ b/docs/admin/resourcequota/walkthrough.md
@@ -2,7 +2,7 @@
assignees:
- derekwaynecarr
- janetkuo
-
+title: Applying Resource Quotas and Limits
---
This example demonstrates a typical setup to control for resource usage in a namespace.
diff --git a/docs/admin/salt.md b/docs/admin/salt.md
index 5d82b54d39..ba4d4fe227 100644
--- a/docs/admin/salt.md
+++ b/docs/admin/salt.md
@@ -2,7 +2,7 @@
assignees:
- davidopp
- lavalamp
-
+title: Configuring Kubernetes with Salt
---
The Kubernetes cluster can be configured using Salt.
diff --git a/docs/admin/service-accounts-admin.md b/docs/admin/service-accounts-admin.md
index 810f4d7515..4a31fbeced 100644
--- a/docs/admin/service-accounts-admin.md
+++ b/docs/admin/service-accounts-admin.md
@@ -4,7 +4,7 @@ assignees:
- davidopp
- lavalamp
- liggitt
-
+title: Managing Service Accounts
---
*This is a Cluster Administrator guide to service accounts. It assumes knowledge of
diff --git a/docs/admin/static-pods.md b/docs/admin/static-pods.md
index 531494fb04..36235929a3 100644
--- a/docs/admin/static-pods.md
+++ b/docs/admin/static-pods.md
@@ -1,7 +1,7 @@
---
assignees:
- jsafrane
-
+title: Static Pods
---
**If you are running clustered Kubernetes and are using static pods to run a pod on every node, you should probably be using a [DaemonSet](/docs/admin/daemons/)!**
diff --git a/docs/api-reference/autoscaling/v1/definitions.html b/docs/api-reference/autoscaling/v1/definitions.html
index 949fa2e507..768d4e1543 100755
--- a/docs/api-reference/autoscaling/v1/definitions.html
+++ b/docs/api-reference/autoscaling/v1/definitions.html
@@ -1,5 +1,7 @@
---
+title: Autoscaling API Definitions
---
+
diff --git a/docs/api-reference/autoscaling/v1/operations.html b/docs/api-reference/autoscaling/v1/operations.html
index 0e38da7627..cfc457d1e5 100755
--- a/docs/api-reference/autoscaling/v1/operations.html
+++ b/docs/api-reference/autoscaling/v1/operations.html
@@ -1,5 +1,7 @@
---
+title: Autoscaling API Operations
---
+
diff --git a/docs/api-reference/batch/v1/definitions.html b/docs/api-reference/batch/v1/definitions.html
index be391e4acd..9989f4c4ca 100755
--- a/docs/api-reference/batch/v1/definitions.html
+++ b/docs/api-reference/batch/v1/definitions.html
@@ -1,5 +1,7 @@
---
+title: Batch API Definitions
---
+
diff --git a/docs/api-reference/batch/v1/operations.html b/docs/api-reference/batch/v1/operations.html
index 691318f810..5be3ce0b60 100755
--- a/docs/api-reference/batch/v1/operations.html
+++ b/docs/api-reference/batch/v1/operations.html
@@ -1,5 +1,7 @@
---
+title: Batch API Operations
---
+
diff --git a/docs/api-reference/extensions/v1beta1/definitions.html b/docs/api-reference/extensions/v1beta1/definitions.html
index 3863ee11f8..b92378524f 100755
--- a/docs/api-reference/extensions/v1beta1/definitions.html
+++ b/docs/api-reference/extensions/v1beta1/definitions.html
@@ -1,5 +1,7 @@
---
+title: Extensions API Definitions
---
+
diff --git a/docs/api-reference/extensions/v1beta1/operations.html b/docs/api-reference/extensions/v1beta1/operations.html
index c1d9c191ec..a97f64b789 100755
--- a/docs/api-reference/extensions/v1beta1/operations.html
+++ b/docs/api-reference/extensions/v1beta1/operations.html
@@ -1,5 +1,7 @@
---
+title: Extensions API Operations
---
+
diff --git a/docs/api-reference/v1/definitions.html b/docs/api-reference/v1/definitions.html
index 6c2515eeaa..e207f68c0a 100755
--- a/docs/api-reference/v1/definitions.html
+++ b/docs/api-reference/v1/definitions.html
@@ -1,5 +1,7 @@
---
+title: Kubernetes API Definitions
---
+
diff --git a/docs/api-reference/v1/operations.html b/docs/api-reference/v1/operations.html
index dfdb663b8b..f75e9a44f5 100755
--- a/docs/api-reference/v1/operations.html
+++ b/docs/api-reference/v1/operations.html
@@ -1,5 +1,7 @@
---
+title: Kubernetes API Operations
---
+
diff --git a/docs/api.md b/docs/api.md
index 3479dbff94..7964f604d0 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -3,7 +3,7 @@ assignees:
- bgrant0607
- erictune
- lavalamp
-
+title: Kubernetes API Overview
---
Primary system and API concepts are documented in the [User guide](/docs/user-guide/).
diff --git a/docs/concepts/abstractions/controllers/statefulsets.md b/docs/concepts/abstractions/controllers/statefulsets.md
index 01825fc257..6f8e9629c3 100644
--- a/docs/concepts/abstractions/controllers/statefulsets.md
+++ b/docs/concepts/abstractions/controllers/statefulsets.md
@@ -7,6 +7,7 @@ assignees:
- janetkuo
- kow3ns
- smarterclayton
+title: StatefulSets
---
{% capture overview %}
diff --git a/docs/concepts/index.md b/docs/concepts/index.md
index 72d1ecebb2..c26b972202 100644
--- a/docs/concepts/index.md
+++ b/docs/concepts/index.md
@@ -1,4 +1,5 @@
---
+title: Concepts
---
The Concepts section of the Kubernetes documentation is a work in progress.
diff --git a/docs/concepts/object-metadata/annotations.md b/docs/concepts/object-metadata/annotations.md
index e337493fe1..fbf73f48fd 100644
--- a/docs/concepts/object-metadata/annotations.md
+++ b/docs/concepts/object-metadata/annotations.md
@@ -1,4 +1,5 @@
---
+title: Annotations
---
{% capture overview %}
diff --git a/docs/contribute/create-pull-request.md b/docs/contribute/create-pull-request.md
index e74b49436e..4637c0b066 100644
--- a/docs/contribute/create-pull-request.md
+++ b/docs/contribute/create-pull-request.md
@@ -1,4 +1,5 @@
---
+title: Creating a Documentation Pull Request
---
{% capture overview %}
diff --git a/docs/contribute/page-templates.md b/docs/contribute/page-templates.md
index 4b19cde39b..93fa03a6bb 100644
--- a/docs/contribute/page-templates.md
+++ b/docs/contribute/page-templates.md
@@ -1,7 +1,8 @@
---
redirect_from:
- - /docs/templatedemos/
- - /docs/templatedemos.html
+- "/docs/templatedemos/"
+- "/docs/templatedemos.html"
+title: Using Page Templates
---
diff --git a/kubernetes/third_party/swagger-ui/index.md b/kubernetes/third_party/swagger-ui/index.md
index c481f993f3..425a061477 100644
--- a/kubernetes/third_party/swagger-ui/index.md
+++ b/kubernetes/third_party/swagger-ui/index.md
@@ -1,3 +1,7 @@
+---
+title: Kubernetes API Swagger Spec
+---
+
---
Kubernetes swagger UI has now been replaced by our generated API reference docs
From 664459c407614594af78ea25dba8deef767e9a14 Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Thu, 15 Dec 2016 13:00:17 -0800
Subject: [PATCH 32/63] Bump default {{page.version}} to v1.5.1
---
_config.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/_config.yml b/_config.yml
index 8b131eb433..6397dda511 100644
--- a/_config.yml
+++ b/_config.yml
@@ -17,7 +17,7 @@ defaults:
scope:
path: ""
values:
- version: "v1.3"
+ version: "v1.5.1"
githubbranch: "master"
docsbranch: "master"
-
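
Because the `version` value sits under Jekyll's site-wide `defaults` in `_config.yml`, every page picks it up as `{{page.version}}` with no further changes. A sketch of the resulting block, assuming the neighboring keys shown in the hunk are otherwise untouched:

```
defaults:
- scope:
    path: ""
  values:
    version: "v1.5.1"
    githubbranch: "master"
    docsbranch: "master"
```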
From 9b432f24e61fe68c79b4d9a7337130834d881331 Mon Sep 17 00:00:00 2001
From: Janet Kuo
Date: Thu, 15 Dec 2016 13:14:47 -0800
Subject: [PATCH 33/63] Remove .0 suffix from all references to
{{page.version}}
---
docs/admin/network-plugins.md | 4 ++--
docs/getting-started-guides/minikube.md | 6 +++---
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/admin/network-plugins.md b/docs/admin/network-plugins.md
index 6c5f354423..7f2d59f0a2 100644
--- a/docs/admin/network-plugins.md
+++ b/docs/admin/network-plugins.md
@@ -26,13 +26,13 @@ The kubelet has a single default network plugin, and a default network common to
## Network Plugin Requirements
-Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
+Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like docker with a bridge) work correctly with the iptables proxy.
### Exec
-Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/pkg/kubelet/network/exec/exec.go) for more details.
+Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e., if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/pkg/kubelet/network/exec/exec.go) for more details.
### CNI
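
As a concrete reading of the Exec placement rule above (the directory is the example one from the text, not a required default):

```
# kubelet started with --network-plugin-dir=/usr/lib/kubernetes
# a plugin named "bridge" must then be the executable at:
/usr/lib/kubernetes/bridge/bridge
```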
diff --git a/docs/getting-started-guides/minikube.md b/docs/getting-started-guides/minikube.md
index 4807e8dec8..b7fefcf3c4 100644
--- a/docs/getting-started-guides/minikube.md
+++ b/docs/getting-started-guides/minikube.md
@@ -36,13 +36,13 @@ Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a
**Kubectl for Linux/amd64**
```
-curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```
**Kubectl for OS X/amd64**
```
-curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```
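
Either way, you can sanity-check that the downloaded binary matches `{{page.version}}`; a quick usage example (not part of the patch):

```
kubectl version --client
```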
### Instructions
@@ -316,4 +316,4 @@ For more information about minikube, see the [proposal](https://github.com/kuber
## Community
-Contributions, questions, and comments are all welcomed and encouraged! minkube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
\ No newline at end of file
+Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
From 668e1a27ddfae0f7485d8be41f3e43ceeb9da022 Mon Sep 17 00:00:00 2001
From: gunjan5
Date: Thu, 15 Dec 2016 13:25:49 -0800
Subject: [PATCH 34/63] Update Calico docs link
---
docs/admin/addons.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/admin/addons.md b/docs/admin/addons.md
index 1555f8263c..e91ba6e661 100644
--- a/docs/admin/addons.md
+++ b/docs/admin/addons.md
@@ -11,7 +11,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
## Networking and Network Policy
-* [Calico](http://docs.projectcalico.org/v1.6/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
+* [Calico](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
From b301f4ad3a6f737d06f8653abc6b059bb179206d Mon Sep 17 00:00:00 2001
From: Ben Balter
Date: Thu, 15 Dec 2016 17:06:04 -0500
Subject: [PATCH 35/63] add notitle attribute to pages that shouldn't have a
title
---
docs/admin/federation-apiserver.md | 1 +
docs/admin/federation-controller-manager.md | 1 +
docs/admin/kube-apiserver.md | 1 +
docs/admin/kube-controller-manager.md | 1 +
docs/admin/kube-proxy.md | 1 +
docs/admin/kube-scheduler.md | 1 +
docs/admin/kubelet.md | 1 +
7 files changed, 7 insertions(+)
diff --git a/docs/admin/federation-apiserver.md b/docs/admin/federation-apiserver.md
index 77b066854c..72d71547c7 100644
--- a/docs/admin/federation-apiserver.md
+++ b/docs/admin/federation-apiserver.md
@@ -1,5 +1,6 @@
---
title: federation-apiserver
+notitle: true
---
## federation-apiserver
diff --git a/docs/admin/federation-controller-manager.md b/docs/admin/federation-controller-manager.md
index 5e87fce3d0..d3dca5bf06 100644
--- a/docs/admin/federation-controller-manager.md
+++ b/docs/admin/federation-controller-manager.md
@@ -1,5 +1,6 @@
---
title: federation-controller-manager
+notitle: true
---
## federation-controller-manager
diff --git a/docs/admin/kube-apiserver.md b/docs/admin/kube-apiserver.md
index e8142fac4e..bc08ef1f0a 100644
--- a/docs/admin/kube-apiserver.md
+++ b/docs/admin/kube-apiserver.md
@@ -1,5 +1,6 @@
---
title: kube-apiserver
+notitle: true
---
## kube-apiserver
diff --git a/docs/admin/kube-controller-manager.md b/docs/admin/kube-controller-manager.md
index 5dab0da7e2..f6f11c5f37 100644
--- a/docs/admin/kube-controller-manager.md
+++ b/docs/admin/kube-controller-manager.md
@@ -1,5 +1,6 @@
---
title: kube-controller-manager
+notitle: true
---
## kube-controller-manager
diff --git a/docs/admin/kube-proxy.md b/docs/admin/kube-proxy.md
index f643748624..31d3263b5d 100644
--- a/docs/admin/kube-proxy.md
+++ b/docs/admin/kube-proxy.md
@@ -1,5 +1,6 @@
---
title: kube-proxy
+notitle: true
---
## kube-proxy
diff --git a/docs/admin/kube-scheduler.md b/docs/admin/kube-scheduler.md
index bb6799bb73..6d3b8c9f64 100644
--- a/docs/admin/kube-scheduler.md
+++ b/docs/admin/kube-scheduler.md
@@ -1,5 +1,6 @@
---
title: kube-scheduler
+notitle: true
---
## kube-scheduler
diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md
index 74186eb1ba..b272f869ab 100644
--- a/docs/admin/kubelet.md
+++ b/docs/admin/kubelet.md
@@ -1,5 +1,6 @@
---
title: Overview
+notitle: true
---
## kubelet
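
`notitle: true` only has an effect if the site's layout consults it before emitting the page heading; the layout itself is not part of this patch. A hypothetical Liquid sketch of such a check:

```
{% unless page.notitle %}
  <h1>{{ page.title }}</h1>
{% endunless %}
```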
From 6b9326f0851d6e6a603698f5dd48463ee1593298 Mon Sep 17 00:00:00 2001
From: Doug Davis
Date: Thu, 15 Dec 2016 18:30:55 -0500
Subject: [PATCH 36/63] fix indentation of a few lines
---
docs/user-guide/kubectl-overview.md | 3 ---
1 file changed, 3 deletions(-)
diff --git a/docs/user-guide/kubectl-overview.md b/docs/user-guide/kubectl-overview.md
index cc08e47c68..eca4770bc1 100644
--- a/docs/user-guide/kubectl-overview.md
+++ b/docs/user-guide/kubectl-overview.md
@@ -18,7 +18,6 @@ kubectl [command] [TYPE] [NAME] [flags]
```
where `command`, `TYPE`, `NAME`, and `flags` are:
-
* `command`: Specifies the operation that you want to perform on one or more resources, for example `create`, `get`, `describe`, `delete`.
* `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-sensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:
@@ -27,11 +26,9 @@ where `command`, `TYPE`, `NAME`, and `flags` are:
$ kubectl get pods pod1
$ kubectl get po pod1
```
-
* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `$ kubectl get pods`.
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
-
* To specify resources by type and name:
* To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`
Example: `$ kubectl get pod example-pod1 example-pod2`
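
For completeness, the overview also allows specifying resources by file instead of by type and name; a brief sketch with a hypothetical `./pod.yaml`:

```
$ kubectl get -f ./pod.yaml
```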
From 5961a799fe5b3b985fbb6c03d6cc2cda34d190ca Mon Sep 17 00:00:00 2001
From: Cole Mickens
Date: Mon, 14 Nov 2016 16:00:18 -0800
Subject: [PATCH 37/63] azure: update for k8s on acs launch
---
_data/guides.yml | 4 +-
docs/getting-started-guides/azure.md | 28 +-
.../coreos/azure/.gitignore | 1 -
...kubernetes-cluster-main-nodes-template.yml | 335 ------------------
.../coreos/azure/index.md | 246 -------------
.../coreos/azure/package.json | 19 -
docs/getting-started-guides/coreos/index.md | 6 -
docs/getting-started-guides/index.md | 10 +-
images/docs/initial_cluster.png | Bin 173212 -> 0 bytes
9 files changed, 31 insertions(+), 618 deletions(-)
delete mode 100644 docs/getting-started-guides/coreos/azure/.gitignore
delete mode 100644 docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml
delete mode 100644 docs/getting-started-guides/coreos/azure/index.md
delete mode 100644 docs/getting-started-guides/coreos/azure/package.json
delete mode 100644 images/docs/initial_cluster.png
diff --git a/_data/guides.yml b/_data/guides.yml
index cd685afd5a..c134e7a8ca 100644
--- a/_data/guides.yml
+++ b/_data/guides.yml
@@ -171,10 +171,10 @@ toc:
path: /docs/getting-started-guides/gce/
- title: Running Kubernetes on AWS EC2
path: /docs/getting-started-guides/aws/
+ - title: Running Kubernetes on Azure Container Service
+ path: https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
- title: Running Kubernetes on Azure
path: /docs/getting-started-guides/azure/
- - title: Running Kubernetes on Azure (Weave-based)
- path: /docs/getting-started-guides/coreos/azure/
- title: Running Kubernetes on CenturyLink Cloud
path: /docs/getting-started-guides/clc/
- title: Running Kubernetes on IBM SoftLayer
diff --git a/docs/getting-started-guides/azure.md b/docs/getting-started-guides/azure.md
index 40652e3172..093ce614f9 100644
--- a/docs/getting-started-guides/azure.md
+++ b/docs/getting-started-guides/azure.md
@@ -1,12 +1,30 @@
---
assignees:
- colemickens
-- jeffmendoza
+- brendandburns
---
-The recommended approach for deploying a Kubernetes 1.4 cluster on Azure is the
-[`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere) project.
+## Azure Container Service
-You will want to take a look at the
-[Azure Getting Started Guide](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/azure/README.md).
+The [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) offers simple
+deployments of one of three open source orchestrators: DC/OS, Swarm, and Kubernetes.
+
+For an example of deploying a Kubernetes cluster onto Azure via the Azure Container Service:
+
+**[Microsoft Azure Container Service - Kubernetes Walkthrough](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough)**
+
+## Custom Deployments: ACS-Engine
+
+The core of the Azure Container Service is **open source** and available on GitHub for the community
+to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**.
+
+ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Container
+Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
+agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Container Service.
+
+The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Container Service.
+The resulting output is an Azure Resource Manager template that can be checked into source control and then used
+to deploy Kubernetes clusters into Azure.
+
+You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**.
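
The walkthrough linked above has the authoritative commands; as a rough, illustrative sketch of the generate-then-deploy flow (command names, paths, and the `kubernetes.json` cluster definition here are assumptions, not taken from this patch):

```
acs-engine examples/kubernetes.json      # emits ARM templates under _output/
az group create --name k8s --location westus
az group deployment create --resource-group k8s \
    --template-file _output/<cluster>/azuredeploy.json \
    --parameters @_output/<cluster>/azuredeploy.parameters.json
```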
diff --git a/docs/getting-started-guides/coreos/azure/.gitignore b/docs/getting-started-guides/coreos/azure/.gitignore
deleted file mode 100644
index c2658d7d1b..0000000000
--- a/docs/getting-started-guides/coreos/azure/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-node_modules/
diff --git a/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml b/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml
deleted file mode 100644
index d44b26318d..0000000000
--- a/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml
+++ /dev/null
@@ -1,335 +0,0 @@
-## This file is used as input to deployment script, which amends it as needed.
-## More specifically, we need to add environment files for as many nodes as we
-## are going to deploy.
-
-write_files:
- - path: /opt/bin/curl-retry.sh
- permissions: '0755'
- owner: root
- content: |
- #!/bin/sh -x
- until curl $@
- do sleep 1
- done
-
-coreos:
- update:
- group: stable
- reboot-strategy: off
- units:
- - name: systemd-networkd-wait-online.service
- drop-ins:
- - name: 50-check-github-is-reachable.conf
- content: |
- [Service]
- ExecStart=/bin/sh -x -c \
- 'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done'
-
- - name: weave-network.target
- enable: true
- content: |
- [Unit]
- Description=Weave Network Setup Complete
- Documentation=man:systemd.special(7)
- RefuseManualStart=no
- After=network-online.target
- [Install]
- WantedBy=multi-user.target
- WantedBy=kubernetes-master.target
- WantedBy=kubernetes-node.target
-
- - name: kubernetes-master.target
- enable: true
- command: start
- content: |
- [Unit]
- Description=Kubernetes Cluster Master
- Documentation=http://kubernetes.io/
- RefuseManualStart=no
- After=weave-network.target
- Requires=weave-network.target
- ConditionHost=kube-00
- Wants=kube-apiserver.service
- Wants=kube-scheduler.service
- Wants=kube-controller-manager.service
- Wants=kube-proxy.service
- [Install]
- WantedBy=multi-user.target
-
- - name: kubernetes-node.target
- enable: true
- command: start
- content: |
- [Unit]
- Description=Kubernetes Cluster Node
- Documentation=http://kubernetes.io/
- RefuseManualStart=no
- After=weave-network.target
- Requires=weave-network.target
- ConditionHost=!kube-00
- Wants=kube-proxy.service
- Wants=kubelet.service
- [Install]
- WantedBy=multi-user.target
-
- - name: 10-weave.network
- runtime: false
- content: |
- [Match]
- Type=bridge
- Name=weave*
- [Network]
-
- - name: install-weave.service
- enable: true
- content: |
- [Unit]
- After=network-online.target
- After=docker.service
- Before=weave.service
- Description=Install Weave
- Documentation=http://docs.weave.works/
- Requires=network-online.target
- [Service]
- EnvironmentFile=-/etc/weave.%H.env
- EnvironmentFile=-/etc/weave.env
- Type=oneshot
- RemainAfterExit=yes
- TimeoutStartSec=0
- ExecStartPre=/bin/mkdir -p /opt/bin/
- ExecStartPre=/opt/bin/curl-retry.sh \
- --silent \
- --location \
- git.io/weave \
- --output /opt/bin/weave
- ExecStartPre=/usr/bin/chmod +x /opt/bin/weave
- ExecStart=/opt/bin/weave setup
- [Install]
- WantedBy=weave-network.target
- WantedBy=weave.service
-
- - name: weaveproxy.service
- enable: true
- content: |
- [Unit]
- After=install-weave.service
- After=docker.service
- Description=Weave proxy for Docker API
- Documentation=http://docs.weave.works/
- Requires=docker.service
- Requires=install-weave.service
- [Service]
- EnvironmentFile=-/etc/weave.%H.env
- EnvironmentFile=-/etc/weave.env
- ExecStartPre=/opt/bin/weave launch-proxy --rewrite-inspect --without-dns
- ExecStart=/usr/bin/docker attach weaveproxy
- Restart=on-failure
- ExecStop=/opt/bin/weave stop-proxy
- [Install]
- WantedBy=weave-network.target
-
- - name: weave.service
- enable: true
- content: |
- [Unit]
- After=install-weave.service
- After=docker.service
- Description=Weave Network Router
- Documentation=http://docs.weave.works/
- Requires=docker.service
- Requires=install-weave.service
- [Service]
- TimeoutStartSec=0
- EnvironmentFile=-/etc/weave.%H.env
- EnvironmentFile=-/etc/weave.env
- ExecStartPre=/opt/bin/weave launch-router $WEAVE_PEERS
- ExecStart=/usr/bin/docker attach weave
- Restart=on-failure
- ExecStop=/opt/bin/weave stop-router
- [Install]
- WantedBy=weave-network.target
-
- - name: weave-expose.service
- enable: true
- content: |
- [Unit]
- After=install-weave.service
- After=weave.service
- After=docker.service
- Documentation=http://docs.weave.works/
- Requires=docker.service
- Requires=install-weave.service
- Requires=weave.service
- [Service]
- Type=oneshot
- RemainAfterExit=yes
- TimeoutStartSec=0
- EnvironmentFile=-/etc/weave.%H.env
- EnvironmentFile=-/etc/weave.env
- ExecStart=/opt/bin/weave expose
- ExecStop=/opt/bin/weave hide
- [Install]
- WantedBy=weave-network.target
-
- - name: install-kubernetes.service
- enable: true
- content: |
- [Unit]
- After=network-online.target
- Before=kube-apiserver.service
- Before=kube-controller-manager.service
- Before=kubelet.service
- Before=kube-proxy.service
- Description=Download Kubernetes Binaries
- Documentation=http://kubernetes.io/
- Requires=network-online.target
- [Service]
- Environment=KUBE_RELEASE_TARBALL=https://github.com/kubernetes/kubernetes/releases/download/v1.2.2/kubernetes.tar.gz
- ExecStartPre=/bin/mkdir -p /opt/
- ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz
- ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/
- ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt
- ExecStartPost=/bin/chmod o+rx -R /opt/kubernetes
- ExecStartPost=/bin/ln -s /opt/kubernetes/server/bin/kubectl /opt/bin/
- ExecStartPost=/bin/mv /tmp/kubernetes/examples/guestbook /home/core/guestbook-example
- ExecStartPost=/bin/chown core. -R /home/core/guestbook-example
- ExecStartPost=/bin/rm -rf /tmp/kubernetes
- ExecStartPost=/bin/sed 's/# type: LoadBalancer/type: NodePort/' -i /home/core/guestbook-example/frontend-service.yaml
- RemainAfterExit=yes
- Type=oneshot
- [Install]
- WantedBy=kubernetes-master.target
- WantedBy=kubernetes-node.target
-
- - name: kube-apiserver.service
- enable: true
- content: |
- [Unit]
- After=install-kubernetes.service
- Before=kube-controller-manager.service
- Before=kube-scheduler.service
- ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-apiserver
- Description=Kubernetes API Server
- Documentation=http://kubernetes.io/
- Wants=install-kubernetes.service
- ConditionHost=kube-00
- [Service]
- ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
- --insecure-bind-address=0.0.0.0 \
- --advertise-address=$public_ipv4 \
- --insecure-port=8080 \
- $ETCD_SERVERS \
- --service-cluster-ip-range=10.16.0.0/12 \
- --cloud-provider= \
- --logtostderr=true
- Restart=always
- RestartSec=10
- [Install]
- WantedBy=kubernetes-master.target
-
- - name: kube-scheduler.service
- enable: true
- content: |
- [Unit]
- After=kube-apiserver.service
- After=install-kubernetes.service
- ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-scheduler
- Description=Kubernetes Scheduler
- Documentation=http://kubernetes.io/
- Wants=kube-apiserver.service
- ConditionHost=kube-00
- [Service]
- ExecStart=/opt/kubernetes/server/bin/kube-scheduler \
- --logtostderr=true \
- --master=127.0.0.1:8080
- Restart=always
- RestartSec=10
- [Install]
- WantedBy=kubernetes-master.target
-
- - name: kube-controller-manager.service
- enable: true
- content: |
- [Unit]
- After=install-kubernetes.service
- After=kube-apiserver.service
- ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-controller-manager
- Description=Kubernetes Controller Manager
- Documentation=http://kubernetes.io/
- Wants=kube-apiserver.service
- Wants=install-kubernetes.service
- ConditionHost=kube-00
- [Service]
- ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \
- --master=127.0.0.1:8080 \
- --logtostderr=true
- Restart=always
- RestartSec=10
- [Install]
- WantedBy=kubernetes-master.target
-
- - name: kubelet.service
- enable: true
- content: |
- [Unit]
- After=install-kubernetes.service
- ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubelet
- Description=Kubernetes Kubelet
- Documentation=http://kubernetes.io/
- Wants=install-kubernetes.service
- ConditionHost=!kube-00
- [Service]
- ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
- ExecStart=/opt/kubernetes/server/bin/kubelet \
- --docker-endpoint=unix://var/run/weave/weave.sock \
- --address=0.0.0.0 \
- --port=10250 \
- --hostname-override=%H \
- --api-servers=http://kube-00:8080 \
- --logtostderr=true \
- --cluster-dns=10.16.0.3 \
- --cluster-domain=kube.local \
- --config=/etc/kubernetes/manifests/
- Restart=always
- RestartSec=10
- [Install]
- WantedBy=kubernetes-node.target
-
- - name: kube-proxy.service
- enable: true
- content: |
- [Unit]
- After=install-kubernetes.service
- ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-proxy
- Description=Kubernetes Proxy
- Documentation=http://kubernetes.io/
- Wants=install-kubernetes.service
- [Service]
- ExecStart=/opt/kubernetes/server/bin/kube-proxy \
- --master=http://kube-00:8080 \
- --logtostderr=true
- Restart=always
- RestartSec=10
- [Install]
- WantedBy=kubernetes-master.target
- WantedBy=kubernetes-node.target
-
- - name: kube-create-addons.service
- enable: true
- content: |
- [Unit]
- After=install-kubernetes.service
- ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubectl
- ConditionPathIsDirectory=/etc/kubernetes/addons/
- ConditionHost=kube-00
- Description=Kubernetes Addons
- Documentation=http://kubernetes.io/
- Wants=install-kubernetes.service
- Wants=kube-apiserver.service
- [Service]
- Type=oneshot
- RemainAfterExit=no
- ExecStart=/bin/bash -c 'until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done'
- SuccessExitStatus=1
- [Install]
- WantedBy=kubernetes-master.target
diff --git a/docs/getting-started-guides/coreos/azure/index.md b/docs/getting-started-guides/coreos/azure/index.md
deleted file mode 100644
index 589cf81fcc..0000000000
--- a/docs/getting-started-guides/coreos/azure/index.md
+++ /dev/null
@@ -1,246 +0,0 @@
----
----
-
-* TOC
-{:toc}
-
-
-In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
-
-### Prerequisites
-
-1. You need an Azure account.
-
-## Let's go!
-
-To get started, you need to checkout the code:
-
-```shell
-https://github.com/weaveworks-guides/weave-kubernetes-coreos-azure
-cd weave-kubernetes-coreos-azure
-```
-
-You will need to have [Node.js installed](http://nodejs.org/download/) on you machine. If you have previously used Azure CLI, you should have it already.
-
-First, you need to install some of the dependencies with
-
-```shell
-npm install
-```
-
-Now, all you need to do is:
-
-```shell
-./azure-login.js -u
-./create-kubernetes-cluster.js
-```
-
-This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes: 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.
-If you need to pass Azure specific options for the creation script you can do this via additional environment variables e.g.
-
-```shell
-AZ_SUBSCRIPTION= AZ_LOCATION="East US" ./create-kubernetes-cluster.js
-# or
-AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
-```
-
-
-
-Once the creation of Azure VMs has finished, you should see the following:
-
-```shell
-...
-azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf `
-azure_wrapper/info: The hosts in this deployment are:
- [ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
-azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
-```
-
-Let's login to the master node like so:
-
-```shell
-ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
-```
-
-> Note: config file name will be different, make sure to use the one you see.
-
-Check there are 2 nodes in the cluster:
-
-```shell
-core@kube-00 ~ $ kubectl get nodes
-NAME LABELS STATUS
-kube-01 kubernetes.io/hostname=kube-01 Ready
-kube-02 kubernetes.io/hostname=kube-02 Ready
-```
-
-## Deploying the workload
-
-Let's follow the Guestbook example now:
-
-```shell
-kubectl create -f ~/guestbook-example
-```
-
-You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Pending` to `Running`.
-
-```shell
-kubectl get pods --watch
-```
-
-> Note: the most time it will spend downloading Docker container images on each of the nodes.
-
-Eventually you should see:
-
-```shell
-NAME READY STATUS RESTARTS AGE
-frontend-0a9xi 1/1 Running 0 4m
-frontend-4wahe 1/1 Running 0 4m
-frontend-6l36j 1/1 Running 0 4m
-redis-master-talmr 1/1 Running 0 4m
-redis-slave-12zfd 1/1 Running 0 4m
-redis-slave-3nbce 1/1 Running 0 4m
-```
-
-## Scaling
-
-Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
-
-You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/kubernetes/docs/getting-started-guides/coreos/azure/`).
-
-First, lets set the size of new VMs:
-
-```shell
-export AZ_VM_SIZE=Large
-```
-
-Now, run scale script with state file of the previous deployment and number of nodes to add:
-
-```shell
-core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
-...
-azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf `
-azure_wrapper/info: The hosts in this deployment are:
- [ 'etcd-00',
- 'etcd-01',
- 'etcd-02',
- 'kube-00',
- 'kube-01',
- 'kube-02',
- 'kube-03',
- 'kube-04' ]
-azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
-```
-
-> Note: this step has created new files in `./output`.
-
-Back on `kube-00`:
-
-```shell
-core@kube-00 ~ $ kubectl get nodes
-NAME LABELS STATUS
-kube-01 kubernetes.io/hostname=kube-01 Ready
-kube-02 kubernetes.io/hostname=kube-02 Ready
-kube-03 kubernetes.io/hostname=kube-03 Ready
-kube-04 kubernetes.io/hostname=kube-04 Ready
-```
-
-You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
-
-First, double-check how many replication controllers there are:
-
-```shell
-core@kube-00 ~ $ kubectl get rc
-ONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
-redis-master master redis name=redis-master 1
-redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
-```
-
-As there are 4 nodes, let's scale proportionally:
-
-```shell
-core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
-scaled
-core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
-scaled
-```
-
-Check what you have now:
-
-```shell
-core@kube-00 ~ $ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
-redis-master master redis name=redis-master 1
-redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
-```
-
-You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
-
-```shell
-core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
-NAME READY STATUS RESTARTS AGE
-frontend-0a9xi 1/1 Running 0 22m
-frontend-4wahe 1/1 Running 0 22m
-frontend-6l36j 1/1 Running 0 22m
-frontend-z9oxo 1/1 Running 0 41s
-```
-
-## Exposing the app to the outside world
-
-There is no native Azure load-balancer support in Kubernetes 1.0, however here is how you can expose the Guestbook app to the Internet.
-
-```shell
-./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
-Guestbook app is on port 31605, will map it to port 80 on kube-00
-info: Executing command vm endpoint create
-+ Getting virtual machines
-+ Reading network configuration
-+ Updating network configuration
-info: vm endpoint create command OK
-info: Executing command vm endpoint show
-+ Getting virtual machines
-data: Name : tcp-80-31605
-data: Local port : 31605
-data: Protcol : tcp
-data: Virtual IP Address : 137.117.156.164
-data: Direct server return : Disabled
-info: vm endpoint show command OK
-```
-
-You then should be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
-
-## Next steps
-
-You now have a full-blown cluster running in Azure, congrats!
-
-You should probably try deploy other [example apps](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or write your own ;)
-
-## Tear down...
-
-If you don't wish care about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
-
-```shell
-./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
-```
-
-> Note: make sure to use the _latest state file_, as after scaling there is a new one.
-
-By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
-
-## Support Level
-
-
-IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
--------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
-Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
-
-
-For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
-
-
-## Further reading
-
-Please see the [Kubernetes docs](/docs/) for more details on administering
-and using a Kubernetes cluster
-
diff --git a/docs/getting-started-guides/coreos/azure/package.json b/docs/getting-started-guides/coreos/azure/package.json
deleted file mode 100644
index 2ab720ea45..0000000000
--- a/docs/getting-started-guides/coreos/azure/package.json
+++ /dev/null
@@ -1,19 +0,0 @@
-{
- "name": "coreos-azure-weave",
- "version": "1.0.0",
- "description": "Small utility to bring up a woven CoreOS cluster",
- "main": "index.js",
- "scripts": {
- "test": "echo \"Error: no test specified\" && exit 1"
- },
- "author": "Ilya Dmitrichenko ",
- "license": "Apache 2.0",
- "dependencies": {
- "azure-cli": "^0.10.1",
- "colors": "^1.0.3",
- "js-yaml": "^3.2.5",
- "openssl-wrapper": "^0.2.1",
- "underscore": "^1.7.0",
- "underscore.string": "^3.0.2"
- }
-}
diff --git a/docs/getting-started-guides/coreos/index.md b/docs/getting-started-guides/coreos/index.md
index 80199a61c0..d0840cedde 100644
--- a/docs/getting-started-guides/coreos/index.md
+++ b/docs/getting-started-guides/coreos/index.md
@@ -71,12 +71,6 @@ Guide to running a single master, multi-worker cluster controlled by an OS X men
-[**Resizable multi-node cluster on Azure with Weave**](/docs/getting-started-guides/coreos/azure/)
-
-Guide to running an HA etcd cluster with a single master on Azure. Uses the Azure node.js CLI to resize the cluster.
-
-
-
[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
Configure a single master, single worker cluster on VMware ESXi.
diff --git a/docs/getting-started-guides/index.md b/docs/getting-started-guides/index.md
index 609a0cc03d..6bac4acde8 100644
--- a/docs/getting-started-guides/index.md
+++ b/docs/getting-started-guides/index.md
@@ -37,6 +37,9 @@ Use the [Minikube getting started guide](/docs/getting-started-guides/minikube/)
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
clusters.
+[Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) makes it easy to deploy Kubernetes
+clusters.
+
[Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.
[AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds (including AWS and Google Cloud Platform).
@@ -54,8 +57,7 @@ few commands, and have active community support.
- [GCE](/docs/getting-started-guides/gce)
- [AWS](/docs/getting-started-guides/aws)
-- [Azure](/docs/getting-started-guides/azure/)
-- [Azure](/docs/getting-started-guides/coreos/azure/) (Weave-based, contributed by WeaveWorks employees)
+- [Azure](/docs/getting-started-guides/azure)
- [CenturyLink Cloud](/docs/getting-started-guides/clc)
- [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer)
@@ -129,8 +131,8 @@ AppsCode.com | Saltstack | Debian | multi-support | [docs](https://ap
KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/products/kubernetes/) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
-Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
-Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens))
+Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | | Commercial
+Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | [Community (Microsoft)](https://github.com/Azure/acs-engine)
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
diff --git a/images/docs/initial_cluster.png b/images/docs/initial_cluster.png
deleted file mode 100644
index 99646a3fd06ece2c88cbe47a35d59a863d5f8e7a..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 173212
(base85-encoded image data omitted)