Merge branch 'master' into escape-from-liquid-template

pull/1974/head
Peter Lee 2016-12-17 08:44:07 -06:00 committed by GitHub
commit 6f2b08ced7
12 changed files with 194 additions and 705 deletions

View File

@ -171,10 +171,10 @@ toc:
path: /docs/getting-started-guides/gce/
- title: Running Kubernetes on AWS EC2
path: /docs/getting-started-guides/aws/
- title: Running Kubernetes on Azure Container Service
path: https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
- title: Running Kubernetes on Azure
path: /docs/getting-started-guides/azure/
- title: Running Kubernetes on Azure (Weave-based)
path: /docs/getting-started-guides/coreos/azure/
- title: Running Kubernetes on CenturyLink Cloud
path: /docs/getting-started-guides/clc/
- title: Running Kubernetes on IBM SoftLayer

View File

@ -141,10 +141,10 @@ By default, `kubeadm init` automatically generates the token used to initialise
each new node. If you would like to manually specify this token, you can use the
`--token` flag. The token must be of the format `<6 character string>.<16 character string>`.
- `--use-kubernetes-version` (default 'v1.5.1') the kubernetes version to initialise
`kubeadm` was originally built for Kubernetes version **v1.4.0**; older versions are not
supported. With this flag you can try any future version, e.g. **v1.6.0-beta.1**
whenever it comes out (check [releases page](https://github.com/kubernetes/kubernetes/releases)
for a full list of available versions).
@ -168,6 +168,59 @@ necessary.
By default, when `kubeadm init` runs, a token is generated and revealed in the output.
That's the token you should use here.
## Using kubeadm with a configuration file
WARNING: kubeadm is in alpha and the configuration API syntax will likely change before GA.
It's possible to configure kubeadm with a configuration file instead of command line flags, and some more advanced features may only be
available as configuration file options.
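For example, assuming you have saved a `MasterConfiguration` like the sample below as `/etc/kubernetes/kubeadm.yaml` (the path is just an example), you would point `kubeadm init` at it; the `--config` flag is the config-from-file option in this release, but check `kubeadm init --help` if in doubt:

```console
# kubeadm init --config=/etc/kubernetes/kubeadm.yaml
```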
### Sample Master Configuration
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddresses:
  - <address1|string>
  - <address2|string>
  bindPort: <int>
  externalDNSNames:
  - <dnsname1|string>
  - <dnsname2|string>
cloudProvider: <string>
discovery:
  bindPort: <int>
etcd:
  endpoints:
  - <endpoint1|string>
  - <endpoint2|string>
  caFile: <path|string>
  certFile: <path|string>
  keyFile: <path|string>
kubernetesVersion: <string>
networking:
  dnsDomain: <string>
  serviceSubnet: <cidr>
  podSubnet: <cidr>
secrets:
  givenToken: <token|string>
```
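For illustration, a filled-in master configuration might look like the following; every value here is a placeholder for a hypothetical environment, not a recommended default:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddresses:
  - 192.168.0.10
kubernetesVersion: v1.5.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
secrets:
  givenToken: f0c861.753c505740ecde4c
```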
### Sample Node Configuration
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: NodeConfiguration
apiPort: <int>
discoveryPort: <int>
masterAddresses:
- <master1>
secrets:
  givenToken: <token|string>
```
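Correspondingly, a node configuration for joining the hypothetical master above might look like this (again, purely illustrative values):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: NodeConfiguration
masterAddresses:
- 192.168.0.10
secrets:
  givenToken: f0c861.753c505740ecde4c
```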
## Automating kubeadm
Rather than copying the token you obtained from `kubeadm init` to each node, as
@ -175,13 +228,12 @@ in the basic `kubeadm` tutorials, you can parallelize the token distribution for
easier automation. To implement this automation, you must know the IP address
that the master will have after it is started.
1. Generate a token. This token must have the form `<6 character string>.<16 character string>`.

Kubeadm can pre-generate a token for you:
```console
$ kubeadm token generate
```
1. Start both the master node and the worker nodes concurrently with this token. As they come up they should find each other and form the cluster.
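Concretely, the same pre-generated token is passed to both commands; with the hypothetical token from above this would look like:

```console
# kubeadm init --token=f0c861.753c505740ecde4c               # on the master
# kubeadm join --token=f0c861.753c505740ecde4c <master-ip>   # on each node
```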
@ -191,6 +243,7 @@ Once the cluster is up, you can grab the admin credentials from the master node
## Environment variables
There are some environment variables that modify the way that `kubeadm` works. Most users will have no need to set these.
These environment variables are a short-term solution; eventually they will be integrated into the kubeadm configuration file.
| Variable | Default | Description |
| --- | --- | --- |
@ -200,36 +253,10 @@ There are some environment variables that modify the way that `kubeadm` works.
| `KUBE_HYPERKUBE_IMAGE` | `` | If set, use a single hyperkube image with this name. If not set, individual images per server component will be used. |
| `KUBE_DISCOVERY_IMAGE` | `gcr.io/google_containers/kube-discovery-<arch>:1.0` | The bootstrap discovery helper image to use. |
| `KUBE_ETCD_IMAGE` | `gcr.io/google_containers/etcd-<arch>:2.2.5` | The etcd container image to use. |
| `KUBE_COMPONENT_LOGLEVEL` | `--v=4` | Logging configuration for all Kubernetes components |
| `KUBE_REPO_PREFIX` | `gcr.io/google_containers` | The image prefix for all images that are used. |
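For example, to make kubeadm pull all component images from your own registry mirror, you could export the override before running it (the registry name below is a placeholder):

```console
# export KUBE_REPO_PREFIX=registry.example.com/kubernetes
# kubeadm init
```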
## Releases and release notes
If you already have kubeadm installed and want to upgrade, run `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.
- Second release between v1.4 and v1.5: `v1.5.0-alpha.2.421+a6bea3d79b8bba`
- Switch to the 10.96.0.0/12 subnet: [#35290](https://github.com/kubernetes/kubernetes/pull/35290)
- Fix kubeadm on AWS by including /etc/ssl/certs in the controller-manager [#33681](https://github.com/kubernetes/kubernetes/pull/33681)
- The API was refactored and is now componentconfig: [#33728](https://github.com/kubernetes/kubernetes/pull/33728), [#34147](https://github.com/kubernetes/kubernetes/pull/34147) and [#34555](https://github.com/kubernetes/kubernetes/pull/34555)
- Allow kubeadm to get config options from a file: [#34501](https://github.com/kubernetes/kubernetes/pull/34501), [#34885](https://github.com/kubernetes/kubernetes/pull/34885) and [#34891](https://github.com/kubernetes/kubernetes/pull/34891)
- Implement preflight checks: [#34341](https://github.com/kubernetes/kubernetes/pull/34341) and [#35843](https://github.com/kubernetes/kubernetes/pull/35843)
- Using kubernetes v1.4.4 by default: [#34419](https://github.com/kubernetes/kubernetes/pull/34419) and [#35270](https://github.com/kubernetes/kubernetes/pull/35270)
- Make api and discovery ports configurable and default to 6443: [#34719](https://github.com/kubernetes/kubernetes/pull/34719)
- Implement kubeadm reset: [#34807](https://github.com/kubernetes/kubernetes/pull/34807)
- Make kubeadm poll/wait for endpoints instead of directly fail when the master isn't available [#34703](https://github.com/kubernetes/kubernetes/pull/34703) and [#34718](https://github.com/kubernetes/kubernetes/pull/34718)
- Allow empty directories in the directory preflight check: [#35632](https://github.com/kubernetes/kubernetes/pull/35632)
- Started adding unit tests: [#35231](https://github.com/kubernetes/kubernetes/pull/35231), [#35326](https://github.com/kubernetes/kubernetes/pull/35326) and [#35332](https://github.com/kubernetes/kubernetes/pull/35332)
- Various enhancements: [#35075](https://github.com/kubernetes/kubernetes/pull/35075), [#35111](https://github.com/kubernetes/kubernetes/pull/35111), [#35119](https://github.com/kubernetes/kubernetes/pull/35119), [#35124](https://github.com/kubernetes/kubernetes/pull/35124), [#35265](https://github.com/kubernetes/kubernetes/pull/35265) and [#35777](https://github.com/kubernetes/kubernetes/pull/35777)
- Bug fixes: [#34352](https://github.com/kubernetes/kubernetes/pull/34352), [#34558](https://github.com/kubernetes/kubernetes/pull/34558), [#34573](https://github.com/kubernetes/kubernetes/pull/34573), [#34834](https://github.com/kubernetes/kubernetes/pull/34834), [#34607](https://github.com/kubernetes/kubernetes/pull/34607), [#34907](https://github.com/kubernetes/kubernetes/pull/34907) and [#35796](https://github.com/kubernetes/kubernetes/pull/35796)
- Initial v1.4 release: `v1.5.0-alpha.0.1534+cf7301f16c0363`
## Troubleshooting
* Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, e.g.
```
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
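To apply these settings without rebooting, you can reload them with `sysctl`; this is standard `procps` behaviour, nothing kubeadm-specific:

```console
# sysctl --system
```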
Refer to the [CHANGELOG.md](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md) for more information.

View File

@ -1,12 +1,30 @@
---
assignees:
- colemickens
- jeffmendoza
- brendandburns
---
## Azure Container Service
The [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) offers simple
deployments of one of three open source orchestrators: DC/OS, Swarm, and Kubernetes clusters.
For an example of deploying a Kubernetes cluster onto Azure via the Azure Container Service:
**[Microsoft Azure Container Service - Kubernetes Walkthrough](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough)**
## Custom Deployments: ACS-Engine
The core of the Azure Container Service is **open source** and available on GitHub for the community
to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**.
ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Container
Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Container Service.
The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Container Service.
The resulting output is an Azure Resource Manager template that can be checked into source control and then used
to deploy Kubernetes clusters into Azure.
You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**.

View File

@ -1 +0,0 @@
node_modules/

View File

@ -1,335 +0,0 @@
## This file is used as input to deployment script, which amends it as needed.
## More specifically, we need to add environment files for as many nodes as we
## are going to deploy.
write_files:
- path: /opt/bin/curl-retry.sh
permissions: '0755'
owner: root
content: |
#!/bin/sh -x
until curl $@
do sleep 1
done
coreos:
update:
group: stable
reboot-strategy: off
units:
- name: systemd-networkd-wait-online.service
drop-ins:
- name: 50-check-github-is-reachable.conf
content: |
[Service]
ExecStart=/bin/sh -x -c \
'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done'
- name: weave-network.target
enable: true
content: |
[Unit]
Description=Weave Network Setup Complete
Documentation=man:systemd.special(7)
RefuseManualStart=no
After=network-online.target
[Install]
WantedBy=multi-user.target
WantedBy=kubernetes-master.target
WantedBy=kubernetes-node.target
- name: kubernetes-master.target
enable: true
command: start
content: |
[Unit]
Description=Kubernetes Cluster Master
Documentation=http://kubernetes.io/
RefuseManualStart=no
After=weave-network.target
Requires=weave-network.target
ConditionHost=kube-00
Wants=kube-apiserver.service
Wants=kube-scheduler.service
Wants=kube-controller-manager.service
Wants=kube-proxy.service
[Install]
WantedBy=multi-user.target
- name: kubernetes-node.target
enable: true
command: start
content: |
[Unit]
Description=Kubernetes Cluster Node
Documentation=http://kubernetes.io/
RefuseManualStart=no
After=weave-network.target
Requires=weave-network.target
ConditionHost=!kube-00
Wants=kube-proxy.service
Wants=kubelet.service
[Install]
WantedBy=multi-user.target
- name: 10-weave.network
runtime: false
content: |
[Match]
Type=bridge
Name=weave*
[Network]
- name: install-weave.service
enable: true
content: |
[Unit]
After=network-online.target
After=docker.service
Before=weave.service
Description=Install Weave
Documentation=http://docs.weave.works/
Requires=network-online.target
[Service]
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
Type=oneshot
RemainAfterExit=yes
TimeoutStartSec=0
ExecStartPre=/bin/mkdir -p /opt/bin/
ExecStartPre=/opt/bin/curl-retry.sh \
--silent \
--location \
git.io/weave \
--output /opt/bin/weave
ExecStartPre=/usr/bin/chmod +x /opt/bin/weave
ExecStart=/opt/bin/weave setup
[Install]
WantedBy=weave-network.target
WantedBy=weave.service
- name: weaveproxy.service
enable: true
content: |
[Unit]
After=install-weave.service
After=docker.service
Description=Weave proxy for Docker API
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
[Service]
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
ExecStartPre=/opt/bin/weave launch-proxy --rewrite-inspect --without-dns
ExecStart=/usr/bin/docker attach weaveproxy
Restart=on-failure
ExecStop=/opt/bin/weave stop-proxy
[Install]
WantedBy=weave-network.target
- name: weave.service
enable: true
content: |
[Unit]
After=install-weave.service
After=docker.service
Description=Weave Network Router
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
[Service]
TimeoutStartSec=0
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
ExecStartPre=/opt/bin/weave launch-router $WEAVE_PEERS
ExecStart=/usr/bin/docker attach weave
Restart=on-failure
ExecStop=/opt/bin/weave stop-router
[Install]
WantedBy=weave-network.target
- name: weave-expose.service
enable: true
content: |
[Unit]
After=install-weave.service
After=weave.service
After=docker.service
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
Requires=weave.service
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutStartSec=0
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
ExecStart=/opt/bin/weave expose
ExecStop=/opt/bin/weave hide
[Install]
WantedBy=weave-network.target
- name: install-kubernetes.service
enable: true
content: |
[Unit]
After=network-online.target
Before=kube-apiserver.service
Before=kube-controller-manager.service
Before=kubelet.service
Before=kube-proxy.service
Description=Download Kubernetes Binaries
Documentation=http://kubernetes.io/
Requires=network-online.target
[Service]
Environment=KUBE_RELEASE_TARBALL=https://github.com/kubernetes/kubernetes/releases/download/v1.2.2/kubernetes.tar.gz
ExecStartPre=/bin/mkdir -p /opt/
ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz
ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/
ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt
ExecStartPost=/bin/chmod o+rx -R /opt/kubernetes
ExecStartPost=/bin/ln -s /opt/kubernetes/server/bin/kubectl /opt/bin/
ExecStartPost=/bin/mv /tmp/kubernetes/examples/guestbook /home/core/guestbook-example
ExecStartPost=/bin/chown core. -R /home/core/guestbook-example
ExecStartPost=/bin/rm -rf /tmp/kubernetes
ExecStartPost=/bin/sed 's/# type: LoadBalancer/type: NodePort/' -i /home/core/guestbook-example/frontend-service.yaml
RemainAfterExit=yes
Type=oneshot
[Install]
WantedBy=kubernetes-master.target
WantedBy=kubernetes-node.target
- name: kube-apiserver.service
enable: true
content: |
[Unit]
After=install-kubernetes.service
Before=kube-controller-manager.service
Before=kube-scheduler.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-apiserver
Description=Kubernetes API Server
Documentation=http://kubernetes.io/
Wants=install-kubernetes.service
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
--insecure-bind-address=0.0.0.0 \
--advertise-address=$public_ipv4 \
--insecure-port=8080 \
$ETCD_SERVERS \
--service-cluster-ip-range=10.16.0.0/12 \
--cloud-provider= \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
- name: kube-scheduler.service
enable: true
content: |
[Unit]
After=kube-apiserver.service
After=install-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-scheduler
Description=Kubernetes Scheduler
Documentation=http://kubernetes.io/
Wants=kube-apiserver.service
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-scheduler \
--logtostderr=true \
--master=127.0.0.1:8080
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
- name: kube-controller-manager.service
enable: true
content: |
[Unit]
After=install-kubernetes.service
After=kube-apiserver.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-controller-manager
Description=Kubernetes Controller Manager
Documentation=http://kubernetes.io/
Wants=kube-apiserver.service
Wants=install-kubernetes.service
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
- name: kubelet.service
enable: true
content: |
[Unit]
After=install-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubelet
Description=Kubernetes Kubelet
Documentation=http://kubernetes.io/
Wants=install-kubernetes.service
ConditionHost=!kube-00
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
ExecStart=/opt/kubernetes/server/bin/kubelet \
--docker-endpoint=unix://var/run/weave/weave.sock \
--address=0.0.0.0 \
--port=10250 \
--hostname-override=%H \
--api-servers=http://kube-00:8080 \
--logtostderr=true \
--cluster-dns=10.16.0.3 \
--cluster-domain=kube.local \
--config=/etc/kubernetes/manifests/
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-node.target
- name: kube-proxy.service
enable: true
content: |
[Unit]
After=install-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-proxy
Description=Kubernetes Proxy
Documentation=http://kubernetes.io/
Wants=install-kubernetes.service
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-proxy \
--master=http://kube-00:8080 \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=kubernetes-master.target
WantedBy=kubernetes-node.target
- name: kube-create-addons.service
enable: true
content: |
[Unit]
After=install-kubernetes.service
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubectl
ConditionPathIsDirectory=/etc/kubernetes/addons/
ConditionHost=kube-00
Description=Kubernetes Addons
Documentation=http://kubernetes.io/
Wants=install-kubernetes.service
Wants=kube-apiserver.service
[Service]
Type=oneshot
RemainAfterExit=no
ExecStart=/bin/bash -c 'until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done'
SuccessExitStatus=1
[Install]
WantedBy=kubernetes-master.target

View File

@ -1,246 +0,0 @@
---
---
* TOC
{:toc}
In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
### Prerequisites
1. You need an Azure account.
## Let's go!
To get started, you need to check out the code:
```shell
git clone https://github.com/weaveworks-guides/weave-kubernetes-coreos-azure
cd weave-kubernetes-coreos-azure
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.
First, you need to install some of the dependencies with
```shell
npm install
```
Now, all you need to do is:
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes: 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add bigger VMs later.
If you need to pass Azure-specific options to the creation script, you can do this via additional environment variables, e.g.
```shell
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
# or
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
```
![VMs in Azure](/images/docs/initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:
```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's login to the master node like so:
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```shell
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Pending` to `Running`.
```shell
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
```shell
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
frontend-4wahe 1/1 Running 0 4m
frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/kubernetes/docs/getting-started-guides/coreos/azure/`).
First, let's set the size of the new VMs:
```shell
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```shell
./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00',
'etcd-01',
'etcd-02',
'kube-00',
'kube-01',
'kube-02',
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You will now have more instances of front-end Guestbook apps and Redis slaves, and if you look up all pods labeled `name=frontend`, you should see one running on each node.
```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
```shell
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
info: vm endpoint create command OK
info: Executing command vm endpoint show
+ Getting virtual machines
data: Name : tcp-80-31605
data: Local port : 31605
data: Protcol : tcp
data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You then should be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
## Next steps
You now have a full-blown cluster running in Azure, congrats!
You should probably try deploying other [example apps](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or write your own ;)
## Tear down...
If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you have seen.
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.

View File

@ -1,19 +0,0 @@
{
"name": "coreos-azure-weave",
"version": "1.0.0",
"description": "Small utility to bring up a woven CoreOS cluster",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Ilya Dmitrichenko <errordeveloper@gmail.com>",
"license": "Apache 2.0",
"dependencies": {
"azure-cli": "^0.10.1",
"colors": "^1.0.3",
"js-yaml": "^3.2.5",
"openssl-wrapper": "^0.2.1",
"underscore": "^1.7.0",
"underscore.string": "^3.0.2"
}
}

View File

@ -71,12 +71,6 @@ Guide to running a single master, multi-worker cluster controlled by an OS X men
<hr/>
[**Resizable multi-node cluster on Azure with Weave**](/docs/getting-started-guides/coreos/azure/)
Guide to running an HA etcd cluster with a single master on Azure. Uses the Azure node.js CLI to resize the cluster.
<hr/>
[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
Configure a single master, single worker cluster on VMware ESXi.

View File

@ -37,6 +37,9 @@ Use the [Minikube getting started guide](/docs/getting-started-guides/minikube/)
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
clusters.
[Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) can easily deploy Kubernetes
clusters.
[Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.
[AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds (including AWS and Google Cloud Platform).
@ -54,8 +57,7 @@ few commands, and have active community support.
- [GCE](/docs/getting-started-guides/gce)
- [AWS](/docs/getting-started-guides/aws)
- [Azure](/docs/getting-started-guides/azure)
- [CenturyLink Cloud](/docs/getting-started-guides/clc)
- [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer)
@ -129,8 +131,8 @@ AppsCode.com | Saltstack | Debian | multi-support | [docs](https://ap
KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/products/kubernetes/) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | | Commercial
Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | [Community (Microsoft)](https://github.com/Azure/acs-engine)
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project

View File

@ -14,7 +14,7 @@ li>.highlighter-rouge {position:relative; top:3px;}
## Overview
This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1+.
The installation uses a tool called `kubeadm` which is part of Kubernetes.
This process works with local VMs, physical servers and/or cloud servers.
It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).
@ -38,7 +38,7 @@ If you are not constrained, other tools build on kubeadm to give you complete cl
## Prerequisites
1. One or more machines running Ubuntu 16.04+, CentOS 7 or HypriotOS v1.0.1+
1. 1GB or more of RAM per machine (any less will leave little room for your apps)
1. Full network connectivity between all machines in the cluster (public or private network is fine)
@ -62,12 +62,12 @@ You will install the following packages on all the machines:
* `kubeadm`: the command to bootstrap the cluster.
NOTE: If you already have kubeadm installed, you should run `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.

See the reference doc if you want to read about the different [kubeadm releases](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md)
For each host in turn:
* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu or HypriotOS, run:
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
@ -78,7 +78,7 @@ For each host in turn:
# apt-get install -y docker.io
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
If the machine is running CentOS, run:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
@ -124,24 +124,36 @@ This may take several minutes.
The output should look like:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "064158.548b9ddb1d3fad3e"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 61.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 6.020980 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=<token> <master-ip>
Make a record of the `kubeadm join` command that `kubeadm init` outputs.
You will need this in a moment.
@ -161,9 +173,9 @@ This will remove the "dedicated" taint from any nodes that have it, including th
### (3/4) Installing a pod network
You must install a pod network add-on so that your pods can communicate with each other.

**It is necessary to do this before you try to deploy any applications to your cluster, and before `kube-dns` will start up. Note also that `kubeadm` only supports CNI based networks and therefore kubenet based networks will not work.**
Several projects provide Kubernetes pod networks using CNI, some of which
also support [Network Policy](/docs/user-guide/networkpolicies/). See the [add-ons page](/docs/admin/addons/) for a complete list of available network add-ons.
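Installing an add-on is generally a single `kubectl apply` of its manifest. For instance, installing Weave Net at the time of writing looked like the following; check the add-on's own documentation for the current manifest URL:

```console
# kubectl apply -f https://git.io/weave-kube
```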
@ -189,13 +201,22 @@ If you want to add any new machines as nodes to your cluster, for each machine:
For example:
# kubeadm join --token <token> <master-ip>
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.x.y:9898/cluster-info/v1/?token-id=f11877"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.x.y:6443]
[bootstrap] Trying to connect to endpoint https://192.168.x.y:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://192.168.x.y:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:yournode | CA: false
Not before: 2016-12-15 19:44:00 +0000 UTC Not After: 2017-12-15 19:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
@ -206,8 +227,6 @@ For example:
A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
Note that there currently isn't an out-of-the-box way of connecting to the Master's API Server via `kubectl` from a node. Read issue [#35729](https://github.com/kubernetes/kubernetes/issues/35729) for more details.
### (Optional) Controlling your cluster from machines other than the master
In order to get kubectl on your laptop, for example, to talk to your cluster, you need to copy the `KubeConfig` file from your master to your laptop like this:
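```console
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf get nodes
```

(The `get nodes` call is just an example; once `--kubeconfig` points at the copied file, any `kubectl` command will talk to your cluster.)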
@ -217,7 +236,7 @@ In order to get a kubectl on your laptop for example to talk to your cluster, yo
### (Optional) Connecting to the API Server
If you want to connect to the API Server from outside the cluster, for example to view the dashboard (note: the dashboard isn't deployed by default), you can use `kubectl proxy`:
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf proxy
@ -277,7 +296,7 @@ See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, includi
* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
* [GitHub Issues in the kubeadm repository](https://github.com/kubernetes/kubeadm/issues)
## kubeadm is multi-platform
@ -285,11 +304,7 @@ kubeadm deb packages and binaries are built for amd64, arm and arm64, following
deb-packages are released for ARM and ARM 64-bit, but not RPMs (yet, reach out if there's interest).
ARM had some issues when making v1.4, see [#32517](https://github.com/kubernetes/kubernetes/pull/32517) [#33485](https://github.com/kubernetes/kubernetes/pull/33485), [#33117](https://github.com/kubernetes/kubernetes/pull/33117) and [#33376](https://github.com/kubernetes/kubernetes/pull/33376).
However, thanks to the PRs above, `kube-apiserver` works on ARM from the `v1.4.1` release, so make sure you're at least using `v1.4.1` when running on ARM 32-bit.
Currently, only the flannel pod network works on multiple architectures. You can install it this way:
# export ARCH=amd64
# curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -
@ -297,33 +312,47 @@ The multiarch flannel daemonset can be installed this way.
Replace `ARCH=amd64` with `ARCH=arm` or `ARCH=arm64` depending on the platform you're running on.
Note that the Raspberry Pi 3 is in ARM 32-bit mode, so for RPi 3 you should set `ARCH` to `arm`, not `arm64`.
## Cloudprovider integrations (experimental)
Enabling specific cloud providers is a common request; this currently requires manual configuration and is therefore not yet supported. If you wish to do so,
edit the `kubeadm` dropin for the `kubelet` service (`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`) on all nodes, including the master.
If your cloud provider requires any extra packages installed on the host, for example for volume mounting/unmounting, install those packages.
Specify the `--cloud-provider` flag to kubelet and set it to the cloud of your choice. If your cloud provider requires a configuration
file, create the file `/etc/kubernetes/cloud-config` on every node and set the values your cloud requires. Also append
`--cloud-config=/etc/kubernetes/cloud-config` to the kubelet arguments, as sketched below.
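As a sketch only (the exact contents of the dropin differ between kubeadm versions, so adapt the file you actually find on disk rather than copying this verbatim), the relevant part of `10-kubeadm.conf` might end up looking like:

```
[Service]
# Clear the packaged ExecStart, then restate it with the cloud provider flags appended.
ExecStart=
ExecStart=/usr/bin/kubelet <existing flags> --cloud-provider=azure --cloud-config=/etc/kubernetes/cloud-config
```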
Lastly, run `kubeadm init --cloud-provider=xxx` to bootstrap your cluster with cloud provider features.
This workflow is not yet fully supported; however, we hope to make it extremely easy to spin up clusters with cloud providers in the future.
(See [this proposal](https://github.com/kubernetes/community/pull/128) for more information.) The [Kubelet Dynamic Settings](https://github.com/kubernetes/kubernetes/pull/29459) feature may also help to fully automate this process in the future.
## Limitations
Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.
You can also take a look at the troubleshooting section in the [reference document](/docs/admin/kubeadm/#troubleshooting).
1. The cluster created here doesn't have cloud-provider integrations by default, so for example it doesn't work automatically with (for example) [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
To set up kubeadm with CloudProvider integrations (it's experimental, but feel free to try it), refer to the [kubeadm reference](/docs/admin/kubeadm/) document.
Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) for exposing applications to the internet.
1. The cluster created here has a single master, with a single `etcd` database running on it.
This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch.
Adding HA support (multiple `etcd` servers, multiple API servers, etc) to `kubeadm` is still a work-in-progress.
Workaround: regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html).
The `etcd` data directory configured by `kubeadm` is at `/var/lib/etcd` on the master.
1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).
Workaround: use `docker logs` on the nodes where the containers are running.
1. The `HostPort` and `HostIP` functionality does not work with kubeadm because CNI networking is used; see issue [#31307](https://github.com/kubernetes/kubernetes/issues/31307).
Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) instead, or use HostNetwork.
1. A running `firewalld` service may conflict with kubeadm, so if you want to run `kubeadm`, you should disable `firewalld` until issue [#35535](https://github.com/kubernetes/kubernetes/issues/35535) is resolved.

Workaround: Disable `firewalld`, or configure it to allow the Kubernetes pod and service CIDRs.

1. Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, e.g.

```console
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

1. If you see errors like `etcd cluster unavailable or misconfigured`, it's because of high load on the machine, which makes the `etcd` container a bit unresponsive (it might miss some requests), so the kubelet restarts it. This will get better with `etcd3`.

Workaround: Set `failureThreshold` in `/etc/kubernetes/manifests/etcd.json` to a larger value.
1. There is no built-in way of fetching the token easily once the cluster is up and running, but here is a `kubectl` command you can copy and paste that will print out the token for you:
```console
# kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 -d | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g;" | xargs echo
```
1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).
By default, it doesn't do this and the kubelet ends up using the first non-loopback network interface, which is usually NATed.

View File

@ -39,6 +39,7 @@ Credentials can be provided in several ways:
- Using AWS EC2 Container Registry (ECR)
- use IAM roles and policies to control access to ECR repositories
- automatically refreshes ECR login credentials
- Using Azure Container Registry (ACR)
- Configuring Nodes to Authenticate to a Private Registry
- all pods can read any configured private registries
- requires node configuration by cluster administrator
@ -100,6 +101,25 @@ Troubleshooting:
- `plugins.go:56] Registering credential provider: aws-ecr-key`
- `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`
### Using Azure Container Registry (ACR)
When using [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)
you can authenticate using either an admin user or a service principal.
In either case, authentication is done via standard Docker authentication. These instructions assume that you use the
[azure-cli](https://github.com/azure/azure-cli) command line tool.
You first need to create a registry and generate credentials; complete documentation for this can be found in
the [Azure container registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).

Once you have created your container registry, you will use the following credentials to log in:
* `DOCKER_USER` : service principal, or admin username
* `DOCKER_PASSWORD`: service principal password, or admin user password
* `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
* `DOCKER_EMAIL`: `${some-email-address}`
Once you have those variables filled in you can [configure a Kubernetes Secret and use it to deploy a Pod](http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod), as shown below.
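With those variables filled in, creating the image pull secret is a single `kubectl` command; the secret name `acr-secret` below is just an example:

```console
$ kubectl create secret docker-registry acr-secret \
    --docker-server=$DOCKER_REGISTRY_SERVER \
    --docker-username=$DOCKER_USER \
    --docker-password=$DOCKER_PASSWORD \
    --docker-email=$DOCKER_EMAIL
```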
### Configuring Nodes to Authenticate to a Private Repository
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node

Binary file not shown.
