Merge branch 'master' into acr
commit 22eb15218a

@@ -171,10 +171,10 @@ toc:
  path: /docs/getting-started-guides/gce/
- title: Running Kubernetes on AWS EC2
  path: /docs/getting-started-guides/aws/
- title: Running Kubernetes on Azure Container Service
  path: https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
- title: Running Kubernetes on Azure
  path: /docs/getting-started-guides/azure/
- title: Running Kubernetes on Azure (Weave-based)
  path: /docs/getting-started-guides/coreos/azure/
- title: Running Kubernetes on CenturyLink Cloud
  path: /docs/getting-started-guides/clc/
- title: Running Kubernetes on IBM SoftLayer

@@ -1,12 +1,30 @@
---
assignees:
- colemickens
- jeffmendoza
- brendandburns

---

The recommended approach for deploying a Kubernetes 1.4 cluster on Azure is the
[`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere) project.
## Azure Container Service

You will want to take a look at the
[Azure Getting Started Guide](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/azure/README.md).
The [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) offers simple
deployments of one of three open source orchestrators: DC/OS, Swarm, and Kubernetes.

For an example of deploying a Kubernetes cluster onto Azure via the Azure Container Service:

**[Microsoft Azure Container Service - Kubernetes Walkthrough](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough)**
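
If you prefer the command line to the portal, the same walkthrough boils down to a few Azure CLI 2.0 commands. This is only a rough sketch under the assumption that the `az` CLI is installed and logged in; the resource group, cluster name, and location below are placeholders:

```shell
# Create a resource group to hold the cluster (name and location are examples)
az group create --name k8s-demo-group --location westus

# Ask ACS for a Kubernetes cluster; --generate-ssh-keys creates a key pair if you have none
az acs create --orchestrator-type=kubernetes \
  --resource-group k8s-demo-group --name k8s-demo \
  --generate-ssh-keys

# Merge the cluster credentials into ~/.kube/config and verify the nodes
az acs kubernetes get-credentials --resource-group k8s-demo-group --name k8s-demo
kubectl get nodes
```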

## Custom Deployments: ACS-Engine

The core of the Azure Container Service is **open source** and available on GitHub for the community
to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**.

ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Container
Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Container Service.

The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Container Service.
The resulting output is an Azure Resource Manager Template that can be checked into source control and then used
to deploy Kubernetes clusters into Azure.

You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**.
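
As a rough sketch of that workflow, once ACS-Engine has generated an ARM template and parameters file for your cluster definition, the template can be deployed with the Azure CLI 2.0. The file names and resource group below are hypothetical examples, not fixed outputs:

```shell
# Deploy the ARM template generated by ACS-Engine with the Azure CLI 2.0.
# azuredeploy.json / azuredeploy.parameters.json are example file names.
az group create --name k8s-custom-group --location westus
az group deployment create \
  --resource-group k8s-custom-group \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```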

@@ -1 +0,0 @@
node_modules/

@@ -1,335 +0,0 @@
## This file is used as input to deployment script, which amends it as needed.
## More specifically, we need to add environment files for as many nodes as we
## are going to deploy.

write_files:
  - path: /opt/bin/curl-retry.sh
    permissions: '0755'
    owner: root
    content: |
      #!/bin/sh -x
      until curl $@
      do sleep 1
      done

coreos:
  update:
    group: stable
    reboot-strategy: off
  units:
    - name: systemd-networkd-wait-online.service
      drop-ins:
        - name: 50-check-github-is-reachable.conf
          content: |
            [Service]
            ExecStart=/bin/sh -x -c \
              'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done'

    - name: weave-network.target
      enable: true
      content: |
        [Unit]
        Description=Weave Network Setup Complete
        Documentation=man:systemd.special(7)
        RefuseManualStart=no
        After=network-online.target
        [Install]
        WantedBy=multi-user.target
        WantedBy=kubernetes-master.target
        WantedBy=kubernetes-node.target

    - name: kubernetes-master.target
      enable: true
      command: start
      content: |
        [Unit]
        Description=Kubernetes Cluster Master
        Documentation=http://kubernetes.io/
        RefuseManualStart=no
        After=weave-network.target
        Requires=weave-network.target
        ConditionHost=kube-00
        Wants=kube-apiserver.service
        Wants=kube-scheduler.service
        Wants=kube-controller-manager.service
        Wants=kube-proxy.service
        [Install]
        WantedBy=multi-user.target

    - name: kubernetes-node.target
      enable: true
      command: start
      content: |
        [Unit]
        Description=Kubernetes Cluster Node
        Documentation=http://kubernetes.io/
        RefuseManualStart=no
        After=weave-network.target
        Requires=weave-network.target
        ConditionHost=!kube-00
        Wants=kube-proxy.service
        Wants=kubelet.service
        [Install]
        WantedBy=multi-user.target

    - name: 10-weave.network
      runtime: false
      content: |
        [Match]
        Type=bridge
        Name=weave*
        [Network]

    - name: install-weave.service
      enable: true
      content: |
        [Unit]
        After=network-online.target
        After=docker.service
        Before=weave.service
        Description=Install Weave
        Documentation=http://docs.weave.works/
        Requires=network-online.target
        [Service]
        EnvironmentFile=-/etc/weave.%H.env
        EnvironmentFile=-/etc/weave.env
        Type=oneshot
        RemainAfterExit=yes
        TimeoutStartSec=0
        ExecStartPre=/bin/mkdir -p /opt/bin/
        ExecStartPre=/opt/bin/curl-retry.sh \
          --silent \
          --location \
          git.io/weave \
          --output /opt/bin/weave
        ExecStartPre=/usr/bin/chmod +x /opt/bin/weave
        ExecStart=/opt/bin/weave setup
        [Install]
        WantedBy=weave-network.target
        WantedBy=weave.service

    - name: weaveproxy.service
      enable: true
      content: |
        [Unit]
        After=install-weave.service
        After=docker.service
        Description=Weave proxy for Docker API
        Documentation=http://docs.weave.works/
        Requires=docker.service
        Requires=install-weave.service
        [Service]
        EnvironmentFile=-/etc/weave.%H.env
        EnvironmentFile=-/etc/weave.env
        ExecStartPre=/opt/bin/weave launch-proxy --rewrite-inspect --without-dns
        ExecStart=/usr/bin/docker attach weaveproxy
        Restart=on-failure
        ExecStop=/opt/bin/weave stop-proxy
        [Install]
        WantedBy=weave-network.target

    - name: weave.service
      enable: true
      content: |
        [Unit]
        After=install-weave.service
        After=docker.service
        Description=Weave Network Router
        Documentation=http://docs.weave.works/
        Requires=docker.service
        Requires=install-weave.service
        [Service]
        TimeoutStartSec=0
        EnvironmentFile=-/etc/weave.%H.env
        EnvironmentFile=-/etc/weave.env
        ExecStartPre=/opt/bin/weave launch-router $WEAVE_PEERS
        ExecStart=/usr/bin/docker attach weave
        Restart=on-failure
        ExecStop=/opt/bin/weave stop-router
        [Install]
        WantedBy=weave-network.target

    - name: weave-expose.service
      enable: true
      content: |
        [Unit]
        After=install-weave.service
        After=weave.service
        After=docker.service
        Documentation=http://docs.weave.works/
        Requires=docker.service
        Requires=install-weave.service
        Requires=weave.service
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        TimeoutStartSec=0
        EnvironmentFile=-/etc/weave.%H.env
        EnvironmentFile=-/etc/weave.env
        ExecStart=/opt/bin/weave expose
        ExecStop=/opt/bin/weave hide
        [Install]
        WantedBy=weave-network.target

    - name: install-kubernetes.service
      enable: true
      content: |
        [Unit]
        After=network-online.target
        Before=kube-apiserver.service
        Before=kube-controller-manager.service
        Before=kubelet.service
        Before=kube-proxy.service
        Description=Download Kubernetes Binaries
        Documentation=http://kubernetes.io/
        Requires=network-online.target
        [Service]
        Environment=KUBE_RELEASE_TARBALL=https://github.com/kubernetes/kubernetes/releases/download/v1.2.2/kubernetes.tar.gz
        ExecStartPre=/bin/mkdir -p /opt/
        ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz
        ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/
        ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt
        ExecStartPost=/bin/chmod o+rx -R /opt/kubernetes
        ExecStartPost=/bin/ln -s /opt/kubernetes/server/bin/kubectl /opt/bin/
        ExecStartPost=/bin/mv /tmp/kubernetes/examples/guestbook /home/core/guestbook-example
        ExecStartPost=/bin/chown core. -R /home/core/guestbook-example
        ExecStartPost=/bin/rm -rf /tmp/kubernetes
        ExecStartPost=/bin/sed 's/# type: LoadBalancer/type: NodePort/' -i /home/core/guestbook-example/frontend-service.yaml
        RemainAfterExit=yes
        Type=oneshot
        [Install]
        WantedBy=kubernetes-master.target
        WantedBy=kubernetes-node.target

    - name: kube-apiserver.service
      enable: true
      content: |
        [Unit]
        After=install-kubernetes.service
        Before=kube-controller-manager.service
        Before=kube-scheduler.service
        ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-apiserver
        Description=Kubernetes API Server
        Documentation=http://kubernetes.io/
        Wants=install-kubernetes.service
        ConditionHost=kube-00
        [Service]
        ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
          --insecure-bind-address=0.0.0.0 \
          --advertise-address=$public_ipv4 \
          --insecure-port=8080 \
          $ETCD_SERVERS \
          --service-cluster-ip-range=10.16.0.0/12 \
          --cloud-provider= \
          --logtostderr=true
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=kubernetes-master.target

    - name: kube-scheduler.service
      enable: true
      content: |
        [Unit]
        After=kube-apiserver.service
        After=install-kubernetes.service
        ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-scheduler
        Description=Kubernetes Scheduler
        Documentation=http://kubernetes.io/
        Wants=kube-apiserver.service
        ConditionHost=kube-00
        [Service]
        ExecStart=/opt/kubernetes/server/bin/kube-scheduler \
          --logtostderr=true \
          --master=127.0.0.1:8080
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=kubernetes-master.target

    - name: kube-controller-manager.service
      enable: true
      content: |
        [Unit]
        After=install-kubernetes.service
        After=kube-apiserver.service
        ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-controller-manager
        Description=Kubernetes Controller Manager
        Documentation=http://kubernetes.io/
        Wants=kube-apiserver.service
        Wants=install-kubernetes.service
        ConditionHost=kube-00
        [Service]
        ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \
          --master=127.0.0.1:8080 \
          --logtostderr=true
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=kubernetes-master.target

    - name: kubelet.service
      enable: true
      content: |
        [Unit]
        After=install-kubernetes.service
        ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubelet
        Description=Kubernetes Kubelet
        Documentation=http://kubernetes.io/
        Wants=install-kubernetes.service
        ConditionHost=!kube-00
        [Service]
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
        ExecStart=/opt/kubernetes/server/bin/kubelet \
          --docker-endpoint=unix://var/run/weave/weave.sock \
          --address=0.0.0.0 \
          --port=10250 \
          --hostname-override=%H \
          --api-servers=http://kube-00:8080 \
          --logtostderr=true \
          --cluster-dns=10.16.0.3 \
          --cluster-domain=kube.local \
          --config=/etc/kubernetes/manifests/
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=kubernetes-node.target

    - name: kube-proxy.service
      enable: true
      content: |
        [Unit]
        After=install-kubernetes.service
        ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-proxy
        Description=Kubernetes Proxy
        Documentation=http://kubernetes.io/
        Wants=install-kubernetes.service
        [Service]
        ExecStart=/opt/kubernetes/server/bin/kube-proxy \
          --master=http://kube-00:8080 \
          --logtostderr=true
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=kubernetes-master.target
        WantedBy=kubernetes-node.target

    - name: kube-create-addons.service
      enable: true
      content: |
        [Unit]
        After=install-kubernetes.service
        ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubectl
        ConditionPathIsDirectory=/etc/kubernetes/addons/
        ConditionHost=kube-00
        Description=Kubernetes Addons
        Documentation=http://kubernetes.io/
        Wants=install-kubernetes.service
        Wants=kube-apiserver.service
        [Service]
        Type=oneshot
        RemainAfterExit=no
        ExecStart=/bin/bash -c 'until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done'
        SuccessExitStatus=1
        [Install]
        WantedBy=kubernetes-master.target

@@ -1,246 +0,0 @@
---
---

* TOC
{:toc}


In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.

### Prerequisites

1. You need an Azure account.

## Let's go!

To get started, you need to check out the code:

```shell
git clone https://github.com/weaveworks-guides/weave-kubernetes-coreos-azure
cd weave-kubernetes-coreos-azure
```

You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.

First, you need to install some of the dependencies with:

```shell
npm install
```

Now, all you need to do is:

```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```

This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads are to be deployed only on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
If you need to pass Azure-specific options to the creation script, you can do so via additional environment variables, e.g.:

```shell
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
# or
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
```



Once the creation of Azure VMs has finished, you should see the following:

```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```

Let's log in to the master node like so:

```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```

> Note: the config file name will be different; make sure to use the one you see.

Check that there are 2 nodes in the cluster:

```shell
core@kube-00 ~ $ kubectl get nodes
NAME      LABELS                           STATUS
kube-01   kubernetes.io/hostname=kube-01   Ready
kube-02   kubernetes.io/hostname=kube-02   Ready
```

## Deploying the workload

Let's follow the Guestbook example now:

```shell
kubectl create -f ~/guestbook-example
```

You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Pending` to `Running`.

```shell
kubectl get pods --watch
```

> Note: most of this time is spent downloading Docker container images on each of the nodes.

Eventually you should see:

```shell
NAME                 READY     STATUS    RESTARTS   AGE
frontend-0a9xi       1/1       Running   0          4m
frontend-4wahe       1/1       Running   0          4m
frontend-6l36j       1/1       Running   0          4m
redis-master-talmr   1/1       Running   0          4m
redis-slave-12zfd    1/1       Running   0          4m
redis-slave-3nbce    1/1       Running   0          4m
```

## Scaling

Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.

You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/kubernetes/docs/getting-started-guides/coreos/azure/`).

First, let's set the size of the new VMs:

```shell
export AZ_VM_SIZE=Large
```

Now, run the scale script with the state file of the previous deployment and the number of nodes to add:

```shell
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00',
  'etcd-01',
  'etcd-02',
  'kube-00',
  'kube-01',
  'kube-02',
  'kube-03',
  'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```

> Note: this step has created new files in `./output`.

Back on `kube-00`:

```shell
core@kube-00 ~ $ kubectl get nodes
NAME      LABELS                           STATUS
kube-01   kubernetes.io/hostname=kube-01   Ready
kube-02   kubernetes.io/hostname=kube-02   Ready
kube-03   kubernetes.io/hostname=kube-03   Ready
kube-04   kubernetes.io/hostname=kube-04   Ready
```

You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.

First, double-check how many replication controllers there are:

```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR            REPLICAS
frontend       php-redis      kubernetes/example-guestbook-php-redis:v2   name=frontend       3
redis-master   master         redis                                       name=redis-master   1
redis-slave    worker         kubernetes/redis-slave:v2                   name=redis-slave    2
```

As there are 4 nodes, let's scale proportionally:

```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```

Check what you have now:

```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR            REPLICAS
frontend       php-redis      kubernetes/example-guestbook-php-redis:v2   name=frontend       4
redis-master   master         redis                                       name=redis-master   1
redis-slave    worker         kubernetes/redis-slave:v2                   name=redis-slave    4
```

You will now have more instances of front-end Guestbook apps and Redis slaves; and if you look up all pods labeled `name=frontend`, you should see one running on each node.

```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME             READY     STATUS    RESTARTS   AGE
frontend-0a9xi   1/1       Running   0          22m
frontend-4wahe   1/1       Running   0          22m
frontend-6l36j   1/1       Running   0          22m
frontend-z9oxo   1/1       Running   0          41s
```

## Exposing the app to the outside world

There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.

```shell
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info:    Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
info:    vm endpoint create command OK
info:    Executing command vm endpoint show
+ Getting virtual machines
data:    Name                 : tcp-80-31605
data:    Local port           : 31605
data:    Protcol              : tcp
data:    Virtual IP Address   : 137.117.156.164
data:    Direct server return : Disabled
info:    vm endpoint show command OK
```

You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
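
A quick way to confirm the endpoint is reachable from your own machine (substitute the virtual IP from your `vm endpoint show` output; the one below is just from my example):

```shell
# Fetch only the response headers from the exposed Guestbook front end
curl -I http://137.117.156.164/
```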

## Next steps

You now have a full-blown cluster running in Azure, congrats!

You should probably try deploying other [example apps](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or writing your own ;)

## Tear down...

If you don't want to worry about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.

```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```

> Note: make sure to use the _latest state file_, as after scaling there is a new one.
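
If you are not sure which state file is the latest, listing `./output` by modification time makes it obvious; a small sketch (your file names will differ from mine):

```shell
# Newest deployment state file is listed first
ls -t ./output/*_deployment.yml

# Pass the newest one to the destroy script
./destroy-cluster.js "$(ls -t ./output/*_deployment.yml | head -1)"
```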

By the way, with the scripts shown, you can deploy multiple clusters, if you like :)

## Support Level


IaaS Provider        | Config. Mgmt | OS     | Networking | Docs                                          | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Azure                | CoreOS       | CoreOS | Weave      | [docs](/docs/getting-started-guides/coreos/azure/) |          | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))

For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

## Further reading

Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.

@@ -1,19 +0,0 @@
{
  "name": "coreos-azure-weave",
  "version": "1.0.0",
  "description": "Small utility to bring up a woven CoreOS cluster",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Ilya Dmitrichenko <errordeveloper@gmail.com>",
  "license": "Apache 2.0",
  "dependencies": {
    "azure-cli": "^0.10.1",
    "colors": "^1.0.3",
    "js-yaml": "^3.2.5",
    "openssl-wrapper": "^0.2.1",
    "underscore": "^1.7.0",
    "underscore.string": "^3.0.2"
  }
}

@@ -71,12 +71,6 @@ Guide to running a single master, multi-worker cluster controlled by an OS X men

<hr/>

[**Resizable multi-node cluster on Azure with Weave**](/docs/getting-started-guides/coreos/azure/)

Guide to running an HA etcd cluster with a single master on Azure. Uses the Azure node.js CLI to resize the cluster.

<hr/>

[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)

Configure a single master, single worker cluster on VMware ESXi.

@@ -37,6 +37,9 @@ Use the [Minikube getting started guide](/docs/getting-started-guides/minikube/)
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
clusters.

[Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) can easily deploy Kubernetes
clusters.

[Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.

[AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds (including AWS and Google Cloud Platform).

@@ -54,8 +57,7 @@ few commands, and have active community support.

- [GCE](/docs/getting-started-guides/gce)
- [AWS](/docs/getting-started-guides/aws)
- [Azure](/docs/getting-started-guides/azure/)
- [Azure](/docs/getting-started-guides/coreos/azure/) (Weave-based, contributed by WeaveWorks employees)
- [Azure](/docs/getting-started-guides/azure)
- [CenturyLink Cloud](/docs/getting-started-guides/clc)
- [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer)

@@ -129,8 +131,8 @@ AppsCode.com | Saltstack | Debian | multi-support | [docs](https://ap
KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/products/kubernetes/) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens))
Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | | Commercial
Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | [Community (Microsoft)](https://github.com/Azure/acs-engine)
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
Binary file not shown.