Merge pull request #63 from mfanjie/dos2unix

convert md file format under docs/getting-started-guides to unix mode
pull/62/head^2
johndmulhausen 2016-03-07 08:07:03 +00:00
commit b9a24a362c
12 changed files with 2410 additions and 2392 deletions

View File

@ -1,53 +1,53 @@
---
---
* TOC
{:toc}
## Prerequisites
1. You need an AWS account. Visit [http://aws.amazon.com](http://aws.amazon.com) to get started
2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli)
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
NOTE: This script uses the 'default' AWS profile by default.
You may explicitly set the AWS profile to use via the `AWS_DEFAULT_PROFILE` environment variable:
```shell
export AWS_DEFAULT_PROFILE=myawsprofile
```
## Cluster turnup
### Supported procedure: `get-kube`
```shell
#Using wget
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
#Using cURL
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh).
This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written to `~/.kube/config`; they are necessary to use the CLI or HTTP Basic Auth.
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with EC2 instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh) to change this behavior as follows:
```shell
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
export INSTANCE_PREFIX=k8s
...
```
If you don't specify master and minion sizes, the scripts will attempt to guess
@ -88,55 +88,55 @@ instance storage, you may want to increase the `NODE_ROOT_DISK_SIZE` value,
although the default value of 32 is probably sufficient for the smaller
instance types in the m4 family.
The script will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If these already exist, make sure you want them to be used here.
NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` key to point to your private key.
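For example (a sketch; the path below is only an illustration, point it at wherever your private key actually lives):
```shell
# Tell the cluster scripts where the private key for the existing "kubernetes" keypair is
export AWS_SSH_KEY=$HOME/.ssh/kubernetes.pem
```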
### Alternatives
CoreOS maintains [a CLI tool](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html), `kube-aws` that will create and manage a Kubernetes cluster based on [CoreOS](http://www.coreos.com), using AWS tools: EC2, CloudFormation and Autoscaling.
## Getting started with your cluster
### Command line administration tool: `kubectl`
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
Next, add the appropriate binary folder to your `PATH` to access kubectl:
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/kubectl)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](/docs/user-guide/kubeconfig-file)
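For example, you can confirm what `kubectl` picked up with a couple of standard commands:
```shell
# Show the merged kubeconfig currently in use (secrets are redacted)
kubectl config view
# Check that the client can reach the master
kubectl cluster-info
```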
### Examples
See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
```shell
cluster/kube-down.sh
```
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.

View File

@ -1,130 +1,130 @@
---
---
This guide will walk you through installing [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) on [Datacenter Operating System (DCOS)](https://mesosphere.com/product/) with the [DCOS CLI](https://github.com/mesosphere/dcos-cli) and operating Kubernetes with the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl).
* TOC
{:toc}
## About Kubernetes on DCOS
DCOS is system software that manages computer cluster hardware and software resources and provides common services for distributed applications. Among other services, it provides [Apache Mesos](http://mesos.apache.org/) as its cluster kernel and [Marathon](https://mesosphere.github.io/marathon/) as its init system. With DCOS CLI, Mesos frameworks like [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) can be installed with a single command.
Another feature of the DCOS CLI is that it allows plugins like the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl). This allows for easy access to a version-compatible Kubectl without having to manually download or install.
Further information about the benefits of installing Kubernetes on DCOS can be found in the [Kubernetes-Mesos documentation](https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/README.md).
For more details about the Kubernetes DCOS packaging, see the [Kubernetes-Mesos project](https://github.com/mesosphere/kubernetes-mesos).
Since Kubernetes-Mesos is still alpha, it is a good idea to familiarize yourself with the [current known issues](https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/docs/issues.md) which may limit or modify the behavior of Kubernetes on DCOS.
If you have problems completing the steps below, please [file an issue against the kubernetes-mesos project](https://github.com/mesosphere/kubernetes-mesos/issues).
## Resources
Explore the following resources for more information about Kubernetes, Kubernetes on Mesos/DCOS, and DCOS itself.
- [DCOS Documentation](https://docs.mesosphere.com/)
- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/)
- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
- [Kubernetes on Mesos Documentation](https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/README.md)
- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases)
- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos)
## Prerequisites
- A running [DCOS cluster](https://mesosphere.com/product/)
- [DCOS Community Edition](https://docs.mesosphere.com/install/) is currently available on [AWS](https://mesosphere.com/amazon/).
- [DCOS Enterprise Edition](https://mesosphere.com/product/) can be deployed on virtual or bare metal machines. Contact sales@mesosphere.com for more info and to set up an engagement.
- [DCOS CLI](https://docs.mesosphere.com/install/cli/) installed locally
## Install
1. Configure and validate the [Mesosphere Multiverse](https://github.com/mesosphere/multiverse) as a package source repository
```shell
$ dcos config prepend package.sources https://github.com/mesosphere/multiverse/archive/version-1.x.zip
$ dcos package update --validate
```
2. Install etcd
By default, the Kubernetes DCOS package starts a single-node etcd. In order to avoid state loss in the event of Kubernetes component container failure, install an HA [etcd-mesos](https://github.com/mesosphere/etcd-mesos) cluster on DCOS.
```shell
$ dcos package install etcd
```
3. Verify that etcd is installed and healthy
The etcd cluster takes a short while to deploy. Verify that `/etcd` is healthy before going on to the next step.
```shell
$ dcos marathon app list
ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD
/etcd 128 0.2 1/1 1/1 --- DOCKER None
```
4. Create Kubernetes installation configuration
Configure Kubernetes to use the HA etcd installed on DCOS.
```shell
$ cat >/tmp/options.json <<EOF
{
  "kubernetes": {
    "etcd-mesos-framework-name": "etcd"
  }
}
EOF
```
5. Install Kubernetes
```shell
$ dcos package install --options=/tmp/options.json kubernetes
```
6. Verify that Kubernetes is installed and healthy
The Kubernetes cluster takes a short while to deploy. Verify that `/kubernetes` is healthy before going on to the next step.
```shell
$ dcos marathon app list
ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD
/etcd 128 0.2 1/1 1/1 --- DOCKER None
/kubernetes 768 1 1/1 1/1 --- DOCKER None
```
7. Verify that Kube-DNS & Kube-UI are deployed, running, and ready
```shell
$ dcos kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-v8-tjxk9 4/4 Running 0 1m
kube-ui-v2-tjq7b 1/1 Running 0 1m
```
Names and ages may vary.
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/README.md) or the [Kubernetes User Guide](/docs/user-guide/).
## Uninstall
1. Stop and delete all replication controllers and pods in each namespace:
Before uninstalling Kubernetes, destroy all the pods and replication controllers. The uninstall process will try to do this itself, but by default it times out quickly and may leave your cluster in a dirty state.
```shell
$ dcos kubectl delete rc,pods --all --namespace=default
$ dcos kubectl delete rc,pods --all --namespace=kube-system
```
2. Validate that all pods have been deleted
```shell
$ dcos kubectl get pods --all-namespaces
```
3. Uninstall Kubernetes
```shell
$ dcos package uninstall kubernetes
```

View File

@ -1,6 +1,6 @@
---
---
_Note_:
These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.
@ -14,75 +14,75 @@ The only thing you need is a machine with **Docker 1.7.1 or higher**
## Overview
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/images/docs/k8s-docker.png)
### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like `flanneld` and `etcd`
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
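For reference, the bootstrap daemon started by `master.sh`/`worker.sh` looks roughly like this (a sketch only; the scripts take care of the exact flags for you):
```shell
# Second Docker daemon on its own socket and graph directory, with networking
# left untouched so flannel can configure it later
sudo docker daemon -H unix:///var/run/docker-bootstrap.sock \
  -p /var/run/docker-bootstrap.pid \
  --iptables=false --ip-masq=false --bridge=none \
  --graph=/var/lib/docker-bootstrap &
```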
You can specify the version on every node before install:
```shell
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
Otherwise, we'll use the latest `hyperkube` image as the default k8s version.
## Master Node
The first step in the process is to initialize the master node.
The MASTER_IP step here is optional; it defaults to the first value of `hostname -I`.
Clone the Kubernetes repo, and run [master.sh](/docs/getting-started-guides/docker-multinode/master.sh) on the master machine _with root_:
```shell
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./master.sh
```
`Master done!`
See [here](/docs/getting-started-guides/docker-multinode/master) for a detailed explanation.
## Adding a worker node
Once your master is up and running you can add one or more workers on different machines.
Clone the Kubernetes repo, and run [worker.sh](/docs/getting-started-guides/docker-multinode/worker.sh) on the worker machine _with root_:
```shell
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./worker.sh
```
`Worker done!`
See [here](/docs/getting-started-guides/docker-multinode/worker) for a detailed explanation.
## Deploy a DNS
See [here](/docs/getting-started-guides/docker-multinode/deployDNS) for instructions.
## Testing your cluster
Once your cluster has been created you can [test it out](/docs/getting-started-guides/docker-multinode/testing)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)

View File

@ -1,36 +1,36 @@
---
---
The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/images/docs/k8s-singlenode-docker.png)
* TOC
{:toc}
## Prerequisites
1. You need to have docker installed on one machine.
2. Decide what Kubernetes version to use. Set the `${K8S_VERSION}` variable to
a released version of Kubernetes >= "1.2.0-alpha.7"
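For example (using the minimum release mentioned above):
```shell
export K8S_VERSION=1.2.0-alpha.7
```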
### Run it
```shell
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
@ -39,17 +39,17 @@ docker run \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged=true --v=2
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to discard them if DNS is not needed.
> If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above. It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230)
This actually runs the kubelet, which in turn runs a [pod](/docs/user-guide/pods/) that contains the other master components.
### Download `kubectl`
At this point you should have a running Kubernetes cluster. You can test this
by downloading the kubectl binary for `${K8S_VERSION}` (look at the URL in the
following links) and make it available by editing your PATH environment
variable.
@ -58,94 +58,94 @@ variable.
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/arm/kubectl))
For example, OS X:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Linux:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Create configuration:
```shell
$ kubectl config set-cluster test-doc --server=http://localhost:8080
$ kubectl config set-context test-doc --cluster=test-doc
$ kubectl config use-context test-doc
```
For Mac OS X users, instead of `localhost` you will have to use the IP address of your Docker machine,
which you can find by running `docker-machine env <machinename>` (see [documentation](https://docs.docker.com/machine/reference/env/)
for details).
### Test it out
List the nodes in your cluster by running:
```shell
kubectl get nodes
```
This should print:
```shell
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
### Run an application
```shell
kubectl run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
```shell
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of the service we just created. There are two IPs: the first is the internal IP (CLUSTER_IP), and the second is the external load-balanced IP (if a LoadBalancer is configured).
```shell
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```shell
kubectl get svc nginx --template={{.spec.clusterIP}}
```
Hit the webserver with the first IP (CLUSTER_IP):
```shell
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
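For example, a minimal sketch assuming boot2docker:
```shell
# Open a shell inside the boot2docker VM and curl the cluster IP from there
boot2docker ssh
curl <insert-cluster-ip-here>
```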
## Deploy a DNS
See [here](/docs/getting-started-guides/docker-multinode/deployDNS/) for instructions.
### A note on turning down your cluster
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.
You may use `docker kill $(docker ps -aq)`; note that this removes _all_ containers running under Docker, so use with caution.
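A sketch of that two-step teardown (the `grep` pattern assumes the hyperkube-based kubelet container started above):
```shell
# Kill the kubelet container first so it stops restarting everything else
docker ps | grep kubelet        # note the container ID
docker kill <kubelet-container-id>
# Then kill the remaining containers; this affects ALL containers under Docker
docker kill $(docker ps -aq)
```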
### Troubleshooting

View File

@ -1,215 +1,215 @@
---
---
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
* TOC
{:toc}
### Before you start
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) for hosted cluster installation and management.
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
### Prerequisites
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Enable the [Compute Engine Instance Group Manager API](https://developers.google.com/console/help/new/#activatingapis) in the [Google Cloud developers console](https://console.developers.google.com).
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
1. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
### Starting a cluster
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
```shell
curl -sS https://get.k8s.io | bash
```
or
```shell
wget -q -O - https://get.k8s.io | bash
```
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](/docs/getting-started-guides/logging), while `heapster` provides [monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) services.
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
```shell
cd kubernetes
cluster/kube-up.sh
```
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce/#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](/docs/troubleshooting/#slack).
The next few steps will show you:
1. how to set up the command line client on your workstation to manage the cluster
1. examples of how to use the cluster
1. how to delete the cluster
1. how to start clusters with non-default options (like larger clusters)
### Installing the Kubernetes command line tools on your workstation
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
The next step is to make sure the `kubectl` tool is in your path.
The [kubectl](/docs/user-guide/kubectl/kubectl) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
You will use it to look at your new cluster and bring up example apps.
Add the appropriate binary folder to your `PATH` to access kubectl:
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
However the gcloud bundled kubectl version may be older than the one downloaded by the
get.k8s.io install script. We recommend you use the downloaded binary to avoid
potential issues with client/server version skew.
#### Enabling bash completion of the Kubernetes command line tools
You may find it useful to enable `kubectl` bash completion:
```
$ source ./contrib/completions/bash/kubectl
```
**Note**: This will last for the duration of your bash session. If you want to make this permanent, you need to add this line to your bash profile.
Alternatively, on most Linux distributions you can also move the completions file to your bash_completion.d like this:
```
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
```
but then you have to update it when you update kubectl.
### Getting started with your cluster
#### Inspect your cluster
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
```shell
$ kubectl get --all-namespaces services
```
should show a set of [services](/docs/user-guide/services) that look something like this:
```shell
NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
default kubernetes 10.0.0.1 <none> 443/TCP <none> 1d
kube-system kube-dns 10.0.0.2 <none> 53/TCP,53/UDP k8s-app=kube-dns 1d
kube-system kube-ui 10.0.0.3 <none> 80/TCP k8s-app=kube-ui 1d
...
```
Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup.
You can do this via the
```shell
$ kubectl get --all-namespaces pods
```
command.
You'll see a list of pods that looks something like this (the name specifics will be different):
```shell
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
kube-system kube-dns-v5-7ztia 3/3 Running 0 15m
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
```
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
#### Run some examples
Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) is a good "getting started" walkthrough.
### Tearing down the cluster
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
```shell
cd kubernetes
cluster/kube-down.sh
```
Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
### Customizing
The script above relies on Google Storage to stage the Kubernetes release. It
then will start (by default) a single master VM along with 4 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
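For example, a sketch of overriding one of the defaults before bringing the cluster up (this assumes `NUM_NODES` is among the variables exposed in `config-default.sh`):
```shell
# Start a smaller cluster than the default 4 nodes
export NUM_NODES=2
cluster/kube-up.sh
```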
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
### Troubleshooting
#### Project settings
You need to have the Google Cloud Storage API, and the Google Cloud Storage
JSON API enabled. It is activated by default for new projects. Otherwise, it
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
details.
Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API` and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
#### Cluster initialization hang
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
**Once you fix the issue, you should run `kube-down.sh` to clean up** after the partial cluster creation, before running `kube-up.sh` to try again.
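A sketch of that inspection (the instance name and zone assume the default "kubernetes" prefix and default zone; adjust to your configuration):
```shell
# SSH to the master and look at the startup log
gcloud compute ssh kubernetes-master --zone us-central1-b
sudo tail -n 50 /var/log/startupscript.log
```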
#### SSH
If you're having trouble SSHing into your instances, ensure the GCE firewall
isn't blocking port 22 to your VMs. By default, this should work but if you
have edited firewall rules or created a new non-default network, you'll need to
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
--description "SSH allowed from anywhere" --allow tcp:22`
Additionally, your GCE SSH key must either have no passcode or you need to be
using `ssh-agent`.
#### Networking
The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network or if you change the network
being used in `cluster/config-default.sh`, create a new rule with the following
field values:
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
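A hedged sketch of creating such a rule with `gcloud` (the rule and network names are placeholders):
```shell
gcloud compute firewall-rules create default-allow-internal \
  --network=<network-name> \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```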

View File

@ -1,248 +1,248 @@
---
---
[Juju](https://jujucharms.com/docs/stable/about-juju) makes it easy to deploy
Kubernetes by provisioning, installing and configuring all the systems in
the cluster. Once deployed the cluster can easily scale up with one command
to increase the cluster size.
* TOC
{:toc}
## Prerequisites
> Note: If you're running kube-up on Ubuntu, all of the dependencies
> will be handled for you. You may safely skip to the section:
> [Launch Kubernetes Cluster](#launch-kubernetes-cluster)
### On Ubuntu
[Install the Juju client](https://jujucharms.com/get-started) on your
local Ubuntu system:
```shell
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core juju-quickstart
```
### With Docker
If you are not using Ubuntu or prefer the isolation of Docker, you may
run the following:
```shell
mkdir ~/.juju
sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti jujusolutions/jujubox:latest
```
At this point from either path you will have access to the `juju
quickstart` command.
To set up the credentials for your chosen cloud run:
```shell
juju quickstart --constraints="mem=3.75G" -i
```
> The `constraints` flag is optional, it changes the size of virtual machines
> that Juju will generate when it requests a new machine. Larger machines
> will run faster but cost more money than smaller machines.
Follow the dialogue and choose `save` and `use`. Quickstart will now
bootstrap the juju root node and set up the juju web based user
interface.
## Launch Kubernetes cluster
You will need to export the `KUBERNETES_PROVIDER` environment variable before
bringing up the cluster.
```shell
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
```
If this is your first time running the `kube-up.sh` script, it will install
the required dependencies to get started with Juju; additionally, it will
launch a curses based configuration utility allowing you to select your cloud
provider and enter the proper access credentials.
Next it will deploy the kubernetes master, etcd, and 2 nodes with flannel-based
Software Defined Networking (SDN) so containers on different hosts can
communicate with each other.
## Exploring the cluster
The `juju status` command provides information about each unit in the cluster:
```shell
$ juju status --format=oneline
- docker/0: 52.4.92.78 (started)
- flannel-docker/0: 52.4.92.78 (started)
- kubernetes/0: 52.4.92.78 (started)
- docker/1: 52.6.104.142 (started)
- flannel-docker/1: 52.6.104.142 (started)
- kubernetes/1: 52.6.104.142 (started)
- etcd/0: 52.5.216.210 (started) 4001/tcp
- juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp
- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp
```
You can use `juju ssh` to access any of the units:
```shell
juju ssh kubernetes-master/0
```
## Run some containers!
`kubectl` is available on the Kubernetes master node. We'll ssh in to
launch some containers, but you could also use `kubectl` locally by setting
`KUBERNETES_MASTER` to point at the IP address of "kubernetes-master/0".
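Following that note, a hedged sketch of the local option; the address is illustrative and should be replaced with the public IP of "kubernetes-master/0" from the `juju status` output shown above:

```shell
# Illustrative only: point your local tooling at the master's API port.
export KUBERNETES_MASTER=http://52.6.19.238:8080
kubectl get pods
```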
No pods will be available before starting a container:
```shell
kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
kubectl get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
```json
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "hello",
        "labels": {
            "name": "hello",
            "environment": "testing"
        }
    },
    "spec": {
        "containers": [{
            "name": "hello",
            "image": "quay.io/kelseyhightower/hello",
            "ports": [{
                "containerPort": 80,
                "hostPort": 80
            }]
        }]
    }
}
```
Create the pod with kubectl:
```shell
kubectl create -f pod.json
```
Get info on the pod:
```shell
kubectl get pods
```
To test the hello app, we need to locate which node is hosting
the container. Better tooling for using Juju to introspect containers
is in the works, but for now we can use `juju run` and `juju status` to find
our hello app.
Exit out of our ssh session and run:
```shell
juju run --unit kubernetes/0 "docker ps -n=1"
...
juju run --unit kubernetes/1 "docker ps -n=1"
CONTAINER ID   IMAGE                                  COMMAND   CREATED             STATUS             PORTS   NAMES
02beb61339d8   quay.io/kelseyhightower/hello:latest   /hello    About an hour ago   Up About an hour           k8s_hello....
```
We see "kubernetes/1" has our container, so we can open port 80:
```shell
juju run --unit kubernetes/1 "open-port 80"
juju expose kubernetes
sudo apt-get install curl
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
```
Finally delete the pod:
```shell
juju ssh kubernetes-master/0
kubectl delete pods hello
```
## Scale out cluster
We can add node units like so:
```shell
juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
```
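If you need several more nodes at once, Juju can add them in one call; a hedged sketch, with the unit count purely illustrative:

```shell
# Illustrative only: add two more docker/kubernetes/flannel units in one step.
juju add-unit -n 2 docker
```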
## Launch the "k8petstore" example app
The [k8petstore example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/k8petstore/) is available as a
[juju action](https://jujucharms.com/docs/devel/actions).
```shell
juju action do kubernetes-master/0
```
> Note: this example includes curl statements to exercise the app, which
> automatically generates "petstore" transactions written to redis, and allows
> you to visualize the throughput in your browser.
## Tear down cluster
```shell
./kube-down.sh
```
or destroy your current Juju environment (using the `juju env` command):
```shell
juju destroy-environment --force `juju env`
```
## More Info
The Kubernetes charms and bundles can be found in the `kubernetes` project on
github.com:
- [Bundle Repository](http://releases.k8s.io/{{page.githubbranch}}/cluster/juju/bundles)
  * [Kubernetes master charm](https://releases.k8s.io/{{page.githubbranch}}/cluster/juju/charms/trusty/kubernetes-master)
  * [Kubernetes node charm](https://releases.k8s.io/{{page.githubbranch}}/cluster/juju/charms/trusty/kubernetes)
- [More about Juju](https://jujucharms.com)
### Cloud compatibility
Juju runs natively against a variety of public cloud providers. Juju currently
works with [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws),
[Windows Azure](https://jujucharms.com/docs/stable/config-azure),
[DigitalOcean](https://jujucharms.com/docs/stable/config-digitalocean),
[Google Compute Engine](https://jujucharms.com/docs/stable/config-gce),
[HP Public Cloud](https://jujucharms.com/docs/stable/config-hpcloud),
[Joyent](https://jujucharms.com/docs/stable/config-joyent),
[LXC](https://jujucharms.com/docs/stable/config-LXC), any
[OpenStack](https://jujucharms.com/docs/stable/config-openstack) deployment,
[Vagrant](https://jujucharms.com/docs/stable/config-vagrant), and
[Vmware vSphere](https://jujucharms.com/docs/stable/config-vmware).
If you do not see your favorite cloud provider listed, many clouds can be
configured for [manual provisioning](https://jujucharms.com/docs/stable/config-manual).
The Kubernetes bundle has been tested on GCE and AWS and found to work with
version 1.0.0.
View File
@ -1,294 +1,294 @@
---
---
* TOC
{:toc}
### Highlights
* Super-fast cluster boot-up (few seconds instead of several minutes for vagrant)
* Reduced disk usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write)
* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt)
### Warnings about `libvirt-coreos` use case
The primary goal of the `libvirt-coreos` cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in terms of resources used.
In order to achieve that goal, its deployment is very different from the "standard production deployment" method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.
The `libvirt-coreos` cluster provider doesn't aim to be a production look-alike.
Another difference is that no security is enforced on `libvirt-coreos` at all. For example,
* Kube API server is reachable via a clear-text connection (no SSL);
* Kube API server requires no credentials;
* etcd access is not protected;
* Kubernetes secrets are not protected as securely as they are in production environments;
* etc.
So, a k8s application developer should not validate their application's interaction with Kubernetes on `libvirt-coreos`, because they might technically succeed in doing things that are prohibited in a production environment, such as:
* un-authenticated access to Kube API server;
* Access to Kubernetes private data structures inside etcd;
* etc.
On the other hand, `libvirt-coreos` might be useful for people investigating low level implementation of Kubernetes because debugging techniques like sniffing the network traffic or introspecting the etcd content are easier on `libvirt-coreos` than on a production deployment.
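For example, a hedged sketch of that kind of introspection, assuming etcd's v2 API and the default `/registry` prefix used by Kubernetes (run it from the master VM described later in this guide):

```shell
# Illustrative only: list the keys Kubernetes stores in etcd on a libvirt-coreos master.
etcdctl ls --recursive /registry
```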
### Prerequisites
1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
2. Install [ebtables](http://ebtables.netfilter.org/)
3. Install [qemu](http://wiki.qemu.org/Main_Page)
4. Install [libvirt](http://libvirt.org/)
5. Install [openssl](http://openssl.org/)
6. Enable and start the libvirt daemon, e.g:
* ``systemctl enable libvirtd && systemctl start libvirtd`` # for systemd-based systems
* ``/etc/init.d/libvirt-bin start`` # for init.d-based systems
7. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
8. Check that your $HOME is accessible to the qemu user²
#### &sup1; Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
You can test it with the following command:
```shell
virsh -c qemu:///system pool-list
```
If you get access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.
In short, if your libvirt has been compiled with Polkit support (e.g. Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`:
```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
```
```conf
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                return polkit.Result.YES;
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
        }
});
EOF
```
If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
```shell
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
$ usermod -a -G libvirtd $USER
# $USER needs to logout/login to have the new group be taken into account
```
(Replace `$USER` with your login name)
#### &sup2; Qemu will run with a specific user. It must have access to the VMs' drives
All the disk drive resources needed by the VM (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
As we're using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.
If your `$HOME` is world readable, everything is fine. If your $HOME is private, `cluster/kube-up.sh` will fail with an error message like:
```shell
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```
In order to fix that issue, you have several possibilities:
* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
  * backed by a filesystem with a lot of free disk space
  * writable by your user;
  * accessible by the qemu user.
* Grant the qemu user access to the storage pool.
On Arch:
```shell
setfacl -m g:kvm:--x ~
```
### Setup
By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The `NUM_NODES` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:
* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
You can check that your machines are there and running with:
```shell
$ virsh -c qemu:///system list
Id Name State
----------------------------------------------------
15 kubernetes_master running
16 kubernetes_node-01 running
17 kubernetes_node-02 running
18 kubernetes_node-03 running
```
You can check that the Kubernetes cluster is working with:
```shell
$ kubectl get nodes
NAME LABELS STATUS
192.168.10.2 <none> Ready
192.168.10.3 <none> Ready
192.168.10.4 <none> Ready
```
The VMs are running [CoreOS](https://coreos.com/).
Your ssh keys have already been pushed to the VM. (It looks for ~/.ssh/id_*.pub)
The user to use to connect to the VM is `core`.
The IP to connect to the master is 192.168.10.1.
The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to `kubernetes_master`:
```shell
ssh core@192.168.10.1
```
Connect to `kubernetes_node-01`:
```shell
ssh core@192.168.10.2
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
```shell
export KUBERNETES_PROVIDER=libvirt-coreos
```
Bring up a libvirt-CoreOS cluster of 5 nodes
```shell
NUM_NODES=5 cluster/kube-up.sh
```
Destroy the libvirt-CoreOS cluster
```shell
cluster/kube-down.sh
```
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
```shell
cluster/kube-push.sh
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
```shell
KUBE_PUSH=local cluster/kube-push.sh
```
Interact with the cluster
```shell
kubectl ...
```
### Troubleshooting
#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz
Build the release tarballs:
```shell
make release
```
#### Can't find virsh in PATH, please fix and retry.
Install libvirt
On Arch:
```shell
pacman -S qemu libvirt
```
On Ubuntu 14.04.1:
```shell
aptitude install qemu-system-x86 libvirt-bin
```
On Fedora 21:
```shell
yum install qemu libvirt
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
Start the libvirt daemon
On Arch:
```shell
systemctl start libvirtd
```
On Ubuntu 14.04.1:
```shell
service libvirt-bin start
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
Fix libvirt access permission (Remember to adapt `$USER`)
On Arch and Fedora 21:
```shell
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
```
```conf
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                return polkit.Result.YES;
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
        }
});
EOF
```
On Ubuntu:
```shell
usermod -a -G libvirtd $USER
```
#### error: Out of memory initializing network (virsh net-create...)
Ensure libvirtd has been restarted since ebtables was installed.
View File
@ -1,116 +1,116 @@
---
---
* TOC
{:toc}
### Requirements
#### Linux
Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](/docs/getting-started-guides/vagrant), or on a cloud provider like [Google Compute Engine](/docs/getting-started-guides/gce)
#### Docker
You need [Docker](https://docs.docker.com/installation/#installation) version
1.3 or later. Ensure the Docker daemon is running and can be contacted (try `docker
ps`). Some of the Kubernetes components need to run as root, which normally
works fine with docker.
#### etcd
You need [etcd](https://github.com/coreos/etcd/releases) installed and available in your ``$PATH``.
#### go
You need [go](https://golang.org/doc/install) version 1.3 or later installed and available in your ``$PATH``.
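A quick way to sanity-check these prerequisites from a terminal (the commands only verify that each tool is reachable; the versions reported will be whatever you have installed):

```shell
docker ps      # is the Docker daemon reachable?
etcd --version # is etcd on your $PATH?
go version     # is Go 1.3+ on your $PATH?
```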
### Starting the cluster
In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):
```shell
cd kubernetes
hack/local-up-cluster.sh
```
This will build and start a lightweight local cluster, consisting of a master
and a single node. Type Control-C to shut it down.
You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will
print the commands to run to point kubectl at the local cluster.
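The exact commands are printed by the script when it starts; a hedged sketch of what they typically look like (the address and flags below are illustrative, use what the script prints):

```shell
# Illustrative only: point kubectl at the local, insecure apiserver.
cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
```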
### Running a container
Your cluster is running, and you want to start running containers!
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
```shell
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
## begin wait for provision to complete, you can monitor the docker pull by opening a new terminal
sudo docker images
## you should see it pulling the nginx image, once the above command returns it
sudo docker ps
## you should see your container running!
exit
## end wait
## introspect Kubernetes!
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
```
### Running a user defined pod
Note the difference between a [container](/docs/user-guide/containers)
and a [pod](/docs/user-guide/pods). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
```shell
cluster/kubectl.sh create -f docs/user-guide/pod.yaml
```
Congratulations!
### Troubleshooting
#### I cannot reach service IPs on the network.
Some firewall software that uses iptables may not interact well with
kubernetes. If you have trouble around networking, try disabling any
firewall or other iptables-using systems, first. Also, you can check
if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.
By default the IP range for service cluster IPs is 10.0.*.* - depending on your
docker installation, this may conflict with IPs for containers. If you find
containers running with IPs in this range, edit hack/local-up-cluster.sh and
change the service-cluster-ip-range flag to something else.
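A hedged sketch of the kind of change meant here; the replacement CIDR is purely illustrative, pick any range that does not overlap your Docker bridge:

```shell
# Illustrative only: in hack/local-up-cluster.sh, change the apiserver argument, e.g. from
#   --service-cluster-ip-range=10.0.0.0/24
# to a non-conflicting range such as
#   --service-cluster-ip-range=192.168.200.0/24
```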
#### I cannot create a replication controller with replica size greater than 1! What gives?
You are running a single node setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers.
#### I changed Kubernetes code, how do I run it?
```shell
cd kubernetes
hack/build-go.sh
hack/local-up-cluster.sh
```
#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
One or more of the Kubernetes daemons might've crashed. Tail the logs of each in /tmp.
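For example (the exact log file names depend on what the script started; the names below are illustrative):

```shell
# Illustrative only: inspect the tail of the daemon logs written by local-up-cluster.sh.
tail -n 50 /tmp/kube-apiserver.log /tmp/kube-controller-manager.log /tmp/kubelet.log
```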
#### The pods fail to connect to the services by host names
The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](http://issue.k8s.io/6667). You can start one manually. Related documentation can be found [here](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns#how-do-i-configure-it)
View File
@ -1,329 +1,329 @@
---
---
* TOC
{:toc}
## About Kubernetes on Mesos
<!-- TODO: Update, clean up. -->
Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3].
Mesos also ensures applications from different frameworks running on your cluster are isolated and that resources are allocated fairly among them.
Mesos clusters can be deployed on nearly every IaaS cloud provider infrastructure or in your own physical datacenter. Kubernetes on Mesos runs on top of that and therefore allows you to easily move Kubernetes workloads from one of these environments to the other.
This tutorial will walk you through setting up Kubernetes on a Mesos cluster.
It provides a step-by-step walkthrough of adding Kubernetes to a Mesos cluster and starting your first pod with an nginx webserver.
**NOTE:** There are [known issues with the current implementation][7] and support for centralized logging and monitoring is not yet available.
Please [file an issue against the kubernetes-mesos project][8] if you have problems completing the steps below.
Further information is available in the Kubernetes on Mesos [contrib directory][13].
### Prerequisites
- Understanding of [Apache Mesos][6]
- A running [Mesos cluster on Google Compute Engine][5]
- A [VPN connection][10] to the cluster
- A machine in the cluster which should become the Kubernetes *master node* with:
- Go (see [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/devel/development.md#go-versions) for required versions)
- make (i.e. build-essential)
- Docker
**Note**: You *can*, but you *don't have to* deploy Kubernetes-Mesos on the same machine the Mesos master is running on.
### Deploy Kubernetes-Mesos
Log into the future Kubernetes *master node* over SSH, replacing the placeholder below with the correct IP address.
```shell
ssh jclouds@${ip_address_of_master_node}
```
Build Kubernetes-Mesos.
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
export KUBERNETES_CONTRIB=mesos
make
```
Set some environment variables.
The internal IP address of the master may be obtained via `hostname -i`.
```shell
export KUBERNETES_MASTER_IP=$(hostname -i)
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
```
Note that KUBERNETES_MASTER is used as the API endpoint. If you have an existing `~/.kube/config` that points to another endpoint, you need to add the option `--server=${KUBERNETES_MASTER}` to kubectl in later steps.
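For example (only needed if your kubeconfig points elsewhere; the target command is otherwise unchanged):

```shell
# Illustrative only: explicitly aim kubectl at this cluster's API endpoint.
kubectl --server=${KUBERNETES_MASTER} get pods
```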
### Deploy etcd
Start etcd and verify that it is running:
```shell
sudo docker run -d --hostname $(uname -n) --name etcd \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.2.1 \
--listen-client-urls http://0.0.0.0:4001 \
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
```
```shell
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd7bac9e2301 quay.io/coreos/etcd:v2.2.1 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
```
It's also a good idea to ensure your etcd instance is reachable by testing it
```shell
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
```
If connectivity is OK, you will see an output of the available keys in etcd (if any).
### Start Kubernetes-Mesos Services
Update your PATH to more easily run the Kubernetes-Mesos binaries:
```shell
export PATH="$(pwd)/_output/local/go/bin:$PATH"
```
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos-master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
```shell
export MESOS_MASTER=<host:port or zk:// url>
```
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
```shell
$ cat <<EOF >mesos-cloud.conf
[mesos-cloud]
mesos-master = ${MESOS_MASTER}
EOF
```
Now start the kubernetes-mesos API server, controller manager, and scheduler on the master node:
```shell
$ km apiserver \
--address=${KUBERNETES_MASTER_IP} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--service-cluster-ip-range=10.10.10.0/24 \
--port=8888 \
--cloud-provider=mesos \
--cloud-config=mesos-cloud.conf \
--secure-port=0 \
--v=1 >apiserver.log 2>&1 &
$ km controller-manager \
--master=${KUBERNETES_MASTER_IP}:8888 \
--cloud-provider=mesos \
--cloud-config=./mesos-cloud.conf \
--v=1 >controller.log 2>&1 &
$ km scheduler \
--address=${KUBERNETES_MASTER_IP} \
--mesos-master=${MESOS_MASTER} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--mesos-user=root \
--api-servers=${KUBERNETES_MASTER_IP}:8888 \
--cluster-dns=10.10.10.10 \
--cluster-domain=cluster.local \
--v=2 >scheduler.log 2>&1 &
```
Disown your background jobs so that they'll stay running if you log out.
```shell
disown -a
```
#### Validate KM Services
Interact with the kubernetes-mesos framework via `kubectl`:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
```
```shell
# NOTE: your service IPs will likely differ
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
k8sm-scheduler component=scheduler,provider=k8sm <none> 10.10.10.113 10251/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.10.10.1 443/TCP
```
Lastly, look for Kubernetes in the Mesos web GUI by pointing your browser to
`http://<mesos-master-ip:port>`. Make sure you have an active VPN connection.
Go to the Frameworks tab, and look for an active framework named "Kubernetes".
## Spin up a pod
Write a pod description (in YAML) to a local file:
```shell
$ cat <<EOPOD >nginx.yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOPOD
```
Send the pod description to Kubernetes using the `kubectl` CLI:
```shell
$ kubectl create -f ./nginx.yaml
pods/nginx
```
Wait a minute or two while `dockerd` downloads the image layers from the internet.
We can use the `kubectl` interface to monitor the status of our pod:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 14s
```
Verify that the pod task is running in the Mesos web GUI. Click on the
Kubernetes framework. The next screen should show the running Mesos task that
started the Kubernetes pod.
## Launching kube-dns
Kube-dns is an addon for Kubernetes which adds DNS-based service discovery to the cluster. For a detailed explanation see [DNS in Kubernetes][4].
The kube-dns addon runs as a pod inside the cluster. The pod consists of three co-located containers:
- a local etcd instance
- the [skydns][11] DNS server
- the kube2sky process to glue skydns to the state of the Kubernetes cluster.
The skydns container offers DNS service via port 53 to the cluster. The etcd communication happens locally over 127.0.0.1.
We assume that kube-dns will use
- the service IP `10.10.10.10`
- and the `cluster.local` domain.
Note that we have passed these two values already as parameter to the apiserver above.
A template for a replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:
- replace `{{ pillar['dns_replicas'] }}` with `1`
- replace `{{ pillar['dns_domain'] }}` with `cluster.local.`
- add `--kube_master_url=${KUBERNETES_MASTER}` parameter to the kube2sky container command.
In addition the service template at [cluster/addons/dns/skydns-svc.yaml.in][12] needs the following replacement:
- `{{ pillar['dns_server'] }}` with `10.10.10.10`.
To do this automatically:
```shell
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g;"\
"s,\(command = \"/kube2sky\"\),\\1\\"$'\n'" - --kube_master_url=${KUBERNETES_MASTER},;"\
"s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
cluster/addons/dns/skydns-rc.yaml.in > skydns-rc.yaml
sed -e "s/{{ pillar\['dns_server'\] }}/10.10.10.10/g" \
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml
```
Now the kube-dns pod and service are ready to be launched:
```shell
kubectl create -f ./skydns-rc.yaml
kubectl create -f ./skydns-svc.yaml
```
Check with `kubectl get pods --namespace=kube-system` that 3/3 containers of the pods are eventually up and running. Note that the kube-dns pods run in the `kube-system` namespace, not in `default`.
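For example, to watch the containers come up:

```shell
kubectl get pods --namespace=kube-system
```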
To check that the new DNS service in the cluster works, we start a busybox pod and use that to do a DNS lookup. First create the `busybox.yaml` pod spec:
```shell
cat <<EOF >busybox.yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF
```
Then start the pod:
```shell
kubectl create -f ./busybox.yaml
```
When the pod is up and running, start a lookup for the Kubernetes master service, made available on 10.10.10.1 by default:
```shell
kubectl exec busybox -- nslookup kubernetes
```
If everything works fine, you will get this output:
```shell
Server: 10.10.10.10
Address 1: 10.10.10.10
Name: kubernetes
Address 1: 10.10.10.1
```
## What next?
Try out some of the standard [Kubernetes examples][9].
Read about Kubernetes on Mesos' architecture in the [contrib directory][13].
**NOTE:** Some examples require Kubernetes DNS to be installed on the cluster.
Future work will add instructions to this guide to enable support for Kubernetes DNS.
**NOTE:** Please be aware that there are [known issues with the current Kubernetes-Mesos implementation][7].
[1]: http://mesosphere.com/docs/tutorials/run-hadoop-on-mesos-using-installer
[2]: http://mesosphere.com/docs/tutorials/run-spark-on-mesos
[3]: http://mesosphere.com/docs/tutorials/run-chronos-on-mesos
[4]: https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md
[5]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/
[6]: http://mesos.apache.org/
[7]: https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/docs/issues.md
[8]: https://github.com/mesosphere/kubernetes-mesos/issues
[9]: https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
[11]: https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-rc.yaml.in
[12]: https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-svc.yaml.in
[13]: https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/README.md
File diff suppressed because it is too large
View File
@ -1,272 +1,272 @@
---
---
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
* TOC
{:toc}
### Prerequisites
1. Install the latest version (>= 1.7.4) of Vagrant from http://www.vagrantup.com/downloads.html
2. Install one of:
   1. The latest version of VirtualBox from https://www.virtualbox.org/wiki/Downloads
   2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
   3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
   4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
   5. libvirt with KVM and hardware virtualisation support enabled, plus [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official RPM, so you can use `yum install vagrant-libvirt`
### Setup
Setting up a cluster is as simple as running:
```shell
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
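For example, a minimal sketch of switching the provider in the current shell (any other supported provider name follows the same pattern):
```shell
# Point all cluster/*.sh scripts at the Vagrant variant for this shell session.
export KUBERNETES_PROVIDER=vagrant
# Unsetting the variable makes the scripts fall back to their Google Compute Engine default.
unset KUBERNETES_PROVIDER
```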
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB of memory, so make sure you have at least 2 GB to 4 GB of free memory (plus appropriate free disk space).
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
By default, each VM in the cluster is running Fedora.
To access the master or any node:
```shell
vagrant ssh master
vagrant ssh node-1
```
If you are running more than one node, you can access the others by:
```shell
vagrant ssh node-2
vagrant ssh node-3
```
Each node in the cluster installs the docker daemon and the kubelet.
The master node instantiates the Kubernetes master components as pods on the machine.
To view the service status and/or logs on the kubernetes-master:
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```
To view the services on any of the nodes:
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh node-1
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
```
### Interacting with your Kubernetes cluster with Vagrant
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
To push new Kubernetes code to the cluster after making source changes:
```shell
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```shell
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```shell
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
```shell
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```shell
cat ~/.kubernetes_vagrant_auth
```
```json
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the nodes that you have started with:
```shell
./cluster/kubectl.sh get nodes
```
### Running containers
Your cluster is running, and you can list the nodes in it:
```shell
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs.
Before you start any containers, there are no pods, services or replication controllers:
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas:
```shell
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```
When listing the pods, you will see that three containers have been started and are in the Pending state:
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 0/1 Pending 0 10s
my-nginx-gr3hh 0/1 Pending 0 10s
my-nginx-xql4j 0/1 Pending 0 10s
```
You need to wait for the provisioning to complete; you can monitor the nodes by doing:
```shell
$ vagrant ssh node-1 -c 'sudo docker images'
kubernetes-node-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```shell
$ vagrant ssh node-1 -c 'sudo docker ps'
kubernetes-node-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
Going back to listing the pods, services and replicationcontrollers, you now have:
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 1m
my-nginx-gr3hh 1/1 Running 0 1m
my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
my-nginx my-nginx nginx run=my-nginx 3 1m
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) application to learn how to create a service.
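As a quick, minimal sketch (the guestbook example covers this properly), you could expose the replication controller you just created and list the resulting service; the `expose` command and flags below are assumptions about your kubectl version rather than steps from this guide:
```shell
./cluster/kubectl.sh expose rc my-nginx --port=80
./cluster/kubectl.sh get services
```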
You can already play with scaling the replicas with:
```shell
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 2m
my-nginx-gr3hh 1/1 Running 0 2m
```
Congratulations!
### Troubleshooting
#### I keep downloading the same (large) box all the time!
By default, the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
```shell
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I am getting timeouts when trying to curl the master from my host!
If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
```shell
rm ~/.kubernetes_vagrant_auth
```
After using kubectl.sh, make sure that the correct credentials are set:
```shell
cat ~/.kubernetes_vagrant_auth
```
```json
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
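To see where things stand, you can check which images a node has pulled so far and inspect the pod's events; the pod name below is illustrative, so substitute one from your own `get pods` output:
```shell
vagrant ssh node-1 -c 'sudo docker images'
./cluster/kubectl.sh describe pod my-nginx-5kq0g
```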
#### I want to make changes to Kubernetes code!
To set up a vagrant cluster for hacking, follow the [vagrant developer guide](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/developer-guides/vagrant.md).
#### I have brought Vagrant up but the nodes cannot validate!
Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
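Concretely, that check looks like this (shown for node-1; substitute the node you are debugging):
```shell
vagrant ssh node-1
sudo cat /var/log/salt/minion
```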
#### I want to change the number of nodes!
You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1, like so:
```shell
export NUM_NODES=1
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
```shell
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_NODE_MEMORY=2048
```
#### I want to set proxy settings for my Kubernetes cluster bootstrapping!
If you are behind a proxy, you need to install the Vagrant proxy plugin and set the proxy settings:
```shell
vagrant plugin install vagrant-proxyconf
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses that should not be proxied, for example:
```shell
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are using sudo to build Kubernetes (for example, `make quick-release`), you need to run `sudo -E make quick-release` so that these environment variables are passed through.
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. This is not supported at this time.
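If you have already suspended a cluster, a reasonable recovery path (untested here) is to fall back on the halt-and-recreate cycle shown earlier:
```shell
vagrant halt
./cluster/kube-up.sh
```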
#### I want vagrant to sync folders via nfs!
You can ensure that Vagrant uses NFS to sync folders with virtual machines by setting the `KUBERNETES_VAGRANT_USE_NFS` environment variable to 'true'. NFS is faster than VirtualBox or VMware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring NFS on the host. This setting has no effect on the libvirt provider, which uses NFS by default. For example:
```shell
export KUBERNETES_VAGRANT_USE_NFS=true
```

---
---
The example below creates a Kubernetes cluster with 4 worker node Virtual
Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This
cluster is set up and controlled from your workstation (or wherever you find
convenient).
* TOC
{:toc}
### Prerequisites
1. You need administrator credentials to an ESXi machine or vCenter instance.
2. You must have Go (see [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/development.md#go-versions) for supported versions) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
```shell
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
```shell
go get github.com/vmware/govmomi/govc
```
5. Get or build a [binary release](/docs/getting-started-guides/binary_release)
### Setup
Download a prebuilt Debian 8.2 VMDK that we'll use as a base image:
```shell
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2016-01-08/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
Import this VMDK into your vSphere datastore:
```shell
export GOVC_URL='hostname' # hostname of the vc
export GOVC_USERNAME='username' # username for logging into the vsphere.
export GOVC_PASSWORD='password' # password for the above username
export GOVC_NETWORK='Network Name' # Name of the network the vms should join. Many times it could be "VM Network"
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
govc import.vmdk kube.vmdk ./kube/
```
Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
```shell
govc datastore.ls ./kube/
```
Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required
parameters. The guest login for the image that you imported is `kube:kube`.
### Starting a cluster
Now, let's continue with deploying Kubernetes.
This process takes about 20-30 minutes, depending on your network.
#### From extracted binary release
```shell
cd kubernetes
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```
#### Build from source
```shell
cd kubernetes
make release
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```
Refer to the top level README and the getting started guide for Google Compute
Engine. Once you have successfully reached this point, your vSphere Kubernetes
deployment works just as any other one!
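For example, the same wrapper script used throughout these guides should answer against your vSphere cluster; a quick smoke test might be:
```shell
./cluster/kubectl.sh get nodes
```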
**Enjoy!**
### Extra: debugging deployment failure
The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You
can log into any VM as the `kube` user to poke around and figure out what is
going on (find yourself authorized with your SSH key, or use the password
`kube` otherwise).
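A minimal sketch of such a debugging session; the IP address below is illustrative (use one printed by `kube-up.sh`), and the follow-up command assumes Docker is running on the guest:
```shell
ssh kube@10.20.30.40   # illustrative IP; password 'kube' if your SSH key is not authorized
sudo docker ps         # check whether the Kubernetes components have come up
```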