Move local cluster setup guides to the kubernetes/kubernetes repo under docs/devel/local-cluster/.

Done to encourage folks to use minikube and to reduce cluster setup guide maintenance overhead.
pull/957/head
Phillip Wittrock 2016-08-03 13:55:33 -07:00
parent 388ec22c6c
commit e28c21045f
5 changed files with 11 additions and 817 deletions

View File

@@ -145,12 +145,8 @@ toc:
section:
- title: Running Kubernetes Locally via Minikube
path: /docs/getting-started-guides/minikube/
- title: Running Kubernetes Locally via Docker
path: /docs/getting-started-guides/docker/
- title: Running Kubernetes Locally with No VM
path: /docs/getting-started-guides/locally/
- title: Running Kubernetes Locally with Vagrant
path: /docs/getting-started-guides/vagrant/
- title: Deprecated Alternatives
path: /docs/getting-started-guides/alternatives/
- title: Running Kubernetes on Turn-key Cloud Solutions
section:
- title: Running Kubernetes on Google Container Engine

View File

@@ -0,0 +1,9 @@
---
assignees:
- pwittrock
---
# *Stop. These guides are superseded by [Minikube](../minikube/). They are only listed here for completeness.*
* [Using Vagrant](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/local-cluster/vagrant.md)
* *Advanced:* [Directly using Kubernetes raw binaries (Linux Only)](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/local-cluster/local.md)

View File

@@ -1,276 +0,0 @@
---
assignees:
- brendandburns
- fgrzadkowski
---
**Stop. This guide has been superseded by [Minikube](../minikube/) which is the recommended method of running Kubernetes on your local machine.**
The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/images/docs/k8s-singlenode-docker.png)
* TOC
{:toc}
## Prerequisites
**Note: These steps have not been tested with the [Docker For Mac or Docker For Windows beta programs](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/).**
1) You need to have Docker version >= "1.10" installed on the machine.
2) Enable mount propagation. Hyperkube runs in a container and has to mount volumes for other containers, for example for persistent storage. The required steps depend on your init system.
If you use **systemd**, change `MountFlags` in the Docker unit file to `shared`:
```shell
DOCKER_CONF=$(systemctl cat docker | head -1 | awk '{print $2}')
sed -i.bak 's/^\(MountFlags=\).*/\1shared/' $DOCKER_CONF
systemctl daemon-reload
systemctl restart docker
```
**Otherwise**, manually set the mount point used by Hyperkube to be shared:
```shell
mkdir -p /var/lib/kubelet
mount --bind /var/lib/kubelet /var/lib/kubelet
mount --make-shared /var/lib/kubelet
```
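You can verify that the mount point is shared (a quick check, assuming `findmnt` from util-linux is available):
```shell
# The PROPAGATION column should read "shared"
findmnt -o TARGET,PROPAGATION /var/lib/kubelet
```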
### Run it
#### Step 1: Decide which Kubernetes version to use
Set the `${K8S_VERSION}` variable to a version of Kubernetes >= "v1.2.0".
If you'd like to use the current **stable** version of Kubernetes, run the following:
```sh
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
```
and for the **latest** available version (including unstable releases):
```sh
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)
```
#### Step 2: Start Hyperkube
```shell
export ARCH=amd64
docker run -d \
--volume=/sys:/sys:rw \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
--name=kubelet \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
> If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above. It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230).
> Architectures other than `amd64` are experimental and sometimes unstable, but feel free to try them out! Valid values: `arm`, `arm64` and `ppc64le`. ARM is available with Kubernetes version `v1.3.0-alpha.2` and higher. ARM 64-bit and PowerPC 64 little-endian are available with `v1.3.0-alpha.3` and higher. Track progress on multi-arch support [here](https://github.com/kubernetes/kubernetes/issues/17981).
> If you are behind a proxy, you need to pass the proxy setup to curl in the containers so they can pull the certificates. Create a `.curlrc` file under the `/root` folder (because the containers run as root) with the following line:
```
proxy = <your_proxy_server>:<port>
```
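For example, to create the file (the proxy address below is only a placeholder; substitute your own):
```shell
# proxy.example.com:3128 is a placeholder, not a real proxy
cat > /root/.curlrc <<'EOF'
proxy = proxy.example.com:3128
EOF
```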
This actually runs the kubelet, which in turn runs a [pod](/docs/user-guide/pods/) that contains the other master components.
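You can watch the master components come up as containers:
```shell
# Master component containers are named with a k8s_ prefix
docker ps | grep k8s
```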
**SECURITY WARNING**: Services exposed via Kubernetes using Hyperkube are available on the host node's public network interface / IP address. Because of this, this guide is not suitable for any host node/server that is directly internet accessible. Refer to [#21735](https://github.com/kubernetes/kubernetes/issues/21735) for additional info.
### Download `kubectl`
At this point you should have a running Kubernetes cluster. You can test it out
by downloading the kubectl binary for `${K8S_VERSION}` (in this example: `{{page.version}}.0`).
Downloads:
- `linux/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl
- `linux/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl
- `linux/arm`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl
- `linux/arm64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl
- `linux/ppc64le`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl
- `OS X/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl
- `OS X/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl
- `windows/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe
- `windows/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/386/kubectl.exe
The generic download path is:
```
http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}
```
An example install with `linux/amd64`:
```
curl -sSL "https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl" > /usr/bin/kubectl
chmod +x /usr/bin/kubectl
```
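To confirm the binary is installed, print the client version (the server version additionally requires the tunnel and config steps below):
```shell
kubectl version --client
```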
On OS X, to make the API server accessible locally, set up an SSH tunnel:
```shell
docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080
```
Setting up an SSH tunnel works for remote Docker hosts as well.
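For a remote Docker host the tunnel looks similar (the hostname below is illustrative):
```shell
# user@remote-docker-host is a placeholder for your own host
ssh -N -L 8080:localhost:8080 user@remote-docker-host
```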
(Optional) Create a Kubernetes cluster configuration:
```shell
kubectl config set-cluster test-doc --server=http://localhost:8080
kubectl config set-context test-doc --cluster=test-doc
kubectl config use-context test-doc
```
### Test it out
List the nodes in your cluster by running:
```shell
kubectl get nodes
```
This should print:
```shell
NAME STATUS AGE
127.0.0.1 Ready 1h
```
### Run an application
```shell
kubectl run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
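You can also check progress from the Kubernetes side; repeat until the pod reports `Running`:
```shell
kubectl get pods
```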
### Expose it as a service
```shell
kubectl expose deployment nginx --port=80
```
Run the following command to obtain the cluster-local IP of the service we just created:
```shell{% raw %}
ip=$(kubectl get svc nginx --template={{.spec.clusterIP}})
echo $ip
{% endraw %}```
Hit the webserver with this IP:
```shell{% raw %}
curl $ip
{% endraw %}```
On OS X, since Docker is running inside a VM, run the following command instead:
```shell
docker-machine ssh `docker-machine active` curl $ip
```
## Deploy a DNS
Read [documentation for manually deploying a DNS](/docs/getting-started-guides/docker-multinode/#deploy-dns-manually-for-v12x) for instructions.
### Turning down your cluster
1\. Delete the nginx service and deployment:
If you plan on re-creating your nginx deployment and service you will need to clean it up.
```shell
kubectl delete service,deployments nginx
```
2\. Delete all the containers including the kubelet:
```shell
docker rm -f kubelet
docker rm -f `docker ps | grep k8s | awk '{print $1}'`
```
3\. Clean up the filesystem:
On OS X, first ssh into the docker VM:
```shell
docker-machine ssh `docker-machine active`
```
```shell
grep /var/lib/kubelet /proc/mounts | awk '{print $2}' | sudo xargs -n1 umount
sudo rm -rf /var/lib/kubelet
```
### Troubleshooting
#### Node is in `NotReady` state
If you see your node as `NotReady`, it's possible that your OS does not have memcg (memory cgroup) enabled.
1. Your kernel should support memory accounting. Ensure that the
following configs are turned on in your Linux kernel:
```shell
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
```
2. Enable memory accounting in the kernel at boot time via command-line
parameters, as follows:
```shell
GRUB_CMDLINE_LINUX="cgroup_enable=memory=1"
```
NOTE: The above is specifically for GRUB2.
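After editing `/etc/default/grub`, regenerate the GRUB configuration and reboot. The exact command is distribution-specific; two common variants:
```shell
# Debian/Ubuntu
sudo update-grub
# Fedora/CentOS
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```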
You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
```shell
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory=1
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.

View File

@@ -1,132 +0,0 @@
---
assignees:
- erictune
- mikedanese
- thockin
---
**Stop. This guide has been superseded by [Minikube](../minikube/) which is the recommended method of running Kubernetes on your local machine.**
* TOC
{:toc}
### Requirements
#### Linux
Not running Linux? Consider running Linux in a local virtual machine with [vagrant](https://www.vagrantup.com/), or on a cloud provider like [Google Compute Engine](/docs/getting-started-guides/gce).
#### Docker
You need [Docker](https://docs.docker.com/installation/#installation) version
1.8.3 or later. Ensure the Docker daemon is running and can be contacted (try `docker
ps`). Some of the Kubernetes components need to run as root, which normally
works fine with Docker.
#### etcd
You need [etcd](https://github.com/coreos/etcd/releases) installed and available in your `$PATH`.
#### go
You need [go](https://golang.org/doc/install) version 1.4 or later installed and available in your `$PATH`.
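A quick sanity check that both prerequisites are on your `$PATH`:
```shell
etcd --version
go version
```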
### Starting the cluster
First, you need to [download Kubernetes](/docs/getting-started-guides/binary_release/). Then open a separate tab of your terminal
and run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):
```shell
cd kubernetes
hack/local-up-cluster.sh
```
This will build and start a lightweight local cluster, consisting of a master
and a single node. Type Control-C to shut it down.
You can use the `cluster/kubectl.sh` script to interact with the local cluster. `hack/local-up-cluster.sh` will
print the commands to run to point kubectl at the local cluster.
### Running a container
Your cluster is running, and you want to start running containers!
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
```shell
export KUBERNETES_PROVIDER=local
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get deployments
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
## begin wait for provision to complete, you can monitor the docker pull by opening a new terminal
sudo docker images
## you should see it pulling the nginx image, once the above command returns it
sudo docker ps
## you should see your container running!
exit
## end wait
## create a service for nginx, which serves on port 80
cluster/kubectl.sh expose deployment my-nginx --port=80 --name=my-nginx
## introspect Kubernetes!
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get deployments
## Test the nginx service with the IP/port from "get services" command
curl http://10.X.X.X:80/
```
### Running a user defined pod
Note the difference between a [container](/docs/user-guide/containers)
and a [pod](/docs/user-guide/pods). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
However, you cannot view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` within the Docker container (try `docker exec`).
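A rough sketch of that check (assuming the nginx image ships `curl`; the container ID is looked up from `sudo docker ps`):
```shell
# Grab the first container whose name matches my-nginx
CID=$(sudo docker ps | grep my-nginx | awk '{print $1}' | head -1)
# Assumes curl is present inside the image
sudo docker exec "$CID" curl -s http://localhost:80
```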
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
```shell
cluster/kubectl.sh create -f docs/user-guide/pod.yaml
```
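A minimal sketch of such a manifest, created here inline (illustrative only; the `pod.yaml` shipped in the repo may differ):
```shell
# Name and ports below are illustrative; adjust as needed
cat <<EOF | cluster/kubectl.sh create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
EOF
```
With `hostPort` set as above, nginx should then be reachable in a browser at http://localhost:8080.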
Congratulations!
### FAQs
#### I cannot reach service IPs on the network.
Some firewall software that uses iptables may not interact well with
Kubernetes. If you have trouble around networking, try disabling any
firewall or other iptables-using systems first. Also, you can check
if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.
By default the IP range for service cluster IPs is 10.0.*.*; depending on your
Docker installation, this may conflict with IPs for containers. If you find
containers running with IPs in this range, edit `hack/local-up-cluster.sh` and
change the service-cluster-ip-range flag to something else.
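To locate the relevant setting in your checkout:
```shell
grep -n "service-cluster-ip-range" hack/local-up-cluster.sh
```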
#### I changed Kubernetes code, how do I run it?
```shell
cd kubernetes
hack/build-go.sh
hack/local-up-cluster.sh
```
#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
One or more of the Kubernetes daemons might've crashed. Tail the [logs](/docs/admin/cluster-troubleshooting/#looking-at-logs) of each in /tmp.
```shell
$ ls /tmp/kube*.log
$ tail -f /tmp/kube-apiserver.log
```
#### The pods fail to connect to the services by host names
The `local-up-cluster.sh` script doesn't start a DNS service. A similar situation can be found [here](http://issue.k8s.io/6667). You can start one manually. Related documents can be found [here](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns#how-do-i-configure-it)

View File

@@ -1,403 +0,0 @@
---
assignees:
- brendandburns
- derekwaynecarr
- jbeda
---
**Stop. This guide has been superseded by [Minikube](../minikube/) which is the recommended method of running Kubernetes on your local machine.**
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
* TOC
{:toc}
### Prerequisites
1. Install the latest version (>= 1.7.4) of [Vagrant](http://www.vagrantup.com/downloads.html)
2. Install one of:
1. The latest version of [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
5. libvirt with KVM and hardware virtualization support enabled, together with [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Fedora provides an official RPM, so you can install it with `yum install vagrant-libvirt`
### Setup
Setting up a cluster is as simple as running:
```sh
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
```sh
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space).
If you'd like more than one node, set the `NUM_NODES` environment variable to the number you want:
```sh
export NUM_NODES=3
```
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
```sh
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
By default, each VM in the cluster is running Fedora.
To access the master or any node:
```sh
vagrant ssh master
vagrant ssh node-1
```
If you are running more than one node, you can access the others by:
```sh
vagrant ssh node-2
vagrant ssh node-3
```
Each node in the cluster installs the docker daemon and the kubelet.
The master node instantiates the Kubernetes master components as pods on the machine.
To view the service status and/or logs on the kubernetes-master:
```console
$ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```
To view the services on any of the nodes:
```console
$ vagrant ssh node-1
[vagrant@kubernetes-node-1 ~] $ sudo su
[root@kubernetes-node-1 ~] $ systemctl status kubelet
[root@kubernetes-node-1 ~] $ journalctl -ru kubelet
[root@kubernetes-node-1 ~] $ systemctl status docker
[root@kubernetes-node-1 ~] $ journalctl -ru docker
```
### Interacting with your Kubernetes cluster with Vagrant
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
To push updates to new Kubernetes code after making source changes:
```sh
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```sh
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```sh
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`:
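From the repository root:
```console
$ cd kubernetes
$ make
```
Then verify that `kubectl.sh` works: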
```console
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```sh
cat ~/.kubernetes_vagrant_auth
```
```json
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the nodes that you have started:
```sh
./cluster/kubectl.sh get nodes
```
### Running containers
Your cluster is running; you can list the nodes in your cluster:
```sh
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs.
Before starting a container there will be no pods, services, or replication controllers.
```sh
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas:
```sh
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```
When listing the pods, you will see that three containers have been started and are in the `Pending` state:
```sh
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 0/1 Pending 0 10s
my-nginx-gr3hh 0/1 Pending 0 10s
my-nginx-xql4j 0/1 Pending 0 10s
```
You need to wait for the provisioning to complete; you can monitor the nodes by running:
```sh
$ vagrant ssh node-1 -c 'sudo docker images'
kubernetes-node-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the Docker image for nginx has been downloaded, the container will start and you can list it:
```sh
$ vagrant ssh node-1 -c 'sudo docker ps'
kubernetes-node-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
Going back to listing the pods, services, and replication controllers, you now have:
```sh
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 1m
my-nginx-gr3hh 1/1 Running 0 1m
my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
my-nginx my-nginx nginx run=my-nginx 3 1m
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
See [running your first containers](../user-guide/simple-nginx/) to learn how to create a service.
You can already play with scaling the replicas with:
```sh
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 2m
my-nginx-gr3hh 1/1 Running 0 2m
```
Congratulations!
## Troubleshooting
#### I keep downloading the same (large) box all the time!
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
```sh
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I am getting timeouts when trying to curl the master from my host!
During provisioning of the cluster, you may see the following message:
```sh
Validating node-1
.............
Waiting for each node to be registered with cloud provider
error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
```
Some users have reported that VPNs may prevent traffic from being routed from the host machine into the virtual machine network.
To debug, first verify that the master is binding to the proper IP address:
```sh
$ vagrant ssh master
$ ifconfig | grep eth1 -C 2
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.245.1.2  netmask 255.255.255.0  broadcast 10.245.1.255
```
Then verify that your host machine has a network connection to a bridge that can serve that address:
```sh
$ ifconfig | grep 10.245.1 -C 2
vboxnet5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255
inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20<link>
ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet)
```
If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider.
If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
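For example, a quick reachability test from the host:
```sh
ping -c 3 10.245.1.2
```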
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster you are attempting to contact. Remove it:
```sh
rm ~/.kubernetes_vagrant_auth
```
After using `kubectl.sh`, make sure that the correct credentials are set:
```sh
cat ~/.kubernetes_vagrant_auth
```
```json
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
#### I have brought Vagrant up but the nodes cannot validate!
Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
#### I want to change the number of nodes!
You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1, like so:
```sh
export NUM_NODES=1
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
```sh
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```sh
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_NODE_MEMORY=2048
```
#### I want to set proxy settings for my Kubernetes cluster boot strapping!
If you are behind a proxy, you need to install the Vagrant proxy plugin and set the proxy settings:
```sh
vagrant plugin install vagrant-proxyconf
export VAGRANT_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export VAGRANT_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses not to proxy, for example:
```sh
export VAGRANT_NO_PROXY=127.0.0.1
```
If you are using sudo to build Kubernetes (for example, `make quick-release`), you need to run `sudo -E make quick-release` to pass the environment variables through.
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. This is not supported at this time.
#### I want vagrant to sync folders via nfs!
You can ensure that Vagrant uses NFS to sync folders with virtual machines by setting the `KUBERNETES_VAGRANT_USE_NFS` environment variable to `true`. NFS is faster than VirtualBox or VMware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring NFS on the host. This setting has no effect on the libvirt provider, which uses NFS by default. For example:
```sh
export KUBERNETES_VAGRANT_USE_NFS=true
```