---
---
<style>
li>.highlighter-rouge {position:relative; top:3px;}
</style>

## Overview
This quickstart shows you how to install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7.
The installation uses a tool called `kubeadm`, which is part of Kubernetes 1.4.

This process works with local VMs, physical servers, and cloud servers.
It is simple enough that you can easily integrate it into your own automation (Terraform, Chef, Puppet, etc.).
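For a flavor of what that integration can look like, here is a minimal sketch that runs this guide's Ubuntu install steps (described below) over SSH; the `hosts.txt` inventory file is an assumption of this example:

```
# Run the Ubuntu 16.04 install steps from this guide on every host listed in hosts.txt.
for host in $(cat hosts.txt); do
  ssh "root@${host}" \
    'apt-get install -y docker.io socat apt-transport-https &&
     curl -s -L https://storage.googleapis.com/kubeadm/kubernetes-xenial-preview-bundle.txz | tar xJv &&
     dpkg -i kubernetes-xenial-preview-bundle/*.deb'
done
```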
**The `kubeadm` tool is currently in alpha, but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!**
## Prerequisites

1. One or more machines running Ubuntu 16.04 or CentOS 7
1. 1GB or more of RAM per machine (any less will leave little room for your apps)
1. Full network connectivity between all machines in the cluster (public or private network is fine)

## Objectives
* Install a secure Kubernetes cluster on your machines
* Install a pod network on the cluster so that application components (pods) can talk to each other
* Install a sample microservices application (a socks shop) on the cluster

## Instructions

### (1/4) Installing kubelet and kubeadm on your hosts
You will install the following packages on all the machines:

* `docker`: the container runtime, which Kubernetes depends on.
* `kubelet`: the core component of Kubernetes.
  It runs on all of the machines in your cluster and does things like starting pods and containers.
* `kubectl`: the command to control the cluster once it's running.
  You will only use this on the master.
* `kubeadm`: the command to bootstrap the cluster.

For each host in turn:
* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu 16.04, run:

  ```
  # apt-get install -y docker.io socat apt-transport-https
  # curl -s -L \
      https://storage.googleapis.com/kubeadm/kubernetes-xenial-preview-bundle.txz | tar xJv
  # dpkg -i kubernetes-xenial-preview-bundle/*.deb
  ```

  If the machine is running CentOS 7, run:
  ```
  # cat <<EOF > /etc/yum.repos.d/k8s.repo
  [kubelet]
  name=kubelet
  baseurl=http://files.rm-rf.ca/rpms/kubelet/
  enabled=1
  gpgcheck=0
  EOF
  # yum install docker kubelet kubeadm kubectl kubernetes-cni
  # systemctl enable docker && systemctl start docker
  # systemctl enable kubelet && systemctl start kubelet
  ```

The kubelet is now restarting every few seconds, as it waits in a crashloop for `kubeadm` to tell it what to do.
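You can watch this happening with the standard systemd tools (both Ubuntu 16.04 and CentOS 7 use systemd); for example:

```
# systemctl status kubelet
# journalctl -u kubelet -f
```

The restarts stop once `kubeadm init` (on the master) or `kubeadm join` (on a node) has given the kubelet its configuration.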
### (2/4) Initializing your master

The master is the machine where the "control plane" components run, including `etcd` (the cluster database) and the API server (which the `kubectl` CLI communicates with).
All of these components run in pods started by `kubelet`.

To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:
```
# kubeadm init --use-kubernetes-version v1.4.0-beta.11
```

This will download and install the cluster database and "control plane" components.
This may take several minutes.

The output should look like:
```
<master/tokens> generated token: "f0c861.753c505740ecde4c"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 61.346626 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 4.506807 seconds
<master/discovery> created essential addon: kube-discovery
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can connect any number of nodes by running:

kubeadm join --token <token> <master-ip>
```

Make a record of the `kubeadm join` command that `kubeadm init` outputs.
You will need this in a moment.
The key included here is secret; keep it safe, because anyone with this key can add authenticated nodes to your cluster.
The key is used for mutual authentication between the master and the joining nodes.
By default, your cluster will not schedule pods on the master for security reasons.
If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:

```
# kubectl taint nodes --all dedicated-
node "test-01" tainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
```

This will remove the "dedicated" taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
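One way to double-check is to dump the node objects and look for any remaining taint entries:

```
# kubectl get nodes -o yaml | grep -i taint
```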
### (3/4) Joining your nodes

The nodes are where your workloads (containers, pods, etc.) run.
To add new machines as nodes to your cluster: for each machine, SSH in, become root (for example with `sudo su -`), and run the command that was output by `kubeadm init`.
For example:
```
# kubeadm join --token <token> <master-ip>
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://138.68.156.129:9898/cluster-info/v1/?token-id=0f8588"
<node/discovery> cluster info object received, verifying signature using given token
<node/discovery> cluster info signature and contents are valid, will use API endpoints [https://138.68.156.129:443]
<node/csr> created API client to obtain unique certificate for this node, generating keys and certificate signing request
<node/csr> received signed certificate from the API server, generating kubelet configuration
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```
A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
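The output will look something like this (the node names below are hypothetical; yours will be your machines' hostnames):

```
# kubectl get nodes
NAME      STATUS    AGE
kube-01   Ready     1m
kube-02   Ready     20s
```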
**YOUR CLUSTER IS NOT READY YET!**

Before you can deploy applications to it, you need to install a pod network.

### (4/4) Installing a pod network
You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts.
**It is necessary to do this before you try to deploy any applications to your cluster.**

Several projects provide Kubernetes pod networks.
You can see a complete list of available network add-ons on the [add-ons page](/docs/admin/addons/).

By way of example, you can install [Weave Net](https://github.com/weaveworks/weave-kube) by logging in to the master and running:
```
# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
```

If you prefer [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) or [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm), please refer to their respective installation guides.
You should only install one pod network per cluster.

Once a pod network has been installed, you can confirm that it is working by checking that the `kube-dns` pod is `Running` in the output of `kubectl get pods --all-namespaces`.
**This signifies that your cluster is ready.**
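For example, on the master (the `grep` is just a convenience filter):

```
# kubectl get pods --all-namespaces | grep kube-dns
```

Until the pod network is installed, `kube-dns` will not reach `Running`, because it depends on that network to start.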
### (Optional) Installing a sample application

As an example, install a sample microservices application, a socks shop, to put your cluster through its paces.
To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo).
```
# git clone https://github.com/microservices-demo/microservices-demo
# kubectl apply -f microservices-demo/deploy/kubernetes/manifests
```

You can then find out the port that the [NodePort feature of services](/docs/user-guide/services/) allocated for the front-end service by running:
```
# kubectl describe svc front-end
Name:                   front-end
Namespace:              default
Labels:                 name=front-end
Selector:               name=front-end
Type:                   NodePort
IP:                     100.66.88.176
Port:                   <unset> 80/TCP
NodePort:               <unset> 31869/TCP
Endpoints:              <none>
Session Affinity:       None
```
It takes several minutes to download and start all the containers; watch the output of `kubectl get pods` to see when they're all up and running.
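For example, the `--watch` flag streams updates, so you can leave this running until every pod reports `Running`:

```
# kubectl get pods --watch
```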
Then go to the IP address of your cluster's master node in your browser, and specify the given port.
So for example, `http://<master_ip>:<port>`.
In the example above, this was `31869`, but it will be a different port for you.

If there is a firewall, make sure it exposes this port to the internet before you try to access it.
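How you do that depends on your environment; as an illustration, with the common firewall front-ends on each distribution you might run one of the following (replace `<port>` with your NodePort):

```
# ufw allow <port>/tcp                   # Ubuntu (ufw)
# firewall-cmd --add-port=<port>/tcp     # CentOS 7 (firewalld, runtime only)
```

On cloud servers you may also need to open the port in the provider's firewall or security group.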
### Explore other add-ons

See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, and visualization & control of your Kubernetes cluster.

## What's next

* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
* Install Kubernetes with [a cloud provider configuration](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support.
## Cleanup

* To uninstall the socks shop, run `kubectl delete -f microservices-demo/deploy/kubernetes/manifests` on the master.
* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then uninstall the packages.
<details>
<pre><code>systemctl stop kubelet;
docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null;
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni;
ip link set cbr0 down; ip link del cbr0;
ip link set cni0 down; ip link del cni0;
systemctl start kubelet</code></pre>
</details> <!-- *syntax-highlighting-hack -->
## Feedback

* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
* [GitHub Issues](https://github.com/kubernetes/kubernetes/issues): please tag `kubeadm` issues with `@kubernetes/sig-cluster-lifecycle`
## Limitations

Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.

1. The cluster created here doesn't have cloud-provider integrations, so it won't work with, for example, [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
   To easily obtain a cluster which works with LBs and PVs, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).

   Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) for exposing applications to the internet (see the first sketch after this list).
1. The cluster created here has a single master, with a single `etcd` database running on it.
   This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch.
   Adding HA support (multiple `etcd` servers, multiple API servers, etc.) to `kubeadm` is still a work in progress.

   Workaround: regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html) (see the second sketch after this list).
   The `etcd` data directory configured by `kubeadm` is at `/var/lib/etcd` on the master.
1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).

   Workaround: use `docker logs` on the nodes where the containers are running (see the third sketch after this list).
1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` from, for example, your workstation.

   Workaround: copy the admin `kubeconfig` from the master: use `scp root@<master>:/etc/kubernetes/admin.conf .` and then e.g. `kubectl --kubeconfig ./admin.conf get nodes` from your workstation.
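Sketch for limitation 1 (the NodePort workaround), assuming a hypothetical deployment named `my-app` serving on port 80; Kubernetes assigns the node port itself, by default from the 30000-32767 range:

```
# kubectl expose deployment my-app --port=80 --type=NodePort
# kubectl describe svc my-app | grep NodePort
```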
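Sketch for limitation 2 (backing up etcd), assuming an etcd v2 `etcdctl` binary is available on the master (`kubeadm` does not install one for you); see the linked etcd admin guide for the corresponding restore procedure:

```
# etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup
```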
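Sketch for limitation 3 (reading logs with `docker logs`), run on the node where the pod was scheduled; `<pod-name>` and `<container-id>` are placeholders, and the `grep` works because Kubernetes embeds the pod name in the container name:

```
# docker ps | grep <pod-name>
# docker logs <container-id>
```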