From 22a8dbb58e2d35a9e8ff5c7d3406c432945c96b5 Mon Sep 17 00:00:00 2001
From: Luke Marsden
Date: Thu, 15 Sep 2016 13:47:42 +0100
Subject: [PATCH] Squashed commits for kubeadm and addons docs.

---
 _data/guides.yml     |   8 +-
 docs/admin/addons.md |  24 +++++
 docs/index.md        |  13 ++-
 docs/kubeadm.md      | 247 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 286 insertions(+), 6 deletions(-)
 create mode 100644 docs/admin/addons.md
 create mode 100644 docs/kubeadm.md

diff --git a/_data/guides.yml b/_data/guides.yml
index 785f58ae26..b956870e58 100644
--- a/_data/guides.yml
+++ b/_data/guides.yml
@@ -8,10 +8,12 @@ toc:
   section:
   - title: What is Kubernetes?
     path: /docs/whatisk8s/
+  - title: Installing Kubernetes Easily on Linux
+    path: /docs/kubeadm/
+  - title: Hello World on Google Container Engine
+    path: /docs/hellonode/
   - title: Downloading or Building Kubernetes
     path: /docs/getting-started-guides/binary_release/
-  - title: Hello World Walkthrough
-    path: /docs/hellonode/
   - title: Online Training Course
     path: https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
@@ -250,6 +252,8 @@ toc:
     path: /docs/admin/
   - title: Cluster Management Guide
     path: /docs/admin/cluster-management/
+  - title: Installing Addons
+    path: /docs/admin/addons/
   - title: Sharing a Cluster with Namespaces
     path: /docs/admin/namespaces/
   - title: Namespaces Walkthrough
diff --git a/docs/admin/addons.md b/docs/admin/addons.md
new file mode 100644
index 0000000000..4a3d1710f6
--- /dev/null
+++ b/docs/admin/addons.md
@@ -0,0 +1,24 @@
+---
+---
+
+## Overview
+
+Add-ons extend the functionality of Kubernetes in a pluggable way.
+
+This page lists some of the available add-ons and links to their respective installation instructions.
+
+## Networking and Network Policy
+
+* [Weave Net](https://github.com/weaveworks/weave-kube) is an easy, fast and reliable pod network that keeps working in the face of network partitions, does not depend on a database, and supports Kubernetes network policy.
+* [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) is a simple, scalable, secure L3 networking and network policy provider.
+* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing cloud-native networking and network policy.
+
+## Visualization & Control
+
+* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services, etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
+
+## Legacy Add-ons
+
+There are several other add-ons documented in the deprecated [cluster/addons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) directory.
+
+Well-maintained ones should be linked to here. PRs welcome!
diff --git a/docs/index.md b/docs/index.md
index fb40b3fa5f..6c83b696a0 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -77,13 +77,18 @@ h2, h3, h4 {
           Read the Overview
-          Hello Node!
-
-          In this quickstart, we'll be creating a Kubernetes instance that stands up a simple "Hello World" app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine.
-
-          Get Started
+          Hello World on Google Container Engine
+
+          In this quickstart, we'll be creating a Kubernetes instance that stands up a simple "Hello World" app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine (GKE), a hosted service from Google.
+
+          Get Started on GKE
+
+          Installing Kubernetes Easily on Linux
+
+          This quickstart will show you how to install a secure Kubernetes cluster on any computers running Linux, using a tool called kubeadm which is part of Kubernetes. It'll work with local VMs, physical servers and/or cloud servers, either manually or as part of your own automation. It is currently in alpha but please try it out and give us feedback!
+
+          Install Kubernetes Easily
 
           Guided Tutorial
 
-          If you've completed the quickstart, a great next step is Kubernetes 101. You will follow a path through the various features of Kubernetes, with code examples along the way, learning all of the core concepts. There's also a Kubernetes 201!
+          If you've completed one of the quickstarts, a great next step is Kubernetes 101. You will follow a path through the various features of Kubernetes, with code examples along the way, learning all of the core concepts. There's also a Kubernetes 201!
 
           Kubernetes 101
diff --git a/docs/kubeadm.md b/docs/kubeadm.md
new file mode 100644
index 0000000000..c3f3836d8e
--- /dev/null
+++ b/docs/kubeadm.md
@@ -0,0 +1,247 @@
+---
+---
+
+## Overview
+
+This quickstart will show you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7, using a tool called `kubeadm` which is part of Kubernetes.
+
+This process should work with local VMs, physical servers and/or cloud servers.
+It is intended to be simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).
+
+**The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/kubeadm/#feedback)!**
+
+## Prerequisites
+
+1. One or more machines running Ubuntu 16.04 or CentOS 7
+1. 2GB or more of RAM per machine
+1. A network connection with open ports between the machines (a public or private network is fine)
+
+## Objectives
+
+* Install a secure Kubernetes cluster on your machines
+* Install a pod network on the cluster so that application components (pods) can talk to each other
+* Install a sample microservices application (a socks shop) on the cluster
+
+## Instructions
+
+### (1/4) Installing kubelet and kubeadm on your hosts
+
+You will now install the following packages on all the machines:
+
+* `docker`: the container runtime, which Kubernetes depends on.
+* `kubelet`: the core component of Kubernetes.
+  It runs on all of the machines in your cluster and does things like starting pods and containers.
+* `kubectl`: the command to control the cluster once it's running.
+  You will only use this on the master.
+* `kubeadm`: the command to bootstrap the cluster.
+
+For each host in turn:
+
+* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
+* If the machine is running Ubuntu 16.04, run:
+
+        # apt-get install -y docker.io socat
+        # curl -s -L \
+            "https://www.dropbox.com/s/tso6dc7b94ch2sk/debs-5ab576.txz?dl=1" | tar xJv
+        # dpkg -i debian/bin/unstable/xenial/*.deb
+
+  If the machine is running CentOS 7, run:
+
+        # cat <<EOF > /etc/yum.repos.d/k8s.repo
+        [kubelet]
+        name=kubelet
+        baseurl=http://files.rm-rf.ca/rpms/kubelet/
+        enabled=1
+        gpgcheck=0
+        EOF
+        # yum install docker kubelet kubeadm kubectl kubernetes-cni
+        # systemctl enable docker && systemctl start docker
+        # systemctl enable kubelet && systemctl start kubelet
+
+The kubelet will now restart every few seconds, as it waits in a crashloop for `kubeadm` to tell it what to do.
+
+Optionally, see also [more details on installing Docker](https://docs.docker.com/engine/installation/#/on-linux).
+
+### (2/4) Initializing your master
+
+The master is the machine where the "control plane" components run, including `etcd` (the cluster database) and the API server (which the `kubectl` CLI communicates with).
+All of these components will run in pods started by the `kubelet`.
+
+To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:
+
+* If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:
+
+        # kubeadm init --schedule-pods-here
+
+* If you do not want to be able to schedule pods on the master (perhaps for security reasons), run:
+
+        # kubeadm init
+
+This will download and install the cluster database and the "control plane" components.
+This may take several minutes.
+
+The output should look like:
+
+    generated token: "f0c861.753c505740ecde4c"
+    created keys and certificates in "/etc/kubernetes/pki"
+    created "/etc/kubernetes/kubelet.conf"
+    created "/etc/kubernetes/admin.conf"
+    created API client configuration
+    created API client, waiting for the control plane to become ready
+    all control plane components are healthy after 61.346626 seconds
+    waiting for at least one node to register and become ready
+    first node is ready after 4.506807 seconds
+    created essential addon: kube-discovery
+    created essential addon: kube-proxy
+    created essential addon: kube-dns
+
+    Kubernetes master initialised successfully!
+
+    You can connect any number of nodes by running:
+
+        kubeadm join --token <token> <master-ip>
+
+Make a record of the `kubeadm join` command that `kubeadm init` outputs.
+You will need this in a moment.
+The key included here is secret; keep it safe, because anyone with this key will be able to add authenticated nodes to your cluster.
+
+The key is used for mutual authentication between the master and the joining nodes.
+
+### (3/4) Joining your nodes
+
+The nodes are where your workloads (containers, pods, etc.) will run.
+If you want to add any new machines as nodes to your cluster, then for each machine: SSH to that machine, become root (e.g. `sudo su -`) and run the command that was output by `kubeadm init`.
+For example:
+
+    # kubeadm join --token <token> <master-ip>
+    validating provided token
+    created cluster info discovery client, requesting info from "http://138.68.156.129:9898/cluster-info/v1/?token-id=0f8588"
+    cluster info object received, verifying signature using given token
+    cluster info signature and contents are valid, will use API endpoints [https://138.68.156.129:443]
+    created API client to obtain unique certificate for this node, generating keys and certificate signing request
+    received signed certificate from the API server, generating kubelet configuration
+    created "/etc/kubernetes/kubelet.conf"
+
+    Node join complete:
+    * Certificate signing request sent to master and response
+      received.
+    * Kubelet informed of new secure connection details.
+
+    Run 'kubectl get nodes' on the master to see this machine join.
+
+A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
+
+**YOUR CLUSTER IS NOT READY YET!**
+
+Before you can deploy applications to it, you need to install a pod network.
+
+### (4/4) Installing a pod network
+
+You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts.
+
+Several projects provide Kubernetes pod networks.
+You can see a complete list of available network add-ons on the [add-ons page](/docs/admin/addons/).
+
+By way of example, you can install Weave Net by running the following on the master:
+
+    # kubectl apply -f https://git.io/weave-kube
+    daemonset "weave-net" created
+
+**You should install a pod network on the master before you try to deploy any applications to your cluster.**
+
+Once a pod network has been installed, you should see the `kube-dns` pod go into the `Running` state a few seconds later in the output of `kubectl get pods --all-namespaces`.
+
+**This signifies that your cluster is ready.**
+
+You can learn more about the other available pod networks on the [add-ons page](/docs/admin/addons/).
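If you are scripting the steps above, the "wait for kube-dns" readiness check can be expressed as a small helper. This is a sketch, not part of the official `kubeadm` flow: the pod name `kube-dns` and the `Running` state come from the text above, while the helper name and polling loop are assumptions.

```shell
# check_ready: succeeds once a kube-dns pod reports Running.
# Reads "kubectl get pods --all-namespaces" output on stdin.
check_ready() {
  grep 'kube-dns' | grep -q 'Running'
}

# Example usage on the master, polling until the pod network is up:
#   until kubectl get pods --all-namespaces | check_ready; do sleep 2; done
#   echo "cluster is ready"
```

Keeping the check as a stdin filter means you can test it against captured output before wiring it into automation.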
+
+### (Optional) Installing a sample application
+
+As an example, you will now install a sample microservices application, a socks shop, to put your cluster through its paces.
+To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo).
+
+Here you will install the NodePort version of the socks shop, which doesn't depend on Load Balancer integration, since our cluster doesn't have that:
+
+    # kubectl apply -f https://raw.githubusercontent.com/lukemarsden/microservices-demo/master/deploy/kubernetes/definitions/wholeWeaveDemo-NodePort.yaml
+
+You can then find out the port that the [NodePort feature of services](/docs/user-guide/services/) allocated for the front-end service by running:
+
+    # kubectl describe svc front-end
+    Name:                   front-end
+    Namespace:              default
+    Labels:                 name=front-end
+    Selector:               name=front-end
+    Type:                   NodePort
+    IP:                     100.66.88.176
+    Port:                   80/TCP
+    NodePort:               31869/TCP
+    Endpoints:
+    Session Affinity:       None
+
+It will take several minutes to download and start all the containers; watch the output of `kubectl get pods` to see when they're all up and running.
+
+Then go to the IP address of your cluster's master node in your browser, and specify the given port.
+So for example, `http://<master_ip>:<port>`.
+In the example above, this was `31869`, but it will be a different port for you.
+
+If there is a firewall, make sure it exposes this port to the internet before you try to access it.
+
+### Explore other add-ons
+
+See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, and visualization & control of your Kubernetes cluster.
+
+## What's next
+
+* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
+* Install Kubernetes with [a cloud provider configuration](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support.
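The NodePort lookup in the optional sample-app section above can also be scripted rather than read off by eye. A minimal sketch: the helper name `node_port` and the parsing approach are assumptions, while the `NodePort: 31869/TCP` line format is taken from the `kubectl describe svc` output shown earlier.

```shell
# node_port: pull the allocated port out of "kubectl describe svc" output.
# Matches only the line starting with "NodePort:" (not "Type: NodePort")
# and strips the trailing /TCP protocol suffix.
node_port() {
  awk '/^NodePort:/ { sub("/TCP", "", $NF); print $NF }'
}

# Example usage on the master, with the front-end service from this walkthrough:
#   kubectl describe svc front-end | node_port
# The printed port is what you open as http://<master_ip>:<port>.
```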
+ + +## Cleanup + +* To uninstall the socks shop, run `kubectl delete -f https://raw.githubusercontent.com/lukemarsden/microservices-demo/master/deploy/kubernetes/definitions/wholeWeaveDemo-NodePort.yaml`. +* To uninstall Kubernetes, simply delete the machines you created for this tutorial. + Or alternatively, uninstall the `kubelet`, `kubeadm` and `kubectl` packages and then manually delete all the Docker container that were created by this process. + +## Feedback + +* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/) +* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle) +* [GitHub Issues](https://github.com/kubernetes/kubernetes/issues): please tag `kubeadm` issues with `@kubernetes/sig-cluster-lifecycle` + +## Limitations + +Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course. + +1. The cluster created here doesn't have cloud-provider integrations, so for example won't work with (for example) [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs). + To easily obtain a cluster which works with LBs and PVs Kubernetes, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/). + + Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) to demonstrate exposing the sample application on the internet. +1. The cluster created here will have a single master, with a single `etcd` database running on it. + This means that if the master fails, your cluster will lose its configuration data and will need to be recreated from scratch. + Adding HA support (multiple `etcd` servers, multiple API servers, etc) to `kubeadm` is still a work-in-progress. 
+
+   Workaround: regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html).
+   The `etcd` data directory configured by `kubeadm` is at `/var/lib/etcd` on the master.
+1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).
+
+   Workaround: use `docker logs` on the nodes where the containers are running.
+1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` from, for example, your workstation.
+
+   Workaround: copy the kubelet's `kubeconfig` from the master: use `scp root@<master>:/etc/kubernetes/kubelet.conf .` and then e.g. `kubectl --kubeconfig ./kubelet.conf get nodes` from your workstation.
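The etcd backup workaround in the limitations above can be sketched as a small script to run on the master. This is a hypothetical helper, not part of `kubeadm`: the data directory `/var/lib/etcd` is the one named in the limitations list, while the function name and arguments are assumptions.

```shell
# backup_etcd DATA_DIR DEST_DIR
# Archives the etcd data directory into a dated tarball in DEST_DIR.
# The -C flag makes the archive paths relative to the data dir's parent.
backup_etcd() {
  tar czf "$2/etcd-backup-$(date +%F).tgz" -C "$(dirname "$1")" "$(basename "$1")"
}

# Example usage on the master (e.g. from a daily cron job):
#   backup_etcd /var/lib/etcd /root
```

Copy the resulting tarball off the master so a master failure doesn't take the backup with it.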