diff --git a/_data/guides.yml b/_data/guides.yml
index 785f58ae26..7c41285d0e 100644
--- a/_data/guides.yml
+++ b/_data/guides.yml
@@ -8,10 +8,12 @@ toc:
   section:
   - title: What is Kubernetes?
     path: /docs/whatisk8s/
+  - title: Installing Kubernetes on Linux with kubeadm
+    path: /docs/getting-started-guides/kubeadm/
+  - title: Hello World on Google Container Engine
+    path: /docs/hellonode/
   - title: Downloading or Building Kubernetes
     path: /docs/getting-started-guides/binary_release/
-  - title: Hello World Walkthrough
-    path: /docs/hellonode/
   - title: Online Training Course
     path: https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
@@ -250,6 +252,8 @@ toc:
     path: /docs/admin/
   - title: Cluster Management Guide
     path: /docs/admin/cluster-management/
+  - title: Installing Addons
+    path: /docs/admin/addons/
   - title: Sharing a Cluster with Namespaces
     path: /docs/admin/namespaces/
   - title: Namespaces Walkthrough
diff --git a/docs/admin/addons.md b/docs/admin/addons.md
new file mode 100644
index 0000000000..6e28edfaa2
--- /dev/null
+++ b/docs/admin/addons.md
@@ -0,0 +1,25 @@
+---
+---
+
+## Overview
+
+Add-ons extend the functionality of Kubernetes.
+
+This page lists some of the available add-ons and links to their respective installation instructions.
+
+## Networking and Network Policy
+
+* [Weave Net](https://github.com/weaveworks/weave-kube) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
+* [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) is a secure L3 networking and network policy provider.
+* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
+
+## Visualization & Control
+
+* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services, etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
+* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
+
+## Legacy Add-ons
+
+There are several other add-ons documented in the deprecated [cluster/addons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) directory.
+
+Well-maintained ones should be linked to here. PRs welcome!
diff --git a/docs/getting-started-guides/kubeadm.md b/docs/getting-started-guides/kubeadm.md
new file mode 100644
index 0000000000..22e64fa539
--- /dev/null
+++ b/docs/getting-started-guides/kubeadm.md
@@ -0,0 +1,254 @@
+---
+---
+
+## Overview
+
+This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7.
+The installation uses a tool called `kubeadm`, which is part of Kubernetes 1.4.
+
+This process works with local VMs, physical servers and/or cloud servers.
+It is simple enough that you can easily integrate it into your own automation (Terraform, Chef, Puppet, etc.).
+
+**The `kubeadm` tool is currently in alpha, but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!**
+
+## Prerequisites
+
+1. One or more machines running Ubuntu 16.04 or CentOS 7
+1. 1GB or more of RAM per machine (any less will leave little room for your apps)
+1. Full network connectivity between all machines in the cluster (public or private network is fine)
+
+## Objectives
+
+* Install a secure Kubernetes cluster on your machines
+* Install a pod network on the cluster so that application components (pods) can talk to each other
+* Install a sample microservices application (a socks shop) on the cluster
+
+## Instructions
+
+### (1/4) Installing kubelet and kubeadm on your hosts
+
+You will install the following packages on all the machines:
+
+* `docker`: the container runtime, which Kubernetes depends on.
+* `kubelet`: the core component of Kubernetes.
+  It runs on all of the machines in your cluster and does things like starting pods and containers.
+* `kubectl`: the command to control the cluster once it's running.
+  You will only use this on the master.
+* `kubeadm`: the command to bootstrap the cluster.
+
+For each host in turn:
+
+* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
+* If the machine is running Ubuntu 16.04, run:
+
+      # apt-get install -y docker.io socat apt-transport-https
+      # curl -s -L \
+        https://storage.googleapis.com/kubeadm/kubernetes-xenial-preview-bundle.txz | tar xJv
+      # dpkg -i kubernetes-xenial-preview-bundle/*.deb
+
+  If the machine is running CentOS 7, run:
+
+      # cat <<EOF > /etc/yum.repos.d/k8s.repo
+      [kubelet]
+      name=kubelet
+      baseurl=http://files.rm-rf.ca/rpms/kubelet/
+      enabled=1
+      gpgcheck=0
+      EOF
+      # yum install docker kubelet kubeadm kubectl kubernetes-cni
+      # systemctl enable docker && systemctl start docker
+      # systemctl enable kubelet && systemctl start kubelet
+
+The kubelet is now restarting every few seconds, as it waits in a crashloop for `kubeadm` to tell it what to do.
+
+### (2/4) Initializing your master
+
+The master is the machine where the "control plane" components run, including `etcd` (the cluster database) and the API server (which the `kubectl` CLI communicates with).
+All of these components run in pods started by `kubelet`.
+
+To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:
+
+    # kubeadm init --use-kubernetes-version v1.4.0-beta.11
+
+This will download and install the cluster database and "control plane" components.
+This may take several minutes.
+
+The output should look like:
+
+    generated token: "f0c861.753c505740ecde4c"
+    created keys and certificates in "/etc/kubernetes/pki"
+    created "/etc/kubernetes/kubelet.conf"
+    created "/etc/kubernetes/admin.conf"
+    created API client configuration
+    created API client, waiting for the control plane to become ready
+    all control plane components are healthy after 61.346626 seconds
+    waiting for at least one node to register and become ready
+    first node is ready after 4.506807 seconds
+    created essential addon: kube-discovery
+    created essential addon: kube-proxy
+    created essential addon: kube-dns
+
+    Kubernetes master initialised successfully!
+
+    You can connect any number of nodes by running:
+
+    kubeadm join --token <token> <master-ip>
+
+Make a record of the `kubeadm join` command that `kubeadm init` outputs.
+You will need this in a moment.
+The key included here is secret; keep it safe — anyone with this key can add authenticated nodes to your cluster.
+
+The key is used for mutual authentication between the master and the joining nodes.
+
+By default, your cluster will not schedule pods on the master for security reasons.
+If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:
+
+    # kubectl taint nodes --all dedicated-
+    node "test-01" tainted
+    taint key="dedicated" and effect="" not found.
+    taint key="dedicated" and effect="" not found.
+
+This will remove the "dedicated" taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
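Since the `kubeadm join` line is needed once per node, one convenient pattern (an illustration, not a kubeadm feature) is to capture it from saved `kubeadm init` output. Here the tail of the init output is simulated with a variable so the extraction step itself can be seen; the token and master address are the sample values from this guide. On a real master you would pipe `kubeadm init ... | tee /root/kubeadm-init.log` and grep that file instead.

```shell
# Simulated tail of the `kubeadm init` output shown above; the token and
# master IP are the sample values from this guide, not real credentials.
init_log='Kubernetes master initialised successfully!

You can connect any number of nodes by running:

kubeadm join --token f0c861.753c505740ecde4c 138.68.156.129'

# Pull out the join command so it can be replayed on each node later.
join_cmd=$(printf '%s\n' "$init_log" | grep '^kubeadm join --token')
echo "$join_cmd"
```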
+
+### (3/4) Joining your nodes
+
+The nodes are where your workloads (containers and pods, etc.) run.
+If you want to add any new machines as nodes to your cluster, for each machine: SSH to that machine, become root (e.g. `sudo su -`) and run the command that was output by `kubeadm init`.
+For example:
+
+    # kubeadm join --token <token> <master-ip>
+    validating provided token
+    created cluster info discovery client, requesting info from "http://138.68.156.129:9898/cluster-info/v1/?token-id=0f8588"
+    cluster info object received, verifying signature using given token
+    cluster info signature and contents are valid, will use API endpoints [https://138.68.156.129:443]
+    created API client to obtain unique certificate for this node, generating keys and certificate signing request
+    received signed certificate from the API server, generating kubelet configuration
+    created "/etc/kubernetes/kubelet.conf"
+
+    Node join complete:
+    * Certificate signing request sent to master and response
+      received.
+    * Kubelet informed of new secure connection details.
+
+    Run 'kubectl get nodes' on the master to see this machine join.
+
+A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
+
+**YOUR CLUSTER IS NOT READY YET!**
+
+Before you can deploy applications to it, you need to install a pod network.
+
+### (4/4) Installing a pod network
+
+You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts.
+**It is necessary to do this before you try to deploy any applications to your cluster.**
+
+Several projects provide Kubernetes pod networks.
+You can see a complete list of available network add-ons on the [add-ons page](/docs/admin/addons/).
+
+By way of example, you can install [Weave Net](https://github.com/weaveworks/weave-kube) by logging in to the master and running:
+
+    # kubectl apply -f https://git.io/weave-kube
+    daemonset "weave-net" created
+
+If you prefer [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) or [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm), please refer to their respective installation guides.
+You should only install one pod network per cluster.
+
+Once a pod network has been installed, you can confirm that it is working by checking that the `kube-dns` pod is `Running` in the output of `kubectl get pods --all-namespaces`.
+**This signifies that your cluster is ready.**
+
+### (Optional) Installing a sample application
+
+As an example, install a sample microservices application, a socks shop, to put your cluster through its paces.
+To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo).
+
+    # git clone https://github.com/microservices-demo/microservices-demo
+    # kubectl apply -f microservices-demo/deploy/kubernetes/manifests
+
+You can then find out the port that the [NodePort feature of services](/docs/user-guide/services/) allocated for the front-end service by running:
+
+    # kubectl describe svc front-end
+    Name:                   front-end
+    Namespace:              default
+    Labels:                 name=front-end
+    Selector:               name=front-end
+    Type:                   NodePort
+    IP:                     100.66.88.176
+    Port:                   80/TCP
+    NodePort:               31869/TCP
+    Endpoints:
+    Session Affinity:       None
+
+It takes several minutes to download and start all the containers; watch the output of `kubectl get pods` to see when they're all up and running.
+
+Then go to the IP address of your cluster's master node in your browser, and specify the given port.
+For example, `http://<master_ip>:<port>`.
+In the example above, this was `31869`, but it is a different port for you.
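If you want the port number on its own (for scripting), the `NodePort` line can be parsed with standard text tools. This is a sketch: the describe output is simulated with the sample values from this guide, and on a real master you would pipe `kubectl describe svc front-end` into the same awk stage instead.

```shell
# Simulated `kubectl describe svc front-end` output (sample values from
# this guide); on a real cluster, replace the printf with the kubectl call.
describe_output='Name:                   front-end
Type:                   NodePort
IP:                     100.66.88.176
Port:                   80/TCP
NodePort:               31869/TCP'

# On the NodePort line, the second whitespace-separated field is
# "31869/TCP"; splitting it on "/" yields the bare port number.
node_port=$(printf '%s\n' "$describe_output" | awk '/^NodePort:/ {split($2, a, "/"); print a[1]}')
echo "$node_port"   # 31869 in this simulation; yours will differ
```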
+
+If there is a firewall, make sure it exposes this port to the internet before you try to access it.
+
+### Explore other add-ons
+
+See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, and visualization & control of your Kubernetes cluster.
+
+## What's next
+
+* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
+* Install Kubernetes with [a cloud provider configuration](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support.
+
+## Cleanup
+
+* To uninstall the socks shop, run `kubectl delete -f microservices-demo/deploy/kubernetes/manifests` on the master.
+
+* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then uninstall the packages.
+
+    systemctl stop kubelet;
+    docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null;
+    rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni;
+    ip link set cbr0 down; ip link del cbr0;
+    ip link set cni0 down; ip link del cni0;
+    systemctl start kubelet
+
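The `mount | grep | awk | xargs umount` stage of the script above deserves a note: it scans the mount table for anything kubelet left mounted and unmounts it. Its field handling can be seen on a simulated mount table (the sample mount line is illustrative, not from a real cluster):

```shell
# Simulated `mount` output: lines have the form "<dev> on <dir> type ...",
# so the third whitespace-separated field is the mount point.
mount_output='tmpfs on /var/lib/kubelet/pods/abc123/volumes/secret type tmpfs (rw)
proc on /proc type proc (rw)'

# Same grep/awk stage as the cleanup script, minus the actual umount.
targets=$(printf '%s\n' "$mount_output" | grep "/var/lib/kubelet/*" | awk '{print $3}')
echo "$targets"
```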
+
+## Feedback
+
+* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
+* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
+* [GitHub Issues](https://github.com/kubernetes/kubernetes/issues): please tag `kubeadm` issues with `@kubernetes/sig-cluster-lifecycle`
+
+## Limitations
+
+Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.
+
+1. The cluster created here doesn't have cloud-provider integrations, so it won't work with, for example, [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
+   To easily obtain a cluster which works with LBs and PVs, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).
+
+   Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) for exposing applications to the internet.
+1. The cluster created here has a single master, with a single `etcd` database running on it.
+   This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch.
+   Adding HA support (multiple `etcd` servers, multiple API servers, etc.) to `kubeadm` is still a work in progress.
+
+   Workaround: regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html).
+   The `etcd` data directory configured by `kubeadm` is at `/var/lib/etcd` on the master.
+1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).
+
+   Workaround: use `docker logs` on the nodes where the containers are running.
+1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` from, for example, your workstation.
+
+   Workaround: copy the admin `kubeconfig` from the master: use `scp root@<master-ip>:/etc/kubernetes/admin.conf .` and then e.g. `kubectl --kubeconfig ./admin.conf get nodes` from your workstation.
diff --git a/docs/index.md b/docs/index.md
index fb40b3fa5f..5e29c42dcb 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -77,13 +77,18 @@ h2, h3, h4 {
         Read the Overview
-

Hello Node!

-

In this quickstart, we’ll be creating a Kubernetes instance that stands up a simple “Hello World” app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine.

- Get Started +

Hello World on Google Container Engine

+

In this quickstart, we’ll be creating a Kubernetes instance that stands up a simple “Hello World” app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine (GKE), a hosted service from Google.

+ Get Started on GKE +
+
+

Installing Kubernetes on Linux with kubeadm

+

This quickstart will show you how to install a secure Kubernetes cluster on any computer running Linux, using a tool called kubeadm, which is part of Kubernetes. It'll work with local VMs, physical servers and/or cloud servers, either manually or as part of your own automation. kubeadm is currently in alpha, but please try it out and give us feedback!

+ Install Kubernetes with kubeadm

Guided Tutorial

-

If you’ve completed the quickstart, a great next step is Kubernetes 101. You will follow a path through the various features of Kubernetes, with code examples along the way, learning all of the core concepts. There's also a Kubernetes 201!

+

If you’ve completed one of the quickstarts, a great next step is Kubernetes 101. You will follow a path through the various features of Kubernetes, with code examples along the way, learning all of the core concepts. There's also a Kubernetes 201!

Kubernetes 101