Merge pull request #1420 from luxas/update_kubeadm_arm
Update and refactor the kubeadm documentation + add HypriotOS instructions as well
commit 50f4c84265
---
assignees:
- mikedanese
- luxas
- errordeveloper
- jbeda
---
<style>
</style>
## Prerequisites
1. One or more machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1
1. 1GB or more of RAM per machine (any less will leave little room for your apps)
1. Full network connectivity between all machines in the cluster (public or private network is fine)
You will install the following packages on all the machines:
* `docker`: the container runtime, which Kubernetes depends on. v1.11.2 is recommended, but v1.10.3 and v1.12.1 are known to work as well.
* `kubelet`: the most fundamental component of Kubernetes.
It runs on all of the machines in your cluster and does things like starting pods and containers.
* `kubectl`: the command to control the cluster once it's running.
You will only need this on the master, but it can be useful to have on the other nodes as well.
* `kubeadm`: the command to bootstrap the cluster.
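As a quick sanity check of the Docker requirement above, `docker --version` prints the installed version; here is a sketch of extracting just the version number, shown against a sample of that output (the build hash is illustrative):

```shell
# Sample `docker --version` output (illustrative build hash).
sample='Docker version 1.11.2, build b9f10c9'

# Pull out just the version number, e.g. to compare against the recommended 1.11.2.
echo "$sample" | sed -n 's/Docker version \([0-9.]*\),.*/\1/p'
```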
For each host in turn:
* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu 16.04 or HypriotOS v1.0.1, run:

      # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
      # cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
      deb http://apt.kubernetes.io/ kubernetes-xenial main
      EOF
      # apt-get update
      # # Install docker if you don't have it already.
      # apt-get install -y docker.io
      # apt-get install -y kubelet kubeadm kubectl kubernetes-cni
If the machine is running CentOS 7, run:
The master is the machine where the "control plane" components run, including `etcd` (the cluster database) and the API server (which the `kubectl` CLI communicates with).
All of these components run in pods started by `kubelet`.
Right now you can't run `kubeadm init` twice without tearing down the cluster in between; see [Tear down](#tear-down).

    # kubeadm init

**Note:** this will autodetect the network interface to advertise the master on as the interface with the default gateway.
If you want to use a different interface, pass the `--api-advertise-addresses=<ip-address>` argument to `kubeadm init`.
If you want to use [flannel](https://github.com/coreos/flannel) as the pod network with the daemonset manifest below, specify `--pod-network-cidr=10.244.0.0/16`. _However, please note that this is not required for any other networks, including Weave, which is the recommended pod network._
Please refer to the [kubeadm reference doc](/docs/admin/kubeadm/) if you want to read more about the flags `kubeadm init` provides.
This will download and install the cluster database and "control plane" components.
This may take several minutes.
This will remove the "dedicated" taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
### (3/4) Installing a pod network
You must install a pod network add-on so that your pods can communicate with each other.
The kubenet network plugin doesn't work with kubeadm yet; instead, use one of the CNI-based pod networks shown below.
**It is necessary to do this before you try to deploy any applications to your cluster, and before `kube-dns` will start up.**
Several projects provide Kubernetes pod networks.
You can see a complete list of available network add-ons on the [add-ons page](/docs/admin/addons/).
By way of example, you can install [Weave Net](https://github.com/weaveworks/weave-kube) by logging in to the master and running:

    # kubectl apply -f https://git.io/weave-kube
    daemonset "weave-net" created

If you prefer [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) or [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm), please refer to their respective installation guides.
If you are on an architecture other than amd64, you should use the flannel overlay network as described in [the multi-platform section](#kubeadm-is-multi-platform).
NOTE: You can install **only one** pod network per cluster.
Once a pod network has been installed, you can confirm that it is working by checking that the `kube-dns` pod is `Running` in the output of `kubectl get pods --all-namespaces`.
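This check can also be scripted; a minimal sketch, shown here against a single sample line of `kubectl get pods --all-namespaces` output (the pod name, ready counts, and age are illustrative, not from a real cluster):

```shell
# Sample line as printed by `kubectl get pods --all-namespaces` (illustrative values).
sample='kube-system   kube-dns-654381707-w4mpg   3/3   Running   0   2m'

# Columns are NAMESPACE NAME READY STATUS RESTARTS AGE; match a Running kube-dns pod.
if printf '%s\n' "$sample" | awk '$2 ~ /^kube-dns/ && $4 == "Running"' | grep -q .; then
  echo "kube-dns is up"
fi
```

In a live cluster you would pipe the real `kubectl` output through the same filter instead of the sample line.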
And once the `kube-dns` pod is up and running, you can continue by joining your nodes.
### (4/4) Joining your nodes
The nodes are where your workloads (containers and pods, etc) run.
If you want to add any new machines as nodes to your cluster, for each machine: SSH to that machine, become root (e.g. `sudo su -`) and run the command that was output by `kubeadm init`.
A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
### (Optional) Control your cluster from machines other than the master
In order to get a kubectl on your laptop for example to talk to your cluster, you need to copy the `KubeConfig` file from your master to your laptop like this:
    # scp root@<master ip>:/etc/kubernetes/admin.conf .
    # kubectl --kubeconfig ./admin.conf get nodes
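Instead of passing `--kubeconfig` on every invocation, you can point the `KUBECONFIG` environment variable at the copied file for the whole shell session (a sketch; the path assumes you copied `admin.conf` into the current directory as shown above):

```shell
# Point kubectl at the copied credentials for this shell session.
export KUBECONFIG="$PWD/admin.conf"

# Every kubectl call in this session now uses admin.conf, e.g.:
# kubectl get nodes
echo "$KUBECONFIG"
```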
### (Optional) Installing a sample application
If there is a firewall, make sure it exposes this port to the internet before you try to access it.
## Tear down
* To uninstall the socks shop, run `kubectl delete namespace sock-shop` on the master.
If you wish to start over, run `systemctl start kubelet` followed by `kubeadm init` or `kubeadm join`.
<!-- *syntax-highlighting-hack -->
## Explore other add-ons
See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.
## What's next
* Learn about `kubeadm`'s advanced usage on the [advanced reference doc](/docs/admin/kubeadm/)
* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
## Feedback
* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
* [GitHub Issues](https://github.com/kubernetes/kubernetes/issues): please tag `kubeadm` issues with `@kubernetes/sig-cluster-lifecycle`
## kubeadm is multi-platform
kubeadm deb packages and binaries are built for amd64, arm and arm64, following the [multi-platform proposal](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/multi-platform.md).
deb packages are released for ARM and ARM 64-bit, but RPMs are not (yet; reach out if there's interest).
ARM had some issues during the v1.4 release; see [#32517](https://github.com/kubernetes/kubernetes/pull/32517), [#33485](https://github.com/kubernetes/kubernetes/pull/33485), [#33117](https://github.com/kubernetes/kubernetes/pull/33117) and [#33376](https://github.com/kubernetes/kubernetes/pull/33376).
However, thanks to the PRs above, `kube-apiserver` works on ARM as of the `v1.4.1` release, so make sure you're running at least `v1.4.1` on 32-bit ARM.
The multiarch flannel daemonset can be installed this way. Make sure you replace `ARCH=amd64` with `ARCH=arm` or `ARCH=arm64` if necessary:

    # export ARCH=amd64
    # curl -sSL https://raw.githubusercontent.com/luxas/flannel/update-daemonset/Documentation/kube-flannel.yml | sed "s/amd64/${ARCH}/g" | kubectl create -f -

Note that `ARCH` must be exported (or set in the current shell) rather than prefixed to `curl`, because `${ARCH}` in the `sed` expression is expanded by the shell before the pipeline runs.
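To see what the `sed` step of that pipeline does, here is the substitution applied to a single sample manifest line (the image tag is illustrative, not necessarily the one in the real manifest):

```shell
ARCH=arm
# Rewrites every occurrence of amd64 in the manifest, e.g. in image references:
echo "image: quay.io/coreos/flannel:v0.6.1-amd64" | sed "s/amd64/${ARCH}/g"
```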
## Limitations
Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.
1. The cluster created here doesn't have cloud-provider integrations by default, so it doesn't automatically work with, for example, [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
To set up kubeadm with cloud-provider integrations (experimental, but worth trying), refer to the [kubeadm reference](/docs/admin/kubeadm/) document.
Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) for exposing applications to the internet.
1. The cluster created here has a single master, with a single `etcd` database running on it.
1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).
Workaround: use `docker logs` on the nodes where the containers are running.
1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).
By default it doesn't do this, and the kubelet ends up using the first non-loopback network interface, which is usually NATed.
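One way to see which address the machine would advertise is to read the `src` field of `ip route get`; a sketch of that parsing, shown against a sample output line (the addresses are the usual VirtualBox NAT defaults, used here as an illustration):

```shell
# Sample output of `ip route get 8.8.8.8` on a NATed VirtualBox guest (illustrative).
sample='8.8.8.8 via 10.0.2.2 dev eth0 src 10.0.2.15'

# Extract the source address the kernel would use for outbound traffic.
echo "$sample" | sed -n 's/.*src \([0-9.]*\).*/\1/p'
```

On a real guest, run `ip route get 8.8.8.8` directly and check that the printed `src` address is the routable one (the second interface), not the NATed default.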