Update the kubeadm documentation to reflect the new release

reviewable/pr1608/r4
Lucas Käldström 2016-11-06 15:29:11 -08:00
parent 4e9f5c486d
commit 05f67bad8f
3 changed files with 74 additions and 30 deletions


@@ -11,6 +11,7 @@ This page lists some of the available add-ons and links to their respective inst
* [Weave Net](https://github.com/weaveworks/weave-kube) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
* [Calico](http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details are available [here](https://github.com/romana/romana/tree/master/containerize).


@@ -3,6 +3,7 @@ assignees:
- mikedanese
- luxas
- errordeveloper
- jbeda
---
@@ -104,17 +105,16 @@ and `--external-etcd-keyfile` flags.
- `--pod-network-cidr`
For certain networking solutions the Kubernetes master can also play a role in
allocating network ranges (CIDRs) to each node. This includes many cloud providers
and flannel. You can specify a subnet range that will be broken down and handed out
to each node with the `--pod-network-cidr` flag. This should be a minimum of a /16 so
that the controller-manager is able to assign /24 subnets to each node in the cluster.
If you are using flannel with [this manifest](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml)
you should use `--pod-network-cidr=10.244.0.0/16`. Most CNI-based networking solutions
do not require this flag.
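For example, to initialise a master that hands out flannel-compatible /24 ranges from that subnet:
# kubeadm init --pod-network-cidr=10.244.0.0/16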
- `--service-cidr` (default '10.96.0.0/12')
You can use the `--service-cidr` flag to override the subnet Kubernetes uses to
assign services their IP addresses. If you do, you will also need to update the
@@ -141,7 +141,7 @@ By default, `kubeadm init` automatically generates the token used to initialise
each new node. If you would like to manually specify this token, you can use the
`--token` flag. The token must be of the format `<6 character string>.<16 character string>`.
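For example (the token value below is only an illustration; generate your own random value in this format):
# kubeadm init --token=abcdef.0123456789abcdef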
- `--use-kubernetes-version` (default 'v1.4.4') the Kubernetes version to initialise
`kubeadm` was originally built for Kubernetes version **v1.4.0**; older versions are not
supported. With this flag you can try any future version, e.g. **v1.5.0-beta.1**
@@ -203,6 +203,27 @@ There are some environment variables that modify the way that `kubeadm` works.
| `KUBE_COMPONENT_LOGLEVEL` | `--v=4` | Logging configuration for all Kubernetes components |
## Releases and release notes
If you already have kubeadm installed and want to upgrade, run `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.
- Second release between v1.4 and v1.5: `v1.5.0-alpha.2.421+a6bea3d79b8bba`
- Switch to the 10.96.0.0/12 subnet: [#35290](https://github.com/kubernetes/kubernetes/pull/35290)
- Fix kubeadm on AWS by including /etc/ssl/certs in the controller-manager [#33681](https://github.com/kubernetes/kubernetes/pull/33681)
- The API was refactored and is now componentconfig: [#33728](https://github.com/kubernetes/kubernetes/pull/33728), [#34147](https://github.com/kubernetes/kubernetes/pull/34147) and [#34555](https://github.com/kubernetes/kubernetes/pull/34555)
- Allow kubeadm to get config options from a file: [#34501](https://github.com/kubernetes/kubernetes/pull/34501), [#34885](https://github.com/kubernetes/kubernetes/pull/34885) and [#34891](https://github.com/kubernetes/kubernetes/pull/34891)
- Implement preflight checks: [#34341](https://github.com/kubernetes/kubernetes/pull/34341) and [#35843](https://github.com/kubernetes/kubernetes/pull/35843)
- Use Kubernetes v1.4.4 by default: [#34419](https://github.com/kubernetes/kubernetes/pull/34419) and [#35270](https://github.com/kubernetes/kubernetes/pull/35270)
- Make api and discovery ports configurable and default to 6443: [#34719](https://github.com/kubernetes/kubernetes/pull/34719)
- Implement kubeadm reset: [#34807](https://github.com/kubernetes/kubernetes/pull/34807)
- Make kubeadm poll/wait for endpoints instead of failing directly when the master isn't available: [#34703](https://github.com/kubernetes/kubernetes/pull/34703) and [#34718](https://github.com/kubernetes/kubernetes/pull/34718)
- Allow empty directories in the directory preflight check: [#35632](https://github.com/kubernetes/kubernetes/pull/35632)
- Started adding unit tests: [#35231](https://github.com/kubernetes/kubernetes/pull/35231), [#35326](https://github.com/kubernetes/kubernetes/pull/35326) and [#35332](https://github.com/kubernetes/kubernetes/pull/35332)
- Various enhancements: [#35075](https://github.com/kubernetes/kubernetes/pull/35075), [#35111](https://github.com/kubernetes/kubernetes/pull/35111), [#35119](https://github.com/kubernetes/kubernetes/pull/35119), [#35124](https://github.com/kubernetes/kubernetes/pull/35124), [#35265](https://github.com/kubernetes/kubernetes/pull/35265) and [#35777](https://github.com/kubernetes/kubernetes/pull/35777)
- Bug fixes: [#34352](https://github.com/kubernetes/kubernetes/pull/34352), [#34558](https://github.com/kubernetes/kubernetes/pull/34558), [#34573](https://github.com/kubernetes/kubernetes/pull/34573), [#34834](https://github.com/kubernetes/kubernetes/pull/34834), [#34607](https://github.com/kubernetes/kubernetes/pull/34607), [#34907](https://github.com/kubernetes/kubernetes/pull/34907) and [#35796](https://github.com/kubernetes/kubernetes/pull/35796)
- Initial v1.4 release: `v1.5.0-alpha.0.1534+cf7301f16c0363`
## Troubleshooting
* Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, e.g.:
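A minimal sketch of one way to set this persistently (assuming your distro reads `/etc/sysctl.d`):
# echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
# sysctl --system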


@@ -13,7 +13,7 @@ li>.highlighter-rouge {position:relative; top:3px;}
## Overview
This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1+.
The installation uses a tool called `kubeadm` which is part of Kubernetes 1.4.
This process works with local VMs, physical servers and/or cloud servers.
@@ -38,7 +38,7 @@ If you are not constrained, other tools build on kubeadm to give you complete cl
## Prerequisites
1. One or more machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1+
1. 1GB or more of RAM per machine (any less will leave little room for your apps)
1. Full network connectivity between all machines in the cluster (public or private network is fine)
@@ -61,6 +61,9 @@ You will install the following packages on all the machines:
You will only need this on the master, but it can be useful to have on the other nodes as well.
* `kubeadm`: the command to bootstrap the cluster.
NOTE: If you already have kubeadm installed, you should do an `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.
See the reference doc if you want to read about the different [kubeadm releases](/docs/admin/kubeadm).
For each host in turn:
* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
@@ -94,7 +97,7 @@ For each host in turn:
The kubelet is now restarting every few seconds, as it waits in a crashloop for `kubeadm` to tell it what to do.
Note: Disabling SELinux by running `setenforce 0` is required to allow containers to access the host filesystem, which pod networks, for example, require. You have to do this until SELinux support is improved in the kubelet.
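A minimal sketch, assuming the standard `/etc/selinux/config` location (the `sed` line makes the change persist across reboots):
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config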
### (2/4) Initializing your master
@@ -103,6 +106,8 @@ All of these components run in pods started by `kubelet`.
Right now you can't run `kubeadm init` twice without tearing down the cluster in between, see [Tear down](#tear-down).
If you try to run `kubeadm init` and your machine is in a state that is incompatible with starting a Kubernetes cluster, `kubeadm` will warn you about things that might not work, or it will error out on unsatisfied mandatory requirements.
To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:
# kubeadm init # kubeadm init
@@ -201,16 +206,27 @@ For example:
A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
Note that there currently isn't an out-of-the-box way of connecting to the Master's API Server via `kubectl` from a node. Read issue [#35729](https://github.com/kubernetes/kubernetes/issues/35729) for more details.
### (Optional) Controlling your cluster from machines other than the master
In order to get kubectl on your laptop, for example, to talk to your cluster, you need to copy the `KubeConfig` file from your master to your laptop like this:
# scp root@<master ip>:/etc/kubernetes/admin.conf . # scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf get nodes # kubectl --kubeconfig ./admin.conf get nodes
### (Optional) Connecting to the API Server
If you want to connect to the API Server from outside the cluster, for example to view the dashboard (note: the dashboard is not deployed by default), you can use `kubectl proxy`:
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf proxy
You can now access the API Server locally at `http://localhost:8001/api/v1`.
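As a quick sanity check (assuming `curl` is available and is run on the same machine as the proxy), you can list the cluster's nodes through the proxied API:
# curl http://localhost:8001/api/v1/nodes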
### (Optional) Installing a sample application
As an example, install a sample microservices application, a socks shop, to put your cluster through its paces. Note that this demo only works on `amd64`.
To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo).
# kubectl create namespace sock-shop # kubectl create namespace sock-shop
@@ -242,17 +258,11 @@ If there is a firewall, make sure it exposes this port to the internet before yo
* To uninstall the socks shop, run `kubectl delete namespace sock-shop` on the master.
* To undo what `kubeadm` did, simply run:
# kubeadm reset
If you wish to start over, run `systemctl start kubelet` followed by `kubeadm init` or `kubeadm join`.
## Explore other add-ons
@@ -275,19 +285,22 @@ kubeadm deb packages and binaries are built for amd64, arm and arm64, following
deb-packages are released for ARM and ARM 64-bit, but not RPMs (yet, reach out if there's interest).
ARM had some issues when making v1.4, see [#32517](https://github.com/kubernetes/kubernetes/pull/32517), [#33485](https://github.com/kubernetes/kubernetes/pull/33485), [#33117](https://github.com/kubernetes/kubernetes/pull/33117) and [#33376](https://github.com/kubernetes/kubernetes/pull/33376).
However, thanks to the PRs above, `kube-apiserver` works on ARM from the `v1.4.1` release, so make sure you're at least using `v1.4.1` when running on ARM 32-bit.
The multiarch flannel daemonset can be installed this way:
# export ARCH=amd64
# curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -
Replace `ARCH=amd64` with `ARCH=arm` or `ARCH=arm64` depending on the platform you're running on.
Note that the Raspberry Pi 3 is in ARM 32-bit mode, so for RPi 3 you should set `ARCH` to `arm`, not `arm64`.
## Limitations
Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.
You can also take a look at the troubleshooting section in the [reference document](/docs/admin/kubeadm/#troubleshooting).
1. The cluster created here doesn't have cloud-provider integrations by default, so it doesn't work automatically with, for example, [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
To set up kubeadm with CloudProvider integrations (it's experimental, but give it a try), refer to the [kubeadm reference](/docs/admin/kubeadm/) document.
@@ -302,6 +315,15 @@ Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.
1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).
Workaround: use `docker logs` on the nodes where the containers are running.
1. The HostPort functionality does not work with kubeadm because CNI networking is used; see issue [#31307](https://github.com/kubernetes/kubernetes/issues/31307).
Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) instead, or use HostNetwork.
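A minimal sketch of a NodePort service (the service name, selector and port values here are hypothetical placeholders):
# kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    nodePort: 30080
EOF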
1. A running `firewalld` service may conflict with kubeadm, so if you want to run `kubeadm`, you should disable `firewalld` until issue [#35535](https://github.com/kubernetes/kubernetes/issues/35535) is resolved.
Workaround: disable `firewalld`, or configure it to allow the Kubernetes pod and service CIDRs.
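If you choose to disable it, for example:
# systemctl stop firewalld
# systemctl disable firewalld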
1. If you see errors like `etcd cluster unavailable or misconfigured`, it's because of high load on the machine, which makes the `etcd` container slightly unresponsive (it might miss some requests); the kubelet will therefore restart it. This will get better with `etcd3`.
Workaround: set `failureThreshold` in `/etc/kubernetes/manifests/etcd.json` to a larger value.
1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).
By default, it doesn't do this and the kubelet ends up using the first non-loopback network interface, which is usually NATed.
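One way to verify this and, as a sketch (the host-only address below is hypothetical), pin the hostname to the routable interface via `/etc/hosts`:
# hostname -i
# echo "192.168.50.10 $(hostname)" >> /etc/hosts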