kubeadm-setup: update all setup related documents for 1.15 (#14594)

pull/14812/head
Lubomir I. Ivanov 2019-06-11 02:18:19 +03:00 committed by Kubernetes Prow Robot
parent d1bdefd251
commit b51345a681
5 changed files with 94 additions and 78 deletions


@ -1,18 +1,18 @@
---
reviewers:
- sig-cluster-lifecycle
title: Creating a single control-plane cluster with kubeadm
content_template: templates/task
weight: 30
---
{{% capture overview %}}
<img src="https://raw.githubusercontent.com/cncf/artwork/master/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster
lifecycle functions, such as upgrades, downgrades, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
Because you can install kubeadm on various types of machine (e.g. laptop, server,
Raspberry Pi, etc.), it's well suited for integration with provisioning systems
such as Terraform or Ansible.
kubeadm's simplicity means it can serve a wide range of use cases:
@ -33,17 +33,17 @@ installing deb or rpm packages. The responsible SIG for kubeadm,
but you may also build them from source for other OSes.
### kubeadm maturity
| Area | Maturity Level |
|---------------------------|--------------- |
| Command line UX | GA |
| Implementation | GA |
| Config file API | Beta |
| CoreDNS | GA |
| kubeadm alpha subcommands | Alpha |
| High availability | Beta |
| DynamicKubeletConfig | Alpha |
kubeadm's overall feature state is **GA**. Some sub-features, like the configuration
@ -69,6 +69,8 @@ timeframe; which also applies to `kubeadm`.
| v1.11.x | June 2018 | March 2019   |
| v1.12.x | September 2018 | June 2019   |
| v1.13.x | December 2018 | September 2019   |
| v1.14.x | March 2019 | December 2019   |
| v1.15.x | June 2019 | March 2020   |
{{% /capture %}}
@ -77,17 +79,17 @@ timeframe; which also applies to `kubeadm`.
- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
- 2 GB or more of RAM per machine. Any less leaves little room for your
apps.
- 2 CPUs or more on the control-plane node
- Full network connectivity among all machines in the cluster. A public or
private network is fine.
{{% /capture %}}
{{% capture steps %}}
## Objectives
* Install a single control-plane Kubernetes cluster or [high-availability cluster](/docs/setup/independent/high-availability/)
* Install a Pod network on the cluster so that your Pods can
talk to each other
@ -102,17 +104,17 @@ If you have already installed kubeadm, run `apt-get update &&
apt-get upgrade` or `yum update` to get the latest version of kubeadm.
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
kubeadm to tell it what to do. This crashloop is expected and normal.
After you initialize your control-plane node, the kubelet runs normally.
{{< /note >}}
### Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including
etcd (the cluster database) and the API server (which the kubectl CLI
communicates with).
1. Choose a pod network add-on, and verify whether it requires any arguments to
be passed to kubeadm initialization. Depending on which
third-party provider you choose, you might need to set the `--pod-network-cidr` to
a provider-specific value. See [Installing a pod network add-on](#pod-network).
@ -120,18 +122,18 @@ a provider-specific value. See [Installing a pod network add-on](#pod-network).
by using a list of well-known domain socket paths. To use a different container runtime, or
if more than one is installed on the provisioned node, specify the `--cri-socket`
argument to `kubeadm init`. See [Installing runtime](/docs/setup/independent/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
with the default gateway to advertise the control-plane's IP. To use a different
network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`
1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
connectivity to gcr.io registries.
Now run:
```bash
kubeadm init <args>
```
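For example, a sketch of a typical invocation (both flag values are illustrative placeholders; the CIDR shown matches what some add-ons such as Flannel expect, so check your add-on's documentation first):

```bash
# Illustrative values only: pick the pod CIDR your network add-on requires
# and the address of the interface the API server should advertise.
kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.0.200
```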
### More information
@ -150,7 +152,7 @@ components do not currently support multi-architecture.
`kubeadm init` first runs a series of prechecks to ensure that the machine
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
then downloads and installs the cluster control plane components. This may take several minutes.
The output should look like:
```none
@ -239,8 +241,8 @@ export KUBECONFIG=/etc/kubernetes/admin.conf
Make a record of the `kubeadm join` command that `kubeadm init` outputs. You
need this command to [join nodes to your cluster](#join-nodes).
The token is used for mutual authentication between the control-plane node and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this
token can add authenticated nodes to your cluster. These tokens can be listed,
created, and deleted with the `kubeadm token` command. See the
[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
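If you lose the original output, you can regenerate a complete join command on the control-plane node in one step with the `--print-join-command` flag:

```bash
# Creates a new token and prints the full 'kubeadm join' command for it.
kubeadm token create --print-join-command
```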
@ -258,8 +260,8 @@ each other.
kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).**
Several projects provide Kubernetes pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons.
- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9.
Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/).
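Installing an add-on generally comes down to applying its manifest with kubectl. As a sketch (`<add-on.yaml>` is a placeholder for whatever manifest your chosen provider documents):

```bash
# <add-on.yaml> is supplied by your pod network provider
# (often a URL from its install guide).
kubectl apply -f <add-on.yaml>
```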
@ -423,8 +425,8 @@ out our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/)
### Control plane node isolation
By default, your cluster will not schedule pods on the control-plane node for security
reasons. If you want to be able to schedule pods on the control-plane node, e.g. for a
single-machine Kubernetes cluster for development, run:
```bash
@ -440,7 +442,7 @@ taint "node-role.kubernetes.io/master:" not found
```
This will remove the `node-role.kubernetes.io/master` taint from any nodes that
have it, including the control-plane node, meaning that the scheduler will then be able
to schedule pods everywhere.
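For reference, the taint is removed with a trailing `-` on the taint name:

```bash
# The trailing '-' tells kubectl to remove the taint rather than add it.
kubectl taint nodes --all node-role.kubernetes.io/master-
```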
### Joining your nodes {#join-nodes}
@ -455,7 +457,7 @@ The nodes are where your workloads (containers and pods, etc) run. To add new no
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```
If you do not have the token, you can get it by running the following command on the control-plane node:
``` bash
kubeadm token list
@ -472,7 +474,7 @@ TOKEN TTL EXPIRES USAGES DESCRIPTION
```
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
you can create a new token by running the following command on the control-plane node:
``` bash
kubeadm token create
@ -484,7 +486,7 @@ The output is similar to this:
5didvk.d09sbcov8ph2amjw
```
If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the control-plane node:
``` bash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
```
@ -517,12 +519,12 @@ Run 'kubectl get nodes' on the master to see this machine join.
```
A few seconds later, you should notice this node in the output from `kubectl get
nodes` when run on the control-plane node.
### (Optional) Controlling your cluster from machines other than the control-plane node
In order to get a kubectl on some other computer (e.g. laptop) to talk to your
cluster, you need to copy the administrator kubeconfig file from your control-plane node
to your workstation like this:
``` bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```
@ -562,7 +564,7 @@ To undo what kubeadm did, you should first [drain the
node](/docs/reference/generated/kubectl/kubectl-commands#drain) and make
sure that the node is empty before shutting it down.
Talking to the control-plane node with the appropriate credentials, run:
```bash
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
```
@ -650,18 +652,17 @@ supports your chosen platform.
## Limitations {#limitations}
The cluster created here has a single control-plane node, with a single etcd database
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.
Workarounds:
* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node;
a minimal snapshot sketch follows this list.
* Use multiple control-plane nodes by completing the
[HA setup](/docs/setup/independent/ha-topology) instead.
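As mentioned in the first workaround, a backup might look like the following. This is a sketch only, assuming `etcdctl` v3 is installed and using the default kubeadm certificate paths; adjust the endpoint, paths, and snapshot location for your setup:

```bash
# Run on the control-plane node; cert paths below are kubeadm defaults.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```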
## Troubleshooting {#troubleshooting}


@ -34,7 +34,7 @@ Each control plane node creates a local etcd member and this etcd member communi
the `kube-apiserver` of this node. The same applies to the local `kube-controller-manager`
and `kube-scheduler` instances.
This topology couples the control planes and etcd members on the same nodes. It is simpler to set up than a cluster
with external etcd nodes, and simpler to manage for replication.
However, a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control
@ -43,7 +43,7 @@ plane instance are lost, and redundancy is compromised. You can mitigate this ri
You should therefore run a minimum of three stacked control plane nodes for an HA cluster.
This is the default topology in kubeadm. A local etcd member is created automatically
on control plane nodes when using `kubeadm init` and `kubeadm join --control-plane`.
![Stacked etcd topology](/images/kubeadm/kubeadm-ha-topology-stacked-etcd.svg)


@ -19,12 +19,10 @@ control plane nodes and etcd members are separated.
Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. [This comparison topic](/docs/setup/independent/ha-topology/) outlines the advantages and disadvantages of each.
If you encounter issues with setting up the HA cluster, please provide us with feedback
in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15).
{{< caution >}}
This page does not address running your cluster on a cloud provider. In a cloud
@ -127,20 +125,20 @@ the `networking` object of `ClusterConfiguration`.
1. Initialize the control plane:
```sh
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs
```
- The `--upload-certs` flag is used to upload the certificates that should be shared
across all the control-plane instances to the cluster. If you prefer instead to copy certs across
control-plane nodes manually or by using automation tools, remove this flag and refer to the [Manual
certificate distribution](#manual-certs) section below.
After the command completes you should see something like this:
```sh
...
You can now join any number of control-plane node by running the following command on each as a root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
@ -149,15 +147,27 @@ the `networking` object of `ClusterConfiguration`.
```
- Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
- When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
are encrypted and uploaded in the `kubeadm-certs` Secret.
- To re-upload the certificates and generate a new decryption key, use the following command on a control plane
node that is already joined to the cluster:
```sh
sudo kubeadm init phase upload-certs --upload-certs
```
- You can also specify a custom `--certificate-key` during `init` that can later be used by `join`.
To generate such a key you can use the following command:
```sh
kubeadm alpha certs certificate-key
```
{{< note >}}
The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed. Therefore, if you want
to use the [kubeadm configuration](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2) you must add the `certificateKey` field in the appropriate config locations (under `InitConfiguration` and `JoinConfiguration: controlPlane`).
{{< /note >}}
{{< note >}}
The `kubeadm-certs` Secret and decryption key expire after two hours.
{{< /note >}}
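As a sketch of the config placement the first note describes, assuming the v1beta2 API, where the certificate key and the load-balancer endpoint are illustrative placeholders to substitute with your own values:

```sh
# Hypothetical kubeadm-config.yaml; values are placeholders.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
certificateKey: "f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
EOF
```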
@ -186,9 +196,11 @@ As stated in the command output, the certificate-key gives access to cluster sen
### Steps for the rest of the control plane nodes
{{< note >}}
Since kubeadm version 1.15 you can join multiple control-plane nodes in parallel.
Prior to this version, you had to join new control plane nodes sequentially, only after
the first node had finished initializing.
{{< /note >}}
For each additional control plane node you should:
@ -196,10 +208,10 @@ For each additional control plane node you should:
It should look something like this:
```sh
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
```
- The `--control-plane` flag tells `kubeadm join` to create a new control plane.
- The `--certificate-key ...` will cause the control plane certificates to be downloaded
from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.
@ -261,7 +273,7 @@ etcd topology this is managed automatically.
The following steps are exactly the same as described for stacked etcd setup:
1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node.
1. Write the output join commands that are returned to a text file for later use.
@ -293,7 +305,7 @@ sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery
## Manual certificate distribution {#manual-certs}
If you choose not to use `kubeadm init` with the `--upload-certs` flag, you must
manually copy the certificates from the primary control plane node to the
joining control plane nodes, as sketched below.
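A rough sketch of that manual copy, assuming ssh access from the primary control plane to each joining node (`USER` and the IP list are placeholders to customize):

```sh
# Run on the primary control-plane node; customize the two variables.
USER=ubuntu
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    # Copy the etcd CA under a distinct name to avoid clobbering ca.crt.
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
```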


@ -54,7 +54,7 @@ route, we recommend you add IP route(s) so Kubernetes cluster addresses go via t
## Check required ports
### Control-plane node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|-------------------------|---------------------------|
@ -76,7 +76,7 @@ route, we recommend you add IP route(s) so Kubernetes cluster addresses go via t
Any port numbers marked with * are overridable, so you will need to ensure any
custom ports you provide are also open.
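A quick way to verify that a required port is reachable (netcat here is an assumption; any port-checking tool works):

```bash
# Example: probe the Kubernetes API server port on a control-plane node.
nc -zv <control-plane-ip> 6443
```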
Although etcd ports are included in control-plane nodes, you can also host your own
etcd cluster externally or on custom ports.
The pod network plugin you use (see below) may also require certain ports to be
@ -201,7 +201,7 @@ systemctl enable --now kubelet
Install CNI plugins (required for most pod networks):
```bash
CNI_VERSION="v0.6.0"
CNI_VERSION="v0.7.5"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
```
@ -209,7 +209,7 @@ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_
Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI))
```bash
CRICTL_VERSION="v1.11.1"
CRICTL_VERSION="v1.12.0"
mkdir -p /opt/bin
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz
```
@ -241,7 +241,7 @@ systemctl enable --now kubelet
The kubelet is now restarting every few seconds, as it waits in a crashloop for
kubeadm to tell it what to do.
## Configure cgroup driver used by kubelet on control-plane node
When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet
and set it in the `/var/lib/kubelet/kubeadm-flags.env` file during runtime.
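If you need to override the detected driver, a minimal sketch follows. It assumes your kubelet systemd unit sources extra arguments from `/etc/default/kubelet`, as the deb packages do; rpm-based installs may use a different drop-in location:

```bash
# Assumption: /etc/default/kubelet is sourced by the kubelet systemd unit.
echo 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd' > /etc/default/kubelet
# Then reload systemd and restart the kubelet, as shown below.
```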
@ -266,6 +266,9 @@ systemctl daemon-reload
systemctl restart kubelet
```
The automatic detection of the cgroup driver for other container runtimes
like CRI-O and containerd is a work in progress.
## Troubleshooting
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).


@ -60,7 +60,7 @@ This may be caused by a number of problems. The most common are:
1. Install Docker again following the instructions
[here](/docs/setup/independent/install-kubeadm/#installing-docker).
1. Change the kubelet config to match the Docker cgroup driver manually; you can refer to
[Configure cgroup driver used by kubelet on control-plane node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
for detailed instructions. A quick comparison check is sketched after this list.
- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
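To compare the two cgroup drivers directly, something like the following helps. This assumes Docker as the runtime and the default kubeadm flags file location:

```bash
# What Docker reports:
docker info 2>/dev/null | grep -i 'cgroup driver'
# What kubeadm configured for the kubelet:
grep -o -- '--cgroup-driver=[a-z]*' /var/lib/kubelet/kubeadm-flags.env
```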
@ -100,7 +100,7 @@ Right after `kubeadm init` there should not be any pods in these states.
until you have deployed the network solution.
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
it's very likely that the Pod Network solution that you installed is somehow broken.
You might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network providers' issue tracker and get the issue triaged there.
- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option