refactor setting up cluster using kubeadm docs (#7104)

pull/7637/head
Stewart-YU 2018-03-05 16:43:51 +08:00 committed by k8s-ci-robot
parent 2b7f83ffce
commit 7cb230f2a5
2 changed files with 39 additions and 54 deletions


@@ -23,8 +23,8 @@ see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-clust
* 2 GB or more of RAM per machine (any less will leave little room for your apps)
* 2 CPUs or more
* Full network connectivity between all machines in the cluster (public or private network is fine)
* Unique hostname, MAC address, and product_uuid for every node
* Certain ports are open on your machines. See the section below for more details
* Unique hostname, MAC address, and product_uuid for every node. See [here](/docs/setup/independent/install-kubeadm/#verify-the-mac-address-and-product_uuid-are-unique-for-every-node) for more details.
* Certain ports are open on your machines. See [here](/docs/setup/independent/install-kubeadm/#check-required-ports) for more details.
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
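As a sketch, swap can be disabled like this (run as root; the `/etc/fstab` edit assumes a conventional Linux layout and keeps swap off across reboots):

```shell
# Turn swap off for the running system; takes effect immediately
swapoff -a
# Comment out any swap entries in /etc/fstab so swap stays disabled after a reboot
# (writes a .bak backup of the original file first)
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```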
{% endcapture %}
@@ -39,7 +39,7 @@ see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-clust
It is very likely that hardware devices will have unique addresses, although some virtual machines may have
identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.
If these values are not unique to each node, the installation process
[may fail](https://github.com/kubernetes/kubeadm/issues/31).
may [fail](https://github.com/kubernetes/kubeadm/issues/31).
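The MAC addresses and the product_uuid can be inspected like so (the product_uuid path assumes a DMI-capable Linux host):

```shell
# List every network interface along with its MAC address
ip link
# Print the product_uuid; root is usually required to read it
sudo cat /sys/class/dmi/id/product_uuid
```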
## Check network adapters
@@ -87,7 +87,8 @@ Versions 17.06+ _might work_, but have not yet been tested and verified by the K
Please proceed with executing the following commands based on your OS as root. You may become the root user by executing `sudo -i` after SSH-ing to each host.
You can use the following commands to install Docker on your system:
If you already have the required version of Docker installed, you can move on to the next section.
If not, you can use the following commands to install Docker on your system:
{% capture docker_ubuntu %}
@@ -138,20 +139,6 @@ systemctl enable docker && systemctl start docker
{% endcapture %}
**Note**: Make sure that the cgroup driver used by kubelet is the same as the one used by
Docker. To ensure compatibility you can either update Docker, like so:
```bash
cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```
and restart Docker. Or ensure the `--cgroup-driver` kubelet flag is set to the same value
as Docker (e.g. `cgroupfs`).
{% assign tab_set_name = "docker_install" %}
{% assign tab_names = "Ubuntu, Debian or HypriotOS;CentOS, RHEL or Fedora;Container Linux" | split: ';' | compact %}
{% assign tab_contents = site.emptyArray | push: docker_ubuntu | push: docker_centos | push: docker_coreos %}
@@ -273,6 +260,31 @@ systemctl enable kubelet && systemctl start kubelet
The kubelet is now restarting every few seconds, as it waits in a crashloop for
kubeadm to tell it what to do.
## Configure cgroup driver used by kubelet on Master Node
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:
```bash
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
If the Docker cgroup driver and the kubelet config don't match, change the kubelet config to match the Docker cgroup driver. The
flag you need to change is `--cgroup-driver`. If it's already set, you can update like so:
```bash
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.
Then restart kubelet:
```bash
systemctl daemon-reload
systemctl restart kubelet
```
## Troubleshooting
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).

View File

@@ -23,7 +23,7 @@ If your cluster is in an error state, you may have trouble in the configuration
{% endcapture %}
#### `ebtables` or executable not found during installation
#### `ebtables` or some similar executable not found during installation
If you see the following warnings while running `kubeadm init`
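A likely remedy, assuming a Debian- or RHEL-based host, is to install the missing tools before re-running `kubeadm init`:

```shell
# Debian/Ubuntu (run as root)
apt-get install -y ebtables ethtool

# CentOS/RHEL/Fedora (run as root)
# yum install -y ebtables ethtool
```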
@@ -61,8 +61,15 @@ This may be caused by a number of problems. The most common are:
1. Install docker again following instructions
[here](/docs/setup/independent/install-kubeadm/#installing-docker).
1. Change the kubelet config to match the Docker cgroup driver manually; refer to
[Errors on CentOS when setting up masters](#errors-on-centos-when-setting-up-masters)
[Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
for detailed instructions.
The `kubectl describe pod` or `kubectl logs` commands can help you diagnose errors. For example:
```bash
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}
```
- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
@@ -134,40 +141,6 @@ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
#### Errors on CentOS when setting up masters
If you are using CentOS and encounter difficulty while setting up the master node
verify that your Docker cgroup driver matches the kubelet config:
```bash
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
If the Docker cgroup driver and the kubelet config don't match, change the kubelet config to match the Docker cgroup driver. The
flag you need to change is `--cgroup-driver`. If it's already set, you can update like so:
```bash
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
Otherwise, you will need to open the systemd file and add the flag to an existing environment line.
Then restart kubelet:
```bash
systemctl daemon-reload
systemctl restart kubelet
```
The `kubectl describe pod` or `kubectl logs` commands can help you diagnose errors. For example:
```bash
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}
```
### Default NIC when using flannel as the pod network in Vagrant
The following error might indicate that something was wrong in the pod network: