diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md
index c5cb8751ed..c4c6b72d8c 100644
--- a/docs/setup/independent/high-availability.md
+++ b/docs/setup/independent/high-availability.md
@@ -54,76 +54,76 @@ For **Option 2**: you can skip to the next step. Any reference to `etcd0`, `etcd

1. Install `cfssl` and `cfssljson`:

-   ```shell
-   curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
-   curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
-   chmod +x /usr/local/bin/cfssl*
-   ```
+   ```shell
+   curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
+   curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+   chmod +x /usr/local/bin/cfssl*
+   ```

1. SSH into `etcd0` and run the following:

-   ```shell
-   mkdir -p /etc/kubernetes/pki/etcd
-   cd /etc/kubernetes/pki/etcd
-   ```
-   ```shell
-   cat >ca-config.json <<EOF
-   {
-     "signing": {
-       "default": {
-         "expiry": "43800h"
-       },
-       "profiles": {
-         "server": {
-           "expiry": "43800h",
-           "usages": ["signing", "key encipherment", "server auth", "client auth"]
-         },
-         "client": {
-           "expiry": "43800h",
-           "usages": ["signing", "key encipherment", "client auth"]
-         },
-         "peer": {
-           "expiry": "43800h",
-           "usages": ["signing", "key encipherment", "server auth", "client auth"]
-         }
-       }
-     }
-   }
-   EOF
-   cat >ca-csr.json <<EOF
-   {
-     "CN": "etcd",
-     "key": {
-       "algo": "rsa",
-       "size": 2048
-     }
-   }
-   EOF
-   cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
-   ```
+   ```shell
+   mkdir -p /etc/kubernetes/pki/etcd
+   cd /etc/kubernetes/pki/etcd
+   ```
+   ```shell
+   cat >ca-config.json <<EOF
+   {
+     "signing": {
+       "default": {
+         "expiry": "43800h"
+       },
+       "profiles": {
+         "server": {
+           "expiry": "43800h",
+           "usages": ["signing", "key encipherment", "server auth", "client auth"]
+         },
+         "client": {
+           "expiry": "43800h",
+           "usages": ["signing", "key encipherment", "client auth"]
+         },
+         "peer": {
+           "expiry": "43800h",
+           "usages": ["signing", "key encipherment", "server auth", "client auth"]
+         }
+       }
+     }
+   }
+   EOF
+   cat >ca-csr.json <<EOF
+   {
+     "CN": "etcd",
+     "key": {
+       "algo": "rsa",
+       "size": 2048
+     }
+   }
+   EOF
+   cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
+   ```
-   ```shell
-   cat >client.json <<EOF
-   {
-     "CN": "client",
-     "key": {
-       "algo": "ecdsa",
-       "size": 256
-     }
-   }
-   EOF
-   cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
-   ```
+   ```shell
+   cat >client.json <<EOF
+   {
+     "CN": "client",
+     "key": {
+       "algo": "ecdsa",
+       "size": 256
+     }
+   }
+   EOF
+   cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
+   ```

-   ```shell
-   ssh-keygen -t rsa -b 4096 -C "<email>"
-   ```
+   ```shell
+   ssh-keygen -t rsa -b 4096 -C "<email>"
+   ```

-   Make sure to replace `<email>` with your email, a placeholder, or an empty string. Keep hitting enter until files exist in `~/.ssh`.
+   Make sure to replace `<email>` with your email, a placeholder, or an empty string. Keep hitting enter until files exist in `~/.ssh`.

1. Output the contents of the public key file for `etcd1` and `etcd2`, like so:

-   ```shell
-   cat ~/.ssh/id_rsa.pub
-   ```
+   ```shell
+   cat ~/.ssh/id_rsa.pub
+   ```

1. Finally, copy the output for each and paste them into `etcd0`'s `~/.ssh/authorized_keys` file. This will permit `etcd1` and `etcd2` to SSH in to the machine.

@@ -181,32 +181,32 @@ In order to copy certs between machines, you must enable SSH access for `scp`.

1. In order to generate certs, each etcd machine needs the root CA generated by `etcd0`.
On `etcd1` and `etcd2`, run the following:

-   ```shell
-   mkdir -p /etc/kubernetes/pki/etcd
-   cd /etc/kubernetes/pki/etcd
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem .
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-key.pem .
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem .
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem .
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-config.json .
-   ```
+   ```shell
+   mkdir -p /etc/kubernetes/pki/etcd
+   cd /etc/kubernetes/pki/etcd
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem .
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-key.pem .
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem .
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem .
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-config.json .
+   ```

-   Where `<etcd0-ip-address>` corresponds to the public or private IPv4 of `etcd0`.
+   Where `<etcd0-ip-address>` corresponds to the public or private IPv4 of `etcd0`.

1. Once this is done, run the following on all etcd machines:

-   ```shell
-   cfssl print-defaults csr > config.json
-   sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' config.json
-   sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' config.json
-   sed -i 's/example\.net/'"$PUBLIC_IP"'/' config.json
-   cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
-   cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
-   ```
+   ```shell
+   cfssl print-defaults csr > config.json
+   sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' config.json
+   sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' config.json
+   sed -i 's/example\.net/'"$PUBLIC_IP"'/' config.json
+   cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
+   cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
+   ```

-   The above will replace the default configuration with your machine's hostname as the peer name, and its IP addresses.
Make sure - these are correct before generating the certs. If you found an error, reconfigure `config.json` and re-run the `cfssl` commands. + The above will replace the default configuration with your machine's hostname as the peer name, and its IP addresses. Make sure + these are correct before generating the certs. If you found an error, reconfigure `config.json` and re-run the `cfssl` commands. This will result in the following files: `peer.pem`, `peer-key.pem`, `server.pem`, `server-key.pem`. @@ -222,79 +222,79 @@ Please select one of the tabs to see installation instructions for the respectiv 1. First you will install etcd binaries like so: - ```shell - export ETCD_VERSION=v3.1.10 - curl -sSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xzv --strip-components=1 -C /usr/local/bin/ - rm -rf etcd-$ETCD_VERSION-linux-amd64* - ``` + ```shell + export ETCD_VERSION=v3.1.10 + curl -sSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xzv --strip-components=1 -C /usr/local/bin/ + rm -rf etcd-$ETCD_VERSION-linux-amd64* + ``` - It is worth noting that etcd v3.1.10 is the preferred version for Kubernetes v1.9. For other versions of Kubernetes please consult [the changelog](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md). + It is worth noting that etcd v3.1.10 is the preferred version for Kubernetes v1.9. For other versions of Kubernetes please consult [the changelog](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md). - Also, please realise that most distributions of Linux already have a version of etcd installed, so you will be replacing the system default. + Also, please realise that most distributions of Linux already have a version of etcd installed, so you will be replacing the system default. 1. 
Next, generate the environment file that systemd will use: - ``` - touch /etc/etcd.env - echo "PEER_NAME=$PEER_NAME" >> /etc/etcd.env - echo "PRIVATE_IP=$PRIVATE_IP" >> /etc/etcd.env - ``` + ``` + touch /etc/etcd.env + echo "PEER_NAME=$PEER_NAME" >> /etc/etcd.env + echo "PRIVATE_IP=$PRIVATE_IP" >> /etc/etcd.env + ``` 1. Now copy the systemd unit file like so: - ```shell - cat >/etc/systemd/system/etcd.service </etc/systemd/system/etcd.service <:2380,etcd1=https://:2380,etcd2=https://:2380 \ - --initial-cluster-token my-etcd-token \ - --initial-cluster-state new + ExecStart=/usr/local/bin/etcd --name ${PEER_NAME} \ + --data-dir /var/lib/etcd \ + --listen-client-urls https://${PRIVATE_IP}:2379 \ + --advertise-client-urls https://${PRIVATE_IP}:2379 \ + --listen-peer-urls https://${PRIVATE_IP}:2380 \ + --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \ + --cert-file=/etc/kubernetes/pki/etcd/server.pem \ + --key-file=/etc/kubernetes/pki/etcd/server-key.pem \ + --client-cert-auth \ + --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \ + --peer-cert-file=/etc/kubernetes/pki/etcd/peer.pem \ + --peer-key-file=/etc/kubernetes/pki/etcd/peer-key.pem \ + --peer-client-cert-auth \ + --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \ + --initial-cluster etcd0=https://:2380,etcd1=https://:2380,etcd2=https://:2380 \ + --initial-cluster-token my-etcd-token \ + --initial-cluster-state new - [Install] - WantedBy=multi-user.target - EOL - ``` + [Install] + WantedBy=multi-user.target + EOF + ``` - Make sure you replace ``, `` and `` with the appropriate IPv4 addresses. + Make sure you replace ``, `` and `` with the appropriate IPv4 addresses. 1. Finally, launch etcd like so: - ```shell - systemctl daemon-reload - systemctl start etcd - ``` + ```shell + systemctl daemon-reload + systemctl start etcd + ``` 1. 
Check that it launched successfully:

-   ```shell
-   systemctl status etcd
-   ```
+   ```shell
+   systemctl status etcd
+   ```

{% endcapture %}

{% capture static_pods %}

@@ -303,78 +303,78 @@ Please select one of the tabs to see installation instructions for the respectiv

1. The first step is to run the following to generate the manifest file:

-   ```shell
-   cat >/etc/kubernetes/manifests/etcd.yaml <<EOF
-   apiVersion: v1
-   kind: Pod
-   metadata:
-     labels:
-       component: etcd
-       tier: control-plane
-     name: <podname>
-     namespace: kube-system
-   spec:
-     containers:
-     - command:
-         - etcd --name ${PEER_NAME} \
-         - --data-dir /var/lib/etcd \
-         - --listen-client-urls https://${PRIVATE_IP}:2379 \
-         - --advertise-client-urls https://${PRIVATE_IP}:2379 \
-         - --listen-peer-urls https://${PRIVATE_IP}:2380 \
-         - --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \
-         - --cert-file=/certs/server.pem \
-         - --key-file=/certs/server-key.pem \
-         - --client-cert-auth \
-         - --trusted-ca-file=/certs/ca.pem \
-         - --peer-cert-file=/certs/peer.pem \
-         - --peer-key-file=/certs/peer-key.pem \
-         - --peer-client-cert-auth \
-         - --peer-trusted-ca-file=/certs/ca.pem \
-         - --initial-cluster etcd0=https://<etcd0-ip-address>:2380,etcd1=https://<etcd1-ip-address>:2380,etcd1=https://<etcd2-ip-address>:2380 \
-         - --initial-cluster-token my-etcd-token \
-         - --initial-cluster-state new
-       image: gcr.io/google_containers/etcd-amd64:3.1.0
-       livenessProbe:
-         httpGet:
-           path: /health
-           port: 2379
-           scheme: HTTP
-         initialDelaySeconds: 15
-         timeoutSeconds: 15
-       name: etcd
-       env:
-       - name: PUBLIC_IP
-         valueFrom:
-           fieldRef:
-             fieldPath: status.hostIP
-       - name: PRIVATE_IP
-         valueFrom:
-           fieldRef:
-             fieldPath: status.podIP
-       - name: PEER_NAME
-         valueFrom:
-           fieldRef:
-             fieldPath: metadata.name
-       volumeMounts:
-       - mountPath: /var/lib/etcd
-         name: etcd
-       - mountPath: /certs
-         name: certs
-     hostNetwork: true
-     volumes:
-     - hostPath:
-         path: /var/lib/etcd
-         type: DirectoryOrCreate
-       name: etcd
-     - hostPath:
-         path: /etc/kubernetes/pki/etcd
-       name: certs
-   EOL
-   ```
+   ```shell
+   cat >/etc/kubernetes/manifests/etcd.yaml <<EOF
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     labels:
+       component: etcd
+       tier: control-plane
+     name: <podname>
+     namespace: kube-system
+   spec:
+     containers:
+     - command:
+         - etcd --name ${PEER_NAME} \
+         - --data-dir /var/lib/etcd \
+         - --listen-client-urls https://${PRIVATE_IP}:2379 \
+         - --advertise-client-urls https://${PRIVATE_IP}:2379 \
+         - --listen-peer-urls https://${PRIVATE_IP}:2380 \
+         - --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \
+         - --cert-file=/certs/server.pem \
+         - --key-file=/certs/server-key.pem \
+         - --client-cert-auth \
+         - --trusted-ca-file=/certs/ca.pem \
+         - --peer-cert-file=/certs/peer.pem \
+         - --peer-key-file=/certs/peer-key.pem \
+         - --peer-client-cert-auth \
+         - --peer-trusted-ca-file=/certs/ca.pem \
+         - --initial-cluster etcd0=https://<etcd0-ip-address>:2380,etcd1=https://<etcd1-ip-address>:2380,etcd2=https://<etcd2-ip-address>:2380 \
+         - --initial-cluster-token my-etcd-token \
+         - --initial-cluster-state new
+       image: gcr.io/google_containers/etcd-amd64:3.1.10
+       livenessProbe:
+         httpGet:
+           path: /health
+           port: 2379
+           scheme: HTTP
+         initialDelaySeconds: 15
+         timeoutSeconds: 15
+       name: etcd
+       env:
+       - name: PUBLIC_IP
+         valueFrom:
+           fieldRef:
+             fieldPath: status.hostIP
+       - name: PRIVATE_IP
+         valueFrom:
+           fieldRef:
+             fieldPath: status.podIP
+       - name: PEER_NAME
+         valueFrom:
+           fieldRef:
+             fieldPath: metadata.name
+       volumeMounts:
+       - mountPath: /var/lib/etcd
+         name: etcd
+       - mountPath: /certs
+         name: certs
+     hostNetwork: true
+     volumes:
+     - hostPath:
+         path: /var/lib/etcd
+         type: DirectoryOrCreate
+       name: etcd
+     - hostPath:
+         path: /etc/kubernetes/pki/etcd
+       name: certs
+   EOF
+   ```

-   Make sure you replace:
-   * `<podname>` with the name of the node you're running on (e.g. `etcd0`, `etcd1` or `etcd2`)
-   * `<etcd0-ip-address>`, `<etcd1-ip-address>` and `<etcd2-ip-address>` with the public IPv4s of the other machines that host etcd.
+   Make sure you replace:
+   * `<podname>` with the name of the node you're running on (e.g. `etcd0`, `etcd1` or `etcd2`)
+   * `<etcd0-ip-address>`, `<etcd1-ip-address>` and `<etcd2-ip-address>` with the public IPv4s of the other machines that host etcd.

{% endcapture %}

@@ -403,53 +403,53 @@ Only follow this step if your etcd is hosted on dedicated nodes (**Option 1**).

1. 
Run the following:

-   ```shell
-   mkdir -p /etc/kubernetes/pki/etcd
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem /etc/kubernetes/pki/etcd
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem /etc/kubernetes/pki/etcd
-   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem /etc/kubernetes/pki/etcd
-   ```
+   ```shell
+   mkdir -p /etc/kubernetes/pki/etcd
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem /etc/kubernetes/pki/etcd
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem /etc/kubernetes/pki/etcd
+   scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem /etc/kubernetes/pki/etcd
+   ```

## Run `kubeadm init` on `master0` {#kubeadm-init-master0}

1. In order for kubeadm to run, you first need to write a configuration file:

-   ```shell
-   cat >config.yaml <<EOF
-   apiVersion: kubeadm.k8s.io/v1alpha1
-   kind: MasterConfiguration
-   api:
-     advertiseAddress: <private-ip>
-   etcd:
-     endpoints:
-     - https://<etcd0-ip-address>:2379
-     - https://<etcd1-ip-address>:2379
-     - https://<etcd2-ip-address>:2379
-     caFile: /etc/kubernetes/pki/etcd/ca.pem
-     certFile: /etc/kubernetes/pki/etcd/client.pem
-     keyFile: /etc/kubernetes/pki/etcd/client-key.pem
-   networking:
-     podSubnet: <podCIDR>
-   apiServerCertSANs:
-   - <load-balancer-ip>
-   apiServerExtraArgs:
-     apiserver-count: 3
-   EOL
-   ```
+   ```shell
+   cat >config.yaml <<EOF
+   apiVersion: kubeadm.k8s.io/v1alpha1
+   kind: MasterConfiguration
+   api:
+     advertiseAddress: <private-ip>
+   etcd:
+     endpoints:
+     - https://<etcd0-ip-address>:2379
+     - https://<etcd1-ip-address>:2379
+     - https://<etcd2-ip-address>:2379
+     caFile: /etc/kubernetes/pki/etcd/ca.pem
+     certFile: /etc/kubernetes/pki/etcd/client.pem
+     keyFile: /etc/kubernetes/pki/etcd/client-key.pem
+   networking:
+     podSubnet: <podCIDR>
+   apiServerCertSANs:
+   - <load-balancer-ip>
+   apiServerExtraArgs:
+     apiserver-count: 3
+   EOF
+   ```

-   Ensure that the following placeholders are replaced:
+   Ensure that the following placeholders are replaced:

-   - `<private-ip>` with the private IPv4 of the master server.
-   - `<etcd0-ip-address>`, `<etcd1-ip-address>` and `<etcd2-ip-address>` with the IP addresses of your three etcd nodes
-   - `<podCIDR>` with your Pod CIDR. Please read the [CNI network section](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network) of the docs for more information. Some CNI providers do not require a value to be set.
+   - `<private-ip>` with the private IPv4 of the master server.
+   - `<etcd0-ip-address>`, `<etcd1-ip-address>` and `<etcd2-ip-address>` with the IP addresses of your three etcd nodes
+   - `<podCIDR>` with your Pod CIDR. Please read the [CNI network section](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network) of the docs for more information. Some CNI providers do not require a value to be set.

-   **Note:** If you are using Kubernetes 1.9+, you can replace the `apiserver-count: 3` extra argument with `endpoint-reconciler-type=lease`. For more information, see [the documentation](https://kubernetes.io/docs/admin/high-availability/#endpoint-reconciler).
+   **Note:** If you are using Kubernetes 1.9+, you can replace the `apiserver-count: 3` extra argument with `endpoint-reconciler-type=lease`. For more information, see [the documentation](https://kubernetes.io/docs/admin/high-availability/#endpoint-reconciler).

1. When this is done, run kubeadm like so:

-   ```shell
-   kubeadm init --config=config.yaml
-   ```
+   ```shell
+   kubeadm init --config=config.yaml
+   ```

## Run `kubeadm init` on `master1` and `master2`

@@ -460,10 +460,10 @@ Before running kubeadm on the other masters, you need to first copy the K8s CA c

1. Follow the steps in the [create ssh access](#create-ssh-access) section, but instead of adding to `etcd0`'s `authorized_keys` file, add them to `master0`.

1. Once you've done this, run:

-   ```shell
-   scp root@<master0-ip-address>:/etc/kubernetes/pki/* /etc/kubernetes/pki
-   rm apiserver.crt
-   ```
+   ```shell
+   scp root@<master0-ip-address>:/etc/kubernetes/pki/* /etc/kubernetes/pki
+   rm apiserver.crt
+   ```

#### Option 2: Copy paste

@@ -489,20 +489,20 @@ Next provision and set up the worker nodes. To do this, you will need to provisi

1. 
Reconfigure kube-proxy to access kube-apiserver via the load balancer:

-   ```shell
-   kubectl get configmap -n kube-system kube-proxy -o yaml > kube-proxy-сm.yaml
-   sed -i 's#server:.*#server: https://<masterLoadBalancerFQDN>:6443#g' kube-proxy-cm.yaml
-   kubectl apply -f kube-proxy-cm.yaml --force
-   # restart all kube-proxy pods to ensure that they load the new configmap
-   kubectl delete pod -n kube-system -l k8s-app=kube-proxy
-   ```
+   ```shell
+   kubectl get configmap -n kube-system kube-proxy -o yaml > kube-proxy-cm.yaml
+   sed -i 's#server:.*#server: https://<masterLoadBalancerFQDN>:6443#g' kube-proxy-cm.yaml
+   kubectl apply -f kube-proxy-cm.yaml --force
+   # restart all kube-proxy pods to ensure that they load the new configmap
+   kubectl delete pod -n kube-system -l k8s-app=kube-proxy
+   ```

1. Reconfigure the kubelet to access kube-apiserver via the load balancer:

-   ```shell
-   sudo sed -i 's#server:.*#server: https://<masterLoadBalancerFQDN>:6443#g' /etc/kubernetes/kubelet.conf
-   sudo systemctl restart kubelet
-   ```
+   ```shell
+   sudo sed -i 's#server:.*#server: https://<masterLoadBalancerFQDN>:6443#g' /etc/kubernetes/kubelet.conf
+   sudo systemctl restart kubelet
+   ```

{% endcapture %}
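The `--initial-cluster` value appears in several places in this page (the systemd unit and the static pod manifest), and each member name in it must be unique and identical on every node, so it is easy to mistype when assembling it by hand. Below is a small sketch of generating the string once from an ordered list of peer IPs; the helper name `build_initial_cluster` is illustrative and not part of the original guide, and the member names `etcd0`, `etcd1`, ... follow the hostnames used above.

```shell
# Build the value for etcd's --initial-cluster flag from peer IPs,
# naming members etcd0, etcd1, ... in argument order.
build_initial_cluster() {
  local i=0 out='' ip
  for ip in "$@"; do
    # Prepend a comma only when $out is already non-empty.
    out="${out:+${out},}etcd${i}=https://${ip}:2380"
    i=$((i + 1))
  done
  printf '%s\n' "$out"
}

build_initial_cluster 10.0.0.10 10.0.0.11 10.0.0.12
# prints etcd0=https://10.0.0.10:2380,etcd1=https://10.0.0.11:2380,etcd2=https://10.0.0.12:2380
```

Generating the string once and substituting it into both the systemd unit and the pod manifest avoids inconsistencies such as listing the same member name twice.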