Merge pull request #47888 from neolit123/1.32-add-linux-windows-task-pages

kubeadm: add task pages for adding Linux and Windows worker nodes

commit 4fc01b48d0
@@ -166,8 +166,9 @@ The control-plane node is the machine where the control plane components run, in
 communicates with).

 1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
-   to high availability you should specify the `--control-plane-endpoint` to set the shared endpoint
-   for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load-balancer.
+   to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+   you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
+   Such an endpoint can be either a DNS name or an IP address of a load-balancer.
 1. Choose a Pod network add-on, and verify whether it requires any arguments to
    be passed to `kubeadm init`. Depending on which
    third-party provider you choose, you might need to set the `--pod-network-cidr` to
@@ -343,6 +344,11 @@ control-plane node or a node that has the kubeconfig credentials:
 kubectl apply -f <add-on.yaml>
 ```

+{{< note >}}
+Only a few CNI plugins support Windows. More details and setup instructions can be found
+in [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/#network-config).
+{{< /note >}}
+
 You can install only one Pod network per cluster.

 Once a Pod network has been installed, you can confirm that it is working by
@@ -391,90 +397,20 @@ from the control plane node, which excludes it from the list of backend servers:
 kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-
 ```

-### Joining your nodes {#join-nodes}
+### Adding more control plane nodes

-The nodes are where your workloads (containers and Pods, etc) run. To add new nodes to your cluster do the following for each machine:
+See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+for steps on creating a high availability kubeadm cluster by adding more control plane
+nodes.

-* SSH to the machine
-* Become root (e.g. `sudo su -`)
-* [Install a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime)
-  if needed
-* Run the command that was output by `kubeadm init`. For example:
+### Adding worker nodes {#join-nodes}

-```bash
-kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
-```
+The worker nodes are where your workloads run.

-If you do not have the token, you can get it by running the following command on the control-plane node:
+The following pages show how to add Linux and Windows worker nodes to the cluster by using
+the `kubeadm join` command:

-```bash
-kubeadm token list
-```
-
-The output is similar to this:
-
-```console
-TOKEN                    TTL  EXPIRES                USAGES           DESCRIPTION            EXTRA GROUPS
-8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z   authentication,  The default bootstrap  system:
-                                                     signing          token generated by     bootstrappers:
-                                                                      'kubeadm init'.        kubeadm:
-                                                                                             default-node-token
-```
-
-By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
-you can create a new token by running the following command on the control-plane node:
-
-```bash
-kubeadm token create
-```
-
-The output is similar to this:
-
-```console
-5didvk.d09sbcov8ph2amjw
-```
-
-If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the
-following command chain on the control-plane node:
-
-```bash
-openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
-openssl dgst -sha256 -hex | sed 's/^.* //'
-```
-
-The output is similar to:
-
-```console
-8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
-```
-
-{{< note >}}
-To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, IPv6 address must be enclosed in square brackets, for example: `[2001:db8::101]:2073`.
-{{< /note >}}
-
-The output should look something like:
-
-```
-[preflight] Running pre-flight checks
-
-... (log output of join workflow) ...
-
-Node join complete:
-* Certificate signing request sent to control-plane and response
-  received.
-* Kubelet informed of new secure connection details.
-
-Run 'kubectl get nodes' on control-plane to see this machine join.
-```
-
-A few seconds later, you should notice this node in the output from `kubectl get
-nodes` when run on the control-plane node.
-
-{{< note >}}
-As the cluster nodes are usually initialized sequentially, the CoreDNS Pods are likely to all run
-on the first control-plane node. To provide higher availability, please rebalance the CoreDNS Pods
-with `kubectl -n kube-system rollout restart deployment coredns` after at least one new node is joined.
-{{< /note >}}
+* [Adding Linux worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/)
+* [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)

 ### (Optional) Controlling your cluster from machines other than the control-plane node
@@ -22,7 +22,7 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too
 * A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions
   based on Debian and Red Hat, and those distributions without a package manager.
 * 2 GB or more of RAM per machine (any less will leave little room for your apps).
-* 2 CPUs or more.
+* 2 CPUs or more for control plane machines.
 * Full network connectivity between all machines in the cluster (public or private network is fine).
 * Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
 * Certain ports are open on your machines. See [here](#check-required-ports) for more details.
@@ -0,0 +1,109 @@
---
title: Adding Linux worker nodes
content_type: task
weight: 50
---

<!-- overview -->

This page explains how to add Linux worker nodes to a kubeadm cluster.

## {{% heading "prerequisites" %}}

* Each joining worker node has installed the required components from
  [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/), such as
  kubeadm, the kubelet, and a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}}.
* A running kubeadm cluster created by `kubeadm init` and following the steps
  in the document [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
* You need superuser access to the node.

<!-- steps -->

## Adding Linux worker nodes

To add new Linux worker nodes to your cluster, do the following for each machine:

1. Connect to the machine by using SSH or another method.
1. Run the command that was output by `kubeadm init`. For example:

   ```bash
   sudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
   ```

### Additional information for kubeadm join

{{< note >}}
To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, the IPv6 address must be enclosed in square brackets, for example: `[2001:db8::101]:2073`.
{{< /note >}}
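As a quick sketch of the bracketed form, using the documentation example address and port from the note above:

```bash
#!/bin/sh
# Build a join endpoint from an IPv6 address and a port.
# kubeadm expects the IPv6 address wrapped in square brackets.
host="2001:db8::101"   # example address from the note above
port="2073"
endpoint="[${host}]:${port}"
echo "$endpoint"   # → [2001:db8::101]:2073
```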
If you do not have the token, you can get it by running the following command on the control plane node:

```bash
sudo kubeadm token list
```

The output is similar to this:

```console
TOKEN                    TTL  EXPIRES                USAGES           DESCRIPTION            EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z   authentication,  The default bootstrap  system:
                                                     signing          token generated by     bootstrappers:
                                                                      'kubeadm init'.        kubeadm:
                                                                                             default-node-token
```
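The token value is the first column of the first data row. As a sketch of pulling it out in shell (the sample output from above stands in here for the live command, so this runs without a cluster):

```bash
#!/bin/sh
# Extract the token from `kubeadm token list`-style output.
# `sample` holds the first two lines of the example output above.
sample='TOKEN                    TTL  EXPIRES                USAGES           DESCRIPTION            EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z   authentication,  The default bootstrap  system:'
# Skip the header row (NR==1) and print the first field of the token row.
token="$(echo "$sample" | awk 'NR==2 {print $1}')"
echo "$token"   # → 8ewj1p.9r9hcjoqgajrj4gi
```

On a real control plane node, `sudo kubeadm token list | awk 'NR==2 {print $1}'` would apply the same extraction to live output.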
By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the
current token has expired, you can create a new token by running the following command on the
control plane node:

```bash
sudo kubeadm token create
```

The output is similar to this:

```console
5didvk.d09sbcov8ph2amjw
```

If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the
following commands on the control plane node:

```bash
sudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is similar to:

```console
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
```
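To see the pipeline work end to end without touching a real cluster, the same commands can be run against a throwaway self-signed certificate. The generated certificate below is a stand-in for `/etc/kubernetes/pki/ca.crt` (which only root can read); everything after it is the exact pipeline from above:

```bash
#!/bin/sh
# Generate a throwaway self-signed RSA certificate to stand in for the cluster CA.
tmpdir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Same pipeline as above: extract the public key, convert it to DER,
# take its SHA-256 digest, and strip the "SHA2-256(stdin)= " prefix.
hash="$(cat "$tmpdir/ca.crt" | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //')"

echo "$hash"   # 64 lowercase hex characters
rm -rf "$tmpdir"
```

The value printed is what you would pass as `--discovery-token-ca-cert-hash sha256:<hash>` if this were the real cluster CA.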
The output of the `kubeadm join` command should look something like:

```
[preflight] Running pre-flight checks

... (log output of join workflow) ...

Node join complete:
* Certificate signing request sent to control-plane and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on control-plane to see this machine join.
```

A few seconds later, you should notice this node in the output from `kubectl get nodes`
(for example, run `kubectl` on a control plane node).

{{< note >}}
As the cluster nodes are usually initialized sequentially, the CoreDNS Pods are likely to all run
on the first control plane node. To provide higher availability, rebalance the CoreDNS Pods
with `kubectl -n kube-system rollout restart deployment coredns` after at least one new node has joined.
{{< /note >}}

## {{% heading "whatsnext" %}}

* See how to [add Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/).
@@ -0,0 +1,163 @@
---
title: Adding Windows worker nodes
content_type: task
weight: 50
---

<!-- overview -->

This page explains how to add Windows worker nodes to a kubeadm cluster.

## {{% heading "prerequisites" %}}

* A running [Windows Server 2022](https://www.microsoft.com/cloud-platform/windows-server-pricing)
  (or higher) instance with administrative access.
* A running kubeadm cluster created by `kubeadm init` and following the steps
  in the document [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).

<!-- steps -->

## Adding Windows worker nodes

{{< note >}}
To facilitate the addition of Windows worker nodes to a cluster, PowerShell scripts from the
https://sigs.k8s.io/sig-windows-tools repository are used.
{{< /note >}}

Do the following for each machine:

1. Open a PowerShell session on the machine.
1. Make sure you are Administrator or a privileged user.

Then proceed with the steps outlined below.

### Install containerd

{{% thirdparty-content %}}

To install containerd, first run the following command:

```PowerShell
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1
```
Then run the following command, but first replace `CONTAINERD_VERSION` with a recent release
from the [containerd repository](https://github.com/containerd/containerd/releases).
The version must not have a `v` prefix. For example, use `1.7.22` instead of `v1.7.22`:

```PowerShell
.\Install-Containerd.ps1 -ContainerDVersion CONTAINERD_VERSION
```

* Adjust any other parameters for `Install-Containerd.ps1`, such as `netAdapterName`, as you need them.
* Set `skipHypervisorSupportCheck` if your machine does not support Hyper-V and cannot host Hyper-V isolated
  containers.
* If you change the `Install-Containerd.ps1` optional parameters `CNIBinPath` and/or `CNIConfigPath`, you will
  need to configure the installed Windows CNI plugin with matching values.

### Install kubeadm and kubelet

Run the following commands to install kubeadm and the kubelet:

```PowerShell
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1
.\PrepareNode.ps1 -KubernetesVersion v{{< skew currentVersion >}}
```

* Adjust the parameter `KubernetesVersion` of `PrepareNode.ps1` if needed.

### Run `kubeadm join`

Run the command that was output by `kubeadm init`. For example:

```bash
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
```
#### Additional information about kubeadm join

{{< note >}}
To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, the IPv6 address must be enclosed in square brackets, for example: `[2001:db8::101]:2073`.
{{< /note >}}

If you do not have the token, you can get it by running the following command on the control plane node:

```bash
kubeadm token list
```

The output is similar to this:

```console
TOKEN                    TTL  EXPIRES                USAGES           DESCRIPTION            EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z   authentication,  The default bootstrap  system:
                                                     signing          token generated by     bootstrappers:
                                                                      'kubeadm init'.        kubeadm:
                                                                                             default-node-token
```

By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the
current token has expired, you can create a new token by running the following command on the
control plane node:

```bash
kubeadm token create
```

The output is similar to this:

```console
5didvk.d09sbcov8ph2amjw
```

If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the
following commands on the control plane node:

```bash
sudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is similar to:

```console
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
```

The output of the `kubeadm join` command should look something like:

```
[preflight] Running pre-flight checks

... (log output of join workflow) ...

Node join complete:
* Certificate signing request sent to control-plane and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on control-plane to see this machine join.
```

A few seconds later, you should notice this node in the output from `kubectl get nodes`
(for example, run `kubectl` on a control plane node).

### Network configuration

CNI setup on clusters that mix Linux and Windows nodes requires more steps than just
running `kubectl apply` on a manifest file. Additionally, the CNI plugin running on control
plane nodes must be prepared to support the CNI plugin running on Windows worker nodes.

{{% thirdparty-content %}}

Only a few CNI plugins currently support Windows. Below you can find individual setup instructions for them:

* [Flannel](https://sigs.k8s.io/sig-windows-tools/guides/flannel.md)
* [Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/)

### Install kubectl for Windows (optional) {#install-kubectl}

See [Install and Set Up kubectl on Windows](/docs/tasks/tools/install-kubectl-windows/).

## {{% heading "whatsnext" %}}

* See how to [add Linux worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/).