add ha docs

pull/18346/head
Predrag Rogic 2024-03-14 22:23:57 +00:00
parent 55c611b548
commit 364f049eee
2 changed files with 110 additions and 120 deletions

View File

@@ -151,12 +151,12 @@ func enableCPLB(cc config.ClusterConfig, r command.Runner, kubeadmCfg []byte) bool {
 	// ref: https://github.com/kubernetes/kubernetes/blob/f90461c43e881d320b78d48793db10c110d488d1/pkg/proxy/ipvs/README.md?plain=1#L257-L269
 	if driver.IsVM(cc.Driver) {
 		if _, err := r.RunCmd(exec.Command("sudo", "sh", "-c", "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")); err != nil {
-			klog.Errorf("unable to load ipvs kernel modules: %v", err)
+			klog.Warningf("unable to load ipvs kernel modules: %v", err)
 			return false
 		}
 		// for non-vm driver, only try to check if required ipvs kernel modules are already loaded
 	} else if _, err := r.RunCmd(exec.Command("sudo", "sh", "-c", "lsmod | grep ip_vs")); err != nil {
-		klog.Errorf("giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be present: %v", err)
+		klog.Infof("giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: %v", err)
		return false
	}

View File

@@ -7,7 +7,7 @@ date: 2024-03-10
 ## Overview
-minikube implements Kubernetes highly available cluster topology using [stacked control plane and etcd nodes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology) using [kube-vip](https://kube-vip.io/) in [ARP](https://kube-vip.io/#arp) mode.
+minikube implements the Kubernetes highly available cluster topology using [stacked control plane nodes with colocated etcd nodes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology), with [kube-vip](https://kube-vip.io/) in [ARP](https://kube-vip.io/#arp) mode.
 This tutorial will show you how to start and explore a multi-control plane - HA cluster on minikube.
@@ -16,23 +16,49 @@ This tutorial will show you how to start and explore a multi-control plane - HA
 - minikube > v1.32.0
 - kubectl
+### Optional
+- ip_vs kernel modules: ip_vs, ip_vs_rr and nf_conntrack
+Rationale:
+kube-vip supports [control-plane load-balancing](https://kube-vip.io/docs/about/architecture/?query=load-balanc#control-plane-load-balancing) by distributing API requests across control-plane nodes using IPVS (IP Virtual Server) with Layer 4 (TCP-based) round-robin.
+minikube will try to automatically enable control-plane load-balancing if these ip_vs kernel modules are available, as follows (see the [drivers]({{<ref "/docs/drivers/">}}) section for more details):
+- for VM-based drivers (e.g., kvm2 or qemu): minikube will automatically try to load the ip_vs kernel modules
+- for container-based or bare-metal-based drivers (e.g., docker or "none"): minikube will only check whether the ip_vs kernel modules are already loaded, but will not try to load them automatically (to avoid unintentionally modifying the underlying host's OS/kernel), so it is up to the user to make them available, if applicable and desired (see the sketch below)
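+For illustration, a minimal sketch of loading these modules manually on a systemd-based Linux host (module and file names are examples; adjust for your distro):
+```shell
+# load the ip_vs modules that kube-vip's control-plane load-balancing relies on
+sudo modprobe ip_vs ip_vs_rr nf_conntrack
+# optionally, make them persist across reboots
+printf 'ip_vs\nip_vs_rr\nnf_conntrack\n' | sudo tee /etc/modules-load.d/ipvs.conf
+```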
 ## Caveat
-While a minikube HA cluster will continue to operate (although degraded) after loosing any one control-plane node, keep in mind that there might be some components that are attached only to the primary control-plane node, like the storage-provisioner.
+While a minikube HA cluster will continue to operate (although in degraded mode) after losing any one control-plane node, keep in mind that some components may be attached only to the primary control-plane node, like the storage-provisioner (see the check below).
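+For example, you can quickly check which node such a component is pinned to (illustrative; the storage-provisioner pod lives in kube-system):
+```shell
+kubectl -n kube-system get pods -o wide | grep storage-provisioner
+```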
 ## Tutorial
-- Start a HA cluster with the driver and container runtime of your choice:
+- optional: if you plan on using a container-based or bare-metal-based driver on top of a Linux OS, check if the ip_vs kernel modules are already loaded
 ```shell
-minikube start --ha --container-runtime=containerd --profile ha-demo
+lsmod | grep ip_vs
 ```
 ```
-😄  [ha-demo] minikube v1.32.0 on Opensuse-Tumbleweed 20240307
-✨  Automatically selected the docker driver. Other choices: kvm2, qemu2, virtualbox, ssh
+ip_vs_rr               12288  1
+ip_vs                 233472  3 ip_vs_rr
+nf_conntrack          217088  9 xt_conntrack,nf_nat,nf_conntrack_tftp,nft_ct,xt_nat,nf_nat_tftp,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
+nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
+libcrc32c              12288  5 nf_conntrack,nf_nat,btrfs,nf_tables,ip_vs
 ```
+- Start a HA cluster with the driver and container runtime of your choice (here we chose the docker driver and the containerd container runtime):
+```shell
+minikube start --ha --driver=docker --container-runtime=containerd --profile ha-demo
+```
+```
+😄  [ha-demo] minikube v1.32.0 on Opensuse-Tumbleweed 20240311
+✨  Using the docker driver based on user configuration
 📌  Using Docker driver with root privileges
 👍  Starting "ha-demo" primary control-plane node in "ha-demo" cluster
-🚜  Pulling base image v0.0.42-1708944392-18244 ...
+🚜  Pulling base image v0.0.42-1710284843-18375 ...
 🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
 📦  Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
     ▪ Generating certificates and keys ...
@@ -43,22 +69,22 @@ minikube start --ha --container-runtime=containerd --profile ha-demo
 🌟  Enabled addons: storage-provisioner, default-storageclass
 👍  Starting "ha-demo-m02" control-plane node in "ha-demo" cluster
-🚜  Pulling base image v0.0.42-1708944392-18244 ...
+🚜  Pulling base image v0.0.42-1710284843-18375 ...
 🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
 🌐  Found network options:
-    ▪ NO_PROXY=192.168.67.2
+    ▪ NO_PROXY=192.168.49.2
 📦  Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
-    ▪ env NO_PROXY=192.168.67.2
+    ▪ env NO_PROXY=192.168.49.2
 🔎  Verifying Kubernetes components...
 👍  Starting "ha-demo-m03" control-plane node in "ha-demo" cluster
-🚜  Pulling base image v0.0.42-1708944392-18244 ...
+🚜  Pulling base image v0.0.42-1710284843-18375 ...
 🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
 🌐  Found network options:
-    ▪ NO_PROXY=192.168.67.2,192.168.67.3
+    ▪ NO_PROXY=192.168.49.2,192.168.49.3
 📦  Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
-    ▪ env NO_PROXY=192.168.67.2
-    ▪ env NO_PROXY=192.168.67.2,192.168.67.3
+    ▪ env NO_PROXY=192.168.49.2
+    ▪ env NO_PROXY=192.168.49.2,192.168.49.3
 🔎  Verifying Kubernetes components...
 🏄  Done! kubectl is now configured to use "ha-demo" cluster and "default" namespace by default
 ```
@@ -70,9 +96,9 @@ kubectl get nodes -owide
 ```
 ```
 NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
-ha-demo       Ready    control-plane   3m27s   v1.28.4   192.168.67.2   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m02   Ready    control-plane   3m9s    v1.28.4   192.168.67.3   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m03   Ready    control-plane   2m38s   v1.28.4   192.168.67.4   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo       Ready    control-plane   4m21s   v1.28.4   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m02   Ready    control-plane   4m      v1.28.4   192.168.49.3   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m03   Ready    control-plane   3m37s   v1.28.4   192.168.49.4   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
 ```
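+Since kubectl reaches the cluster through the load-balanced virtual IP (see the kubeconfig further below), a quick overall control-plane health probe is the readyz endpoint (an illustrative check):
+```shell
+kubectl get --raw='/readyz?verbose'
+```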
 - Check the status of your HA cluster:
@@ -84,7 +110,7 @@ minikube profile list
 |---------|-----------|------------|----------------|------|---------|--------|-------|--------|
 | Profile | VM Driver | Runtime    |       IP       | Port | Version | Status | Nodes | Active |
 |---------|-----------|------------|----------------|------|---------|--------|-------|--------|
-| ha-demo | docker    | containerd | 192.168.67.254 | 8443 | v1.28.4 | HAppy  |     3 |        |
+| ha-demo | docker    | containerd | 192.168.49.254 | 8443 | v1.28.4 | HAppy  |     3 |        |
 |---------|-----------|------------|----------------|------|---------|--------|-------|--------|
 ```
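+Note that the IP shown above is the load-balanced virtual IP (VIP), not the address of any single node; you can probe it directly, as /version is typically readable anonymously on kubeadm-based clusters (illustrative check; -k skips TLS verification):
+```shell
+curl -k https://192.168.49.254:8443/version
+```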
@@ -128,18 +154,18 @@ clusters:
     certificate-authority: /home/prezha/.minikube/ca.crt
     extensions:
     - extension:
-        last-update: Sat, 09 Mar 2024 23:13:20 GMT
+        last-update: Thu, 14 Mar 2024 21:38:07 GMT
         provider: minikube.sigs.k8s.io
         version: v1.32.0
       name: cluster_info
-    server: https://192.168.67.254:8443
+    server: https://192.168.49.254:8443
   name: ha-demo
 contexts:
 - context:
     cluster: ha-demo
     extensions:
     - extension:
-        last-update: Sat, 09 Mar 2024 23:13:20 GMT
+        last-update: Thu, 14 Mar 2024 21:38:07 GMT
         provider: minikube.sigs.k8s.io
         version: v1.32.0
       name: context_info
@@ -159,54 +185,51 @@ users:
 - Overview of the current leader and follower API servers
 ```shell
-minikube ssh -p ha-demo -- 'sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig logs -n kube-system pod/kube-vip-minikube-ha'
+minikube ssh -p ha-demo -- 'sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig logs -n kube-system pod/kube-vip-ha-demo'
 ```
 ```
-time="2024-03-09T23:13:46Z" level=info msg="Starting kube-vip.io [v0.7.1]"
-time="2024-03-09T23:13:46Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[false]"
-time="2024-03-09T23:13:46Z" level=info msg="prometheus HTTP server started"
-time="2024-03-09T23:13:46Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
-time="2024-03-09T23:13:46Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [ha-demo]"
-I0309 23:13:46.190758       1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
-E0309 23:13:49.522895       1 leaderelection.go:332] error retrieving resource lock kube-system/plndr-cp-lock: etcdserver: leader changed
-I0309 23:13:50.639076       1 leaderelection.go:260] successfully acquired lease kube-system/plndr-cp-lock
-time="2024-03-09T23:13:50Z" level=info msg="Node [ha-demo] is assuming leadership of the cluster"
-time="2024-03-09T23:13:50Z" level=info msg="Starting IPVS LoadBalancer"
-time="2024-03-09T23:13:50Z" level=info msg="IPVS Loadbalancer enabled for 1.2.1"
-time="2024-03-09T23:13:50Z" level=info msg="Gratuitous Arp broadcast will repeat every 3 seconds for [192.168.67.254]"
-time="2024-03-09T23:13:50Z" level=info msg="Kube-Vip is watching nodes for control-plane labels"
-time="2024-03-09T23:13:50Z" level=info msg="Added backend for [192.168.67.254:8443] on [192.168.67.3:8443]"
-time="2024-03-09T23:14:07Z" level=info msg="Added backend for [192.168.67.254:8443] on [192.168.67.4:8443]"
+time="2024-03-14T21:38:34Z" level=info msg="Starting kube-vip.io [v0.7.1]"
+time="2024-03-14T21:38:34Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[false]"
+time="2024-03-14T21:38:34Z" level=info msg="prometheus HTTP server started"
+time="2024-03-14T21:38:34Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
+time="2024-03-14T21:38:34Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [ha-demo]"
+I0314 21:38:34.042420       1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
+I0314 21:38:34.069509       1 leaderelection.go:260] successfully acquired lease kube-system/plndr-cp-lock
+time="2024-03-14T21:38:34Z" level=info msg="Node [ha-demo] is assuming leadership of the cluster"
+time="2024-03-14T21:38:34Z" level=info msg="Starting IPVS LoadBalancer"
+time="2024-03-14T21:38:34Z" level=info msg="IPVS Loadbalancer enabled for 1.2.1"
+time="2024-03-14T21:38:34Z" level=info msg="Gratuitous Arp broadcast will repeat every 3 seconds for [192.168.49.254]"
+time="2024-03-14T21:38:34Z" level=info msg="Kube-Vip is watching nodes for control-plane labels"
+time="2024-03-14T21:38:34Z" level=info msg="Added backend for [192.168.49.254:8443] on [192.168.49.3:8443]"
+time="2024-03-14T21:38:48Z" level=info msg="Added backend for [192.168.49.254:8443] on [192.168.49.4:8443]"
 ```
 ```shell
 minikube ssh -p ha-demo -- 'sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig logs -n kube-system pod/kube-vip-ha-demo-m02'
 ```
 ```
-time="2024-03-09T23:13:36Z" level=info msg="Starting kube-vip.io [v0.7.1]"
-time="2024-03-09T23:13:36Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[false]"
-time="2024-03-09T23:13:36Z" level=info msg="prometheus HTTP server started"
-time="2024-03-09T23:13:36Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
-time="2024-03-09T23:13:36Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [ha-demo-m02]"
-I0309 23:13:36.656894       1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
-E0309 23:13:46.663237       1 leaderelection.go:332] error retrieving resource lock kube-system/plndr-cp-lock: Get "https://kubernetes:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
-time="2024-03-09T23:13:50Z" level=info msg="Node [ha-demo] is assuming leadership of the cluster"
+time="2024-03-14T21:38:25Z" level=info msg="Starting kube-vip.io [v0.7.1]"
+time="2024-03-14T21:38:25Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[false]"
+time="2024-03-14T21:38:25Z" level=info msg="prometheus HTTP server started"
+time="2024-03-14T21:38:25Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
+time="2024-03-14T21:38:25Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [ha-demo-m02]"
+I0314 21:38:25.990817       1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
+time="2024-03-14T21:38:34Z" level=info msg="Node [ha-demo] is assuming leadership of the cluster"
 ```
 ```shell
 minikube ssh -p ha-demo -- 'sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig logs -n kube-system pod/kube-vip-ha-demo-m03'
 ```
 ```
-time="2024-03-09T23:14:06Z" level=info msg="Starting kube-vip.io [v0.7.1]"
-time="2024-03-09T23:14:06Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[false]"
-time="2024-03-09T23:14:06Z" level=info msg="prometheus HTTP server started"
-time="2024-03-09T23:14:06Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
-time="2024-03-09T23:14:06Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [ha-demo-m03]"
-I0309 23:14:06.972839       1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
-time="2024-03-09T23:14:08Z" level=info msg="Node [ha-demo] is assuming leadership of the cluster"
+time="2024-03-14T21:38:48Z" level=info msg="Starting kube-vip.io [v0.7.1]"
+time="2024-03-14T21:38:48Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[false]"
+time="2024-03-14T21:38:48Z" level=info msg="prometheus HTTP server started"
+time="2024-03-14T21:38:48Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
+time="2024-03-14T21:38:48Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [ha-demo-m03]"
+I0314 21:38:48.856781       1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
+time="2024-03-14T21:38:48Z" level=info msg="Node [ha-demo] is assuming leadership of the cluster"
 ```
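+To see which control-plane node currently holds the virtual IP, you can inspect the node's addresses (an illustrative check; eth0 is the cluster-facing interface with the docker driver):
+```shell
+minikube ssh -p ha-demo -- 'ip -br addr show eth0'
+```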
 - Overview of multi-etcd instances
 ```shell
@@ -216,44 +239,12 @@ minikube ssh -p ha-demo -- 'sudo /var/lib/minikube/binaries/v1.28.4/kubectl --ku
 +------------------+---------+-------------+---------------------------+---------------------------+------------+
 |        ID        | STATUS  |    NAME     |        PEER ADDRS         |       CLIENT ADDRS        | IS LEARNER |
 +------------------+---------+-------------+---------------------------+---------------------------+------------+
-|  798ac3f6077b24a | started | ha-demo-m02 | https://192.168.67.3:2380 | https://192.168.67.3:2379 |      false |
-| 7930f2d13748db77 | started | ha-demo-m03 | https://192.168.67.4:2380 | https://192.168.67.4:2379 |      false |
-| 8688e899f7831fc7 | started | ha-demo     | https://192.168.67.2:2380 | https://192.168.67.2:2379 |      false |
+| 3c464e4a52eb93c5 | started | ha-demo-m03 | https://192.168.49.4:2380 | https://192.168.49.4:2379 |      false |
+| 59bde6852118b2a5 | started | ha-demo-m02 | https://192.168.49.3:2380 | https://192.168.49.3:2379 |      false |
+| aec36adc501070cc | started | ha-demo     | https://192.168.49.2:2380 | https://192.168.49.2:2379 |      false |
 +------------------+---------+-------------+---------------------------+---------------------------+------------+
 ```
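+Beyond listing members, you could also verify endpoint health with etcdctl from inside the etcd pod (a sketch; the pod name and cert paths assume minikube defaults and may differ):
+```shell
+minikube ssh -p ha-demo -- 'sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system exec etcd-ha-demo -- etcdctl endpoint health --cluster --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key'
+```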
-- Load balancer configuration
-```shell
-minikube ssh -p ha-demo -- 'sudo apt update && sudo apt install ipvsadm -y'
-```
-```shell
-minikube ssh -p ha-demo -- 'sudo ipvsadm -ln'
-```
-```
-IP Virtual Server version 1.2.1 (size=4096)
-Prot LocalAddress:Port Scheduler Flags
-  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
-TCP  192.168.67.254:8443 rr
-  -> 192.168.67.2:8443            Local   1      4          23
-  -> 192.168.67.3:8443            Local   1      3          24
-  -> 192.168.67.4:8443            Local   1      1          24
-```
-- Load balancer connections
-```shell
-minikube ssh -p ha-demo -- 'sudo ipvsadm -lnc | grep -i established'
-```
-```
-TCP 14:58  ESTABLISHED 192.168.67.4:36468   192.168.67.254:8443 192.168.67.3:8443
-TCP 14:40  ESTABLISHED 192.168.67.3:54706   192.168.67.254:8443 192.168.67.3:8443
-TCP 14:58  ESTABLISHED 192.168.67.4:36464   192.168.67.254:8443 192.168.67.3:8443
-TCP 14:57  ESTABLISHED 192.168.67.3:54260   192.168.67.254:8443 192.168.67.2:8443
-TCP 14:55  ESTABLISHED 192.168.67.254:44480 192.168.67.254:8443 192.168.67.2:8443
-TCP 14:59  ESTABLISHED 192.168.67.1:35728   192.168.67.254:8443 192.168.67.4:8443
-TCP 14:58  ESTABLISHED 192.168.67.254:37856 192.168.67.254:8443 192.168.67.2:8443
-```
 - Losing a control-plane node - degrades cluster, but not a problem!
 ```shell
@@ -270,9 +261,9 @@ minikube node delete m02 -p ha-demo
 kubectl get nodes -owide
 ```
 ```
-NAME          STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
-ha-demo       Ready    control-plane   70m   v1.28.4   192.168.67.2   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m03   Ready    control-plane   69m   v1.28.4   192.168.67.4   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
+NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
+ha-demo       Ready    control-plane   7m16s   v1.28.4   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m03   Ready    control-plane   6m32s   v1.28.4   192.168.49.4   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
 ```
 ```shell
 minikube profile list
@@ -281,11 +272,10 @@ minikube profile list
 |---------|-----------|------------|----------------|------|---------|----------|-------|--------|
 | Profile | VM Driver | Runtime    |       IP       | Port | Version | Status   | Nodes | Active |
 |---------|-----------|------------|----------------|------|---------|----------|-------|--------|
-| ha-demo | docker    | containerd | 192.168.67.254 | 8443 | v1.28.4 | Degraded |     2 |        |
+| ha-demo | docker    | containerd | 192.168.49.254 | 8443 | v1.28.4 | Degraded |     2 |        |
 |---------|-----------|------------|----------------|------|---------|----------|-------|--------|
 ```
 - Add a control-plane node
 ```shell
@@ -294,7 +284,7 @@ minikube node add --control-plane -p ha-demo
 ```
 😄  Adding node m04 to cluster ha-demo as [worker control-plane]
 👍  Starting "ha-demo-m04" control-plane node in "ha-demo" cluster
-🚜  Pulling base image v0.0.42-1708944392-18244 ...
+🚜  Pulling base image v0.0.42-1710284843-18375 ...
 🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
 📦  Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
 🔎  Verifying Kubernetes components...
@@ -304,10 +294,10 @@ minikube node add --control-plane -p ha-demo
 kubectl get nodes -owide
 ```
 ```
-NAME          STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
-ha-demo       Ready    control-plane   72m   v1.28.4   192.168.67.2   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m03   Ready    control-plane   71m   v1.28.4   192.168.67.4   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m04   Ready    control-plane   28s   v1.28.4   192.168.67.5   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
+NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
+ha-demo       Ready    control-plane   8m34s   v1.28.4   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m03   Ready    control-plane   7m50s   v1.28.4   192.168.49.4   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m04   Ready    control-plane   36s     v1.28.4   192.168.49.5   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
 ```
 ```shell
 minikube profile list
@@ -316,7 +306,7 @@ minikube profile list
 |---------|-----------|------------|----------------|------|---------|--------|-------|--------|
 | Profile | VM Driver | Runtime    |       IP       | Port | Version | Status | Nodes | Active |
 |---------|-----------|------------|----------------|------|---------|--------|-------|--------|
-| ha-demo | docker    | containerd | 192.168.67.254 | 8443 | v1.28.4 | HAppy  |     3 |        |
+| ha-demo | docker    | containerd | 192.168.49.254 | 8443 | v1.28.4 | HAppy  |     3 |        |
 |---------|-----------|------------|----------------|------|---------|--------|-------|--------|
 ```
@@ -328,7 +318,7 @@ minikube node add -p ha-demo
 ```
 😄  Adding node m05 to cluster ha-demo as [worker]
 👍  Starting "ha-demo-m05" worker node in "ha-demo" cluster
-🚜  Pulling base image v0.0.42-1708944392-18244 ...
+🚜  Pulling base image v0.0.42-1710284843-18375 ...
 🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
 📦  Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
 🔎  Verifying Kubernetes components...
@@ -339,11 +329,11 @@ minikube node add -p ha-demo
 kubectl get nodes -owide
 ```
 ```
-NAME          STATUS   ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
-ha-demo       Ready    control-plane   73m    v1.28.4   192.168.67.2   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m03   Ready    control-plane   73m    v1.28.4   192.168.67.4   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m04   Ready    control-plane   110s   v1.28.4   192.168.67.5   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
-ha-demo-m05   Ready    <none>          23s    v1.28.4   192.168.67.6   <none>        Ubuntu 22.04.3 LTS   6.7.7-1-default   containerd://1.6.28
+NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
+ha-demo       Ready    control-plane   9m35s   v1.28.4   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m03   Ready    control-plane   8m51s   v1.28.4   192.168.49.4   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m04   Ready    control-plane   97s     v1.28.4   192.168.49.5   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
+ha-demo-m05   Ready    <none>          22s     v1.28.4   192.168.49.6   <none>        Ubuntu 22.04.4 LTS   6.7.7-1-default   containerd://1.6.28
 ```
 - Test by deploying a hello service, which just spits back the IP address the request was served from:
@@ -414,10 +404,10 @@ deployment "hello" successfully rolled out
 kubectl get pods -owide
 ```
 ```
-NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
-hello-7bf57d9696-2zrmx   1/1     Running   0          3m56s   10.244.0.4   ha-demo       <none>           <none>
-hello-7bf57d9696-hd75l   1/1     Running   0          3m56s   10.244.1.2   ha-demo-m04   <none>           <none>
-hello-7bf57d9696-vn6jd   1/1     Running   0          3m56s   10.244.4.2   ha-demo-m05   <none>           <none>
+NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
+hello-7bf57d9696-64v6m   1/1     Running   0          18s   10.244.3.2   ha-demo-m04   <none>           <none>
+hello-7bf57d9696-7gtlk   1/1     Running   0          18s   10.244.2.2   ha-demo-m03   <none>           <none>
+hello-7bf57d9696-99qsw   1/1     Running   0          18s   10.244.0.4   ha-demo       <none>           <none>
 ```
 - Look at our service, to know what URL to hit
@@ -429,7 +419,7 @@ minikube service list -p ha-demo
 |-------------|------------|--------------|---------------------------|
 | NAMESPACE   | NAME       | TARGET PORT  | URL                       |
 |-------------|------------|--------------|---------------------------|
-| default     | hello      |           80 | http://192.168.67.2:31000 |
+| default     | hello      |           80 | http://192.168.49.2:31000 |
 | default     | kubernetes | No node port |                           |
 | kube-system | kube-dns   | No node port |                           |
 |-------------|------------|--------------|---------------------------|
@@ -438,17 +428,17 @@ minikube service list -p ha-demo
 - Let's hit the URL a few times and see what comes back
 ```shell
-curl http://192.168.67.2:31000
+curl http://192.168.49.2:31000
 ```
 ```
-Hello from hello-7bf57d9696-vn6jd (10.244.4.2)
+Hello from hello-7bf57d9696-99qsw (10.244.0.4)
-curl http://192.168.67.2:31000
-Hello from hello-7bf57d9696-2zrmx (10.244.0.4)
+curl http://192.168.49.2:31000
+Hello from hello-7bf57d9696-7gtlk (10.244.2.2)
-curl http://192.168.67.2:31000
-Hello from hello-7bf57d9696-hd75l (10.244.1.2)
+curl http://192.168.49.2:31000
+Hello from hello-7bf57d9696-7gtlk (10.244.2.2)
-curl http://192.168.67.2:31000
-Hello from hello-7bf57d9696-vn6jd (10.244.4.2)
+curl http://192.168.49.2:31000
+Hello from hello-7bf57d9696-64v6m (10.244.3.2)
 ```
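+To exercise the round-robin a bit more, you could loop the request (an illustrative one-liner; same NodePort URL as above):
+```shell
+for i in $(seq 1 10); do curl -s http://192.168.49.2:31000; done
+```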