* iso: Extract buildroot target
Before we can build the iso, we need to clone and configure buildroot.
This is required to run iso-menuconfig-{arch}.
* iso: Extract iso-prepare-% target
This target prepares for building an iso or running menuconfig. With this
change we can run the {iso,linux}-menuconfig-{x86_64,aarch64} targets
without building the entire iso.
* iso: Fix linux-menuconfig-% target
Previously it worked only after building the entire iso. Now we can make
this target without building the iso or running iso-menuconfig.
On the first run this downloads and builds a lot of packages required to
run the linux-menuconfig target, but it is still much faster than
building the entire iso.
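For example, on a fresh tree without a prior iso build, both of these
should now work directly:
% make iso-menuconfig-aarch64
% make linux-menuconfig-aarch64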
* iso: Simplify linux-menuconfig-%
Previously we copied the defconfig manually to the board config file.
This can be done using the special linux-update-defconfig target.
With this change we don't need to keep KERNEL_VERSION in the Makefile,
making future upgrades easier.
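For reference, this matches buildroot's standard kernel config flow (the
buildroot directory here is illustrative):
% make -C out/buildroot linux-menuconfig          # edit and save the kernel config
% make -C out/buildroot linux-update-defconfig    # write it back to the board config file
linux-update-defconfig saves the config to the file named by
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE, so no kernel-version-specific path
is needed in our Makefile.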
* iso: Update buildroot configuration for aarch64
Running `make iso-menuconfig-aarch64` and exiting without making any
changes updates the buildroot config. It seems that there were manual
changes in the config which are overwritten when running iso-menuconfig.
Remove the manual changes to make it easier to edit the configuration
with kconfig.
* iso: Update buildroot configuration for x86_64
Same as the aarch64 change to make it easier to configure using kconfig.
* iso: Update linux configuration for aarch64
Same as iso-menuconfig-aarch64: run `make linux-menuconfig-aarch64` and
exit without any changes to update the config. This seems to change the
order of options, removing manual changes from the config. This will
make it easier to configure using kconfig in the future.
* iso: Update linux configuration for x86_64
Same as the aarch64 changes to make it easier to configure using kconfig
in the future.
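The normalization can be verified by re-running menuconfig, saving
without changes, and checking that nothing moves anymore (the config
path is illustrative):
% make linux-menuconfig-aarch64
% git status deploy/iso/minikube-iso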
* iso: Disable all platforms for aarch64
We run on the qemu virt machine or Apple Virtualization, so we don't
need support for all kinds of embedded Arm boards. This reduces the arm64 iso
size from 410 MiB to 392 MiB.
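For illustration, disabling a platform shows up in the kernel config as
entries like these (the real change covers the full aarch64 platform
list; these two are examples):
# CONFIG_ARCH_BCM2835 is not set
# CONFIG_ARCH_SUNXI is not set
The qemu virt machine and Apple Virtualization expose only virtio
devices, so no vendor platform support is needed.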
* Updating ISO to v1.36.0-1751221996-20991
* Updating ISO to v1.36.0-1751315722-20991
---------
Co-authored-by: minikube-bot <minikube-bot@google.com>
The libkrun virtio-net driver enables TSO offloading and checksum
offloading by default, so we must use vmnet-helper --enable-tso and
--enable-checksum-offload with krunkit. These options do not work with
vfkit.
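Sketch of the resulting vmnet-helper invocation when used with krunkit
(all other options elided):
% vmnet-helper ... --enable-tso --enable-checksum-offload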
* vfkit: Log serial console to file
To make debugging easier, add a virtio-serial device logging the serial
console to a file:
~/.minikube/machines/NAME/serial.log
To enable logging, we need to enable the console in the kernel command
line, since we still use direct kernel boot.
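For reference, a sketch of the relevant vfkit arguments; the
virtio-serial log device matches vfkit's documented syntax, while the
hvc0 console name is an assumption here:
% vfkit ... \
    --device virtio-serial,logFilePath=$HOME/.minikube/machines/NAME/serial.log \
    --kernel-cmdline '... console=hvc0'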
Example log:
% cat /Users/nir/.minikube/machines/vfkit/serial.log
[ 0.896094] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 0.897186] loop: module loaded
[ 0.897670] virtio_blk virtio2: [vda] 840488 512-byte logical blocks (430 MB/410 MiB)
[ 0.897733] vda: detected capacity change from 0 to 430329856
[ 0.898460] virtio_blk virtio3: [vdb] 40960000 512-byte logical blocks (21.0 GB/19.5 GiB)
[ 0.898533] vdb: detected capacity change from 0 to 20971520000
...
[ 1.794714] systemd[1]: Detected virtualization vm-other.
[ 1.794752] systemd[1]: Detected architecture arm64.
Welcome to Buildroot 2025.02!
[ 1.794944] systemd[1]: Hostname set to <minikube>.
[ 1.795011] systemd[1]: Initializing machine ID from random generator.
...
[ OK ] Started Container Runtime Interface for OCI (CRI-O).
[ OK ] Reached target Multi-User System.
Welcome to minikube
vfkit login: [ 6.681578] systemd-ssh-generator[630]: Binding SSH to AF_UNIX socket /run/ssh-unix-local/socket.
* vfkit: Use EFI bootloader
With the fixed iso, we can simplify the driver by using the EFI bootloader
option[1] instead of the legacy and deprecated --kernel, --kernel-cmdline,
and --initrd options[2].
Example run:
% minikube start -p vfkit --driver vfkit --container-runtime containerd --network vmnet-shared
😄 [vfkit] minikube v1.36.0 on Darwin 15.5 (arm64)
✨ Using the vfkit driver based on user configuration
👍 Starting "vfkit" primary control-plane node in "vfkit" cluster
🔥 Creating vfkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
📦 Preparing Kubernetes v1.33.1 on containerd 1.7.23 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "vfkit" cluster and "default" namespace by default
Comparing direct kernel boot and --bootloader efi shows that EFI boot is a little bit faster and its boot time is more consistent.
% hyperfine -r 10 -C "minikube delete" \
"vfkit-efi/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes" \
"vfkit-direct/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes"
Benchmark 1: vfkit-efi/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes
Time (mean ± σ): 10.205 s ± 0.656 s [User: 0.381 s, System: 0.266 s]
Range (min … max): 9.106 s … 11.254 s 10 runs
Benchmark 2: vfkit-direct/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes
Time (mean ± σ): 10.933 s ± 1.616 s [User: 0.402 s, System: 0.406 s]
Range (min … max): 9.155 s … 14.168 s 10 runs
Summary
vfkit-efi/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes ran
1.07 ± 0.17 times faster than vfkit-direct/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes
[1] https://github.com/crc-org/vfkit/blob/main/doc/usage.md#efi-bootloader
[2] https://github.com/crc-org/vfkit/blob/main/doc/usage.md#deprecated-options
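A sketch of the simplified invocation per the vfkit usage docs [1]; the
variable store and disk paths are illustrative:
% vfkit --bootloader efi,variable-store=efi-variable-store,create \
    --device virtio-blk,path=minikube-arm64.iso ...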
* docs: Update vfkit driver documentation
- Separate vfkit requirements and vmnet-shared requirements
- Update minimal macOS version required for --bootloader efi
- Simplify vfkit upgrade; it is available in brew now
* iso: Disable grub timeout
This speeds up machine boot by about 5 seconds. The timeout may be
helpful for debugging boot issues, but we currently don't have a way to
access the serial console for debugging.
Testing shows a ~5 second speedup:
| driver     | timeout (s) | start time (s) |
|------------|-------------|----------------|
| vfkit[1]   | 5.0         | 24.01          |
| vfkit[1]   | 0.0         | 19.90          |
| qemu       | 5.0         | 29.46          |
| qemu       | 0.0         | 24.28          |
| krunkit[2] | 5.0         | 25.14          |
| krunkit[2] | 0.0         | 20.51          |
[1] Tested with #20833, booting using the iso instead of direct kernel
boot. Direct kernel boot is a little bit faster (e.g. 18.x).
[2] Tested with #20826
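The change itself is a one-liner in the iso's grub configuration (shown
as an illustrative grub.cfg fragment; the file's location in the iso
build is omitted here):
set timeout=0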
* Updating ISO to v1.36.0-1749153077-20895
---------
Co-authored-by: minikube-bot <minikube-bot@google.com>
* iso: Fix console for vfkit/krunkit
The serial console name depends on the driver. We had a setting for
qemu that does not work for vfkit and krunkit, breaking boot from the
minikube iso.
Fixed by using 2 console= options: one known to work for qemu, and
one for vfkit and krunkit. With this we can use the same iso image with
qemu, vfkit, and krunkit.
This will allow simplifying the vfkit driver. Previously we had to
extract the kernel and initrd and start them using the legacy --kernel,
--kernel-cmdline, and --initrd options.
I tested this by building the iso with this fix and running with
--iso-url.
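The resulting command line contains two console= options; the exact
device names below are assumptions (qemu's emulated serial port plus
the virtio console used by vfkit and krunkit, on arm64; x86_64 qemu
would use ttyS0):
console=ttyAMA0 console=hvc0
The kernel writes boot output to every console= device and makes the
last one /dev/console, so each driver picks up the console its virtual
hardware actually provides.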
Example run with qemu:
% minikube start -p qemu --driver qemu --container-runtime containerd \
--iso-url file://$PWD/minikube-arm64.iso
😄 [qemu] minikube v1.36.0 on Darwin 15.5 (arm64)
✨ Using the qemu2 driver based on user configuration
🌐 Automatically selected the socket_vmnet network
👍 Starting "qemu" primary control-plane node in "qemu" cluster
🔥 Creating qemu2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
📦 Preparing Kubernetes v1.33.1 on containerd 1.7.23 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "qemu" cluster and "default" namespace by default
Example run with krunkit:
% minikube start -p krunkit --driver krunkit --container-runtime containerd \
--iso-url file://$PWD/minikube-arm64.iso
😄 [krunkit] minikube v1.36.0 on Darwin 15.5 (arm64)
✨ Using the krunkit (experimental) driver based on user configuration
👍 Starting "krunkit" primary control-plane node in "krunkit" cluster
🔥 Creating krunkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
📦 Preparing Kubernetes v1.33.1 on containerd 1.7.23 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "krunkit" cluster and "default" namespace by default
* Updating ISO to v1.36.0-1749066232-20832
---------
Co-authored-by: minikube-bot <minikube-bot@google.com>
* Fix KVM driver tests timeouts
Rewrite the KVM driver waiting logic for domain start, getting the IP
address, and shutting the domain down. Add more config/state outputs to
aid future debugging.
Bump go/libvirt to v1.11002.0 and set the minimum memory required for
running all tests to 3GB to avoid some really weird system behaviour.
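Not the driver's actual code, but the shape of the new waiting logic:
poll the domain state with a deadline instead of sleeping a fixed time
(illustrated here with virsh):
% timeout 60 sh -c \
    'until [ "$(virsh -c qemu:///system domstate minikube)" = running ]; do sleep 1; done'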
* revert reduction of time limit for TestCert tests run
* set memory and debug output in TestNoKubernetes tests
* extend kvm waitForStaticIP timeout
* add console log to debug output
* Updating ISO to v1.36.0-1748823857-20852
---------
Co-authored-by: minikube-bot <minikube-bot@google.com>