- Show how to deploy the generic-device-plugin to allow multiple pods to
use the host GPU.
- Show how to deploy multiple large language models using GPU
acceleration.
- Show how to deploy and configure Open WebUI to interact with the
models.
We use constants.OldestKubernetesVersion for testing the oldest version
and for limiting --kubernetes-version when starting clusters. Our
tradition is to test 6 releases back from the current version, but we
were testing 14 releases back (1.20.0).
To upgrade containerd to the latest version (v2.1.4) we need to move to
a newer release. Upgrade constants.OldestKubernetesVersion to 1.28.0,
which seems to pass all tests.
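For reference, a minimal sketch of the bump in the constants package;
the file path, comment, and exact value format are illustrative, not
the actual source:

    // pkg/minikube/constants (sketch)
    const (
        // OldestKubernetesVersion is the oldest Kubernetes version we
        // test against; it must be compatible with containerd v2.
        OldestKubernetesVersion = "v1.28.0"
    )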
legacyVersion() used in version_upgrade_test.go was 1.26.0. The comment
in the file mentions that this should be a release from the last 6
months. We do see failures in the relevant tests
(TestRunningBinaryUpgrade) in many builds, so I bumped it as well, to
1.32.0 (2 releases back from current).
In preload_test.go we tested --kubernetes-version=1.24.4, which is not
compatible with containerd v2. Use legacyVersion() instead so we don't
need to maintain another version.
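A minimal sketch of the helper, assuming it simply returns a pinned
version string; the package name and real function in the integration
tests may differ:

    package integration // sketch

    // legacyVersion returns a Kubernetes version a few releases back
    // from current; keep it to a release from the last 6 months.
    func legacyVersion() string {
        return "v1.32.0"
    }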
We had many examples of --kubernetes-version in the docs using older
versions which are no longer supported. Replace all examples with the
current version to minimize future maintenance. We need to automate this
later so that updating the version in minikube will also update the
examples.
With this change we have 2 places to update kubernetes versions:
- constants.*KubernetesVersion
- legacyVersion()
Modified the Stackdriver link because the existing one no longer
exists. The exact replacement on the opentelemetry site is not clear
(exporters exist for Rust, .NET, etc., but it is not obvious for this
specific case, where the information is no longer present), but the
added link references the Google Cloud Operations Suite, which
"replaced" Stackdriver.
* remove unneeded go.mod replaces
* add a make target gomodtidy
* update docs on using gomodtidy
* add automation to run go mod tidy on every push
* update contributing docs to be more helpful
* install gopogh if it is not installed in html_report
* address PR reviews
* update docs headings
* krunkit: Add krunkit driver
krunkit is a tool to launch configurable virtual machines using the
libkrun platform, optimized for GPU accelerated virtual machines and AI
workloads on Apple silicon.
It is mostly compatible with vfkit; the driver is a simplified copy of
the vfkit driver. Unlike vfkit, krunkit is available only on Apple
silicon.
Changes compared to the vfkit driver:
- krunkit requires a unix socket for networking, so we must use
vmnet-helper.
- krunkit does not support HardStop, so we kill it using SIGKILL (see
the sketch after this list).
- We must enable vmnet offloading, required for krunkit.
- The code was simplified since vmnet-helper is always used.
- Code was cleaned up to use .ResolveStorePath().
- Unused Upgrade() function was removed.
- Types and functions that should not be public were made private.
We require krunkit 0.2.2, supporting --restful-uri=unix://.
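A hedged sketch of the SIGKILL fallback mentioned above; the Driver
type and pid handling are illustrative, not the actual driver code:

    package krunkit // sketch

    import (
        "os"
        "syscall"
    )

    // Driver holds the pid of the krunkit process (illustrative).
    type Driver struct{ Pid int }

    // Kill stops krunkit with SIGKILL, since krunkit does not support
    // vfkit's HardStop.
    func (d *Driver) Kill() error {
        proc, err := os.FindProcess(d.Pid)
        if err != nil {
            return err
        }
        return proc.Signal(syscall.SIGKILL)
    }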
* reason: Make vmnet-helper error driver agnostic
Previously it was used only for vfkit, so we suggested falling back to
the `nat` network. This advice is not relevant to krunkit or to qemu
(which can also use vmnet-helper).
Change the error to recommend installing vmnet-helper, as sketched
below. We need to think about how we can recommend other networks for
vfkit and qemu. Another solution is to create an error for every
driver+network combination, but this seems hard to manage.
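A minimal sketch of a driver-agnostic error, assuming a plain wrapped
error; the function name, message, and error plumbing in minikube are
assumptions:

    package vmnet // sketch

    import "fmt"

    // wrapNotInstalled is illustrative; it drops the vfkit-specific
    // `nat` fallback advice and recommends installing vmnet-helper.
    func wrapNotInstalled(err error) error {
        return fmt.Errorf("vmnet-helper is not installed or not "+
            "configured: %w; install vmnet-helper to use this network", err)
    }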
* hack: Add krunkit integration test
This is the same way that we test vfkit. This test does not run in the
CI.
Issues:
- Need to install and configure vmnet-helper (requires root).
* site: Add krunkit driver documentation
* iso: Extract buildroot target
Before we can build the iso, we need to clone and configure buildroot.
This is required to run iso-menuconfig-{arch}.
* iso: Extract iso-prepare-% target
This target prepares for building an iso or running menuconfig. With
this change we can run the {iso,linux}-menuconfig-{x86_64,aarch64}
targets without building the entire iso.
* iso: Fix linux-menuconfig-% target
Previously it worked only after building the entire iso. Now we can
make this target without building the iso or running iso-menuconfig.
On the first run this downloads and builds a lot of packages required
to run the linux-menuconfig target, but it is much shorter than
building the entire iso.
* iso: Simplify linux-menuconfig-%
Previously we copied the defconfig manually to the board config file.
This can be done using the special linux-update-defconfig target.
With this change we don't need to keep KERNEL_VERSION in the Makefile,
making future upgrades easier.
* iso: Update buildroot configuration for aarch64
Running `make iso-menuconfig-aarch64` without making any changes
updates the buildroot config. It seems that there were manual changes
in the config which are overwritten when running iso-menuconfig. Remove
the manual changes to make it easier to edit the configuration with
kconfig.
* iso: Update buildroot configuration for x86_64
Same as the aarch64 change to make it easier to configure using kconfig.
* iso: Update linux configuration for aarch64
Same as iso-menuconfig-aarch64: run `make linux-menuconfig-aarch64` and
exit without any change to update the config. This seems to change the
order, removing manual changes from the config. This will make it easier
to configure using kconfig in the future.
* iso: Update linux configuration for x86_64
Same as the aarch64 changes to make it easier to configure using kconfig
in the future.
* iso: Disable all platforms for aarch64
We run on the qemu virt machine or Apple Virtualization, so we don't
need support for all kinds of embedded Arm boards. This reduces the
arm64 iso size from 410 MiB to 392 MiB.
* Updating ISO to v1.36.0-1751221996-20991
* Updating ISO to v1.36.0-1751315722-20991
---------
Co-authored-by: minikube-bot <minikube-bot@google.com>
* vfkit: Log serial console to file
To make debugging easier, add a virtio-serial device that logs the
serial console to a file:
~/.minikube/machines/NAME/serial.log
To enable logging, we need to enable the console in the kernel command
line, since we still use direct kernel boot.
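A hedged sketch of the argument the driver might add; the logFilePath
device option comes from vfkit's virtio-serial documentation, while the
helper name and console parameter (console=hvc0) are assumptions:

    package vfkit // sketch

    // serialLogArgs returns vfkit arguments that log the serial
    // console to logPath using the virtio-serial logFilePath option.
    // The kernel command line must also enable the console (assumed
    // console=hvc0) since we still use direct kernel boot.
    func serialLogArgs(logPath string) []string {
        return []string{"--device", "virtio-serial,logFilePath=" + logPath}
    }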
Example log:
% cat /Users/nir/.minikube/machines/vfkit/serial.log
[ 0.896094] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 0.897186] loop: module loaded
[ 0.897670] virtio_blk virtio2: [vda] 840488 512-byte logical blocks (430 MB/410 MiB)
[ 0.897733] vda: detected capacity change from 0 to 430329856
[ 0.898460] virtio_blk virtio3: [vdb] 40960000 512-byte logical blocks (21.0 GB/19.5 GiB)
[ 0.898533] vdb: detected capacity change from 0 to 20971520000
...
[ 1.794714] systemd[1]: Detected virtualization vm-other.
[ 1.794752] systemd[1]: Detected architecture arm64.
Welcome to Buildroot 2025.02!
[ 1.794944] systemd[1]: Hostname set to <minikube>.
[ 1.795011] systemd[1]: Initializing machine ID from random generator.
...
[ OK ] Started Container Runtime Interface for OCI (CRI-O).
[ OK ] Reached target Multi-User System.
Welcome to minikube
vfkit login: [ 6.681578] systemd-ssh-generator[630]: Binding SSH to AF_UNIX socket /run/ssh-unix-local/socket.
* vfkit: Use EFI bootloader
With the fixed iso, we can simplify the driver using the EFI bootloader
option[1] instead of the legacy and deprecated --kernel, --kernel-cmdline,
and --initrd options[2].
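For illustration, a sketch of the simplified flag construction using
the EFI bootloader option documented in [1]; the function name and
variable-store path handling are assumptions:

    package vfkit // sketch

    // efiBootloaderArgs returns the vfkit EFI bootloader arguments,
    // replacing the deprecated --kernel, --kernel-cmdline, and
    // --initrd options. vfkit creates the variable store if needed.
    func efiBootloaderArgs(storePath string) []string {
        return []string{
            "--bootloader", "efi,variable-store=" + storePath + ",create",
        }
    }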
Example run:
% minikube start -p vfkit --driver vfkit --container-runtime containerd --network vmnet-shared
😄 [vfkit] minikube v1.36.0 on Darwin 15.5 (arm64)
✨ Using the vfkit driver based on user configuration
👍 Starting "vfkit" primary control-plane node in "vfkit" cluster
🔥 Creating vfkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
📦 Preparing Kubernetes v1.33.1 on containerd 1.7.23 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "vfkit" cluster and "default" namespace by default
Comparing direct kernel boot and --bootloader efi shows that EFI is a
little bit faster and boot time is more consistent.
% hyperfine -r 10 -C "minikube delete" \
"vfkit-efi/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes" \
"vfkit-direct/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes"
Benchmark 1: vfkit-efi/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes
Time (mean ± σ): 10.205 s ± 0.656 s [User: 0.381 s, System: 0.266 s]
Range (min … max): 9.106 s … 11.254 s 10 runs
Benchmark 2: vfkit-direct/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes
Time (mean ± σ): 10.933 s ± 1.616 s [User: 0.402 s, System: 0.406 s]
Range (min … max): 9.155 s … 14.168 s 10 runs
Summary
vfkit-efi/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes ran
1.07 ± 0.17 times faster than vfkit-direct/out/minikube start --driver vfkit --network vmnet-shared --container-runtime containerd --no-kubernetes
[1] https://github.com/crc-org/vfkit/blob/main/doc/usage.md#efi-bootloader
[2] https://github.com/crc-org/vfkit/blob/main/doc/usage.md#deprecated-options
* docs: Update vfkit driver documentation
- Separate vfkit requirements and vmnet-shared requirements
- Update minimal macOS version required for --bootloader efi
- Simplify vfkit upgrade; it is available in brew now
Testing shows that we need these changes:
- x86_64 cpu
- Ubuntu 22.04
- docker is required even if building without docker
- python2 instead of python
- genisoimage (for mkisofs)
- Installing Go manually (Ubuntu 22.04 has only Go 1.18)
- Target should be minikube-iso-aarch64 or minikube-iso-x86_64. Using
arm64 and amd64 fails.
I also cleaned up the formatting a little bit to make it easier to
maintain (one package per line).
Tested building:
- minikube-iso-aarch64
- minikube-iso-x86_64
I did not test the built iso images.