The test used `go:build iso` so it was not included in the integration
tests. Change to `go:build integration` so it runs in CI.
Rename the file and the test name to make it clearer that this test is
about the iso image.
Skip the test for non-VM drivers, since this test is about the iso
image.
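For illustration, a minimal sketch of the new build tag and driver
skip; the test name, the isVMDriver helper, and the MINIKUBE_DRIVER
lookup are stand-ins, not the suite's real code:

//go:build integration

package integration

import (
    "os"
    "testing"
)

// isVMDriver is a simplified stand-in for the suite's driver detection.
func isVMDriver(driver string) bool {
    switch driver {
    case "docker", "podman", "none", "ssh":
        return false
    }
    return true
}

// TestISOImage is a placeholder for the renamed test.
func TestISOImage(t *testing.T) {
    if !isVMDriver(os.Getenv("MINIKUBE_DRIVER")) {
        t.Skip("skipping ISO image test: requires a VM driver")
    }
    // ... validate the iso image ...
}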
We may need to add a similar test or adapt this test so it can be used
also with the kicbase image.
This test will be useful to validate #21800, avoiding regressions such
as #21788.
The notify helpers now accept *run.CommandOptions and use it to check
whether we can interact with the user. Modify callers to pass options
using cmd/flags.CommandOptions().
vmnet.ValidateHelper() now accepts *run.CommandOptions and uses
options.NonInteractive to check whether interaction is allowed. Update
callers to pass options from the minikube command.
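A sketch of the new shape, assuming a NonInteractive field on the
options and stubbing the actual sudo checks:

package vmnet

import "errors"

// CommandOptions stands in for run.CommandOptions.
type CommandOptions struct {
    NonInteractive bool
}

// ValidateHelper checks that vmnet-helper can run, prompting for a
// password only when interaction is allowed.
func ValidateHelper(options *CommandOptions) error {
    if err := runWithoutPassword(); err == nil {
        return nil
    }
    if options.NonInteractive {
        return errors.New("vmnet-helper requires a password, but running non-interactive")
    }
    // Interaction is allowed: let sudo prompt for a password and retry.
    return runWithPassword()
}

// Stubs for the real checks, e.g. running "sudo -n vmnet-helper --version".
func runWithoutPassword() error { return errors.New("password required") }
func runWithPassword() error    { return nil }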
Testing non-interactive mode:
% sudo rm /etc/sudoers.d/vmnet-helper
% sudo -k
% out/minikube start -d krunkit --interactive=false
😄 minikube v1.37.0 on Darwin 26.0.1 (arm64)
✨ Using the krunkit (experimental) driver based on user configuration
🤷 Exiting due to PROVIDER_KRUNKIT_NOT_FOUND: The 'krunkit' provider was not found: exit status 1: sudo: a password is required
💡 Suggestion: Install and configure vmnet-helper
📘 Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/krunkit/
Testing interactive mode:
% out/minikube start -d krunkit
😄 minikube v1.37.0 on Darwin 26.0.1 (arm64)
💡 Unable to run vmnet-helper without a password
To configure vmnet-helper to run without a password, please check the documentation:
https://github.com/nirs/vmnet-helper/#granting-permission-to-run-vmnet-helper
Password:
✨ Using the krunkit (experimental) driver based on user configuration
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🔥 Creating krunkit VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Some drivers need the command line options, since they pass them on to
the minikube firewall package. The way to pass command line options to
a driver is via the NewDriver function, called by the registry Loader
function.
The registry Loader function is called by machine.LocalClient.Load,
which is part of the libmachine API interface; since that interface is
not part of minikube, we cannot change it. Instead we pass the options
to machine.NewAPIClient(), so the client can make them available when
Load() is called.
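A trimmed sketch of this plumbing; the types and fields are simplified
stand-ins for run.CommandOptions and the libmachine client:

package machine

// CommandOptions stands in for run.CommandOptions.
type CommandOptions struct {
    NonInteractive bool
    DownloadOnly   bool
}

// LocalClient stores the options at construction time, because the
// libmachine Load(name) signature cannot grow an options parameter.
type LocalClient struct {
    options *CommandOptions
}

func NewAPIClient(options *CommandOptions) *LocalClient {
    return &LocalClient{options: options}
}

// Load hands the stored options to the registry Loader, which passes
// them to the driver's NewDriver function.
func (c *LocalClient) Load(name string) error {
    return loadDriver(name, c.options)
}

// loadDriver is a placeholder for the registry Loader call.
func loadDriver(name string, options *CommandOptions) error { return nil }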
Some drivers need to validate vmnet-helper in the registry
StatusChecker function, taking the --interactive and --download-only
flags into account, so we pass the options to the StatusChecker
function.
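A hypothetical sketch of the StatusChecker change; State is trimmed to
a few fields and krunkitStatus is illustrative:

package registry

// CommandOptions stands in for run.CommandOptions.
type CommandOptions struct {
    NonInteractive bool
    DownloadOnly   bool
}

// State is a trimmed stand-in for the registry driver state.
type State struct {
    Installed bool
    Healthy   bool
    Error     error
}

// StatusChecker now receives the options, so a driver can skip checks
// in download-only mode and avoid prompting when non-interactive.
type StatusChecker func(options *CommandOptions) State

var _ StatusChecker = krunkitStatus

func krunkitStatus(options *CommandOptions) State {
    if options.DownloadOnly {
        // Nothing to validate when we only download images.
        return State{Installed: true, Healthy: true}
    }
    // Validate vmnet-helper, prompting only if interaction is allowed.
    return State{Installed: true, Healthy: true}
}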
This change creates the options in most commands that call
machine.NewAPIClient or a registry StatusChecker function, and passes
the options down.
This change introduces the basic infrastructure for passing command
line options from the cmd/minikube/cmd package to other packages.
The cmd/flags package provides the CommandOptions() function, which
returns run.CommandOptions initialized using viper. The package also
keeps the constants for command line option names (e.g. "interactive")
that we want to share with various packages without accessing global
state via viper.
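A sketch of what the package might look like; the run import path and
the option fields are assumptions:

package flags

import (
    "github.com/spf13/viper"

    "k8s.io/minikube/pkg/minikube/run" // assumed path for run.CommandOptions
)

// Flag name constants shared with other packages.
const (
    Interactive  = "interactive"
    DownloadOnly = "download-only"
)

// CommandOptions reads the global viper state once, at the command
// boundary, and returns a typed value that can be passed down.
func CommandOptions() *run.CommandOptions {
    return &run.CommandOptions{
        NonInteractive: !viper.GetBool(Interactive),
        DownloadOnly:   viper.GetBool(DownloadOnly),
    }
}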
To use options in driver code, include CommandOptions in the
CommonDriver struct. The options will be initialized from the command
line options when creating a driver.
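For example (a sketch; the real CommonDriver has more fields):

package drivers

import "k8s.io/minikube/pkg/minikube/run" // assumed path

// CommonDriver is embedded by the internal drivers; CommandOptions is
// filled in from the command line when the driver is created.
type CommonDriver struct {
    MachineName    string
    StorePath      string
    CommandOptions *run.CommandOptions
}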
The basic idea is to create the options in the command:
options := flags.CommandOptions()
And pass them to other packages, where code will use:
if options.NonInteractive {
Instead of:
if !viper.GetBool("interactive") {
This is type safe and allows reliable parallel testing.
In #20255 we added an option to use a configuration file instead of
interactive mode, but the change broke interactive mode. Current
minikube panics with a nil pointer dereference:
% ./out/minikube addons configure registry-creds
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x8 pc=0x1067603dc]
goroutine 1 [running]:
k8s.io/minikube/cmd/minikube/cmd/config.processRegistryCredsConfig({0x106858a06, 0x8}, 0x0)
/Users/nir/src/minikube/cmd/minikube/cmd/config/configure_registry_creds.go:93 +0x2c
k8s.io/minikube/cmd/minikube/cmd/config.init.func8(0x140001f2b00?, {0x140003a83a0, 0x1, 0x106850650?})
/Users/nir/src/minikube/cmd/minikube/cmd/config/configure.go:69 +0x24c
github.com/spf13/cobra.(*Command).execute(0x10a088d40, {0x140003a8350, 0x1, 0x1})
/Users/nir/go/pkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1019 +0x82c
github.com/spf13/cobra.(*Command).ExecuteC(0x10a084880)
/Users/nir/go/pkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1148 +0x384
github.com/spf13/cobra.(*Command).Execute(...)
/Users/nir/go/pkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1071
k8s.io/minikube/cmd/minikube/cmd.Execute()
/Users/nir/src/minikube/cmd/minikube/cmd/root.go:174 +0x550
main.main()
/Users/nir/src/minikube/cmd/minikube/main.go:95 +0x250
The issue is that loadAddonConfigFile() returns nil if the
--config-file flag is not specified, but the code expects a non-nil
config, treating the zero value as interactive mode. Fixed by returning
a zero value config in this case.
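A simplified sketch of the fix; the real addonConfig fields and the
file parsing are elided:

package config

// addonConfig holds the per-addon settings; fields elided.
type addonConfig struct{}

// loadAddonConfigFile returns the parsed config file, or a zero value
// config when --config-file was not given, so callers can treat the
// zero value as "ask interactively" instead of dereferencing nil.
func loadAddonConfigFile(addon, configFilePath string) (*addonConfig, error) {
    if configFilePath == "" {
        // Previously we returned nil here, and the caller crashed.
        return &addonConfig{}, nil
    }
    return parseAddonConfigFile(addon, configFilePath)
}

// parseAddonConfigFile is a placeholder for reading and unmarshalling
// the file.
func parseAddonConfigFile(addon, path string) (*addonConfig, error) {
    return &addonConfig{}, nil
}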
With this change we run the normal interactive flow:
% ./out/minikube addons configure registry-creds
Do you want to enable AWS Elastic Container Registry? [y/n]: n
Do you want to enable Google Container Registry? [y/n]: n
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: docker.io
-- Enter docker registry username: nirs
-- Enter docker registry password:
Do you want to enable Azure Container Registry? [y/n]: n
✅ registry-creds was successfully configured
% out/minikube addons enable registry-creds
❗ registry-creds is a 3rd party addon and is not maintained or verified by minikube maintainers, enable at your own risk.
❗ registry-creds does not currently have an associated maintainer.
▪ Using image docker.io/upmcenterprises/registry-creds:1.10
🌟 The 'registry-creds' addon is enabled
Note that this addon does not work on arm64, since only an amd64 image
is available. The pod fails to start:
% kubectl logs deploy/registry-creds -n kube-system
exec /registry-creds: exec format error
The build fails on linux/arm64 because libvirt.org/go/libvirt uses
CGO, and cross compiling CGO requires a C cross compiler; setting
GOARCH=arm64 is not enough. The issue is tracked in
https://github.com/kubernetes/minikube/issues/19959.
Previously the kvm driver was built as the separate
docker-machine-driver-kvm2 executable, and that build was skipped on
arm64.
Now that we build the driver as part of minikube, we cannot skip the
entire build. Change the build tag so the libvirt bits are built only
on amd64.
To make this possible, the generic linux bits needed by the registry
were moved to pkg/drivers/kvm/driver.go, and a kvm_stub.go is used for
unsupported architectures.
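A sketch of the stub's shape; the exported names are illustrative, and
the libvirt-backed files carry the complementary //go:build linux &&
amd64 tag:

//go:build linux && !amd64

package kvm

import (
    "fmt"
    "runtime"
)

// NewDriver reports that kvm2 is unavailable on this architecture, so
// the rest of minikube still compiles without the libvirt binding.
func NewDriver(name, storePath string) (interface{}, error) {
    return nil, fmt.Errorf("the kvm2 driver is not supported on linux/%s", runtime.GOARCH)
}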
In the registry Driver.Status(), move the arm64 check to the front,
since there is no point in checking that libvirt is installed correctly
if the driver is not supported on this architecture yet.
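Roughly, with State as a trimmed stand-in for registry.State and the
libvirt probing elided:

package kvm

import (
    "fmt"
    "runtime"
)

// State is a trimmed stand-in for registry.State.
type State struct {
    Installed bool
    Healthy   bool
    Error     error
}

// status checks the architecture first: probing libvirt is pointless on
// architectures where the driver cannot work at all.
func status() State {
    if runtime.GOARCH != "amd64" {
        return State{Error: fmt.Errorf("the kvm2 driver is not supported on linux/%s", runtime.GOARCH)}
    }
    // Architecture is fine; now verify libvirt is installed and the
    // daemon is reachable (elided).
    return State{Installed: true, Healthy: true}
}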
Remove the docker-machine-driver-kvm2 wrapper and use the kvm driver as
an internal driver.
To avoid a build-time dependency on the libvirt shared library, we now
build with the libvirt_dlopen build tag. The tag is used only on linux,
to avoid linking with the libvirt shared library. It is not documented,
but can be found in the source:
f7cdeba997/domain.go (L30)
With this we don't need the libvirt devel libraries during build; at
runtime we will fail if the libvirt shared library is not installed.
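For example, a build along these lines (illustrative; minikube's actual
Makefile wiring may differ):

% CGO_ENABLED=1 go build -tags libvirt_dlopen ./cmd/minikube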
With this change minikube cannot be built for linux on architectures
other than amd64, since building the libvirt go binding requires CGO,
and that does not work by just changing GOARCH.