The special "default" network is created by libvirt and owned by the
system admin, but we try to delete it when deleting a profile.
To reproduce the issue, start and delete minikube on a system without
any other VM using the libvirt default network:
minikube start --driver kvm2 --network default
minikube delete
The default network will be deleted, and the next minikube start will
fail, complaining about the missing libvirt default network and linking
to the complicated instructions for recreating it.
Now we skip deletion of the special "default" network.
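A minimal sketch of the skip check, in the spirit of the fix (the type,
field, and function names below are illustrative, not the actual kvm2
driver API):

    package kvm

    import "log"

    // Driver holds the subset of driver state relevant to this sketch.
    type Driver struct {
        Network string // name of the libvirt network used by the profile
    }

    // deleteNetwork tears down a profile's private network, but leaves the
    // libvirt-owned "default" network alone.
    func (d *Driver) deleteNetwork() error {
        if d.Network == "default" {
            log.Printf("Using the default network, skipping deletion")
            return nil
        }
        // destroy and undefine the per-profile network here
        return nil
    }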
Example run log:
$ out/minikube delete -v10 --logtostderr 2>delete.log
* Deleting "minikube" in kvm2 ...
* Removed all traces of the "minikube" cluster.
$ cat delete.log
...
I0518 03:41:27.148838 1247331 out.go:177] * Deleting "minikube" in kvm2 ...
I0518 03:41:27.148857 1247331 main.go:141] libmachine: (minikube) Calling .Remove
I0518 03:41:27.149156 1247331 main.go:141] libmachine: (minikube) DBG | Removing machine...
I0518 03:41:27.159000 1247331 main.go:141] libmachine: (minikube) DBG | Trying to delete the networks (if possible)
I0518 03:41:27.169497 1247331 main.go:141] libmachine: (minikube) DBG | Using the default network, skipping deletion
I0518 03:41:27.169598 1247331 main.go:141] libmachine: (minikube) Successfully deleted networks
...
The current domain XML template includes a static nvram image that uses
the shared template image:
<nvram>/usr/share/AAVMF/AAVMF_VARS.fd</nvram>
This "works" when starting sinlge profile, but when starting a second
profile this breaks with:
virError(Code=55, Domain=24, Message='Requested operation is not
valid: Setting different SELinux label on /usr/share/AAVMF/AAVMF_VARS.fd
which is already in use
This tells us that we are doing the wrong thing.
If we remove the nvram element, a new per-VM nvram is created
dynamically:
$ virsh -c qemu:///system dumpxml ex1 | grep nvram
<nvram template='/usr/share/AAVMF/AAVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/ex1_VARS.fd</nvram>
$ virsh -c qemu:///system dumpxml ex2 | grep nvram
<nvram template='/usr/share/AAVMF/AAVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/ex2_VARS.fd</nvram>
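For reference, this is roughly what the relevant <os> section of the
template looks like after dropping the static <nvram> element (paths are
the Debian/Ubuntu AAVMF defaults and may differ on other distributions):

    <os>
      <type arch='aarch64' machine='virt'>hvm</type>
      <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
      <!-- no <nvram> element: libvirt creates a per-VM copy of the
           firmware variables under /var/lib/libvirt/qemu/nvram/ -->
    </os>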
On linux/aarch64 (e.g. Asahi Linux on MacBook M*), booting from a SATA
cdrom is broken and the VM drops into the UEFI shell.
It seems that linux/aarch64 supports only virtio and scsi devices[1].
Replace it with a scsi cdrom (like the x86 version) and add a virtio-scsi
controller, since the default scsi controller does not boot either.
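A sketch of the resulting disk and controller XML (the ISO path is
illustrative):

    <disk type='file' device='cdrom'>
      <source file='/path/to/boot2docker.iso'/>
      <target dev='sda' bus='scsi'/>
      <readonly/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'/>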
[1] https://kubevirt.io/user-guide/virtual_machines/virtual_machines_on_Arm64/#disks-and-volumes
On platforms where the DHCP lease status is not updated immediately
after domain creation, listing IP addresses fails until the next refresh
happens, resulting in the following error:
8<----------8<----------8<----------8<----------8<----------8<----------
Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20480MB) ...
Failed to start kvm2 VM. Running "minikube delete" may fix it: creating
host: create: Error creating machine: Error in driver during machine
creation: IP not available after waiting: machine minikube didn't
return IP after 1 minute
Exiting due to GUEST_PROVISION: Failed to start host: creating host:
create: Error creating machine: Error in driver during machine
creation: IP not available after waiting: machine minikube didn't
return IP after 1 minute
8<----------8<----------8<----------8<----------8<----------8<----------
Using ARP instead of LEASE for the IP address query is justifiable as
the listing is done right after domain creation. In case of failure we
fall back to listing via the LEASE source.
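A sketch of the lookup order, assuming the libvirt-go binding's
Domain.ListAllInterfaceAddresses (error handling and IPv4 filtering are
simplified; not the exact driver code):

    package kvm

    import "libvirt.org/go/libvirt"

    // firstAddr returns the first address reported for any interface; a
    // real implementation would filter for an IPv4 address on the
    // expected network.
    func firstAddr(ifaces []libvirt.DomainInterface) string {
        for _, iface := range ifaces {
            for _, addr := range iface.Addrs {
                if addr.Addr != "" {
                    return addr.Addr
                }
            }
        }
        return ""
    }

    // lookupIP queries the ARP tables first, since the DHCP lease file may
    // not be refreshed right after domain creation, and falls back to the
    // LEASE source if ARP returns nothing.
    func lookupIP(dom *libvirt.Domain) (string, error) {
        ifaces, err := dom.ListAllInterfaceAddresses(libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP)
        if err == nil {
            if ip := firstAddr(ifaces); ip != "" {
                return ip, nil
            }
        }
        ifaces, err = dom.ListAllInterfaceAddresses(libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE)
        if err != nil {
            return "", err
        }
        return firstAddr(ifaces), nil
    }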
Signed-off-by: Anoop C S <anoopcs@cryptolab.net>
Having additional disks on the nodes is a requirement for developers
working on the storage components in Kubernetes. This commit adds the
extra-disks feature to the kvm2 driver.
Signed-off-by: Raghavendra Talur <raghavendra.talur@gmail.com>