libvirtd must be started before you run minikube. Failure to do so will result in the following error:
Failed to connect socket to '/var/run/libvirt/libvirt-sock'
I believe this should be noted in the documentation to assist other users.
Added lines 40-43
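For reference, starting and enabling the service (a minimal sketch, assuming a systemd-based host; the service name may differ per distribution):

$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd   # optional: start libvirtd automatically on boot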
Should use the proper name for display, even if we use a name more
suitable for naming classes and methods in the implementation...
Also use the --runtime=cri-o flag when testing, and update the GitHub
repository reference now that CRI-O has graduated from the incubator to a SIG.
See https://cri-o.io/
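For example, when trying it by hand (a sketch, assuming minikube's --container-runtime flag rather than the test-harness flag mentioned above):

$ minikube start --container-runtime=cri-o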
Mention that the instructions in this section don't work as-is for old versions
of Ubuntu, because the provided driver binary requires a later version of libvirt:
% ./docker-machine-driver-kvm2
./docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt-lxc.so.0: version `LIBVIRT_LXC_2.0.0' not found (required by ./docker-machine-driver-kvm2)
./docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_2.2.0' not found (required by ./docker-machine-driver-kvm2)
./docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_3.0.0' not found (required by ./docker-machine-driver-kvm2)
./docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_1.3.3' not found (required by ./docker-machine-driver-kvm2)
./docker-machine-driver-kvm2: /usr/lib/x86_64-linux-gnu/libvirt.so.0: version `LIBVIRT_2.0.0' not found (required by ./docker-machine-driver-kvm2)
%
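To check which libvirt version a host provides before running the driver (a sketch, assuming a Debian/Ubuntu system where the library ships as the libvirt0 package):

$ dpkg -s libvirt0 | grep '^Version'   # library package version
$ virsh --version                      # libvirt client version, if installed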
With this addon, dynamic provisioning based on Gluster can be enabled:
$ minikube addons enable storage-provisioner-gluster
This will deploy several pods in a new 'storage-gluster' namespace:
- glusterfs, a storage service backed by a 10GB sparse /srv/fake-disk.img
- heketi, a smart Gluster volume manager
- glusterfile-provisioner, an external-storage provisioner
In addition, the StorageClass 'glusterfile' will be created. It is
currently not configured as the default StorageClass, so PVCs need to refer
to the new StorageClass explicitly.
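For example, a PVC that requests the new StorageClass (a minimal sketch; the claim name and size are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc            # illustrative name
spec:
  storageClassName: glusterfile
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF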
Previously, minikube shipped with a default CNI config
(/etc/cni/net.d/k8s.conf) in its rootfs. This complicated things
when using a custom CNI plugin, as the default config was picked up
by kubelet before the custom CNI plugin had installed its own CNI
config. The end result was that some Pods were attached to the
network defined in the default config, while others were managed by
the custom plugin.
This commit introduces the flag "--enable-default-cni" to
"minikube start" to trigger the provisioning of the default CNI
config.
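For example (a sketch; pairing the new flag with --network-plugin=cni is an assumption about a typical setup, not part of this commit):

$ minikube start --network-plugin=cni --enable-default-cni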
Signed-off-by: Martynas Pumputis <m@lambda.lt>
This PR adds the code for enabling gvisor in minikube. It adds the pod
that will run when the addon is enabled, and the code for the image
that the pod runs.
When gvisor is enabled, the pod will download runsc and the
gvisor-containerd-shim. It will replace the containerd config.toml and
restart containerd.
When gvisor is disabled, the pod will be deleted by the addon manager.
This will trigger a pre-stop hook which will revert the config.toml to
its original state and restart containerd.
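For example (a sketch; the addon name and the containerd runtime requirement are assumptions about how this is exposed to users):

$ minikube start --container-runtime=containerd
$ minikube addons enable gvisor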
* Add config parameter for the cri socket path
Closes #3153
* Remove stray newline, when not using criSocket
* Add the --cri-socket parameter to configuration
Also fix the syntax for CRI-O, adding unix://
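For example (a sketch; the socket path shown is the common CRI-O default and may differ on a given image):

$ minikube start --container-runtime=cri-o --cri-socket=unix:///var/run/crio/crio.sock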
The instructions for HyperKit produce the error `install: root: Invalid argument`. @ran-dall helped me figure out that root was not being permitted for this command, and sent me the fix for my machine. Figured I'd share it here.
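The exact fix isn't reproduced above; a common resolution, assuming the error comes from passing `-g root` on macOS (where the admin group is `wheel`, not `root`), is a sketch like:

$ sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/   # assumes the failure was 'install -g root'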
This commit introduces a new command, `minikube tunnel`, a LoadBalancer emulator that must be run with root permissions.
This command:
* Establishes networking routes from the host into the VM for all IP ranges used by Kubernetes.
* Enables a cluster controller that allocates external IPs to services of type `LoadBalancer`.
* Cleans up routes and IPs when stopped (Ctrl+C), when `minikube` stops, and when `minikube tunnel` is run with the `--cleanup` flag.
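For example (a minimal sketch of typical usage):

$ sudo minikube tunnel             # establish routes; leave running, Ctrl+C to stop
$ sudo minikube tunnel --cleanup   # clean up leftover routes and IPs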