Update "Using NVIDIA GPUs with minikube" (#19677)

Adding a note to delete an existing minikube instance before starting one with the nvidia shim. This confused me for a good bit before I figured out why minikube couldn't see my GPUs.

Completely deleting the instance and restarting from scratch seems to set everything up in a good state:
> 🌟  Enabled addons: nvidia-device-plugin, storage-provisioner, default-storageclass
Matt L 2024-09-20 13:25:53 -04:00 committed by GitHub
parent 1e9d04a314
commit bebee72e90
1 changed file with 11 additions and 0 deletions


@@ -23,6 +23,7 @@ date: 2018-01-02
```shell
sudo sysctl net.core.bpf_jit_harden
```
- If it's not `0` run:
```shell
echo "net.core.bpf_jit_harden=0" | sudo tee -a /etc/sysctl.conf
```
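Appending the line to `/etc/sysctl.conf` only takes effect on the next boot; a minimal sketch for applying and checking the setting immediately (assumes a Linux host where `sysctl` is available):

```shell
# Reload kernel parameters from /etc/sysctl.conf so the change applies
# without a reboot (assumes the line was appended as shown above)
sudo sysctl -p
# Print just the value; it should now be 0
sysctl -n net.core.bpf_jit_harden
```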
@@ -35,10 +36,20 @@ date: 2018-01-02
```shell
sudo nvidia-ctk runtime configure --runtime=docker && sudo systemctl restart docker
```
- Delete an existing minikube instance (optional)
If you have an existing minikube instance, you may need to delete it if it was built before installing the nvidia runtime shim.
```shell
minikube delete
```
This ensures minikube performs any required setup and addon installation now that the nvidia runtime is available.
- Start minikube:
```shell
minikube start --driver docker --container-runtime docker --gpus all
```
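Once the cluster is up, one way to confirm the device plugin is active and the GPUs are visible is sketched below (a verification step, not part of the original doc; assumes `kubectl` is pointed at the minikube cluster):

```shell
# List enabled addons; nvidia-device-plugin should show as enabled
minikube addons list | grep nvidia-device-plugin
# Ask the node for its GPU capacity; note the dots in the extended
# resource name must be escaped inside the jsonpath expression
kubectl get nodes -o jsonpath='{.items[*].status.capacity.nvidia\.com/gpu}'
```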
{{% /tab %}}
{{% tab none %}}
## Using the 'none' driver