Merge pull request #16840 from spowelljr/addAddonDocs

site: Add addon readmes to website
pull/16860/head
Medya Ghazizadeh 2023-07-10 08:54:48 -07:00 committed by GitHub
commit 1d69ada274
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 477 additions and 32 deletions

View File

@ -1,8 +1,8 @@
## gVisor Addon
[gVisor](https://gvisor.dev/), a sandboxed container runtime, allows users to securely run pods with untrusted workloads within Minikube.
[gVisor](https://gvisor.dev/), a sandboxed container runtime, allows users to securely run pods with untrusted workloads within minikube.
### Starting Minikube
gVisor depends on the containerd runtime to run in Minikube.
### Starting minikube
gVisor depends on the containerd runtime to run in minikube.
When starting minikube, specify the following flags, along with any additional desired flags:
```shell
@ -29,7 +29,7 @@ NAME CREATED AT
runtimeclass.node.k8s.io/gvisor 2019-06-15T04:35:09Z
```
Once the pod has status `Running`, gVisor is enabled in Minikube.
Once the pod has status `Running`, gVisor is enabled in minikube.
### Running pods in gVisor

View File

@ -1,15 +1,13 @@
---
title: "Using Ambassador Ingress Controller"
linkTitle: "Using Ambassador Ingress Controller"
title: "Using the Ambassador Addon"
linkTitle: "Ambassador"
weight: 1
date: 2020-05-14
description: >
Using Ambassador Ingress Controller with Minikube
---
## Overview
[Ambassador](https://getambassador.io/) allows access to Kubernetes services running inside Minikube. Ambassador can be
[Ambassador](https://getambassador.io/) allows access to Kubernetes services running inside minikube. Ambassador can be
configured via both [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources and
[Mapping](https://www.getambassador.io/docs/latest/topics/using/intro-mappings/) resources.
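For illustration, a minimal Mapping resource of the kind Ambassador accepts might look like this (the service name and prefix here are hypothetical):

```yaml
# A hypothetical Mapping that routes /hello-mapping/ to a `hello` Service on port 80.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: hello-mapping
  namespace: default
spec:
  prefix: /hello-mapping/
  service: hello:80
```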
@ -134,4 +132,3 @@ curl http://<Ambassador's External IP>/hello-mapping/
```
**Note:** Read more about mappings in Ambassador's
[documentation](https://www.getambassador.io/docs/latest/topics/using/mappings/).

View File

@ -1,5 +1,5 @@
---
title: "Using Cloud Spanner Addon"
title: "Using the Cloud Spanner Addon"
linkTitle: "Cloud Spanner"
weight: 1
date: 2022-10-14
@ -50,4 +50,4 @@ To disable this addon, simply run:
```shell script
minikube addons disable cloud-spanner
```
```

View File

@ -0,0 +1,81 @@
---
title: "Using the gVisor Addon"
linkTitle: "gVisor"
weight: 1
date: 2018-01-02
---
## gVisor Addon
[gVisor](https://gvisor.dev/), a sandboxed container runtime, allows users to securely run pods with untrusted workloads within minikube.
### Starting minikube
gVisor depends on the containerd runtime to run in minikube.
When starting minikube, specify the following flags, along with any additional desired flags:
```shell
$ minikube start --container-runtime=containerd \
--docker-opt containerd=/var/run/containerd/containerd.sock
```
### Enabling gVisor
To enable this addon, simply run:
```
$ minikube addons enable gvisor
```
Within one minute, the addon manager should pick up the change and you should
see the `gvisor` pod and `gvisor` [Runtime Class](https://kubernetes.io/docs/concepts/containers/runtime-class/):
```
$ kubectl get pod,runtimeclass gvisor -n kube-system
NAME READY STATUS RESTARTS AGE
pod/gvisor 1/1 Running 0 2m52s
NAME CREATED AT
runtimeclass.node.k8s.io/gvisor 2019-06-15T04:35:09Z
```
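The `gvisor` RuntimeClass is essentially a mapping from a class name to containerd's gVisor runtime handler; a sketch of such an object is shown below (the addon's actual manifest may differ):

```yaml
# Maps the `gvisor` RuntimeClass name to containerd's gvisor runtime handler.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor
```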
Once the pod has status `Running`, gVisor is enabled in minikube.
### Running pods in gVisor
To run a pod in gVisor, add the `gvisor` runtime class to the Pod spec in your
Kubernetes yaml:
```
runtimeClassName: gvisor
```
An example Pod is shown below:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-untrusted
spec:
runtimeClassName: gvisor
containers:
- name: nginx
image: nginx
```
### Disabling gVisor
To disable gVisor, run:
```
$ minikube addons disable gvisor
```
Within one minute, the addon manager should pick up the change.
Once the `gvisor` pod has status `Terminating`, or has been deleted, the gvisor addon should be disabled.
```
$ kubectl get pod gvisor -n kube-system
NAME READY STATUS RESTARTS AGE
gvisor 1/1 Terminating 0 5m
```
_Note: Once gVisor is disabled, any pod with the `gvisor` Runtime Class will fail with a FailedCreatePodSandBox error._

View File

@ -1,5 +1,5 @@
---
title: "Using Headlamp Addon"
title: "Using the Headlamp Addon"
linkTitle: "Headlamp"
weight: 1
date: 2022-06-08
@ -48,4 +48,4 @@ To disable this addon, simply run:
```shell script
minikube addons disable headlamp
```
```

View File

@ -0,0 +1,30 @@
---
title: "Using the Helm Tiller Addon"
linkTitle: "Helm Tiller"
weight: 1
date: 2019-09-23
---
## helm-tiller Addon
[Kubernetes Helm](https://helm.sh) - The Kubernetes Package Manager
### Enabling helm-tiller
To enable this addon, simply run:
```shell script
minikube addons enable helm-tiller
```
In a minute or so, Tiller will be installed into your cluster. Instead of running `helm init` each time you create a new minikube instance, you can simply enable this addon:
each time you start a new minikube instance, Tiller will be installed automatically.
### Testing installation
```shell script
helm ls
```
If everything went well, you shouldn't see any errors about Tiller not being installed in your cluster. If you haven't deployed any releases yet, `helm ls` won't return anything.
### Deprecation of Tiller
Once Tiller is finally deprecated, this addon will no longer be necessary. If your version of Helm doesn't use Tiller, you don't need this addon.

View File

@ -1,5 +1,5 @@
---
title: "Using Inspektor Gadget Addon"
title: "Using the Inspektor Gadget Addon"
linkTitle: "Inspektor Gadget"
weight: 1
date: 2023-02-16
@ -30,4 +30,4 @@ To disable this addon, simply run:
```shell script
minikube addons disable inspektor-gadget
```
```

View File

@ -0,0 +1,40 @@
---
title: "Using the Istio Addon"
linkTitle: "Istio"
weight: 1
date: 2019-12-25
---
## istio Addon
[istio](https://istio.io/docs/setup/getting-started/) - an open source service mesh that transparently layers onto existing distributed applications.
### Enable istio on minikube
Make sure to start minikube with at least 8192 MB of memory and 4 CPUs.
See official [Platform Setup](https://istio.io/docs/setup/platform-setup/) documentation.
```shell script
minikube start --memory=8192mb --cpus=4
```
To enable this addon, simply run:
```shell script
minikube addons enable istio-provisioner
minikube addons enable istio
```
In a minute or so, the default istio components will be installed into your cluster. You can run `kubectl get po -n istio-system` to watch the progress of the istio installation.
### Testing installation
```shell script
kubectl get po -n istio-system
```
If everything went well, the istio pods in the `istio-system` namespace should all reach the `Running` status without errors.
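Once istio is running, a namespace can opt in to automatic Envoy sidecar injection with a label; a sketch is shown below (the namespace name is illustrative):

```yaml
# Labeling a namespace `istio-injection: enabled` tells istio to inject
# Envoy sidecars into new pods created in that namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```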
### Disable istio
To disable this addon, simply run:
```shell script
minikube addons disable istio-provisioner
minikube addons disable istio
```

View File

@ -1,5 +1,5 @@
---
title: "Using Kong Ingress Controller Addon"
title: "Using the Kong Ingress Controller Addon"
linkTitle: "Kong Ingress"
weight: 1
date: 2022-01-25

View File

@ -1,10 +1,8 @@
---
title: "How to use KubeVirt with minikube"
linkTitle: "KubeVirt Support"
title: "Using the KubeVirt Addon"
linkTitle: "KubeVirt"
weight: 1
date: 2020-05-26
description: >
Using KubeVirt with minikube
---
## Prerequisites

View File

@ -1,10 +1,8 @@
---
title: "NVIDIA GPU Support"
linkTitle: "NVIDIA GPU Support"
title: "Using the Nvidia Addons"
linkTitle: "Nvidia"
weight: 1
date: 2018-01-02
description: >
Using NVIDIA GPU support within minikube
---
## Prerequisites

View File

@ -1,10 +1,8 @@
---
title: "Using Minikube with Pod Security Policies"
linkTitle: "Using Minikube with Pod Security Policies"
title: "Using the Pod Security Policy Addon"
linkTitle: "Pod Security Policy"
weight: 1
date: 2019-11-24
description: >
Using Minikube with Pod Security Policies
---
## Overview
@ -13,7 +11,7 @@ This tutorial explains how to start minikube with Pod Security Policies (PSP) en
## Prerequisites
- Minikube 1.11.1 with Kubernetes 1.16.x or higher
- minikube 1.11.1 with Kubernetes 1.16.x or higher
## Tutorial
@ -32,7 +30,7 @@ controller to prevent issues during bootstrap.
Older versions of minikube do not ship with the `pod-security-policy` addon, so
the policies that addon enables must be separately applied to the cluster.
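Those policies are ordinary PodSecurityPolicy objects; as a sketch, a permissive policy for trusted system workloads might look like this (the actual YAML files used by minikube differ):

```yaml
# A permissive PodSecurityPolicy, typically bound to system ServiceAccounts
# so that control-plane pods can start during bootstrap.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ['*']
  volumes: ['*']
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```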
## Minikube 1.5.2 through 1.6.2
## minikube 1.5.2 through 1.6.2
Before starting minikube, you need to give it the PSP YAMLs in order to allow minikube to bootstrap.
@ -183,7 +181,7 @@ subjects:
apiGroup: rbac.authorization.k8s.io
```
### Minikube between 1.6.2 and 1.11.1
### minikube between 1.6.2 and 1.11.1
With minikube versions greater than 1.6.2 and less than 1.11.1, the YAML files
shown above will not be automatically applied to the cluster. You may have

View File

@ -0,0 +1,156 @@
---
title: "Using the Registry Aliases Addon"
linkTitle: "Registry Aliases"
weight: 1
date: 2020-03-07
---
## Registry Aliases Addon
An addon for minikube that helps push to and pull from the minikube registry using custom domain names. The custom domain names are made resolvable from within the cluster and on the minikube node.
## How to use?
### Start minikube
```shell
minikube start -p demo
```
This addon depends on the `registry` addon, which needs to be enabled before the alias addon is installed:
### Enable internal registry
```shell
minikube addons enable registry
```
Verify the registry deployment:
```shell
watch kubectl get pods -n kube-system
```
```shell
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-kpbzt 1/1 Running 0 16m
coredns-6955765f44-lzlsv 1/1 Running 0 16m
etcd-demo 1/1 Running 0 16m
kube-apiserver-demo 1/1 Running 0 16m
kube-controller-manager-demo 1/1 Running 0 16m
kube-proxy-q8rb9 1/1 Running 0 16m
kube-scheduler-demo 1/1 Running 0 16m
*registry-4k8zs* 1/1 Running 0 40s
registry-proxy-vs8jt 1/1 Running 0 40s
storage-provisioner 1/1 Running 0 16m
```
```shell
kubectl get svc -n kube-system
```
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17m
registry ClusterIP 10.97.247.75 <none> 80/TCP 94s
```
> **NOTE:** Please make a note of the CLUSTER-IP of the `registry` service.
### Enable registry aliases addon
```shell
minikube addons enable registry-aliases
🌟 The 'registry-aliases' addon is enabled
```
You can check the minikube VM's `/etc/hosts` file for the registry alias entries:
```shell
watch minikube ssh -- cat /etc/hosts
```
```shell
127.0.0.1 localhost
127.0.1.1 demo
10.97.247.75 example.org
10.97.247.75 example.com
10.97.247.75 test.com
10.97.247.75 test.org
```
The above output shows that the DaemonSet has added the `registryAliases` entries from the ConfigMap, pointing each to the internal registry's __CLUSTER-IP__.
### Update CoreDNS
The CoreDNS configuration is updated automatically by the patch-coredns Job. A successful job run leaves the coredns ConfigMap updated like:
```yaml
apiVersion: v1
data:
Corefile: |-
.:53 {
errors
health
rewrite name example.com registry.kube-system.svc.cluster.local
rewrite name example.org registry.kube-system.svc.cluster.local
rewrite name test.com registry.kube-system.svc.cluster.local
rewrite name test.org registry.kube-system.svc.cluster.local
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
name: coredns
```
To verify it run the following command:
```shell
kubectl get cm -n kube-system coredns -o yaml
```
Once the patch has been applied successfully, you can push and pull from the registry using the domains `example.com`, `example.org`, `test.com` and `test.org`.
A successful run will show the following extra pods (DaemonSet, Job) in the `kube-system` namespace:
```shell
NAME READY STATUS RESTARTS AGE
registry-aliases-hosts-update-995vx 1/1 Running 0 47s
registry-aliases-patch-core-dns-zsxfc 0/1 Completed 0 47s
```
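With the aliases in place, workloads can reference images by custom domain; a sketch of a pod spec doing so is shown below (the image name is hypothetical):

```yaml
# Pulls the image via the `example.com` alias, which resolves to the
# in-cluster registry thanks to the hosts entries and CoreDNS rewrites.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: example.com/demo-app:latest
```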
## Verify with sample application
You can verify the deployment end-to-end using the example [application](https://github.com/kameshsampath/minikube-registry-aliases-demo).
```shell
git clone https://github.com/kameshsampath/minikube-registry-aliases-demo
cd minikube-registry-aliases-demo
```
Make sure you set the docker context using `eval $(minikube -p demo docker-env)`.
Deploy the application using [Skaffold](https://skaffold.dev):
```shell
skaffold dev --port-forward
```
Once the application is running, try `curl localhost:8080` to see the `Hello World` response.
> **NOTE**: You can also update [skaffold.yaml](./skaffold.yaml) and [app.yaml](.k8s/app.yaml) to use `test.org`, `test.com` or `example.org` as container registry URLs, and see that all the container image names resolve to the internal registry, resulting in a successful build and deployment.

View File

@ -0,0 +1,147 @@
---
title: "Using the Storage Provisioner Gluster Addon"
linkTitle: "Storage Provisioner Cluster"
weight: 1
date: 2019-01-09
---
## storage-provisioner-gluster addon
[Gluster](https://gluster.org/), a scalable network filesystem that provides dynamic provisioning of PersistentVolumeClaims.
### Starting Minikube
This addon works within Minikube, without any additional configuration.
```shell
$ minikube start
```
### Enabling storage-provisioner-gluster
To enable this addon, simply run:
```
$ minikube addons enable storage-provisioner-gluster
```
Within one minute, the addon manager should pick up the change and you should see several Pods in the `storage-gluster` namespace:
```
$ kubectl -n storage-gluster get pods
NAME READY STATUS RESTARTS AGE
glusterfile-provisioner-dbcbf54fc-726vv 1/1 Running 0 1m
glusterfs-rvdmz 0/1 Running 0 40s
heketi-79997b9d85-42c49 0/1 ContainerCreating 0 40s
```
Some of the Pods need a little more time to get up and running than others, but within a few minutes everything should be deployed and all Pods should be `READY`:
```
$ kubectl -n storage-gluster get pods
NAME READY STATUS RESTARTS AGE
glusterfile-provisioner-dbcbf54fc-726vv 1/1 Running 0 5m
glusterfs-rvdmz 1/1 Running 0 4m
heketi-79997b9d85-42c49 1/1 Running 1 4m
```
Once the Pods have status `Running`, the `glusterfile` StorageClass should have been marked as `default`:
```
$ kubectl get sc
NAME PROVISIONER AGE
glusterfile (default) gluster.org/glusterfile 3m
```
### Creating PVCs
The storage in the Gluster environment is limited to 10 GiB. This is because the data is stored in the Minikube VM (a sparse file `/srv/fake-disk.img`).
The following `yaml` creates a PVC, starts a CentOS developer Pod that generates a website and deploys an NGINX webserver that provides access to the website:
```
---
#
# Minimal PVC where a developer can build a website.
#
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: website
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Mi
storageClassName: glusterfile
---
#
# This pod will just download a fortune phrase and store it (as plain text) in
# index.html on the PVC. This is how we create websites?
#
# The root of the website stored on the above PVC is mounted on /mnt.
#
apiVersion: v1
kind: Pod
metadata:
name: centos-webdev
spec:
containers:
- image: centos:latest
name: centos
args:
- curl
- -o/mnt/index.html
- https://api.ef.gy/fortune
volumeMounts:
- mountPath: /mnt
name: website
# once the website is created, the pod will exit
restartPolicy: Never
volumes:
- name: website
persistentVolumeClaim:
claimName: website
---
#
# Start a NGINX webserver with the website.
# We'll skip creating a service, to keep things minimal.
#
apiVersion: v1
kind: Pod
metadata:
name: website-nginx
spec:
containers:
- image: gcr.io/google_containers/nginx-slim:0.8
name: nginx
ports:
- containerPort: 80
name: web
volumeMounts:
- mountPath: /usr/share/nginx/html
name: website
volumes:
- name: website
persistentVolumeClaim:
claimName: website
```
Because the PVC has been created with the `ReadWriteMany` accessMode, both Pods can access the PVC at the same time. Other website developer Pods can use the same PVC to update the contents of the site.
The above configuration does not expose the website on the Minikube VM. One way to see the contents of the website is to SSH into the Minikube VM and fetch the website there:
```
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
centos-webdev 0/1 Completed 0 1m 172.17.0.9 minikube
website-nginx 1/1 Running 0 24s 172.17.0.9 minikube
$ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ curl http://172.17.0.9
I came, I saw, I deleted all your files.
$
```