remove stale storage-provisioner-gluster addon

The Gluster project has not had a release in a few years, and maintenance
has slowed to the point that it is almost standing still. Heketi, the
component used for deploying parts of the storage platform, was archived
in 2023.

Providing the storage-provisioner-gluster addon might give users the
wrong expectations: there is no guarantee that Gluster works with recent
minikube versions.
pull/20370/head
Niels de Vos 2025-02-06 10:56:23 +01:00
parent fdbff10de1
commit 00af75818f
15 changed files with 3 additions and 766 deletions


@@ -25,8 +25,6 @@
    "registry.k8s.io/sig-storage/csi-snapshotter": "registry.cn-hangzhou.aliyuncs.com/google_containers/csi-snapshotter",
    "registry.k8s.io/sig-storage/csi-provisioner": "registry.cn-hangzhou.aliyuncs.com/google_containers/csi-provisioner",
    "docker.io/registry": "registry.cn-hangzhou.aliyuncs.com/google_containers/registry",
    "docker.io/gluster/gluster-centos": "registry.cn-hangzhou.aliyuncs.com/google_containers/glusterfs-server",
    "docker.io/heketi/heketi": "registry.cn-hangzhou.aliyuncs.com/google_containers/heketi",
    "docker.io/coredns/coredns": "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns",
    "docker.io/kindest/kindnetd": "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd",
    "registry.k8s.io/ingress-nginx/controller": "registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller",


@@ -39,10 +39,6 @@ var (
	//go:embed storage-provisioner/storage-provisioner.yaml.tmpl
	StorageProvisionerAssets embed.FS
	// StorageProvisionerGlusterAssets assets for storage-provisioner-gluster addon
	//go:embed storage-provisioner-gluster/*.tmpl storage-provisioner-gluster/*.yaml
	StorageProvisionerGlusterAssets embed.FS
	// StorageProvisionerRancherAssets assets for storage-provisioner-rancher addon
	//go:embed storage-provisioner-rancher/*.tmpl
	StorageProvisionerRancherAssets embed.FS


@@ -1,5 +0,0 @@
{{ define "main" }}
<div style="padding-top:20px">
{{ .Render "content" }}
</div>
{{ end }}


@@ -1,141 +0,0 @@
## storage-provisioner-gluster addon
[Gluster](https://gluster.org/) is a scalable network filesystem; this addon uses it to provide dynamic provisioning of PersistentVolumeClaims.
### Starting Minikube
This addon works within Minikube, without any additional configuration.
```shell
$ minikube start
```
### Enabling storage-provisioner-gluster
To enable this addon, simply run:
```
$ minikube addons enable storage-provisioner-gluster
```
Within one minute, the addon manager should pick up the change and you should see several Pods in the `storage-gluster` namespace:
```
$ kubectl -n storage-gluster get pods
NAME                                      READY     STATUS              RESTARTS   AGE
glusterfile-provisioner-dbcbf54fc-726vv   1/1       Running             0          1m
glusterfs-rvdmz                           0/1       Running             0          40s
heketi-79997b9d85-42c49                   0/1       ContainerCreating   0          40s
```
Some of the Pods need a little more time to get up and running than others, but after a few minutes everything should be deployed and all Pods should be `READY`:
```
$ kubectl -n storage-gluster get pods
NAME                                      READY     STATUS    RESTARTS   AGE
glusterfile-provisioner-dbcbf54fc-726vv   1/1       Running   0          5m
glusterfs-rvdmz                           1/1       Running   0          4m
heketi-79997b9d85-42c49                   1/1       Running   1          4m
```
Once the Pods have status `Running`, the `glusterfile` StorageClass should have been marked as `default`:
```
$ kubectl get sc
NAME                    PROVISIONER               AGE
glusterfile (default)   gluster.org/glusterfile   3m
```
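Because `glusterfile` is now the default StorageClass, new PVCs do not need to reference it explicitly. As a minimal sketch (the `scratch` claim name is purely illustrative, not part of the addon), a claim with no `storageClassName` would also be served by Gluster:
```
---
# Hypothetical PVC relying on the default StorageClass: when no
# storageClassName is set, Kubernetes uses the class annotated with
# storageclass.kubernetes.io/is-default-class=true.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: scratch
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```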
### Creating PVCs
The storage in the Gluster environment is limited to 10 GiB. This is because the data is stored in the Minikube VM (a sparse file `/srv/fake-disk.img`).
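The 10 GiB come from a fake disk that the glusterfs DaemonSet sets up at startup. Judging by the commented-out defaults in `glusterfs-daemonset.yaml.tmpl` (the manifest appears later in this commit), the size could presumably be changed by overriding the `USE_FAKE_SIZE` environment variable; a sketch, assuming the gluster-centos image honors these variables:
```
# Hypothetical override of the fake-disk size in the glusterfs
# DaemonSet container; the 10G default matches the commented
# USE_FAKE_SIZE entry in the shipped manifest.
env:
- name: USE_FAKE_DISK
  value: "enabled"
- name: USE_FAKE_SIZE
  value: "20G"
```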
The following `yaml` creates a PVC, starts a CentOS developer Pod that generates a website and deploys an NGINX webserver that provides access to the website:
```
---
#
# Minimal PVC where a developer can build a website.
#
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: website
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi
  storageClassName: glusterfile
---
#
# This pod will just download a fortune phrase and store it (as plain text) in
# index.html on the PVC. This is how we create websites?
#
# The root of the website stored on the above PVC is mounted on /mnt.
#
apiVersion: v1
kind: Pod
metadata:
  name: centos-webdev
spec:
  containers:
  - image: centos:latest
    name: centos
    args:
    - curl
    - -o/mnt/index.html
    - https://api.ef.gy/fortune
    volumeMounts:
    - mountPath: /mnt
      name: website
  # once the website is created, the pod will exit
  restartPolicy: Never
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
---
#
# Start a NGINX webserver with the website.
# We'll skip creating a service, to keep things minimal.
#
apiVersion: v1
kind: Pod
metadata:
  name: website-nginx
spec:
  containers:
  - image: gcr.io/google_containers/nginx-slim:0.8
    name: nginx
    ports:
    - containerPort: 80
      name: web
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: website
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
```
Because the PVC has been created with the `ReadWriteMany` accessMode, both Pods can access the PVC at the same time. Other website developer Pods can use the same PVC to update the contents of the site.
The above configuration does not expose the website on the Minikube VM. One way to see the contents of the website is to SSH into the Minikube VM and fetch the website there:
```
$ kubectl get pods -o wide
NAME            READY     STATUS      RESTARTS   AGE       IP           NODE
centos-webdev   0/1       Completed   0          1m        172.17.0.9   minikube
website-nginx   1/1       Running     0          24s       172.17.0.9   minikube
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ curl http://172.17.0.9
I came, I saw, I deleted all your files.
$
```


@@ -1,140 +0,0 @@
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  namespace: storage-gluster
  name: glusterfs
  labels:
    glusterfs: daemonset
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
      glusterfs-node: pod
      k8s-app: storage-provisioner-gluster
  template:
    metadata:
      namespace: storage-gluster
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
        k8s-app: storage-provisioner-gluster
    spec:
      #nodeSelector:
      #  kubernetes.io/hostname: minikube
      hostNetwork: true
      containers:
      - image: {{.CustomRegistries.GlusterfsServer | default .ImageRepository | default .Registries.GlusterfsServer }}{{.Images.GlusterfsServer}}
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        - name: USE_FAKE_DISK
          value: "enabled"
        #- name: USE_FAKE_FILE
        #  value: "/srv/fake-disk.img"
        #- name: USE_FAKE_SIZE
        #  value: "10G"
        #- name: USE_FAKE_DEV
        #  value: "/dev/fake"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        # default location for fake-disk.img, it needs to be persistent
        - name: fake-disk
          mountPath: /srv
        # the fstab for the bricks is under /var/lib/heketi
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        #- name: glusterfs-etc
        #  mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        # glusterfind uses /var/lib/misc/glusterfsd, yuck
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: fake-disk
        hostPath:
          path: /srv
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"


@@ -1,163 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: storage-gluster
  name: heketi-service-account
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
  name: heketi-sa-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  namespace: storage-gluster
  name: heketi-service-account
---
kind: Service
apiVersion: v1
metadata:
  namespace: storage-gluster
  name: heketi
  labels:
    glusterfs: heketi-service
    heketi: service
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    glusterfs: heketi-pod
  ports:
  - name: heketi
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: storage-gluster
  name: heketi-topology
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
data:
  minikube.json: |+
    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "minikube"
                  ],
                  "storage": [
                    "172.17.0.1"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/fake"
              ]
            }
          ]
        }
      ]
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: storage-gluster
  name: heketi
  labels:
    glusterfs: heketi-deployment
    heketi: deployment
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      glusterfs: heketi-pod
      heketi: pod
      k8s-app: storage-provisioner-gluster
  template:
    metadata:
      namespace: storage-gluster
      name: heketi
      labels:
        glusterfs: heketi-pod
        heketi: pod
        k8s-app: storage-provisioner-gluster
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: {{.CustomRegistries.Heketi | default .ImageRepository | default .Registries.Heketi }}{{.Images.Heketi}}
        imagePullPolicy: IfNotPresent
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: '14'
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        - name: HEKETI_IGNORE_STALE_OPERATIONS
          value: "true"
        - name: HEKETI_GLUSTERAPP_LOGLEVEL
          value: "debug"
        # initial topology.json in case the db does not exist
        - name: HEKETI_TOPOLOGY_FILE
          value: "/etc/heketi/topology/minikube.json"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: "/var/lib/heketi"
        - name: initial-topology
          mountPath: "/etc/heketi/topology"
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: "/hello"
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: "/hello"
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/var/lib/heketi"
      - name: initial-topology
        configMap:
          name: heketi-topology


@@ -1,9 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: storage-gluster
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile


@@ -1,113 +0,0 @@
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfile
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: EnsureExists
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: gluster.org/glusterfile
reclaimPolicy: Delete
parameters:
  resturl: "http://heketi.storage-gluster.svc.cluster.local:8080"
  restuser: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: glusterfile-provisioner-runner
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["routes"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "create", "delete"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: storage-gluster
  name: glusterfile-provisioner
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: glusterfile-provisioner
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  kind: ClusterRole
  name: glusterfile-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  namespace: storage-gluster
  name: glusterfile-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: storage-gluster
  name: glusterfile-provisioner
  labels:
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
  annotations:
    description: Defines how to deploy the glusterfile provisioner pod.
spec:
  replicas: 1
  selector:
    matchLabels:
      glusterfs: file-provisioner-pod
      glusterfile: provisioner-pod
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: storage-gluster
      name: glusterfile-provisioner
      labels:
        glusterfs: file-provisioner-pod
        glusterfile: provisioner-pod
    spec:
      serviceAccountName: glusterfile-provisioner
      containers:
      - name: glusterfile-provisioner
        image: {{.CustomRegistries.GlusterfileProvisioner | default .ImageRepository | default .Registries.GlusterfileProvisioner }}{{.Images.GlusterfileProvisioner}}
        imagePullPolicy: IfNotPresent
        env:
        - name: PROVISIONER_NAME
          value: gluster.org/glusterfile


@@ -37,10 +37,7 @@ func enableOrDisableStorageClasses(cc *config.ClusterConfig, name string, val st
	}
	class := defaultStorageClassProvisioner
	switch name {
	case "storage-provisioner-gluster":
		class = "glusterfile"
	case "storage-provisioner-rancher":
	if name == "storage-provisioner-rancher" {
		class = "local-path"
	}


@@ -169,11 +169,6 @@ var Addons = []*Addon{
		set:       SetBool,
		callbacks: []setFn{EnableOrDisableAddon},
	},
	{
		name:      "storage-provisioner-gluster",
		set:       SetBool,
		callbacks: []setFn{enableOrDisableStorageClasses},
	},
	{
		name:      "storage-provisioner-rancher",
		set:       SetBool,


@@ -184,36 +184,6 @@ var Addons = map[string]*Addon{
	}, map[string]string{
		"StorageProvisioner": "gcr.io",
	}),
	"storage-provisioner-gluster": NewAddon([]*BinAsset{
		MustBinAsset(addons.StorageProvisionerGlusterAssets,
			"storage-provisioner-gluster/storage-gluster-ns.yaml",
			vmpath.GuestAddonsDir,
			"storage-gluster-ns.yaml",
			"0640"),
		MustBinAsset(addons.StorageProvisionerGlusterAssets,
			"storage-provisioner-gluster/glusterfs-daemonset.yaml.tmpl",
			vmpath.GuestAddonsDir,
			"glusterfs-daemonset.yaml",
			"0640"),
		MustBinAsset(addons.StorageProvisionerGlusterAssets,
			"storage-provisioner-gluster/heketi-deployment.yaml.tmpl",
			vmpath.GuestAddonsDir,
			"heketi-deployment.yaml",
			"0640"),
		MustBinAsset(addons.StorageProvisionerGlusterAssets,
			"storage-provisioner-gluster/storage-provisioner-glusterfile.yaml.tmpl",
			vmpath.GuestAddonsDir,
			"storage-provisioner-glusterfile.yaml",
			"0640"),
	}, false, "storage-provisioner-gluster", "3rd party (Gluster)", "", "", map[string]string{
		"Heketi":                 "heketi/heketi:10@sha256:76d5a6a3b7cf083d1e99efa1c15abedbc5c8b73bef3ade299ce9a4c16c9660f8",
		"GlusterfileProvisioner": "gluster/glusterfile-provisioner:latest@sha256:9961a35cb3f06701958e202324141c30024b195579e5eb1704599659ddea5223",
		"GlusterfsServer":        "gluster/gluster-centos:latest@sha256:8167034b9abf2d16581f3f4571507ce7d716fb58b927d7627ef72264f802e908",
	}, map[string]string{
		"Heketi":                 "docker.io",
		"GlusterfsServer":        "docker.io",
		"GlusterfileProvisioner": "docker.io",
	}),
	"storage-provisioner-rancher": NewAddon([]*BinAsset{
		MustBinAsset(addons.StorageProvisionerRancherAssets,
			"storage-provisioner-rancher/storage-provisioner-rancher.yaml.tmpl",


@@ -210,7 +210,6 @@ var (
	DefaultNamespaces = []string{
		"kube-system",
		"kubernetes-dashboard",
		"storage-gluster",
		"istio-operator",
	}


@@ -21,7 +21,7 @@ minikube pause [flags]
```
  -A, --all-namespaces       If set, pause all namespaces
  -n, --namespaces strings   namespaces to pause (default [kube-system,kubernetes-dashboard,storage-gluster,istio-operator])
  -n, --namespaces strings   namespaces to pause (default [kube-system,kubernetes-dashboard,istio-operator])
  -o, --output string        Format to print stdout in. Options include: [text,json] (default "text")
```


@@ -25,7 +25,7 @@ minikube unpause [flags]
```
  -A, --all-namespaces       If set, unpause all namespaces
  -n, --namespaces strings   namespaces to unpause (default [kube-system,kubernetes-dashboard,storage-gluster,istio-operator])
  -n, --namespaces strings   namespaces to unpause (default [kube-system,kubernetes-dashboard,istio-operator])
  -o, --output string        Format to print stdout in. Options include: [text,json] (default "text")
```


@@ -1,147 +0,0 @@
---
title: "Using the Storage Provisioner Gluster Addon"
linkTitle: "Storage Provisioner Gluster"
weight: 1
date: 2019-01-09
---
## storage-provisioner-gluster addon
[Gluster](https://gluster.org/) is a scalable network filesystem; this addon uses it to provide dynamic provisioning of PersistentVolumeClaims.
### Starting Minikube
This addon works within Minikube, without any additional configuration.
```shell
$ minikube start
```
### Enabling storage-provisioner-gluster
To enable this addon, simply run:
```
$ minikube addons enable storage-provisioner-gluster
```
Within one minute, the addon manager should pick up the change and you should see several Pods in the `storage-gluster` namespace:
```
$ kubectl -n storage-gluster get pods
NAME                                      READY     STATUS              RESTARTS   AGE
glusterfile-provisioner-dbcbf54fc-726vv   1/1       Running             0          1m
glusterfs-rvdmz                           0/1       Running             0          40s
heketi-79997b9d85-42c49                   0/1       ContainerCreating   0          40s
```
Some of the Pods need a little more time to get up and running than others, but after a few minutes everything should be deployed and all Pods should be `READY`:
```
$ kubectl -n storage-gluster get pods
NAME                                      READY     STATUS    RESTARTS   AGE
glusterfile-provisioner-dbcbf54fc-726vv   1/1       Running   0          5m
glusterfs-rvdmz                           1/1       Running   0          4m
heketi-79997b9d85-42c49                   1/1       Running   1          4m
```
Once the Pods have status `Running`, the `glusterfile` StorageClass should have been marked as `default`:
```
$ kubectl get sc
NAME                    PROVISIONER               AGE
glusterfile (default)   gluster.org/glusterfile   3m
```
### Creating PVCs
The storage in the Gluster environment is limited to 10 GiB. This is because the data is stored in the Minikube VM (a sparse file `/srv/fake-disk.img`).
The following `yaml` creates a PVC, starts a CentOS developer Pod that generates a website and deploys an NGINX webserver that provides access to the website:
```
---
#
# Minimal PVC where a developer can build a website.
#
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: website
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi
  storageClassName: glusterfile
---
#
# This pod will just download a fortune phrase and store it (as plain text) in
# index.html on the PVC. This is how we create websites?
#
# The root of the website stored on the above PVC is mounted on /mnt.
#
apiVersion: v1
kind: Pod
metadata:
  name: centos-webdev
spec:
  containers:
  - image: centos:latest
    name: centos
    args:
    - curl
    - -o/mnt/index.html
    - https://api.ef.gy/fortune
    volumeMounts:
    - mountPath: /mnt
      name: website
  # once the website is created, the pod will exit
  restartPolicy: Never
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
---
#
# Start a NGINX webserver with the website.
# We'll skip creating a service, to keep things minimal.
#
apiVersion: v1
kind: Pod
metadata:
  name: website-nginx
spec:
  containers:
  - image: gcr.io/google_containers/nginx-slim:0.8
    name: nginx
    ports:
    - containerPort: 80
      name: web
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: website
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
```
Because the PVC has been created with the `ReadWriteMany` accessMode, both Pods can access the PVC at the same time. Other website developer Pods can use the same PVC to update the contents of the site.
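For instance, a second writer Pod could mount the same claim and refresh the content while NGINX keeps serving it; a minimal sketch (the `website-updater` name is illustrative, not part of the addon):
```
---
# Hypothetical second writer: ReadWriteMany allows this Pod to mount
# the claim while website-nginx is still using it.
apiVersion: v1
kind: Pod
metadata:
  name: website-updater
spec:
  containers:
  - image: centos:latest
    name: centos
    args:
    - curl
    - -o/mnt/index.html
    - https://api.ef.gy/fortune
    volumeMounts:
    - mountPath: /mnt
      name: website
  restartPolicy: Never
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
```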
The above configuration does not expose the website on the Minikube VM. One way to see the contents of the website is to SSH into the Minikube VM and fetch the website there:
```
$ kubectl get pods -o wide
NAME            READY     STATUS      RESTARTS   AGE       IP           NODE
centos-webdev   0/1       Completed   0          1m        172.17.0.9   minikube
website-nginx   1/1       Running     0          24s       172.17.0.9   minikube
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ curl http://172.17.0.9
I came, I saw, I deleted all your files.
$
```