diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md
index 4ea6df948ac..2d2ad256786 100644
--- a/docs/concepts/storage/volumes.md
+++ b/docs/concepts/storage/volumes.md
@@ -65,33 +65,118 @@ mount each volume.
 
 Kubernetes supports several types of Volumes:
 
- * `emptyDir`
- * `hostPath`
- * `gcePersistentDisk`
  * `awsElasticBlockStore`
- * `nfs`
- * `iscsi`
- * `fc (fibre channel)`
- * `flocker`
- * `glusterfs`
- * `rbd`
- * `cephfs`
- * `gitRepo`
- * `secret`
- * `persistentVolumeClaim`
- * `downwardAPI`
- * `projected`
- * `azureFileVolume`
  * `azureDisk`
- * `vsphereVolume`
- * `Quobyte`
- * `PortworxVolume`
- * `ScaleIO`
- * `StorageOS`
+ * `azureFile`
+ * `cephfs`
+ * `downwardAPI`
+ * `emptyDir`
+ * `fc` (fibre channel)
+ * `flocker`
+ * `gcePersistentDisk`
+ * `gitRepo`
+ * `glusterfs`
+ * `hostPath`
+ * `iscsi`
  * `local`
+ * `nfs`
+ * `persistentVolumeClaim`
+ * `portworxVolume`
+ * `projected`
+ * `quobyte`
+ * `rbd`
+ * `scaleIO`
+ * `secret`
+ * `storageos`
+ * `vsphereVolume`
 
 We welcome additional contributions.
 
+### awsElasticBlockStore
+
+An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS
+Volume](http://aws.amazon.com/ebs/) into your pod. Unlike
+`emptyDir`, which is erased when a Pod is removed, the contents of an EBS
+volume are preserved and the volume is merely unmounted. This means that an
+EBS volume can be pre-populated with data, and that data can be "handed off"
+between pods.
+
+**Important:** You must create an EBS volume using `aws ec2 create-volume` or the AWS API before you can use it.
+{: .caution}
+
+There are some restrictions when using an `awsElasticBlockStore` volume:
+
+* the nodes on which pods are running must be AWS EC2 instances
+* those instances need to be in the same region and availability zone as the EBS volume
+* EBS only supports a single EC2 instance mounting a volume
+
+#### Creating an EBS volume
+
+Before you can use an EBS volume with a pod, you need to create it.
+
+```shell
+aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
+```
+
+Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume
+type are suitable for your use!)
+
+#### AWS EBS Example configuration
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-ebs
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /test-ebs
+      name: test-volume
+  volumes:
+  - name: test-volume
+    # This AWS EBS volume must already exist.
+    awsElasticBlockStore:
+      volumeID: <volume-id>
+      fsType: ext4
+```
+
+### azureDisk
+
+An `azureDisk` volume is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod.
+
+More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_disk/README.md).
+
+### azureFile
+
+An `azureFile` volume is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
+into a Pod.
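+
+For illustration, here is a minimal pod sketch that mounts an Azure File share.
+The `azure-secret` Secret (holding the storage account name and key) and the
+`k8stest` share name are illustrative and must already exist:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-azurefile
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /test-azurefile
+      name: test-volume
+  volumes:
+  - name: test-volume
+    azureFile:
+      # Illustrative names; the secret and the share must already exist.
+      secretName: azure-secret
+      shareName: k8stest
+      readOnly: false
+```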
+
+More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_file/README.md).
+
+### cephfs
+
+A `cephfs` volume allows an existing CephFS volume to be
+mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
+removed, the contents of a `cephfs` volume are preserved and the volume is merely
+unmounted. This means that a CephFS volume can be pre-populated with data, and
+that data can be "handed off" between pods. CephFS can be mounted by multiple
+writers simultaneously.
+
+**Important:** You must have your own Ceph server running with the share exported before you can use it.
+{: .caution}
+
+See the [CephFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/cephfs/) for more details.
+
+### downwardAPI
+
+A `downwardAPI` volume is used to make downward API data available to applications.
+It mounts a directory and writes the requested data in plain text files.
+
+See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details.
+
 ### emptyDir
 
 An `emptyDir` volume is first created when a Pod is assigned to a Node, and
@@ -138,6 +223,132 @@ spec:
     emptyDir: {}
 ```
 
+### fc (fibre channel)
+
+An `fc` volume allows an existing fibre channel volume to be mounted in a pod.
+You can specify single or multiple target World Wide Names (WWNs) using the
+`targetWWNs` parameter in your volume configuration. If multiple WWNs are
+specified, they are expected to come from multi-path connections.
+
+**Important:** You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
+{: .caution}
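+
+As a sketch, a pod might reference such a volume as shown below; the WWNs and
+the LUN number are illustrative values:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-fc
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /test-fc
+      name: test-volume
+  volumes:
+  - name: test-volume
+    fc:
+      # Illustrative values; the LUN must already be zoned to these WWNs.
+      targetWWNs: ["500a0982991b8dc5", "500a0982891b8dc5"]
+      lun: 2
+      fsType: ext4
+```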
+
+See the [FC example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/fibre_channel) for more details.
+
+### flocker
+
+[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management
+and orchestration of data volumes backed by a variety of storage backends.
+
+A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the
+dataset does not already exist in Flocker, it must first be created with the
+Flocker CLI or the Flocker API. If the dataset already exists, it will be
+reattached by Flocker to the node on which the pod is scheduled. This means data
+can be "handed off" between pods as required.
+
+**Important:** You must have your own Flocker installation running before you can use it.
+{: .caution}
+
+See the [Flocker example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/flocker) for more details.
+
+### gcePersistentDisk
+
+A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent
+Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike
+`emptyDir`, which is erased when a Pod is removed, the contents of a PD are
+preserved and the volume is merely unmounted. This means that a PD can be
+pre-populated with data, and that data can be "handed off" between pods.
+
+**Important:** You must create a PD using `gcloud` or the GCE API or UI before you can use it.
+{: .caution}
+
+There are some restrictions when using a `gcePersistentDisk`:
+
+* the nodes on which pods are running must be GCE VMs
+* those VMs need to be in the same GCE project and zone as the PD
+
+A feature of PDs is that they can be mounted as read-only by multiple consumers
+simultaneously. This means that you can pre-populate a PD with your dataset
+and then serve it in parallel from as many pods as you need. Unfortunately,
+PDs can only be mounted by a single consumer in read-write mode - no
+simultaneous writers allowed.
+
+Using a PD on a pod controlled by a ReplicationController will fail unless
+the PD is read-only or the replica count is 0 or 1.
+
+#### Creating a PD
+
+Before you can use a GCE PD with a pod, you need to create it.
+
+```shell
+gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
+```
+
+#### Example pod
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-pd
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /test-pd
+      name: test-volume
+  volumes:
+  - name: test-volume
+    # This GCE PD must already exist.
+    gcePersistentDisk:
+      pdName: my-data-disk
+      fsType: ext4
+```
+
+### gitRepo
+
+A `gitRepo` volume is an example of what can be done as a volume plugin. It
+mounts an empty directory and clones a git repository into it for your pod to
+use. In the future, such volumes may be moved to an even more decoupled model,
+rather than extending the Kubernetes API for every such use case.
+
+Here is an example of a `gitRepo` volume:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: server
+spec:
+  containers:
+  - image: nginx
+    name: nginx
+    volumeMounts:
+    - mountPath: /mypath
+      name: git-volume
+  volumes:
+  - name: git-volume
+    gitRepo:
+      repository: "git@somewhere:me/my-git-repository.git"
+      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
+```
+
+### glusterfs
+
+A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open
+source networked filesystem) volume to be mounted into your pod. Unlike
+`emptyDir`, which is erased when a Pod is removed, the contents of a
+`glusterfs` volume are preserved and the volume is merely unmounted. This
+means that a glusterfs volume can be pre-populated with data, and that data can
+be "handed off" between pods. GlusterFS can be mounted by multiple writers
+simultaneously.
+
+**Important:** You must have your own GlusterFS installation running before you can use it.
+{: .caution}
+
+See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/glusterfs) for more details.
+
 ### hostPath
 
 A `hostPath` volume mounts a file or directory from the host node's filesystem
@@ -202,126 +413,6 @@ spec:
       type: Directory
 ```
 
-### gcePersistentDisk
-
-A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent
-Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike
-`emptyDir`, which is erased when a Pod is removed, the contents of a PD are
-preserved and the volume is merely unmounted. This means that a PD can be
-pre-populated with data, and that data can be "handed off" between pods.
-
-**Important:** You must create a PD using `gcloud` or the GCE API or UI before you can use it.
-{: .caution}
-
-There are some restrictions when using a `gcePersistentDisk`:
-
-* the nodes on which pods are running must be GCE VMs
-* those VMs need to be in the same GCE project and zone as the PD
-
-A feature of PD is that they can be mounted as read-only by multiple consumers
-simultaneously. This means that you can pre-populate a PD with your dataset
-and then serve it in parallel from as many pods as you need. Unfortunately,
-PDs can only be mounted by a single consumer in read-write mode - no
-simultaneous writers allowed.
-
-Using a PD on a pod controlled by a ReplicationController will fail unless
-the PD is read-only or the replica count is 0 or 1.
-
-#### Creating a PD
-
-Before you can use a GCE PD with a pod, you need to create it.
- -```shell -gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk -``` - -#### Example pod - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: test-pd -spec: - containers: - - image: gcr.io/google_containers/test-webserver - name: test-container - volumeMounts: - - mountPath: /test-pd - name: test-volume - volumes: - - name: test-volume - # This GCE PD must already exist. - gcePersistentDisk: - pdName: my-data-disk - fsType: ext4 -``` - -### awsElasticBlockStore - -An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS -Volume](http://aws.amazon.com/ebs/) into your pod. Unlike -`emptyDir`, which is erased when a Pod is removed, the contents of an EBS -volume are preserved and the volume is merely unmounted. This means that an -EBS volume can be pre-populated with data, and that data can be "handed off" -between pods. - -**Important:** You must create an EBS volume using `aws ec2 create-volume` or the AWS API before you can use it. -{: .caution} - -There are some restrictions when using an awsElasticBlockStore volume: - -* the nodes on which pods are running must be AWS EC2 instances -* those instances need to be in the same region and availability-zone as the EBS volume -* EBS only supports a single EC2 instance mounting a volume - -#### Creating an EBS volume - -Before you can use an EBS volume with a pod, you need to create it. - -```shell -aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2 -``` - -Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume -type are suitable for your use!) - -#### AWS EBS Example configuration - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: test-ebs -spec: - containers: - - image: gcr.io/google_containers/test-webserver - name: test-container - volumeMounts: - - mountPath: /test-ebs - name: test-volume - volumes: - - name: test-volume - # This AWS EBS volume must already exist. - awsElasticBlockStore: - volumeID: - fsType: ext4 -``` - -### nfs - -An `nfs` volume allows an existing NFS (Network File System) share to be -mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is -removed, the contents of an `nfs` volume are preserved and the volume is merely -unmounted. This means that an NFS volume can be pre-populated with data, and -that data can be "handed off" between pods. NFS can be mounted by multiple -writers simultaneously. - -**Important:** You must have your own NFS server running with the share exported before you can use it. -{: .caution} - -See the [NFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/nfs) for more details. - ### iscsi An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted @@ -341,123 +432,71 @@ simultaneous writers allowed. See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/iscsi) for more details. -### fc (fibre channel) +### local -An `fc` volume allows an existing fibre channel volume to be mounted in a pod. -You can specify single or multiple target World Wide Names using the parameter -`targetWWNs` in your volume configuration. If multiple WWNs are specified, -targetWWNs expect that those WWNs are from multi-path connections. +This volume type is alpha in 1.7. -**Important:** You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them. 
-{: .caution} +A `local` volume represents a mounted local storage device such as a disk, +partition or directory. -See the [FC example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/fibre_channel) for more details. +Local volumes can only be used as a statically created PersistentVolume. -### flocker +Compared to HostPath volumes, local volumes can be used in a durable manner +without manually scheduling pods to nodes, as the system is aware of the volume's +node constraints. -[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management -and orchestration of data volumes backed by a variety of storage backends. +However, local volumes are still subject to the availability of the underlying +node and are not suitable for all applications. -A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the -dataset does not already exist in Flocker, it needs to be first created with the Flocker -CLI or by using the Flocker API. If the dataset already exists it will be -reattached by Flocker to the node that the pod is scheduled. This means data -can be "handed off" between pods as required. +The following is an example PersistentVolume spec using a `local` volume: -**Important:** You must have your own Flocker installation running before you can use it. -{: .caution} - -See the [Flocker example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/flocker) for more details. - -### glusterfs - -A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open -source networked filesystem) volume to be mounted into your pod. Unlike -`emptyDir`, which is erased when a Pod is removed, the contents of a -`glusterfs` volume are preserved and the volume is merely unmounted. This -means that a glusterfs volume can be pre-populated with data, and that data can -be "handed off" between pods. GlusterFS can be mounted by multiple writers -simultaneously. - -**Important:** You must have your own GlusterFS installation running before you can use it. -{: .caution} - -See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/glusterfs) for more details. - -### rbd - -An `rbd` volume allows a [Rados Block -Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your -pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of -a `rbd` volume are preserved and the volume is merely unmounted. This -means that a RBD volume can be pre-populated with data, and that data can -be "handed off" between pods. - -**Important:** You must have your own Ceph installation running before you can use RBD. -{: .caution} - -A feature of RBD is that it can be mounted as read-only by multiple consumers -simultaneously. This means that you can pre-populate a volume with your dataset -and then serve it in parallel from as many pods as you need. Unfortunately, -RBD volumes can only be mounted by a single consumer in read-write mode - no -simultaneous writers allowed. - -See the [RBD example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/rbd) for more details. - -### cephfs - -A `cephfs` volume allows an existing CephFS volume to be -mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is -removed, the contents of a `cephfs` volume are preserved and the volume is merely -unmounted. 
-This means that a CephFS volume can be pre-populated with data, and
-that data can be "handed off" between pods. CephFS can be mounted by multiple
-writers simultaneously.
-
-**Important:** You must have your own Ceph server running with the share exported before you can use it.
-{: .caution}
-
-See the [CephFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/cephfs/) for more details.
-
-### gitRepo
-
-A `gitRepo` volume is an example of what can be done as a volume plugin. It
-mounts an empty directory and clones a git repository into it for your pod to
-use. In the future, such volumes may be moved to an even more decoupled model,
-rather than extending the Kubernetes API for every such use case.
-
-Here is an example for gitRepo volume:
-
-```yaml
+``` yaml
 apiVersion: v1
-kind: Pod
+kind: PersistentVolume
 metadata:
-  name: server
+  name: example-pv
+  annotations:
+    "volume.alpha.kubernetes.io/node-affinity": '{
+      "requiredDuringSchedulingIgnoredDuringExecution": {
+        "nodeSelectorTerms": [
+          { "matchExpressions": [
+            { "key": "kubernetes.io/hostname",
+              "operator": "In",
+              "values": ["example-node"]
+            }
+          ]}
+        ]}
+      }'
 spec:
-  containers:
-  - image: nginx
-    name: nginx
-    volumeMounts:
-    - mountPath: /mypath
-      name: git-volume
-  volumes:
-  - name: git-volume
-    gitRepo:
-      repository: "git@somewhere:me/my-git-repository.git"
-      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
+  capacity:
+    storage: 100Gi
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Delete
+  storageClassName: local-storage
+  local:
+    path: /mnt/disks/ssd1
 ```
 
-### secret
+**Note:** Without the external provisioner, cleanup and deletion of the local PersistentVolume requires manual intervention.
+{: .note}
 
-A `secret` volume is used to pass sensitive information, such as passwords, to
-pods. You can store secrets in the Kubernetes API and mount them as files for
-use by pods without coupling to Kubernetes directly. `secret` volumes are
-backed by tmpfs (a RAM-backed filesystem) so they are never written to
-non-volatile storage.
+For details on the `local` volume type, see the [Local Persistent Storage
+user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume).
 
-**Important:** You must create a secret in the Kubernetes API before you can use it.
+### nfs
 
-Secrets are described in more detail [here](/docs/user-guide/secrets).
+An `nfs` volume allows an existing NFS (Network File System) share to be
+mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
+removed, the contents of an `nfs` volume are preserved and the volume is merely
+unmounted. This means that an NFS volume can be pre-populated with data, and
+that data can be "handed off" between pods. NFS can be mounted by multiple
+writers simultaneously.
+
+**Important:** You must have your own NFS server running with the share exported before you can use it.
 {: .caution}
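+
+As a sketch, a pod can then mount the share directly; the server name and
+export path below are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-nfs
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /test-nfs
+      name: test-volume
+  volumes:
+  - name: test-volume
+    nfs:
+      # Illustrative values; the server must already export this path.
+      server: nfs-server.example.com
+      path: /exports
+```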
 
+See the [NFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/nfs) for more details.
 
 ### persistentVolumeClaim
 
@@ -469,13 +508,6 @@ iSCSI volume) without knowing the details of the particular cloud environment.
 
 See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) for more details.
 
-### downwardAPI
-
-A `downwardAPI` volume is used to make downward API data available to applications.
-It mounts a directory and writes the requested data in plain text files.
-
-See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details.
-
 ### projected
 
 A `projected` volume maps several existing volume sources into the same directory.
@@ -564,27 +596,189 @@ Each projected volume source is listed in the spec under `sources`. The
 parameters are nearly the same with two exceptions:
 
 * For secrets, the `secretName` field has been changed to `name` to be consistent
-with ConfigMap naming.
+  with ConfigMap naming.
 * The `defaultMode` can only be specified at the projected level and not for each
-volume source. However, as illustrated above, you can explicitly set the `mode`
-for each individual projection.
+  volume source. However, as illustrated above, you can explicitly set the `mode`
+  for each individual projection.
 
-### AzureFileVolume
+### portworxVolume
 
-A `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
-into a Pod.
+A `portworxVolume` is an elastic block storage layer that runs hyperconverged with
+Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities,
+and aggregates capacity across multiple servers. Portworx runs in-guest in virtual
+machines or on bare metal Linux nodes.
 
-More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_file/README.md).
+A `portworxVolume` can be dynamically created through Kubernetes, or it can be
+pre-provisioned and referenced inside a Kubernetes pod.
+Here is an example pod referencing a pre-provisioned PortworxVolume:
 
-### AzureDiskVolume
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-portworx-volume-pod
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /mnt
+      name: pxvol
+  volumes:
+  - name: pxvol
+    # This Portworx volume must already exist.
+    portworxVolume:
+      volumeID: "pxvol"
+      fsType: ""
+```
 
-A `AzureDiskVolume` is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod.
+**Important:** Make sure you have an existing PortworxVolume with name `pxvol`
+before using it in the pod.
+{: .caution}
 
-More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_disk/README.md).
+More details and examples can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/portworx/README.md).
+
+### quobyte
+
+A `quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to
+be mounted into your pod.
+
+**Important:** You must have your own Quobyte setup running with the volumes
+created before you can use it.
+{: .caution}
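+
+As a sketch, a pod might reference a pre-created Quobyte volume as shown below;
+the registry address and volume name are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-quobyte
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /test-quobyte
+      name: test-volume
+  volumes:
+  - name: test-volume
+    quobyte:
+      # Illustrative values; the registry and the volume must already exist.
+      registry: registry:7861
+      volume: testVolume
+```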
+
+See the [Quobyte example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/quobyte) for more details.
+
+### rbd
+
+An `rbd` volume allows a [Rados Block
+Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
+pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of
+an `rbd` volume are preserved and the volume is merely unmounted. This
+means that an RBD volume can be pre-populated with data, and that data can
+be "handed off" between pods.
+
+**Important:** You must have your own Ceph installation running before you can use RBD.
+{: .caution}
+
+A feature of RBD is that it can be mounted as read-only by multiple consumers
+simultaneously. This means that you can pre-populate a volume with your dataset
+and then serve it in parallel from as many pods as you need. Unfortunately,
+RBD volumes can only be mounted by a single consumer in read-write mode - no
+simultaneous writers allowed.
+
+See the [RBD example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/rbd) for more details.
+
+### scaleIO
+
+ScaleIO is a software-based storage platform that can use existing hardware to
+create clusters of scalable shared block networked storage. The `scaleIO` volume
+plugin allows deployed pods to access existing ScaleIO
+volumes (or it can dynamically provision new volumes for persistent volume claims, see
+[ScaleIO Persistent Volumes](/docs/concepts/storage/persistent-volumes/#scaleio)).
+
+**Important:** You must have an existing ScaleIO cluster already set up and
+running with the volumes created before you can use them.
+{: .caution}
+
+The following is an example pod configuration with ScaleIO:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-0
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: pod-0
+    volumeMounts:
+    - mountPath: /test-pd
+      name: vol-0
+  volumes:
+  - name: vol-0
+    scaleIO:
+      gateway: https://localhost:443/api
+      system: scaleio
+      protectionDomain: sd0
+      storagePool: sp1
+      volumeName: vol-0
+      secretRef:
+        name: sio-secret
+      fsType: xfs
+```
+
+For further detail, please see the [ScaleIO examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/scaleio).
+
+### secret
+
+A `secret` volume is used to pass sensitive information, such as passwords, to
+pods. You can store secrets in the Kubernetes API and mount them as files for
+use by pods without coupling to Kubernetes directly. `secret` volumes are
+backed by tmpfs (a RAM-backed filesystem) so they are never written to
+non-volatile storage.
+
+**Important:** You must create a secret in the Kubernetes API before you can use it.
+{: .caution}
+
+Secrets are described in more detail [here](/docs/user-guide/secrets).
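+
+For example, a minimal sketch that mounts an existing secret named `mysecret`
+as read-only files under `/etc/foo`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-secret
+spec:
+  containers:
+  - image: gcr.io/google_containers/test-webserver
+    name: test-container
+    volumeMounts:
+    - mountPath: /etc/foo
+      name: secret-volume
+      readOnly: true
+  volumes:
+  - name: secret-volume
+    secret:
+      # The `mysecret` Secret must already exist in the pod's namespace.
+      secretName: mysecret
+```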
+
+### storageos
+
+A `storageos` volume allows an existing [StorageOS](https://www.storageos.com)
+volume to be mounted into your pod.
+
+StorageOS runs as a container within your Kubernetes environment, making local
+or attached storage accessible from any node within the Kubernetes cluster.
+Data can be replicated to protect against node failure. Thin provisioning and
+compression can improve utilization and reduce cost.
+
+At its core, StorageOS provides block storage to containers, accessible via a file system.
+
+The StorageOS container requires 64-bit Linux and has no additional dependencies.
+A free developer license is available.
+
+**Important:** You must run the StorageOS container on each node that wants to
+access StorageOS volumes or that will contribute storage capacity to the pool.
+For installation instructions, consult the
+[StorageOS documentation](https://docs.storageos.com).
+{: .caution}
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  labels:
+    name: redis
+    role: master
+  name: test-storageos-redis
+spec:
+  containers:
+  - name: master
+    image: kubernetes/redis:v1
+    env:
+    - name: MASTER
+      value: "true"
+    ports:
+    - containerPort: 6379
+    volumeMounts:
+    - mountPath: /redis-master-data
+      name: redis-data
+  volumes:
+  - name: redis-data
+    storageos:
+      # The `redis-vol01` volume must already exist within StorageOS in the `default` namespace.
+      volumeName: redis-vol01
+      fsType: ext4
+```
+
+For more information including Dynamic Provisioning and Persistent Volume Claims, please see the
+[StorageOS examples](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/storageos).
 
 ### vsphereVolume
 
-**Prerequisite:** Kubernetes with vSphere Cloud Provider configured. For cloudprovider configuration please refer [vSphere getting started guide](/docs/getting-started-guides/vsphere/).
+**Prerequisite:** Kubernetes with vSphere Cloud Provider configured. For cloud provider
+configuration, please refer to the [vSphere getting started guide](/docs/getting-started-guides/vsphere/).
 {: .note}
 
 A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents
@@ -638,185 +832,10 @@ spec:
       volumePath: "[DatastoreName] volumes/myDisk"
       fsType: ext4
 ```
+
 More examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere).
 
-### Quobyte
-
-A `Quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to be mounted into your pod.
-
-**Important:** You must have your own Quobyte setup running with the volumes created before you can use it.
-{: .caution}
-
-See the [Quobyte example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/quobyte) for more details.
-
-### PortworxVolume
-A `PortworxVolume` is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a
-server, tiers based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare metal
-Linux nodes.
-
-A `PortworxVolume` can be dynamically created through Kubernetes or it can also be pre-provisioned and referenced inside a Kubernetes pod.
-Here is an example pod referencing a pre-provisioned PortworxVolume:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: test-portworx-volume-pod
-spec:
-  containers:
-  - image: gcr.io/google_containers/test-webserver
-    name: test-container
-    volumeMounts:
-    - mountPath: /mnt
-      name: pxvol
-  volumes:
-  - name: pxvol
-    # This Portworx volume must already exist.
-    portworxVolume:
-      volumeID: "pxvol"
-      fsType: ""
-```
-
-**Important:** Make sure you have an existing PortworxVolume with name `pxvol` before using it in the pod.
-{: .caution}
-
-More details and examples can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/portworx/README.md).
-
-### ScaleIO
-ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable
-shared block networked storage. The ScaleIO volume plugin allows deployed pods to access existing ScaleIO
-volumes (or it can dynamically provision new volumes for persistent volume claims, see
-[ScaleIO Persistent Volumes](/docs/concepts/storage/persistent-volumes/#scaleio)).
-
-**Important:** You must have an existing ScaleIO cluster already setup and running with the volumes created before you can use them.
-{: .caution} - -The following is an example pod configuration with ScaleIO: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: pod-0 -spec: - containers: - - image: gcr.io/google_containers/test-webserver - name: pod-0 - volumeMounts: - - mountPath: /test-pd - name: vol-0 - volumes: - - name: vol-0 - scaleIO: - gateway: https://localhost:443/api - system: scaleio - protectionDomain: sd0 - storagePool: sp1 - volumeName: vol-0 - secretRef: - name: sio-secret - fsType: xfs -``` - -For further detail, please the see the [ScaleIO examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/scaleio). - -### StorageOS -A `storageos` volume allows an existing [StorageOS](https://www.storageos.com) volume to be mounted into your pod. - -StorageOS runs as a container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost. - -At its core, StorageOS provides block storage to containers, accessible via a file system. - -The StorageOS container requires 64-bit Linux and has no additional dependencies. A free developer licence is available. - -**Important:** You must run the StorageOS container on each node that wants to access StorageOS volumes or that will contribute storage capacity to the pool. For installation instructions, consult the [StorageOS documentation](https://docs.storageos.com). -{: .caution} - -```yaml -apiVersion: v1 -kind: Pod -metadata: - labels: - name: redis - role: master - name: test-storageos-redis -spec: - containers: - - name: master - image: kubernetes/redis:v1 - env: - - name: MASTER - value: "true" - ports: - - containerPort: 6379 - volumeMounts: - - mountPath: /redis-master-data - name: redis-data - volumes: - - name: redis-data - storageos: - # The `redis-vol01` volume must already exist within StorageOS in the `default` namespace. - volumeName: redis-vol01 - fsType: ext4 -``` - -For more information including Dynamic Provisioning and Persistent Volume Claims, please see the -[StorageOS examples](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/storageos). - - -### local - -This volume type is alpha in 1.7. - -A `local` volume represents a mounted local storage device such as a disk, -partition or directory. - -Local volumes can only be used as a statically created PersistentVolume. - -Compared to HostPath volumes, local volumes can be used in a durable manner -without manually scheduling pods to nodes, as the system is aware of the volume's -node constraints. - -However, local volumes are still subject to the availability of the underlying -node and are not suitable for all applications. - -The following is an example PersistentVolume spec using a `local` volume: - -``` yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: example-pv - annotations: - "volume.alpha.kubernetes.io/node-affinity": '{ - "requiredDuringSchedulingIgnoredDuringExecution": { - "nodeSelectorTerms": [ - { "matchExpressions": [ - { "key": "kubernetes.io/hostname", - "operator": "In", - "values": ["example-node"] - } - ]} - ]} - }' -spec: - capacity: - storage: 100Gi - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Delete - storageClassName: local-storage - local: - path: /mnt/disks/ssd1 -``` - -**Note:** The local PersistentVolume cleanup and deletion requires manual intervention without the external provisioner. 
-{: .note} - -For details on the `local` volume type, see the [Local Persistent Storage -user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume). - ## Using subPath Sometimes, it is useful to share one volume for multiple uses in a single pod. The `volumeMounts.subPath`