Reorg install and plugin docs (#1916)

* Reorg plugin docs

Signed-off-by: Carlisia <carlisia@vmware.com>

* Improve install docs

Signed-off-by: Carlisia <carlisia@vmware.com>

* Change path

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix broken links

Signed-off-by: Carlisia <carlisia@vmware.com>

* Address more feedback

Signed-off-by: Carlisia <carlisia@vmware.com>

* One more fix

Signed-off-by: Carlisia <carlisia@vmware.com>

* Minor changes to address feedback

Signed-off-by: Carlisia <carlisia@vmware.com>

* More fixes

Signed-off-by: Carlisia <carlisia@vmware.com>
pull/1810/head
KubeKween 2019-10-02 13:24:42 -07:00 committed by Nolan Brubaker
parent 81a4fcbb24
commit aa9ca9a69d
19 changed files with 256 additions and 274 deletions


@@ -4,7 +4,7 @@
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. Velero lets you:
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
@@ -15,16 +15,7 @@ Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
You can run Velero in clusters on a cloud provider or on-premises. For detailed information, see [Compatible Storage Providers][99].
## Installation
We strongly recommend that you use an [official release][6] of Velero. The tarballs for each release contain the
`velero` command-line client. Follow the [installation instructions][28] to get started.
_The code and sample YAML files in the master branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## More information
## Documentation
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Velero, and more.
@@ -56,5 +47,4 @@ See [the list of releases][6] to find out about feature changes.
[28]: https://velero.io/docs/install-overview
[29]: https://velero.io/docs/
[30]: https://velero.io/docs/troubleshooting
[99]: https://velero.io/docs/support-matrix
[100]: /site/docs/master/img/velero.png
[100]: https://velero.io/docs/master/img/velero.png


@@ -7,5 +7,5 @@ This directory contains sample YAML config files that can be used for exploring
* `nginx-app/`: A sample nginx app that can be used to test backups and restores.
[0]: /docs/get-started.md
[0]: /docs/contributions/minio.md
[1]: https://github.com/minio/minio


@@ -4,27 +4,27 @@ toc:
- page: About Velero
url: /index.html
- page: How Velero works
url: /about
url: /how-velero-works
- page: About locations
url: /locations
- page: Supported platforms
url: /support-matrix
- title: Install
subfolderitems:
- page: Overview
url: /install-overview
- page: Upgrade to 1.1
url: /upgrade-to-1.1
- page: Quick start with in-cluster MinIO
url: /get-started
- page: Run on AWS
url: /aws-config
- page: Run on Azure
url: /azure-config
- page: Run on GCP
url: /gcp-config
- page: Restic setup
- page: Requirements
url: /install-requirements
- page: Supported providers
url: /supported-providers
- page: Evaluation install
url: /contributions/minio
- page: Restic integration
url: /restic
- page: Examples
url: /examples
- page: Uninstalling
url: /uninstalling
- title: Use
subfolderitems:
- page: Disaster recovery
@@ -37,10 +37,14 @@ toc:
url: /restore-reference
- page: Run in any namespace
url: /namespace
- page: Extend with plugins
url: /plugins
- page: Extend with hooks
url: /hooks
- title: Plugins
subfolderitems:
- page: Overview
url: /overview-plugins
- page: Custom plugins
url: /custom-plugins
- title: Troubleshoot
subfolderitems:
- page: Troubleshooting


@@ -40,6 +40,6 @@ As always, we welcome feedback and participation in the development of Velero. A
[1]: https://github.com/vmware-tanzu
[2]: https://blogs.vmware.com/cloudnative/2019/10/01/open-source-in-vmware-tanzu/
[3]: ../docs/master/support-matrix
[3]: ../docs/master/supported-providers
[4]: https://velero.io/docs/master/
[5]: https://github.com/vmware-tanzu/velero/issues#workspaces/velero-5c59c15e39d47b774b5864e3/board?milestones=v1.2%232019-10-31&filterLogic=any&repos=99143276&showPipelineDescriptions=false


@@ -4,7 +4,7 @@
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. Velero lets you:
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
@@ -15,16 +15,7 @@ Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
You can run Velero in clusters on a cloud provider or on-premises. For detailed information, see [Compatible Storage Providers][99].
## Installation
We strongly recommend that you use an [official release][6] of Velero. The tarballs for each release contain the
`velero` command-line client. Follow the [installation instructions][28] to get started.
_The code and sample YAML files in the master branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## More information
## Documentation
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Velero, and more.
@@ -53,7 +44,6 @@ See [the list of releases][6] to find out about feature changes.
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
@@ -61,5 +51,4 @@ See [the list of releases][6] to find out about feature changes.
[29]: https://velero.io/docs/master/
[30]: troubleshooting.md
[99]: support-matrix.md
[100]: img/velero.png


@@ -1,9 +1,9 @@
## Getting started
## Quick start evaluation install with Minio
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the docs on how to [expose Minio outside your cluster][31].
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
@@ -202,8 +202,6 @@ When you run commands to get logs or describe a backup, the Velero server genera
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
For basic instructions on how to install the Velero server and client, see [the getting started example][1].
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
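A minimal sketch of that change, assuming the Minio Service is named `minio` and lives in the `velero` namespace (adjust both to your deployment):

```bash
# Switch the Minio Service from ClusterIP to NodePort
kubectl --namespace velero patch service minio \
    --type merge \
    --patch '{"spec": {"type": "NodePort"}}'

# Look up the node port that was assigned, then reach Minio at <NODE_IP>:<NODE_PORT>
kubectl --namespace velero get service minio \
    -o jsonpath='{.spec.ports[0].nodePort}'
```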
@@ -260,11 +258,9 @@ In this case:
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: get-started.md
[3]: install-overview.md
[17]: restic.md
[18]: debugging-restores.md
[1]: #expose-minio-with-service-of-type-nodeport
[3]: ../install-overview.md
[17]: ../restic.md
[18]: ../debugging-restores.md
[26]: https://github.com/vmware-tanzu/velero/releases
[30]: https://godoc.org/github.com/robfig/cron
[31]: #expose-minio-outside-your-cluster


@@ -0,0 +1,63 @@
## Examples
After you set up the Velero server, try these examples:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Create a backup:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Wait for the namespace to be deleted.
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
### Snapshot example (with PersistentVolumes)
> NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks.
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/with-pv.yaml
```
1. Create a backup with PV snapshotting:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Because the default [reclaim policy][1] for dynamically-provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. **Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists.**
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
[1]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming


@@ -1,55 +1,37 @@
# Set up Velero on your platform
# Install overview
You can run Velero with a cloud provider or on-premises. For detailed information about the platforms that Velero supports, see [Compatible Storage Providers][99].
You can run Velero in clusters on a cloud provider or on-premises. For detailed information, see our list of [supported providers][0].
We strongly recommend that you use an [official release][1] of Velero. The tarballs for each release contain the
`velero` command-line client.
_The code and sample YAML files in the master branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## Set up your platform
You can run Velero in any namespace, which requires additional customization. See [Run in custom namespace][3].
You can also use Velero's integration with restic, which requires additional setup. See [restic instructions][20].
You can also use Velero's integration with restic, which requires additional setup. See [restic instructions][4].
## Cloud provider
The Velero client includes an `install` command to specify the settings for each supported cloud provider. You can install Velero for the included cloud providers using the following command:
```bash
velero install \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
[--secret-file <PATH_TO_FILE>] \
[--no-secret] \
[--backup-location-config] \
[--snapshot-location-config] \
[--namespace] \
[--use-volume-snapshots] \
[--use-restic] \
[--pod-annotations]
```
When using node-based IAM policies, `--secret-file` is not required, but `--no-secret` is required for confirmation.
For provider-specific instructions, see:
* [Run Velero on AWS][0]
* [Run Velero on GCP][1]
* [Run Velero on Azure][2]
* [Use IBM Cloud Object Store as Velero's storage destination][4]
When using restic on a storage provider that doesn't currently have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
The Velero client includes an `install` command to specify the settings for each supported cloud provider. For provider-specific instructions, see our list of [supported providers][0].
To see the YAML applied by the `velero install` command, use the `--dry-run -o yaml` arguments.
For more complex installation needs, use either the generated YAML, or the Helm chart.
For more complex installation needs, use either the generated YAML, or the [Helm chart][7].
When using node-based IAM policies, `--secret-file` is not required, but `--no-secret` is required for confirmation.
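For example, a sketch of generating the install manifests without applying them; the provider and bucket here are placeholders, and `--no-secret` assumes you are using node-based IAM:

```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --no-secret \
    --dry-run -o yaml > velero-install.yaml
```

You can review or customize `velero-install.yaml`, then apply it with `kubectl apply -f`.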
## On-premises
You can run Velero in an on-premises cluster in different ways depending on your requirements.
First, you must select an object storage backend that Velero can use to store backup data. [Compatible Storage Providers][99] contains information on various
options that are supported or have been reported to work by users. [Minio][101] is an option if you want to keep your backup data on-premises and you are
First, you must select an object storage backend that Velero can use to store backup data. Our list of [supported providers][0] covers options that are supported or have been reported to work by users. [Minio][5] is an option if you want to keep your backup data on-premises and you are not using another storage platform that offers an S3-compatible object storage API.
Second, if you need to back up persistent volume data, you must select a volume backup solution. [Volume Snapshot Providers][100] contains information on
the supported options. For example, if you use [Portworx][102] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part
of your Velero backups. If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][20], which provides a
Second, if you need to back up persistent volume data, you must select a volume backup solution. Our list of [supported providers][0] covers the available volume snapshotters. For example, if you use [Portworx][6] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part of your Velero backups. If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][4], which provides a platform-agnostic backup solution for volume data.
## Customize configuration
@@ -58,40 +40,6 @@ Whether you run Velero on a cloud provider or on-premises, if you have more than
For details, see the documentation topics for individual cloud providers.
## Velero resource requirements
By default, the Velero deployment requests 500m CPU, 128Mi memory and sets a limit of 1000m CPU, 256Mi.
Default requests and limits are not set for the restic pods as CPU/Memory usage can depend heavily on the size of volumes being backed up.
If you need to customize these resource requests and limits, you can set the following flags in your `velero install` command:
```
velero install \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
--secret-file <PATH_TO_FILE> \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-restic] \
[--restic-pod-cpu-request <CPU_REQUEST>] \
[--restic-pod-mem-request <MEMORY_REQUEST>] \
[--restic-pod-cpu-limit <CPU_LIMIT>] \
[--restic-pod-mem-limit <MEMORY_LIMIT>]
```
Values for these flags follow the same format as [Kubernetes resource requirements][103].
## Removing Velero
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`:
```bash
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
## Installing with the Helm chart
When you install with the Helm chart, the provider's credential information must be passed in through your chart values.
@@ -104,77 +52,12 @@ helm install --set-file credentials.secretContents.cloud=./credentials-velero st
See your cloud provider's documentation for the contents and creation of the `credentials-velero` file.
## Examples
After you set up the Velero server, try these examples:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Create a backup:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Wait for the namespace to be deleted.
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
### Snapshot example (with PersistentVolumes)
> NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks.
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/with-pv.yaml
```
1. Create a backup with PV snapshotting:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Because the default [reclaim policy][19] for dynamically-provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. **Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists.**
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
[0]: aws-config.md
[1]: gcp-config.md
[2]: azure-config.md
[0]: supported-providers.md
[1]: https://github.com/vmware-tanzu/velero/releases
[3]: namespace.md
[4]: ibm-config.md
[19]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
[20]: restic.md
[99]: support-matrix.md
[100]: support-matrix.md#volume-snapshot-providers
[101]: https://www.minio.io
[102]: https://portworx.com
[103]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu
[4]: restic.md
[5]: contributions/minio.md
[6]: https://portworx.com
[7]: #installing-with-the-helm-chart


@@ -0,0 +1,34 @@
# Requirements for installing and running Velero
## Supported Kubernetes Versions
- In general, Velero works on Kubernetes version 1.7 or later (when Custom Resource Definitions were introduced).
- Restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. See [restic integration][1].
## Velero resource requirements
By default, the Velero deployment requests 500m CPU and 128Mi memory, and sets a limit of 1000m CPU and 256Mi memory.
Default requests and limits are not set for the restic pods as CPU/Memory usage can depend heavily on the size of volumes being backed up.
If you need to customize these resource requests and limits, you can set the following flags in your `velero install` command:
```bash
velero install \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
--secret-file <PATH_TO_FILE> \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-restic] \
[--restic-pod-cpu-request <CPU_REQUEST>] \
[--restic-pod-mem-request <MEMORY_REQUEST>] \
[--restic-pod-cpu-limit <CPU_LIMIT>] \
[--restic-pod-mem-limit <MEMORY_LIMIT>]
```
Values for these flags follow the same format as [Kubernetes resource requirements][2].
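For example, with illustrative values (these are placeholders to tune for your environment, not recommendations):

```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --velero-pod-cpu-request 500m \
    --velero-pod-mem-request 128Mi \
    --velero-pod-cpu-limit 1 \
    --velero-pod-mem-limit 512Mi
```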
[1]: restic.md
[2]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu


@@ -22,4 +22,4 @@ velero client config set namespace=<NAMESPACE_VALUE>
[0]: get-started.md#download
[0]: installing.md


@@ -0,0 +1,18 @@
# Velero plugin system
Velero has a plugin system which allows integration with a variety of providers for backup storage and volume snapshot operations. Please see the links on the left under `Plugins`.
Anyone can add integrations for any platform to provide additional backup and volume storage without modifying the Velero codebase.
## Creating a new plugin
To write a plugin for a new backup or volume storage platform, take a look at our [example repo][1] and at our documentation for [how to develop plugins][2].
## Adding a new plugin
After you publish your plugin in your own repository, open a PR that adds a link to it under the appropriate list on the [supported providers][3] page in our documentation.
[1]: https://github.com/vmware-tanzu/velero-plugin-example/
[2]: plugins.md
[3]: supported-providers.md


@@ -28,7 +28,7 @@ cross-volume-type data migrations. Stay tuned as this evolves!
Ensure you've [downloaded the latest release][3].
To install restic, use the `--use-restic` flag on the `velero install` command. See the [install overview][2] for more details.
To install restic, use the `--use-restic` flag on the `velero install` command. See the [install overview][2] for more details. When using restic on a storage provider that doesn't currently have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
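For example, a sketch of an install that enables restic and skips creating an unused `VolumeSnapshotLocation`; the provider, bucket, and credentials file are placeholders:

```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-restic \
    --use-volume-snapshots=false
```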
Please note: For some PaaS/CaaS platforms based on Kubernetes such as RancherOS, OpenShift and Enterprise PKS, some modifications are required to the restic DaemonSet spec.


@@ -43,7 +43,6 @@ After you use the `velero install` command to install Velero into your cluster,
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
[17]: https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/
[18]: https://eksctl.io/
[20]: api-types/backupstoragelocation.md
[21]: api-types/volumesnapshotlocation.md


@@ -1,79 +0,0 @@
# Supported Kubernetes Versions
- In general, Velero works on Kubernetes version 1.7 or later (when Custom Resource Definitions were introduced).
- Restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. See [Restic Integration][17].
# Compatible Storage Providers
Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Backup Storage Providers
| Provider | Owner | Contact |
|---------------------------|----------|---------------------------------|
| [AWS S3][2] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Azure Blob Storage][3] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Google Cloud Storage][4] | Velero Team | [Slack][10], [GitHub Issue][11] |
## S3-Compatible Backup Storage Providers
Velero uses [Amazon's Go SDK][12] to connect to the S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
_Note that these providers are not regularly tested by the Velero team._
* [IBM Cloud][5]
* [Minio][9]
* Ceph RADOS v12.2.7
* [DigitalOcean][7]
* Quobyte
* [NooBaa][16]
* [Oracle Cloud][23]
_Some storage providers, like Quobyte, may need a different [signature algorithm version][15]._
## Volume Snapshot Providers
| Provider | Owner | Contact |
|----------------------------------|-----------------|---------------------------------|
| [AWS EBS][2] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Azure Managed Disks][3] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Google Compute Engine Disks][4] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Restic][1] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Portworx][6] | Portworx | [Slack][13], [GitHub Issue][14] |
| [DigitalOcean][7] | StackPointCloud | |
| [OpenEBS][18] | OpenEBS | [Slack][19], [GitHub Issue][20] |
| [AlibabaCloud][21] | AlibabaCloud | [GitHub Issue][22] |
| [HPE][24] | HPE | [Slack][25], [GitHub Issue][26] |
### Adding a new plugin
To write a plugin for a new backup or volume storage system, take a look at the [example repo][8].
After you publish your plugin, open a PR that adds your plugin to the appropriate list.
[1]: restic.md
[2]: aws-config.md
[3]: azure-config.md
[4]: gcp-config.md
[5]: ibm-config.md
[6]: https://docs.portworx.com/scheduler/kubernetes/ark.html
[7]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[8]: https://github.com/vmware-tanzu/velero-plugin-example/
[9]: get-started.md
[10]: https://kubernetes.slack.com/messages/velero
[11]: https://github.com/vmware-tanzu/velero/issues
[12]: https://github.com/aws/aws-sdk-go/aws
[13]: https://portworx.slack.com/messages/px-k8s
[14]: https://github.com/portworx/ark-plugin/issues
[15]: api-types/backupstoragelocation.md#aws
[16]: http://www.noobaa.com/
[17]: restic.md
[18]: https://github.com/openebs/velero-plugin
[19]: https://openebs-community.slack.com/
[20]: https://github.com/openebs/velero-plugin/issues
[21]: https://github.com/AliyunContainerService/velero-plugin
[22]: https://github.com/AliyunContainerService/velero-plugin/issues
[23]: oracle-config.md
[24]: https://github.com/hpe-storage/velero-plugin
[25]: https://slack.hpedev.io/
[26]: https://github.com/hpe-storage/velero-plugin/issues


@@ -0,0 +1,77 @@
# Providers
Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Velero supported providers
| Provider | Object Store | Volume Snapshotter | Plugin |
|----------------------------|---------------------|------------------------------|---------------------------|
| [AWS S3][7] | AWS S3 | AWS EBS | [Velero plugin AWS][8] |
| [Azure Blob Storage][9] | Azure Blob Storage | Azure Managed Disks | [Velero plugin Azure][10] |
| [Google Cloud Storage][11] | Google Cloud Storage| Google Compute Engine Disks | [Velero plugin GCP][12] |
Contact: [Slack][28], [GitHub Issue][29]
## Community supported providers
| Provider | Object Store | Volume Snapshotter | Plugin | Contact |
|---------------------------|------------------------------|------------------------------------|------------------------|---------------------------------|
| [Portworx][31] | 🚫 | Portworx Volume | [Portworx][32] | [Slack][33], [GitHub Issue][34] |
| [DigitalOcean][15] | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | [StackPointCloud][16] | |
| [OpenEBS][17] | 🚫 | OpenEBS CStor Volume | [OpenEBS][18] | [Slack][19], [GitHub Issue][20] |
| [AlibabaCloud][21] | 🚫 | Alibaba Cloud | [AlibabaCloud][22] | [GitHub Issue][23] |
| [Hewlett Packard][24] | 🚫 | HPE Storage | [Hewlett Packard][25] | [Slack][26], [GitHub Issue][27] |
## S3-Compatible object store providers
Velero's AWS Object Store plugin uses [Amazon's Go SDK][0] to connect to the AWS S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
_Note that these storage providers are not regularly tested by the Velero team._
* [IBM Cloud][1]
* [Oracle Cloud][2]
* [Minio][3]
* [DigitalOcean][4]
* [NooBaa][5]
* Ceph RADOS v12.2.7
* Quobyte
_Some storage providers, like Quobyte, may need a different [signature algorithm version][6]._
## Non-supported volume snapshots
If you want to take volume snapshots but there is no plugin for your provider, Velero supports snapshotting through restic. See the [restic integration][30] documentation.
[0]: https://github.com/aws/aws-sdk-go/aws
[1]: contributions/ibm-config.md
[2]: contributions/oracle-config.md
[3]: contributions/minio.md
[4]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[5]: http://www.noobaa.com/
[6]: api-types/backupstoragelocation.md#aws
[7]: https://aws.amazon.com/s3/
[8]: aws-config.md
[9]: https://azure.microsoft.com/en-us/services/storage/blobs
[10]: azure-config.md
[11]: https://cloud.google.com/storage/
[12]: gcp-config.md
[15]: https://www.digitalocean.com/
[16]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[17]: https://openebs.io/
[18]: https://github.com/openebs/velero-plugin
[19]: https://openebs-community.slack.com/
[20]: https://github.com/openebs/velero-plugin/issues
[21]: https://www.alibabacloud.com/
[22]: https://github.com/AliyunContainerService/velero-plugin
[23]: https://github.com/AliyunContainerService/velero-plugin/issues
[24]: https://www.hpe.com/us/en/storage.html
[25]: https://github.com/hpe-storage/velero-plugin
[26]: https://slack.hpedev.io/
[27]: https://github.com/hpe-storage/velero-plugin/issues
[28]: https://kubernetes.slack.com/messages/velero
[29]: https://github.com/vmware-tanzu/velero/issues
[30]: restic.md
[31]: https://portworx.com/
[32]: https://docs.portworx.com/scheduler/kubernetes/ark.html
[33]: https://portworx.slack.com/messages/px-k8s
[34]: https://github.com/portworx/ark-plugin/issues


@@ -0,0 +1,8 @@
# Uninstalling Velero
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`:
```bash
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```