|
@ -1,7 +1,8 @@
|
|||
## Current release:
|
||||
* [CHANGELOG-1.7.md][17]
|
||||
* [CHANGELOG-1.9.md][19]
|
||||
|
||||
## Older releases:
|
||||
* [CHANGELOG-1.8.md][18]
|
||||
* [CHANGELOG-1.6.md][16]
|
||||
* [CHANGELOG-1.5.md][15]
|
||||
* [CHANGELOG-1.4.md][14]
|
||||
|
@ -20,6 +21,8 @@
|
|||
* [CHANGELOG-0.3.md][1]
|
||||
|
||||
|
||||
[19]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.9.md
|
||||
[18]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.8.md
|
||||
[17]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.7.md
|
||||
[16]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.6.md
|
||||
[15]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.5.md
|
||||
|
|
|
@ -0,0 +1,104 @@
|
|||
## v1.9.0
|
||||
### 2022-06-13
|
||||
|
||||
### Download
|
||||
https://github.com/vmware-tanzu/velero/releases/tag/v1.9.0
|
||||
|
||||
### Container Image
|
||||
`velero/velero:v1.9.0`
|
||||
|
||||
### Documentation
|
||||
https://velero.io/docs/v1.9/
|
||||
|
||||
### Upgrading
|
||||
https://velero.io/docs/v1.9/upgrade-to-1.9/
|
||||
|
||||
### Highlights
|
||||
|
||||
#### Improvements to the CSI plugin
|
||||
- Bump up to the CSI volume snapshot v1 API
|
||||
- No VolumeSnapshot will be left in the source namespace of the workload
|
||||
- Report metrics for CSI snapshots
|
||||
|
||||
For more improvements, please refer to [CSI plugin improvement](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+label%3A%22CSI+plugin+-+GA+-+phase1%22+is%3Aclosed).
|
||||
|
||||
With these improvements, we'll provide official support for CSI snapshots on AKS/EKS clusters (with CSI plugin v0.3.0).
|
||||
|
||||
#### Refactor the controllers using Kubebuilder v3
|
||||
In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases.
|
||||
|
||||
#### Optionally restore status on selected resources
|
||||
Options are added to the CLI and Restore spec to control the group of resources whose status will be restored.
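For orientation, here is a minimal sketch of what this looks like in a Restore manifest; the backup name `my-backup` and the resource type `workflows` are placeholders, and the `restoreStatus` block mirrors the Restore API reference included in these docs:

```yaml
# Hedged sketch: restore the status of "workflows" resources from the backup "my-backup".
# Both names are placeholders.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-with-status
  namespace: velero
spec:
  backupName: my-backup
  restoreStatus:
    # If unspecified, no resource statuses are restored.
    includedResources:
    - workflows
```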
|
||||
|
||||
#### ExistingResourcePolicy in the restore API
|
||||
Users can choose to overwrite or patch the existing resources during restore by setting this policy.
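As a hedged sketch (the backup name is a placeholder, and the value `update` is assumed here to request patching of existing resources; the Restore API reference in these docs shows the field with the value `none`):

```yaml
# Hedged sketch: ask Velero to patch resources that already exist in the cluster.
# "my-backup" is a placeholder; "update" is the assumed value for the patch behaviour.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-update-existing
  namespace: velero
spec:
  backupName: my-backup
  existingResourcePolicy: update
```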
|
||||
|
||||
#### Upgrade the integrated Restic version and add an option to skip TLS validation in Restic commands
|
||||
The integrated Restic version has been upgraded, which resolves some CVEs, and skipping TLS validation is now supported in Restic backup/restore.
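For illustration only, a hedged sketch of how this might be used with an S3-compatible object store behind a self-signed certificate; the `insecureSkipTLSVerify` and `s3Url` config keys are assumptions about the object-store plugin's backup storage location config, not something confirmed by this release note:

```yaml
# Hedged sketch, not a confirmed configuration: a location whose TLS certificate
# is not verified; Restic commands against it would then also skip verification.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: self-signed-store
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups                        # placeholder bucket
  config:
    s3Url: https://minio.example.internal:9000    # placeholder endpoint (assumed key)
    insecureSkipTLSVerify: "true"                 # assumed key per this highlight
```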
|
||||
|
||||
#### Breaking changes
|
||||
With the CSI plugin bumped up to the v1 API, the v0.3.0 CSI plugin will only work on Kubernetes v1.20+.
|
||||
|
||||
### All changes
|
||||
|
||||
* restic: add full support for setting SecurityContext for restore init container from configMap. (#4084, @MatthieuFin)
|
||||
* Add metrics backup_items_total and backup_items_errors (#4296, @tobiasgiese)
|
||||
* Convert PodVolumebackup controller to the Kubebuilder framework (#4436, @fgold)
|
||||
* Skip not mounted volumes when backing up (#4497, @dkeven)
|
||||
* Update doc for v1.8 (#4517, @reasonerjt)
|
||||
* Fix bug to make the restic prune frequency configurable (#4518, @ywk253100)
|
||||
* Add E2E test of backups sync from BSL (#4545, @mqiu)
|
||||
* Fix: OrderedResources in Schedules (#4550, @dbrekau)
|
||||
* Skip volumes of non-running pods when backing up (#4584, @bynare)
|
||||
* E2E SSR test add retry mechanism and logs (#4591, @mqiu)
|
||||
* Add pushing image to GCR in github workflow to facilitate some environments that have rate limitation to docker hub, e.g. vSphere. (#4623, @jxun)
|
||||
* Add existingResourcePolicy to Restore API (#4628, @shubham-pampattiwar)
|
||||
* Fix E2E backup namespaces test (#4634, @qiuming-best)
|
||||
* Update image used by E2E test to gcr.io (#4639, @jxun)
|
||||
* Add multiple label selector support to Velero Backup and Restore APIs (#4650, @shubham-pampattiwar)
|
||||
* Convert Pod Volume Restore resource/controller to the Kubebuilder framework (#4655, @ywk253100)
|
||||
* Update --use-owner-references-in-backup description in velero command line. (#4660, @jxun)
|
||||
* Avoid overwritten hook's exec.container parameter when running pod command executor. (#4661, @jxun)
|
||||
* Support regional pv for GKE (#4680, @jxun)
|
||||
* Bypass the remap CRD version plugin when v1beta1 CRD is not supported (#4686, @reasonerjt)
|
||||
* Add GINKGO_SKIP to support skip specific case in e2e test. (#4692, @jxun)
|
||||
* Add --pod-labels flag to velero install (#4694, @j4m3s-s)
|
||||
* Enable coverage in test.sh and upload to codecov (#4704, @reasonerjt)
|
||||
* Mark the BSL as "Unavailable" when gets any error and add a new field "Message" to the status to record the error message (#4719, @ywk253100)
|
||||
* Support multiple skip option for E2E test (#4725, @jxun)
|
||||
* Add PriorityClass to the AdditionalItems of Backup's PodAction and Restore's PodAction plugin to backup and restore PriorityClass if it is used by a Pod. (#4740, @phuongatemc)
|
||||
* Insert all restore errors and warnings into restore log. (#4743, @sseago)
|
||||
* Refactor schedule controller with kubebuilder (#4748, @ywk253100)
|
||||
* Garbage collector now adds labels to backups that failed to delete for BSLNotFound, BSLCannotGet, BSLReadOnly reasons. (#4757, @kaovilai)
|
||||
* Skip podvolumerestore creation when restore excludes pv/pvc (#4769, @half-life666)
|
||||
* Add parameter for e2e test to support modify kibishii install path. (#4778, @jxun)
|
||||
* Ensure the restore hook applied to new namespace based on the mapping (#4779, @reasonerjt)
|
||||
* Add ability to restore status on selected resources (#4785, @RafaeLeal)
|
||||
* Do not take snapshot for PV to avoid duplicated snapshotting, when CSI feature is enabled. (#4797, @jxun)
|
||||
* Bump up to v1 API for CSI snapshot (#4800, @reasonerjt)
|
||||
* fix: delete empty backups (#4817, @yuvalman)
|
||||
* Add CSI VolumeSnapshot related metrics. (#4818, @jxun)
|
||||
* Fix default-backup-ttl not work (#4831, @qiuming-best)
|
||||
* Make the vsc created by backup sync controller deletable (#4832, @reasonerjt)
|
||||
* Make in-progress backup/restore as failed when doing the reconcile to avoid hanging in in-progress status (#4833, @ywk253100)
|
||||
* Use controller-gen to generate the deep copy methods for objects (#4838, @ywk253100)
|
||||
* Update integrated Restic version and add insecureSkipTLSVerify for Restic CLI. (#4839, @jxun)
|
||||
* Modify CSI VolumeSnapshot metric related code. (#4854, @jxun)
|
||||
* Refactor backup deletion controller based on kubebuilder (#4855, @reasonerjt)
|
||||
* Remove VolumeSnapshots created during backup when CSI feature is enabled. (#4858, @jxun)
|
||||
* Convert Restic Repository resource/controller to the Kubebuilder framework (#4859, @qiuming-best)
|
||||
* Add ClusterClasses to the restore priority list (#4866, @reasonerjt)
|
||||
* Cleanup the .velero folder after restic done (#4872, @big-appled)
|
||||
* Delete orphan CSI snapshots in backup sync controller (#4887, @reasonerjt)
|
||||
* Make waiting VolumeSnapshot to ready process parallel. (#4889, @jxun)
|
||||
* continue rather than return for non-matching restore action label (#4890, @sseago)
|
||||
* Make in-progress PVB/PVR as failed when restic controller restarts to avoid hanging backup/restore (#4893, @ywk253100)
|
||||
* Refactor BSL controller with periodical enqueue source (#4894, @jxun)
|
||||
* Make garbage collection for expired backups configurable (#4897, @ywk253100)
|
||||
* Bump up the version of distroless to base-debian11 (#4898, @ywk253100)
|
||||
* Add schedule ordered resources E2E test (#4913, @qiuming-best)
|
||||
* Make velero completion zsh command output can be used by `source` command. (#4914, @jxun)
|
||||
* Enhance the map flag to support parsing input value contains entry delimiters (#4920, @ywk253100)
|
||||
* Fix E2E test [Backups][Deletion][Restic] on GCP. (#4968, @jxun)
|
||||
* Disable status as sub resource in CRDs (#4972, @ywk253100)
|
||||
* Add more information for failing to get path or snapshot in restic backup and restore. (#4988, @jxun)
|
|
@ -1 +0,0 @@
|
|||
restic: add full support for setting SecurityContext for restore init container from configMap.
|
|
@ -1 +0,0 @@
|
|||
Add metrics backup_items_total and backup_items_errors
|
|
@ -1 +0,0 @@
|
|||
Convert PodVolumebackup controller to the Kubebuilder framework
|
|
@ -1 +0,0 @@
|
|||
Skip not mounted volumes when backing up
|
|
@ -1 +0,0 @@
|
|||
Update doc for v1.8
|
|
@ -1 +0,0 @@
|
|||
Fix bug to make the restic prune frequency configurable
|
|
@ -1 +0,0 @@
|
|||
Add E2E test of backups sync from BSL
|
|
@ -1 +0,0 @@
|
|||
Fix: OrderedResources in Schedules
|
|
@ -1 +0,0 @@
|
|||
Skip volumes of non-running pods when backing up
|
|
@ -1 +0,0 @@
|
|||
E2E SSR test add retry mechanism and logs
|
|
@ -1 +0,0 @@
|
|||
Add pushing image to GCR in github workflow to facilitate some environments that have rate limitation to docker hub, e.g. vSphere.
|
|
@ -1 +0,0 @@
|
|||
Add existingResourcePolicy to Restore API
|
|
@ -1 +0,0 @@
|
|||
Fix E2E backup namespaces test
|
|
@ -1 +0,0 @@
|
|||
Update image used by E2E test to gcr.io
|
|
@ -1 +0,0 @@
|
|||
Add multiple label selector support to Velero Backup and Restore APIs
|
|
@ -1 +0,0 @@
|
|||
Convert Pod Volume Restore resource/controller to the Kubebuilder framework
|
|
@ -1 +0,0 @@
|
|||
Update --use-owner-references-in-backup description in velero command line.
|
|
@ -1 +0,0 @@
|
|||
Avoid overwritten hook's exec.container parameter when running pod command executor.
|
|
@ -1 +0,0 @@
|
|||
Support regional pv for GKE
|
|
@ -1 +0,0 @@
|
|||
Bypass the remap CRD version plugin when v1beta1 CRD is not supported
|
|
@ -1 +0,0 @@
|
|||
Add GINKGO_SKIP to support skip specific case in e2e test.
|
|
@ -1 +0,0 @@
|
|||
Add --pod-labels flag to velero install
|
|
@ -1 +0,0 @@
|
|||
Enable coverage in test.sh and upload to codecov
|
|
@ -1 +0,0 @@
|
|||
Mark the BSL as "Unavailable" when gets any error and add a new field "Message" to the status to record the error message
|
|
@ -1 +0,0 @@
|
|||
Support multiple skip option for E2E test
|
|
@ -1,2 +0,0 @@
|
|||
Add PriorityClass to the AdditionalItems of Backup's PodAction and Restore's PodAction plugin to
|
||||
backup and restore PriorityClass if it is used by a Pod.
|
|
@ -1 +0,0 @@
|
|||
Insert all restore errors and warnings into restore log.
|
|
@ -1 +0,0 @@
|
|||
Refactor schedule controller with kubebuilder
|
|
@ -1 +0,0 @@
|
|||
Garbage collector now adds labels to backups that failed to delete for BSLNotFound, BSLCannotGet, BSLReadOnly reasons.
|
|
@ -1 +0,0 @@
|
|||
Skip podvolumerestore creation when restore excludes pv/pvc
|
|
@ -1 +0,0 @@
|
|||
Add parameter for e2e test to support modify kibishii install path.
|
|
@ -1 +0,0 @@
|
|||
Ensure the restore hook applied to new namespace based on the mapping
|
|
@ -1 +0,0 @@
|
|||
Add ability to restore status on selected resources
|
|
@ -1 +0,0 @@
|
|||
Do not take snapshot for PV to avoid duplicated snapshotting, when CSI feature is enabled.
|
|
@ -1 +0,0 @@
|
|||
Bump up to v1 API for CSI snapshot
|
|
@ -1 +0,0 @@
|
|||
fix: delete empty backups
|
|
@ -1 +0,0 @@
|
|||
Add CSI VolumeSnapshot related metrics.
|
|
@ -1 +0,0 @@
|
|||
Fix default-backup-ttl not work
|
|
@ -1 +0,0 @@
|
|||
Make the vsc created by backup sync controller deletable
|
|
@ -1 +0,0 @@
|
|||
Make in-progress backup/restore as failed when doing the reconcile to avoid hanging in in-progress status
|
|
@ -1 +0,0 @@
|
|||
Use controller-gen to generate the deep copy methods for objects
|
|
@ -1 +0,0 @@
|
|||
Update integrated Restic version and add insecureSkipTLSVerify for Restic CLI.
|
|
@ -1 +0,0 @@
|
|||
Modify CSI VolumeSnapshot metric related code.
|
|
@ -1 +0,0 @@
|
|||
Refactor backup deletion controller based on kubebuilder
|
|
@ -1 +0,0 @@
|
|||
Remove VolumeSnapshots created during backup when CSI feature is enabled.
|
|
@ -1 +0,0 @@
|
|||
Convert Restic Repository resource/controller to the Kubebuilder framework
|
|
@ -1 +0,0 @@
|
|||
Add ClusterClasses to the restore priority list
|
|
@ -1 +0,0 @@
|
|||
Cleanup the .velero folder after restic done
|
|
@ -1 +0,0 @@
|
|||
Delete orphan CSI snapshots in backup sync controller
|
|
@ -1 +0,0 @@
|
|||
Make waiting VolumeSnapshot to ready process parallel.
|
|
@ -1 +0,0 @@
|
|||
continue rather than return for non-matching restore action label
|
|
@ -1 +0,0 @@
|
|||
Make in-progress PVB/PVR as failed when restic controller restarts to avoid hanging backup/restore
|
|
@ -1 +0,0 @@
|
|||
Refactor BSL controller with periodical enqueue source
|
|
@ -1 +0,0 @@
|
|||
Make garbage collection for expired backups configurable
|
|
@ -1 +0,0 @@
|
|||
Bump up the version of distroless to base-debian11
|
|
@ -1 +0,0 @@
|
|||
Add schedule ordered resources E2E test
|
|
@ -1 +0,0 @@
|
|||
Make velero completion zsh command output can be used by `source` command.
|
|
@ -1 +0,0 @@
|
|||
Enhance the map flag to support parsing input value contains entry delimiters
|
|
@ -1 +0,0 @@
|
|||
Fix E2E test [Backups][Deletion][Restic] on GCP.
|
|
@ -1 +0,0 @@
|
|||
Disable status as sub resource in CRDs
|
|
@ -1 +0,0 @@
|
|||
Add more information for failing to get path or snapshot in restic backup and restore.
|
|
@ -12,9 +12,10 @@ params:
|
|||
hero:
|
||||
backgroundColor: med-blue
|
||||
versioning: true
|
||||
latest: v1.8
|
||||
latest: v1.9
|
||||
versions:
|
||||
- main
|
||||
- v1.9
|
||||
- v1.8
|
||||
- v1.7
|
||||
- v1.6
|
||||
|
|
|
@ -1,11 +1,11 @@
|
|||
---
|
||||
title: "Upgrading to Velero 1.8"
|
||||
title: "Upgrading to Velero 1.9"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Velero [v1.7.x][7] installed.
|
||||
- Velero [v1.8.x][8] installed.
|
||||
|
||||
If you're not yet running at least Velero v1.6, see the following:
|
||||
|
||||
|
@ -16,12 +16,13 @@ If you're not yet running at least Velero v1.6, see the following:
|
|||
- [Upgrading to v1.5][5]
|
||||
- [Upgrading to v1.6][6]
|
||||
- [Upgrading to v1.7][7]
|
||||
- [Upgrading to v1.8][8]
|
||||
|
||||
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
|
||||
|
||||
## Instructions
|
||||
|
||||
1. Install the Velero v1.8 command-line interface (CLI) by following the [instructions here][0].
|
||||
1. Install the Velero v1.9 command-line interface (CLI) by following the [instructions here][0].
|
||||
|
||||
Verify that you've properly installed it by running:
|
||||
|
||||
|
@ -33,7 +34,7 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
|
|||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.8.0
|
||||
Version: v1.9.0
|
||||
Git commit: <git SHA>
|
||||
```
|
||||
|
||||
|
@ -43,18 +44,18 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
|
|||
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
|
||||
```
|
||||
|
||||
**NOTE:** Since velero v1.8.0 only v1 CRD will be supported during installation, therefore, the v1.8.0 will only work on kubernetes version >= v1.16
|
||||
**NOTE:** Since Velero v1.9.0 supports only v1 CRDs during installation, v1.9.0 will only work on Kubernetes version >= v1.16
|
||||
|
||||
1. Update the container image used by the Velero deployment and, optionally, the restic daemon set:
|
||||
|
||||
```bash
|
||||
kubectl set image deployment/velero \
|
||||
velero=velero/velero:v1.8.0 \
|
||||
velero=velero/velero:v1.9.0 \
|
||||
--namespace velero
|
||||
|
||||
# optional, if using the restic daemon set
|
||||
kubectl set image daemonset/restic \
|
||||
restic=velero/velero:v1.8.0 \
|
||||
restic=velero/velero:v1.9.0 \
|
||||
--namespace velero
|
||||
```
|
||||
|
||||
|
@ -68,11 +69,11 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
|
|||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.8.0
|
||||
Version: v1.9.0
|
||||
Git commit: <git SHA>
|
||||
|
||||
Server:
|
||||
Version: v1.8.0
|
||||
Version: v1.9.0
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
@ -89,4 +90,5 @@ After upgrading, if there is a previously created backup storage location with t
|
|||
[5]: https://velero.io/docs/v1.5/upgrade-to-1.5
|
||||
[6]: https://velero.io/docs/v1.6/upgrade-to-1.6
|
||||
[7]: https://velero.io/docs/v1.7/upgrade-to-1.7
|
||||
[9]: https://velero.io/docs/v1.8/locations
|
||||
[8]: https://velero.io/docs/v1.8/upgrade-to-1.8
|
||||
[9]: https://velero.io/docs/v1.9/locations
|
|
@ -0,0 +1,58 @@
|
|||
---
|
||||
toc: "false"
|
||||
cascade:
|
||||
version: main
|
||||
toc: "true"
|
||||
---
|
||||
![100]
|
||||
|
||||
[![Build Status][1]][2]
|
||||
|
||||
## Overview
|
||||
|
||||
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
|
||||
|
||||
* Take backups of your cluster and restore in case of loss.
|
||||
* Migrate cluster resources to other clusters.
|
||||
* Replicate your production cluster to development and testing clusters.
|
||||
|
||||
Velero consists of:
|
||||
|
||||
* A server that runs on your cluster
|
||||
* A command-line client that runs locally
|
||||
|
||||
## Documentation
|
||||
|
||||
This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more.
|
||||
|
||||
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
|
||||
|
||||
## Contributing
|
||||
|
||||
If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our [Start contributing](https://velero.io/docs/v1.9/start-contributing/) documentation for guidance on how to set up Velero for development.
|
||||
|
||||
## Changelog
|
||||
|
||||
See [the list of releases][6] to find out about feature changes.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
|
||||
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
|
||||
|
||||
[4]: https://github.com/vmware-tanzu/velero/issues
|
||||
[6]: https://github.com/vmware-tanzu/velero/releases
|
||||
|
||||
[9]: https://kubernetes.io/docs/setup/
|
||||
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
|
||||
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
|
||||
[12]: https://github.com/kubernetes/kubernetes/blob/main/cluster/addons/dns/README.md
|
||||
[14]: https://github.com/kubernetes/kubernetes
|
||||
[24]: https://groups.google.com/forum/#!forum/projectvelero
|
||||
[25]: https://kubernetes.slack.com/messages/velero
|
||||
|
||||
[30]: troubleshooting.md
|
||||
|
||||
[100]: img/velero.png
|
|
@ -0,0 +1,21 @@
|
|||
---
|
||||
title: "Table of Contents"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## API types
|
||||
|
||||
Here we list the API types that have some functionality that you can only configure via JSON/YAML rather than the `velero` CLI
|
||||
(hooks)
|
||||
|
||||
* [Backup][1]
|
||||
* [Restore][2]
|
||||
* [Schedule][3]
|
||||
* [BackupStorageLocation][4]
|
||||
* [VolumeSnapshotLocation][5]
|
||||
|
||||
[1]: backup.md
|
||||
[2]: restore.md
|
||||
[3]: schedule.md
|
||||
[4]: backupstoragelocation.md
|
||||
[5]: volumesnapshotlocation.md
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
layout: docs
|
||||
title: API types
|
||||
---
|
||||
|
||||
Here's a list of the API types that have some functionality that you can only configure via JSON/YAML rather than the `velero` CLI
|
||||
(hooks)
|
||||
|
||||
* [Backup][1]
|
||||
* [Restore][2]
|
||||
* [Schedule][3]
|
||||
* [BackupStorageLocation][4]
|
||||
* [VolumeSnapshotLocation][5]
|
||||
|
||||
[1]: backup.md
|
||||
[2]: restore.md
|
||||
[3]: schedule.md
|
||||
[4]: backupstoragelocation.md
|
||||
[5]: volumesnapshotlocation.md
|
|
@ -0,0 +1,157 @@
|
|||
---
|
||||
title: "Backup API Type"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Use
|
||||
|
||||
Use the `Backup` API type to request the Velero server to perform a backup. Once created, the
|
||||
Velero Server immediately starts the backup process.
|
||||
|
||||
## API GroupVersion
|
||||
|
||||
Backup belongs to the API group version `velero.io/v1`.
|
||||
|
||||
## Definition
|
||||
|
||||
Here is a sample `Backup` object with each of the fields documented:
|
||||
|
||||
```yaml
|
||||
# Standard Kubernetes API Version declaration. Required.
|
||||
apiVersion: velero.io/v1
|
||||
# Standard Kubernetes Kind declaration. Required.
|
||||
kind: Backup
|
||||
# Standard Kubernetes metadata. Required.
|
||||
metadata:
|
||||
# Backup name. May be any valid Kubernetes object name. Required.
|
||||
name: a
|
||||
# Backup namespace. Must be the namespace of the Velero server. Required.
|
||||
namespace: velero
|
||||
# Parameters about the backup. Required.
|
||||
spec:
|
||||
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
|
||||
# Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to exclude from the backup. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
|
||||
# or fully-qualified. If unspecified, all resources are included. Optional.
|
||||
includedResources:
|
||||
- '*'
|
||||
# Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods')
|
||||
# or fully-qualified. Optional.
|
||||
excludedResources:
|
||||
- storageclasses.storage.k8s.io
|
||||
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
|
||||
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
|
||||
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
|
||||
# all cluster-scoped resources are included if and only if all namespaces are included and there are
|
||||
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
|
||||
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
|
||||
# up are those associated with namespace-scoped resources included in the backup. For example, if a
|
||||
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
|
||||
# cluster-scoped) would also be backed up.
|
||||
includeClusterResources: null
|
||||
# Individual objects must match this label selector to be included in the backup. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# Individual objects that match any of the label selectors specified in this set are to be included in the backup. Optional.
|
||||
# orLabelSelectors and labelSelector cannot co-exist; only one of them can be specified in the backup request.
|
||||
orLabelSelectors:
|
||||
- matchLabels:
|
||||
app: velero
|
||||
- matchLabels:
|
||||
app: data-protection
|
||||
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
|
||||
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
|
||||
# a persistent volume provider is configured for Velero.
|
||||
snapshotVolumes: null
|
||||
# Where to store the tarball and logs.
|
||||
storageLocation: aws-primary
|
||||
# The list of locations in which to store volume snapshots created for this backup.
|
||||
volumeSnapshotLocations:
|
||||
- aws-primary
|
||||
- gcp-primary
|
||||
# The amount of time before this backup is eligible for garbage collection. If not specified,
|
||||
# a default value of 30 days will be used. The default can be configured on the velero server
|
||||
# by passing the flag --default-backup-ttl.
|
||||
ttl: 24h0m0s
|
||||
# Whether restic should be used to take a backup of all pod volumes by default.
|
||||
defaultVolumesToRestic: true
|
||||
# Actions to perform at different times during a backup. The only hook supported is
|
||||
# executing a command in a container in a pod using the pod exec API. Optional.
|
||||
hooks:
|
||||
# Array of hooks that are applicable to specific resources. Optional.
|
||||
resources:
|
||||
-
|
||||
# Name of the hook. Will be displayed in backup log.
|
||||
name: my-hook
|
||||
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
|
||||
# namespaces. Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to which this hook does not apply. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to which this hook applies. The only resource supported at this time is
|
||||
# pods.
|
||||
includedResources:
|
||||
- pods
|
||||
# Array of resources to which this hook does not apply. Optional.
|
||||
excludedResources: []
|
||||
# This hook only applies to objects matching this label selector. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
|
||||
pre:
|
||||
-
|
||||
# The type of hook. This must be "exec".
|
||||
exec:
|
||||
# The name of the container where the command will be executed. If unspecified, the
|
||||
# first container in the pod will be used. Optional.
|
||||
container: my-container
|
||||
# The command to execute, specified as an array. Required.
|
||||
command:
|
||||
- /bin/uname
|
||||
- -a
|
||||
# How to handle an error executing the command. Valid values are Fail and Continue.
|
||||
# Defaults to Fail. Optional.
|
||||
onError: Fail
|
||||
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
|
||||
timeout: 10s
|
||||
# An array of hooks to run after all custom actions and additional items have been
|
||||
# processed. Only "exec" hooks are supported.
|
||||
post:
|
||||
# Same content as pre above.
|
||||
# Status about the Backup. Users should not set any data here.
|
||||
status:
|
||||
# The version of this Backup. The only version supported is 1.
|
||||
version: 1
|
||||
# The date and time when the Backup is eligible for garbage collection.
|
||||
expiration: null
|
||||
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors: null
|
||||
# Date/time when the backup started being processed.
|
||||
startTimestamp: 2019-04-29T15:58:43Z
|
||||
# Date/time when the backup finished being processed.
|
||||
completionTimestamp: 2019-04-29T15:58:56Z
|
||||
# Number of volume snapshots that Velero tried to create for this backup.
|
||||
volumeSnapshotsAttempted: 2
|
||||
# Number of volume snapshots that Velero successfully created for this backup.
|
||||
volumeSnapshotsCompleted: 1
|
||||
# Number of warnings that were logged by the backup.
|
||||
warnings: 2
|
||||
# Number of errors that were logged by the backup.
|
||||
errors: 0
|
||||
# An error that caused the entire backup to fail.
|
||||
failureReason: ""
|
||||
|
||||
```
|
|
@ -0,0 +1,54 @@
|
|||
---
|
||||
title: "Velero Backup Storage Locations"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Backup Storage Location
|
||||
|
||||
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
|
||||
|
||||
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
|
||||
|
||||
A sample YAML `BackupStorageLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: BackupStorageLocation
|
||||
metadata:
|
||||
name: default
|
||||
namespace: velero
|
||||
spec:
|
||||
backupSyncPeriod: 2m0s
|
||||
provider: aws
|
||||
objectStorage:
|
||||
bucket: myBucket
|
||||
credential:
|
||||
name: secret-name
|
||||
key: key-in-secret
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
{{< table caption="Main config parameters" >}}
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation](../supported-providers) for the appropriate value to use. |
|
||||
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
|
||||
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
|
||||
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
|
||||
| `objectStorage/caCert` | String | Optional Field | A base64 encoded CA bundle to be used when verifying TLS connections |
|
||||
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation](../supported-providers) for details. |
|
||||
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
|
||||
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
|
||||
| `validationFrequency` | metav1.Duration | Optional Field | How frequently Velero should validate the object storage. Default is Velero's server validation frequency (1 minute). Set this to `0s` to disable validation. |
|
||||
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
|
||||
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
|
||||
| `credential/key` | String | Optional Field | The key to use within the secret. |
|
||||
{{< /table >}}
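For reference, a hedged sketch of the Secret that the `credential` block in the sample above points at; only the Secret name and key need to match `credential/name` and `credential/key`, and the AWS-style profile content shown here is purely illustrative:

```yaml
# Hedged sketch: a Secret in the Velero namespace matching the sample's
# credential.name ("secret-name") and credential.key ("key-in-secret").
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: velero
type: Opaque
stringData:
  key-in-secret: |
    [default]
    aws_access_key_id=<ACCESS_KEY_ID>
    aws_secret_access_key=<SECRET_ACCESS_KEY>
```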
|
|
@ -0,0 +1,193 @@
|
|||
---
|
||||
title: "Restore API Type"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Use
|
||||
|
||||
The `Restore` API type is used as a request for the Velero server to perform a Restore. Once created, the
|
||||
Velero Server immediately starts the Restore process.
|
||||
|
||||
## API GroupVersion
|
||||
|
||||
Restore belongs to the API group version `velero.io/v1`.
|
||||
|
||||
## Definition
|
||||
|
||||
Here is a sample `Restore` object with each of the fields documented:
|
||||
|
||||
```yaml
|
||||
# Standard Kubernetes API Version declaration. Required.
|
||||
apiVersion: velero.io/v1
|
||||
# Standard Kubernetes Kind declaration. Required.
|
||||
kind: Restore
|
||||
# Standard Kubernetes metadata. Required.
|
||||
metadata:
|
||||
# Restore name. May be any valid Kubernetes object name. Required.
|
||||
name: a-very-special-backup-0000111122223333
|
||||
# Restore namespace. Must be the namespace of the Velero server. Required.
|
||||
namespace: velero
|
||||
# Parameters about the restore. Required.
|
||||
spec:
|
||||
# BackupName is the unique name of the Velero backup to restore from.
|
||||
backupName: a-very-special-backup
|
||||
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
|
||||
# Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to exclude from the restore. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
|
||||
# or fully-qualified. If unspecified, all resources are included. Optional.
|
||||
includedResources:
|
||||
- '*'
|
||||
# Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
|
||||
# or fully-qualified. Optional.
|
||||
excludedResources:
|
||||
- storageclasses.storage.k8s.io
|
||||
|
||||
# restoreStatus selects resources to restore not only the specification, but
|
||||
# the status of the manifest. This is specially useful for CRDs that maintain
|
||||
# external references. By default, it excludes all resources.
|
||||
restoreStatus:
|
||||
# Array of resources to include in the restore status. Just like above,
|
||||
# resources may be shortcuts (for example 'po' for 'pods') or fully-qualified.
|
||||
# If unspecified, no resources are included. Optional.
|
||||
includedResources:
|
||||
- workflows
|
||||
# Array of resources to exclude from the restore status. Resources may be
|
||||
# shortcuts (for example 'po' for 'pods') or fully-qualified.
|
||||
# If unspecified, all resources are excluded. Optional.
|
||||
excludedResources: []
|
||||
|
||||
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
|
||||
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
|
||||
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
|
||||
# all cluster-scoped resources are included if and only if all namespaces are included and there are
|
||||
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
|
||||
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are
|
||||
# restored are those associated with namespace-scoped resources included in the restore. For example, if a
|
||||
# PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
|
||||
# cluster-scoped) would also be restored.
|
||||
includeClusterResources: null
|
||||
# Individual objects must match this label selector to be included in the restore. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# Individual objects that match any of the label selectors specified in this set are to be included in the restore. Optional.
|
||||
# orLabelSelectors and labelSelector cannot co-exist; only one of them can be specified in the restore request.
|
||||
orLabelSelectors:
|
||||
- matchLabels:
|
||||
app: velero
|
||||
- matchLabels:
|
||||
app: data-protection
|
||||
# NamespaceMapping is a map of source namespace names to
|
||||
# target namespace names to restore into. Any source namespaces not
|
||||
# included in the map will be restored into namespaces of the same name.
|
||||
namespaceMapping:
|
||||
namespace-backup-from: namespace-to-restore-to
|
||||
# RestorePVs specifies whether to restore all included PVs
|
||||
# from snapshot (via the cloudprovider).
|
||||
restorePVs: true
|
||||
# ScheduleName is the unique name of the Velero schedule
|
||||
# to restore from. If specified, and BackupName is empty, Velero will
|
||||
# restore from the most recent successful backup created from this schedule.
|
||||
scheduleName: my-scheduled-backup-name
|
||||
# ExistingResourcePolicy specifies the restore behaviour
|
||||
# for the kubernetes resource to be restored. Optional
|
||||
existingResourcePolicy: none
|
||||
# Actions to perform during or post restore. The only hooks currently supported are
|
||||
# adding an init container to a pod before it can be restored and executing a command in a
|
||||
# restored pod's container. Optional.
|
||||
hooks:
|
||||
# Array of hooks that are applicable to specific resources. Optional.
|
||||
resources:
|
||||
# Name is the name of this hook.
|
||||
- name: restore-hook-1
|
||||
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
|
||||
# namespaces. Optional.
|
||||
includedNamespaces:
|
||||
- ns1
|
||||
# Array of namespaces to which this hook does not apply. Optional.
|
||||
excludedNamespaces:
|
||||
- ns3
|
||||
# Array of resources to which this hook applies. If unspecified, the hook applies to all resources in the backup. Optional.
|
||||
# The only resource supported at this time is pods.
|
||||
includedResources:
|
||||
- pods
|
||||
# Array of resources to which this hook does not apply. Optional.
|
||||
excludedResources: []
|
||||
# This hook only applies to objects matching this label selector. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
|
||||
# are supported.
|
||||
postHooks:
|
||||
# The type of the hook. This must be "init" or "exec".
|
||||
- init:
|
||||
# An array of container specs to be added as init containers to pods to which this hook applies to.
|
||||
initContainers:
|
||||
- name: restore-hook-init1
|
||||
image: alpine:latest
|
||||
# Mounting volumes from the podSpec to which this hooks applies to.
|
||||
volumeMounts:
|
||||
- mountPath: /restores/pvc1-vm
|
||||
# Volume name from the podSpec
|
||||
name: pvc1-vm
|
||||
command:
|
||||
- /bin/ash
|
||||
- -c
|
||||
- echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
|
||||
- name: restore-hook-init2
|
||||
image: alpine:latest
|
||||
# Mounting volumes from the podSpec to which this hooks applies to.
|
||||
volumeMounts:
|
||||
- mountPath: /restores/pvc2-vm
|
||||
# Volume name from the podSpec
|
||||
name: pvc2-vm
|
||||
command:
|
||||
- /bin/ash
|
||||
- -c
|
||||
- echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
|
||||
- exec:
|
||||
# The container name where the hook will be executed. Defaults to the first container.
|
||||
# Optional.
|
||||
container: foo
|
||||
# The command that will be executed in the container. Required.
|
||||
command:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- "psql < /backup/backup.sql"
|
||||
# How long to wait for a container to become ready. This should be long enough for the
|
||||
# container to start plus any preceding hooks in the same container to complete. The wait
|
||||
# timeout begins when the container is restored and may require time for the image to pull
|
||||
# and volumes to mount. If not set the restore will wait indefinitely. Optional.
|
||||
waitTimeout: 5m
|
||||
# How long to wait once execution begins. Defaults to 30 seconds. Optional.
|
||||
execTimeout: 1m
|
||||
# How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
|
||||
# `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
|
||||
# no more restore hooks will be executed in any container in any pod and the status of the
|
||||
# Restore will be `PartiallyFailed`. Optional.
|
||||
onError: Continue
|
||||
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
|
||||
status:
|
||||
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors: null
|
||||
# Number of warnings that were logged by the restore.
|
||||
warnings: 2
|
||||
# Errors is a count of all error messages that were generated
|
||||
# during execution of the restore. The actual errors are stored in object
|
||||
# storage.
|
||||
errors: 0
|
||||
# FailureReason is an error that caused the entire restore
|
||||
# to fail.
|
||||
failureReason:
|
||||
|
||||
```
|
|
@ -0,0 +1,142 @@
|
|||
---
|
||||
title: "Schedule API Type"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Use
|
||||
|
||||
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup for a given cron notation. Once created, the
|
||||
Velero Server will start the backup process. It will then wait for the next valid point of the given cron expression and execute the backup
|
||||
process on a repeating basis.
|
||||
|
||||
## API GroupVersion
|
||||
|
||||
Schedule belongs to the API group version `velero.io/v1`.
|
||||
|
||||
## Definition
|
||||
|
||||
Here is a sample `Schedule` object with each of the fields documented:
|
||||
|
||||
```yaml
|
||||
# Standard Kubernetes API Version declaration. Required.
|
||||
apiVersion: velero.io/v1
|
||||
# Standard Kubernetes Kind declaration. Required.
|
||||
kind: Schedule
|
||||
# Standard Kubernetes metadata. Required.
|
||||
metadata:
|
||||
# Schedule name. May be any valid Kubernetes object name. Required.
|
||||
name: a
|
||||
# Schedule namespace. Must be the namespace of the Velero server. Required.
|
||||
namespace: velero
|
||||
# Parameters about the scheduled backup. Required.
|
||||
spec:
|
||||
# Schedule is a Cron expression defining when to run the Backup
|
||||
schedule: 0 7 * * *
|
||||
# Template is the spec that should be used for each backup triggered by this schedule.
|
||||
template:
|
||||
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
|
||||
# Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to exclude from the scheduled backup. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to include in the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
|
||||
# or fully-qualified. If unspecified, all resources are included. Optional.
|
||||
includedResources:
|
||||
- '*'
|
||||
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
|
||||
# or fully-qualified. Optional.
|
||||
excludedResources:
|
||||
- storageclasses.storage.k8s.io
|
||||
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
|
||||
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
|
||||
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
|
||||
# all cluster-scoped resources are included if and only if all namespaces are included and there are
|
||||
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
|
||||
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
|
||||
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
|
||||
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
|
||||
# cluster-scoped) would also be backed up.
|
||||
includeClusterResources: null
|
||||
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
|
||||
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
|
||||
# a persistent volume provider is configured for Velero.
|
||||
snapshotVolumes: null
|
||||
# Where to store the tarball and logs.
|
||||
storageLocation: aws-primary
|
||||
# The list of locations in which to store volume snapshots created for backups under this schedule.
|
||||
volumeSnapshotLocations:
|
||||
- aws-primary
|
||||
- gcp-primary
|
||||
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
|
||||
# a default value of 30 days will be used. The default can be configured on the velero server
|
||||
# by passing the flag --default-backup-ttl.
|
||||
ttl: 24h0m0s
|
||||
# Whether restic should be used to take a backup of all pod volumes by default.
|
||||
defaultVolumesToRestic: true
|
||||
# The labels you want on backup objects, created from this schedule (instead of copying the labels you have on schedule object itself).
|
||||
# When this field is set, the labels from the Schedule resource are not copied to the Backup resource.
|
||||
metadata:
|
||||
labels:
|
||||
labelname: somelabelvalue
|
||||
# Actions to perform at different times during a backup. The only hook supported is
|
||||
# executing a command in a container in a pod using the pod exec API. Optional.
|
||||
hooks:
|
||||
# Array of hooks that are applicable to specific resources. Optional.
|
||||
resources:
|
||||
-
|
||||
# Name of the hook. Will be displayed in backup log.
|
||||
name: my-hook
|
||||
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
|
||||
# namespaces. Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to which this hook does not apply. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to which this hook applies. The only resource supported at this time is
|
||||
# pods.
|
||||
includedResources:
|
||||
- pods
|
||||
# Array of resources to which this hook does not apply. Optional.
|
||||
excludedResources: []
|
||||
# This hook only applies to objects matching this label selector. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
|
||||
pre:
|
||||
-
|
||||
# The type of hook. This must be "exec".
|
||||
exec:
|
||||
# The name of the container where the command will be executed. If unspecified, the
|
||||
# first container in the pod will be used. Optional.
|
||||
container: my-container
|
||||
# The command to execute, specified as an array. Required.
|
||||
command:
|
||||
- /bin/uname
|
||||
- -a
|
||||
# How to handle an error executing the command. Valid values are Fail and Continue.
|
||||
# Defaults to Fail. Optional.
|
||||
onError: Fail
|
||||
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
|
||||
timeout: 10s
|
||||
# An array of hooks to run after all custom actions and additional items have been
|
||||
# processed. Only "exec" hooks are supported.
|
||||
post:
|
||||
# Same content as pre above.
|
||||
status:
|
||||
# The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# Date/time of the last backup for a given schedule
|
||||
lastBackup:
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors:
|
||||
```
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
title: "Velero Volume Snapshot Location"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Volume Snapshot Location
|
||||
|
||||
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
|
||||
|
||||
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time.
|
||||
|
||||
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
|
||||
|
||||
A sample YAML `VolumeSnapshotLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: VolumeSnapshotLocation
|
||||
metadata:
|
||||
name: aws-default
|
||||
namespace: velero
|
||||
spec:
|
||||
provider: aws
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
{{< table caption="Main config parameters" >}}
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation](../supported-providers) for the appropriate value to use. |
|
||||
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation](../supported-providers) for details. |
|
||||
{{< /table >}}
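For illustration, a hedged sketch of selecting the location above at backup time, using the `volumeSnapshotLocations` field documented in the Backup API type; the backup name is a placeholder:

```yaml
# Hedged sketch: store this backup's volume snapshots in the "aws-default" location.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nightly-apps          # placeholder name
  namespace: velero
spec:
  volumeSnapshotLocations:
  - aws-default
```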
|
|
@ -0,0 +1,111 @@
|
|||
---
|
||||
title: "Backup Hooks"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
Velero supports executing commands in containers in pods during a backup.
|
||||
|
||||
## Backup Hooks
|
||||
|
||||
When performing a backup, you can specify one or more commands to execute in a container in a pod
|
||||
when that pod is being backed up. The commands can be configured to run *before* any custom action
|
||||
processing ("pre" hooks), or after all custom actions have been completed and any additional items
|
||||
specified by custom action have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
|
||||
on the containers.
|
||||
|
||||
There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
|
||||
|
||||
### Specifying Hooks As Pod Annotations
|
||||
|
||||
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
|
||||
|
||||
#### Pre hooks
|
||||
|
||||
* `pre.hook.backup.velero.io/container`
|
||||
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
|
||||
* `pre.hook.backup.velero.io/command`
|
||||
* The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
|
||||
* `pre.hook.backup.velero.io/on-error`
|
||||
    * What to do if the command returns a non-zero exit code. Default is `Fail`. Valid values are `Fail` and `Continue`. Optional.
|
||||
* `pre.hook.backup.velero.io/timeout`
|
||||
    * How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Default is 30s. Optional.
|
||||
|
||||
|
||||
#### Post hooks
|
||||
|
||||
* `post.hook.backup.velero.io/container`
|
||||
* The container where the command should be executed. Default is the first container in the pod. Optional.
|
||||
* `post.hook.backup.velero.io/command`
|
||||
* The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
|
||||
* `post.hook.backup.velero.io/on-error`
|
||||
    * What to do if the command returns a non-zero exit code. Default is `Fail`. Valid values are `Fail` and `Continue`. Optional.
|
||||
* `post.hook.backup.velero.io/timeout`
|
||||
    * How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Default is 30s. Optional.
|
||||
|
||||
### Specifying Hooks in the Backup Spec
|
||||
|
||||
Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
|
||||
spec.
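As a quick orientation, here is a hedged sketch of a single `pre` exec hook in a Backup spec, using only fields shown in that reference and names borrowed from the fsfreeze example later on this page:

```yaml
# Hedged sketch: a Backup with one "pre" exec hook; names are placeholders
# borrowed from the fsfreeze example below.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-with-hooks
  namespace: velero
spec:
  includedNamespaces:
  - nginx-example
  hooks:
    resources:
    - name: my-hook
      includedResources:
      - pods
      pre:
      - exec:
          container: fsfreeze
          command:
          - /sbin/fsfreeze
          - --freeze
          - /var/log/nginx
          onError: Fail
          timeout: 30s
```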
|
||||
|
||||
## Hook Example with fsfreeze
|
||||
|
||||
This example walks you through using both pre and post hooks for freezing a file system. Freezing the
|
||||
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
|
||||
|
||||
This example uses [examples/nginx-app/with-pv.yaml][2]. Follow the [steps for your provider][3] to
|
||||
set up this example.
|
||||
|
||||
### Annotations
|
||||
|
||||
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
|
||||
to your declarative deployment. Below is an example of what updating an object in place might look like.
|
||||
|
||||
```shell
|
||||
kubectl annotate pod -n nginx-example -l app=nginx \
|
||||
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
|
||||
pre.hook.backup.velero.io/container=fsfreeze \
|
||||
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
|
||||
post.hook.backup.velero.io/container=fsfreeze
|
||||
```
|
||||
|
||||
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
|
||||
hooks are running and exiting without error.
|
||||
|
||||
```shell
|
||||
velero backup create nginx-hook-test
|
||||
|
||||
velero backup get nginx-hook-test
|
||||
velero backup logs nginx-hook-test | grep hookCommand
|
||||
```
|
||||
|
||||
## Backup hook commands examples
|
||||
|
||||
### Multiple commands
|
||||
|
||||
To use multiple commands, wrap your target command in a shell and separate them with `;`, `&&`, or other shell conditional constructs.
|
||||
|
||||
```shell
|
||||
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
|
||||
```
|
||||
|
||||
#### Using environment variables
|
||||
|
||||
You can use environment variables from your pods in your pre and post hook commands by including a shell command before using the environment variable. For example, `MYSQL_ROOT_PASSWORD` is an environment variable defined in a pod called `mysql`. To use `MYSQL_ROOT_PASSWORD` in your pre-hook, you'd include a shell, like `/bin/sh`, before calling your environment variable:
|
||||
|
||||
```
|
||||
pre:
|
||||
- exec:
|
||||
container: mysql
|
||||
command:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- mysql --password=$MYSQL_ROOT_PASSWORD -e "FLUSH TABLES WITH READ LOCK"
|
||||
onError: Fail
|
||||
```
|
||||
|
||||
Note that the container must support the shell command you use.
|
||||
|
||||
|
||||
[1]: api-types/backup.md
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.9/examples/nginx-app/with-pv.yaml
|
||||
[3]: cloud-common.md
|
|
@ -0,0 +1,91 @@
|
|||
---
|
||||
title: "Backup Reference"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Exclude Specific Items from Backup
|
||||
|
||||
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
|
||||
|
||||
```bash
|
||||
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
|
||||
```
|
||||
|
||||
## Specify Backup Orders of Resources of Specific Kind
|
||||
|
||||
To back up resources of a specific Kind in a specific order, use the `--ordered-resources` option to specify a mapping from Kinds to an ordered list of specific resources of that Kind. Resource names are separated by commas and are given in the format 'namespace/resourcename'. For cluster-scoped resources, simply use the resource name. Key-value pairs in the mapping are separated by semicolons. Kind names are in plural form.
|
||||
|
||||
```bash
|
||||
velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1
|
||||
velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1
|
||||
```
|
||||
## Schedule a Backup
|
||||
|
||||
The **schedule** operation allows you to create a backup of your data at a specified time, defined by a [Cron expression](https://en.wikipedia.org/wiki/Cron).
|
||||
|
||||
```
|
||||
velero schedule create NAME --schedule="* * * * *" [flags]
|
||||
```
|
||||
|
||||
Cron schedules use the following format.
|
||||
|
||||
```
|
||||
# ┌───────────── minute (0 - 59)
|
||||
# │ ┌───────────── hour (0 - 23)
|
||||
# │ │ ┌───────────── day of the month (1 - 31)
|
||||
# │ │ │ ┌───────────── month (1 - 12)
|
||||
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
|
||||
# │ │ │ │ │ 7 is also Sunday on some systems)
|
||||
# │ │ │ │ │
|
||||
# │ │ │ │ │
|
||||
# * * * * *
|
||||
```
|
||||
|
||||
For example, the command below creates a backup that runs every day at 3am.
|
||||
|
||||
```
|
||||
velero schedule create example-schedule --schedule="0 3 * * *"
|
||||
```
|
||||
|
||||
This command will create the backup, `example-schedule`, within Velero, but the backup will not be taken until the next scheduled time, 3am. Backups created by a schedule are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. For a full list of available configuration flags use the Velero CLI help command.
|
||||
|
||||
```
|
||||
velero schedule create --help
|
||||
```
|
||||
|
||||
Once you create the scheduled backup, you can then trigger it manually using the `velero backup` command.
|
||||
|
||||
```
|
||||
velero backup create --from-schedule example-schedule
|
||||
```
|
||||
|
||||
This command will immediately trigger a new backup based on your template for `example-schedule`. This will not affect the backup schedule, and another backup will trigger at the scheduled time.
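
To list the backups that a schedule has produced, one option is to filter on the schedule-name label. The sketch below assumes Velero labels scheduled backups with `velero.io/schedule-name` and that Velero runs in the default `velero` namespace.

```
kubectl -n velero get backups.velero.io -l velero.io/schedule-name=example-schedule
```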
|
||||
|
||||
|
||||
### Limitation
|
||||
Backups created from a schedule can have an owner reference to the schedule. This can be achieved with the command:
|
||||
|
||||
```
|
||||
velero schedule create <schedule-name> --use-owner-references-in-backup
|
||||
```
|
||||
This way, the schedule is the owner of the backups it creates. This is useful for some GitOps scenarios, or when the Kubernetes resource tree is synchronized from elsewhere.
|
||||
|
||||
Please note that there is also a side effect that may not be expected. Because the schedule is the owner, when the schedule is deleted, the related backup CRs will be deleted by the Kubernetes GC controller too (only the backup CRs are deleted; the backup data still exists in object storage and snapshots). However, the Velero controller will sync these backups from the object store's metadata back into Kubernetes, so the Kubernetes GC controller and the Velero controller will keep fighting over whether these backups should exist.
|
||||
|
||||
If there is a possibility that the schedule will be disabled and no longer create backups, while the backups it already created are still useful, please do not enable this option. For details, see [Backups created by a schedule with useOwnerReferenceInBackup set do not get synced properly](https://github.com/vmware-tanzu/velero/issues/4093).
|
||||
|
||||
|
||||
## Kubernetes API Pagination
|
||||
|
||||
By default, Velero will paginate the LIST API call for each resource type in the Kubernetes API when collecting items into a backup. The `--client-page-size` flag for the Velero server configures the size of each page.
|
||||
|
||||
Depending on the cluster's scale, tuning the page size can improve backup performance. You can experiment with higher values, noting their impact on the relevant `apiserver_request_duration_seconds_*` metrics from the Kubernetes apiserver.
|
||||
|
||||
Pagination can be entirely disabled by setting `--client-page-size` to `0`. This will request all items in a single unpaginated LIST call.
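
As a minimal sketch, assuming the default `velero` namespace and deployment and that the server container is the first container in the pod spec, the flag can be appended to the server's arguments with a JSON patch:

```bash
# Assumes the Velero server is container 0 of deploy/velero in the "velero" namespace.
kubectl -n velero patch deploy/velero --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--client-page-size=200"}]'
```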
|
||||
|
||||
## Deleting Backups
|
||||
|
||||
Use the following commands to delete Velero backups and data:
|
||||
|
||||
* `kubectl delete backup <backupName> -n <veleroNamespace>` will delete the backup custom resource only and will not delete any associated data from object/block storage
|
||||
* `velero backup delete <backupName>` will delete the backup resource including all data in object/block storage
|
|
@ -0,0 +1,73 @@
|
|||
---
|
||||
title: "Basic Install"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
Use this doc to get a basic installation of Velero.
|
||||
Refer to [this document](customize-installation.md) to customize your installation.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatabilty-matrix).
|
||||
- `kubectl` installed locally
|
||||
|
||||
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
|
||||
|
||||
Velero supports storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][2].
|
||||
|
||||
### Velero on Windows
|
||||
|
||||
Velero does not officially support Windows. In testing, the Velero team was able to back up stateless Windows applications only. The restic integration and backups of stateful applications or PersistentVolumes were not supported.
|
||||
|
||||
If you want to perform your own testing of Velero on Windows, you must deploy Velero as a Windows container. Velero does not provide official Windows images, but it's possible for you to build your own Velero Windows container image to use. Note that you must build this image on a Windows node.
|
||||
|
||||
## Install the CLI
|
||||
|
||||
### Option 1: MacOS - Homebrew
|
||||
|
||||
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
|
||||
|
||||
```bash
|
||||
brew install velero
|
||||
```
|
||||
|
||||
### Option 2: GitHub release
|
||||
|
||||
1. Download the [latest release][1]'s tarball for your client platform.
|
||||
1. Extract the tarball:
|
||||
|
||||
```bash
|
||||
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
|
||||
```
|
||||
|
||||
1. Move the extracted `velero` binary to somewhere in your `$PATH` (`/usr/local/bin` for most users).
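
To confirm the CLI is installed and on your `$PATH`, you can print the client version:

```bash
velero version --client-only
```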
|
||||
|
||||
### Option 3: Windows - Chocolatey
|
||||
|
||||
On Windows, you can use [Chocolatey](https://chocolatey.org/install) to install the [velero](https://chocolatey.org/packages/velero) client:
|
||||
|
||||
```powershell
|
||||
choco install velero
|
||||
```
|
||||
|
||||
## Install and configure the server components
|
||||
|
||||
There are two supported methods for installing the Velero server components:
|
||||
|
||||
- the `velero install` CLI command
|
||||
- the [Helm chart](https://vmware-tanzu.github.io/helm-charts/)
|
||||
|
||||
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our [supported storage providers][0] page.
|
||||
|
||||
_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider][4]._
|
||||
|
||||
## Command line Autocompletion
|
||||
|
||||
Please refer to [this part of the documentation][5].
|
||||
|
||||
[0]: supported-providers.md
|
||||
[1]: https://github.com/vmware-tanzu/velero/releases/latest
|
||||
[2]: on-premises.md
|
||||
[3]: overview-plugins.md
|
||||
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
|
||||
[5]: customize-installation.md#optional-velero-cli-configurations
|
|
@ -0,0 +1,200 @@
|
|||
---
|
||||
title: "Build from source"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Access to a Kubernetes cluster, version 1.7 or later.
|
||||
* A DNS server on the cluster
|
||||
* `kubectl` installed
|
||||
* [Go][5] installed (minimum version 1.8)
|
||||
|
||||
## Get the source
|
||||
|
||||
### Option 1) Get latest (recommended)
|
||||
|
||||
```bash
|
||||
mkdir $HOME/go
|
||||
export GOPATH=$HOME/go
|
||||
go get github.com/vmware-tanzu/velero
|
||||
```
|
||||
|
||||
Where `go` is your [import path][4] for Go.
|
||||
|
||||
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
|
||||
|
||||
### Option 2) Release archive
|
||||
|
||||
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
|
||||
|
||||
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
|
||||
|
||||
## Build
|
||||
|
||||
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
|
||||
|
||||
When you build with `make`, the binaries are placed under `_output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin at `_output/bin/darwin/amd64/velero` and the binary for linux at `_output/bin/linux/amd64/velero`. `make` also splices in version and git commit information so that `velero version` displays proper output.
|
||||
|
||||
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to overwrite what image gets deployed, use the `image` flag (see below for instructions on how to build images).
|
||||
|
||||
### Build the binary
|
||||
|
||||
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
|
||||
|
||||
```bash
|
||||
go build ./cmd/velero
|
||||
```
|
||||
|
||||
```bash
|
||||
make local
|
||||
```
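
After `make local` finishes, the resulting binary can be run directly from the output directory described above, for example (assuming your host OS and architecture match the build):

```bash
_output/bin/$(go env GOOS)/$(go env GOARCH)/velero version --client-only
```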
|
||||
|
||||
### Cross compiling
|
||||
|
||||
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
|
||||
|
||||
```bash
|
||||
make build
|
||||
```
|
||||
|
||||
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
|
||||
|
||||
For example, to build for the Mac, run `make build-darwin-amd64`.
|
||||
|
||||
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
|
||||
|
||||
* linux-amd64
|
||||
* linux-arm
|
||||
* linux-arm64
|
||||
* linux-ppc64le
|
||||
* darwin-amd64
|
||||
* windows-amd64
|
||||
|
||||
## Making images and updating Velero
|
||||
|
||||
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
|
||||
|
||||
```bash
|
||||
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
|
||||
```
|
||||
|
||||
To build a Velero container image, you need to configure `buildx` first.
|
||||
|
||||
### Buildx
|
||||
|
||||
Docker Buildx is a CLI plugin that extends the docker command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as `docker build`, with many new features such as creating scoped builder instances and building against multiple nodes concurrently.
|
||||
|
||||
More information is available in the [docker docs][23] and in the [buildx GitHub][24] repo.
|
||||
|
||||
### Image building
|
||||
|
||||
Set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:main` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `velero`.
|
||||
|
||||
Optionally, set the `$VERSION` environment variable to change the image tag or `$BIN` to change which binary to build a container image for. Then, run:
|
||||
|
||||
```bash
|
||||
make container
|
||||
```
|
||||
_Note: To build container images for both `velero` and `velero-restic-restore-helper`, run: `make all-containers`_
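
For example, a sketch of a concrete invocation (the registry and tag values here are placeholders):

```bash
REGISTRY=gcr.io/my-registry VERSION=main-dev make container
```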
|
||||
|
||||
### Publishing container images to a registry
|
||||
|
||||
To publish container images to a registry, the following one-time setup is necessary:
|
||||
|
||||
1. If you are building cross-platform container images:
|
||||
```bash
|
||||
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
|
||||
```
|
||||
1. Create and bootstrap a new docker buildx builder
|
||||
```bash
|
||||
$ docker buildx create --use --name builder
|
||||
builder
|
||||
$ docker buildx inspect --bootstrap
|
||||
[+] Building 2.6s (1/1) FINISHED
|
||||
=> [internal] booting buildkit 2.6s
|
||||
=> => pulling image moby/buildkit:buildx-stable-1 1.9s
|
||||
=> => creating container buildx_buildkit_builder0 0.7s
|
||||
Name: builder
|
||||
Driver: docker-container
|
||||
|
||||
Nodes:
|
||||
Name: builder0
|
||||
Endpoint: unix:///var/run/docker.sock
|
||||
Status: running
|
||||
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
|
||||
```
|
||||
NOTE: Without the above setup, the output of `docker buildx inspect --bootstrap` will be:
|
||||
```bash
|
||||
$ docker buildx inspect --bootstrap
|
||||
Name: default
|
||||
Driver: docker
|
||||
|
||||
Nodes:
|
||||
Name: default
|
||||
Endpoint: default
|
||||
Status: running
|
||||
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
|
||||
```
|
||||
And running `REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container` will fail with the error below:
|
||||
```bash
|
||||
$ REGISTRY=ashishamarnath BUILDX_PLATFORMS=linux/arm64 BUILDX_OUTPUT_TYPE=registry make container
|
||||
auto-push is currently not implemented for docker driver
|
||||
make: *** [container] Error 1
|
||||
```
|
||||
|
||||
Having completed the above one-time setup, the output of `docker buildx inspect --bootstrap` should now look like this:
|
||||
|
||||
```bash
|
||||
$ docker buildx inspect --bootstrap
|
||||
Name: builder
|
||||
Driver: docker-container
|
||||
|
||||
Nodes:
|
||||
Name: builder0
|
||||
Endpoint: unix:///var/run/docker.sock
|
||||
Status: running
|
||||
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v
|
||||
```
|
||||
|
||||
Now build and push the container image by running the `make container` command with `$BUILDX_OUTPUT_TYPE` set to `registry`:
|
||||
```bash
|
||||
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container
|
||||
```
|
||||
|
||||
### Cross platform building
|
||||
|
||||
Docker `buildx` platforms supported:
|
||||
* `linux/amd64`
|
||||
* `linux/arm64`
|
||||
* `linux/arm/v7`
|
||||
* `linux/ppc64le`
|
||||
|
||||
For any specific platform, run `BUILDX_PLATFORMS=<GOOS>/<GOARCH> make container`
|
||||
|
||||
For example, to build an image for arm64, run:
|
||||
|
||||
```bash
|
||||
BUILDX_PLATFORMS=linux/arm64 make container
|
||||
```
|
||||
_Note: By default, `$BUILDX_PLATFORMS` is set to `linux/amd64`_
|
||||
|
||||
With `buildx`, you can also build all supported platforms at the same time and push a multi-arch image to the registry. For example:
|
||||
|
||||
```bash
|
||||
REGISTRY=myrepo VERSION=foo BUILDX_PLATFORMS=linux/amd64,linux/arm64,linux/arm/v7,linux/ppc64le BUILDX_OUTPUT_TYPE=registry make all-containers
|
||||
```
|
||||
_Note: when building for more than 1 platform at the same time, you need to set `BUILDX_OUTPUT_TYPE` to `registry` as local multi-arch images are not supported [yet][25]._
|
||||
|
||||
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod:
|
||||
|
||||
```bash
|
||||
kubectl -n velero delete pods -l deploy=velero
|
||||
```
|
||||
|
||||
[4]: https://blog.golang.org/organizing-go-code
|
||||
[5]: https://golang.org/doc/install
|
||||
[22]: https://github.com/vmware-tanzu/velero/releases
|
||||
[23]: https://docs.docker.com/buildx/working-with-buildx/
|
||||
[24]: https://github.com/docker/buildx
|
||||
[25]: https://github.com/moby/moby/pull/38738
|
|
@ -0,0 +1,151 @@
|
|||
---
|
||||
title: "Code Standards"
|
||||
layout: docs
|
||||
toc: "true"
|
||||
---
|
||||
|
||||
## Opening PRs
|
||||
|
||||
When opening a pull request, please fill out the checklist supplied in the template. This will help others properly categorize and review your pull request.
|
||||
|
||||
## Adding a changelog
|
||||
|
||||
Authors are expected to include a changelog file with their pull requests. The changelog file
|
||||
should be a new file created in the `changelogs/unreleased` folder. The file should follow the
|
||||
naming convention of `pr-username` and the contents of the file should be your text for the
|
||||
changelog.
|
||||
|
||||
velero/changelogs/unreleased <- folder
|
||||
000-username <- file
|
||||
|
||||
Add that to the PR.
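
As a sketch, for a hypothetical PR number 1234 opened by user `jane`, the changelog file could be created like this:

```bash
mkdir -p changelogs/unreleased
echo "Fix the restic prune frequency configuration" > changelogs/unreleased/1234-jane
```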
|
||||
|
||||
If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR. If you are making a PR on a release branch, you should still make a new file in the `changelogs/unreleased` folder on the release branch for your change.
|
||||
|
||||
## Copyright header
|
||||
|
||||
Whenever a source code file is being modified, the copyright notice should be updated to our standard copyright notice. That is, it should read “Copyright the Velero contributors.”
|
||||
|
||||
For new files, the entire copyright and license header must be added.
|
||||
|
||||
Please note that doc files do not need a copyright header.
|
||||
|
||||
## Code
|
||||
|
||||
- Log messages are capitalized.
|
||||
|
||||
- Error messages are kept lower-cased.
|
||||
|
||||
- Wrap/add a stack only to errors that are being directly returned from non-velero code, such as an API call to the Kubernetes server.
|
||||
|
||||
```go
|
||||
errors.WithStack(err)
|
||||
```
|
||||
|
||||
- Prefer to use the utilities in the Kubernetes package [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets).
|
||||
|
||||
```bash
|
||||
k8s.io/apimachinery/pkg/util/sets
|
||||
```
|
||||
|
||||
## Imports
|
||||
|
||||
For imports, we use the following convention:
|
||||
|
||||
`<group><version><api | client | informer | ...>`
|
||||
|
||||
Example:
|
||||
|
||||
import (
|
||||
corev1api "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
|
||||
corev1listers "k8s.io/client-go/listers/core/v1"
|
||||
|
||||
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
|
||||
velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
|
||||
)
|
||||
|
||||
## Mocks
|
||||
|
||||
We use the [mockery](https://github.com/vektra/mockery) package to generate mocks for our interfaces.
|
||||
|
||||
Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.9/pkg/restic/mocks/restorer.go
|
||||
|
||||
Run:
|
||||
|
||||
```bash
|
||||
go get github.com/vektra/mockery/.../
|
||||
cd pkg/restic
|
||||
mockery -name=Restorer
|
||||
```
|
||||
|
||||
You might need to run `make update` to update the imports.
|
||||
|
||||
## Kubernetes Labels
|
||||
|
||||
When generating label values, be sure to pass them through the `label.GetValidName()` helper function.
|
||||
|
||||
This will help ensure that the values are the proper length and format to be stored and queried.
|
||||
|
||||
In general, UIDs are safe to persist as label values.
|
||||
|
||||
This function is not relevant to annotation values, which do not have restrictions.
|
||||
|
||||
## DCO Sign off
|
||||
|
||||
All authors to the project retain copyright to their work. However, to ensure
|
||||
that they are only submitting work that they have rights to, we are requiring
|
||||
everyone to acknowledge this by signing their work.
|
||||
|
||||
Any copyright notices in this repo should specify the authors as "the Velero contributors".
|
||||
|
||||
To sign your work, just add a line like this at the end of your commit message:
|
||||
|
||||
```
|
||||
Signed-off-by: Joe Beda <joe@heptio.com>
|
||||
```
|
||||
|
||||
This can easily be done with the `--signoff` option to `git commit`.
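
For example, to sign off a commit as you create it:

```bash
git commit --signoff -m "Fix typo in backup reference docs"
```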
|
||||
|
||||
By doing this you state that you can certify the following (from https://developercertificate.org/):
|
||||
|
||||
```
|
||||
Developer Certificate of Origin
|
||||
Version 1.1
|
||||
|
||||
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
|
||||
1 Letterman Drive
|
||||
Suite D4700
|
||||
San Francisco, CA, 94129
|
||||
|
||||
Everyone is permitted to copy and distribute verbatim copies of this
|
||||
license document, but changing it is not allowed.
|
||||
|
||||
|
||||
Developer's Certificate of Origin 1.1
|
||||
|
||||
By making a contribution to this project, I certify that:
|
||||
|
||||
(a) The contribution was created in whole or in part by me and I
|
||||
have the right to submit it under the open source license
|
||||
indicated in the file; or
|
||||
|
||||
(b) The contribution is based upon previous work that, to the best
|
||||
of my knowledge, is covered under an appropriate open source
|
||||
license and I have the right under that license to submit that
|
||||
work with modifications, whether created in whole or in part
|
||||
by me, under the same open source license (unless I am
|
||||
permitted to submit under a different license), as indicated
|
||||
in the file; or
|
||||
|
||||
(c) The contribution was provided directly to me by some other
|
||||
person who certified (a), (b) or (c) and I have not modified
|
||||
it.
|
||||
|
||||
(d) I understand and agree that this project and the contribution
|
||||
are public and that a record of the contribution (including all
|
||||
personal information I submit with it, including my sign-off) is
|
||||
maintained indefinitely and may be redistributed consistent with
|
||||
this project or the open source license(s) involved.
|
||||
```
|
|
@ -0,0 +1,101 @@
|
|||
---
|
||||
title: "Use IBM Cloud Object Storage as Velero's storage destination."
|
||||
layout: docs
|
||||
---
|
||||
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and use IBM Cloud Object Storage as a destination for Velero's backups.
|
||||
|
||||
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
|
||||
|
||||
* Download an official release of Velero
|
||||
* Create your COS instance
|
||||
* Create an S3 bucket
|
||||
* Define a service that can store data in the bucket
|
||||
* Configure and start the Velero server
|
||||
|
||||
## Download Velero
|
||||
|
||||
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
|
||||
|
||||
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
|
||||
Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch
|
||||
of the Velero repository is under active development and is not guaranteed to be stable!_
|
||||
|
||||
1. Extract the tarball:
|
||||
|
||||
```bash
|
||||
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
|
||||
```
|
||||
|
||||
The directory you extracted is called the "Velero directory" in subsequent steps.
|
||||
|
||||
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
|
||||
|
||||
## Create COS instance
|
||||
If you don’t have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
|
||||
|
||||
## Create an S3 bucket
|
||||
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
|
||||
|
||||
## Define a service that can store data in the bucket
|
||||
The process of creating service credentials is described in [Service credentials][3].
|
||||
Several comments:
|
||||
|
||||
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
|
||||
|
||||
2. Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys, known as HMAC credentials. You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See the [HMAC credentials][31] guide.
|
||||
|
||||
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. Use them in the next step.
|
||||
|
||||
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id=<ACCESS_KEY_ID>
|
||||
aws_secret_access_key=<SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
Where the access key ID and secret access key are the values that you got above.
|
||||
|
||||
## Install and start Velero
|
||||
|
||||
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
|
||||
|
||||
```bash
|
||||
velero install \
|
||||
--provider aws \
|
||||
--bucket <YOUR_BUCKET> \
|
||||
--secret-file ./credentials-velero \
|
||||
--use-volume-snapshots=false \
|
||||
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
|
||||
```
|
||||
|
||||
Velero does not have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
|
||||
|
||||
Additionally, you can specify `--use-restic` to enable [restic support][16], and `--wait` to wait for the deployment to be ready.
|
||||
|
||||
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
|
||||
|
||||
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
|
||||
|
||||
```bash
|
||||
kubectl -n velero delete volumesnapshotlocation.velero.io default
|
||||
```
|
||||
|
||||
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
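
For example, a minimal sketch of generating (rather than applying) the installation YAML, using the same flags as above:

```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT> \
    --dry-run -o yaml > velero-install.yaml
```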
|
||||
|
||||
## Installing the nginx example (optional)
|
||||
|
||||
If you run the nginx example, in file `examples/nginx-app/with-pv.yaml`:
|
||||
|
||||
Uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace with your `StorageClass` name.
|
||||
|
||||
[0]: ../namespace.md
|
||||
[1]: https://cloud.ibm.com/docs/cloud-object-storage/getting-started.html
|
||||
[2]: https://cloud.ibm.com/docs/cloud-object-storage/getting-started.html#create-buckets
|
||||
[3]: https://cloud.ibm.com/docs/cloud-object-storage/iam?topic=cloud-object-storage-service-credentials
|
||||
[31]: https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main
|
||||
[4]: https://www.ibm.com/docs/en/cloud-private
|
||||
[5]: https://cloud.ibm.com/docs/containers/container_index.html#container_index
|
||||
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
|
||||
[15]: ../customize-installation.md#customize-resource-requests-and-limits
|
||||
[16]: ../restic.md
|
|
@ -0,0 +1,296 @@
|
|||
---
|
||||
title: "Quick start evaluation install with Minio"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
The following example sets up the Velero server and client, then backs up and restores a sample application.
|
||||
|
||||
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
|
||||
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
|
||||
|
||||
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
|
||||
|
||||
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
|
||||
|
||||
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
|
||||
* A DNS server on the cluster
|
||||
* `kubectl` installed
|
||||
* Sufficient disk space to store backups in Minio. You will need sufficient disk space available to handle any
|
||||
backups plus at least 1GB additional. Minio will not operate if less than 1GB of free disk space is available.
|
||||
|
||||
## Install the CLI
|
||||
|
||||
### Option 1: MacOS - Homebrew
|
||||
|
||||
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
|
||||
|
||||
```bash
|
||||
brew install velero
|
||||
```
|
||||
|
||||
### Option 2: GitHub release
|
||||
|
||||
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
|
||||
|
||||
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
|
||||
Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch
|
||||
of the Velero repository is under active development and is not guaranteed to be stable!_
|
||||
|
||||
1. Extract the tarball:
|
||||
|
||||
```bash
|
||||
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
|
||||
```
|
||||
|
||||
The directory you extracted is called the "Velero directory" in subsequent steps.
|
||||
|
||||
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
|
||||
|
||||
## Set up server
|
||||
|
||||
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster](#expose-minio-outside-your-cluster-with-a-service) for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
|
||||
|
||||
1. Create a Velero-specific credentials file (`credentials-velero`) in your Velero directory:
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id = minio
|
||||
aws_secret_access_key = minio123
|
||||
```
|
||||
|
||||
1. Start the server and the local storage service. In the Velero directory, run:
|
||||
|
||||
```
|
||||
kubectl apply -f examples/minio/00-minio-deployment.yaml
|
||||
```
|
||||
_Note_: The example Minio yaml provided uses an `emptyDir` volume. Your node needs to have enough space available to store the
data being backed up plus 1GB of free space. If the node does not have enough space, you can modify the example yaml to
use a Persistent Volume instead of an `emptyDir` volume.
|
||||
|
||||
```
|
||||
velero install \
|
||||
--provider aws \
|
||||
--plugins velero/velero-plugin-for-aws:v1.2.1 \
|
||||
--bucket velero \
|
||||
--secret-file ./credentials-velero \
|
||||
--use-volume-snapshots=false \
|
||||
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
|
||||
```
|
||||
|
||||
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
|
||||
|
||||
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
|
||||
|
||||
This example also assumes you have named your Minio bucket "velero".
|
||||
|
||||
|
||||
1. Deploy the example nginx application:
|
||||
|
||||
```bash
|
||||
kubectl apply -f examples/nginx-app/base.yaml
|
||||
```
|
||||
|
||||
1. Check to see that both the Velero and nginx deployments are successfully created:
|
||||
|
||||
```
|
||||
kubectl get deployments -l component=velero --namespace=velero
|
||||
kubectl get deployments --namespace=nginx-example
|
||||
```
|
||||
|
||||
## Back up
|
||||
|
||||
1. Create a backup for any object that matches the `app=nginx` label selector:
|
||||
|
||||
```
|
||||
velero backup create nginx-backup --selector app=nginx
|
||||
```
|
||||
|
||||
Alternatively if you want to backup all objects *except* those matching the label `backup=ignore`:
|
||||
|
||||
```
|
||||
velero backup create nginx-backup --selector 'backup notin (ignore)'
|
||||
```
|
||||
|
||||
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
|
||||
|
||||
```
|
||||
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
|
||||
```
|
||||
|
||||
Alternatively, you can use some non-standard shorthand cron expressions:
|
||||
|
||||
```
|
||||
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
|
||||
```
|
||||
|
||||
See the [cron package's documentation][30] for more usage examples.
|
||||
|
||||
1. Simulate a disaster:
|
||||
|
||||
```
|
||||
kubectl delete namespace nginx-example
|
||||
```
|
||||
|
||||
1. To check that the nginx deployment and service are gone, run:
|
||||
|
||||
```
|
||||
kubectl get deployments --namespace=nginx-example
|
||||
kubectl get services --namespace=nginx-example
|
||||
kubectl get namespace/nginx-example
|
||||
```
|
||||
|
||||
You should get no results.
|
||||
|
||||
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
|
||||
|
||||
## Restore
|
||||
|
||||
1. Run:
|
||||
|
||||
```
|
||||
velero restore create --from-backup nginx-backup
|
||||
```
|
||||
|
||||
1. Run:
|
||||
|
||||
```
|
||||
velero restore get
|
||||
```
|
||||
|
||||
After the restore finishes, the output looks like the following:
|
||||
|
||||
```
|
||||
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
|
||||
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
|
||||
```
|
||||
|
||||
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
|
||||
|
||||
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
|
||||
|
||||
If there are errors or warnings, you can look at them in detail:
|
||||
|
||||
```
|
||||
velero restore describe <RESTORE_NAME>
|
||||
```
|
||||
|
||||
For more information, see [the debugging information][18].
|
||||
|
||||
## Clean up
|
||||
|
||||
If you want to delete any backups you created, including data in object storage and persistent
|
||||
volume snapshots, you can run:
|
||||
|
||||
```
|
||||
velero backup delete BACKUP_NAME
|
||||
```
|
||||
|
||||
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
|
||||
this for each backup you want to permanently delete. A future version of Velero will allow you to
|
||||
delete multiple backups by name or label selector.
|
||||
|
||||
Once fully removed, the backup is no longer visible when you run:
|
||||
|
||||
```
|
||||
velero backup get BACKUP_NAME
|
||||
```
|
||||
|
||||
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
|
||||
|
||||
```
|
||||
kubectl delete namespace/velero clusterrolebinding/velero
|
||||
kubectl delete crds -l component=velero
|
||||
kubectl delete -f examples/nginx-app/base.yaml
|
||||
```
|
||||
|
||||
## Expose Minio outside your cluster with a Service
|
||||
|
||||
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
|
||||
|
||||
- Change the Minio Service type from `ClusterIP` to `NodePort`.
|
||||
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
|
||||
|
||||
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
|
||||
|
||||
### Expose Minio with Service of type NodePort
|
||||
|
||||
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
|
||||
|
||||
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
|
||||
|
||||
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
|
||||
|
||||
1. Get the Minio URL:
|
||||
|
||||
- if you're running Minikube:
|
||||
|
||||
```shell
|
||||
minikube service minio --namespace=velero --url
|
||||
```
|
||||
|
||||
- in any other environment:
|
||||
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
|
||||
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
|
||||
|
||||
```shell
|
||||
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
|
||||
```
|
||||
|
||||
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
|
||||
|
||||
## Accessing logs with an HTTPS endpoint
|
||||
|
||||
If you're using Minio with HTTPS, you may see unintelligible text in the output of `velero describe`, or `velero logs` commands.
|
||||
|
||||
To fix this, you can add a public URL to the `BackupStorageLocation`.
|
||||
|
||||
In a terminal, run the following:
|
||||
|
||||
```shell
|
||||
kubectl patch -n velero backupstoragelocation default --type merge -p '{"spec":{"config":{"publicUrl":"https://<a public IP for your Minio instance>:9000"}}}'
|
||||
```
|
||||
|
||||
If your certificate is self-signed, see the [documentation on self-signed certificates][32].
|
||||
|
||||
## Expose Minio outside your cluster with Kubernetes in Docker (KinD):
|
||||
|
||||
Kubernetes in Docker does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
|
||||
|
||||
In a terminal, run the following:
|
||||
|
||||
```shell
|
||||
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
|
||||
|
||||
kubectl port-forward $MINIO_POD -n velero 9000:9000
|
||||
```
|
||||
|
||||
Then, in another terminal:
|
||||
|
||||
```shell
|
||||
kubectl edit backupstoragelocation default -n velero
|
||||
```
|
||||
|
||||
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
|
||||
|
||||
|
||||
### Work with Ingress
|
||||
|
||||
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
|
||||
|
||||
In this case:
|
||||
|
||||
1. Keep the Service type as `ClusterIP`.
|
||||
|
||||
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
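
For example, using the same patch approach shown earlier (the hostname below is only a placeholder for your Ingress URL):

```shell
kubectl patch -n velero backupstoragelocation default --type merge \
  -p '{"spec":{"config":{"publicUrl":"https://minio.example.com"}}}'
```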
|
||||
|
||||
[1]: #expose-minio-with-service-of-type-nodeport
|
||||
[3]: ../customize-installation.md
|
||||
[17]: ../restic.md
|
||||
[18]: ../debugging-restores.md
|
||||
[26]: https://github.com/vmware-tanzu/velero/releases
|
||||
[30]: https://godoc.org/github.com/robfig/cron
|
||||
[32]: ../self-signed-certificates.md
|
|
@ -0,0 +1,248 @@
|
|||
---
|
||||
title: "Use Oracle Cloud as a Backup Storage Provider for Velero"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
|
||||
|
||||
1. [Download Velero](#download-velero)
|
||||
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
|
||||
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
|
||||
4. [Install Velero](#install-velero)
|
||||
5. [Clean Up](#clean-up)
|
||||
6. [Examples](#examples)
|
||||
7. [Additional Reading](#additional-reading)
|
||||
|
||||
## Download Velero
|
||||
|
||||
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
|
||||
|
||||
```
|
||||
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
|
||||
```
|
||||
|
||||
**NOTE:** It's strongly recommended that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!
|
||||
|
||||
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
|
||||
|
||||
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
|
||||
|
||||
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`
|
||||
|
||||
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
|
||||
|
||||
```
|
||||
$ velero
|
||||
Velero is a tool for managing disaster recovery, specifically for Kubernetes
|
||||
cluster resources. It provides a simple, configurable, and operationally robust
|
||||
way to back up your application state and associated data.
|
||||
|
||||
If you're familiar with kubectl, Velero supports a similar model, allowing you to
|
||||
execute commands such as 'velero get backup' and 'velero create schedule'. The same
|
||||
operations can also be performed as 'velero backup get' and 'velero schedule create'.
|
||||
|
||||
Usage:
|
||||
velero [command]
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Create A Customer Secret Key
|
||||
|
||||
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
|
||||
|
||||
2. Create a Velero credentials file with your Customer Secret Key:
|
||||
|
||||
```
|
||||
$ vi credentials-velero
|
||||
|
||||
[default]
|
||||
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
|
||||
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Create An Oracle Object Storage Bucket
|
||||
|
||||
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
|
||||
|
||||
|
||||
|
||||
## Install Velero
|
||||
|
||||
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
|
||||
|
||||
```
|
||||
velero install \
|
||||
--provider [provider name] \
|
||||
--bucket [bucket name] \
|
||||
--prefix [tenancy name] \
|
||||
--use-volume-snapshots=false \
|
||||
--secret-file [secret file location] \
|
||||
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
|
||||
```
|
||||
|
||||
- `--provider` This example uses the S3-compatible API, so use `aws` as the provider.
|
||||
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
|
||||
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
|
||||
- `--use-volume-snapshots=false` Velero does not have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
|
||||
- `--secret-file` The path to your `credentials-velero` file.
|
||||
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
velero install \
|
||||
--provider aws \
|
||||
--bucket velero \
|
||||
--prefix oracle-cloudnative \
|
||||
--use-volume-snapshots=false \
|
||||
--secret-file /Users/mboxell/bin/velero/credentials-velero \
|
||||
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
|
||||
```
|
||||
|
||||
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
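
For example, assuming the default `velero` namespace:

```
kubectl -n velero get pods
kubectl -n velero logs deploy/velero
```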
|
||||
|
||||
|
||||
|
||||
## Clean Up
|
||||
|
||||
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment and delete the CRDs, run:
|
||||
|
||||
```
|
||||
kubectl delete namespace/velero clusterrolebinding/velero
|
||||
kubectl delete crds -l component=velero
|
||||
```
|
||||
|
||||
This will remove all resources created by `velero install`.
|
||||
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
After creating the Velero server in your cluster, try this example:
|
||||
|
||||
### Basic example (without PersistentVolumes)
|
||||
|
||||
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
|
||||
|
||||
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
|
||||
|
||||
```
|
||||
$ kubectl apply -f examples/nginx-app/base.yaml
|
||||
namespace/nginx-example created
|
||||
deployment.apps/nginx-deployment created
|
||||
service/my-nginx created
|
||||
```
|
||||
|
||||
You can see the created resources by running `kubectl get all`
|
||||
|
||||
```
|
||||
$ kubectl get all
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
|
||||
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
|
||||
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
deployment.apps/nginx-deployment 2 2 2 2 21s
|
||||
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
|
||||
```
|
||||
|
||||
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
|
||||
|
||||
```
|
||||
$ velero backup create nginx-backup --include-namespaces nginx-example
|
||||
Backup request "nginx-backup" submitted successfully.
|
||||
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
|
||||
```
|
||||
|
||||
At this point you can navigate to the appropriate bucket, called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
|
||||
|
||||
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
|
||||
|
||||
```
|
||||
$ kubectl delete namespaces nginx-example
|
||||
namespace "nginx-example" deleted
|
||||
```
|
||||
|
||||
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
|
||||
|
||||
```
|
||||
kubectl get deployments --namespace=nginx-example
|
||||
kubectl get services --namespace=nginx-example
|
||||
kubectl get namespace/nginx-example
|
||||
```
|
||||
|
||||
This should return: `No resources found.`
|
||||
|
||||
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
|
||||
|
||||
```
|
||||
$ velero restore create --from-backup nginx-backup
|
||||
Restore request "nginx-backup-20190604102710" submitted successfully.
|
||||
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
|
||||
```
|
||||
|
||||
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
|
||||
|
||||
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
|
||||
|
||||
```
|
||||
$ velero restore get
|
||||
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
|
||||
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
|
||||
```
|
||||
|
||||
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
|
||||
|
||||
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
|
||||
|
||||
If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `InProgress`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`.
|
||||
|
||||
|
||||
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
|
||||
|
||||
```
|
||||
$ kubectl delete -f examples/nginx-app/base.yaml
|
||||
namespace "nginx-example" deleted
|
||||
deployment.apps "nginx-deployment" deleted
|
||||
service "my-nginx" deleted
|
||||
```
|
||||
|
||||
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
|
||||
|
||||
```
|
||||
$ velero backup delete nginx-backup
|
||||
Are you sure you want to continue (Y/N)? Y
|
||||
Request to delete backup "nginx-backup" submitted successfully.
|
||||
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
|
||||
```
|
||||
|
||||
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
|
||||
|
||||
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
|
||||
|
||||
```
|
||||
$ velero backup get nginx-backup
|
||||
An error occurred: backups.velero.io "nginx-backup" not found
|
||||
```
|
||||
|
||||
```
|
||||
$ velero backup get
|
||||
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Additional Reading
|
||||
|
||||
* [Official Velero Documentation](https://velero.io/docs/v1.9/)
|
||||
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)
|
|
@ -0,0 +1,168 @@
|
|||
---
|
||||
title: "Use Tencent Cloud Object Storage as Velero's storage destination."
|
||||
layout: docs
|
||||
---
|
||||
|
||||
|
||||
You can deploy Velero on Tencent [TKE](https://cloud.tencent.com/document/product/457), or any other Kubernetes cluster, and use Tencent Cloud Object Storage as a destination for Velero's backups.
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Registered [Tencent Cloud Account](https://cloud.tencent.com/register).
|
||||
- The [Tencent Cloud COS](https://console.cloud.tencent.com/cos) service, referred to as COS, has been activated
|
||||
- A Kubernetes cluster (version v1.16 or later) has been created, and the cluster can use DNS and Internet services normally. If you need to create a TKE cluster, refer to the Tencent [create a cluster](https://cloud.tencent.com/document/product/457/32189) documentation.
|
||||
|
||||
## Create a Tencent Cloud COS bucket
|
||||
|
||||
Create an object storage bucket for Velero to store backups in, using the Tencent Cloud COS console. For how to create one, please refer to the Tencent Cloud COS [Create a bucket](https://cloud.tencent.com/document/product/436/13309) usage instructions.
|
||||
|
||||
Set access to the bucket through the object storage console. The bucket needs to be **readable** and **writable**, so the account must be granted data-reading and data-writing permissions. For how to configure this, see the Tencent [permission access settings](https://cloud.tencent.com/document/product/436/13315.E5.8D.95.E4.B8.AA.E6.8E.88.E6.9D.83) user instructions.
|
||||
|
||||
## Get bucket access credentials
|
||||
|
||||
Velero uses an AWS S3-compatible API to access Tencent Cloud COS storage, which requires authentication using a signature created from an access key ID and secret key pair.
|
||||
|
||||
In the S3 API parameters, the `access_key_id` field is the access key ID and the `secret_access_key` field is the secret key.
|
||||
|
||||
In the [Tencent Cloud Access Management Console](https://console.cloud.tencent.com/cam/capi), create and acquire the Tencent Cloud keys `SecretId` and `SecretKey` for a COS-authorized account. **The `SecretId` value corresponds to the S3 API parameter `access_key_id`, and the `SecretKey` value corresponds to the S3 API parameter `secret_access_key`**.
|
||||
|
||||
Based on the above correspondence, create the credentials file `credentials-velero` required by Velero in your local directory:
|
||||
|
||||
```bash
|
||||
[default]
|
||||
aws_access_key_id=<SecretId>
|
||||
aws_secret_access_key=<SecretKey>
|
||||
```
|
||||
|
||||
## Install Velero Resources
|
||||
|
||||
You need to install the Velero CLI first, see [Install the CLI](https://velero.io/docs/v1.5/basic-install/#install-the-cli) for how to install.
|
||||
|
||||
Use the Velero installation command below to create the velero and restic workloads and the other necessary resource objects.
|
||||
|
||||
```bash
|
||||
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.1.0 --bucket <BucketName> \
|
||||
--secret-file ./credentials-velero \
|
||||
--use-restic \
|
||||
--default-volumes-to-restic \
|
||||
--backup-location-config \
|
||||
region=ap-guangzhou,s3ForcePathStyle="true",s3Url=https://cos.ap-guangzhou.myqcloud.com
|
||||
```
|
||||
|
||||
Description of the parameters:
|
||||
|
||||
- `--provider`: Declares the provider plugin type; `aws` is used because COS is accessed through the AWS S3-compatible API.
|
||||
|
||||
- `--plugins`: Use the AWS S3-compatible API plugin `velero-plugin-for-aws`.
|
||||
|
||||
- `--bucket`: The bucket name created at Tencent Cloud COS.
|
||||
|
||||
- `--secret-file`: The path to the Tencent Cloud COS access credentials file, i.e. the `credentials-velero` file created above.
|
||||
|
||||
- `--use-restic`: Back up and restore persistent volume data using the free, open source backup tool [restic](https://github.com/restic/restic) (note that `hostPath` volumes are not supported; see the [restic limitations](https://velero.io/docs/v1.5/restic/#limitations) for details). This integration complements Velero's backup capabilities and is recommended to be turned on.
|
||||
|
||||
- `--default-volumes-to-restic`: Use restic to back up all pod volumes by default. This requires the `--use-restic` flag to be enabled.
|
||||
|
||||
- `--backup-location-config`: The backup bucket access configuration:
|
||||
|
||||
`region`: The Tencent Cloud COS bucket region. For example, if the bucket was created in Guangzhou, the value is "ap-guangzhou".
|
||||
|
||||
`s3ForcePathStyle`: Use the S3 file path format.
|
||||
|
||||
`s3Url`: The Tencent Cloud COS-compatible S3 API endpoint. Note that this is not the bucket's public access domain name; it must be a URL of the form "https://cos.`region`.myqcloud.com". For example, if the region is Guangzhou, the value is "https://cos.ap-guangzhou.myqcloud.com".
|
||||
|
||||
Other installation parameters can be viewed with `velero install --help`. For example, if you do not want to back up volume data at all, you can set `--use-volume-snapshots=false` to disable volume snapshot backups.
|
||||
|
||||
After executing the installation commands above, the installation process looks like this:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/9015313121ed7987558c88081b052574.png" width="100%">}}
|
||||
|
||||
After the installation command completes, wait for the velero and restic workloads to be ready, then check whether the configured storage location is available.
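For example, you can check that the workloads are ready with `kubectl` and then list the storage location (output will vary by environment):

```bash
# Wait until the velero deployment and restic daemonset report ready
kubectl -n velero get deployment/velero daemonset/restic

# Check the backup storage location status
velero backup-location get
```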
|
||||
|
||||
Execute the `velero backup-location get` command to view the storage location status; a phase of "Available" indicates that access to Tencent Cloud COS is working, as shown in the following image:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/69194157ccd5e377d1e7d914fd8c0336.png" width="100%">}}
|
||||
|
||||
At this point, the installation using Tencent Cloud COS as the Velero storage location is complete. If you need more information about installing Velero, see the official [Velero documentation](https://velero.io/docs/).
|
||||
|
||||
## Velero backup and restore example
|
||||
|
||||
In the cluster, use the helm tool to create a MinIO test service with a persistent volume. The MinIO installation method can be found in the [MinIO chart](https://github.com/minio/charts); in this example a load balancer is bound to the MinIO service so that the management page can be accessed from a browser using a public address.
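A minimal sketch of such a MinIO installation with helm is shown below; the chart repository URL, release name, and values are assumptions, so consult the MinIO chart documentation for the options that match your environment:

```bash
# Add the MinIO chart repository and install a test release with a persistent volume
helm repo add minio https://charts.min.io/
helm install minio-test minio/minio \
  --namespace default \
  --set persistence.enabled=true,persistence.size=10Gi \
  --set service.type=LoadBalancer
```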
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/f0fff5228527edc72d6e71a50d5dc966.png" width="100%">}}
|
||||
|
||||
Sign in to the MinIO web management page and upload some image data for the test, as shown below:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/e932223585c0b19891cc085ad7f438e1.png" width="100%">}}
|
||||
|
||||
With Velero Backup, you can back up all objects in the cluster directly, or filter objects by type, namespace, and/or label. This example uses the following command to back up all resources under the 'default' namespace.
|
||||
|
||||
```
|
||||
velero backup create default-backup --include-namespaces <Namespace>
|
||||
```
|
||||
|
||||
Use the `velero backup get` command to check whether the backup task is complete. When the backup status is "Completed" and the errors count is zero, the backup finished successfully, as shown below:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/eb2bbabae48b188748f5278bedf177f1.png" width="100%">}}
|
||||
|
||||
At this point, delete all of MinIO's resources, including its PVC persistent volume, as shown below:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/15ccaacf00640a04ae29ceed4c86195b.png" width="100%">}}
|
||||
|
||||
After deleting the MinIO resources, use your backup to restore them. First, temporarily update the backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
|
||||
|
||||
```bash
|
||||
kubectl patch backupstoragelocation default --namespace velero \
|
||||
--type merge \
|
||||
--patch '{"spec":{"accessMode":"ReadOnly"}}'
|
||||
|
||||
```
|
||||
|
||||
The access mode of Velero's backup storage location is now "ReadOnly", as shown in the following image:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/e8c2ab4e5e31d1370c62fad25059a8a8.png" width="100%">}}
|
||||
|
||||
Now use the backup "default-backup" that Velero just created to create the restore task:
|
||||
|
||||
```bash
|
||||
velero restore create --from-backup <BackupObject>
|
||||
```
|
||||
|
||||
You can also use `velero restore get` to see the status of the restore task, and if the restore status is "Completed," the restore task is complete, as shown in the following image:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/effe8a0a7ce3aa8e422db00bfdddc375.png" width="100%">}}
|
||||
|
||||
When the restore is complete, you can see that the previously deleted minio-related resources have been restored successfully, as shown in the following image:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/1d53b0115644d43657c2a5ece805c9b4.png" width="100%">}}
|
||||
|
||||
Log in to MinIO's management page in your browser and you can see that the previously uploaded image data is still there, indicating that the persistent volume's data was successfully restored, as shown below:
|
||||
|
||||
{{< figure src="/docs/main/contributions/img-for-tencent/ceaca9ce6bc92bdce987c63d2fe71561.png" width="100%">}}
|
||||
|
||||
When the restore is complete, don't forget to set the backup storage location back to read-write mode so that the next backup task can run successfully:
|
||||
|
||||
```bash
|
||||
kubectl patch backupstoragelocation default --namespace velero \
|
||||
--type merge \
|
||||
--patch '{"spec":{"accessMode":"ReadWrite"}}'
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Uninstall Velero Resources
|
||||
|
||||
To uninstall the Velero resources in a cluster, run the following commands:
|
||||
|
||||
```bash
|
||||
kubectl delete namespace/velero clusterrolebinding/velero
|
||||
kubectl delete crds -l component=velero
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Additional Reading
|
||||
|
||||
- [Official Velero Documentation](https://velero.io/docs/)
|
||||
- [Tencent Cloud Documentation](https://cloud.tencent.com/document/product)
|
|
@ -0,0 +1,75 @@
|
|||
---
|
||||
title: "Container Storage Interface Snapshot Support in Velero"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
Integrating Container Storage Interface (CSI) snapshot support into Velero enables Velero to back up and restore CSI-backed volumes using the [Kubernetes CSI Snapshot APIs](https://kubernetes.io/docs/concepts/storage/volume-snapshots/).
|
||||
|
||||
By supporting CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a Velero-specific plugin to be available. This page gives an overview of how to add support for CSI snapshots to Velero through CSI plugins. For more information about specific components, see the [plugin repo](https://github.com/vmware-tanzu/velero-plugin-for-csi/).
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. Your cluster is Kubernetes version 1.20 or greater.
|
||||
1. Your cluster is running a CSI driver capable of supporting volume snapshots at the [v1 API level](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/).
|
||||
1. When restoring CSI VolumeSnapshots across clusters, the name of the CSI driver in the destination cluster must be the same as that on the source cluster to ensure cross-cluster portability of CSI VolumeSnapshots.
|
||||
|
||||
**NOTE:** Not all cloud providers' CSI drivers guarantee snapshot durability, meaning that the VolumeSnapshot and VolumeSnapshotContent objects may be stored in the same object storage system location as the original PersistentVolume and may be vulnerable to data loss. Refer to your cloud provider's documentation for more information on configuring snapshot durability. As of v0.3.0, the Velero team provides official support for the CSI plugin when it is used with the AWS and Azure drivers.
|
||||
|
||||
## Installing Velero with CSI support
|
||||
|
||||
To integrate Velero with the CSI volume snapshot APIs, you must enable the `EnableCSI` feature flag and install the Velero [CSI plugins][2] on the Velero server.
|
||||
|
||||
Both of these can be added with the `velero install` command.
|
||||
|
||||
```bash
|
||||
velero install \
|
||||
--features=EnableCSI \
|
||||
--plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.3.0 \
|
||||
...
|
||||
```
|
||||
|
||||
To include the status of CSI objects associated with a Velero backup in `velero backup describe` output, run `velero client config set features=EnableCSI`.
|
||||
See [Enabling Features][1] for more information about managing client-side feature flags. You can also view the image on [Docker Hub][3].
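For example (the backup name is illustrative):

```bash
# Enable the client-side feature flag, then describe a backup that used CSI snapshots
velero client config set features=EnableCSI
velero backup describe my-csi-backup
```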
|
||||
|
||||
## Implementation Choices
|
||||
|
||||
This section documents some of the choices made during implementation of the Velero [CSI plugins][2]:
|
||||
|
||||
1. VolumeSnapshots created by the Velero CSI plugins are retained only for the lifetime of the backup, even if the `DeletionPolicy` on the VolumeSnapshotClass is set to `Retain`. To accomplish this, during deletion of the backup and prior to deleting the VolumeSnapshot, the VolumeSnapshotContent object is patched to set its `DeletionPolicy` to `Delete`. Deleting the VolumeSnapshot object then results in a cascade delete of the VolumeSnapshotContent and the snapshot in the storage provider.
|
||||
1. VolumeSnapshotContent objects created during a `velero backup` that are dangling (unbound to a VolumeSnapshot object) will be discovered using labels and deleted on backup deletion.
|
||||
1. To back up CSI-backed PVCs, the Velero CSI plugins will choose the VolumeSnapshotClass in the cluster that has the same driver name and also has the `velero.io/csi-volumesnapshot-class` label set on it (see the `kubectl` example after this list), like
|
||||
```yaml
|
||||
velero.io/csi-volumesnapshot-class: "true"
|
||||
```
|
||||
1. The VolumeSnapshot objects will be removed from the cluster after the backup is uploaded to the object storage, so that the namespace that is backed up can be deleted without removing the snapshot in the storage provider if the `DeletionPolicy` is `Delete`.
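For example, an existing VolumeSnapshotClass can be marked for Velero (as described in item 3 above) with `kubectl`; the class name is illustrative:

```bash
# Label the VolumeSnapshotClass so the Velero CSI plugins will select it
kubectl label volumesnapshotclass csi-snapclass \
  velero.io/csi-volumesnapshot-class="true"
```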
|
||||
|
||||
## How it Works - Overview
|
||||
|
||||
Velero's CSI support does not rely on the Velero VolumeSnapshotter plugin interface.
|
||||
|
||||
Instead, Velero uses a collection of BackupItemAction plugins that act first against PersistentVolumeClaims.
|
||||
|
||||
When this BackupItemAction sees PersistentVolumeClaims pointing to a PersistentVolume backed by a CSI driver, it will choose the VolumeSnapshotClass with the same driver name that has the `velero.io/csi-volumesnapshot-class` label to create a CSI VolumeSnapshot object with the PersistentVolumeClaim as a source.
|
||||
This VolumeSnapshot object resides in the same namespace as the PersistentVolumeClaim that was used as a source.
|
||||
|
||||
From there, the CSI external-snapshotter controller will see the VolumeSnapshot and create a VolumeSnapshotContent object, a cluster-scoped resource that will point to the actual, disk-based snapshot in the storage system.
|
||||
The external-snapshotter plugin will call the CSI driver's snapshot method, and the driver will call the storage system's APIs to generate the snapshot.
|
||||
Once an ID is generated and the storage system marks the snapshot as usable for restore, the VolumeSnapshotContent object will be updated with a `status.snapshotHandle` and the `status.readyToUse` field will be set.
|
||||
|
||||
Velero will include the generated VolumeSnapshot and VolumeSnapshotContent objects in the backup tarball, as well as upload all VolumeSnapshot and VolumeSnapshotContent objects in a JSON file to the object storage system.
|
||||
|
||||
When Velero synchronizes backups into a new cluster, VolumeSnapshotContent objects and the VolumeSnapshotClass that is chosen to take the
|
||||
snapshot will be synced into the cluster as well, so that Velero can manage backup expiration appropriately.
|
||||
|
||||
|
||||
The `DeletionPolicy` on the VolumeSnapshotContent will be the same as the `DeletionPolicy` on the VolumeSnapshotClass that was used to create the VolumeSnapshot. Setting a `DeletionPolicy` of `Retain` on the VolumeSnapshotClass will preserve the volume snapshot in the storage system for the lifetime of the Velero backup and will prevent the deletion of the volume snapshot, in the storage system, in the event of a disaster where the namespace with the VolumeSnapshot object may be lost.
|
||||
|
||||
When the Velero backup expires, the VolumeSnapshot objects will be deleted and the VolumeSnapshotContent objects will be updated to have a `DeletionPolicy` of `Delete`, to free space on the storage system.
|
||||
|
||||
For more details on how each plugin works, see the [CSI plugin repo][2]'s documentation.
|
||||
|
||||
**Note:** The AWS, Microsoft Azure, and Google Cloud Platform (GCP) Velero plugins version 1.4 and later are able to snapshot and restore persistent volumes provisioned by a CSI driver via the APIs of the cloud provider, without having to install Velero CSI plugins. See the [AWS](https://github.com/vmware-tanzu/velero-plugin-for-aws), [Microsoft Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure), and [Google Cloud Platform (GCP)](https://github.com/vmware-tanzu/velero-plugin-for-gcp) Velero plugin repo for more information on supported CSI drivers.
|
||||
|
||||
[1]: customize-installation.md#enable-server-side-features
|
||||
[2]: https://github.com/vmware-tanzu/velero-plugin-for-csi/
|
||||
[3]: https://hub.docker.com/repository/docker/velero/velero-plugin-for-csi
|
|
@ -0,0 +1,115 @@
|
|||
---
|
||||
title: "Plugins"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
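For example, a plugin image can be added to an existing Velero server with the `velero plugin add` command; the image reference below is illustrative:

```bash
# Adds the plugin image as an init container on the Velero deployment
velero plugin add myregistry.io/my-velero-plugin:v0.1.0
```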
|
||||
|
||||
Multiple plugins, of any type, can be implemented in this binary.
|
||||
|
||||
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
|
||||
|
||||
## Plugin Naming
|
||||
|
||||
A plugin is identified by a prefix + name.
|
||||
|
||||
**Note: Please don't use `velero.io` as the prefix for a plugin not supported by the Velero team.** The prefix should help users identify the entity developing the plugin, so please use a prefix that identifies you or your organization.
|
||||
|
||||
Whenever you define a Backup Storage Location or Volume Snapshot Location, this full name will be the value for the `provider` specification.
|
||||
|
||||
For example: `oracle.io/oracle`.
|
||||
|
||||
```
|
||||
apiVersion: velero.io/v1
|
||||
kind: BackupStorageLocation
|
||||
spec:
|
||||
provider: oracle.io/oracle
|
||||
```
|
||||
|
||||
```
|
||||
apiVersion: velero.io/v1
|
||||
kind: VolumeSnapshotLocation
|
||||
spec:
|
||||
provider: oracle.io/oracle
|
||||
```
|
||||
|
||||
When naming your plugin, keep in mind that the full name needs to conform to these rules:
|
||||
- have two parts, prefix + name, separated by '/'
|
||||
- none of the above parts can be empty
|
||||
- the prefix is a valid DNS subdomain name
|
||||
- a plugin with the same prefix + name must not already exist
|
||||
|
||||
### Some examples:
|
||||
|
||||
```
|
||||
- example.io/azure
|
||||
- 1.2.3.4/5678
|
||||
- example-with-dash.io/azure
|
||||
```
|
||||
|
||||
You will need to give your plugin(s) the full name when registering them by calling the appropriate `RegisterX` function: <https://github.com/vmware-tanzu/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
|
||||
|
||||
## Plugin Kinds
|
||||
|
||||
Velero supports the following kinds of plugins:
|
||||
|
||||
- **Object Store** - persists and retrieves backups, backup logs and restore logs
|
||||
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
|
||||
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
|
||||
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
|
||||
- **Delete Item Action** - executes arbitrary logic based on individual items within a backup prior to deleting the backup
|
||||
|
||||
## Plugin Logging
|
||||
|
||||
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
|
||||
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
|
||||
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
|
||||
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
|
||||
|
||||
## Plugin Configuration
|
||||
|
||||
Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime,
|
||||
define a ConfigMap like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
# any name can be used; Velero uses the labels (below)
|
||||
# to identify it rather than the name
|
||||
name: my-plugin-config
|
||||
|
||||
# must be in the namespace where the velero deployment
|
||||
# is running
|
||||
namespace: velero
|
||||
|
||||
labels:
|
||||
# this value-less label identifies the ConfigMap as
|
||||
# config for a plugin (the built-in change storageclass
|
||||
# restore item action plugin)
|
||||
velero.io/plugin-config: ""
|
||||
|
||||
# add a label whose key corresponds to the fully-qualified
|
||||
# plugin name (for example mydomain.io/my-plugin-name), and whose
|
||||
# value is the plugin type (BackupItemAction, RestoreItemAction,
|
||||
# ObjectStore, or VolumeSnapshotter)
|
||||
<fully-qualified-plugin-name>: <plugin-type>
|
||||
|
||||
data:
|
||||
# add your configuration data here as key-value pairs
|
||||
```
|
||||
|
||||
Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration.
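As a sketch, an equivalent ConfigMap could also be created and labeled imperatively; the plugin name, type, and data key below are assumptions for illustration:

```bash
# Create the ConfigMap with arbitrary configuration data
kubectl -n velero create configmap my-plugin-config \
  --from-literal=mySetting=myValue

# Label it so Velero can find it and associate it with the plugin
kubectl -n velero label configmap my-plugin-config \
  velero.io/plugin-config="" \
  mydomain.io/my-plugin-name=RestoreItemAction
```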
|
||||
|
||||
## Feature Flags
|
||||
|
||||
Velero will pass any known features flags as a comma-separated list of strings to the `--features` argument.
|
||||
|
||||
Once parsed into a `[]string`, the features can then be registered using the `NewFeatureFlagSet` function and queried with `features.Enabled(<featureName>)`.
|
||||
|
||||
## Environment Variables
|
||||
|
||||
Velero adds `LD_LIBRARY_PATH` to the list of environment variables as a convenience for plugins that require C libraries/extensions at runtime.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero-plugin-example
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.9/pkg/plugin/logger.go
|
|
@ -0,0 +1,390 @@
|
|||
---
|
||||
title: "Customize Velero Install"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Plugins
|
||||
|
||||
During install, Velero requires that at least one plugin be added (with the `--plugins` flag). Please see the documentation under [Plugins](overview-plugins.md).
|
||||
|
||||
## Install in any namespace
|
||||
|
||||
Velero is installed in the `velero` namespace by default. However, you can install Velero in any namespace. See [run in custom namespace][2] for details.
|
||||
|
||||
## Use non-file-based identity mechanisms
|
||||
|
||||
By default, `velero install` expects a credentials file for your `velero` IAM account to be provided via the `--secret-file` flag.
|
||||
|
||||
If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the `--no-secret` flag instead of `--secret-file`.
|
||||
|
||||
## Enable restic integration
|
||||
|
||||
By default, `velero install` does not install Velero's [restic integration][3]. To enable it, specify the `--use-restic` flag.
|
||||
|
||||
If you've already run `velero install` without the `--use-restic` flag, you can run the same command again, including the `--use-restic` flag, to add the restic integration to your existing install.
|
||||
|
||||
## Default Pod Volume backup to restic
|
||||
|
||||
By default, `velero install` does not enable the use of restic to take backups of all pod volumes. You must apply an [annotation](restic.md/#using-opt-in-pod-volume-backup) to every pod which contains volumes for Velero to use restic for the backup.
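For example, a volume can be opted in with the `backup.velero.io/backup-volumes` annotation; the namespace, pod, and volume names are illustrative:

```bash
kubectl -n my-namespace annotate pod/my-app-pod \
  backup.velero.io/backup-volumes=data-volume
```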
|
||||
|
||||
If you are planning to only use restic for volume backups, you can run the `velero install` command with the `--default-volumes-to-restic` flag. This will default all pod volume backups to use restic without having to apply annotations to pods. Note that when this flag is set during install, Velero will always try to use restic to perform the backup, even if you want an individual backup to use volume snapshots by setting the `--snapshot-volumes` flag in the `backup create` command. Alternatively, you can set `--default-volumes-to-restic` on an individual backup to make sure Velero uses restic for each volume being backed up.
|
||||
|
||||
## Enable features
|
||||
|
||||
New features in Velero will be released as beta features behind feature flags which are not enabled by default. A full listing of Velero feature flags can be found [here][11].
|
||||
|
||||
### Enable server side features
|
||||
|
||||
Features on the Velero server can be enabled using the `--features` flag to the `velero install` command. This flag takes as value a comma separated list of feature flags to enable. As an example [CSI snapshotting of PVCs][10] can be enabled using `EnableCSI` feature flag in the `velero install` command as shown below:
|
||||
|
||||
```bash
|
||||
velero install --features=EnableCSI
|
||||
```
|
||||
|
||||
Another example is enabling support for multiple API group versions, as documented at [--features=EnableAPIGroupVersions](enable-api-group-versions-feature.md).
|
||||
|
||||
Feature flags passed to `velero install` will be passed to the Velero deployment and also to the `restic` daemon set, if the `--use-restic` flag is used.
|
||||
|
||||
Similarly, features may be disabled by removing the corresponding feature flags from the `--features` flag.
|
||||
|
||||
Enabling and disabling feature flags will require modifying the Velero deployment and also the restic daemonset. This may be done from the CLI by uninstalling and re-installing Velero, or by editing the `deploy/velero` and `daemonset/restic` resources in-cluster.
|
||||
|
||||
```bash
|
||||
$ kubectl -n velero edit deploy/velero
|
||||
$ kubectl -n velero edit daemonset/restic
|
||||
```
|
||||
|
||||
### Enable client side features
|
||||
|
||||
For some features it may be necessary to use the `--features` flag to the Velero client. This may be done by passing `--features` on every command run using the Velero CLI, or by setting the features in the Velero client config file using the `velero client config set` command as shown below:
|
||||
|
||||
```bash
|
||||
velero client config set features=EnableCSI
|
||||
```
|
||||
|
||||
This stores the config in a file at `$HOME/.config/velero/config.json`.
|
||||
|
||||
All client-side feature flags may be disabled using the command below:
|
||||
|
||||
```bash
|
||||
velero client config set features=
|
||||
```
|
||||
|
||||
### Colored CLI output
|
||||
|
||||
Velero CLI uses colored output for some commands, such as `velero describe`. If
|
||||
the environment in which Velero is run doesn't support colored output, the
|
||||
colored output will be automatically disabled. However, you can manually disable
|
||||
colors with the config file:
|
||||
|
||||
```bash
|
||||
velero client config set colorized=false
|
||||
```
|
||||
|
||||
Note that if you specify `--colorized=true` as a CLI option it will override
|
||||
the config file setting.
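For example, to force colored output for a single command regardless of the config file setting (the backup name is illustrative):

```bash
velero backup describe my-backup --colorized=true
```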
|
||||
|
||||
|
||||
## Customize resource requests and limits
|
||||
|
||||
At installation, Velero sets default resource requests and limits for the Velero pod and the restic pod, if you are using the [restic integration](/docs/main/restic/).
|
||||
|
||||
{{< table caption="Velero Customize resource requests and limits defaults" >}}
|
||||
|Setting|Velero pod defaults|restic pod defaults|
|
||||
|--- |--- |--- |
|
||||
|CPU request|500m|500m|
|
||||
|Memory requests|128Mi|512Mi|
|
||||
|CPU limit|1000m (1 CPU)|1000m (1 CPU)|
|
||||
|Memory limit|512Mi|1024Mi|
|
||||
{{< /table >}}
|
||||
|
||||
Depending on the cluster resources, especially if you are using Restic, you may need to increase these defaults. Through testing, the Velero maintainers have found these defaults work well when backing up and restoring 1,000 or fewer resources with a total file size of 100GB or below. If the resources you are planning to back up or restore exceed this, you will need to increase the CPU or memory resources available to Velero. In general, the Velero maintainers' testing found that backup operations needed more CPU and memory resources but were less time-consuming than restore operations, when comparing backing up and restoring the same amount of data. The exact CPU and memory limits you will need depend on the scale of the files and directories of your resources and your hardware. It's recommended that you perform your own testing to find the best resource limits for your clusters and resources.
|
||||
|
||||
Due to a [known Restic issue](https://github.com/restic/restic/issues/2446), the Restic pod will consume large amounts of memory, especially if you are backing up millions of tiny files and directories. If you are planning to use Restic to backup 100GB of data or more, you will need to increase the resource limits to make sure backups complete successfully.
|
||||
|
||||
### Install with custom resource requests and limits
|
||||
|
||||
You can customize these resource requests and limit when you first install using the [velero install][6] CLI command.
|
||||
|
||||
```
|
||||
velero install \
|
||||
--velero-pod-cpu-request <CPU_REQUEST> \
|
||||
--velero-pod-mem-request <MEMORY_REQUEST> \
|
||||
--velero-pod-cpu-limit <CPU_LIMIT> \
|
||||
--velero-pod-mem-limit <MEMORY_LIMIT> \
|
||||
[--use-restic] \
|
||||
[--default-volumes-to-restic] \
|
||||
[--restic-pod-cpu-request <CPU_REQUEST>] \
|
||||
[--restic-pod-mem-request <MEMORY_REQUEST>] \
|
||||
[--restic-pod-cpu-limit <CPU_LIMIT>] \
|
||||
[--restic-pod-mem-limit <MEMORY_LIMIT>]
|
||||
```
|
||||
|
||||
### Update resource requests and limits after install
|
||||
|
||||
After installation you can adjust the resource requests and limits in the Velero Deployment spec or restic DaemonSet spec, if you are using the restic integration.
|
||||
|
||||
**Velero pod**
|
||||
|
||||
Update the `spec.template.spec.containers.resources.limits` and `spec.template.spec.containers.resources.requests` values in the Velero deployment.
|
||||
|
||||
```bash
|
||||
kubectl patch deployment velero -n velero --patch \
|
||||
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
|
||||
```
|
||||
|
||||
**restic pod**
|
||||
|
||||
Update the `spec.template.spec.containers.resources.limits` and `spec.template.spec.containers.resources.requests` values in the restic DaemonSet spec.
|
||||
|
||||
```bash
|
||||
kubectl patch daemonset restic -n velero --patch \
|
||||
'{"spec":{"template":{"spec":{"containers":[{"name": "restic", "resources": {"limits":{"cpu": "1", "memory": "1024Mi"}, "requests": {"cpu": "1", "memory": "512Mi"}}}]}}}}'
|
||||
```
|
||||
|
||||
Additionally, you may want to update the default Velero restic pod operation timeout (default 240 minutes) to allow larger backups more time to complete. You can adjust this timeout by adding the `- --restic-timeout` argument to the Velero Deployment spec.
|
||||
|
||||
**NOTE:** Changes made to this timeout value will revert back to the default value if you re-run the Velero install command.
|
||||
|
||||
1. Open the Velero Deployment spec.
|
||||
|
||||
```
|
||||
kubectl edit deploy velero -n velero
|
||||
```
|
||||
|
||||
1. Add `- --restic-timeout` to `spec.template.spec.containers`.
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
template:
|
||||
spec:
|
||||
containers:
|
||||
- args:
|
||||
- --restic-timeout=240m
|
||||
```
|
||||
|
||||
## Configure more than one storage location for backups or volume snapshots
|
||||
|
||||
Velero supports any number of backup storage locations and volume snapshot locations. For more details, see [about locations](locations.md).
|
||||
|
||||
However, `velero install` only supports configuring at most one backup storage location and one volume snapshot location.
|
||||
|
||||
To configure additional locations after running `velero install`, use the `velero backup-location create` and/or `velero snapshot-location create` commands along with provider-specific configuration. Use the `--help` flag on each of these commands for more details.
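For example, an additional backup storage location and volume snapshot location might be added as follows; the names, bucket, and regions are illustrative:

```bash
velero backup-location create backups-secondary \
  --provider aws \
  --bucket velero-backups-secondary \
  --config region=us-west-2

velero snapshot-location create ebs-us-west-2 \
  --provider aws \
  --config region=us-west-2
```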
|
||||
|
||||
### Set default backup storage location or volume snapshot locations
|
||||
|
||||
When performing backups, Velero needs to know where to back up your data. This means that if you configure multiple locations, you must specify the location Velero should use each time you run `velero backup create`, or you can set a default backup storage location or default volume snapshot locations. If you only have one backup storage location or volume snapshot location set for a provider, Velero will automatically use that location as the default.
|
||||
|
||||
Set a default backup storage location by passing the `--default` flag when running `velero backup-location create`.
|
||||
|
||||
```
|
||||
velero backup-location create backups-primary \
|
||||
--provider aws \
|
||||
--bucket velero-backups \
|
||||
--config region=us-east-1 \
|
||||
--default
|
||||
```
|
||||
|
||||
You can set a default volume snapshot location for each of your volume snapshot providers using the `--default-volume-snapshot-locations` flag on the `velero server` command.
|
||||
|
||||
```
|
||||
velero server --default-volume-snapshot-locations="<PROVIDER-NAME>:<LOCATION-NAME>,<PROVIDER2-NAME>:<LOCATION2-NAME>"
|
||||
```
|
||||
|
||||
## Do not configure a backup storage location during install
|
||||
|
||||
If you need to install Velero without a default backup storage location (without specifying `--bucket` or `--provider`), the `--no-default-backup-location` flag is required for confirmation.
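A minimal sketch of such an install is shown below; the plugin image is illustrative and the other flags should be adjusted for your environment:

```bash
velero install \
  --plugins velero/velero-plugin-for-aws:v1.4.0 \
  --no-default-backup-location \
  --no-secret \
  --use-volume-snapshots=false
```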
|
||||
|
||||
## Install an additional volume snapshot provider
|
||||
|
||||
Velero supports using different providers for volume snapshots than for object storage -- for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots.
|
||||
|
||||
However, `velero install` only supports configuring a single matching provider for both object storage and volume snapshots.
|
||||
|
||||
To use a different volume snapshot provider:
|
||||
|
||||
1. Install the Velero server components by following the instructions for your **object storage** provider
|
||||
|
||||
1. Add your volume snapshot provider's plugin to Velero (look in [your provider][0]'s documentation for the image name):
|
||||
|
||||
```bash
|
||||
velero plugin add <registry/image:version>
|
||||
```
|
||||
|
||||
1. Add a volume snapshot location for your provider, following [your provider][0]'s documentation for configuration:
|
||||
|
||||
```bash
|
||||
velero snapshot-location create <NAME> \
|
||||
--provider <PROVIDER-NAME> \
|
||||
[--config <PROVIDER-CONFIG>]
|
||||
```
|
||||
|
||||
## Generate YAML only
|
||||
|
||||
By default, `velero install` generates and applies a customized set of Kubernetes configuration (YAML) to your cluster.
|
||||
|
||||
To generate the YAML without applying it to your cluster, use the `--dry-run -o yaml` flags.
|
||||
|
||||
This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc.
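For example, to render the manifests to a file for later customization (the provider, plugin version, and bucket are illustrative):

```bash
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.4.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --dry-run -o yaml > velero-install.yaml
```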
|
||||
|
||||
If you are installing Velero in Kubernetes 1.14.x or earlier, you need to use `kubectl apply`'s `--validate=false` option when applying the generated configuration to your cluster. See [issue 2077][7] and [issue 2311][8] for more context.
|
||||
|
||||
## Use a storage provider secured by a self-signed certificate
|
||||
|
||||
If you intend to use Velero with a storage provider that is secured by a self-signed certificate,
|
||||
you may need to instruct Velero to trust that certificate. See [use Velero with a storage provider secured by a self-signed certificate][9] for details.
|
||||
|
||||
## Additional options
|
||||
|
||||
Run `velero install --help` or see the [Helm chart documentation](https://vmware-tanzu.github.io/helm-charts/) for the full set of installation options.
|
||||
|
||||
## Optional Velero CLI configurations
|
||||
|
||||
### Enabling shell autocompletion
|
||||
|
||||
**Velero CLI** provides autocompletion support for `Bash` and `Zsh`, which can save you a lot of typing.
|
||||
|
||||
Below are the procedures to set up autocompletion for `Bash` (including the difference between `Linux` and `macOS`) and `Zsh`.
|
||||
|
||||
#### Bash on Linux
|
||||
|
||||
The **Velero CLI** completion script for `Bash` can be generated with the command `velero completion bash`. Sourcing the completion script in your shell enables velero autocompletion.
|
||||
|
||||
However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).
|
||||
|
||||
##### Install bash-completion
|
||||
|
||||
`bash-completion` is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.
|
||||
|
||||
The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file.
|
||||
|
||||
To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set, otherwise add the following to your `~/.bashrc` file:
|
||||
|
||||
```shell
|
||||
source /usr/share/bash-completion/bash_completion
|
||||
```
|
||||
|
||||
Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.
|
||||
|
||||
##### Enable Velero CLI autocompletion for Bash on Linux
|
||||
|
||||
You now need to ensure that the **Velero CLI** completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
|
||||
|
||||
- Source the completion script in your `~/.bashrc` file:
|
||||
|
||||
```shell
|
||||
echo 'source <(velero completion bash)' >>~/.bashrc
|
||||
```
|
||||
|
||||
- Add the completion script to the `/etc/bash_completion.d` directory:
|
||||
|
||||
```shell
|
||||
velero completion bash >/etc/bash_completion.d/velero
|
||||
```
|
||||
|
||||
- If you have an alias for velero, you can extend shell completion to work with that alias:
|
||||
|
||||
```shell
|
||||
echo 'alias v=velero' >>~/.bashrc
|
||||
echo 'complete -F __start_velero v' >>~/.bashrc
|
||||
```
|
||||
|
||||
> `bash-completion` sources all completion scripts in `/etc/bash_completion.d`.
|
||||
|
||||
Both approaches are equivalent. After reloading your shell, velero autocompletion should be working.
|
||||
|
||||
#### Bash on macOS
|
||||
|
||||
The **Velero CLI** completion script for Bash can be generated with `velero completion bash`. Sourcing this script in your shell enables velero completion.
|
||||
|
||||
However, the velero completion script depends on [**bash-completion**](https://github.com/scop/bash-completion) which you thus have to previously install.
|
||||
|
||||
|
||||
> There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The velero completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use velero completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
|
||||
|
||||
|
||||
##### Install bash-completion
|
||||
|
||||
> As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case velero completion won't work).
|
||||
|
||||
You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew:
|
||||
|
||||
```shell
|
||||
brew install bash-completion@2
|
||||
```
|
||||
|
||||
As stated in the output of this command, add the following to your `~/.bashrc` file:
|
||||
|
||||
```shell
|
||||
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
|
||||
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
|
||||
```
|
||||
|
||||
Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.
|
||||
|
||||
##### Enable Velero CLI autocompletion for Bash on macOS
|
||||
|
||||
You now have to ensure that the velero completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
|
||||
|
||||
- Source the completion script in your `~/.bashrc` file:
|
||||
|
||||
```shell
|
||||
echo 'source <(velero completion bash)' >>~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
- Add the completion script to the `/usr/local/etc/bash_completion.d` directory:
|
||||
|
||||
```shell
|
||||
velero completion bash >/usr/local/etc/bash_completion.d/velero
|
||||
```
|
||||
|
||||
- If you have an alias for velero, you can extend shell completion to work with that alias:
|
||||
|
||||
```shell
|
||||
echo 'alias v=velero' >>~/.bashrc
|
||||
echo 'complete -F __start_velero v' >>~/.bashrc
|
||||
```
|
||||
|
||||
- If you installed velero with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the velero completion script should already be in `/usr/local/etc/bash_completion.d/velero`. In that case, you don't need to do anything.
|
||||
|
||||
> The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, that's why the latter two methods work.
|
||||
|
||||
In any case, after reloading your shell, velero completion should be working.
|
||||
|
||||
#### Autocompletion on Zsh
|
||||
|
||||
The velero completion script for Zsh can be generated with the command `velero completion zsh`. Sourcing the completion script in your shell enables velero autocompletion.
|
||||
|
||||
To do so in all your shell sessions, add the following to your `~/.zshrc` file:
|
||||
|
||||
```shell
|
||||
source <(velero completion zsh)
|
||||
```
|
||||
|
||||
If you have an alias for velero, you can extend shell completion to work with that alias:
|
||||
|
||||
```shell
|
||||
echo 'alias v=velero' >>~/.zshrc
|
||||
echo 'complete -F __start_velero v' >>~/.zshrc
|
||||
```
|
||||
|
||||
After reloading your shell, velero autocompletion should be working.
|
||||
|
||||
If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:
|
||||
|
||||
```shell
|
||||
autoload -Uz compinit
|
||||
compinit
|
||||
```
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero/releases/latest
|
||||
[2]: namespace.md
|
||||
[3]: restic.md
|
||||
[4]: on-premises.md
|
||||
[6]: velero-install.md#usage
|
||||
[7]: https://github.com/vmware-tanzu/velero/issues/2077
|
||||
[8]: https://github.com/vmware-tanzu/velero/issues/2311
|
||||
[9]: self-signed-certificates.md
|
||||
[10]: csi.md
|
||||
[11]: https://github.com/vmware-tanzu/velero/blob/v1.9/pkg/apis/velero/v1/constants.go
|
|
@ -0,0 +1,74 @@
|
|||
---
|
||||
title: "Debugging Installation Issues"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## General
|
||||
|
||||
### `invalid configuration: no configuration has been provided`
|
||||
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the
|
||||
following locations:
|
||||
* the path specified by the `--kubeconfig` flag, if any
|
||||
* the path specified by the `$KUBECONFIG` environment variable, if any
|
||||
* `~/.kube/config`
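If in doubt about which kubeconfig is in use, you can point the CLI at one explicitly, for example:

```bash
velero backup get --kubeconfig ~/.kube/config
```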
|
||||
|
||||
### Backups or restores stuck in `New` phase
|
||||
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
|
||||
```
|
||||
kubectl -n velero describe pods
|
||||
kubectl -n velero logs deployment/velero
|
||||
```
|
||||
|
||||
|
||||
## AWS
|
||||
|
||||
### `NoCredentialProviders: no valid providers in chain`
|
||||
|
||||
#### Using credentials
|
||||
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly
|
||||
into the Velero server pod. Ensure the following:
|
||||
|
||||
* The `cloud-credentials` secret exists in the Velero server's namespace
|
||||
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
|
||||
* The `credentials-velero` file is formatted properly and has the correct values:
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id=<your AWS access key ID>
|
||||
aws_secret_access_key=<your AWS secret access key>
|
||||
```
|
||||
|
||||
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
|
||||
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
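For example, you can check the secret and inspect its contents (assuming the default `velero` namespace):

```bash
kubectl -n velero get secret cloud-credentials
kubectl -n velero get secret cloud-credentials \
  -o jsonpath='{.data.cloud}' | base64 --decode
```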
|
||||
|
||||
#### Using kube2iam
|
||||
This means that Velero can't read the content of the S3 bucket. Ensure the following:
|
||||
|
||||
* A Trust Policy document exists that allows the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
|
||||
* The new Velero role has all the permissions listed in the documentation regarding S3.
|
||||
|
||||
|
||||
## Azure
|
||||
|
||||
### `Failed to refresh the Token` or `adal: Refresh request failed`
|
||||
This means that the secrets containing the Azure service principal credentials for Velero have not been created/mounted
|
||||
properly into the Velero server pod. Ensure the following:
|
||||
|
||||
* The `cloud-credentials` secret exists in the Velero server's namespace
|
||||
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
|
||||
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
|
||||
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
|
||||
|
||||
|
||||
## GCE/GKE
|
||||
|
||||
### `open credentials/cloud: no such file or directory`
|
||||
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
|
||||
into the Velero server pod. Ensure the following:
|
||||
|
||||
* The `cloud-credentials` secret exists in the Velero server's namespace
|
||||
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
|
||||
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
|
||||
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
|
||||
|
||||
[0]: azure-config.md#create-service-principal
|
|
@ -0,0 +1,105 @@
|
|||
---
|
||||
title: "Debugging Restores"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Example
|
||||
|
||||
When Velero finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `velero restore get`:
|
||||
|
||||
```
|
||||
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
|
||||
backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none>
|
||||
backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none>
|
||||
backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none>
|
||||
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
|
||||
```
|
||||
|
||||
To examine the warnings and errors in more detail, you can use `velero restore describe`:
|
||||
|
||||
```bash
|
||||
velero restore describe backup-test-20170726180512
|
||||
```
|
||||
|
||||
The output looks like this:
|
||||
|
||||
```
|
||||
Name: backup-test-20170726180512
|
||||
Namespace: velero
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Backup: backup-test
|
||||
|
||||
Namespaces:
|
||||
Included: *
|
||||
Excluded: <none>
|
||||
|
||||
Resources:
|
||||
Included: serviceaccounts
|
||||
Excluded: nodes, events, events.events.k8s.io
|
||||
Cluster-scoped: auto
|
||||
|
||||
Namespace mappings: <none>
|
||||
|
||||
Label selector: <none>
|
||||
|
||||
Restore PVs: auto
|
||||
|
||||
Preserve Service NodePorts: auto
|
||||
|
||||
Phase: Completed
|
||||
|
||||
Validation errors: <none>
|
||||
|
||||
Warnings:
|
||||
Velero: <none>
|
||||
Cluster: <none>
|
||||
Namespaces:
|
||||
velero: serviceaccounts "velero" already exists
|
||||
serviceaccounts "default" already exists
|
||||
kube-public: serviceaccounts "default" already exists
|
||||
kube-system: serviceaccounts "attachdetach-controller" already exists
|
||||
serviceaccounts "certificate-controller" already exists
|
||||
serviceaccounts "cronjob-controller" already exists
|
||||
serviceaccounts "daemon-set-controller" already exists
|
||||
serviceaccounts "default" already exists
|
||||
serviceaccounts "deployment-controller" already exists
|
||||
serviceaccounts "disruption-controller" already exists
|
||||
serviceaccounts "endpoint-controller" already exists
|
||||
serviceaccounts "generic-garbage-collector" already exists
|
||||
serviceaccounts "horizontal-pod-autoscaler" already exists
|
||||
serviceaccounts "job-controller" already exists
|
||||
serviceaccounts "kube-dns" already exists
|
||||
serviceaccounts "namespace-controller" already exists
|
||||
serviceaccounts "node-controller" already exists
|
||||
serviceaccounts "persistent-volume-binder" already exists
|
||||
serviceaccounts "pod-garbage-collector" already exists
|
||||
serviceaccounts "replicaset-controller" already exists
|
||||
serviceaccounts "replication-controller" already exists
|
||||
serviceaccounts "resourcequota-controller" already exists
|
||||
serviceaccounts "service-account-controller" already exists
|
||||
serviceaccounts "service-controller" already exists
|
||||
serviceaccounts "statefulset-controller" already exists
|
||||
serviceaccounts "ttl-controller" already exists
|
||||
default: serviceaccounts "default" already exists
|
||||
|
||||
Errors:
|
||||
Velero: <none>
|
||||
Cluster: <none>
|
||||
Namespaces: <none>
|
||||
```
|
||||
|
||||
## Structure
|
||||
|
||||
Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues, for example, the
|
||||
restore looks "normal" and all resources referenced in the backup exist in some form, although some
|
||||
of them may have been pre-existing.
|
||||
|
||||
Both errors and warnings are structured in the same way:
|
||||
|
||||
* `Velero`: A list of system-related issues encountered by the Velero server. For example, Velero couldn't read a directory.
|
||||
|
||||
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.
|
||||
|
||||
* `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.
|
|
@ -0,0 +1,56 @@
|
|||
---
|
||||
title: "Development "
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Update generated files
|
||||
|
||||
Run `make update` to regenerate files if you make the following changes:
|
||||
|
||||
* Add/edit/remove command line flags and/or their help text
|
||||
* Add/edit/remove commands or subcommands
|
||||
* Add new API types
|
||||
* Add/edit/remove plugin protobuf message or service definitions
|
||||
|
||||
The following files are automatically generated from the source code:
|
||||
|
||||
* The clientset
|
||||
* Listers
|
||||
* Shared informers
|
||||
* Documentation
|
||||
* Protobuf/gRPC types
|
||||
|
||||
You can run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date.
|
||||
|
||||
## Linting
|
||||
|
||||
You can run `make lint` which executes golangci-lint inside the build image, or `make local-lint` which executes outside of the build image.
|
||||
Both `make lint` and `make local-lint` will only run the linter against changes.
|
||||
|
||||
Use `lint-all` to run the linter against the entire code base.
|
||||
|
||||
The default linters are defined in the `Makefile` via the `LINTERS` variable.
|
||||
|
||||
You can also override the default list of linters by running the command
|
||||
|
||||
`$ make lint LINTERS=gosec`
|
||||
|
||||
## Test
|
||||
|
||||
To run unit tests, use `make test`.
|
||||
|
||||
## Vendor dependencies
|
||||
|
||||
If you need to add or update the vendored dependencies, see [Vendoring dependencies][11].
|
||||
|
||||
## Using the main branch
|
||||
|
||||
If you are developing or using the main branch, note that you may need to update the Velero CRDs to get new changes as other development work is completed.
|
||||
|
||||
```bash
|
||||
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
|
||||
```
|
||||
|
||||
**NOTE:** You may need to change the default CRD API version (v1beta1 _or_ v1) if the Velero CLI can't discover the Kubernetes preferred CRD API version. For Kubernetes versions < 1.16 the preferred CRD API version is v1beta1; for Kubernetes versions >= 1.16 it is v1.
|
||||
|
||||
[11]: vendoring-dependencies.md
|
|
@ -0,0 +1,44 @@
|
|||
---
|
||||
title: "Disaster recovery"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
*Using Schedules and Read-Only Backup Storage Locations*
|
||||
|
||||
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Velero looks like the following:
|
||||
|
||||
1. After you first run the Velero server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
|
||||
|
||||
```
|
||||
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
|
||||
```
|
||||
|
||||
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`. The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See [how velero works][1] for more information about backup expiry.
|
||||
|
||||
1. A disaster happens and you need to recreate your resources.
|
||||
|
||||
1. Update your backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
|
||||
|
||||
```bash
|
||||
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
|
||||
--namespace velero \
|
||||
--type merge \
|
||||
--patch '{"spec":{"accessMode":"ReadOnly"}}'
|
||||
```
|
||||
|
||||
1. Create a restore with your most recent Velero Backup:
|
||||
|
||||
```
|
||||
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
|
||||
```
|
||||
|
||||
1. When ready, revert your backup storage location to read-write mode:
|
||||
|
||||
```bash
|
||||
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
|
||||
--namespace velero \
|
||||
--type merge \
|
||||
--patch '{"spec":{"accessMode":"ReadWrite"}}'
|
||||
```
|
||||
|
||||
[1]: how-velero-works.md#set-a-backup-to-expire
|
|
@ -0,0 +1,115 @@
|
|||
---
|
||||
title: "Enable API Group Versions Feature"
|
||||
layout: docs
|
||||
---
|
||||
|
||||
## Background
|
||||
|
||||
Velero serves to both restore and migrate Kubernetes applications. Typically, backup and restore does not involve upgrading Kubernetes API group versions. However, when migrating from a source cluster to a destination cluster, it is not unusual to see the API group versions differing between clusters.
|
||||
|
||||
**NOTE:** Kubernetes applications are made up of various resources. Common resources are pods, jobs, and deployments. Custom resources are created via custom resource definitions (CRDs). Every resource, whether custom or not, is part of a group, and each group has a version called the API group version.
|
||||
|
||||
Kubernetes by default allows changing API group versions between clusters as long as the upgrade is a single version, for example, v1 -> v2beta1. Jumping multiple versions, for example, v1 -> v3, is not supported out of the box. This is where the Velero Enable API Group Version feature can help you during an upgrade.
|
||||
|
||||
Currently, the Enable API Group Version feature is in beta and can be enabled by installing Velero with a [feature flag](customize-installation.md/#enable-server-side-features), `--features=EnableAPIGroupVersions`.
|
||||
|
||||
For the most up-to-date information on Kubernetes API version compatibility, you should always review the [Kubernetes release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) for the source and destination cluster versions before starting an upgrade, migration, or restore. If there is a difference between the Kubernetes API versions, use the Enable API Group Versions feature to help mitigate compatibility issues.
|
||||
|
||||
## How the Enable API Group Versions Feature Works
|
||||
|
||||
When the Enable API Group Versions feature is enabled on the source cluster, Velero will not only back up Kubernetes preferred API group versions, but it will also back up all supported versions on the cluster. As an example, consider the resource `horizontalpodautoscalers`, which falls under the `autoscaling` group. Without the feature flag enabled, only the preferred API group version for autoscaling, `v1`, will be backed up. With the feature enabled, the remaining supported versions, `v2beta1` and `v2beta2`, will also be backed up. Once the versions are stored in the backup tarball file, they will be available to be restored on the destination cluster.
When the Enable API Group Versions feature is enabled on the destination cluster, Velero restore will choose the version to restore based on an API group version priority order.

The version priorities, from highest to lowest, are:

- Priority 1: destination cluster preferred version
- Priority 2: source cluster preferred version
- Priority 3: non-preferred common supported version with the highest [Kubernetes version priority](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority)

The highest priority (Priority 1) is the destination cluster's preferred API group version. If the destination preferred version is found in the backup tarball, it will be the API group version chosen for restoration for that resource. However, if the destination preferred version is not found in the backup tarball, the next version in the list will be selected: the source cluster preferred version (Priority 2).

If the source cluster preferred version is supported by the destination cluster, it will be chosen as the API group version to restore. However, if the source preferred version is not supported by the destination cluster, then the next version in the list will be considered: a non-preferred common supported version (Priority 3).

If there is more than one non-preferred common supported version, which one will be chosen? The answer requires understanding the [Kubernetes version priority order](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority). Kubernetes prioritizes group versions by making the latest, most stable version the highest priority. The highest priority version is the Kubernetes preferred version. Here is a sorted version list example from the Kubernetes.io documentation:

- v10
- v2
- v1
- v11beta2
- v10beta3
- v3beta1
- v12alpha1
- v11alpha2
- foo1
- foo10

Of the non-preferred common versions, the version with the highest Kubernetes version priority will be chosen. See the example for Priority 3 below.
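To see which versions a cluster serves for a group, and which one it prefers, you can query the discovery endpoint directly. A minimal sketch using the `autoscaling` group (assumes `jq` is installed; output depends on the cluster's Kubernetes version):

```bash
# Show the preferred version and all served versions for the autoscaling group.
kubectl get --raw /apis/autoscaling | jq '.preferredVersion.version, [.versions[].version]'
```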
To better understand which API group version will be chosen, here are some concrete examples. The examples use the term "target cluster", which is synonymous with "destination cluster".

![Priority 1 Case A example](/docs/main/img/gv_priority1-caseA.png)

![Priority 1 Case B example](/docs/main/img/gv_priority1-caseB.png)

![Priority 2 Case C example](/docs/main/img/gv_priority2-caseC.png)

![Priority 3 Case D example](/docs/main/img/gv_priority3-caseD.png)

## Procedure for Using the Enable API Group Versions Feature

1. [Install Velero](basic-install.md) on the source cluster with the [feature flag enabled](customize-installation.md/#enable-server-side-features). The flag is `--features=EnableAPIGroupVersions`. For the Enable API Group Versions feature to work, the feature flag needs to be used for Velero installations on both the source and destination clusters.
2. Back up and restore following the [migration case instructions](migration-case.md). Note that "Cluster 1" in the instructions refers to the source cluster, and "Cluster 2" refers to the destination cluster. A minimal command sketch follows this list.
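As a sketch of that flow (the backup name `migration-backup` and the namespace filter below are illustrative placeholders; the linked migration instructions are authoritative):

```bash
# On the source cluster (Cluster 1): create a backup of the workload.
velero backup create migration-backup --include-namespaces music-store

# On the destination cluster (Cluster 2): restore from the synced backup.
velero restore create --from-backup migration-backup
```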
## Advanced Procedure for Customizing the Version Prioritization

Optionally, users can create a config map to override the default API group prioritization for some or all of the resources being migrated. For each resource that is specified by the user, Velero will search for the version in both the backup tarball and the destination cluster. If there is a match, the user-specified API group version will be restored. If the backup tarball and the destination cluster do not have or support any of the user-specified versions, then the default version prioritization will be used.

Here are the steps for creating a config map that allows users to override the default version prioritization. These steps must happen on the destination cluster before a Velero restore is initiated.

1. Create a file called `restoreResourcesVersionPriority`. The file name will become a key in the `data` field of the config map.
- In the file, write a line for each resource group you'd like to override. Make sure each line follows the format `<resource>.<group>=<highest user priority version>,<next highest>`.
- Note that the resource group and versions are separated by a single equals (=) sign. Each version is listed in order of the user's priority, separated by commas.
- Here is an example of the contents of a config map file:

```cm
rockbands.music.example.io=v2beta1,v2beta2
orchestras.music.example.io=v2,v3alpha1
subscriptions.operators.coreos.com=v2,v1
```
2. Apply the config map with:

```bash
kubectl create configmap enableapigroupversions --from-file=<absolute path>/restoreResourcesVersionPriority -n velero
```

3. See the config map with:

```bash
kubectl describe configmap enableapigroupversions -n velero
```

The config map should look something like this:

```bash
Name:         enableapigroupversions
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Data
====
restoreResourcesVersionPriority:
----
rockbands.music.example.io=v2beta1,v2beta2
orchestras.music.example.io=v2,v3alpha1
subscriptions.operators.coreos.com=v2,v1

Events:  <none>
```
## Troubleshooting

1. Refer to the [troubleshooting section](troubleshooting.md) of the docs, as the techniques there generally apply here as well.
2. The [debug logs](troubleshooting.md/#getting-velero-debug-logs) will contain information on which version was chosen to restore (see the example after this list).
3. If no API group version could be found that both exists in the backup tarball file and is supported by the destination cluster, then the following error will be recorded (no need to activate debug level logging): `"error restoring rockbands.music.example.io/rockstars/beatles: the server could not find the requested resource"`.
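For example, to check which API group version Velero chose for a particular resource, you can search the restore logs. A minimal sketch, assuming a restore named `my-restore` (the exact log wording may differ between Velero versions):

```bash
# Fetch the restore logs and filter for the resource of interest.
velero restore logs my-restore | grep -i rockbands
```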
|