v1.2.0-beta.1 release (#1995)
* generate v1.2.0-beta.1 docs site
* v1.2.0-beta.1 changelog
* add PR 1994 changelog
* fix image tag

Signed-off-by: Steve Kriss <krisss@vmware.com>
Co-Authored-By: Adnan Abdulhussein <adnan@prydoni.us>
parent
0c1fc8195a
commit
558e4b9075
|
@ -2,6 +2,7 @@
|
|||
* [CHANGELOG-1.1.md][11]
|
||||
|
||||
## Development release:
|
||||
* [v1.2.0-beta.1][12]
|
||||
* [Unreleased Changes][0]
|
||||
|
||||
## Older releases:
|
||||
|
@ -17,6 +18,7 @@
|
|||
* [CHANGELOG-0.3.md][1]
|
||||
|
||||
|
||||
[12]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.2.md
|
||||
[11]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.1.md
|
||||
[10]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.0.md
|
||||
[9]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.11.md
|
||||
|
|
|
@ -0,0 +1,64 @@
|
|||
## v1.2.0-beta.1
|
||||
#### 2019-10-24
|
||||
|
||||
### Download
|
||||
- https://github.com/vmware-tanzu/velero/releases/tag/v1.2.0-beta.1
|
||||
|
||||
### Container Image
|
||||
`velero/velero:v1.2.0-beta.1`
|
||||
|
||||
### Documentation
|
||||
https://velero.io/docs/v1.2.0-beta.1/
|
||||
|
||||
### Upgrading
|
||||
|
||||
If you're upgrading from a previous version of Velero, there are several changes you'll need to be aware of:
|
||||
|
||||
- Container images are now published to Docker Hub. To upgrade your server, use the new image `velero/velero:v1.2.0-beta.1`.
|
||||
- The AWS, Microsoft Azure, and GCP provider plugins that were previously part of the Velero binary have been extracted to their own standalone repositories/plugin images. If you are using one of these three providers, you will need to explicitly add the appropriate plugin to your Velero install (a combined upgrade example follows this list):
|
||||
- [AWS] `velero plugin add velero/velero-plugin-for-aws:v1.0.0-beta.1`
|
||||
- [Azure] `velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0-beta.1`
|
||||
- [GCP] `velero plugin add velero/velero-plugin-for-gcp:v1.0.0-beta.1`
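
A combined in-place upgrade might look like the following sketch. It assumes Velero runs in the `velero` namespace with a server container named `velero`, and uses AWS as the example provider:

```bash
# Point the existing server Deployment at the new image on Docker Hub.
kubectl -n velero set image deployment/velero velero=velero/velero:v1.2.0-beta.1

# Add the now-standalone provider plugin (AWS shown; pick the one you use).
velero plugin add velero/velero-plugin-for-aws:v1.0.0-beta.1
```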
|
||||
|
||||
### Highlights
|
||||
|
||||
- The AWS, Microsoft Azure, and GCP provider plugins that were previously part of the Velero binary have been extracted to their own standalone repositories/plugin images. They now function like any other provider plugin.
|
||||
- Container images are now published to Docker Hub: `velero/velero:v1.2.0-beta.1`.
|
||||
- Several improvements have been made to the restic integration:
|
||||
- Backup and restore progress is now updated on the `PodVolumeBackup` and `PodVolumeRestore` custom resources and viewable via `velero backup/restore describe` while operations are in progress.
|
||||
- Read-write-many PVCs are now only backed up once.
|
||||
- Backups of PVCs remain incremental across pod reschedules.
|
||||
- A structural schema has been added to the Velero CRDs that are created by `velero install` to enable validation of API fields.
|
||||
- During restores that use the `--namespace-mappings` flag to clone a namespace within a cluster, PVs will now be cloned as needed.
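
As an illustration of the last point, a PV-cloning namespace copy can be requested like this (a sketch; the backup and namespace names are placeholders):

```bash
velero restore create nginx-clone \
    --from-backup nginx-backup \
    --namespace-mappings nginx-example:nginx-example-clone
```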
|
||||
|
||||
### All Changes
|
||||
* Allow backup storage locations to specify backup sync period or toggle off sync (#1936, @betta1)
|
||||
* Remove cloud provider code (#1985, @carlisia)
|
||||
* Restore action for cluster/namespace role bindings (#1974, @alexander)
|
||||
* Add `--no-default-backup-location` flag to `velero install` (#1931, @Frank51)
|
||||
* If includeClusterResources is nil/auto, pull in necessary CRDs in backupResource (#1831, @sseago)
|
||||
* Azure: add support for Azure China/German clouds (#1938, @andyzhangx)
|
||||
* Add a new required `--plugins` flag for `velero install` command. `--plugins` takes a list of container images to add as initcontainers. (#1930, @nrb)
|
||||
* restic: only backup read-write-many PVCs at most once, even if they're annotated for backup from multiple pods. (#1896, @skriss)
|
||||
* Azure: add support for cross-subscription backups (#1895, @boxcee)
|
||||
* adds `insecureSkipTLSVerify` server config for AWS storage and `--insecure-skip-tls-verify` flag on client for self-signed certs (#1793, @s12chung)
|
||||
* Add check to update resource field during backupItem (#1904, @spiffcs)
|
||||
* Add `LD_LIBRARY_PATH` (=/plugins) to the env variables of velero deployment. (#1893, @lintongj)
|
||||
* backup sync controller: stop using `metadata/revision` file, do a full diff of bucket contents vs. cluster contents each sync interval (#1892, @skriss)
|
||||
* bug fix: during restore, check item's original namespace, not the remapped one, for inclusion/exclusion (#1909, @skriss)
|
||||
* adds structural schema to Velero CRDs created on Velero install, enabling validation of Velero API fields (#1898, @prydonius)
|
||||
* GCP: add support for specifying a Cloud KMS key name to use for encrypting backups in a storage location. (#1879, @skriss)
|
||||
* AWS: add support for SSE-S3 AES256 encryption via `serverSideEncryption` config field in BackupStorageLocation (#1869, @skriss)
|
||||
* change default `restic prune` interval to 7 days, add `velero server/install` flags for specifying an alternate default value. (#1864, @skriss)
|
||||
* velero install: if `--use-restic` and `--wait` are specified, wait up to a minute for restic daemonset to be ready (#1859, @skriss)
|
||||
* report restore progress in PodVolumeRestores and expose progress in the velero restore describe --details command (#1854, @prydonius)
|
||||
* Jekyll Site updates - modifies documentation to use a wider layout; adds better markdown table formatting (#1848, @ccbayer)
|
||||
* fix excluding additional items with the velero.io/exclude-from-backup=true label (#1843, @prydonius)
|
||||
* report backup progress in PodVolumeBackups and expose progress in the velero backup describe --details command. Also upgrades restic to v0.9.5 (#1821, @prydonius)
|
||||
* Add `--features` argument to all velero commands to provide feature flags that can control enablement of pre-release features. (#1798, @nrb)
|
||||
* when backing up PVCs with restic, specify `--parent` flag to prevent full volume rescans after pod reschedules (#1807, @skriss)
|
||||
* remove 'restic check' calls from before/after 'restic prune' since they're redundant (#1794, @skriss)
|
||||
* fix error formatting due interpreting % as printf formatted strings (#1781, @s12chung)
|
||||
* when using `velero restore create --namespace-mappings ...` to create a second copy of a namespace in a cluster, create copies of the PVs used (#1779, @skriss)
|
||||
* adds --from-schedule flag to the `velero create backup` command to create a Backup from an existing Schedule (#1734, @prydonius)
|
||||
* add `--allow-partially-failed` flag to `velero restore create` for use with `--from-schedule` to allow partially-failed backups to be restored (#1994, @skriss)
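
For example, the last entry above can be exercised as follows (a sketch; `nginx-daily` is a placeholder schedule name):

```bash
velero restore create --from-schedule nginx-daily --allow-partially-failed
```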
|
|
@ -55,6 +55,12 @@ defaults:
|
|||
version: master
|
||||
gh: https://github.com/vmware-tanzu/velero/tree/master
|
||||
layout: "docs"
|
||||
- scope:
|
||||
path: docs/v1.2.0-beta.1
|
||||
values:
|
||||
version: v1.2.0-beta.1
|
||||
gh: https://github.com/vmware-tanzu/velero/tree/v1.2.0-beta.1
|
||||
layout: "docs"
|
||||
- scope:
|
||||
path: docs/v1.1.0
|
||||
values:
|
||||
|
@ -148,6 +154,7 @@ versioning: true
|
|||
latest: v1.1.0
|
||||
versions:
|
||||
- master
|
||||
- v1.2.0-beta.1
|
||||
- v1.1.0
|
||||
- v1.0.0
|
||||
- v0.11.0
|
||||
|
|
|
@ -3,6 +3,7 @@
|
|||
# that the navigation for older versions still work.
|
||||
|
||||
master: master-toc
|
||||
v1.2.0-beta.1: v1-2-0-beta-1-toc
|
||||
v1.1.0: v1-1-0-toc
|
||||
v1.0.0: v1-0-0-toc
|
||||
v0.11.0: v011-toc
|
||||
|
|
|
@ -0,0 +1,75 @@
|
|||
toc:
|
||||
- title: Introduction
|
||||
subfolderitems:
|
||||
- page: About Velero
|
||||
url: /index.html
|
||||
- page: How Velero works
|
||||
url: /how-velero-works
|
||||
- page: About locations
|
||||
url: /locations
|
||||
- title: Install
|
||||
subfolderitems:
|
||||
- page: Overview
|
||||
url: /install-overview
|
||||
- page: Upgrade to 1.1
|
||||
url: /upgrade-to-1.1
|
||||
- page: Supported providers
|
||||
url: /supported-providers
|
||||
- page: Evaluation install
|
||||
url: /contributions/minio
|
||||
- page: Restic integration
|
||||
url: /restic
|
||||
- page: Examples
|
||||
url: /examples
|
||||
- page: Uninstalling
|
||||
url: /uninstalling
|
||||
- title: Use
|
||||
subfolderitems:
|
||||
- page: Disaster recovery
|
||||
url: /disaster-case
|
||||
- page: Cluster migration
|
||||
url: /migration-case
|
||||
- page: Backup reference
|
||||
url: /backup-reference
|
||||
- page: Restore reference
|
||||
url: /restore-reference
|
||||
- page: Run in any namespace
|
||||
url: /namespace
|
||||
- page: Extend with hooks
|
||||
url: /hooks
|
||||
- title: Plugins
|
||||
subfolderitems:
|
||||
- page: Overview
|
||||
url: /overview-plugins
|
||||
- page: Custom plugins
|
||||
url: /custom-plugins
|
||||
- title: Troubleshoot
|
||||
subfolderitems:
|
||||
- page: Troubleshooting
|
||||
url: /troubleshooting
|
||||
- page: Troubleshoot an install or setup
|
||||
url: /debugging-install
|
||||
- page: Troubleshoot a restore
|
||||
url: /debugging-restores
|
||||
- page: Troubleshoot Restic
|
||||
url: /restic#troubleshooting
|
||||
- title: Contribute
|
||||
subfolderitems:
|
||||
- page: Start Contributing
|
||||
url: /start-contributing
|
||||
- page: Development
|
||||
url: /development
|
||||
- page: Build from source
|
||||
url: /build-from-source
|
||||
- page: Run locally
|
||||
url: /run-locally
|
||||
- title: More information
|
||||
subfolderitems:
|
||||
- page: Backup file format
|
||||
url: /output-file-format
|
||||
- page: API types
|
||||
url: /api-types
|
||||
- page: FAQ
|
||||
url: /faq
|
||||
- page: ZenHub
|
||||
url: /zenhub
|
|
@ -0,0 +1,54 @@
|
|||
![100]
|
||||
|
||||
[![Build Status][1]][2]
|
||||
|
||||
## Overview
|
||||
|
||||
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
|
||||
|
||||
* Take backups of your cluster and restore in case of loss.
|
||||
* Migrate cluster resources to other clusters.
|
||||
* Replicate your production cluster to development and testing clusters.
|
||||
|
||||
Velero consists of:
|
||||
|
||||
* A server that runs on your cluster
|
||||
* A command-line client that runs locally
|
||||
|
||||
## Documentation
|
||||
|
||||
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Velero, and more.
|
||||
|
||||
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
|
||||
|
||||
## Contributing
|
||||
|
||||
If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our [Start contributing](https://velero.io/docs/v1.2.0-beta.1/start-contributing/) documentation for guidance on how to set up Velero for development.
|
||||
|
||||
## Changelog
|
||||
|
||||
See [the list of releases][6] to find out about feature changes.
|
||||
|
||||
[1]: https://travis-ci.org/vmware-tanzu/velero.svg?branch=master
|
||||
[2]: https://travis-ci.org/vmware-tanzu/velero
|
||||
|
||||
[4]: https://github.com/vmware-tanzu/velero/issues
|
||||
[6]: https://github.com/vmware-tanzu/velero/releases
|
||||
|
||||
[9]: https://kubernetes.io/docs/setup/
|
||||
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
|
||||
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
|
||||
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
|
||||
[14]: https://github.com/kubernetes/kubernetes
|
||||
[24]: https://groups.google.com/forum/#!forum/projectvelero
|
||||
[25]: https://kubernetes.slack.com/messages/velero
|
||||
|
||||
[28]: install-overview.md
|
||||
[29]: https://velero.io/docs/v1.2.0-beta.1/
|
||||
[30]: troubleshooting.md
|
||||
|
||||
[100]: img/velero.png
|
|
@ -0,0 +1,16 @@
|
|||
# Table of Contents
|
||||
|
||||
## API types
|
||||
|
||||
Here we list the API types that have some functionality you can only configure via JSON/YAML rather than the `velero` CLI (for example, hooks):
|
||||
|
||||
* [Backup][1]
|
||||
* [Schedule][2]
|
||||
* [BackupStorageLocation][3]
|
||||
* [VolumeSnapshotLocation][4]
|
||||
|
||||
[1]: backup.md
|
||||
[2]: schedule.md
|
||||
[3]: backupstoragelocation.md
|
||||
[4]: volumesnapshotlocation.md
|
|
@ -0,0 +1,143 @@
|
|||
# Backup API Type
|
||||
|
||||
## Use
|
||||
|
||||
The `Backup` API type is used as a request for the Velero server to perform a backup. Once created, the
|
||||
Velero Server immediately starts the backup process.
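
For example, a minimal backup request can be submitted directly with `kubectl` (a sketch; the backup name and namespace filter are placeholders, and the full field reference follows below):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: velero
spec:
  includedNamespaces:
  - my-app            # placeholder: the namespace to back up
EOF
```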
|
||||
|
||||
## API GroupVersion
|
||||
|
||||
Backup belongs to the API group version `velero.io/v1`.
|
||||
|
||||
## Definition
|
||||
|
||||
Here is a sample `Backup` object with each of the fields documented:
|
||||
|
||||
```yaml
|
||||
# Standard Kubernetes API Version declaration. Required.
|
||||
apiVersion: velero.io/v1
|
||||
# Standard Kubernetes Kind declaration. Required.
|
||||
kind: Backup
|
||||
# Standard Kubernetes metadata. Required.
|
||||
metadata:
|
||||
# Backup name. May be any valid Kubernetes object name. Required.
|
||||
name: a
|
||||
# Backup namespace. Must be the namespace of the Velero server. Required.
|
||||
namespace: velero
|
||||
# Parameters about the backup. Required.
|
||||
spec:
|
||||
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
|
||||
# Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to exclude from the backup. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to include in the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
|
||||
# or fully-qualified. If unspecified, all resources are included. Optional.
|
||||
includedResources:
|
||||
- '*'
|
||||
# Array of resources to exclude from the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
|
||||
# or fully-qualified. Optional.
|
||||
excludedResources:
|
||||
- storageclasses.storage.k8s.io
|
||||
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
|
||||
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
|
||||
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
|
||||
# all cluster-scoped resources are included if and only if all namespaces are included and there are
|
||||
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
|
||||
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
|
||||
# up are those associated with namespace-scoped resources included in the backup. For example, if a
|
||||
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
|
||||
# cluster-scoped) would also be backed up.
|
||||
includeClusterResources: null
|
||||
# Individual objects must match this label selector to be included in the backup. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
|
||||
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
|
||||
# a persistent volume provider is configured for Velero.
|
||||
snapshotVolumes: null
|
||||
# Where to store the tarball and logs.
|
||||
storageLocation: aws-primary
|
||||
# The list of locations in which to store volume snapshots created for this backup.
|
||||
volumeSnapshotLocations:
|
||||
- aws-primary
|
||||
- gcp-primary
|
||||
# The amount of time before this backup is eligible for garbage collection. If not specified,
|
||||
# a default value of 30 days will be used. The default can be configured on the velero server
|
||||
# by passing the flag --default-backup-ttl.
|
||||
ttl: 24h0m0s
|
||||
# Actions to perform at different times during a backup. The only hook currently supported is
|
||||
# executing a command in a container in a pod using the pod exec API. Optional.
|
||||
hooks:
|
||||
# Array of hooks that are applicable to specific resources. Optional.
|
||||
resources:
|
||||
-
|
||||
# Name of the hook. Will be displayed in backup log.
|
||||
name: my-hook
|
||||
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
|
||||
# namespaces. Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to which this hook does not apply. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to which this hook applies. The only resource supported at this time is
|
||||
# pods.
|
||||
includedResources:
|
||||
- pods
|
||||
# Array of resources to which this hook does not apply. Optional.
|
||||
excludedResources: []
|
||||
# This hook only applies to objects matching this label selector. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
|
||||
pre:
|
||||
-
|
||||
# The type of hook. This must be "exec".
|
||||
exec:
|
||||
# The name of the container where the command will be executed. If unspecified, the
|
||||
# first container in the pod will be used. Optional.
|
||||
container: my-container
|
||||
# The command to execute, specified as an array. Required.
|
||||
command:
|
||||
- /bin/uname
|
||||
- -a
|
||||
# How to handle an error executing the command. Valid values are Fail and Continue.
|
||||
# Defaults to Fail. Optional.
|
||||
onError: Fail
|
||||
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
|
||||
timeout: 10s
|
||||
# An array of hooks to run after all custom actions and additional items have been
|
||||
# processed. Currently only "exec" hooks are supported.
|
||||
post:
|
||||
# Same content as pre above.
|
||||
# Status about the Backup. Users should not set any data here.
|
||||
status:
|
||||
# The version of this Backup. The only version currently supported is 1.
|
||||
version: 1
|
||||
# The date and time when the Backup is eligible for garbage collection.
|
||||
expiration: null
|
||||
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors: null
|
||||
# Date/time when the backup started being processed.
|
||||
startTimestamp: 2019-04-29T15:58:43Z
|
||||
# Date/time when the backup finished being processed.
|
||||
completionTimestamp: 2019-04-29T15:58:56Z
|
||||
# Number of volume snapshots that Velero tried to create for this backup.
|
||||
volumeSnapshotsAttempted: 2
|
||||
# Number of volume snapshots that Velero successfully created for this backup.
|
||||
volumeSnapshotsCompleted: 1
|
||||
# Number of warnings that were logged by the backup.
|
||||
warnings: 2
|
||||
# Number of errors that were logged by the backup.
|
||||
errors: 0
|
||||
|
||||
```
|
|
@ -0,0 +1,82 @@
|
|||
# Velero Backup Storage Locations
|
||||
|
||||
## Backup Storage Location
|
||||
|
||||
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
|
||||
|
||||
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location are saved to this `BackupStorageLocation`.
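
A quick sketch of listing locations and directing a backup at a non-default one (`my-backup` and `secondary` are placeholders):

```bash
# List the configured backup storage locations.
velero backup-location get

# Send a backup to a specific (non-default) location.
velero backup create my-backup --storage-location secondary
```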
|
||||
|
||||
A sample YAML `BackupStorageLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: BackupStorageLocation
|
||||
metadata:
|
||||
name: default
|
||||
namespace: velero
|
||||
spec:
|
||||
backupSyncPeriod: 2m0s
|
||||
provider: aws
|
||||
objectStorage:
|
||||
bucket: myBucket
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the backups. |
|
||||
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
|
||||
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
|
||||
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
|
||||
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for backup storage. |
|
||||
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
|
||||
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
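
For example, sync can be disabled for an existing location by setting `backupSyncPeriod` to `0s` (a sketch assuming the location is named `default` in the `velero` namespace):

```bash
kubectl -n velero patch backupstoragelocation default \
    --type merge -p '{"spec":{"backupSyncPeriod":"0s"}}'
```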
|
||||
|
||||
#### AWS
|
||||
|
||||
**(Or other S3-compatible storage)**
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Queried from the AWS S3 API if not provided. |
|
||||
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
|
||||
| `s3Url` | string | Required field for non-AWS-hosted storage | *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Velero can already generate it from `region` and `bucket`. This field is primarily for local storage services like Minio.|
|
||||
| `publicUrl` | string | Empty | *Example*: https://minio.mycluster.com<br><br>If specified, use this instead of `s3Url` when generating download URLs (e.g., for logs). This field is primarily for local storage services like Minio.|
|
||||
| `serverSideEncryption` | string | Empty | The name of the server-side encryption algorithm to use for uploading objects, e.g. `AES256`. If using SSE-KMS and `kmsKeyId` is specified, this field will automatically be set to `aws:kms` so does not need to be specified by the user. |
|
||||
| `kmsKeyId` | string | Empty | *Example*: "502b409c-4da1-419f-a16e-eif453b3i49f" or "alias/`<KMS-Key-Alias-Name>`"<br><br>Specify an [AWS KMS key][10] id or alias to enable encryption of the backups stored in S3. Only works with AWS S3 and may require explicitly granting key usage rights.|
|
||||
| `signatureVersion` | string | `"4"` | Version of the signature algorithm used to create signed URLs that are used by the `velero` CLI to download backups or fetch logs. Possible versions are "1" and "4". Usually the default version 4 is correct, but some S3-compatible providers like Quobyte only support version 1.|
|
||||
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
|
||||
| `insecureSkipTLSVerify` | bool | `false` | Set this to `true` if you do not want to verify the TLS certificate when connecting to the object store, such as when using self-signed certificates with Minio. This is susceptible to man-in-the-middle attacks and is not recommended for production. |
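
These keys are typically passed at install time through `--backup-location-config`; the following sketch combines a few of them (the bucket, region, and plugin image tag are placeholders/assumptions):

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0-beta.1 \
    --bucket my-bucket \
    --secret-file ./credentials-velero \
    --backup-location-config region=us-west-2,serverSideEncryption=AES256,profile=default
```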
|
||||
|
||||
#### Azure
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `resourceGroup` | string | Required Field | Name of the resource group containing the storage account for this backup storage location. |
|
||||
| `storageAccount` | string | Required Field | Name of the storage account for this backup storage location. |
|
||||
| `subscriptionId` | string | Optional Field | ID of the subscription for this backup storage location. |
|
||||
|
||||
#### GCP
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `kmsKeyName` | string | Empty | Name of the Cloud KMS key to use to encrypt backups stored in this location, in the form `projects/P/locations/L/keyRings/R/cryptoKeys/K`. See [customer-managed Cloud KMS keys](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys) for details. |
|
||||
| `serviceAccount` | string | Empty | Name of the GCP service account to use for this backup storage location. Specify the service account here if you want to use workload identity instead of providing the key file.
|
||||
|
||||
[0]: #aws
|
||||
[1]: #gcp
|
||||
[2]: #azure
|
||||
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
|
||||
[10]: http://docs.aws.amazon.com/kms/latest/developerguide/overview.html
|
|
@ -0,0 +1,130 @@
|
|||
# Schedule API Type
|
||||
|
||||
## Use
|
||||
|
||||
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup on a given cron schedule. Once created, the
Velero server starts the backup process and then waits for the next valid point of the given cron expression, executing the backup
process on a repeating basis.
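
Most schedules are created through the CLI rather than by writing the YAML below by hand; for example (a sketch with a placeholder label selector):

```bash
# Daily backup at 07:00 of everything labeled app=nginx.
velero schedule create nginx-daily --schedule="0 7 * * *" --selector app=nginx
```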
|
||||
|
||||
## API GroupVersion
|
||||
|
||||
Schedule belongs to the API group version `velero.io/v1`.
|
||||
|
||||
## Definition
|
||||
|
||||
Here is a sample `Schedule` object with each of the fields documented:
|
||||
|
||||
```yaml
|
||||
# Standard Kubernetes API Version declaration. Required.
|
||||
apiVersion: velero.io/v1
|
||||
# Standard Kubernetes Kind declaration. Required.
|
||||
kind: Schedule
|
||||
# Standard Kubernetes metadata. Required.
|
||||
metadata:
|
||||
# Schedule name. May be any valid Kubernetes object name. Required.
|
||||
name: a
|
||||
# Schedule namespace. Must be the namespace of the Velero server. Required.
|
||||
namespace: velero
|
||||
# Parameters about the scheduled backup. Required.
|
||||
spec:
|
||||
# Schedule is a Cron expression defining when to run the Backup
|
||||
schedule: 0 7 * * *
|
||||
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
|
||||
# Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to exclude from the scheduled backup. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to include in the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
|
||||
# or fully-qualified. If unspecified, all resources are included. Optional.
|
||||
includedResources:
|
||||
- '*'
|
||||
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
|
||||
# or fully-qualified. Optional.
|
||||
excludedResources:
|
||||
- storageclasses.storage.k8s.io
|
||||
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
|
||||
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
|
||||
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
|
||||
# all cluster-scoped resources are included if and only if all namespaces are included and there are
|
||||
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
|
||||
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
|
||||
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
|
||||
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
|
||||
# cluster-scoped) would also be backed up.
|
||||
includeClusterResources: null
|
||||
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
|
||||
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
|
||||
# a persistent volume provider is configured for Velero.
|
||||
snapshotVolumes: null
|
||||
# Where to store the tarball and logs.
|
||||
storageLocation: aws-primary
|
||||
# The list of locations in which to store volume snapshots created for backups under this schedule.
|
||||
volumeSnapshotLocations:
|
||||
- aws-primary
|
||||
- gcp-primary
|
||||
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
|
||||
# a default value of 30 days will be used. The default can be configured on the velero server
|
||||
# by passing the flag --default-backup-ttl.
|
||||
ttl: 24h0m0s
|
||||
# Actions to perform at different times during a backup. The only hook currently supported is
|
||||
# executing a command in a container in a pod using the pod exec API. Optional.
|
||||
hooks:
|
||||
# Array of hooks that are applicable to specific resources. Optional.
|
||||
resources:
|
||||
-
|
||||
# Name of the hook. Will be displayed in backup log.
|
||||
name: my-hook
|
||||
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
|
||||
# namespaces. Optional.
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
# Array of namespaces to which this hook does not apply. Optional.
|
||||
excludedNamespaces:
|
||||
- some-namespace
|
||||
# Array of resources to which this hook applies. The only resource supported at this time is
|
||||
# pods.
|
||||
includedResources:
|
||||
- pods
|
||||
# Array of resources to which this hook does not apply. Optional.
|
||||
excludedResources: []
|
||||
# This hook only applies to objects matching this label selector. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
|
||||
pre:
|
||||
-
|
||||
# The type of hook. This must be "exec".
|
||||
exec:
|
||||
# The name of the container where the command will be executed. If unspecified, the
|
||||
# first container in the pod will be used. Optional.
|
||||
container: my-container
|
||||
# The command to execute, specified as an array. Required.
|
||||
command:
|
||||
- /bin/uname
|
||||
- -a
|
||||
# How to handle an error executing the command. Valid values are Fail and Continue.
|
||||
# Defaults to Fail. Optional.
|
||||
onError: Fail
|
||||
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
|
||||
timeout: 10s
|
||||
# An array of hooks to run after all custom actions and additional items have been
|
||||
# processed. Currently only "exec" hooks are supported.
|
||||
post:
|
||||
# Same content as pre above.
|
||||
status:
|
||||
# The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# Date/time of the last backup for a given schedule
|
||||
lastBackup:
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors:
|
||||
```
|
|
@ -0,0 +1,70 @@
|
|||
# Velero Volume Snapshot Location
|
||||
|
||||
## Volume Snapshot Location
|
||||
|
||||
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
|
||||
|
||||
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` resources per provider, although you can only select one location per provider at backup time.
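
That per-provider selection happens at backup time via the `--volume-snapshot-locations` flag (a sketch; the location names are placeholders):

```bash
velero backup create my-backup \
    --volume-snapshot-locations aws-primary,gcp-primary
```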
|
||||
|
||||
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
|
||||
|
||||
A sample YAML `VolumeSnapshotLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: VolumeSnapshotLocation
|
||||
metadata:
|
||||
name: aws-default
|
||||
namespace: velero
|
||||
spec:
|
||||
provider: aws
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the volume. |
|
||||
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the provider for volume snapshots. |
|
||||
|
||||
#### AWS
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Required. |
|
||||
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
|
||||
|
||||
#### Azure
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `apiTimeout` | metav1.Duration | 2m0s | How long to wait for an Azure API request to complete before timeout. |
|
||||
| `resourceGroup` | string | Optional | The name of the resource group where volume snapshots should be stored, if different from the cluster's resource group. |
|
||||
| `subscriptionId` | string | Optional | The ID of the subscription where volume snapshots should be stored, if different from the cluster's subscription. Requires `resourceGroup` to be set. |
|
||||
|
||||
#### GCP
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `snapshotLocation` | string | Empty | *Example*: "us-central1"<br><br>See [GCP documentation][4] for the full list.<br><br>If not specified the snapshots are stored in the [default location][5]. |
|
||||
| `project` | string | Empty | The project ID where snapshots should be stored, if different than the project that your IAM account is in. Optional. |
|
||||
|
||||
[0]: #aws
|
||||
[1]: #gcp
|
||||
[2]: #azure
|
||||
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
|
||||
[4]: https://cloud.google.com/storage/docs/locations#available_locations
|
||||
[5]: https://cloud.google.com/compute/docs/disks/create-snapshots#default_location
|
|
@ -0,0 +1,9 @@
|
|||
# Backup Reference
|
||||
|
||||
## Exclude Specific Items from Backup
|
||||
|
||||
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
|
||||
|
||||
```bash
|
||||
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
|
||||
```
|
|
@ -0,0 +1,100 @@
|
|||
# Build from source
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Access to a Kubernetes cluster, version 1.7 or later.
|
||||
* A DNS server on the cluster
|
||||
* `kubectl` installed
|
||||
* [Go][5] installed (minimum version 1.8)
|
||||
|
||||
## Get the source
|
||||
|
||||
### Option 1) Get latest (recommended)
|
||||
|
||||
```bash
|
||||
mkdir $HOME/go
|
||||
export GOPATH=$HOME/go
|
||||
go get github.com/vmware-tanzu/velero
|
||||
```
|
||||
|
||||
Here, `$HOME/go` is your Go workspace (`GOPATH`), and `github.com/vmware-tanzu/velero` is Velero's [import path][4].
|
||||
|
||||
For Go development, it is recommended to add the workspace's `bin` directory (`$HOME/go/bin` in this example) to your `$PATH`.
|
||||
|
||||
### Option 2) Release archive
|
||||
|
||||
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
|
||||
|
||||
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
|
||||
|
||||
## Build
|
||||
|
||||
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
|
||||
|
||||
When you build with `make`, the binaries are placed under `_output/bin/$GOOS/$GOARCH`. For example, the binary for darwin is at `_output/bin/darwin/amd64/velero`, and the binary for linux is at `_output/bin/linux/amd64/velero`. `make` also splices in version and git commit information so that `velero version` displays proper output.
|
||||
|
||||
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to override which image gets deployed, use the `--image` flag (see below for instructions on how to build images).
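
A sketch of overriding the deployed image at install time (the repository and tag are placeholders; add whatever other install flags you normally use):

```bash
velero install \
    --image myimagerepo/velero:my-dev-tag \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero
```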
|
||||
|
||||
### Build the binary
|
||||
|
||||
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
|
||||
|
||||
```bash
|
||||
go build ./cmd/velero
|
||||
```
|
||||
|
||||
```bash
|
||||
make local
|
||||
```
|
||||
|
||||
### Cross compiling
|
||||
|
||||
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
|
||||
|
||||
```bash
|
||||
make build
|
||||
```
|
||||
|
||||
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
|
||||
|
||||
For example, to build for the Mac, run `make build-darwin-amd64`.
|
||||
|
||||
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
|
||||
|
||||
* linux-amd64
|
||||
* linux-arm
|
||||
* linux-arm64
|
||||
* darwin-amd64
|
||||
* windows-amd64
|
||||
|
||||
## Making images and updating Velero
|
||||
|
||||
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
|
||||
|
||||
```bash
|
||||
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
|
||||
```
|
||||
|
||||
To build a Velero container image, first set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:master` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `gcr.io/heptio-images`.
|
||||
|
||||
Optionally, set the `$VERSION` environment variable to change the image tag. Then, run:
|
||||
|
||||
```bash
|
||||
make container
|
||||
```
|
||||
|
||||
To push your image to the registry, run:
|
||||
|
||||
```bash
|
||||
make push
|
||||
```
|
||||
|
||||
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod:
|
||||
|
||||
```bash
|
||||
kubectl -n velero delete pods -l deploy=velero
|
||||
```
|
||||
|
||||
[4]: https://blog.golang.org/organizing-go-code
|
||||
[5]: https://golang.org/doc/install
|
||||
[22]: https://github.com/vmware-tanzu/velero/releases
|
|
@ -0,0 +1,97 @@
|
|||
# Use IBM Cloud Object Storage as Velero's storage destination

You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and use IBM Cloud Object Storage as the destination for Velero's backups.
|
||||
|
||||
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
|
||||
|
||||
* Download an official release of Velero
|
||||
* Create your COS instance
|
||||
* Create an S3 bucket
|
||||
* Define a service that can store data in the bucket
|
||||
* Configure and start the Velero server
|
||||
|
||||
## Download Velero
|
||||
|
||||
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
|
||||
|
||||
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
|
||||
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
|
||||
of the Velero repository is under active development and is not guaranteed to be stable!_
|
||||
|
||||
1. Extract the tarball:
|
||||
|
||||
```bash
|
||||
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
|
||||
```
|
||||
|
||||
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
|
||||
|
||||
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
|
||||
|
||||
## Create COS instance
|
||||
If you don’t have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
|
||||
|
||||
## Create an S3 bucket
|
||||
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
|
||||
|
||||
## Define a service that can store data in the bucket.
|
||||
The process of creating service credentials is described in [Service credentials][3].
|
||||
Several comments:
|
||||
|
||||
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
|
||||
|
||||
2. Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
|
||||
|
||||
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
|
||||
|
||||
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id=<ACCESS_KEY_ID>
|
||||
aws_secret_access_key=<SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
where the access key id and secret are the values that we got above.
|
||||
|
||||
## Install and start Velero
|
||||
|
||||
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
|
||||
|
||||
```bash
|
||||
velero install \
|
||||
--provider aws \
|
||||
--bucket <YOUR_BUCKET> \
|
||||
--secret-file ./credentials-velero \
|
||||
--use-volume-snapshots=false \
|
||||
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
|
||||
```
|
||||
|
||||
Velero does not currently have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
|
||||
|
||||
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
|
||||
|
||||
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
|
||||
|
||||
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
|
||||
|
||||
```bash
|
||||
kubectl -n velero delete volumesnapshotlocation.velero.io default
|
||||
```
|
||||
|
||||
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
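
For example, to review or customize the generated resources before applying them (a sketch using the same placeholders as above):

```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT> \
    --dry-run -o yaml > velero-install.yaml

# Inspect or edit velero-install.yaml, then apply it:
kubectl apply -f velero-install.yaml
```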
|
||||
|
||||
## Installing the nginx example (optional)
|
||||
|
||||
If you run the nginx example, in file `examples/nginx-app/with-pv.yaml`:
|
||||
|
||||
Uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace with your `StorageClass` name.
|
||||
|
||||
[0]: namespace.md
|
||||
[1]: https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#creating-a-new-resource-instance
|
||||
[2]: https://console.bluemix.net/docs/services/cloud-object-storage/getting-started.html#create-buckets
|
||||
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
|
||||
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
|
||||
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
|
||||
[6]: api-types/backupstoragelocation.md#aws
|
||||
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
|
||||
[15]: install-overview.md#velero-resource-requirements
|
|
@ -0,0 +1,266 @@
|
|||
## Quick start evaluation install with Minio
|
||||
|
||||
The following example sets up the Velero server and client, then backs up and restores a sample application.
|
||||
|
||||
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
|
||||
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
|
||||
|
||||
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
|
||||
|
||||
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
|
||||
|
||||
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
|
||||
|
||||
### Prerequisites
|
||||
|
||||
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
|
||||
* A DNS server on the cluster
|
||||
* `kubectl` installed
|
||||
|
||||
### Download Velero
|
||||
|
||||
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
|
||||
|
||||
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
|
||||
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
|
||||
of the Velero repository is under active development and is not guaranteed to be stable!_
|
||||
|
||||
1. Extract the tarball:
|
||||
|
||||
```bash
|
||||
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
|
||||
```
|
||||
|
||||
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
|
||||
|
||||
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
|
||||
|
||||
#### MacOS Installation
|
||||
|
||||
On Mac, you can use [HomeBrew](https://brew.sh) to install the `velero` client:
|
||||
|
||||
```bash
|
||||
brew install velero
|
||||
```
|
||||
|
||||
### Set up server
|
||||
|
||||
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][31] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
|
||||
|
||||
1. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id = minio
|
||||
aws_secret_access_key = minio123
|
||||
```
|
||||
|
||||
1. Start the server and the local storage service. In the Velero directory, run:
|
||||
|
||||
```
|
||||
kubectl apply -f examples/minio/00-minio-deployment.yaml
|
||||
```
|
||||
```
|
||||
velero install \
|
||||
--provider aws \
|
||||
--bucket velero \
|
||||
--secret-file ./credentials-velero \
|
||||
--use-volume-snapshots=false \
|
||||
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
|
||||
```
|
||||
|
||||
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`).
|
||||
|
||||
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
|
||||
|
||||
|
||||
1. Deploy the example nginx application:
|
||||
|
||||
```bash
|
||||
kubectl apply -f examples/nginx-app/base.yaml
|
||||
```
|
||||
|
||||
1. Check to see that both the Velero and nginx deployments are successfully created:
|
||||
|
||||
```
|
||||
kubectl get deployments -l component=velero --namespace=velero
|
||||
kubectl get deployments --namespace=nginx-example
|
||||
```
|
||||
|
||||
### Back up
|
||||
|
||||
1. Create a backup for any object that matches the `app=nginx` label selector:
|
||||
|
||||
```
|
||||
velero backup create nginx-backup --selector app=nginx
|
||||
```
|
||||
|
||||
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
|
||||
|
||||
```
|
||||
velero backup create nginx-backup --selector 'backup notin (ignore)'
|
||||
```
|
||||
|
||||
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
|
||||
|
||||
```
|
||||
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
|
||||
```
|
||||
|
||||
Alternatively, you can use some non-standard shorthand cron expressions:
|
||||
|
||||
```
|
||||
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
|
||||
```
|
||||
|
||||
See the [cron package's documentation][30] for more usage examples.
|
||||
|
||||
1. Simulate a disaster:
|
||||
|
||||
```
|
||||
kubectl delete namespace nginx-example
|
||||
```
|
||||
|
||||
1. To check that the nginx deployment and service are gone, run:
|
||||
|
||||
```
|
||||
kubectl get deployments --namespace=nginx-example
|
||||
kubectl get services --namespace=nginx-example
|
||||
kubectl get namespace/nginx-example
|
||||
```
|
||||
|
||||
You should get no results.
|
||||
|
||||
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
|
||||
|
||||
### Restore
|
||||
|
||||
1. Run:
|
||||
|
||||
```
|
||||
velero restore create --from-backup nginx-backup
|
||||
```
|
||||
|
||||
1. Run:
|
||||
|
||||
```
|
||||
velero restore get
|
||||
```
|
||||
|
||||
After the restore finishes, the output looks like the following:
|
||||
|
||||
```
|
||||
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
|
||||
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
|
||||
```
|
||||
|
||||
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
|
||||
|
||||
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
|
||||
|
||||
If there are errors or warnings, you can look at them in detail:
|
||||
|
||||
```
|
||||
velero restore describe <RESTORE_NAME>
|
||||
```
|
||||
|
||||
For more information, see [the debugging information][18].
|
||||
|
||||
### Clean up
|
||||
|
||||
If you want to delete any backups you created, including data in object storage and persistent
|
||||
volume snapshots, you can run:
|
||||
|
||||
```
|
||||
velero backup delete BACKUP_NAME
|
||||
```
|
||||
|
||||
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
|
||||
this for each backup you want to permanently delete. A future version of Velero will allow you to
|
||||
delete multiple backups by name or label selector.
|
||||
|
||||
Once fully removed, the backup is no longer visible when you run:
|
||||
|
||||
```
|
||||
velero backup get BACKUP_NAME
|
||||
```
|
||||
|
||||
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
|
||||
|
||||
```
|
||||
kubectl delete namespace/velero clusterrolebinding/velero
|
||||
kubectl delete crds -l component=velero
|
||||
kubectl delete -f examples/nginx-app/base.yaml
|
||||
```
|
||||
|
||||
## Expose Minio outside your cluster with a Service
|
||||
|
||||
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster (that is, from your Velero client), you need to make Minio available outside the cluster. You can:
|
||||
|
||||
- Change the Minio Service type from `ClusterIP` to `NodePort`.
|
||||
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
|
||||
|
||||
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
|
||||
|
||||
### Expose Minio with Service of type NodePort
|
||||
|
||||
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
|
||||
|
||||
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
|
||||
|
||||
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
|
||||
|
||||
1. Get the Minio URL:
|
||||
|
||||
- if you're running Minikube:
|
||||
|
||||
```shell
|
||||
minikube service minio --namespace=velero --url
|
||||
```
|
||||
|
||||
- in any other environment:
|
||||
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
|
||||
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
|
||||
|
||||
```shell
|
||||
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
|
||||
```
|
||||
|
||||
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
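
Instead of editing the YAML interactively, the same change can be applied with a patch (a sketch; substitute the URL built in the previous step):

```bash
kubectl -n velero patch backupstoragelocation default --type merge \
    -p '{"spec":{"config":{"publicUrl":"http://<NODE_ADDRESS>:<NODE_PORT>"}}}'
```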
|
||||
|
||||
## Expose Minio outside your cluster with Kubernetes in Docker (KinD):
|
||||
|
||||
Kubernetes in Docker currently does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
|
||||
|
||||
In a terminal, run the following:
|
||||
|
||||
```shell
|
||||
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
|
||||
|
||||
kubectl port-forward $MINIO_POD -n velero 9000:9000
|
||||
```
|
||||
|
||||
Then, in another terminal:
|
||||
|
||||
```shell
|
||||
kubectl edit backupstoragelocation default -n velero
|
||||
```
|
||||
|
||||
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
|
||||
|
||||
### Work with Ingress
|
||||
|
||||
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
|
||||
|
||||
In this case:
|
||||
|
||||
1. Keep the Service type as `ClusterIP`.
|
||||
|
||||
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
|
||||
|
||||
[1]: #expose-minio-with-service-of-type-nodeport
|
||||
[3]: ../install-overview.md
|
||||
[17]: ../restic.md
|
||||
[18]: ../debugging-restores.md
|
||||
[26]: https://github.com/vmware-tanzu/velero/releases
|
||||
[30]: https://godoc.org/github.com/robfig/cron
|
|
@ -0,0 +1,245 @@
|
|||
# Use Oracle Cloud as a Backup Storage Provider for Velero
|
||||
|
||||
## Introduction
|
||||
|
||||
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
|
||||
|
||||
1. [Download Velero](#download-velero)
|
||||
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
|
||||
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
|
||||
4. [Install Velero](#install-velero)
|
||||
5. [Clean Up](#clean-up)
|
||||
6. [Examples](#examples)
|
||||
7. [Additional Reading](#additional-reading)
|
||||
|
||||
## Download Velero
|
||||
|
||||
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
|
||||
|
||||
```
|
||||
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
|
||||
```
|
||||
|
||||
*We strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the master branch of the Velero repository is under active development and is not guaranteed to be stable!*
|
||||
|
||||
2. Untar the release in your `/usr/local/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
|
||||
|
||||
   You may choose to rename the directory to `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
|
||||
|
||||
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`
|
||||
|
||||
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
|
||||
|
||||
```
|
||||
$ velero
|
||||
Velero is a tool for managing disaster recovery, specifically for Kubernetes
|
||||
cluster resources. It provides a simple, configurable, and operationally robust
|
||||
way to back up your application state and associated data.
|
||||
|
||||
If you're familiar with kubectl, Velero supports a similar model, allowing you to
|
||||
execute commands such as 'velero get backup' and 'velero create schedule'. The same
|
||||
operations can also be performed as 'velero backup get' and 'velero schedule create'.
|
||||
|
||||
Usage:
|
||||
velero [command]
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Create A Customer Secret Key
|
||||
|
||||
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
|
||||
|
||||
2. Create a Velero credentials file with your Customer Secret Key:
|
||||
|
||||
```
|
||||
$ vi credentials-velero
|
||||
|
||||
[default]
|
||||
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
|
||||
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Create An Oracle Object Storage Bucket
|
||||
|
||||
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
|
||||
|
||||
|
||||
|
||||
## Install Velero
|
||||
|
||||
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
|
||||
|
||||
```
|
||||
velero install \
|
||||
--provider [provider name] \
|
||||
--bucket [bucket name] \
|
||||
--prefix [tenancy name] \
|
||||
--use-volume-snapshots=false \
|
||||
--secret-file [secret file location] \
|
||||
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
|
||||
```
|
||||
|
||||
- `--provider` Because we are using the S3-compatible API, we will use `aws` as our provider.
|
||||
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
|
||||
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
|
||||
- `--use-volume-snapshots=false` Velero does not currently have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
|
||||
- `--secret-file` The path to your `credentials-velero` file.
|
||||
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
velero install \
|
||||
--provider aws \
|
||||
--bucket velero \
|
||||
--prefix oracle-cloudnative \
|
||||
--use-volume-snapshots=false \
|
||||
--secret-file /Users/mboxell/bin/velero/credentials-velero \
|
||||
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
|
||||
```
|
||||
|
||||
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
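For example, the following standard `kubectl` commands list the Velero pods and stream the deployment's logs:

```
kubectl -n velero get pods
kubectl -n velero logs deployment/velero
```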
|
||||
|
||||
|
||||
|
||||
## Clean Up
|
||||
|
||||
To remove Velero from your environment (deleting the namespace, ClusterRoleBinding, ServiceAccount, Secret, Deployment, and CRDs), run:
|
||||
|
||||
```
|
||||
kubectl delete namespace/velero clusterrolebinding/velero
|
||||
kubectl delete crds -l component=velero
|
||||
```
|
||||
|
||||
This will remove all resources created by `velero install`.
|
||||
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
After creating the Velero server in your cluster, try this example:
|
||||
|
||||
### Basic example (without PersistentVolumes)
|
||||
|
||||
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
|
||||
|
||||
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
|
||||
|
||||
```
|
||||
$ kubectl apply -f examples/nginx-app/base.yaml
|
||||
namespace/nginx-example created
|
||||
deployment.apps/nginx-deployment created
|
||||
service/my-nginx created
|
||||
```
|
||||
|
||||
You can see the created resources by running `kubectl get all`
|
||||
|
||||
```
|
||||
$ kubectl get all
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
|
||||
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
|
||||
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
deployment.apps/nginx-deployment 2 2 2 2 21s
|
||||
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
|
||||
```
|
||||
|
||||
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
|
||||
|
||||
```
|
||||
$ velero backup create nginx-backup --include-namespaces nginx-example
|
||||
Backup request "nginx-backup" submitted successfully.
|
||||
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
|
||||
```
|
||||
|
||||
   At this point you can navigate to the appropriate bucket, which we called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
|
||||
|
||||
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
|
||||
|
||||
```
|
||||
$ kubectl delete namespaces nginx-example
|
||||
namespace "nginx-example" deleted
|
||||
```
|
||||
|
||||
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
|
||||
|
||||
```
|
||||
kubectl get deployments --namespace=nginx-example
|
||||
kubectl get services --namespace=nginx-example
|
||||
kubectl get namespace/nginx-example
|
||||
```
|
||||
|
||||
This should return: `No resources found.`
|
||||
|
||||
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
|
||||
|
||||
```
|
||||
$ velero restore create --from-backup nginx-backup
|
||||
Restore request "nginx-backup-20190604102710" submitted successfully.
|
||||
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
|
||||
```
|
||||
|
||||
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
|
||||
|
||||
5. Run `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
|
||||
|
||||
```
|
||||
$ velero restore get
|
||||
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
|
||||
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
|
||||
```
|
||||
|
||||
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
|
||||
|
||||
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
|
||||
|
||||
   If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `Completed`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`.
|
||||
|
||||
|
||||
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
|
||||
|
||||
```
|
||||
$ kubectl delete -f examples/nginx-app/base.yaml
|
||||
namespace "nginx-example" deleted
|
||||
deployment.apps "nginx-deployment" deleted
|
||||
service "my-nginx" deleted
|
||||
```
|
||||
|
||||
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
|
||||
|
||||
```
|
||||
$ velero backup delete nginx-backup
|
||||
Are you sure you want to continue (Y/N)? Y
|
||||
Request to delete backup "nginx-backup" submitted successfully.
|
||||
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
|
||||
```
|
||||
|
||||
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
|
||||
|
||||
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
|
||||
|
||||
```
|
||||
$ velero backup get nginx-backup
|
||||
An error occurred: backups.velero.io "nginx-backup" not found
|
||||
```
|
||||
|
||||
```
|
||||
$ velero backup get
|
||||
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Additional Reading
|
||||
|
||||
* [Official Velero Documentation](https://velero.io/docs/v1.2.0-beta.1/)
|
||||
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)
|
|
@ -0,0 +1,91 @@
|
|||
# Plugins
|
||||
|
||||
Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
|
||||
|
||||
Multiple plugins, of any type, can be implemented in this binary.
|
||||
|
||||
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
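Once packaged into a container image, a plugin is typically registered with a running Velero server via the CLI; the image reference below is a placeholder:

```
velero plugin add <registry>/<image>:<tag>
```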
|
||||
|
||||
## Plugin Naming
|
||||
|
||||
When naming your plugin, keep in mind that the name needs to conform to these rules:
|
||||
- have two parts separated by '/'
|
||||
- neither part can be empty
|
||||
- the prefix is a valid DNS subdomain name
|
||||
- a plugin with the same name must not already exist
|
||||
|
||||
### Some examples:
|
||||
|
||||
```
|
||||
- example.io/azure
|
||||
- 1.2.3.4/5678
|
||||
- example-with-dash.io/azure
|
||||
```
|
||||
|
||||
You will need to give your plugin(s) a name when registering them by calling the appropriate `RegisterX` function: <https://github.com/vmware-tanzu/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
|
||||
|
||||
## Plugin Kinds
|
||||
|
||||
Velero currently supports the following kinds of plugins:
|
||||
|
||||
- **Object Store** - persists and retrieves backups, backup logs and restore logs
|
||||
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
|
||||
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
|
||||
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
|
||||
|
||||
## Plugin Logging
|
||||
|
||||
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
|
||||
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
|
||||
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
|
||||
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
|
||||
|
||||
## Plugin Configuration
|
||||
|
||||
Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime,
|
||||
define a ConfigMap like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
# any name can be used; Velero uses the labels (below)
|
||||
# to identify it rather than the name
|
||||
name: my-plugin-config
|
||||
|
||||
# must be in the namespace where the velero deployment
|
||||
# is running
|
||||
namespace: velero
|
||||
|
||||
labels:
|
||||
# this value-less label identifies the ConfigMap as
|
||||
# config for a plugin (i.e. the built-in change storageclass
|
||||
# restore item action plugin)
|
||||
velero.io/plugin-config: ""
|
||||
|
||||
# add a label whose key corresponds to the fully-qualified
|
||||
# plugin name (e.g. mydomain.io/my-plugin-name), and whose
|
||||
# value is the plugin type (BackupItemAction, RestoreItemAction,
|
||||
# ObjectStore, or VolumeSnapshotter)
|
||||
<fully-qualified-plugin-name>: <plugin-type>
|
||||
|
||||
data:
|
||||
# add your configuration data here as key-value pairs
|
||||
```
|
||||
|
||||
Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration. See the [restic restore action][3]
|
||||
for an example of this -- in particular, the `getPluginConfig(...)` function.
|
||||
|
||||
## Feature Flags
|
||||
|
||||
Velero will pass any known feature flags as a comma-separated list of strings to the `--features` argument.
|
||||
|
||||
Once parsed into a `[]string`, the features can then be registered using the `NewFeatureFlagSet` function and queried with `features.Enabled(<featureName>)`.
|
||||
|
||||
## Environment Variables
|
||||
|
||||
Velero adds `LD_LIBRARY_PATH` to the list of environment variables as a convenience for plugins that require C libraries/extensions at runtime.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero-plugin-example
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/plugin/logger.go
|
||||
[3]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/restore/restic_restore_action.go
|
|
@ -0,0 +1,71 @@
|
|||
# Debugging Installation Issues
|
||||
|
||||
## General
|
||||
|
||||
### `invalid configuration: no configuration has been provided`
|
||||
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the
|
||||
following locations:
|
||||
* the path specified by the `--kubeconfig` flag, if any
|
||||
* the path specified by the `$KUBECONFIG` environment variable, if any
|
||||
* `~/.kube/config`
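For example, either of the following points the Velero client at an explicit kubeconfig (the path is a placeholder):

```
velero backup get --kubeconfig /path/to/kubeconfig

# or via the environment
export KUBECONFIG=/path/to/kubeconfig
velero backup get
```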
|
||||
|
||||
### Backups or restores stuck in `New` phase
|
||||
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
|
||||
```
|
||||
kubectl -n velero describe pods
|
||||
kubectl -n velero logs deployment/velero
|
||||
```
|
||||
|
||||
|
||||
## AWS
|
||||
|
||||
### `NoCredentialProviders: no valid providers in chain`
|
||||
|
||||
#### Using credentials
|
||||
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly
|
||||
into the Velero server pod. Ensure the following:
|
||||
|
||||
* The `cloud-credentials` secret exists in the Velero server's namespace
|
||||
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
|
||||
* The `credentials-velero` file is formatted properly and has the correct values:
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id=<your AWS access key ID>
|
||||
aws_secret_access_key=<your AWS secret access key>
|
||||
```
|
||||
|
||||
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
|
||||
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
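One way to spot-check the first three items is to decode the secret's `cloud` key and compare it against your local `credentials-velero` file, for example:

```
kubectl -n velero get secret cloud-credentials -o jsonpath='{.data.cloud}' | base64 --decode
```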
|
||||
|
||||
#### Using kube2iam
|
||||
This means that Velero can't read the content of the S3 bucket. Ensure the following:
|
||||
|
||||
* There is a Trust Policy document allowing the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
|
||||
* The new Velero role has all the permissions listed in the documentation regarding S3.
|
||||
|
||||
|
||||
## Azure
|
||||
|
||||
### `Failed to refresh the Token` or `adal: Refresh request failed`
|
||||
This means that the secret containing the Azure service principal credentials for Velero has not been created/mounted
|
||||
properly into the Velero server pod. Ensure the following:
|
||||
|
||||
* The `cloud-credentials` secret exists in the Velero server's namespace
|
||||
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
|
||||
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
|
||||
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
|
||||
|
||||
|
||||
## GCE/GKE
|
||||
|
||||
### `open credentials/cloud: no such file or directory`
|
||||
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
|
||||
into the Velero server pod. Ensure the following:
|
||||
|
||||
* The `cloud-credentials` secret exists in the Velero server's namespace
|
||||
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
|
||||
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
|
||||
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
|
||||
|
||||
[0]: azure-config.md#create-service-principal
|
|
@ -0,0 +1,106 @@
|
|||
# Debugging Restores
|
||||
|
||||
* [Example][0]
|
||||
* [Structure][1]
|
||||
|
||||
## Example
|
||||
|
||||
When Velero finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `velero restore get`:
|
||||
|
||||
```
|
||||
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
|
||||
backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none>
|
||||
backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none>
|
||||
backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none>
|
||||
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
|
||||
```
|
||||
|
||||
To delve into the warnings and errors in more detail, you can use `velero restore describe`:
|
||||
|
||||
```bash
|
||||
velero restore describe backup-test-20170726180512
|
||||
```
|
||||
|
||||
The output looks like this:
|
||||
|
||||
```
|
||||
Name: backup-test-20170726180512
|
||||
Namespace: velero
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Backup: backup-test
|
||||
|
||||
Namespaces:
|
||||
Included: *
|
||||
Excluded: <none>
|
||||
|
||||
Resources:
|
||||
Included: serviceaccounts
|
||||
Excluded: nodes, events, events.events.k8s.io
|
||||
Cluster-scoped: auto
|
||||
|
||||
Namespace mappings: <none>
|
||||
|
||||
Label selector: <none>
|
||||
|
||||
Restore PVs: auto
|
||||
|
||||
Phase: Completed
|
||||
|
||||
Validation errors: <none>
|
||||
|
||||
Warnings:
|
||||
Velero: <none>
|
||||
Cluster: <none>
|
||||
Namespaces:
|
||||
velero: serviceaccounts "velero" already exists
|
||||
serviceaccounts "default" already exists
|
||||
kube-public: serviceaccounts "default" already exists
|
||||
kube-system: serviceaccounts "attachdetach-controller" already exists
|
||||
serviceaccounts "certificate-controller" already exists
|
||||
serviceaccounts "cronjob-controller" already exists
|
||||
serviceaccounts "daemon-set-controller" already exists
|
||||
serviceaccounts "default" already exists
|
||||
serviceaccounts "deployment-controller" already exists
|
||||
serviceaccounts "disruption-controller" already exists
|
||||
serviceaccounts "endpoint-controller" already exists
|
||||
serviceaccounts "generic-garbage-collector" already exists
|
||||
serviceaccounts "horizontal-pod-autoscaler" already exists
|
||||
serviceaccounts "job-controller" already exists
|
||||
serviceaccounts "kube-dns" already exists
|
||||
serviceaccounts "namespace-controller" already exists
|
||||
serviceaccounts "node-controller" already exists
|
||||
serviceaccounts "persistent-volume-binder" already exists
|
||||
serviceaccounts "pod-garbage-collector" already exists
|
||||
serviceaccounts "replicaset-controller" already exists
|
||||
serviceaccounts "replication-controller" already exists
|
||||
serviceaccounts "resourcequota-controller" already exists
|
||||
serviceaccounts "service-account-controller" already exists
|
||||
serviceaccounts "service-controller" already exists
|
||||
serviceaccounts "statefulset-controller" already exists
|
||||
serviceaccounts "ttl-controller" already exists
|
||||
default: serviceaccounts "default" already exists
|
||||
|
||||
Errors:
|
||||
Velero: <none>
|
||||
Cluster: <none>
|
||||
Namespaces: <none>
|
||||
```
|
||||
|
||||
## Structure
|
||||
|
||||
Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues (e.g. the
|
||||
restore looks "normal" and all resources referenced in the backup exist in some form, although some
|
||||
of them may have been pre-existing).
|
||||
|
||||
Both errors and warnings are structured in the same way:
|
||||
|
||||
* `Velero`: A list of system-related issues encountered by the Velero server (e.g. couldn't read directory).
|
||||
|
||||
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.
|
||||
|
||||
* `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.
|
||||
|
||||
[0]: #example
|
||||
[1]: #structure
|
|
@ -0,0 +1,30 @@
|
|||
# Development
|
||||
|
||||
## Update generated files
|
||||
|
||||
Run `make update` to regenerate files if you make the following changes:
|
||||
|
||||
* Add/edit/remove command line flags and/or their help text
|
||||
* Add/edit/remove commands or subcommands
|
||||
* Add new API types
|
||||
* Add/edit/remove plugin protobuf message or service definitions
|
||||
|
||||
The following files are automatically generated from the source code:
|
||||
|
||||
* The clientset
|
||||
* Listers
|
||||
* Shared informers
|
||||
* Documentation
|
||||
* Protobuf/gRPC types
|
||||
|
||||
You can run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date.
|
||||
|
||||
## Test
|
||||
|
||||
To run unit tests, use `make test`.
|
||||
|
||||
## Vendor dependencies
|
||||
|
||||
If you need to add or update the vendored dependencies, see [Vendoring dependencies][11].
|
||||
|
||||
[11]: vendoring-dependencies.md
|
|
@ -0,0 +1,39 @@
|
|||
# Disaster recovery
|
||||
|
||||
*Using Schedules and Read-Only Backup Storage Locations*
|
||||
|
||||
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Velero looks like the following:
|
||||
|
||||
1. After you first run the Velero server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
|
||||
|
||||
```
|
||||
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
|
||||
```
|
||||
|
||||
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`.
|
||||
|
||||
1. A disaster happens and you need to recreate your resources.
|
||||
|
||||
1. Update your backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
|
||||
|
||||
```bash
|
||||
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
|
||||
--namespace velero \
|
||||
--type merge \
|
||||
--patch '{"spec":{"accessMode":"ReadOnly"}}'
|
||||
```
|
||||
|
||||
1. Create a restore with your most recent Velero Backup:
|
||||
|
||||
```
|
||||
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
|
||||
```
|
||||
|
||||
1. When ready, revert your backup storage location to read-write mode:
|
||||
|
||||
```bash
|
||||
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
|
||||
--namespace velero \
|
||||
--type merge \
|
||||
--patch '{"spec":{"accessMode":"ReadWrite"}}'
|
||||
```
|
|
@ -0,0 +1,63 @@
|
|||
## Examples
|
||||
|
||||
After you set up the Velero server, try these examples:
|
||||
|
||||
### Basic example (without PersistentVolumes)
|
||||
|
||||
1. Start the sample nginx app:
|
||||
|
||||
```bash
|
||||
kubectl apply -f examples/nginx-app/base.yaml
|
||||
```
|
||||
|
||||
1. Create a backup:
|
||||
|
||||
```bash
|
||||
velero backup create nginx-backup --include-namespaces nginx-example
|
||||
```
|
||||
|
||||
1. Simulate a disaster:
|
||||
|
||||
```bash
|
||||
kubectl delete namespaces nginx-example
|
||||
```
|
||||
|
||||
Wait for the namespace to be deleted.
|
||||
|
||||
1. Restore your lost resources:
|
||||
|
||||
```bash
|
||||
velero restore create --from-backup nginx-backup
|
||||
```
|
||||
|
||||
### Snapshot example (with PersistentVolumes)
|
||||
|
||||
> NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks.
|
||||
|
||||
1. Start the sample nginx app:
|
||||
|
||||
```bash
|
||||
kubectl apply -f examples/nginx-app/with-pv.yaml
|
||||
```
|
||||
|
||||
1. Create a backup with PV snapshotting:
|
||||
|
||||
```bash
|
||||
velero backup create nginx-backup --include-namespaces nginx-example
|
||||
```
|
||||
|
||||
1. Simulate a disaster:
|
||||
|
||||
```bash
|
||||
kubectl delete namespaces nginx-example
|
||||
```
|
||||
|
||||
Because the default [reclaim policy][1] for dynamically-provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. **Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists.**
|
||||
|
||||
1. Restore your lost resources:
|
||||
|
||||
```bash
|
||||
velero restore create --from-backup nginx-backup
|
||||
```
|
||||
|
||||
[1]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
|
|
@ -0,0 +1,44 @@
|
|||
# FAQ
|
||||
|
||||
## When is it appropriate to use Velero instead of etcd's built in backup/restore?
|
||||
|
||||
Etcd's backup/restore tooling is good for recovering from data loss in a single etcd cluster. For
|
||||
example, it is a good idea to take a backup of etcd prior to upgrading etcd itself. For more
|
||||
sophisticated management of your Kubernetes cluster backups and restores, we feel that Velero is
|
||||
generally a better approach. It gives you the ability to throw away an unstable cluster and restore
|
||||
your Kubernetes resources and data into a new cluster, which you can't do easily just by backing up
|
||||
and restoring etcd.
|
||||
|
||||
Examples of cases where Velero is useful:
|
||||
|
||||
* you don't have access to etcd (e.g. you're running on GKE)
|
||||
* backing up both Kubernetes resources and persistent volume state
|
||||
* cluster migrations
|
||||
* backing up a subset of your Kubernetes resources
|
||||
* backing up Kubernetes resources that are stored across multiple etcd clusters (for example if you
|
||||
run a custom apiserver)
|
||||
|
||||
## Will Velero restore my Kubernetes resources exactly the way they were before?
|
||||
|
||||
Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the
|
||||
pod so that it can be scheduled onto a new node. You can see some more examples of the differences
|
||||
in [pod_action.go](https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/restore/pod_action.go).
|
||||
|
||||
## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?
|
||||
|
||||
We **strongly** recommend that each Velero instance use a distinct bucket/prefix combination to store backups.
|
||||
Having multiple Velero instances write backups to the same bucket/prefix combination can lead to numerous
|
||||
problems - failed backups, overwritten backups, inadvertently deleted backups, etc., all of which can be
|
||||
avoided by using a separate bucket + prefix per Velero instance.
|
||||
|
||||
It's fine to have multiple Velero instances back up to the same bucket if each instance uses its own
|
||||
prefix within the bucket. This can be configured in your `BackupStorageLocation`, by setting the
|
||||
`spec.objectStorage.prefix` field. It's also fine to use a distinct bucket for each Velero instance,
|
||||
and not to use prefixes at all.
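For example, two clusters sharing one bucket might each create their backup storage location with a distinct prefix; the bucket name, prefixes, and region below are illustrative:

```bash
# On cluster A
velero backup-location create default \
    --provider aws \
    --bucket shared-velero-backups \
    --prefix cluster-a \
    --config region=us-east-1

# On cluster B
velero backup-location create default \
    --provider aws \
    --bucket shared-velero-backups \
    --prefix cluster-b \
    --config region=us-east-1
```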
|
||||
|
||||
Related to this, if you need to restore a backup that was created in cluster A into cluster B, you may
|
||||
configure cluster B with a backup storage location that points to cluster A's bucket/prefix. If you do
|
||||
this, you should configure the storage location pointing to cluster A's bucket/prefix in `ReadOnly` mode
|
||||
via the `--access-mode=ReadOnly` flag on the `velero backup-location create` command. This will ensure no
|
||||
new backups are created from Cluster B in Cluster A's bucket/prefix, and no existing backups are deleted
|
||||
or overwritten.
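A sketch of what that read-only location might look like on cluster B, again with illustrative names:

```bash
velero backup-location create cluster-a-backups \
    --provider aws \
    --bucket shared-velero-backups \
    --prefix cluster-a \
    --access-mode ReadOnly \
    --config region=us-east-1
```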
|
|
@ -0,0 +1,81 @@
|
|||
# Hooks
|
||||
|
||||
Velero currently supports executing commands in containers in pods during a backup.
|
||||
|
||||
## Backup Hooks
|
||||
|
||||
When performing a backup, you can specify one or more commands to execute in a container in a pod
|
||||
when that pod is being backed up. The commands can be configured to run *before* any custom action
|
||||
processing ("pre" hooks), or after all custom actions have been completed and any additional items
|
||||
specified by custom action have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
|
||||
on the containers.
|
||||
|
||||
There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
|
||||
|
||||
### Specifying Hooks As Pod Annotations
|
||||
|
||||
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
|
||||
|
||||
#### Pre hooks
|
||||
|
||||
* `pre.hook.backup.velero.io/container`
|
||||
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
|
||||
* `pre.hook.backup.velero.io/command`
|
||||
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
|
||||
* `pre.hook.backup.velero.io/on-error`
|
||||
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
|
||||
* `pre.hook.backup.velero.io/timeout`
|
||||
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
|
||||
|
||||
|
||||
#### Post hooks
|
||||
|
||||
* `post.hook.backup.velero.io/container`
|
||||
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
|
||||
* `post.hook.backup.velero.io/command`
|
||||
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
|
||||
* `post.hook.backup.velero.io/on-error`
|
||||
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
|
||||
* `post.hook.backup.velero.io/timeout`
|
||||
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
|
||||
|
||||
### Specifying Hooks in the Backup Spec
|
||||
|
||||
Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
|
||||
spec.
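For orientation, the following is a minimal sketch of spec-based hooks created directly with `kubectl`; the namespace, container, and command values are placeholders, and the field names follow the Backup API type linked above:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-with-hooks
  namespace: velero
spec:
  includedNamespaces:
  - nginx-example
  hooks:
    resources:
    - name: my-hook
      includedNamespaces:
      - nginx-example
      pre:
      - exec:
          # placeholder container and command; hooks run without a shell
          container: nginx
          command:
          - /usr/bin/uname
          - -a
          onError: Fail
          timeout: 30s
EOF
```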
|
||||
|
||||
## Hook Example with fsfreeze
|
||||
|
||||
We are going to walk through using both pre and post hooks for freezing a file system. Freezing the
|
||||
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
|
||||
|
||||
We will be using [examples/nginx-app/with-pv.yaml][2] for this example. Follow the [steps for your provider][3] to
|
||||
set up this example.
|
||||
|
||||
### Annotations
|
||||
|
||||
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
|
||||
to your declarative deployment. Below is an example of what updating an object in place might look like.
|
||||
|
||||
```shell
|
||||
kubectl annotate pod -n nginx-example -l app=nginx \
|
||||
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
|
||||
pre.hook.backup.velero.io/container=fsfreeze \
|
||||
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
|
||||
post.hook.backup.velero.io/container=fsfreeze
|
||||
```
|
||||
|
||||
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
|
||||
hooks are running and exiting without error.
|
||||
|
||||
```shell
|
||||
velero backup create nginx-hook-test
|
||||
|
||||
velero backup get nginx-hook-test
|
||||
velero backup logs nginx-hook-test | grep hookCommand
|
||||
```
|
||||
|
||||
|
||||
[1]: api-types/backup.md
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/examples/nginx-app/with-pv.yaml
|
||||
[3]: cloud-common.md
|
|
@ -0,0 +1,80 @@
|
|||
# How Velero Works
|
||||
|
||||
Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes [Custom Resource Definition (CRD)][20] and stored in [etcd][22]. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
|
||||
|
||||
You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label.
|
||||
|
||||
Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades).
|
||||
|
||||
## On-demand backups
|
||||
|
||||
The **backup** operation:
|
||||
|
||||
1. Uploads a tarball of copied Kubernetes objects into cloud object storage.
|
||||
|
||||
1. Calls the cloud provider API to make disk snapshots of persistent volumes, if specified.
|
||||
|
||||
You can optionally specify hooks to be executed during the backup. For example, you might
|
||||
need to tell a database to flush its in-memory buffers to disk before taking a snapshot. [More about hooks][10].
|
||||
|
||||
Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible.
|
||||
|
||||
## Scheduled backups
|
||||
|
||||
The **schedule** operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression.
|
||||
|
||||
Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*.
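For example, a daily schedule might be created like the following sketch (the name is illustrative; the cron expression is evaluated by the server):

```bash
# Backs up the cluster every day at 07:00, producing backups named daily-backup-<TIMESTAMP>
velero schedule create daily-backup --schedule "0 7 * * *"
```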
|
||||
|
||||
## Restores
|
||||
|
||||
The **restore** operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Velero supports multiple namespace remapping--for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the objects in namespace "123" under "456".
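A sketch of that remapping on the command line, using the `--namespace-mappings` flag (the backup name is illustrative):

```bash
velero restore create --from-backup test-backup --namespace-mappings abc:def,123:456
```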
|
||||
|
||||
The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`.
|
||||
|
||||
By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.
|
||||
|
||||
## Backup workflow
|
||||
|
||||
When you run `velero backup create test-backup`:
|
||||
|
||||
1. The Velero client makes a call to the Kubernetes API server to create a `Backup` object.
|
||||
|
||||
1. The `BackupController` notices the new `Backup` object and performs validation.
|
||||
|
||||
1. The `BackupController` begins the backup process. It collects the data to back up by querying the API server for resources.
|
||||
|
||||
1. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file.
|
||||
|
||||
By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`.
|
||||
|
||||
![19]
|
||||
|
||||
## Backed-up API versions
|
||||
|
||||
Velero backs up resources using the Kubernetes API server's *preferred version* for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful.
|
||||
|
||||
For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster **must** have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` **does not** need to be the preferred version in the target cluster; it just needs to exist.
|
||||
|
||||
## Set a backup to expire
|
||||
|
||||
When you create a backup, you can specify a TTL by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes:
|
||||
|
||||
* The backup resource
|
||||
* The backup file from cloud object storage
|
||||
* All PersistentVolume snapshots
|
||||
* All associated Restores
|
||||
|
||||
## Object storage sync
|
||||
|
||||
Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes.
|
||||
|
||||
This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster.
|
||||
|
||||
Likewise, if a backup object exists in Kubernetes but not in object storage, it will be deleted from Kubernetes since the backup tarball no longer exists.
|
||||
|
||||
[10]: hooks.md
|
||||
[19]: img/backup-process.png
|
||||
[20]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions
|
||||
[21]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers
|
||||
[22]: https://github.com/coreos/etcd
|
||||
|
|
@ -0,0 +1,21 @@
|
|||
# Image tagging policy
|
||||
|
||||
This document describes Velero's image tagging policy.
|
||||
|
||||
## Released versions
|
||||
|
||||
`velero/velero:<SemVer>`
|
||||
|
||||
Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/vmware-tanzu/velero` repository has a matching image, e.g. `velero/velero:v1.0.0`.
|
||||
|
||||
### Latest
|
||||
|
||||
`velero/velero:latest`
|
||||
|
||||
The `latest` tag follows the most recently released version of Velero.
|
||||
|
||||
## Development
|
||||
|
||||
`velero/velero:master`
|
||||
|
||||
The `master` tag follows the latest commit to land on the `master` branch.
|
|
@ -0,0 +1 @@
|
|||
Some of these diagrams (for instance backup-process.png) have been created on [draw.io](https://www.draw.io), using the "Include a copy of my diagram" option. If you want to make changes to these diagrams, try importing them into draw.io; you should then have access to the original shapes and text that went into them.
|
|
@ -0,0 +1,159 @@
|
|||
# Install Overview
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Install the command-line interface (CLI)](#install-the-cli)
|
||||
- [Install and configure the server components](#install-and-configure-the-server-components)
|
||||
- [Advanced installation topics](#advanced-installation-topics)
|
||||
|
||||
## Prerequisites
|
||||
- access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
|
||||
- `kubectl` installed locally
|
||||
|
||||
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
|
||||
|
||||
There are supported storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][4].
|
||||
|
||||
## Install the CLI
|
||||
|
||||
#### Option 1: macOS - Homebrew
|
||||
|
||||
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
|
||||
|
||||
```bash
|
||||
brew install velero
|
||||
```
|
||||
|
||||
#### Option 2: GitHub release
|
||||
|
||||
1. Download the [latest release][1]'s tarball for your client platform.
|
||||
1. Extract the tarball:
|
||||
|
||||
```bash
|
||||
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
|
||||
```
|
||||
1. Move the extracted `velero` binary to somewhere in your `$PATH` (e.g. `/usr/local/bin` for most users).
|
||||
|
||||
## Install and configure the server components
|
||||
|
||||
There are two supported methods for installing the Velero server components:
|
||||
|
||||
- the `velero install` CLI command
|
||||
- the [Helm chart](https://github.com/helm/charts/tree/master/stable/velero)
|
||||
|
||||
To install and configure the Velero server components, follow the provider-specific instructions documented by [your storage provider][0].
|
||||
|
||||
_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider](#install-an-additional-volume-snapshot-provider)._
|
||||
|
||||
## Advanced installation topics
|
||||
|
||||
- [Plugins](#plugins)
|
||||
- [Install in any namespace](#install-in-any-namespace)
|
||||
- [Use non-file-based identity mechanisms](#use-non-file-based-identity-mechanisms)
|
||||
- [Enable restic integration](#enable-restic-integration)
|
||||
- [Customize resource requests and limits](#customize-resource-requests-and-limits)
|
||||
- [Configure more than one storage location for backups or volume snapshots](#configure-more-than-one-storage-location-for-backups-or-volume-snapshots)
|
||||
- [Do not configure a backup storage location during install](#do-not-configure-a-backup-storage-location-during-install)
|
||||
- [Install an additional volume snapshot provider](#install-an-additional-volume-snapshot-provider)
|
||||
- [Generate YAML only](#generate-yaml-only)
|
||||
- [Additional options](#additional-options)
|
||||
|
||||
#### Plugins
|
||||
|
||||
During install, Velero requires that at least one plugin is added (with the `--plugins` flag). Please see the documentation under [Plugins](overview-plugins.md).
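For example, an install using the AWS object storage plugin might look like the following sketch; the bucket, secret file path, and region are placeholders:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0-beta.1 \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_FILE> \
    --backup-location-config region=<YOUR_REGION>
```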
|
||||
|
||||
#### Install in any namespace
|
||||
|
||||
Velero is installed in the `velero` namespace by default. However, you can install Velero in any namespace. See [run in custom namespace][2] for details.
|
||||
|
||||
#### Use non-file-based identity mechanisms
|
||||
|
||||
By default, `velero install` expects a credentials file for your `velero` IAM account to be provided via the `--secret-file` flag.
|
||||
|
||||
If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the `--no-secret` flag instead of `--secret-file`.
|
||||
|
||||
#### Enable restic integration
|
||||
|
||||
By default, `velero install` does not install Velero's [restic integration][3]. To enable it, specify the `--use-restic` flag.
|
||||
|
||||
If you've already run `velero install` without the `--use-restic` flag, you can run the same command again, including the `--use-restic` flag, to add the restic integration to your existing install.
|
||||
|
||||
#### Customize resource requests and limits
|
||||
|
||||
By default, the Velero deployment requests 500m CPU and 128Mi memory, and sets a limit of 1000m CPU and 256Mi memory.
|
||||
Default requests and limits are not set for the restic pods as CPU/Memory usage can depend heavily on the size of volumes being backed up.
|
||||
|
||||
If you need to customize these resource requests and limits, you can set the following flags in your `velero install` command:
|
||||
|
||||
```bash
|
||||
velero install \
|
||||
--provider <YOUR_PROVIDER> \
|
||||
--bucket <YOUR_BUCKET> \
|
||||
--secret-file <PATH_TO_FILE> \
|
||||
--velero-pod-cpu-request <CPU_REQUEST> \
|
||||
--velero-pod-mem-request <MEMORY_REQUEST> \
|
||||
--velero-pod-cpu-limit <CPU_LIMIT> \
|
||||
--velero-pod-mem-limit <MEMORY_LIMIT> \
|
||||
[--use-restic] \
|
||||
[--restic-pod-cpu-request <CPU_REQUEST>] \
|
||||
[--restic-pod-mem-request <MEMORY_REQUEST>] \
|
||||
[--restic-pod-cpu-limit <CPU_LIMIT>] \
|
||||
[--restic-pod-mem-limit <MEMORY_LIMIT>]
|
||||
```
|
||||
|
||||
Values for these flags follow the same format as [Kubernetes resource requirements][5].
|
||||
|
||||
#### Configure more than one storage location for backups or volume snapshots
|
||||
|
||||
Velero supports any number of backup storage locations and volume snapshot locations. For more details, see [about locations](locations.md).
|
||||
|
||||
However, `velero install` only supports configuring at most one backup storage location and one volume snapshot location.
|
||||
|
||||
To configure additional locations after running `velero install`, use the `velero backup-location create` and/or `velero snapshot-location create` commands along with provider-specific configuration. Use the `--help` flag on each of these commands for more details.
|
||||
|
||||
#### Do not configure a backup storage location during install
|
||||
|
||||
If you need to install Velero without a default backup storage location (without specifying `--bucket` or `--provider`), the `--no-default-backup-location` flag is required for confirmation.
|
||||
|
||||
#### Install an additional volume snapshot provider
|
||||
|
||||
Velero supports using different providers for volume snapshots than for object storage -- for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots.
|
||||
|
||||
However, `velero install` only supports configuring a single matching provider for both object storage and volume snapshots.
|
||||
|
||||
To use a different volume snapshot provider:
|
||||
|
||||
1. Install the Velero server components by following the instructions for your **object storage** provider
|
||||
|
||||
1. Add your volume snapshot provider's plugin to Velero (look in [your provider][0]'s documentation for the image name):
|
||||
|
||||
```bash
|
||||
velero plugin add <registry/image:version>
|
||||
```
|
||||
|
||||
1. Add a volume snapshot location for your provider, following [your provider][0]'s documentation for configuration:
|
||||
|
||||
```bash
|
||||
velero snapshot-location create <NAME> \
|
||||
--provider <PROVIDER-NAME> \
|
||||
[--config <PROVIDER-CONFIG>]
|
||||
```
|
||||
|
||||
#### Generate YAML only
|
||||
|
||||
By default, `velero install` generates and applies a customized set of Kubernetes configuration (YAML) to your cluster.
|
||||
|
||||
To generate the YAML without applying it to your cluster, use the `--dry-run -o yaml` flags.
|
||||
|
||||
This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc.
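For example, the following sketch writes the generated manifests to a file instead of applying them (provider, plugin image, bucket, and secret file are placeholders):

```bash
velero install \
    --provider <YOUR_PROVIDER> \
    --plugins <registry/image:version> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_FILE> \
    --dry-run -o yaml > velero-install.yaml
```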
|
||||
|
||||
#### Additional options
|
||||
|
||||
Run `velero install --help` or see the [Helm chart documentation](https://github.com/helm/charts/tree/master/stable/velero) for the full set of installation options.
|
||||
|
||||
|
||||
[0]: supported-providers.md
|
||||
[1]: https://github.com/vmware-tanzu/velero/releases/latest
|
||||
[2]: namespace.md
|
||||
[3]: restic.md
|
||||
[4]: on-premises.md
|
||||
[5]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu
|
|
@ -0,0 +1,164 @@
|
|||
# Backup Storage Locations and Volume Snapshot Locations
|
||||
|
||||
## Overview
|
||||
|
||||
Velero has two custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`, that are used to configure where Velero backups and their associated persistent volume snapshots are stored.
|
||||
|
||||
A `BackupStorageLocation` is defined as a bucket, a prefix within that bucket under which all Velero data should be stored, and a set of additional provider-specific fields (e.g. AWS region, Azure storage account, etc.). The [API documentation][1] captures the configurable parameters for each in-tree provider.
|
||||
|
||||
A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (e.g. AWS region, Azure resource group, Portworx snapshot type, etc.). The [API documentation][2] captures the configurable parameters for each in-tree provider.
|
||||
|
||||
The user can pre-configure one or more possible `BackupStorageLocations` and one or more `VolumeSnapshotLocations`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
|
||||
|
||||
This configuration design enables a number of different use cases, including:
|
||||
|
||||
- Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
|
||||
- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
|
||||
- For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
|
||||
|
||||
## Limitations / Caveats
|
||||
|
||||
- Velero only supports a single set of credentials *per provider*. It's not yet possible to use different credentials for different locations, if they're for the same provider.
|
||||
|
||||
- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
|
||||
|
||||
- Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important.
|
||||
|
||||
- Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. EBS and Portworx), but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will **only** snapshot the EBS volumes.
|
||||
|
||||
- Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
|
||||
|
||||
## Examples
|
||||
|
||||
Let's look at some examples of how we can use this configuration mechanism to address some common use cases:
|
||||
|
||||
#### Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
|
||||
|
||||
During server configuration:
|
||||
|
||||
```shell
|
||||
velero snapshot-location create ebs-us-east-1 \
|
||||
--provider aws \
|
||||
--config region=us-east-1
|
||||
|
||||
velero snapshot-location create portworx-cloud \
|
||||
--provider portworx \
|
||||
--config type=cloud
|
||||
```
|
||||
|
||||
During backup creation:
|
||||
|
||||
```shell
|
||||
velero backup create full-cluster-backup \
|
||||
--volume-snapshot-locations ebs-us-east-1,portworx-cloud
|
||||
```
|
||||
|
||||
Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Velero doesn't require them to be explicitly specified when creating the backup:
|
||||
|
||||
```shell
|
||||
velero backup create full-cluster-backup
|
||||
```
|
||||
|
||||
#### Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
|
||||
|
||||
During server configuration:
|
||||
|
||||
```shell
|
||||
velero backup-location create default \
|
||||
--provider aws \
|
||||
--bucket velero-backups \
|
||||
--config region=us-east-1
|
||||
|
||||
velero backup-location create s3-alt-region \
|
||||
--provider aws \
|
||||
--bucket velero-backups-alt \
|
||||
--config region=us-west-1
|
||||
```
|
||||
|
||||
During backup creation:
|
||||
|
||||
```shell
|
||||
# The Velero server will automatically store backups in the backup storage location named "default" if
|
||||
# one is not specified when creating the backup. You can alter which backup storage location is used
|
||||
# by default by setting the --default-backup-storage-location flag on the `velero server` command (run
|
||||
# by the Velero deployment) to the name of a different backup storage location.
|
||||
velero backup create full-cluster-backup
|
||||
```
|
||||
|
||||
Or:
|
||||
|
||||
```shell
|
||||
velero backup create full-cluster-alternate-location-backup \
|
||||
--storage-location s3-alt-region
|
||||
```
|
||||
|
||||
#### For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
|
||||
|
||||
During server configuration:
|
||||
|
||||
```shell
|
||||
velero snapshot-location create portworx-local \
|
||||
--provider portworx \
|
||||
--config type=local
|
||||
|
||||
velero snapshot-location create portworx-cloud \
|
||||
--provider portworx \
|
||||
--config type=cloud
|
||||
```
|
||||
|
||||
During backup creation:
|
||||
|
||||
```shell
|
||||
# Note that since in this example we have two possible volume snapshot locations for the Portworx
|
||||
# provider, we need to explicitly specify which one to use when creating a backup. Alternately,
|
||||
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
|
||||
# the Velero deployment) to specify which location should be used for each provider by default, in
|
||||
# which case you don't need to specify it when creating a backup.
|
||||
velero backup create local-snapshot-backup \
|
||||
--volume-snapshot-locations portworx-local
|
||||
```
|
||||
|
||||
Or:
|
||||
|
||||
```shell
|
||||
velero backup create cloud-snapshot-backup \
|
||||
--volume-snapshot-locations portworx-cloud
|
||||
```
|
||||
|
||||
#### Use a single location
|
||||
|
||||
If you don't have a use case for more than one location, it's still easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
|
||||
|
||||
During server configuration:
|
||||
|
||||
```shell
|
||||
velero backup-location create default \
|
||||
--provider aws \
|
||||
--bucket velero-backups \
|
||||
--config region=us-west-1
|
||||
|
||||
velero snapshot-location create ebs-us-west-1 \
|
||||
--provider aws \
|
||||
--config region=us-west-1
|
||||
```
|
||||
|
||||
During backup creation:
|
||||
|
||||
```shell
|
||||
# Velero will automatically use your configured backup storage location and volume snapshot location.
|
||||
# Nothing needs to be specified when creating a backup.
|
||||
velero backup create full-cluster-backup
|
||||
```
|
||||
|
||||
## Additional Use Cases
|
||||
|
||||
1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
|
||||
|
||||
1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups/subscriptions. This is possible using a `BackupStorageLocation`, by specifying a `storageAccount`, `resourceGroup` and/or `subscriptionId`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
|
||||
|
||||
|
||||
|
||||
[1]: api-types/backupstoragelocation.md
|
||||
[2]: api-types/volumesnapshotlocation.md
|
||||
[3]: api-types/volumesnapshotlocation.md#azure
|
||||
[4]: api-types/backupstoragelocation.md#azure
|
|
@ -0,0 +1,48 @@
|
|||
# Cluster migration
|
||||
|
||||
*Using Backups and Restores*
|
||||
|
||||
Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. **Note that Velero does not support the migration of persistent volumes across cloud providers.**
|
||||
|
||||
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
|
||||
|
||||
```
|
||||
velero backup create <BACKUP-NAME>
|
||||
```
|
||||
|
||||
The default TTL is 30 days (720 hours); you can use the `--ttl` flag to change this as necessary.
|
||||
|
||||
1. *(Cluster 2)* Configure `BackupStorageLocations` and `VolumeSnapshotLocations`, pointing to the locations used by *Cluster 1*, using `velero backup-location create` and `velero snapshot-location create`. Make sure to configure the `BackupStorageLocations` as read-only
|
||||
by using the `--access-mode=ReadOnly` flag for `velero backup-location create`.
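    For example, on AWS the read-only location for *Cluster 2* might be created like this; the location name, bucket, and region are placeholders, and the provider is just an assumption for illustration:

    ```
    velero backup-location create cluster-1-backups \
        --provider aws \
        --bucket <CLUSTER-1-BUCKET> \
        --config region=<CLUSTER-1-REGION> \
        --access-mode=ReadOnly
    ```

    The read-only mode keeps *Cluster 2* from writing to or deleting backups in the shared bucket while you migrate.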
|
||||
|
||||
1. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.
|
||||
|
||||
```
|
||||
velero backup describe <BACKUP-NAME>
|
||||
```
|
||||
|
||||
**Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
|
||||
|
||||
1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
|
||||
|
||||
```
|
||||
velero restore create --from-backup <BACKUP-NAME>
|
||||
```
|
||||
|
||||
## Verify both clusters
|
||||
|
||||
Check that the second cluster is behaving as expected:
|
||||
|
||||
1. *(Cluster 2)* Run:
|
||||
|
||||
```
|
||||
velero restore get
|
||||
```
|
||||
|
||||
1. Then run:
|
||||
|
||||
```
|
||||
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
|
||||
```
|
||||
|
||||
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.
|
|
@ -0,0 +1,25 @@
|
|||
# Run in custom namespace
|
||||
|
||||
You can run Velero in any namespace.
|
||||
|
||||
First, ensure you've [downloaded & extracted the latest release][0].
|
||||
|
||||
Then, install Velero using the `--namespace` flag:
|
||||
|
||||
```bash
|
||||
velero install --bucket <YOUR_BUCKET> --provider <YOUR_PROVIDER> --namespace <YOUR_NAMESPACE>
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Specify the namespace in client commands
|
||||
|
||||
To specify the namespace for all Velero client commands, run:
|
||||
|
||||
```bash
|
||||
velero client config set namespace=<NAMESPACE_VALUE>
|
||||
```
|
||||
|
||||
|
||||
|
||||
[0]: install-overview.md
|
|
@ -0,0 +1,24 @@
|
|||
# On-Premises Environments
|
||||
|
||||
You can run Velero in an on-premises cluster in different ways depending on your requirements.
|
||||
|
||||
### Selecting an object storage provider
|
||||
|
||||
You must select an object storage backend that Velero can use to store backup data. [Supported providers][0] contains information on various
|
||||
options that are supported or have been reported to work by users.
|
||||
|
||||
If you do not already have an object storage system, [MinIO][2] is an open-source S3-compatible object storage system that can be installed on-premises and is compatible with Velero. The details of configuring it for production usage are out of scope for Velero's documentation, but an [evaluation install guide][3] using MinIO is provided for convenience.
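As a rough sketch, an on-premises install pointed at an in-cluster MinIO deployment might look like the following; the plugin image, bucket name, credentials file, and MinIO URL are assumptions to adapt to your environment:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```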
|
||||
|
||||
### (Optional) Selecting volume snapshot providers
|
||||
|
||||
If you need to back up persistent volume data, you must select a volume backup solution. [Supported providers][0] contains information on the supported options.
|
||||
|
||||
For example, if you use [Portworx][4] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part of your Velero backups.
|
||||
|
||||
If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][1], which provides a platform-agnostic file-level backup solution for volume data.
|
||||
|
||||
[0]: supported-providers.md
|
||||
[1]: restic.md
|
||||
[2]: https://min.io
|
||||
[3]: contributions/minio.md
|
||||
[4]: https://portworx.com
|
|
@ -0,0 +1,99 @@
|
|||
# Output file format
|
||||
|
||||
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `velero backup create <NAME>`).
|
||||
|
||||
In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Velero server configuration. This subdirectory includes an additional file called `velero-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format.
|
||||
|
||||
The directory structure in your cloud storage looks something like:
|
||||
|
||||
```
|
||||
rootBucket/
|
||||
backup1234/
|
||||
velero-backup.json
|
||||
backup1234.tar.gz
|
||||
```
|
||||
|
||||
## Example backup JSON file
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Backup",
|
||||
"apiVersion": "velero.io/v1",
|
||||
"metadata": {
|
||||
"name": "test-backup",
|
||||
"namespace": "velero",
|
||||
"selfLink": "/apis/velero.io/v1/namespaces/velero/backups/testtest",
|
||||
"uid": "a12345cb-75f5-11e7-b4c2-abcdef123456",
|
||||
"resourceVersion": "337075",
|
||||
"creationTimestamp": "2017-07-31T13:39:15Z"
|
||||
},
|
||||
"spec": {
|
||||
"includedNamespaces": [
|
||||
"*"
|
||||
],
|
||||
"excludedNamespaces": null,
|
||||
"includedResources": [
|
||||
"*"
|
||||
],
|
||||
"excludedResources": null,
|
||||
"labelSelector": null,
|
||||
"snapshotVolumes": true,
|
||||
"ttl": "24h0m0s"
|
||||
},
|
||||
"status": {
|
||||
"version": 1,
|
||||
"expiration": "2017-08-01T13:39:15Z",
|
||||
"phase": "Completed",
|
||||
"volumeBackups": {
|
||||
"pvc-e1e2d345-7583-11e7-b4c2-abcdef123456": {
|
||||
"snapshotID": "snap-04b1a8e11dfb33ab0",
|
||||
"type": "gp2",
|
||||
"iops": 100
|
||||
}
|
||||
},
|
||||
"validationErrors": null
|
||||
}
|
||||
}
|
||||
```
|
||||
Note that this file includes detailed info about your volume snapshots in the `status.volumeBackups` field, which can be helpful if you want to manually check them in your cloud provider GUI.
|
||||
|
||||
## file format version: 1
|
||||
|
||||
When unzipped, a typical backup directory (e.g. `backup1234.tar.gz`) looks like the following:
|
||||
|
||||
```
|
||||
resources/
|
||||
persistentvolumes/
|
||||
cluster/
|
||||
pv01.json
|
||||
...
|
||||
configmaps/
|
||||
namespaces/
|
||||
namespace1/
|
||||
myconfigmap.json
|
||||
...
|
||||
namespace2/
|
||||
...
|
||||
pods/
|
||||
namespaces/
|
||||
namespace1/
|
||||
mypod.json
|
||||
...
|
||||
namespace2/
|
||||
...
|
||||
jobs/
|
||||
namespaces/
|
||||
namespace1/
|
||||
awesome-job.json
|
||||
...
|
||||
namespace2/
|
||||
...
|
||||
deployments/
|
||||
namespaces/
|
||||
namespace1/
|
||||
cool-deployment.json
|
||||
...
|
||||
namespace2/
|
||||
...
|
||||
...
|
||||
```
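
If you want to inspect a backup tarball directly, you can pull it down with the Velero CLI instead of browsing your object storage console. A quick sketch (the name of the downloaded file is assumed here):

```
# Download the backup contents to the current directory, then peek inside.
velero backup download backup1234
tar -tzf backup1234-data.tar.gz | head
```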
|
|
@ -0,0 +1,24 @@
|
|||
|
||||
# Velero plugin system
|
||||
|
||||
Velero has a plugin system which allows integration with a variety of providers for backup storage and volume snapshot operations.
|
||||
|
||||
During install, Velero requires that at least one plugin is added (with the `--plugins` flag). The plugin can provide an object store, a volume snapshotter, or both. The exception is when you are not configuring a backup storage location or a volume snapshot location at install time; in that case, the flag is optional.
|
||||
|
||||
Any plugin can be added after Velero has been installed by using the command `velero plugin add <registry/image:version>`.
|
||||
|
||||
Example with a Docker Hub image: `velero plugin add velero/velero-plugin-for-aws:v1.0.0`.
|
||||
|
||||
In the same way, any plugin can be removed by using the command `velero plugin remove <registry/image:version>`.
|
||||
|
||||
## Creating a new plugin
|
||||
|
||||
Anyone can add integrations for any platform to provide additional backup and volume storage without modifying the Velero codebase. To write a plugin for a new backup or volume storage platform, take a look at our [example repo][1] and at our documentation for [Custom plugins][2].
|
||||
|
||||
## Adding a new plugin
|
||||
|
||||
After you publish your plugin in your own repository, open a PR that adds a link to it under the appropriate list on the [supported providers][3] page in our documentation.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero-plugin-example/
|
||||
[2]: custom-plugins.md
|
||||
[3]: supported-providers.md
|
|
@ -0,0 +1,47 @@
|
|||
# Run Velero more securely with restrictive RBAC settings
|
||||
|
||||
By default Velero runs with an RBAC policy of ClusterRole `cluster-admin`. This is to make sure that Velero can back up or restore anything in your cluster. But `cluster-admin` access is wide open -- it gives Velero components access to everything in your cluster. Depending on your environment and your security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
|
||||
|
||||
**Note:** Roles and RoleBindings are associated with a single namespace, not with an entire cluster. PersistentVolumes, on the other hand, are cluster-scoped, so their backups are associated with an entire cluster. This means that any backups or restores that use a restrictive Role and RoleBinding pair can manage only the resources that belong to the namespace. You do not need a wide-open RBAC policy to manage PersistentVolumes, however. You can configure a ClusterRole and ClusterRoleBinding that allow backups and restores only of PersistentVolumes, not of all objects in the cluster.
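As a sketch of that PersistentVolume-only approach, a ClusterRole might look something like the following; the name is a placeholder, and depending on what you back up you may need additional resources (for example `persistentvolumeclaims`) or verbs:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: velero-persistentvolumes   # placeholder name
  labels:
    component: velero
rules:
- apiGroups: [""]                  # PersistentVolumes live in the core API group
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
```

Bind it with a ClusterRoleBinding to the same ServiceAccount used in the Role and RoleBinding samples below.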
|
||||
|
||||
For more information about RBAC and access control generally in Kubernetes, see the Kubernetes documentation about [access control][1], [managing service accounts][2], and [RBAC authorization][3].
|
||||
|
||||
## Set up Roles and RoleBindings
|
||||
|
||||
Here's a sample Role and RoleBinding pair.
|
||||
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: Role
|
||||
metadata:
|
||||
namespace: YOUR_NAMESPACE_HERE
|
||||
name: ROLE_NAME_HERE
|
||||
labels:
|
||||
component: velero
|
||||
rules:
|
||||
- apiGroups:
|
||||
- velero.io
|
||||
verbs:
|
||||
- "*"
|
||||
resources:
|
||||
- "*"
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: ROLEBINDING_NAME_HERE
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: YOUR_SERVICEACCOUNT_HERE
|
||||
roleRef:
|
||||
kind: Role
|
||||
name: ROLE_NAME_HERE
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
```
|
||||
|
||||
[1]: https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
|
||||
[2]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
|
||||
[3]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
|
||||
[4]: namespace.md
|
|
@ -0,0 +1,380 @@
|
|||
# Restic Integration
|
||||
|
||||
Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic][1]. This support is considered beta quality. Please see the list of [limitations](#limitations) to understand if it currently fits your use case.
|
||||
|
||||
Velero has always allowed you to take snapshots of persistent volumes as part of your backups if you’re using one of
|
||||
the supported cloud providers’ block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks).
|
||||
We also provide a plugin model that enables anyone to implement additional object and block storage backends, outside the
|
||||
main Velero repository.
|
||||
|
||||
We integrated restic with Velero so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes
|
||||
volume*. This is a new capability for Velero, not a replacement for existing functionality. If you're running on AWS, and
|
||||
taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you've
|
||||
been waiting for a snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
|
||||
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
|
||||
|
||||
Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
|
||||
cross-volume-type data migrations. Stay tuned as this evolves!
|
||||
|
||||
\* hostPath volumes are not supported, but the [new local volume type][4] is supported.
|
||||
|
||||
## Setup
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
|
||||
|
||||
### Instructions
|
||||
|
||||
Ensure you've [downloaded the latest release][3].
|
||||
|
||||
To install restic, use the `--use-restic` flag on the `velero install` command. See the [install overview][2] for more details. When using restic on a storage provider that doesn't currently have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
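A hedged example of such an install; the object storage provider, plugin image, bucket, and credentials file below are assumptions to replace with your own values, and `--use-volume-snapshots=false` is included on the assumption that you do not want a snapshot location:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-restic \
    --use-volume-snapshots=false
```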
|
||||
|
||||
Please note: For some PaaS/CaaS platforms based on Kubernetes such as RancherOS, OpenShift and Enterprise PKS, some modifications are required to the restic DaemonSet spec.
|
||||
|
||||
**RancherOS**
|
||||
|
||||
The host path for volumes is not `/var/lib/kubelet/pods`; rather, it is `/opt/rke/var/lib/kubelet/pods`. Update the `hostPath` in the restic DaemonSet from
|
||||
|
||||
```yaml
|
||||
hostPath:
|
||||
path: /var/lib/kubelet/pods
|
||||
```
|
||||
|
||||
to
|
||||
|
||||
```yaml
|
||||
hostPath:
|
||||
path: /opt/rke/var/lib/kubelet/pods
|
||||
```
|
||||
|
||||
**OpenShift**
|
||||
|
||||
The restic containers must run in `privileged` mode to be able to mount the correct host path for pod volumes.
|
||||
|
||||
1. Add the `velero` ServiceAccount to the `privileged` SCC:
|
||||
|
||||
```
|
||||
$ oc adm policy add-scc-to-user privileged -z velero -n velero
|
||||
```
|
||||
|
||||
2. Modify the DaemonSet YAML to request privileged mode and mount the correct host path for pod volumes.
|
||||
|
||||
```diff
|
||||
@@ -35,7 +35,7 @@ spec:
|
||||
secretName: cloud-credentials
|
||||
- name: host-pods
|
||||
hostPath:
|
||||
- path: /var/lib/kubelet/pods
|
||||
+ path: /var/lib/origin/openshift.local.volumes/pods
|
||||
- name: scratch
|
||||
emptyDir: {}
|
||||
containers:
|
||||
@@ -67,3 +67,5 @@ spec:
|
||||
value: /credentials/cloud
|
||||
- name: VELERO_SCRATCH_DIR
|
||||
value: /scratch
|
||||
+ securityContext:
|
||||
+ privileged: true
|
||||
```
|
||||
|
||||
If restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) in order to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
|
||||
|
||||
By default, a user-created OpenShift namespace will not schedule pods on all nodes in the cluster.
|
||||
|
||||
To schedule pods on all nodes, the namespace needs an annotation:
|
||||
|
||||
```
|
||||
oc annotate namespace <velero namespace> openshift.io/node-selector=""
|
||||
```
|
||||
|
||||
This should be done before installing Velero.
|
||||
|
||||
Otherwise, the restic DaemonSet needs to be deleted and recreated:
|
||||
|
||||
```
|
||||
oc get ds restic -o yaml -n <velero namespace> > ds.yaml
oc delete ds restic -n <velero namespace>
|
||||
oc annotate namespace <velero namespace> openshift.io/node-selector=""
|
||||
oc create -n <velero namespace> -f ds.yaml
|
||||
```
|
||||
|
||||
**Enterprise PKS**
|
||||
|
||||
You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the hostpath.
|
||||
|
||||
The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`:
|
||||
|
||||
```yaml
|
||||
hostPath:
|
||||
path: /var/vcap/data/kubelet/pods
|
||||
```
|
||||
|
||||
You're now ready to use Velero with restic.
|
||||
|
||||
## Back up
|
||||
|
||||
1. Run the following for each pod that contains a volume to back up:
|
||||
|
||||
```bash
|
||||
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
|
||||
```
|
||||
|
||||
where the volume names are the names of the volumes in the pod spec.
|
||||
|
||||
For example, for the following pod:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: sample
|
||||
namespace: foo
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
name: test-webserver
|
||||
volumeMounts:
|
||||
- name: pvc-volume
|
||||
mountPath: /volume-1
|
||||
- name: emptydir-volume
|
||||
mountPath: /volume-2
|
||||
volumes:
|
||||
- name: pvc-volume
|
||||
persistentVolumeClaim:
|
||||
claimName: test-volume-claim
|
||||
- name: emptydir-volume
|
||||
emptyDir: {}
|
||||
```
|
||||
|
||||
You'd run:
|
||||
|
||||
```bash
|
||||
kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume
|
||||
```
|
||||
|
||||
This annotation can also be provided in a pod template spec if you use a controller to manage your pods.
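    For example, to set the annotation through a Deployment's pod template instead of on the pod directly (the Deployment name here is hypothetical, and patching the template triggers a rollout of new pods):

    ```bash
    kubectl -n foo patch deployment/sample-deployment --type merge -p \
      '{"spec":{"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"pvc-volume,emptydir-volume"}}}}}'
    ```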
|
||||
|
||||
1. Take a Velero backup:
|
||||
|
||||
```bash
|
||||
velero backup create NAME OPTIONS...
|
||||
```
|
||||
|
||||
1. When the backup completes, view information about the backups:
|
||||
|
||||
```bash
|
||||
velero backup describe YOUR_BACKUP_NAME
|
||||
```
|
||||
```bash
|
||||
kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
|
||||
```
|
||||
|
||||
## Restore
|
||||
|
||||
1. Restore from your Velero backup:
|
||||
|
||||
```bash
|
||||
velero restore create --from-backup BACKUP_NAME OPTIONS...
|
||||
```
|
||||
|
||||
1. When the restore completes, view information about your pod volume restores:
|
||||
|
||||
```bash
|
||||
velero restore describe YOUR_RESTORE_NAME
|
||||
```
|
||||
```bash
|
||||
kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOUR_RESTORE_NAME -o yaml
|
||||
```
|
||||
|
||||
## Limitations
|
||||
|
||||
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
|
||||
- Those of you familiar with [restic][1] may know that it encrypts all of its data. We've decided to use a static,
|
||||
common encryption key for all restic repositories created by Velero. **This means that anyone who has access to your
|
||||
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
|
||||
appropriately. We plan to implement full Velero backup encryption, including securing the restic encryption keys, in
|
||||
a future release.
|
||||
- An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
|
||||
PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (e.g. by a ReplicaSet/Deployment), the next backup of those
|
||||
volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
|
||||
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
|
||||
difference is small.
|
||||
|
||||
## Customize Restore Helper Container
|
||||
|
||||
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
|
||||
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
|
||||
the alternate image.
|
||||
|
||||
In addition, you can customize the resource requirements for the init container, should you need.
|
||||
|
||||
The ConfigMap must look like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
# any name can be used; Velero uses the labels (below)
|
||||
# to identify it rather than the name
|
||||
name: restic-restore-action-config
|
||||
# must be in the velero namespace
|
||||
namespace: velero
|
||||
# the below labels should be used verbatim in your
|
||||
# ConfigMap.
|
||||
labels:
|
||||
# this value-less label identifies the ConfigMap as
|
||||
# config for a plugin (i.e. the built-in restic restore
|
||||
# item action plugin)
|
||||
velero.io/plugin-config: ""
|
||||
# this label identifies the name and kind of plugin
|
||||
# that this ConfigMap is for.
|
||||
velero.io/restic: RestoreItemAction
|
||||
data:
|
||||
# The value for "image" can either include a tag or not;
|
||||
# if the tag is *not* included, the tag from the main Velero
|
||||
# image will automatically be used.
|
||||
image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
|
||||
|
||||
# "cpuRequest" sets the request.cpu value on the restic init containers during restore.
|
||||
# If not set, it will default to "100m". A value of "0" is treated as unbounded.
|
||||
cpuRequest: 200m
|
||||
|
||||
# "memRequest" sets the request.memory value on the restic init containers during restore.
|
||||
# If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
|
||||
memRequest: 128Mi
|
||||
|
||||
# "cpuLimit" sets the request.cpu value on the restic init containers during restore.
|
||||
# If not set, it will default to "100m". A value of "0" is treated as unbounded.
|
||||
cpuLimit: 200m
|
||||
|
||||
# "memLimit" sets the request.memory value on the restic init containers during restore.
|
||||
# If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
|
||||
memLimit: 128Mi
|
||||
|
||||
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
Run the following checks:
|
||||
|
||||
Are your Velero server and daemonset pods running?
|
||||
|
||||
```bash
|
||||
kubectl get pods -n velero
|
||||
```
|
||||
|
||||
Does your restic repository exist, and is it ready?
|
||||
|
||||
```bash
|
||||
velero restic repo get
|
||||
|
||||
velero restic repo get REPO_NAME -o yaml
|
||||
```
|
||||
|
||||
Are there any errors in your Velero backup/restore?
|
||||
|
||||
```bash
|
||||
velero backup describe BACKUP_NAME
|
||||
velero backup logs BACKUP_NAME
|
||||
|
||||
velero restore describe RESTORE_NAME
|
||||
velero restore logs RESTORE_NAME
|
||||
```
|
||||
|
||||
What is the status of your pod volume backups/restores?
|
||||
|
||||
```bash
|
||||
kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml
|
||||
|
||||
kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml
|
||||
```
|
||||
|
||||
Is there any useful information in the Velero server or daemon pod logs?
|
||||
|
||||
```bash
|
||||
kubectl -n velero logs deploy/velero
|
||||
kubectl -n velero logs DAEMON_POD_NAME
|
||||
```
|
||||
|
||||
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
|
||||
to the container command in the deployment/daemonset pod template spec.
|
||||
|
||||
## How backup and restore work with restic
|
||||
|
||||
We introduced three custom resource definitions and associated controllers:
|
||||
|
||||
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
|
||||
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
|
||||
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
|
||||
and `restic prune`.
|
||||
|
||||
You can see information about your Velero restic repositories by running `velero restic repo get`.
|
||||
|
||||
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
|
||||
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
|
||||
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
|
||||
  `restic backup` commands to back up pod volume data.
|
||||
|
||||
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
|
||||
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
|
||||
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
|
||||
on that node. The controller executes `restic restore` commands to restore pod volume data.
|
||||
|
||||
### Backup
|
||||
|
||||
1. The main Velero backup process checks each pod that it's backing up for the annotation specifying a restic backup
|
||||
should be taken (`backup.velero.io/backup-volumes`)
|
||||
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
|
||||
- checking if a `ResticRepository` custom resource already exists
|
||||
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
|
||||
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
|
||||
1. The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail
|
||||
1. Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which:
|
||||
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
|
||||
- finds the pod volume's subdirectory within the above volume
|
||||
- runs `restic backup`
|
||||
- updates the status of the custom resource to `Completed` or `Failed`
|
||||
1. As each `PodVolumeBackup` finishes, the main Velero process adds it to the Velero backup in a file named `<backup-name>-podvolumebackups.json.gz`. This file gets uploaded to object storage alongside the backup tarball. It will be used for restores, as seen in the next section.
|
||||
|
||||
### Restore
|
||||
|
||||
1. The main Velero restore process checks the cluster for existing `PodVolumeBackup` custom resources to restore from.
|
||||
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
|
||||
- checking if a `ResticRepository` custom resource already exists
|
||||
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
|
||||
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
|
||||
check it for integrity)
|
||||
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
|
||||
on this shortly)
|
||||
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
|
||||
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
|
||||
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
|
||||
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
|
||||
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
|
||||
- waits for the pod to be running the init container
|
||||
- finds the pod volume's subdirectory within the above volume
|
||||
- runs `restic restore`
|
||||
- on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore
|
||||
that this pod volume restore is for
|
||||
- updates the status of the custom resource to `Completed` or `Failed`
|
||||
1. The init container that was added to the pod is running a process that waits until it finds a file
|
||||
within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run
|
||||
1. Once all such files are found, the init container's process terminates successfully and the pod moves
|
||||
on to running other init containers/the main containers.
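
If you want to confirm that the wait init container was injected into a restored pod, listing the pod's init containers is enough; the pod name and namespace are placeholders:

```bash
kubectl -n YOUR_POD_NAMESPACE get pod YOUR_POD_NAME \
  -o jsonpath='{.spec.initContainers[*].name}'
```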
|
||||
|
||||
## 3rd party controller
|
||||
|
||||
### Monitor backup annotation
|
||||
|
||||
Velero does not currently provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
|
||||
|
||||
To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
|
||||
|
||||
[1]: https://github.com/restic/restic
|
||||
[2]: install-overview.md
|
||||
[3]: https://github.com/vmware-tanzu/velero/releases/
|
||||
[4]: https://kubernetes.io/docs/concepts/storage/volumes/#local
|
||||
[5]: http://restic.readthedocs.io/en/latest/100_references.html#terminology
|
||||
[6]: https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
|
||||
[7]: https://github.com/bitsbeats/velero-pvc-watcher
|
|
@ -0,0 +1,41 @@
|
|||
# Restore Reference
|
||||
|
||||
## Restoring Into a Different Namespace
|
||||
|
||||
Velero can restore resources into a different namespace than the one they were backed up from. To do this, use the `--namespace-mappings` flag:
|
||||
|
||||
```bash
|
||||
velero restore create RESTORE_NAME \
|
||||
--from-backup BACKUP_NAME \
|
||||
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
|
||||
```
|
||||
|
||||
## Changing PV/PVC Storage Classes
|
||||
|
||||
Velero can change the storage class of persistent volumes and persistent volume claims during restores. To configure a storage class mapping, create a config map in the Velero namespace like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
# any name can be used; Velero uses the labels (below)
|
||||
# to identify it rather than the name
|
||||
name: change-storage-class-config
|
||||
# must be in the velero namespace
|
||||
namespace: velero
|
||||
# the below labels should be used verbatim in your
|
||||
# ConfigMap.
|
||||
labels:
|
||||
# this value-less label identifies the ConfigMap as
|
||||
# config for a plugin (i.e. the built-in change storage
|
||||
# class restore item action plugin)
|
||||
velero.io/plugin-config: ""
|
||||
# this label identifies the name and kind of plugin
|
||||
# that this ConfigMap is for.
|
||||
velero.io/change-storage-class: RestoreItemAction
|
||||
data:
|
||||
# add 1+ key-value pairs here, where the key is the old
|
||||
# storage class name and the value is the new storage
|
||||
# class name.
|
||||
<old-storage-class>: <new-storage-class>
|
||||
```
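
If you prefer to create this ConfigMap imperatively, a sketch like the following should be equivalent; the ConfigMap name matches the sample above and the storage class names are placeholders:

```bash
kubectl -n velero create configmap change-storage-class-config \
  --from-literal=<old-storage-class>=<new-storage-class>

kubectl -n velero label configmap change-storage-class-config \
  velero.io/plugin-config="" \
  velero.io/change-storage-class=RestoreItemAction
```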
|
|
@ -0,0 +1,49 @@
|
|||
# Run Velero locally in development
|
||||
|
||||
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
|
||||
image and redeploy it to the cluster with each change.
|
||||
|
||||
## Run Velero locally with a remote cluster
|
||||
|
||||
Velero runs against the Kubernetes API server as the endpoint (as per the `kubeconfig` configuration), so both the Velero server and client use the same `client-go` to communicate with Kubernetes. This means the Velero server can be run locally just as functionally as if it were running in the remote cluster.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
When running Velero, you will need to ensure that you set up all of the following:
|
||||
|
||||
* Appropriate RBAC permissions in the cluster
|
||||
* Read access for all data from the source cluster and namespaces
|
||||
* Write access to the target cluster and namespaces
|
||||
* Cloud provider credentials
|
||||
* Read/write access to volumes
|
||||
* Read/write access to object storage for backup data
|
||||
* A [BackupStorageLocation][20] object definition for the Velero server
|
||||
* (Optional) A [VolumeSnapshotLocation][21] object definition for the Velero server, to take PV snapshots
|
||||
|
||||
### 1. Install Velero
|
||||
|
||||
See documentation on how to install Velero in some specific providers: [Install overview][22]
|
||||
|
||||
### 2. Scale deployment down to zero
|
||||
|
||||
After you use the `velero install` command to install Velero into your cluster, scale the Velero deployment down to 0 replicas so that the server is not simultaneously running on the remote cluster, which could cause things to get out of sync:
|
||||
|
||||
`kubectl scale --replicas=0 deployment velero -n velero`
|
||||
|
||||
#### 3. Start the Velero server locally
|
||||
|
||||
* To run the server locally, use the full path according to the binary you need. For example, if you are on a Mac and using `AWS` as a provider, this is how to run the binary you built from source using the full path: `AWS_SHARED_CREDENTIALS_FILE=<path-to-credentials-file> ./_output/bin/darwin/amd64/velero`. Alternatively, you may add the `velero` binary to your `PATH`.
|
||||
|
||||
* Start the server: `velero server [CLI flags]`. The following CLI flags may be useful to customize (a combined example follows this list), but see `velero server --help` for full details:
|
||||
* `--log-level`: set the Velero server's log level (default `info`, use `debug` for the most logging)
|
||||
* `--kubeconfig`: set the path to the kubeconfig file the Velero server uses to talk to the Kubernetes apiserver (default `$KUBECONFIG`)
|
||||
  * `--namespace`: set the namespace where the Velero server should look for backups, schedules, and restores (default `velero`)
|
||||
* `--plugin-dir`: set the directory where the Velero server looks for plugins (default `/plugins`)
|
||||
* `--metrics-address`: set the bind address and port where Prometheus metrics are exposed (default `:8085`)
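
A combined local invocation might look like the following sketch; the binary path, credentials file, kubeconfig path, and plugin directory are assumptions for illustration:

```bash
AWS_SHARED_CREDENTIALS_FILE=<path-to-credentials-file> \
  ./_output/bin/darwin/amd64/velero server \
  --kubeconfig ~/.kube/config \
  --namespace velero \
  --log-level debug \
  --plugin-dir <path-to-local-plugin-binaries>
```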
|
||||
|
||||
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
|
||||
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
|
||||
[18]: https://eksctl.io/
|
||||
[20]: api-types/backupstoragelocation.md
|
||||
[21]: api-types/volumesnapshotlocation.md
|
||||
[22]: install-overview.md
|
|
@ -0,0 +1,21 @@
|
|||
## Start contributing
|
||||
|
||||
### Before you start
|
||||
|
||||
* Please familiarize yourself with the [Code of Conduct][1] before contributing.
|
||||
* Also, see [CONTRIBUTING.md][2] for instructions on the developer certificate of origin that we require.
|
||||
|
||||
### Finding your way around
|
||||
|
||||
You may join the Velero community and contribute in many different ways, including helping us design or test new features. For any significant feature we consider adding, we start with a design document. You can find a list of designs currently in progress here: https://github.com/vmware-tanzu/velero/pulls?q=is%3Aopen+is%3Apr+label%3ADesign. Feel free to review and help us with your input.
|
||||
|
||||
For information on how to connect with our maintainers and community, join our online meetings, or find good first issues, start on our [Velero community](https://velero.io/community/) page.
|
||||
|
||||
Please browse our list of resources, including a playlist of past online community meetings, blog posts, and other resources to help you get familiar with our project: [Velero resources](https://velero.io/resources/).
|
||||
|
||||
### Contributing
|
||||
|
||||
If you are ready to jump in and test, add code, or help with documentation, please use the navigation on the left under `Contribute`.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/CODE_OF_CONDUCT.md
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/CONTRIBUTING.md
|
|
@ -0,0 +1,77 @@
|
|||
# Providers
|
||||
|
||||
Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
|
||||
|
||||
## Velero supported providers
|
||||
|
||||
| Provider | Object Store | Volume Snapshotter | Plugin |
|
||||
|----------------------------|---------------------|------------------------------|---------------------------|
|
||||
| [AWS S3][7] | AWS S3 | AWS EBS | [Velero plugin AWS][8] |
|
||||
| [Azure Blob Storage][9] | Azure Blob Storage | Azure Managed Disks | [Velero plugin Azure][10] |
|
||||
| [Google Cloud Storage][11] | Google Cloud Storage| Google Compute Engine Disks | [Velero plugin GCP][12] |
|
||||
|
||||
Contact: [Slack][28], [GitHub Issue][29]
|
||||
|
||||
## Community supported providers
|
||||
|
||||
| Provider | Object Store | Volume Snapshotter | Plugin | Contact |
|
||||
|---------------------------|------------------------------|------------------------------------|------------------------|---------------------------------|
|
||||
| [Portworx][31] | 🚫 | Portworx Volume | [Portworx][32] | [Slack][33], [GitHub Issue][34] |
|
||||
| [DigitalOcean][15] | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | [StackPointCloud][16] | |
|
||||
| [OpenEBS][17] | 🚫 | OpenEBS CStor Volume | [OpenEBS][18] | [Slack][19], [GitHub Issue][20] |
|
||||
| [AlibabaCloud][21] | 🚫 | Alibaba Cloud | [AlibabaCloud][22] | [GitHub Issue][23] |
|
||||
| [Hewlett Packard][24] | 🚫 | HPE Storage | [Hewlett Packard][25] | [Slack][26], [GitHub Issue][27] |
|
||||
|
||||
## S3-Compatible object store providers
|
||||
|
||||
Velero's AWS Object Store plugin uses [Amazon's Go SDK][0] to connect to the AWS S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
|
||||
|
||||
_Note that these storage providers are not regularly tested by the Velero team._
|
||||
|
||||
* [IBM Cloud][1]
|
||||
* [Oracle Cloud][2]
|
||||
* [Minio][3]
|
||||
* [DigitalOcean][4]
|
||||
* [NooBaa][5]
|
||||
* Ceph RADOS v12.2.7
|
||||
* Quobyte
|
||||
|
||||
_Some storage providers, like Quobyte, may need a different [signature algorithm version][6]._
|
||||
|
||||
## Non-supported volume snapshots
|
||||
|
||||
If you want to back up volume data but didn't find a plugin for your provider, Velero supports file-level volume backups using restic. Please see the [restic integration][30] documentation.
|
||||
|
||||
[0]: https://github.com/aws/aws-sdk-go/aws
|
||||
[1]: contributions/ibm-config.md
|
||||
[2]: contributions/oracle-config.md
|
||||
[3]: contributions/minio.md
|
||||
[4]: https://github.com/StackPointCloud/ark-plugin-digitalocean
|
||||
[5]: http://www.noobaa.com/
|
||||
[6]: api-types/backupstoragelocation.md#aws
|
||||
[7]: https://aws.amazon.com/s3/
|
||||
[8]: https://github.com/vmware-tanzu/velero-plugin-for-aws
|
||||
[9]: https://azure.microsoft.com/en-us/services/storage/blobs
|
||||
[10]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure
|
||||
[11]: https://cloud.google.com/storage/
|
||||
[12]: https://github.com/vmware-tanzu/velero-plugin-for-gcp
|
||||
[15]: https://www.digitalocean.com/
|
||||
[16]: https://github.com/StackPointCloud/ark-plugin-digitalocean
|
||||
[17]: https://openebs.io/
|
||||
[18]: https://github.com/openebs/velero-plugin
|
||||
[19]: https://openebs-community.slack.com/
|
||||
[20]: https://github.com/openebs/velero-plugin/issues
|
||||
[21]: https://www.alibabacloud.com/
|
||||
[22]: https://github.com/AliyunContainerService/velero-plugin
|
||||
[23]: https://github.com/AliyunContainerService/velero-plugin/issues
|
||||
[24]: https://www.hpe.com/us/en/storage.html
|
||||
[25]: https://github.com/hpe-storage/velero-plugin
|
||||
[26]: https://slack.hpedev.io/
|
||||
[27]: https://github.com/hpe-storage/velero-plugin/issues
|
||||
[28]: https://kubernetes.slack.com/messages/velero
|
||||
[29]: https://github.com/vmware-tanzu/velero/issues
|
||||
[30]: restic.md
|
||||
[31]: https://portworx.com/
|
||||
[32]: https://docs.portworx.com/scheduler/kubernetes/ark.html
|
||||
[33]: https://portworx.slack.com/messages/px-k8s
|
||||
[34]: https://github.com/portworx/ark-plugin/issues
|
|
@ -0,0 +1,75 @@
|
|||
# Troubleshooting
|
||||
|
||||
These tips can help you troubleshoot known issues. If they don't help, you can [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
|
||||
|
||||
See also:
|
||||
|
||||
- [Debug installation/setup issues][2]
|
||||
- [Debug restores][1]
|
||||
|
||||
## General troubleshooting information
|
||||
|
||||
You can use the `velero bug` command to open a [GitHub issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available), and the `velero` client version. This information isn't submitted to GitHub until you click the `Submit new issue` button in the GitHub UI, so feel free to add, remove, or update whatever information you like.
|
||||
|
||||
Some general commands for troubleshooting that may be helpful:
|
||||
|
||||
* `velero backup describe <backupName>` - describe the details of a backup
|
||||
* `velero backup logs <backupName>` - fetch the logs for this specific backup. Useful for viewing failures and warnings, including resources that could not be backed up.
|
||||
* `velero restore describe <restoreName>` - describe the details of a restore
|
||||
* `velero restore logs <restoreName>` - fetch the logs for this specific restore. Useful for viewing failures and warnings, including resources that could not be restored.
|
||||
* `kubectl logs deployment/velero -n velero` - fetch the logs of the Velero server pod. This provides the output of the Velero server processes.
|
||||
|
||||
### Getting velero debug logs
|
||||
|
||||
You can increase the verbosity of the Velero server by editing your Velero deployment to look like this:
|
||||
|
||||
|
||||
```
|
||||
kubectl edit deployment/velero -n velero
|
||||
...
|
||||
containers:
|
||||
- name: velero
|
||||
image: velero/velero:latest
|
||||
command:
|
||||
- /velero
|
||||
args:
|
||||
- server
|
||||
- --log-level # Add this line
|
||||
- debug # Add this line
|
||||
...
|
||||
```
|
||||
|
||||
## Known issue with restoring LoadBalancer Service
|
||||
|
||||
Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.
|
||||
|
||||
Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See [the Kubernetes documentation about Services of Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
|
||||
|
||||
## Miscellaneous issues
|
||||
|
||||
### Velero reports `custom resource not found` errors when starting up.
|
||||
|
||||
Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Run `velero install` again to install any missing custom resource definitions.
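To see which Velero CRDs are currently present, you can list them by the label that `velero install` applies:

```
kubectl get crds -l component=velero
```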
|
||||
|
||||
### `velero backup logs` returns a `SignatureDoesNotMatch` error
|
||||
|
||||
Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible
|
||||
providers, such as Ceph, there may be differences between their implementation and the official S3
|
||||
API that cause errors.
|
||||
|
||||
Here are some things to verify if you receive `SignatureDoesNotMatch` errors:
|
||||
|
||||
* Make sure your S3-compatible layer is using [signature version 4][5] (such as Ceph RADOS v12.2.7)
|
||||
* For Ceph, try using a native Ceph account for credentials instead of external providers such as OpenStack Keystone
|
||||
|
||||
## Velero (or a pod it was backing up) restarted during a backup and the backup is stuck InProgress
|
||||
|
||||
Velero cannot currently resume backups that were interrupted. Backups stuck in the `InProgress` phase can be deleted with `kubectl delete backup <name> -n <velero-namespace>`.
|
||||
Backups in the `InProgress` phase have not uploaded any files to object storage.
|
||||
|
||||
|
||||
[1]: debugging-restores.md
|
||||
[2]: debugging-install.md
|
||||
[4]: https://github.com/vmware-tanzu/velero/issues
|
||||
[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
|
||||
[25]: https://kubernetes.slack.com/messages/velero
|
|
@ -0,0 +1,8 @@
|
|||
# Uninstalling Velero
|
||||
|
||||
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`:
|
||||
|
||||
```bash
|
||||
kubectl delete namespace/velero clusterrolebinding/velero
|
||||
kubectl delete crds -l component=velero
|
||||
```
|
|
@ -0,0 +1,25 @@
|
|||
# Upgrading to Velero 1.1
|
||||
|
||||
## Prerequisites
|
||||
- [Velero v1.0][1] installed.
|
||||
|
||||
Velero v1.1 only requires user intervention if Velero is running in a namespace other than `velero`.
|
||||
|
||||
## Instructions
|
||||
|
||||
### Adding VELERO_NAMESPACE environment variable to the deployment
|
||||
|
||||
Previous versions of Velero's server detected the namespace in which it was running by inspecting the container's filesystem.
|
||||
With v1.1, this is no longer the case, and the server must be made aware of the namespace it is running in with the `VELERO_NAMESPACE` environment variable.
|
||||
|
||||
`velero install` automatically writes this for new deployments, but existing installations will need to add the environment variable before upgrading.
|
||||
|
||||
You can use the following command to patch the deployment:
|
||||
|
||||
```bash
|
||||
kubectl patch deployment/velero -n <YOUR_NAMESPACE> \
|
||||
--type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/containers/0/env/0","value":{"name":"VELERO_NAMESPACE", "valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'
|
||||
```
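
You can then confirm the variable was added by listing the environment variable names on the container (namespace as above):

```bash
kubectl -n <YOUR_NAMESPACE> get deployment/velero \
  -o jsonpath='{.spec.template.spec.containers[0].env[*].name}'
```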
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero/releases/tag/v1.0.0
|
|
@ -0,0 +1,18 @@
|
|||
# Vendoring dependencies
|
||||
|
||||
## Overview
|
||||
|
||||
We are using [dep][0] to manage dependencies. You can install it by following [these
|
||||
instructions][1].
|
||||
|
||||
## Adding a new dependency
|
||||
|
||||
Run `dep ensure`. If you want to see verbose output, you can append `-v` as in
|
||||
`dep ensure -v`.
|
||||
|
||||
## Updating an existing dependency
|
||||
|
||||
Run `dep ensure -update <pkg> [<pkg> ...]` to update one or more dependencies.
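For example, to update a single dependency with verbose output (the package path is only illustrative):

```bash
dep ensure -v -update github.com/aws/aws-sdk-go
```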
|
||||
|
||||
[0]: https://github.com/golang/dep
|
||||
[1]: https://golang.github.io/dep/docs/installation.html
|
|
@ -0,0 +1,15 @@
|
|||
# ZenHub
|
||||
|
||||
As an Open Source community, it is necessary for our work, communication, and collaboration to be done in the open.
|
||||
GitHub provides a central repository for code, pull requests, issues, and documentation. When applicable, we will use Google Docs for design reviews, proposals, and other working documents.
|
||||
|
||||
While GitHub issues, milestones, and labels generally work pretty well, the Velero team has found that product planning requires some additional tooling that GitHub projects do not offer.
|
||||
|
||||
In our effort to minimize tooling while enabling product management insights, we have decided to use [ZenHub Open-Source](https://www.zenhub.com/blog/open-source/) to overlay product and project tracking on top of GitHub.
|
||||
ZenHub is a GitHub application that provides Kanban visualization, Epic tracking, fine-grained prioritization, and more. Its primary backing storage system is existing GitHub issues, along with additional metadata stored in ZenHub's database.
|
||||
|
||||
If you are a Velero user or Velero Developer, you do not _need_ to use ZenHub for your regular workflow (e.g to see open bug reports or feature requests, work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
|
||||
|
||||
## Using ZenHub
|
||||
|
||||
ZenHub can be integrated within the GitHub interface using their [Chrome or FireFox extensions](https://www.zenhub.com/extension). In addition, you can use their dedicated [web application](https://app.zenhub.com/workspace/o/vmware-tanzu/velero/boards?filterLogic=all&repos=99143276).
|