Add changelog and docs for v1.6.0-rc.1

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
pull/3637/head
Bridget McErlean 2021-03-29 08:48:07 -04:00
parent 9a9525725d
commit 5e72b87ef7
78 changed files with 6504 additions and 5 deletions

View File

@ -1,6 +1,9 @@
## Current release:
* [CHANGELOG-1.5.md][15]
## Development releases:
* [v1.6.0-rc.1][16]
## Older releases:
* [CHANGELOG-1.4.md][14]
* [CHANGELOG-1.3.md][13]
@ -18,6 +21,7 @@
* [CHANGELOG-0.3.md][1]
[16]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.6.md
[15]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.5.md
[14]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.4.md
[13]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.3.md

View File

@ -0,0 +1,72 @@
## v1.6.0-rc.1
### 2021-03-29
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.6.0-rc.1
### Container Image
`velero/velero:v1.6.0-rc.1`
### Documentation
https://velero.io/docs/v1.6.0-rc.1/
### Upgrading
https://velero.io/docs/v1.6.0-rc.1/upgrade-to-1.6/
### Highlights
* Support for per-BSL credentials
* Progress reporting for restores
* Restore API Groups by priority level
* Restic v0.12.0 upgrade
* End-to-end testing
* CLI usability improvements
### All Changes
* Add support for restic to use per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and use this path when setting provider specific environment variables for restic commands. (#3489, @zubron)
* Upgrade restic from v0.9.6 to v0.12.0. (#3528, @ashish-amarnath)
* Progress reporting added for Velero Restores (#3125, @pranavgaikwad)
* Add uninstall option for velero cli (#3399, @vadasambar)
* Add support for per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and pass this path through to Object Storage plugins via the `config` map using the `credentialsFile` key. (#3442, @zubron)
* Fixed a bug where restic volumes would not be restored when using a namespace mapping. (#3475, @zubron)
* Restore API group version by priority. Increase timeout to 3 minutes in DeploymentIsReady(...) function in the install package (#3133, @codegold79)
* Add field and cli flag to associate a credential with a BSL on BSL create|set. (#3190, @carlisia)
* Add colored output to `describe schedule/backup/restore` commands (#3275, @mike1808)
* Add CAPI Cluster and ClusterResourceSets to default restore priorities so that the capi-controller-manager does not panic on restores. (#3446, @nrb)
* Use label to select Velero deployment in plugin cmd (#3447, @codegold79)
* feat: support setting BackupStorageLocation CA certificate via `velero backup-location set --cacert` (#3167, @jenting)
* Add restic initContainer length check in pod volume restore to prevent restic plugin container disappear in runtime (#3198, @shellwedance)
* Bump versions of external snapshotter and others in order to make `go get` to succeed (#3202, @georgettica)
* Support fish shell completion (#3231, @jenting)
* Change the logging level of PV deletion timeout from Debug to Warn (#3316, @MadhavJivrajani)
* Set the BSL created at install time as the "default" (#3172, @carlisia)
* Capitalize all help messages (#3209, @jenting)
* Increased default Velero pod memory limit to 512Mi (#3234, @dsmithuchida)
* Fixed an issue where the deletion of a backup would fail if the backup tarball couldn't be downloaded from object storage. Now the tarball is only downloaded if there are associated DeleteItemAction plugins and if downloading the tarball fails, the plugins are skipped. (#2993, @zubron)
* feat: add delete sub-command for BSL (#3073, @jenting)
* 🐛 BSLs with validation disabled should be validated at least once (#3084, @ashish-amarnath)
* feat: support configures BackupStorageLocation custom resources to indicate which one is the default (#3092, @jenting)
* Added "--preserve-nodeports" flag to preserve original nodePorts when restoring. (#3095, @yusufgungor)
* Owner reference in backup when created from schedule (#3127, @matheusjuvelino)
* issue: add flag to the schedule cmd to configure the `useOwnerReferencesInBackup` option #3176 (#3182, @matheusjuvelino)
* cli: allow creating multiple instances of Velero across two different namespaces (#2886, @alaypatel07)
* Feature: It is possible to change the timezone of the container by specifying in the manifest.. env: [TZ: Zone/Country], or in the Helm Chart.. configuration: {extraEnvVars: [TZ: 'Zone/Country']} (#2944, @mickkael)
* Fix issue where bare `velero` command returned an error code. (#2947, @nrb)
* Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago)
* Fixed 'velero.io/change-pvc-node-selector' plugin to fetch configmap using label key "velero.io/change-pvc-node-selector" (#2970, @mynktl)
* Compile with Go 1.15 (#2974, @gliptak)
* Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992, @betta1)
* Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb)
* Allows the restic-wait container to exist in any order in the pod being restored. Prints a warning message in the case where the restic-wait container isn't the first container in the list of initialization containers. (#3011, @doughepi)
* Add warning to velero version cmd if the client and server versions mismatch. (#3024, @cvhariharan)
* 🐛 Use namespace and name to match PVB to Pod restore (#3051, @ashish-amarnath)
* Fixed various typos across codebase (#3057, @invidian)
* 🐛 ItemAction plugins for unresolvable types should not be run for all types (#3059, @ashish-amarnath)
* Basic end-to-end tests, generate data/backup/remove/restore/verify. Uses distributed data generator (#3060, @dsu-igeek)
* Added GitHub Workflow running Codespell for spell checking (#3064, @invidian)
* Pass annotations from schedule to backup it creates the same way it is done for labels. Add WithannotationsMap function to builder to be able to pass map instead of key/val list (#3067, @funkycode)
* Add instructions to clone repository for examples in docs (#3074, @MadhavJivrajani)
* 🏃‍♂️ update setup-kind github actions CI (#3085, @ashish-amarnath)
* Modify wrong function name to correct one. (#3106, @shellwedance)
* Add additional printer columns for Velero CRDs to allow more information to be exposed when using `kubectl get`. (#2881, @zubron)

View File

@ -15,6 +15,7 @@ params:
latest: v1.5
versions:
- main
- v1.6.0-rc.1
- v1.5
- v1.4
- v1.3.2

View File

@ -29,7 +29,7 @@ If you're not yet running at least Velero v1.5, see the following:
```bash
Client:
Version: v1.6.0-beta.1
Version: v1.6.0-rc.1
Git commit: <git SHA>
```
@ -45,12 +45,12 @@ If you're not yet running at least Velero v1.5, see the following:
```bash
kubectl set image deployment/velero \
velero=velero/velero:v1.6.0-beta.1 \
velero=velero/velero:v1.6.0-rc.1 \
--namespace velero
# optional, if using the restic daemon set
kubectl set image daemonset/restic \
restic=velero/velero:v1.6.0-beta.1 \
restic=velero/velero:v1.6.0-rc.1 \
--namespace velero
```
@ -64,11 +64,11 @@ If you're not yet running at least Velero v1.5, see the following:
```bash
Client:
Version: v1.6.0-beta.1
Version: v1.6.0-rc.1
Git commit: <git SHA>
Server:
Version: v1.6.0-beta.1
Version: v1.6.0-rc.1
```
## Notes

View File

@ -0,0 +1,58 @@
---
toc: "false"
cascade:
version: main
toc: "true"
---
![100]
[![Build Status][1]][2]
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
## Documentation
This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Contributing
If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our [Start contributing](https://velero.io/docs/v1.6.0-rc.1/start-contributing/) documentation for guidance on how to set up Velero for development.
## Changelog
See [the list of releases][6] to find out about feature changes.
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/main/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
[30]: troubleshooting.md
[100]: img/velero.png

View File

@ -0,0 +1,21 @@
---
title: "Table of Contents"
layout: docs
---
## API types
Here we list the API types that have some functionality that you can only configure via JSON/YAML rather than the `velero` CLI (for example, hooks):
* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]
[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md

View File

@ -0,0 +1,19 @@
---
layout: docs
title: API types
---
Here's a list of the API types that have some functionality that you can only configure via JSON/YAML rather than the `velero` CLI (for example, hooks):
* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]
[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md

View File

@ -0,0 +1,148 @@
---
title: "Backup API Type"
layout: docs
---
## Use
Use the `Backup` API type to request the Velero server to perform a backup. Once created, the
Velero Server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
# Backup name. May be any valid Kubernetes object name. Required.
name: a
# Backup namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the backup. Required.
spec:
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for this backup.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before this backup is eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Whether restic should be used to take a backup of all pod volumes by default.
defaultVolumesToRestic: true
# Actions to perform at different times during a backup. The only hook supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Only "exec" hooks are supported.
post:
# Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
# The version of this Backup. The only version supported is 1.
version: 1
# The date and time when the Backup is eligible for garbage collection.
expiration: null
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Date/time when the backup started being processed.
startTimestamp: 2019-04-29T15:58:43Z
# Date/time when the backup finished being processed.
completionTimestamp: 2019-04-29T15:58:56Z
# Number of volume snapshots that Velero tried to create for this backup.
volumeSnapshotsAttempted: 2
# Number of volume snapshots that Velero successfully created for this backup.
volumeSnapshotsCompleted: 1
# Number of warnings that were logged by the backup.
warnings: 2
# Number of errors that were logged by the backup.
errors: 0
```
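In practice, a full manifest like the sample above is rarely written by hand; the `velero backup create` CLI populates most of these fields. As a hedged sketch, a roughly equivalent request using only the CLI (the names and locations are the placeholders from the sample) might look like:
```bash
# Sketch only: flags mirror parts of the sample spec above; adjust to your cluster.
velero backup create a \
  --exclude-namespaces some-namespace \
  --exclude-resources storageclasses.storage.k8s.io \
  --storage-location aws-primary \
  --volume-snapshot-locations aws-primary,gcp-primary \
  --ttl 24h0m0s \
  --default-volumes-to-restic
```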

View File

@ -0,0 +1,54 @@
---
title: "Velero Backup Storage Locations"
layout: docs
---
## Backup Storage Location
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
backupSyncPeriod: 2m0s
provider: aws
objectStorage:
bucket: myBucket
credential:
name: secret-name
key: key-in-secret
config:
region: us-west-2
profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `objectStorage/caCert` | String | Optional Field | A base64 encoded CA bundle to be used when verifying TLS connections |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation](../supported-providers) for details. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
| `validationFrequency` | metav1.Duration | Optional Field | How frequently Velero should validate the object storage location. The default is the Velero server's validation frequency (1 minute); set this to `0s` to disable validation. |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}
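As a hedged illustration of the `credential` fields above, the sketch below assumes a secret named `bsl-credentials` with a key named `aws` already exists in the Velero namespace, and uses the `--credential` flag introduced in v1.6, which takes the form `<secret-name>=<key-in-secret>`:
```bash
# Sketch only: secret, key, bucket, and region names are placeholders.
kubectl -n velero create secret generic bsl-credentials \
  --from-file=aws=./credentials-secondary

velero backup-location create secondary \
  --provider aws \
  --bucket myBucket \
  --config region=us-west-2 \
  --credential=bsl-credentials=aws
```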

View File

@ -0,0 +1,168 @@
---
title: "Restore API Type"
layout: docs
---
## Use
The `Restore` API type is used as a request for the Velero server to perform a Restore. Once created, the
Velero Server immediately starts the Restore process.
## API GroupVersion
Restore belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Restore` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
# Restore name. May be any valid Kubernetes object name. Required.
name: a-very-special-backup-0000111122223333
# Restore namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the restore. Required.
spec:
# BackupName is the unique name of the Velero backup to restore from.
backupName: a-very-special-backup
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the restore. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are
# restored are those associated with namespace-scoped resources included in the restore. For example, if a
# PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
# cluster-scoped) would also be restored.
includeClusterResources: null
# Individual objects must match this label selector to be included in the restore. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# NamespaceMapping is a map of source namespace names to
# target namespace names to restore into. Any source namespaces not
# included in the map will be restored into namespaces of the same name.
namespaceMapping:
namespace-backup-from: namespace-to-restore-to
# RestorePVs specifies whether to restore all included PVs
# from snapshot (via the cloudprovider).
restorePVs: true
# ScheduleName is the unique name of the Velero schedule
# to restore from. If specified, and BackupName is empty, Velero will
# restore from the most recent successful backup created from this schedule.
scheduleName: my-scheduled-backup-name
# Actions to perform during or post restore. The only hooks currently supported are
# adding an init container to a pod before it can be restored and executing a command in a
# restored pod's container. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
# Name is the name of this hook.
- name: restore-hook-1
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- ns1
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- ns3
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
# are supported.
postHooks:
# The type of the hook. This must be "init" or "exec".
- init:
# An array of container specs to be added as init containers to pods to which this hook applies to.
initContainers:
- name: restore-hook-init1
image: alpine:latest
# Mounting volumes from the podSpec to which this hook applies.
volumeMounts:
- mountPath: /restores/pvc1-vm
# Volume name from the podSpec
name: pvc1-vm
command:
- /bin/ash
- -c
- echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
- name: restore-hook-init2
image: alpine:latest
# Mounting volumes from the podSpec to which this hook applies.
volumeMounts:
- mountPath: /restores/pvc2-vm
# Volume name from the podSpec
name: pvc2-vm
command:
- /bin/ash
- -c
- echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
- exec:
# The container name where the hook will be executed. Defaults to the first container.
# Optional.
container: foo
# The command that will be executed in the container. Required.
command:
- /bin/bash
- -c
- "psql < /backup/backup.sql"
# How long to wait for a container to become ready. This should be long enough for the
# container to start plus any preceding hooks in the same container to complete. The wait
# timeout begins when the container is restored and may require time for the image to pull
# and volumes to mount. If not set the restore will wait indefinitely. Optional.
waitTimeout: 5m
# How long to wait once execution begins. Defaults to 30 seconds. Optional.
execTimeout: 1m
# How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
# `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
# no more restore hooks will be executed in any container in any pod and the status of the
# Restore will be `PartiallyFailed`. Optional.
onError: Continue
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
status:
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Number of warnings that were logged by the restore.
warnings: 2
# Errors is a count of all error messages that were generated
# during execution of the restore. The actual errors are stored in object
# storage.
errors: 0
# FailureReason is an error that caused the entire restore
# to fail.
failureReason:
```
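Most restores are created with the CLI rather than by applying a manifest. As a hedged sketch, a request that mirrors a few of the fields above (the backup and namespace names are the placeholders from the sample) might look like:
```bash
# Sketch only; names are placeholders taken from the sample above.
velero restore create \
  --from-backup a-very-special-backup \
  --namespace-mappings namespace-backup-from:namespace-to-restore-to \
  --restore-volumes=true
```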

View File

@ -0,0 +1,135 @@
---
title: "Schedule API Type"
layout: docs
---
## Use
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup on a given cron schedule. Once created, the
Velero server will start the backup process and then wait for the next valid point of the given cron expression, executing the backup
process on a repeating basis.
## API GroupVersion
Schedule belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Schedule` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
# Schedule name. May be any valid Kubernetes object name. Required.
name: a
# Schedule namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the scheduled backup. Required.
spec:
# Schedule is a Cron expression defining when to run the Backup
schedule: 0 7 * * *
# Template is the spec that should be used for each backup triggered by this schedule.
template:
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the scheduled backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for backups under this schedule.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Only "exec" hooks are supported.
post:
# Same content as pre above.
status:
# The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# Date/time of the last backup for a given schedule
lastBackup:
# An array of any validation errors encountered.
validationErrors:
```
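As a hedged sketch, a roughly equivalent schedule created with the CLI (the schedule name is a placeholder; `velero schedule create` accepts most of the same flags as `velero backup create` for the embedded template) might look like:
```bash
# Sketch only: runs daily at 07:00 with the TTL from the sample above.
velero schedule create daily-backup \
  --schedule="0 7 * * *" \
  --exclude-namespaces some-namespace \
  --ttl 24h0m0s
```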

View File

@ -0,0 +1,40 @@
---
title: "Velero Volume Snapshot Location"
layout: docs
---
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple `VolumeSnapshotLocation`s per provider, although you can select only one location per provider at backup time.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
name: aws-default
namespace: velero
spec:
provider: aws
config:
region: us-west-2
profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation](../supported-providers) for details. |
{{< /table >}}
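As a hedged sketch, the location above can also be created with the CLI and then selected at backup time (the location and backup names are placeholders):
```bash
# Sketch only; location name, region, and profile are placeholders.
velero snapshot-location create aws-default \
  --provider aws \
  --config region=us-west-2,profile=default

# Select the location(s) to use for a particular backup.
velero backup create my-backup --volume-snapshot-locations aws-default
```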

View File

@ -0,0 +1,92 @@
---
title: "Backup Hooks"
layout: docs
---
Velero supports executing commands in containers in pods during a backup.
## Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up. The commands can be configured to run *before* any custom action
processing ("pre" hooks), or after all custom actions have been completed and any additional items
specified by custom action have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
on the containers.
There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
### Specifying Hooks As Pod Annotations
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
#### Pre hooks
* `pre.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `pre.hook.backup.velero.io/command`
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
* `pre.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
* `pre.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
#### Post hooks
* `post.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `post.hook.backup.velero.io/command`
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
* `post.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
* `post.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
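For illustration, a hedged sketch that combines the annotations above on a single pod, including the optional `on-error` and `timeout` settings (the pod, namespace, and command are hypothetical):
```bash
# Sketch only; pod, namespace, and command are placeholders.
kubectl -n my-namespace annotate pod my-pod \
  pre.hook.backup.velero.io/command='["/usr/bin/uname", "-a"]' \
  pre.hook.backup.velero.io/on-error=Continue \
  pre.hook.backup.velero.io/timeout=60s
```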
### Specifying Hooks in the Backup Spec
Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
spec.
## Hook Example with fsfreeze
This example walks you through using both pre and post hooks for freezing a file system. Freezing the
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
This example uses [examples/nginx-app/with-pv.yaml][2]. Follow the [steps for your provider][3] to
set up this example.
### Annotations
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.
```shell
kubectl annotate pod -n nginx-example -l app=nginx \
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.velero.io/container=fsfreeze \
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.velero.io/container=fsfreeze
```
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.
```shell
velero backup create nginx-hook-test
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```
## Using Multiple Commands
To use multiple commands, wrap your target commands in a shell and separate them with `;`, `&&`, or other shell conditional constructs.
```shell
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
```
[1]: api-types/backup.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/examples/nginx-app/with-pv.yaml
[3]: cloud-common.md

View File

@ -0,0 +1,21 @@
---
title: "Backup Reference"
layout: docs
---
## Exclude Specific Items from Backup
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```
## Specify the Backup Order of Resources of a Specific Kind
To back up resources of a specific kind in a specific order, use the `--ordered-resources` option to map each kind to an ordered list of specific resources of that kind. Resource names are separated by commas and are given in the format 'namespace/resourcename'; for cluster-scoped resources, simply use the resource name. Key-value pairs in the mapping are separated by semicolons, and kind names are plural.
```bash
velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1
velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1
```

View File

@ -0,0 +1,73 @@
---
title: "Basic Install"
layout: docs
---
Use this doc to get a basic installation of Velero.
Refer to [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
Velero supports storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][2].
### Velero on Windows
Velero does not officially support Windows. In testing, the Velero team was able to back up stateless Windows applications only. The restic integration and backups of stateful applications or PersistentVolumes were not supported.
If you want to perform your own testing of Velero on Windows, you must deploy Velero as a Windows container. Velero does not provide official Windows images, but it's possible for you to build your own Velero Windows container image to use. Note that you must build this image on a Windows node.
## Install the CLI
### Option 1: macOS - Homebrew
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Option 2: GitHub release
1. Download the [latest release][1]'s tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
```
1. Move the extracted `velero` binary to somewhere in your `$PATH` (`/usr/local/bin` for most users).
### Option 3: Windows - Chocolatey
On Windows, you can use [Chocolatey](https://chocolatey.org/install) to install the [velero](https://chocolatey.org/packages/velero) client:
```powershell
choco install velero
```
## Install and configure the server components
There are two supported methods for installing the Velero server components:
- the `velero install` CLI command
- the [Helm chart](https://vmware-tanzu.github.io/helm-charts/)
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our [supported storage providers][0] page.
_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider][4]._
## Command line Autocompletion
Please refer to [this part of the documentation][5].
[0]: supported-providers.md
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations

View File

@ -0,0 +1,200 @@
---
title: "Build from source"
layout: docs
---
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Get the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/vmware-tanzu/velero
```
Here, `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
When you build by using `make`, the binaries are placed under `_output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin at `_output/bin/darwin/amd64/velero` and the binary for linux at `_output/bin/linux/amd64/velero`. `make` also splices in version and git commit information so that `velero version` displays proper output.
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to override which image gets deployed, use the `--image` flag (see below for instructions on how to build images).
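For illustration, a hedged sketch of such an install (the image, plugin, bucket, and credentials file are placeholders to adapt to your environment):
```bash
# Sketch only; image and provider settings are placeholders.
velero install \
  --image myimagerepo/velero:main \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero
```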
### Build the binary
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
```bash
go build ./cmd/velero
```
```bash
make local
```
### Cross compiling
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
```bash
make build
```
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for the Mac, run `make build-darwin-amd64`.
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
* linux-arm64
* linux-ppc64le
* darwin-amd64
* windows-amd64
## Making images and updating Velero
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
```bash
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
```
To build a Velero container image, you need to configure `buildx` first.
### Buildx
Docker Buildx is a CLI plugin that extends the docker command with the full support of the features provided by Moby BuildKit builder toolkit. It provides the same user experience as docker build with many new features like creating scoped builder instances and building against multiple nodes concurrently.
More information in the [docker docs][23] and in the [buildx github][24] repo.
### Image building
Set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:main` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `velero`.
Optionally, set the `$VERSION` environment variable to change the image tag or `$BIN` to change which binary to build a container image for. Then, run:
```bash
make container
```
_Note: To build container images for both `velero` and `velero-restic-restore-helper`, run `make all-containers`._
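For example, a hedged invocation of `make container` (the registry and version shown are placeholders) that builds and tags an image locally might be:
```bash
# Sketch only; with the default output type this builds the image locally.
REGISTRY=gcr.io/my-registry VERSION=v1.6.0-rc.1 make container
```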
### Publishing container images to a registry
To publish container images to a registry, the following one-time setup is necessary:
1. If you are building cross-platform container images
```bash
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
1. Create and bootstrap a new docker buildx builder
```bash
$ docker buildx create --use --name builder
builder
$ docker buildx inspect --bootstrap
[+] Building 2.6s (1/1) FINISHED
=> [internal] booting buildkit 2.6s
=> => pulling image moby/buildkit:buildx-stable-1 1.9s
=> => creating container buildx_buildkit_builder0 0.7s
Name: builder
Driver: docker-container
Nodes:
Name: builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
NOTE: Without the above setup, the output of `docker buildx inspect --bootstrap` will be:
```bash
$ docker buildx inspect --bootstrap
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
And `REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container` will fail with the following error:
```bash
$ REGISTRY=ashishamarnath BUILDX_PLATFORMS=linux/arm64 BUILDX_OUTPUT_TYPE=registry make container
auto-push is currently not implemented for docker driver
make: *** [container] Error 1
```
Having completed the above one-time setup, the output of `docker buildx inspect --bootstrap` should now look like:
```bash
$ docker buildx inspect --bootstrap
Name: builder
Driver: docker-container
Nodes:
Name: builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v
```
Now build and push the container image by running the `make container` command with `$BUILDX_OUTPUT_TYPE` set to `registry`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container
```
### Cross platform building
Docker `buildx` platforms supported:
* `linux/amd64`
* `linux/arm64`
* `linux/arm/v7`
* `linux/ppc64le`
For any specific platform, run `BUILDX_PLATFORMS=<GOOS>/<GOARCH> make container`
For example, to build an image for arm64, run:
```bash
BUILDX_PLATFORMS=linux/arm64 make container
```
_Note: By default, `$BUILDX_PLATFORMS` is set to `linux/amd64`_
With `buildx`, you can also build all supported platforms at the same time and push a multi-arch image to the registry. For example:
```bash
REGISTRY=myrepo VERSION=foo BUILDX_PLATFORMS=linux/amd64,linux/arm64,linux/arm/v7,linux/ppc64le BUILDX_OUTPUT_TYPE=registry make all-containers
```
_Note: when building for more than 1 platform at the same time, you need to set `BUILDX_OUTPUT_TYPE` to `registry` as local multi-arch images are not supported [yet][25]._
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod:
```bash
kubectl -n velero delete pods -l deploy=velero
```
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[22]: https://github.com/vmware-tanzu/velero/releases
[23]: https://docs.docker.com/buildx/working-with-buildx/
[24]: https://github.com/docker/buildx
[25]: https://github.com/moby/moby/pull/38738

View File

@ -0,0 +1,151 @@
---
title: "Code Standards"
layout: docs
toc: "true"
---
## Opening PRs
When opening a pull request, please fill out the checklist supplied in the template. This will help others properly categorize and review your pull request.
## Adding a changelog
Authors are expected to include a changelog file with their pull requests. The changelog file
should be a new file created in the `changelogs/unreleased` folder. The file should follow the
naming convention of `pr-username` and the contents of the file should be your text for the
changelog.
velero/changelogs/unreleased <- folder
000-username <- file
Add that to the PR.
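A hedged sketch of adding such a file (the PR number, username, and text are placeholders):
```bash
# Sketch only; replace 1234 and username with your PR number and GitHub handle.
echo "Fixed a bug where ..." > changelogs/unreleased/1234-username
git add changelogs/unreleased/1234-username
```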
If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR.
## Copyright header
Whenever a source code file is being modified, the copyright notice should be updated to our standard copyright notice. That is, it should read “Copyright [insert current year] the Velero contributors.”
For new files, the entire copyright and license header must be added.
Please note that doc files do not need a copyright header.
## Code
- Log messages are capitalized.
- Error messages are kept lower-cased.
- Wrap/add a stack only to errors that are being directly returned from non-velero code, such as an API call to the Kubernetes server.
```bash
errors.WithStack(err)
```
- Prefer to use the utilities in the Kubernetes package [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets).
```bash
k8s.io/apimachinery/pkg/util/sets
```
## Imports
For imports, we use the following convention:
`<group><version><api | client | informer | ...>`
Example:
import (
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
corev1listers "k8s.io/client-go/listers/core/v1"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
)
## Mocks
We use a package to generate mocks for our interfaces.
Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/pkg/restic/mocks/restorer.go
Run:
```bash
go get github.com/vektra/mockery/.../
cd pkg/restic
mockery -name=Restorer
```
You might need to run `make update` to update the imports.
## Kubernetes Labels
When generating label values, be sure to pass them through the `label.GetValidName()` helper function.
This will help ensure that the values are the proper length and format to be stored and queried.
In general, UIDs are safe to persist as label values.
This function is not relevant to annotation values, which do not have restrictions.
## DCO Sign off
All authors to the project retain copyright to their work. However, to ensure
that they are only submitting work that they have rights to, we are requiring
everyone to acknowledge this by signing their work.
Any copyright notices in this repo should specify the authors as "the Velero contributors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Joe Beda <joe@heptio.com>
```
This can easily be done with the `--signoff` option to `git commit`.
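For example (the commit message is a placeholder):
```bash
# Appends the Signed-off-by trailer using your configured git user name and email.
git commit --signoff -m "Fix a bug in the restore controller"
```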
By doing this you state that you can certify the following (from https://developercertificate.org/):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```

View File

@ -0,0 +1,100 @@
---
title: "Use IBM Cloud Object Storage as Velero's storage destination."
layout: docs
---
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and still use IBM Cloud Object Storage as a destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Velero server
## Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
The directory you extracted is called the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create COS instance
If you don't have a COS instance, you can create a new one by following the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket.
The process of creating service credentials is described in [Service credentials][3].
Several comments:
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
2. Velero uses an AWS S3 compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. Use them in the next step.
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
```
Where the access key ID and secret access key are the values that you obtained above.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
```
Velero does not have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify `--use-restic` to enable [restic support][16], and `--wait` to wait for the deployment to be ready.
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
```bash
kubectl -n velero delete volumesnapshotlocation.velero.io default
```
For more complex installation needs, use either the Helm chart, or add the `--dry-run -o yaml` options to generate the YAML representation of the installation.
## Installing the nginx example (optional)
If you run the nginx example, in file `examples/nginx-app/with-pv.yaml`:
Uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace with your `StorageClass` name.
[0]: namespace.md
[1]: https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#creating-a-new-resource-instance
[2]: https://console.bluemix.net/docs/services/cloud-object-storage/getting-started.html#create-buckets
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: customize-installation.md#customize-resource-requests-and-limits
[16]: restic.md


View File

@ -0,0 +1,292 @@
---
title: "Quick start evaluation install with Minio"
layout: docs
---
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
* A DNS server on the cluster
* `kubectl` installed
* Sufficient disk space to store backups in Minio. You will need sufficient disk space available to handle any
backups plus at least 1GB additional. Minio will not operate if less than 1GB of free disk space is available.
## Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
The directory you extracted is called the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
### MacOS Installation
On Mac, you can use [HomeBrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
## Set up server
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][1] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
1. Start the server and the local storage service. In the Velero directory, run:
```
kubectl apply -f examples/minio/00-minio-deployment.yaml
```
_Note_: The example Minio yaml provided uses "empty dir". Your node needs to have enough space available to store the data being backed up plus 1GB of free space. If the node does not have enough space, you can modify the example yaml to use a Persistent Volume instead of "empty dir".
Then install Velero into the cluster and start the deployment:
```
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`).
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
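For example, a sketch of the same install with both of those flags added (all other values unchanged from the command above):
```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --use-restic \
    --wait \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```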
1. Deploy the example nginx application:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
## Back up
1. Create a backup for any object that matches the `app=nginx` label selector:
```
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
1. Simulate a disaster:
```
kubectl delete namespace nginx-example
```
1. To check that the nginx deployment and service are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
You should get no results.
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
## Restore
1. Run:
```
velero restore create --from-backup nginx-backup
```
1. Run:
```
velero restore get
```
After the restore finishes, the output looks like the following:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, you can look at them in detail:
```
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
## Clean up
If you want to delete any backups you created, including data in object storage and persistent
volume snapshots, you can run:
```
velero backup delete BACKUP_NAME
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run:
```
velero backup get BACKUP_NAME
```
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
kubectl delete -f examples/nginx-app/base.yaml
```
## Expose Minio outside your cluster with a Service
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
1. Get the Minio URL:
- if you're running Minikube:
```shell
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
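If you prefer not to edit the YAML by hand, the same field can be set with a patch. This is a sketch; the address and NodePort placeholders come from the previous steps:
```bash
# Set the publicUrl used for pre-signed download URLs
kubectl patch -n velero backupstoragelocation default --type merge \
  -p '{"spec":{"config":{"publicUrl":"http://<NODE_ADDRESS>:<NODE_PORT>"}}}'
```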
## Accessing logs with an HTTPS endpoint
If you're using Minio with HTTPS, you may see unintelligible text in the output of `velero describe`, or `velero logs` commands.
To fix this, you can add a public URL to the `BackupStorageLocation`.
In a terminal, run the following:
```shell
kubectl patch -n velero backupstoragelocation default --type merge -p '{"spec":{"config":{"publicUrl":"https://<a public IP for your Minio instance>:9000"}}}'
```
If your certificate is self-signed, see the [documentation on self-signed certificates][32].
## Expose Minio outside your cluster with Kubernetes in Docker (KinD)
Kubernetes in Docker does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
In a terminal, run the following:
```shell
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $MINIO_POD -n velero 9000:9000
```
Then, in another terminal:
```shell
kubectl edit backupstoragelocation default -n velero
```
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
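Equivalently, instead of an interactive edit, the field can be set with a single patch (a sketch, assuming the backup storage location is still named `default`):
```bash
kubectl patch -n velero backupstoragelocation default --type merge \
  -p '{"spec":{"config":{"publicUrl":"http://localhost:9000"}}}'
```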
### Work with Ingress
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: #expose-minio-with-service-of-type-nodeport
[3]: ../customize-installation.md
[17]: ../restic.md
[18]: ../debugging-restores.md
[26]: https://github.com/vmware-tanzu/velero/releases
[30]: https://godoc.org/github.com/robfig/cron
[32]: ../self-signed-certificates.md

View File

@ -0,0 +1,248 @@
---
title: "Use Oracle Cloud as a Backup Storage Provider for Velero"
layout: docs
---
## Introduction
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
1. [Download Velero](#download-velero)
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
4. [Install Velero](#install-velero)
5. [Clean Up](#clean-up)
6. [Examples](#examples)
7. [Additional Reading](#additional-reading)
## Download Velero
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
```
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
```
**NOTE:** It's strongly recommended that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/bin/velero:$PATH`
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
```
$ velero
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
```
## Create A Customer Secret Key
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
2. Create a Velero credentials file with your Customer Secret Key:
```
$ vi credentials-velero
[default]
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
```
## Create An Oracle Object Storage Bucket
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
## Install Velero
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
```
velero install \
--provider [provider name] \
--bucket [bucket name] \
--prefix [tenancy name] \
--use-volume-snapshots=false \
--secret-file [secret file location] \
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
```
- `--provider` This example uses the S3-compatible API, so use `aws` as the provider.
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
- `--use-volume-snapshots=false` Velero does not have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
- `--secret-file` The path to your `credentials-velero` file.
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
For example:
```
velero install \
--provider aws \
--bucket velero \
--prefix oracle-cloudnative \
--use-volume-snapshots=false \
--secret-file /Users/mboxell/bin/velero/credentials-velero \
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
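For example, a sketch of inspecting the pods and server logs, assuming the default `velero` namespace and deployment name:
```bash
# List the Velero pods and check their status
kubectl -n velero get pods

# Tail the Velero server logs
kubectl -n velero logs deployment/velero
```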
## Clean Up
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment and delete the CRDs, run:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
This will remove all resources created by `velero install`.
## Examples
After creating the Velero server in your cluster, try this example:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
```
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
You can see the created resources by running `kubectl get all`
```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2 2 2 2 21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
```
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
```
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
```
At this point you can navigate to the appropriate bucket, called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
```
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
```
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
This should return: `No resources found.`
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
```
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20190604102710" submitted successfully.
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
```
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
```
$ velero restore get
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `InProgress`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
```
$ kubectl delete -f examples/nginx-app/base.yaml
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted
```
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
```
$ velero backup delete nginx-backup
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
```
$ velero backup get nginx-backup
An error occurred: backups.velero.io "nginx-backup" not found
```
```
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.6.0-rc.1/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)

View File

@ -0,0 +1,168 @@
---
title: "Use Tencent Cloud Object Storage as Velero's storage destination."
layout: docs
---
You can deploy Velero on Tencent [TKE](https://cloud.tencent.com/document/product/457), or any other Kubernetes cluster, and use Tencent Cloud Object Storage as a destination for Velero's backups.
## Prerequisites
- A registered [Tencent Cloud account](https://cloud.tencent.com/register).
- The [Tencent Cloud COS](https://console.cloud.tencent.com/cos) service (referred to as COS below) has been activated.
- A Kubernetes cluster, version v1.10 or later, with working DNS and Internet access. If you need to create a TKE cluster, refer to the Tencent [create a cluster](https://cloud.tencent.com/document/product/457/32189) documentation.
## Create a Tencent Cloud COS bucket
In the Tencent Cloud COS console, create an object storage bucket for Velero to store backups in. For instructions, see the Tencent Cloud COS [Create a bucket](https://cloud.tencent.com/document/product/436/13309) documentation.
Configure access to the bucket through the object storage console: Velero needs to both **read** and **write** the bucket, so grant the account data-read and data-write permissions. For details, see the Tencent [permission access settings](https://cloud.tencent.com/document/product/436/13315.E5.8D.95.E4.B8.AA.E6.8E.88.E6.9D.83) instructions.
## Get bucket access credentials
Velero uses an AWS S3-compatible API to access Tencent Cloud COS, which requires authentication with a signature created from an access key ID and secret key pair.
In the S3 API, the `access_key_id` field is the access key ID and the `secret_access_key` field is the secret key.
In the [Tencent Cloud Access Management Console](https://console.cloud.tencent.com/cam/capi), create and retrieve the Tencent Cloud keys `SecretId` and `SecretKey` for the account authorized to access COS. **The `SecretId` value corresponds to the S3 API `access_key_id` field, and the `SecretKey` value corresponds to the S3 API `secret_access_key` field.**
Based on this correspondence, create the credentials file `credentials-velero` required by Velero in your local directory:
```bash
[default]
aws_access_key_id=<SecretId>
aws_secret_access_key=<SecretKey>
```
## Install Velero Resources
You need to install the Velero CLI first; see [Install the CLI](https://velero.io/docs/v1.5/basic-install/#install-the-cli) for instructions.
Then run the Velero installation command below to create the velero and restic workloads and other necessary resources.
```bash
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.1.0 --bucket <BucketName> \
--secret-file ./credentials-velero \
--use-restic \
--default-volumes-to-restic \
--backup-location-config \
region=ap-guangzhou,s3ForcePathStyle="true",s3Url=https://cos.ap-guangzhou.myqcloud.com
```
Description of the parameters:
- `--provider`: Declares the provider type. Velero uses the AWS S3-compatible API, so the value is `aws`.
- `--plugins`: Use the AWS S3-compatible API plugin `velero-plugin-for-aws`.
- `--bucket`: The name of the bucket created in Tencent Cloud COS.
- `--secret-file`: The path to the COS access credentials, i.e. the `credentials-velero` file created above.
- `--use-restic`: Back up and restore persistent volume data using the open source backup tool [restic](https://github.com/restic/restic). Note that `hostPath` volumes are not supported (see the [restic limitations](https://velero.io/docs/v1.5/restic/#limitations) for details). This integration complements Velero's backup capabilities and is recommended.
- `--default-volumes-to-restic`: Use restic to back up all pod volumes by default. This requires the `--use-restic` flag.
- `--backup-location-config`: Configuration for accessing the backup bucket:
`region`: The Tencent Cloud COS bucket region. For example, if the bucket was created in Guangzhou, the value is `ap-guangzhou`.
`s3ForcePathStyle`: Use S3 path-style addressing.
`s3Url`: The Tencent Cloud COS-compatible S3 API endpoint. Note that this is not the bucket's public access domain; the URL must have the form `https://cos.<region>.myqcloud.com`. For example, for the Guangzhou region the value is `https://cos.ap-guangzhou.myqcloud.com`.
There are other installation parameters; view them with `velero install --help`. For example, if you do not want to back up volume data, you can set `--use-volume-snapshots=false` to disable volume snapshot backups.
After executing the installation commands above, the installation process looks like this:
{{< figure src="/docs/main/contributions/img-for-tencent/9015313121ed7987558c88081b052574.png" width="100%">}}
After the installation command completes, wait for the velero and restic workloads to be ready, then check whether the configured storage location is available.
Run the `velero backup-location get` command to view the storage location status; if it displays "Available", access to Tencent Cloud COS is working, as shown in the following image:
{{< figure src="/docs/main/contributions/img-for-tencent/69194157ccd5e377d1e7d914fd8c0336.png" width="100%">}}
At this point, the installation using Tencent Cloud COS as Velero's storage location is complete. If you need more information about installing Velero, see the official [Velero documentation](https://velero.io/docs/).
## Velero backup and restore example
In the cluster, use Helm to create a MinIO test service with a persistent volume; the installation method can be found in the [MinIO chart](https://github.com/minio/charts). In this example, a load balancer is bound to the MinIO service so that the management page can be accessed in a browser via a public address.
{{< figure src="/docs/main/contributions/img-for-tencent/f0fff5228527edc72d6e71a50d5dc966.png" width="100%">}}
Sign in to the minio web management page and upload some image data for the test, as shown below:
{{< figure src="/docs/main/contributions/img-for-tencent/e932223585c0b19891cc085ad7f438e1.png" width="100%">}}
With Velero backups, you can back up all objects in the cluster directly, or filter objects by type, namespace, and/or label. This example uses the following command to back up all resources in the namespace containing the MinIO test service:
```
velero backup create default-backup --include-namespaces <Namespace>
```
Use the `velero backup get` command to check whether the backup task is complete. When the backup status is "Completed" and there are no errors, the backup has finished successfully, as shown below:
{{< figure src="/docs/main/contributions/img-for-tencent/eb2bbabae48b188748f5278bedf177f1.png" width="100%">}}
At this point, delete all of MinIO's resources, including its persistent volume claim, as shown below:
{{< figure src="/docs/main/contributions/img-for-tencent/15ccaacf00640a04ae29ceed4c86195b.png" width="100%">}}
After deleting the MinIO resources, use your backup to restore them. First, temporarily switch the backup storage location to read-only mode; this prevents backup objects from being created or deleted in the backup storage location during the restore process:
```bash
kubectl patch backupstoragelocation default --namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
The access mode of Velero's storage location is now "ReadOnly", as shown in the following image:
{{< figure src="/docs/main/contributions/img-for-tencent/e8c2ab4e5e31d1370c62fad25059a8a8.png" width="100%">}}
Now use the backup "default-backup" that Velero just created to create the restore task:
```bash
velero restore create --from-backup <BackupObject>
```
You can also use `velero restore get` to see the status of the restore task, and if the restore status is "Completed," the restore task is complete, as shown in the following image:
{{< figure src="/docs/main/contributions/img-for-tencent/effe8a0a7ce3aa8e422db00bfdddc375.png" width="100%">}}
When the restore is complete, you can see that the previously deleted minio-related resources have been restored successfully, as shown in the following image:
{{< figure src="/docs/main/contributions/img-for-tencent/1d53b0115644d43657c2a5ece805c9b4.png" width="100%">}}
Log in to MinIO's management page in your browser and you can see that the previously uploaded image data is still there, indicating that the persistent volume's data was successfully restored, as shown below:
{{< figure src="/docs/main/contributions/img-for-tencent/ceaca9ce6bc92bdce987c63d2fe71561.png" width="100%">}}
When the restore is complete, don't forget to switch the backup storage location back to read-write mode so that subsequent backup tasks can run successfully:
```bash
kubectl patch backupstoragelocation default --namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
```
## Uninstall Velero Resources
To uninstall the Velero resources in a cluster, run the following commands:
```bash
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
## Additional Reading
- [Official Velero Documentation](https://velero.io/docs/)
- [Tencent Cloud Documentation](https://cloud.tencent.com/document/product)

View File

@ -0,0 +1,77 @@
---
title: "Container Storage Interface Snapshot Support in Velero"
layout: docs
---
_This feature is under development. Documentation may not be up-to-date and features may not work as expected._
Integrating Container Storage Interface (CSI) snapshot support into Velero enables Velero to backup and restore CSI-backed volumes using the [Kubernetes CSI Snapshot Beta APIs](https://kubernetes.io/docs/concepts/storage/volume-snapshots/).
By supporting CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a Velero-specific plugin to be available.
## Prerequisites
The following are the prerequisites for using Velero to take Container Storage Interface (CSI) snapshots:
1. The cluster is Kubernetes version 1.17 or greater.
1. The cluster is running a CSI driver capable of supporting volume snapshots at the [v1beta1 API level](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/).
1. When restoring CSI VolumeSnapshots across clusters, the name of the CSI driver in the destination cluster must be the same as on the source cluster to ensure cross-cluster portability of CSI VolumeSnapshots.
## Installing Velero with CSI support
Ensure that the Velero server is running with the `EnableCSI` feature flag. See [Enabling Features][1] for more information.
Also, the Velero [CSI plugin][2] ([Docker Hub][3]) is necessary to integrate with the CSI volume snapshot APIs.
Both of these can be added with the `velero install` command.
```bash
velero install \
--features=EnableCSI \
--plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.1.0 \
...
```
To include the status of CSI objects associated with a Velero backup in `velero backup describe` output, run `velero client config set features=EnableCSI`.
See [Enabling Features][1] for more information about managing client-side feature flags.
## Implementation Choices
This section documents some of the choices made during implementation of the Velero [CSI plugin][2]:
1. VolumeSnapshots created by the plugin will be retained only for the lifetime of the backup, even if the `DeletionPolicy` on the VolumeSnapshotClass is set to `Retain`. To accomplish this, during backup deletion and prior to deleting the VolumeSnapshot, the VolumeSnapshotContent object is patched to set its `DeletionPolicy` to `Delete`. Deleting the VolumeSnapshot object then cascades to delete the VolumeSnapshotContent and the snapshot in the storage provider.
1. VolumeSnapshotContent objects created during a Velero backup that are dangling (not bound to a VolumeSnapshot object) are also discovered through labels and deleted on backup deletion.
1. To back up CSI-backed PVCs, the Velero CSI plugin will choose the VolumeSnapshotClass in the cluster that has the same driver name and also has the `velero.io/csi-volumesnapshot-class` label set on it, for example (a labeling sketch follows this list):
```yaml
velero.io/csi-volumesnapshot-class: "true"
```
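A minimal sketch of applying that label to an existing VolumeSnapshotClass with `kubectl`; the class name is a placeholder:
```bash
# Label the VolumeSnapshotClass so the Velero CSI plugin selects it
kubectl label volumesnapshotclass <VOLUME_SNAPSHOT_CLASS_NAME> \
  velero.io/csi-volumesnapshot-class="true"
```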
## Roadmap
Velero's support level for CSI volume snapshotting will follow upstream Kubernetes support for the feature, and will reach general availability sometime
after volume snapshotting is GA in upstream Kubernetes. Beta support is expected to launch in Velero v1.4.
## How it Works - Overview
Velero's CSI support does not rely on the Velero VolumeSnapshotter plugin interface.
Instead, Velero uses a collection of BackupItemAction plugins that act first against PersistentVolumeClaims.
When this BackupItemAction sees PersistentVolumeClaims pointing to a PersistentVolume backed by a CSI driver, it will choose the VolumeSnapshotClass with the same driver name that has the `velero.io/csi-volumesnapshot-class` label to create a CSI VolumeSnapshot object with the PersistentVolumeClaim as a source.
This VolumeSnapshot object resides in the same namespace as the PersistentVolumeClaim that was used as a source.
From there, the CSI external-snapshotter controller will see the VolumeSnapshot and create a VolumeSnapshotContent object, a cluster-scoped resource that will point to the actual, disk-based snapshot in the storage system.
The external-snapshotter plugin will call the CSI driver's snapshot method, and the driver will call the storage system's APIs to generate the snapshot.
Once an ID is generated and the storage system marks the snapshot as usable for restore, the VolumeSnapshotContent object will be updated with a `status.snapshotHandle` and the `status.readyToUse` field will be set.
Velero will include the generated VolumeSnapshot and VolumeSnapshotContent objects in the backup tarball, as well as upload all VolumeSnapshots and VolumeSnapshotContents objects in a JSON file to the object storage system.
When Velero synchronizes backups into a new cluster, VolumeSnapshotContent objects will be synced into the cluster as well, so that Velero can manage backup expiration appropriately.
The `DeletionPolicy` on the VolumeSnapshotContent will be the same as the `DeletionPolicy` on the VolumeSnapshotClass that was used to create the VolumeSnapshot. Setting a `DeletionPolicy` of `Retain` on the VolumeSnapshotClass will preserve the volume snapshot in the storage system for the lifetime of the Velero backup and will prevent the deletion of the volume snapshot, in the storage system, in the event of a disaster where the namespace with the VolumeSnapshot object may be lost.
When the Velero backup expires, the VolumeSnapshot objects will be deleted and the VolumeSnapshotContent objects will be updated to have a `DeletionPolicy` of `Delete`, to free space on the storage system.
For more details on how each plugin works, see the [CSI plugin repo][2]'s documentation.
[1]: customize-installation.md#enable-server-side-features
[2]: https://github.com/vmware-tanzu/velero-plugin-for-csi/
[3]: https://hub.docker.com/repository/docker/velero/velero-plugin-for-csi

View File

@ -0,0 +1,117 @@
---
title: "Plugins"
layout: docs
---
Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
Multiple plugins, of any type, can be implemented in this binary.
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
## Plugin Naming
A plugin is identified by a prefix + name.
**Note: Please don't use `velero.io` as the prefix for a plugin not supported by the Velero team.** The prefix should help users identify the entity developing the plugin, so please use a prefix that identifies you or your organization.
Whenever you define a Backup Storage Location or Volume Snapshot Location, this full name will be the value for the `provider` specification.
For example: `oracle.io/oracle`.
```
apiVersion: velero.io/v1
kind: BackupStorageLocation
spec:
provider: oracle.io/oracle
```
```
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
spec:
provider: oracle.io/oracle
```
When naming your plugin, keep in mind that the full name needs to conform to these rules:
- have two parts, prefix + name, separated by '/'
- none of the above parts can be empty
- the prefix is a valid DNS subdomain name
- a plugin with the same prefix + name cannot already exist
### Some examples:
```
- example.io/azure
- 1.2.3.4/5678
- example-with-dash.io/azure
```
You will need to give your plugin(s) the full name when registering them by calling the appropriate `RegisterX` function: <https://github.com/vmware-tanzu/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
## Plugin Kinds
Velero supports the following kinds of plugins:
- **Object Store** - persists and retrieves backups, backup logs and restore logs
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
- **Delete Item Action** - executes arbitrary logic based on individual items within a backup prior to deleting the backup
## Plugin Logging
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
## Plugin Configuration
Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime,
define a ConfigMap like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: my-plugin-config
# must be in the namespace where the velero deployment
# is running
namespace: velero
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (the built-in change storageclass
# restore item action plugin)
velero.io/plugin-config: ""
# add a label whose key corresponds to the fully-qualified
# plugin name (for example mydomain.io/my-plugin-name), and whose
# value is the plugin type (BackupItemAction, RestoreItemAction,
# ObjectStore, or VolumeSnapshotter)
<fully-qualified-plugin-name>: <plugin-type>
data:
# add your configuration data here as key-value pairs
```
Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration. See the [restic restore action][3]
for an example of this -- in particular, the `getPluginConfig(...)` function.
## Feature Flags
Velero will pass any known features flags as a comma-separated list of strings to the `--features` argument.
Once parsed into a `[]string`, the features can then be registered using the `NewFeatureFlagSet` function and queried with `features.Enabled(<featureName>)`.
## Environment Variables
Velero adds `LD_LIBRARY_PATH` to the list of environment variables for the convenience of plugins that require C libraries/extensions at runtime.
[1]: https://github.com/vmware-tanzu/velero-plugin-example
[2]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/pkg/plugin/logger.go
[3]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/pkg/restore/restic_restore_action.go

View File

@ -0,0 +1,368 @@
---
title: "Customize Velero Install"
layout: docs
---
## Plugins
During install, Velero requires that at least one plugin is added (with the `--plugins` flag). Please see the documentation under [Plugins](overview-plugins.md).
## Install in any namespace
Velero is installed in the `velero` namespace by default. However, you can install Velero in any namespace. See [run in custom namespace][2] for details.
## Use non-file-based identity mechanisms
By default, `velero install` expects a credentials file for your `velero` IAM account to be provided via the `--secret-file` flag.
If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the `--no-secret` flag instead of `--secret-file`.
## Enable restic integration
By default, `velero install` does not install Velero's [restic integration][3]. To enable it, specify the `--use-restic` flag.
If you've already run `velero install` without the `--use-restic` flag, you can run the same command again, including the `--use-restic` flag, to add the restic integration to your existing install.
## Default Pod Volume backup to restic
By default, `velero install` does not enable the use of restic to take backups of all pod volumes. An annotation has to be applied to every pod that contains volumes to be backed up by restic.
To backup all pod volumes using restic without having to apply annotation on the pod, run the `velero install` command with the `--default-volumes-to-restic` flag.
Using this flag requires restic integration to be enabled with the `--use-restic` flag. Please refer to the [restic integration][3] page for more information.
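For example, a sketch of an install with restic enabled and all pod volumes defaulted to restic backups; the provider, bucket, and credentials values are placeholders for whatever your platform requires:
```bash
velero install \
    --provider <PROVIDER> \
    --bucket <BUCKET> \
    --secret-file ./credentials-velero \
    --use-restic \
    --default-volumes-to-restic
```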
## Enable features
New features in Velero will be released as beta features behind feature flags which are not enabled by default. A full listing of Velero feature flags can be found [here][11].
### Enable server side features
Features on the Velero server can be enabled using the `--features` flag to the `velero install` command. This flag takes as value a comma separated list of feature flags to enable. As an example [CSI snapshotting of PVCs][10] can be enabled using `EnableCSI` feature flag in the `velero install` command as shown below:
```bash
velero install --features=EnableCSI
```
Another example is enabling the support of multiple API group versions, as documented at [--features=EnableAPIGroupVersions](enable-api-group-versions-feature.md).
Feature flags passed to `velero install` will be passed to the Velero deployment and also to the `restic` daemon set, if the `--use-restic` flag is used.
Similarly, features may be disabled by removing the corresponding feature flags from the `--features` flag.
Enabling and disabling feature flags will require modifying the Velero deployment and also the restic daemonset. This may be done from the CLI by uninstalling and re-installing Velero, or by editing the `deploy/velero` and `daemonset/restic` resources in-cluster.
```bash
$ kubectl -n velero edit deploy/velero
$ kubectl -n velero edit daemonset/restic
```
### Enable client side features
For some features it may be necessary to use the `--features` flag to the Velero client. This may be done by passing `--features` on every command run using the Velero CLI, or by setting the features in the Velero client config file using the `velero client config set` command as shown below:
```bash
velero client config set features=EnableCSI
```
This stores the config in a file at `$HOME/.config/velero/config.json`.
All client-side feature flags may be disabled using the command below:
```bash
velero client config set features=
```
### Colored CLI output
Velero CLI uses colored output for some commands, such as `velero describe`. If
the environment in which Velero is run doesn't support colored output, the
colored output will be automatically disabled. However, you can manually disable
colors with the config file:
```bash
velero client config set colorized=false
```
Note that if you specify `--colorized=true` as a CLI option it will override
the config file setting.
## Customize resource requests and limits
At installation, Velero sets default resource requests and limits for the Velero pod and the restic pod, if you are using the [restic integration](/docs/main/restic/).
{{< table caption="Velero Customize resource requests and limits defaults" >}}
|Setting|Velero pod defaults|restic pod defaults|
|--- |--- |--- |
|CPU request|500m|500m|
|Memory requests|128Mi|512Mi|
|CPU limit|1000m (1 CPU)|1000m (1 CPU)|
|Memory limit|512Mi|1024Mi|
{{< /table >}}
### Install with custom resource requests and limits
You can customize these resource requests and limits when you first install using the [velero install][6] CLI command.
```
velero install \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-restic] \
[--default-volumes-to-restic] \
[--restic-pod-cpu-request <CPU_REQUEST>] \
[--restic-pod-mem-request <MEMORY_REQUEST>] \
[--restic-pod-cpu-limit <CPU_LIMIT>] \
[--restic-pod-mem-limit <MEMORY_LIMIT>]
```
### Update resource requests and limits after install
After installation you can adjust the resource requests and limits in the Velero Deployment spec or restic DaemonSet spec, if you are using the restic integration.
**Velero pod**
Update the `spec.template.spec.containers.resources.limits` and `spec.template.spec.containers.resources.requests` values in the Velero deployment.
```bash
kubectl patch deployment velero -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
```
**restic pod**
Update the `spec.template.spec.containers.resources.limits` and `spec.template.spec.containers.resources.requests` values in the restic DaemonSet spec.
```bash
kubectl patch daemonset restic -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "restic", "resources": {"limits":{"cpu": "1", "memory": "1024Mi"}, "requests": {"cpu": "1", "memory": "512Mi"}}}]}}}}'
```
Additionally, you may want to update the default Velero restic pod operation timeout (default 240 minutes) to allow larger backups more time to complete. You can adjust this timeout by adding the `- --restic-timeout` argument to the Velero Deployment spec.
**NOTE:** Changes made to this timeout value will revert back to the default value if you re-run the Velero install command.
1. Open the Velero Deployment spec.
```
kubectl edit deploy velero -n velero
```
1. Add `- --restic-timeout` to `spec.template.spec.containers`.
```yaml
spec:
template:
spec:
containers:
- args:
- --restic-timeout=240m
```
## Configure more than one storage location for backups or volume snapshots
Velero supports any number of backup storage locations and volume snapshot locations. For more details, see [about locations](locations.md).
However, `velero install` only supports configuring at most one backup storage location and one volume snapshot location.
To configure additional locations after running `velero install`, use the `velero backup-location create` and/or `velero snapshot-location create` commands along with provider-specific configuration. Use the `--help` flag on each of these commands for more details.
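For example, a sketch of adding a second backup storage location after installation; the location name, bucket, and region are placeholders, and the config keys shown follow the AWS-style configuration used elsewhere in these docs:
```bash
velero backup-location create secondary \
    --provider aws \
    --bucket <SECOND_BUCKET> \
    --config region=<REGION>
```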
## Do not configure a backup storage location during install
If you need to install Velero without a default backup storage location (without specifying `--bucket` or `--provider`), the `--no-default-backup-location` flag is required for confirmation.
## Install an additional volume snapshot provider
Velero supports using different providers for volume snapshots than for object storage -- for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots.
However, `velero install` only supports configuring a single matching provider for both object storage and volume snapshots.
To use a different volume snapshot provider:
1. Install the Velero server components by following the instructions for your **object storage** provider
1. Add your volume snapshot provider's plugin to Velero (look in [your provider][0]'s documentation for the image name):
```bash
velero plugin add <registry/image:version>
```
1. Add a volume snapshot location for your provider, following [your provider][0]'s documentation for configuration:
```bash
velero snapshot-location create <NAME> \
--provider <PROVIDER-NAME> \
[--config <PROVIDER-CONFIG>]
```
## Generate YAML only
By default, `velero install` generates and applies a customized set of Kubernetes configuration (YAML) to your cluster.
To generate the YAML without applying it to your cluster, use the `--dry-run -o yaml` flags.
This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc.
If you are installing Velero in Kubernetes 1.14.x or earlier, you need to use `kubectl apply`'s `--validate=false` option when applying the generated configuration to your cluster. See [issue 2077][7] and [issue 2311][8] for more context.
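As a sketch, assuming the same provider flags you would normally pass to `velero install`, the workflow looks like this (the output file name is arbitrary):
```bash
# Generate the install manifests without applying them to the cluster
velero install \
    --provider <PROVIDER> \
    --bucket <BUCKET> \
    --secret-file ./credentials-velero \
    --dry-run -o yaml > velero-install.yaml

# Review or customize the YAML, then apply it
kubectl apply -f velero-install.yaml
```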
## Use a storage provider secured by a self-signed certificate
If you intend to use Velero with a storage provider that is secured by a self-signed certificate,
you may need to instruct Velero to trust that certificate. See [use Velero with a storage provider secured by a self-signed certificate][9] for details.
## Additional options
Run `velero install --help` or see the [Helm chart documentation](https://vmware-tanzu.github.io/helm-charts/) for the full set of installation options.
## Optional Velero CLI configurations
### Enabling shell autocompletion
**Velero CLI** provides autocompletion support for `Bash` and `Zsh`, which can save you a lot of typing.
Below are the procedures to set up autocompletion for `Bash` (including the difference between `Linux` and `macOS`) and `Zsh`.
#### Bash on Linux
The **Velero CLI** completion script for `Bash` can be generated with the command `velero completion bash`. Sourcing the completion script in your shell enables velero autocompletion.
However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).
##### Install bash-completion
`bash-completion` is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.
The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file.
To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set, otherwise add the following to your `~/.bashrc` file:
```shell
source /usr/share/bash-completion/bash_completion
```
Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.
##### Enable Velero CLI autocompletion for Bash on Linux
You now need to ensure that the **Velero CLI** completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
- Source the completion script in your `~/.bashrc` file:
```shell
echo 'source <(velero completion bash)' >>~/.bashrc
```
- Add the completion script to the `/etc/bash_completion.d` directory:
```shell
velero completion bash >/etc/bash_completion.d/velero
```
- If you have an alias for velero, you can extend shell completion to work with that alias:
```shell
echo 'alias v=velero' >>~/.bashrc
echo 'complete -F __start_velero v' >>~/.bashrc
```
> `bash-completion` sources all completion scripts in `/etc/bash_completion.d`.
Both approaches are equivalent. After reloading your shell, velero autocompletion should be working.
#### Bash on macOS
The **Velero CLI** completion script for Bash can be generated with `velero completion bash`. Sourcing this script in your shell enables velero completion.
However, the velero completion script depends on [**bash-completion**](https://github.com/scop/bash-completion) which you thus have to previously install.
> There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The velero completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use velero completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
##### Install bash-completion
> As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case velero completion won't work).
You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew:
```shell
brew install bash-completion@2
```
As stated in the output of this command, add the following to your `~/.bashrc` file:
```shell
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```
Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.
##### Enable Velero CLI autocompletion for Bash on macOS
You now have to ensure that the velero completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
- Source the completion script in your `~/.bashrc` file:
```shell
echo 'source <(velero completion bash)' >>~/.bashrc
```
- Add the completion script to the `/usr/local/etc/bash_completion.d` directory:
```shell
velero completion bash >/usr/local/etc/bash_completion.d/velero
```
- If you have an alias for velero, you can extend shell completion to work with that alias:
```shell
echo 'alias v=velero' >>~/.bashrc
echo 'complete -F __start_velero v' >>~/.bashrc
```
- If you installed velero with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the velero completion script should already be in `/usr/local/etc/bash_completion.d/velero`. In that case, you don't need to do anything.
> The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, which is why the second and fourth methods above work.
In any case, after reloading your shell, velero completion should be working.
#### Autocompletion on Zsh
The velero completion script for Zsh can be generated with the command `velero completion zsh`. Sourcing the completion script in your shell enables velero autocompletion.
To do so in all your shell sessions, add the following to your `~/.zshrc` file:
```shell
source <(velero completion zsh)
```
If you have an alias for velero, you can extend shell completion to work with that alias:
```shell
echo 'alias v=velero' >>~/.zshrc
echo 'complete -F __start_velero v' >>~/.zshrc
```
After reloading your shell, velero autocompletion should be working.
If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:
```shell
autoload -Uz compinit
compinit
```
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: namespace.md
[3]: restic.md
[4]: on-premises.md
[6]: velero-install.md#usage
[7]: https://github.com/vmware-tanzu/velero/issues/2077
[8]: https://github.com/vmware-tanzu/velero/issues/2311
[9]: self-signed-certificates.md
[10]: csi.md
[11]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/pkg/apis/velero/v1/constants.go

View File

@ -0,0 +1,74 @@
---
title: "Debugging Installation Issues"
layout: docs
---
## General
### `invalid configuration: no configuration has been provided`
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the following locations (see the example after this list):
* the path specified by the `--kubeconfig` flag, if any
* the path specified by the `$KUBECONFIG` environment variable, if any
* `~/.kube/config`
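If your kubeconfig lives somewhere else, you can point the Velero client at it explicitly. A minimal sketch, assuming a kubeconfig at `/path/to/kubeconfig`:
```bash
# Pass the kubeconfig for a single command
velero backup get --kubeconfig /path/to/kubeconfig

# Or export it for the whole shell session
export KUBECONFIG=/path/to/kubeconfig
velero backup get
```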
### Backups or restores stuck in `New` phase
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
```
kubectl -n velero describe pods
kubectl -n velero logs deployment/velero
```
## AWS
### `NoCredentialProviders: no valid providers in chain`
#### Using credentials
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly into the Velero server pod. Ensure the following (a quick check is sketched after this list):
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `credentials-velero` file is formatted properly and has the correct values:
```
[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>
```
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
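As a quick check of the first two points, you can inspect the secret directly. A sketch, assuming Velero is installed in the default `velero` namespace:
```bash
# Confirm the secret exists
kubectl -n velero get secret cloud-credentials

# Inspect the contents of its `cloud` key
kubectl -n velero get secret cloud-credentials -o jsonpath='{.data.cloud}' | base64 --decode
```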
#### Using kube2iam
This means that Velero can't read the content of the S3 bucket. Ensure the following:
* A Trust Policy document exists that allows the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
* The new Velero role has all the permissions listed in the documentation regarding S3.
## Azure
### `Failed to refresh the Token` or `adal: Refresh request failed`
This means that the secret containing the Azure service principal credentials for Velero has not been created/mounted properly into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
## GCE/GKE
### `open credentials/cloud: no such file or directory`
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
[0]: azure-config.md#create-service-principal

View File

@ -0,0 +1,105 @@
---
title: "Debugging Restores"
layout: docs
---
## Example
When Velero finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `velero restore get`:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none>
backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none>
backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none>
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
```
To examine the warnings and errors in more detail, you can use `velero restore describe`:
```bash
velero restore describe backup-test-20170726180512
```
The output looks like this:
```
Name: backup-test-20170726180512
Namespace: velero
Labels: <none>
Annotations: <none>
Backup: backup-test
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: serviceaccounts
Excluded: nodes, events, events.events.k8s.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Preserve Service NodePorts: auto
Phase: Completed
Validation errors: <none>
Warnings:
Velero: <none>
Cluster: <none>
Namespaces:
velero: serviceaccounts "velero" already exists
serviceaccounts "default" already exists
kube-public: serviceaccounts "default" already exists
kube-system: serviceaccounts "attachdetach-controller" already exists
serviceaccounts "certificate-controller" already exists
serviceaccounts "cronjob-controller" already exists
serviceaccounts "daemon-set-controller" already exists
serviceaccounts "default" already exists
serviceaccounts "deployment-controller" already exists
serviceaccounts "disruption-controller" already exists
serviceaccounts "endpoint-controller" already exists
serviceaccounts "generic-garbage-collector" already exists
serviceaccounts "horizontal-pod-autoscaler" already exists
serviceaccounts "job-controller" already exists
serviceaccounts "kube-dns" already exists
serviceaccounts "namespace-controller" already exists
serviceaccounts "node-controller" already exists
serviceaccounts "persistent-volume-binder" already exists
serviceaccounts "pod-garbage-collector" already exists
serviceaccounts "replicaset-controller" already exists
serviceaccounts "replication-controller" already exists
serviceaccounts "resourcequota-controller" already exists
serviceaccounts "service-account-controller" already exists
serviceaccounts "service-controller" already exists
serviceaccounts "statefulset-controller" already exists
serviceaccounts "ttl-controller" already exists
default: serviceaccounts "default" already exists
Errors:
Velero: <none>
Cluster: <none>
Namespaces: <none>
```
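If you need more context than the describe output provides, the full restore log can help. A sketch, assuming the `velero restore logs` subcommand available in recent Velero releases:
```bash
velero restore logs backup-test-20170726180512
```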
## Structure
Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues, for example, when the restore looks "normal" and all resources referenced in the backup exist in some form, although some of them may have been pre-existing.
Both errors and warnings are structured in the same way:
* `Velero`: A list of system-related issues encountered by the Velero server. For example, Velero couldn't read a directory.
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.
* `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.

View File

@ -0,0 +1,46 @@
---
title: "Development "
layout: docs
---
## Update generated files
Run `make update` to regenerate files if you make the following changes:
* Add/edit/remove command line flags and/or their help text
* Add/edit/remove commands or subcommands
* Add new API types
* Add/edit/remove plugin protobuf message or service definitions
The following files are automatically generated from the source code:
* The clientset
* Listers
* Shared informers
* Documentation
* Protobuf/gRPC types
You can run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date.
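For example, a typical local sequence after one of the changes listed above looks like this:
```bash
# Regenerate the clientset, listers, shared informers, docs, and protobuf types
make update

# Confirm that all generated files are up to date
make verify
```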
## Linting
You can run `make lint`, which executes golangci-lint inside the build image, or `make local-lint`, which executes it outside of the build image.
Both `make lint` and `make local-lint` will only run the linter against changes.
Use `lint-all` to run the linter against the entire code base.
The default linters are defined in the `Makefile` via the `LINTERS` variable.
You can also override the default list of linters by running the command
`$ make lint LINTERS=gosec`
## Test
To run unit tests, use `make test`.
## Vendor dependencies
If you need to add or update the vendored dependencies, see [Vendoring dependencies][11].
[11]: vendoring-dependencies.md

View File

@ -0,0 +1,44 @@
---
title: "Disaster recovery"
layout: docs
---
*Using Schedules and Read-Only Backup Storage Locations*
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Velero looks like the following:
1. After you first run the Velero server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
```
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
```
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`. The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See [how velero works][1] for more information about backup expiry.
1. A disaster happens and you need to recreate your resources.
1. Update your backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
```bash
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
1. Create a restore with your most recent Velero Backup:
```
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
```
1. When ready, revert your backup storage location to read-write mode:
```bash
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
```
[1]: how-velero-works.md#set-a-backup-to-expire

View File

@ -0,0 +1,113 @@
---
title: "Enable API Group Versions Feature"
layout: docs
---
## Background
Velero serves to both restore and migrate Kubernetes applications. Typically, backup and restore does not involve upgrading Kubernetes API group versions. However, when migrating from a source cluster to a destination cluster, it is not unusual to see the API group versions differing between clusters.
> &#9432; **API Group Version** | Kubernetes applications are made up of various resources. Common resources are pods, jobs, and deployments. Custom resources are created via custom resource definitions (CRDs). Every resource, whether custom or not, is part of a group, and each group has a version called the API group version.
Kubernetes by default allows changing API group versions between clusters as long as the upgrade is a single version, for example, v1 -> v2beta1. Jumping multiple versions, for example, v1 -> v3, is not supported out of the box. This is where the Enable API Group Version feature comes in.
Currently, the Enable API Group Version feature is in beta and can be enabled by installing Velero with a [feature flag](https://velero.io/docs/v1.5/customize-installation/#enable-server-side-features), `--features=EnableAPIGroupVersions`.
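For illustration, a minimal install sketch with the flag set; the provider, bucket, and credentials values below are placeholders for your own environment:
```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --features=EnableAPIGroupVersions
```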
## How the Enable API Group Versions Feature Works
When the Enable API Group Versions feature is enabled on the source cluster, Velero will not only back up Kubernetes preferred API group versions, but it will also back up all supported versions on the cluster. As an example, consider the resource `horizontalpodautoscalers`, which falls under the `autoscaling` group. Without the feature flag enabled, only the preferred API group version for autoscaling, `v1`, will be backed up. With the feature enabled, the remaining supported versions, `v2beta1` and `v2beta2`, will also be backed up. Once the versions are stored in the backup tarball file, they will be available to be restored on the destination cluster.
When the Enable API Group Versions feature is enabled on the destination cluster, Velero restore will choose the version to restore based on an API group version priority order.
The version priorities are listed from highest to lowest priority below:
- Priority 1: destination cluster preferred version
- Priority 2: source cluster preferred version
- Priority 3: non-preferred common supported version with the highest [Kubernetes version priority](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority)
The highest priority (Priority 1) will be the destination cluster's preferred API group version. If the destination preferred version is found in the backup tarball, it will be the API group version chosen for restoration for that resource. However, if the destination preferred version is not found in the backup tarball, the next version in the list will be selected: the source cluster preferred version (Priority 2).
If the source cluster preferred version is found to be supported by the destination cluster, it will be chosen as the API group version to restore. However, if the source preferred version is not supported by the destination cluster, then the next version in the list will be considered: a non-preferred common supported version (Priority 3).
If there is more than one non-preferred common supported version, which version will be chosen? The answer requires understanding the [Kubernetes version priority order](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority). Kubernetes prioritizes group versions by making the latest, most stable version the highest priority. The highest priority version is the Kubernetes preferred version. Here is a sorted version list example from the Kubernetes.io documentation:
- v10
- v2
- v1
- v11beta2
- v10beta3
- v3beta1
- v12alpha1
- v11alpha2
- foo1
- foo10
Of the non-preferred common versions, the version that has the highest Kubernetes version priority will be chosen. See the example for Priority 3 below.
To better understand which API group version will be chosen, the following provides some concrete examples. The examples use the term "target cluster", which is synonymous with "destination cluster".
![Priority 1 Case A example](/docs/main/img/gv_priority1-caseA.png)
![Priority 1 Case B example](/docs/main/img/gv_priority1-caseB.png)
![Priority 2 Case C example](/docs/main/img/gv_priority2-caseC.png)
![Priority 3 Case D example](/docs/main/img/gv_priority3-caseD.png)
## Procedure for Using the Enable API Group Versions Feature
1. [Install Velero](https://velero.io/docs/v1.5/basic-install/) on source cluster with the [feature flag enabled](https://velero.io/docs/v1.5/customize-installation/#enable-server-side-features). The flag is `--features=EnableAPIGroupVersions`. For the enable API group versions feature to work, the feature flag needs to be used for Velero installations on both the source and destination clusters.
2. Back up and restore following the [migration case instructions](https://velero.io/docs/v1.5/migration-case/). Note that "Cluster 1" in the instructions refers to the source cluster, and "Cluster 2" refers to the destination cluster.
## Advanced Procedure for Customizing the Version Prioritization
Optionally, users can create a config map to override the default API group prioritization for some or all of the resources being migrated. For each resource that is specified by the user, Velero will search for the version in both the backup tarball and the destination cluster. If there is a match, the user-specified API group version will be restored. If the backup tarball and the destination cluster do not have or support any of the user-specified versions, then the default version prioritization will be used.
Here are the steps for creating a config map that allows users to override the default version prioritization. These steps must happen on the destination cluster before a Velero restore is initiated.
1. Create a file called `restoreResourcesVersionPriority`. The file name will become a key in the `data` field of the config map.
- In the file, write a line for each resource group you'd like to override. Make sure each line follows the format `<resource>.<group>=<highest user priority version>,<next highest>`
- Note that the resource group and versions are separated by a single equal (=) sign. Each version is listed in the order of the user's priority, separated by commas.
- Here is an example of the contents of a config map file:
```cm
rockbands.music.example.io=v2beta1,v2beta2
orchestras.music.example.io=v2,v3alpha1
subscriptions.operators.coreos.com=v2,v1
```
2. Apply config map with
```bash
kubectl create configmap enableapigroupversions --from-file=<absolute path>/restoreResourcesVersionPriority -n velero
```
3. See the config map with
```bash
kubectl describe configmap enableapigroupversions -n velero
```
The config map should look something like
```bash
Name: enableapigroupversions
Namespace: velero
Labels: <none>
Annotations: <none>
Data
====
restoreResourcesVersionPriority:
----
rockbands.music.example.io=v2beta1,v2beta2
orchestras.music.example.io=v2,v3alpha1
subscriptions.operators.coreos.com=v2,v1
Events: <none>
```
## Troubleshooting
1. Refer to the [troubleshooting section](https://velero.io/docs/v1.5/troubleshooting/) of the docs as the techniques generally apply here as well.
2. The [debug logs](https://velero.io/docs/v1.5/troubleshooting/#getting-velero-debug-logs) will contain information on which version was chosen to restore.
3. If no API group version could be found that both exists in the backup tarball file and is supported by the destination cluster, then the following error will be recorded (no need to activate debug level logging): `"error restoring rockbands.music.example.io/rockstars/beatles: the server could not find the requested resource"`.

View File

@ -0,0 +1,70 @@
---
title: "Examples"
layout: docs
---
After you set up the Velero server, you can clone the examples used in the following sections by running the following:
```
git clone https://github.com/vmware-tanzu/velero.git
cd velero
```
## Basic example (without PersistentVolumes)
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Create a backup:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Wait for the namespace to be deleted.
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
## Snapshot example (with PersistentVolumes)
> NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks.
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/with-pv.yaml
```
1. Create a backup with PV snapshotting:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Because the default [reclaim policy][1] for dynamically-provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. **Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists.**
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
[1]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming

View File

@ -0,0 +1,87 @@
---
title: "How Velero Works"
layout: docs
---
Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes [Custom Resource Definition (CRD)][20] and stored in [etcd][22]. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label.
Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster, like upgrades.
## On-demand backups
The **backup** operation:
1. Uploads a tarball of copied Kubernetes objects into cloud object storage.
1. Calls the cloud provider API to make disk snapshots of persistent volumes, if specified.
You can optionally specify backup hooks to be executed during the backup. For example, you might
need to tell a database to flush its in-memory buffers to disk before taking a snapshot. [More about backup hooks][10].
Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible.
## Scheduled backups
The **schedule** operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression.
Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*.
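For example, a daily backup schedule can be created like this (the schedule name is arbitrary; the default 30-day retention applies unless `--ttl` is set):
```bash
velero schedule create daily-backup --schedule "0 7 * * *"
```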
## Restores
The **restore** operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Velero supports multiple namespace remapping--for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the objects in namespace "123" under "456".
The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`.
By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.
You can optionally specify restore hooks to be executed during a restore or after resources are restored. For example, you might need to perform a custom database restore operation before the database application containers start. [More about restore hooks][11].
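For example, the namespace remapping described above can be expressed on the command line. A sketch, assuming the `--namespace-mappings` flag of `velero restore create`:
```bash
# Recreate objects from namespace "abc" under "def", and from "123" under "456"
velero restore create --from-backup test-backup --namespace-mappings abc:def,123:456
```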
## Backup workflow
When you run `velero backup create test-backup`:
1. The Velero client makes a call to the Kubernetes API server to create a `Backup` object.
1. The `BackupController` notices the new `Backup` object and performs validation.
1. The `BackupController` begins the backup process. It collects the data to back up by querying the API server for resources.
1. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file.
By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`.
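For example, a backup of a single namespace with volume snapshots disabled (the namespace name is a placeholder):
```bash
velero backup create test-backup \
    --include-namespaces my-namespace \
    --snapshot-volumes=false
```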
![19]
## Backed-up API versions
Velero backs up resources using the Kubernetes API server's *preferred version* for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful.
For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster **must** have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` **does not** need to be the preferred version in the target cluster; it just needs to exist.
## Set a backup to expire
When you create a backup, you can specify a TTL (time to live) by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes:
* The backup resource
* The backup file from cloud object storage
* All PersistentVolume snapshots
* All associated Restores
The TTL flag allows the user to specify the backup retention period with the value specified in hours, minutes and seconds in the form `--ttl 24h0m0s`. If not specified, a default TTL value of 30 days will be applied.
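For example, a backup that expires after 7 days instead of the default 30:
```bash
velero backup create short-lived-backup --ttl 168h0m0s
```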
## Object storage sync
Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes.
This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster.
Likewise, if a backup object exists in Kubernetes but not in object storage, it will be deleted from Kubernetes since the backup tarball no longer exists.
[10]: backup-hooks.md
[11]: restore-hooks.md
[19]: /docs/main/img/backup-process.png
[20]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions
[21]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers
[22]: https://github.com/coreos/etcd

View File

@ -0,0 +1,24 @@
---
title: "Image tagging policy"
layout: docs
---
This document describes Velero's image tagging policy.
## Released versions
`velero/velero:<SemVer>`
Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/vmware-tanzu/velero` repository has a matching image, `velero/velero:v1.0.0`.
### Latest
`velero/velero:latest`
The `latest` tag follows the most recently released version of Velero.
## Development
`velero/velero:main`
The `main` tag follows the latest commit to land on the `main` branch.

View File

@ -0,0 +1 @@
Some of these diagrams (for instance backup-process.png), have been created on [draw.io](https://www.draw.io), using the "Include a copy of my diagram" option. If you want to make changes to these diagrams, try importing them into draw.io, and you should have access to the original shapes/text that went into the originals.

Binary file not shown.

After

Width:  |  Height:  |  Size: 33 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 56 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 85 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 41 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 68 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 44 KiB

View File

@ -0,0 +1,258 @@
---
title: "Backup Storage Locations and Volume Snapshot Locations"
layout: docs
---
## Overview
Velero has two custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`, that are used to configure where Velero backups and their associated persistent volume snapshots are stored.
A `BackupStorageLocation` is defined as a bucket or a prefix within a bucket under which all Velero data is stored and a set of additional provider-specific fields (AWS region, Azure storage account, etc.). Velero assumes it has control over the location you provide so you should use a dedicated bucket or prefix. If you provide a prefix, then the rest of the bucket is safe to use for multiple purposes. The [API documentation][1] captures the configurable parameters for each in-tree provider.
A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (AWS region, Azure resource group, Portworx snapshot type, etc.) The [API documentation][2] captures the configurable parameters for each in-tree provider.
The user can pre-configure one or more possible `BackupStorageLocations` and one or more `VolumeSnapshotLocations`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
This configuration design enables a number of different use cases, including:
- Take snapshots of more than one kind of persistent volume in a single Velero backup. For example, in a cluster with both EBS volumes and Portworx volumes
- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region, or to a different storage provider
- For volume providers that support it, like Portworx, you can have some snapshots stored locally on the cluster and have others stored in the cloud
## Limitations / Caveats
- Velero supports multiple credentials for `BackupStorageLocations`, allowing you to specify the credentials to use with any `BackupStorageLocation`.
However, use of this feature requires support within the plugin for the object storage provider you wish to use.
All [plugins maintained by the Velero team][5] support this feature.
If you are using a plugin from another provider, please check their documentation to determine if this feature is supported.
- Velero only supports a single set of credentials for `VolumeSnapshotLocations`.
Velero will always use the credentials provided at install time (stored in the `cloud-credentials` secret) for volume snapshots.
- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
- Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important.
- Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume, like EBS and Portworx, but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will **only** snapshot the EBS volumes.
- Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
- Velero's backups are split into two pieces - the metadata stored in object storage, and snapshots/backups of the persistent volume data. Right now, Velero *itself* does not encrypt either of them; instead, it relies on the native mechanisms in the object and snapshot systems. A special case is restic, which backs up the persistent volume data at the filesystem level and sends it to Velero's object storage.
## Examples
Let's look at some examples of how you can use this configuration mechanism to address some common use cases:
### Take snapshots of more than one kind of persistent volume in a single Velero backup
During server configuration:
```shell
velero snapshot-location create ebs-us-east-1 \
--provider aws \
--config region=us-east-1
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
During backup creation:
```shell
velero backup create full-cluster-backup \
--volume-snapshot-locations ebs-us-east-1,portworx-cloud
```
Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Velero doesn't require them to be explicitly specified when creating the backup:
```shell
velero backup create full-cluster-backup
```
### Have some Velero backups go to a bucket in an eastern USA region (default), and others go to a bucket in a western USA region
In this example, two `BackupStorageLocations` will be created within the same account but in different regions.
They will both use the credentials provided at install time and stored in the `cloud-credentials` secret.
If you need to configure unique credentials for each `BackupStorageLocation`, please refer to the [later example][8].
During server configuration:
```shell
velero backup-location create backups-primary \
--provider aws \
--bucket velero-backups \
--config region=us-east-1 \
--default
velero backup-location create backups-secondary \
--provider aws \
--bucket velero-backups \
--config region=us-west-1
```
A "default" backup storage location (BSL) is where backups get saved to when no BSL is specified at backup creation time.
You can change the default backup storage location at any time by running the `velero backup-location set` command with the `--default` flag to configure a different location as the default.
Examples:
```shell
velero backup-location set backups-secondary --default
```
During backup creation:
```shell
velero backup create full-cluster-backup
```
Or:
```shell
velero backup create full-cluster-alternate-location-backup \
--storage-location backups-secondary
```
### For volume providers that support it (like Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
During server configuration:
```shell
velero snapshot-location create portworx-local \
--provider portworx \
--config type=local
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
During backup creation:
```shell
# Note that since in this example you have two possible volume snapshot locations for the Portworx
# provider, you need to explicitly specify which one to use when creating a backup. Alternately,
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
# the Velero deployment) to specify which location should be used for each provider by default, in
# which case you don't need to specify it when creating a backup.
velero backup create local-snapshot-backup \
--volume-snapshot-locations portworx-local
```
Or:
```shell
velero backup create cloud-snapshot-backup \
--volume-snapshot-locations portworx-cloud
```
### Use a single location
If you don't have a use case for more than one location, it's still easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
During server configuration:
```shell
velero backup-location create backups-primary \
--provider aws \
--bucket velero-backups \
--config region=us-west-1 \
--default
velero snapshot-location create ebs-us-west-1 \
--provider aws \
--config region=us-west-1
```
During backup creation:
```shell
# Velero will automatically use your configured backup storage location and volume snapshot location.
# Nothing needs to be specified when creating a backup.
velero backup create full-cluster-backup
```
### Create a storage location that uses unique credentials
It is possible to create additional `BackupStorageLocations` that use their own credentials.
This enables you to save backups to another storage provider or to another account with the storage provider you are already using.
If you create additional `BackupStorageLocations` without specifying the credentials to use, Velero will use the credentials provided at install time and stored in the `cloud-credentials` secret.
Please see the [earlier example][9] for details on how to create multiple `BackupStorageLocations` that use the same credentials.
#### Prerequisites
- This feature requires support from the [object storage provider plugin][5] you wish to use.
All plugins maintained by the Velero team support this feature.
If you are using a plugin from another provider, please check their documentation to determine if this is supported.
- The [plugin for the object storage provider][5] you wish to use must be [installed][6].
- You must create a file with the object storage credentials. Follow the instructions provided by your object storage provider plugin to create this file.
Once you have installed the necessary plugin and created the credentials file, create a [Kubernetes Secret][7] in the Velero namespace that contains these credentials:
```shell
kubectl create secret generic -n velero credentials --from-file=bsl=</path/to/credentialsfile>
```
This will create a secret named `credentials` with a single key (`bsl`) which contains the contents of your credentials file.
Next, create a `BackupStorageLocation` that uses this Secret by passing the Secret name and key in the `--credential` flag.
When interacting with this `BackupStorageLocation` in the future, Velero will fetch the data from the key within the Secret you provide.
For example, a new `BackupStorageLocation` with a Secret would be configured as follows:
```bash
velero backup-location create <bsl-name> \
--provider <provider> \
--bucket <bucket> \
--config region=<region> \
--credential=<secret-name>=<key-within-secret>
```
The `BackupStorageLocation` is ready to use when it has the phase `Available`.
You can check the status with the following command:
```bash
velero backup-location get
```
To use this new `BackupStorageLocation` when performing a backup, use the flag `--storage-location <bsl-name>` when running `velero backup create`.
You may also set this new `BackupStorageLocation` as the default with the command `velero backup-location set --default <bsl-name>`.
### Modify the credentials used by an existing storage location
By default, `BackupStorageLocations` will use the credentials provided at install time and stored in the `cloud-credentials` secret in the Velero namespace.
You can modify these existing credentials by [editing the `cloud-credentials` secret][10]; however, these changes will apply to all locations using this secret.
This may be the desired outcome, for example, in the case where you wish to rotate the credentials used for a particular account.
You can also opt to modify an existing `BackupStorageLocation` such that it uses its own credentials by using the `backup-location set` command.
If you have a credentials file that you wish to use for a `BackupStorageLocation`, follow the instructions above to create the Secret with that file in the Velero namespace.
Once you have created the Secret, or have an existing Secret which contains the credentials you wish to use for your `BackupStorageLocation`, set the credential to use as follows:
```bash
velero backup-location set <bsl-name> \
--credential=<secret-name>=<key-within-secret>
```
## Additional Use Cases
1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups/subscriptions. This is possible using a `BackupStorageLocation`, by specifying a `storageAccount`, `resourceGroup` and/or `subscriptionId`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
[1]: api-types/backupstoragelocation.md
[2]: api-types/volumesnapshotlocation.md
[3]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/main/volumesnapshotlocation.md
[4]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/main/backupstoragelocation.md
[5]: /plugins
[6]: overview-plugins.md
[7]: https://kubernetes.io/docs/concepts/configuration/secret/
[8]: #create-a-storage-location-that-uses-unique-credentials
[9]: #have-some-velero-backups-go-to-a-bucket-in-an-eastern-usa-region-default-and-others-go-to-a-bucket-in-a-western-usa-region
[10]: https://kubernetes.io/docs/concepts/configuration/secret/#editing-a-secret

View File

@ -0,0 +1,35 @@
---
title: "Instructions for Maintainers"
layout: docs
toc: "true"
---
There are some guidelines maintainers need to follow. We list them here for quick reference, especially for new maintainers. These guidelines apply to all projects in the Velero org, including the main project, the Velero Helm chart, and all other [related repositories](https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/GOVERNANCE.md#code-repositories).
Please be sure to also go through the guidance under the entire [Contribute](start-contributing/) section.
## Reviewing PRs
- PRs require 2 approvals before they are mergeable.
- The second reviewer usually merges the PR (if you notice a PR open for a while and with 2 approvals, go ahead and merge it!)
- As you review a PR that is not yet ready to merge, please check if the "request review" needs to be refreshed for any reviewer (this is better than @mentioning them)
- Refrain from @mentioning other maintainers to review the PR unless it is an immediate need. All maintainers are already notified through the automated addition to the "request review" list. If it is an urgent need, please add a helpful message explaining why, so people can prioritize the work properly.
- There is no need to manually request reviewers: after the PR is created, all maintainers will be automatically added to the list (note: feel free to remove people if they are on PTO, etc).
- Be familiar with the [lazy consensus](https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/GOVERNANCE.md#lazy-consensus) policy for the project.
Some tips for doing reviews:
- There are some [code standards and general guidelines](https://velero.io/docs/v1.6.0-rc.1/code-standards) we aim for
- We have [guidelines for writing and reviewing documentation](https://velero.io/docs/v1.6.0-rc.1/style-guide/)
- When reviewing a design document, ensure it follows [our format and guidelines]( https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/design/_template.md). Also, when reviewing a PR that implements a previously accepted design, ensure the associated design doc is moved to the [design/implemented](https://github.com/vmware-tanzu/velero/tree/main/design/implemented) folder.
## Creating a release
Maintainers are expected to create releases for the project. We have parts of the process automated, and full [instructions](release-instructions).
## Community support
Maintainers are expected to participate in the community support rotation. We have guidelines for how we handle the [support](support-process).
## Community engagement
Maintainers for the Velero project are highly involved with the open source community. All the online community meetings for the project are listed in our [community](community) page.
## How do I become a maintainer?
The Velero project welcomes contributors of all kinds. We are also always on the look out for a high level of engagement from contributors and opportunities to bring in new maintainers. If this is of interest, take a look at how [adding a maintainer](https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/GOVERNANCE.md#maintainers) is decided.

View File

@ -0,0 +1,58 @@
---
title: "Cluster migration"
layout: docs
---
## Using Backups and Restores
Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location. This scenario assumes that your clusters are hosted by the same cloud provider. **Note that Velero does not natively support the migration of persistent volume snapshots across cloud providers.** If you would like to migrate volume data between cloud platforms, please enable [restic][2], which will back up volume contents at the filesystem level.
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
```
velero backup create <BACKUP-NAME>
```
The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See [how velero works][1] for more information about backup expiry.
1. *(Cluster 2)* Configure `BackupStorageLocations` and `VolumeSnapshotLocations`, pointing to the locations used by *Cluster 1*, using `velero backup-location create` and `velero snapshot-location create`. Make sure to configure the `BackupStorageLocations` as read-only by using the `--access-mode=ReadOnly` flag for `velero backup-location create` (see the sketch after this list).
1. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.
```
velero backup describe <BACKUP-NAME>
```
**Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
```
velero restore create --from-backup <BACKUP-NAME>
```
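A minimal sketch of the read-only location configuration mentioned above, registering Cluster 1's bucket on Cluster 2; the location name, provider, bucket, and region values are placeholders for your own environment:
```bash
velero backup-location create cluster1-backups \
    --provider aws \
    --bucket <CLUSTER1_BUCKET> \
    --config region=<REGION> \
    --access-mode=ReadOnly
```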
## Verify Both Clusters
Check that the second cluster is behaving as expected:
1. *(Cluster 2)* Run:
```
velero restore get
```
1. Then run:
```
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
```
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.
## Migrating Workloads Across Different Kubernetes Versions
Migration across clusters that are not running the same version of Kubernetes might be possible, but some factors need to be considered: compatibility of API groups between clusters for each custom resource, and whether a Kubernetes version upgrade breaks the compatibility of core/native API groups. For more information about API group versions, please see [EnableAPIGroupVersions](enable-api-group-versions-feature.md).
[1]: how-velero-works.md#set-a-backup-to-expire
[2]: restic.md

View File

@ -0,0 +1,22 @@
---
title: "Run in a non-default namespace"
layout: docs
---
The Velero installation and backups by default are run in the `velero` namespace. However, it is possible to use a different namespace.
## Customize the namespace during install
Use the `--namespace` flag, in conjunction with the other flags in the `velero install` command (as shown in the [the Velero install instructions][0]). This will inform Velero where to install.
## Customize the namespace for operational commands
To have namespace consistency, specify the namespace for all Velero operational commands to be the same as the namespace used to install Velero:
```bash
velero client config set namespace=<NAMESPACE_VALUE>
```
Alternatively, you may use the global `--namespace` flag with any operational command to tell Velero where to run.
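For example (the namespace value is a placeholder):
```bash
velero backup get --namespace <NAMESPACE_VALUE>
```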
[0]: basic-install.md#install-the-cli

View File

@ -0,0 +1,95 @@
---
title: "On-Premises Environments"
layout: docs
---
You can run Velero in an on-premises cluster in different ways depending on your requirements.
### Selecting an object storage provider
You must select an object storage backend that Velero can use to store backup data. [Supported providers][0] contains information on various
options that are supported or have been reported to work by users.
If you do not already have an object storage system, [MinIO][2] is an open-source S3-compatible object storage system that can be installed on-premises and is compatible with Velero. The details of configuring it for production usage are out of scope for Velero's documentation, but an [evaluation install guide][3] using MinIO is provided for convenience.
### (Optional) Selecting volume snapshot providers
If you need to back up persistent volume data, you must select a volume backup solution. [Supported providers][0] contains information on the supported options.
For example, if you use [Portworx][4] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part of your Velero backups.
If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][1], which provides a platform-agnostic file-level backup solution for volume data.
### Air-gapped deployments
In an air-gapped deployment, there is no access to the public internet, and therefore no access to public container registries.
In these scenarios, you will need to make sure that you have an internal registry, such as [Harbor][5], installed and the Velero core and plugin images loaded into your internal registry.
Below you will find instructions for downloading the Velero images to your local machine, tagging them, and then uploading them to your custom registry.
#### Preparing the Velero image
First, download the Velero image, tag it for your private registry, and then upload it to the registry so that it can be pulled by your cluster.
```bash
PRIVATE_REG=<your private registry>
VELERO_VERSION=<version of Velero you're targeting, for example v1.4.0>
docker pull velero/velero:$VELERO_VERSION
docker tag velero/velero:$VELERO_VERSION $PRIVATE_REG/velero:$VELERO_VERSION
docker push $PRIVATE_REG/velero:$VELERO_VERSION
```
#### Preparing plugin images
Next, repeat these steps for any plugins you may need. This example will use the AWS plugin, but the plugin name should be replaced with the plugins you will need.
```bash
PRIVATE_REG=<your private registry>
PLUGIN_VERSION=<version of plugin you're targeting, for example v1.0.2>
docker pull velero/velero-plugin-for-aws:$PLUGIN_VERSION
docker tag velero/velero-plugin-for-aws:$PLUGIN_VERSION $PRIVATE_REG/velero-plugin-for-aws:$PLUGIN_VERSION
docker push $PRIVATE_REG/velero-plugin-for-aws:$PLUGIN_VERSION
```
#### Preparing the restic helper image (optional)
If you are using restic, you will also need to upload the restic helper image.
```bash
PRIVATE_REG=<your private registry>
VELERO_VERSION=<version of Velero you're targeting, for example v1.4.0>
docker pull velero/velero-restic-restore-helper:$VELERO_VERSION
docker tag velero/velero-restic-restore-helper:$VELERO_VERSION $PRIVATE_REG/velero-restic-restore-helper:$VELERO_VERSION
docker push $PRIVATE_REG/velero-restic-restore-helper:$VELERO_VERSION
```
#### Pulling specific architecture images (optional)
Velero uses Docker manifests for its images, allowing Docker to pull the image needed based on your client machine's architecture.
If you need to pull a specific image, you should replace the `velero/velero` image with the specific architecture image, such as `velero/velero-arm`.
To see an up-to-date list of architectures, enable Docker experimental features and run `docker manifest inspect velero/velero` (or whichever image you're interested in), then append the architecture string to the end of the image name with a `-`.
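A sketch, reusing the `VELERO_VERSION` variable from the earlier steps; setting `DOCKER_CLI_EXPERIMENTAL` is one common way to enable the experimental `docker manifest` command:
```bash
# List the architectures published for the Velero image
export DOCKER_CLI_EXPERIMENTAL=enabled
docker manifest inspect velero/velero:$VELERO_VERSION

# Pull a specific architecture image, such as the arm variant mentioned above
docker pull velero/velero-arm:$VELERO_VERSION
```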
#### Installing Velero
By default, `velero install` will use the public `velero/velero` image. When using an air-gapped deployment, use your private registry's image for Velero and your private registry's images for any plugins.
```bash
velero install \
--image=$PRIVATE_REG/velero:$VELERO_VERSION \
--plugin=$PRIVATE_REG/velero-plugin-for-aws:$PLUGIN_VERSION \
<....>
```
[0]: supported-providers.md
[1]: restic.md
[2]: https://min.io
[3]: contributions/minio.md
[4]: https://portworx.com
[5]: https://goharbor.io/

View File

@ -0,0 +1,224 @@
---
title: "Output file format"
layout: docs
---
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `velero backup create <NAME>`).
In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Velero server configuration. This subdirectory includes an additional file called `velero-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format.
The directory structure in your cloud storage looks something like:
```
rootBucket/
backup1234/
velero-backup.json
backup1234.tar.gz
```
## Example backup JSON file
```json
{
"kind": "Backup",
"apiVersion": "velero.io/v1",
"metadata": {
"name": "test-backup",
"namespace": "velero",
"selfLink": "/apis/velero.io/v1/namespaces/velero/backups/test-backup",
"uid": "a12345cb-75f5-11e7-b4c2-abcdef123456",
"resourceVersion": "337075",
"creationTimestamp": "2017-07-31T13:39:15Z"
},
"spec": {
"includedNamespaces": [
"*"
],
"excludedNamespaces": null,
"includedResources": [
"*"
],
"excludedResources": null,
"labelSelector": null,
"snapshotVolumes": true,
"ttl": "24h0m0s"
},
"status": {
"version": 1,
"formatVersion": "1.1.0",
"expiration": "2017-08-01T13:39:15Z",
"phase": "Completed",
"volumeBackups": {
"pvc-e1e2d345-7583-11e7-b4c2-abcdef123456": {
"snapshotID": "snap-04b1a8e11dfb33ab0",
"type": "gp2",
"iops": 100
}
},
"validationErrors": null
}
}
```
Note that this file includes detailed info about your volume snapshots in the `status.volumeBackups` field, which can be helpful if you want to manually check them in your cloud provider GUI.
## Output File Format Versioning
The Velero output file format is intended to be relatively stable, but may change over time to support new features.
To accommodate this, Velero follows [Semantic Versioning](http://semver.org/) for the file format version.
Minor and patch versions will indicate backwards-compatible changes that previous versions of Velero can restore, including new directories or files.
A major version would indicate that a version of Velero older than the version that created the backup could not restore it, usually because of moved or renamed directories or files.
Major versions of the file format will be incremented with major version releases of Velero.
However, a major version release of Velero does not necessarily mean that the backup format version changed - Velero 3.0 could still use backup file format 2.0, as an example.
## Versions
### File Format Version: 1.1 (Current)
Version 1.1 added support for API group versions as part of the backup. Previously, only the preferred version of each API group was backed up. Each resource now has one or more sub-directories: one for each supported version of its API group. The sub-directory for the preferred API group version of each resource carries the "-preferredversion" suffix in its name. For backward compatibility, the classic directory structure without the API group version is kept; it sits at the same level as the versioned sub-directories.
By default, only the preferred API group version of each resource is backed up. To take a backup of all API group versions, you need to run the Velero server with the `--features=EnableAPIGroupVersions` feature flag. This is an experimental flag, and the restore logic for handling multiple API group versions is documented at [EnableAPIGroupVersions](enable-api-group-versions-feature.md).
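As a hedged sketch, enabling the flag at install time might look like the following; the provider, plugin image, bucket, and credentials values are placeholders:
```bash
# Enable the experimental API group versions feature on the Velero server at install time.
velero install \
    --provider <YOUR_PROVIDER> \
    --plugins <PLUGIN_CONTAINER_IMAGE> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_CREDENTIALS_FILE> \
    --features=EnableAPIGroupVersions
```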
When unzipped, a typical backup directory (`backup1234.tar.gz`) taken with this file format version looks like the following (with the feature flag):
```
resources/
persistentvolumes/
cluster/
pv01.json
...
v1-preferredversion/
cluster/
pv01.json
...
configmaps/
namespaces/
namespace1/
myconfigmap.json
...
namespace2/
...
v1-preferredversion/
namespaces/
namespace1/
myconfigmap.json
...
namespace2/
...
pods/
namespaces/
namespace1/
mypod.json
...
namespace2/
...
v1-preferredversion/
namespaces/
namespace1/
mypod.json
...
namespace2/
...
jobs.batch/
namespaces/
namespace1/
awesome-job.json
...
namespace2/
...
v1-preferredversion/
namespaces/
namespace1/
awesome-job.json
...
namespace2/
...
deployments/
namespaces/
namespace1/
cool-deployment.json
...
namespace2/
...
v1-preferredversion/
namespaces/
namespace1/
cool-deployment.json
...
namespace2/
...
horizontalpodautoscalers.autoscaling/
namespaces/
namespace1/
hpa-to-the-rescue.json
...
namespace2/
...
v1-preferredversion/
namespaces/
namespace1/
hpa-to-the-rescue.json
...
namespace2/
...
v2beta1/
namespaces/
namespace1/
hpa-to-the-rescue.json
...
namespace2/
...
v2beta2/
namespaces/
namespace1/
hpa-to-the-rescue.json
...
namespace2/
...
...
```
### File Format Version: 1
When unzipped, a typical backup directory (`backup1234.tar.gz`) looks like the following:
```
resources/
persistentvolumes/
cluster/
pv01.json
...
configmaps/
namespaces/
namespace1/
myconfigmap.json
...
namespace2/
...
pods/
namespaces/
namespace1/
mypod.json
...
namespace2/
...
jobs/
namespaces/
namespace1/
awesome-job.json
...
namespace2/
...
deployments/
namespaces/
namespace1/
cool-deployment.json
...
namespace2/
...
...
```

View File

@ -0,0 +1,29 @@
---
title: "Velero plugin system"
layout: docs
---
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations.
For server installation, Velero requires that at least one plugin is added (with the `--plugins` flag). The plugin can be an object store plugin, a volume snapshotter plugin, or one that provides both. The exception is when you are not configuring a backup storage location or a volume snapshot location at install time; in that case, the flag is optional.
Any plugin can be added after Velero has been installed by using the command `velero plugin add <registry/image:version>`.
Example with a dockerhub image: `velero plugin add velero/velero-plugin-for-aws:v1.0.0`.
In the same way, any plugin can be removed by using the command `velero plugin remove <registry/image:version>`.
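For example, a session that adds, lists, and then removes a plugin might look like the following sketch; the image tag is illustrative:
```bash
# Add an object store / volume snapshotter plugin to a running Velero deployment.
velero plugin add velero/velero-plugin-for-aws:v1.1.0

# List the plugins currently registered with the Velero server.
velero plugin get

# Remove the plugin again if it is no longer needed.
velero plugin remove velero/velero-plugin-for-aws:v1.1.0
```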
## Creating a new plugin
Anyone can add integrations for any platform to provide additional backup and volume storage without modifying the Velero codebase. To write a plugin for a new backup or volume storage platform, take a look at our [example repo][1] and at our documentation for [Custom plugins][2].
## Adding a new plugin
After you publish your plugin on your own repository, open a PR that adds a link to it under the appropriate list of [supported providers][3] page in our documentation.
You can also add the [`velero-plugin` GitHub Topic][4] to your repo, and it will be shown under the aggregated list of repositories automatically.
[1]: https://github.com/vmware-tanzu/velero-plugin-example/
[2]: custom-plugins.md
[3]: supported-providers.md
[4]: https://github.com/topics/velero-plugin

View File

@ -0,0 +1,17 @@
---
title: Releasing Velero plugins
layout: docs
toc: "true"
---
Velero plugins maintained by the core maintainers do not have any shipped binaries, only container images, so there is no need to invoke a GoReleaser script.
Container images are built via a CI job on git push.
Plugins the Velero core team is responsible for include all those listed in [the Velero-supported providers list](supported-providers.md) _except_ the vSphere plugin.
1. Update the README.md file to update the compatibility matrix and `velero install` instructions with the expected version number and open a PR.
1. The version number is determined by semantic versioning and by whether the plugin uses any newly introduced, changed, or removed methods or variables from Velero.
1. Once the updated README.md PR is merged, checkout the upstream `main` branch. `git checkout upstream/main`.
1. Tag the git version - `git tag v<version>`.
1. Push the git tag - `git push --tags <upstream remote name>`.
1. Wait for the container images to build. The checkout, tag, and push steps are combined in the sketch below.
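Taken together, the checkout, tag, and push steps amount to roughly the following sketch; the remote name and version number are placeholders:
```bash
# Sync with the upstream main branch.
git fetch upstream
git checkout upstream/main

# Tag the release and push the tag; the CI job builds the container images on push.
git tag v1.2.0
git push --tags upstream
```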

View File

@ -0,0 +1,50 @@
---
title: "Run Velero more securely with restrictive RBAC settings"
layout: docs
---
By default Velero runs with an RBAC policy of ClusterRole `cluster-admin`. This is to make sure that Velero can back up or restore anything in your cluster. But `cluster-admin` access is wide open -- it gives Velero components access to everything in your cluster. Depending on your environment and your security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
**Note:** Roles and RoleBindings are associated with a single namespace, not with an entire cluster. PersistentVolume backups are associated with the entire cluster, however. This means that any backups or restores that use a restrictive Role and RoleBinding pair can manage only the resources that belong to that namespace. You do not need a wide-open RBAC policy to manage PersistentVolumes, however: you can configure a ClusterRole and ClusterRoleBinding that allow backups and restores only of PersistentVolumes, not of all objects in the cluster. A sketch of such a ClusterRole follows the Role and RoleBinding examples below.
For more information about RBAC and access control generally in Kubernetes, see the Kubernetes documentation about [access control][1], [managing service accounts][2], and [RBAC authorization][3].
## Set up Roles and RoleBindings
Here's a sample Role and RoleBinding pair.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: YOUR_NAMESPACE_HERE
name: ROLE_NAME_HERE
labels:
component: velero
rules:
- apiGroups:
- velero.io
verbs:
- "*"
resources:
- "*"
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ROLEBINDING_NAME_HERE
subjects:
- kind: ServiceAccount
name: YOUR_SERVICEACCOUNT_HERE
roleRef:
kind: Role
name: ROLE_NAME_HERE
apiGroup: rbac.authorization.k8s.io
```
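For the PersistentVolume-only access mentioned in the note above, a minimal sketch might look like the following; the names are placeholders and the verbs should be trimmed to what your backups and restores actually need:
```bash
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: velero-pv-only
  labels:
    component: velero
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: velero-pv-only
subjects:
- kind: ServiceAccount
  name: YOUR_SERVICEACCOUNT_HERE
  namespace: YOUR_NAMESPACE_HERE
roleRef:
  kind: ClusterRole
  name: velero-pv-only
  apiGroup: rbac.authorization.k8s.io
EOF
```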
[1]: https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
[2]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
[3]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[4]: namespace.md

View File

@ -0,0 +1,135 @@
---
title: "Release Instructions"
layout: docs
toc: "true"
---
This page covers the steps to perform when releasing a new version of Velero.
## General notes
- Please read the documented variables in each script to understand what they are for and how to properly format their values.
- You will need to have an upstream remote configured that points to the [vmware-tanzu/velero](https://github.com/vmware-tanzu/velero) repository.
You can check this using `git remote -v`.
The release script ([`tag-release.sh`](https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/hack/release-tools/tag-release.sh)) will use `upstream` as the default remote name if it is not specified using the environment variable `REMOTE`.
- GA release: major and minor releases only. Example: 1.0 (major), 1.5 (minor).
- Pre-releases: Any release leading up to a GA. Example: 1.4.0-beta.1, 1.5.0-rc.1
- RC releases: Release Candidate, contains everything that is supposed to ship with the GA release. This is still a pre-release.
## Preparing
### Create release blog post (GA only)
For each major or minor release, create and publish a blog post to let folks know what's new. Please follow these [instructions](#how-to-write-and-release-a-blog-post).
### Changelog and Docs PR
#### Troubleshooting
- If you encounter the error `You don't have enough free space in /var/cache/apt/archives/` when running `make serve-docs`: run `docker system prune`.
#### Steps
1. If it doesn't already exist: in a branch, create the file `changelogs/CHANGELOG-<major>.<minor>.md` by copying the most recent one.
1. Update the file `changelogs/CHANGELOG-<major>.<minor>.md`
- Run `make changelog` to generate a list of all unreleased changes.
- Copy/paste the output into `CHANGELOG-<major>.<minor>.md`, under the "All Changes" section for the release.
- You *may* choose to tweak formatting on the list of changes by adding code blocks, etc.
- Update links at the top of the file to point to the new release version
1. Update the main `CHANGELOG.md` file to properly reference the release-specific changelog file
- Under "Current release":
- Should contain only the current GA release.
- Under "Development release":
- Should contain only the latest pre-release
- Move any prior pre-release into "Older releases"
1. GA Only: Remove all changelog files from `changelogs/unreleased`.
1. Generate new docs
- Run `make gen-docs`, passing the appropriate variables. Examples:
a) `VELERO_VERSION=v1.5.0-rc.1 NEW_DOCS_VERSION=v1.5.0-rc.1 make gen-docs`.
b) `VELERO_VERSION=v1.5.0 NEW_DOCS_VERSION=v1.5 make gen-docs`.
- Note: `PREVIOUS_DOCS_VERSION=<doc-version-to-copy-from>` is optional; when not set, it will default to the latest doc version.
1. Clean up when there is an existing set of pre-release versioned docs for the version you are releasing
- Example: `site/content/docs/v1.5.0-beta.1` exists, and you're releasing `v1.5.0-rc.1` or `v1.5`
- Remove the directory containing the pre-release docs, i.e. `site/content/docs/<pre-release-version>`.
- Delete the pre-release docs table of contents file, i.e. `site/data/docs/<pre-release-version>-toc.yml`.
- Remove the pre-release docs table of contents mapping entry from `site/data/toc-mapping.yml`.
- Remove all references to the pre-release docs from `site/config.yml`.
1. Create the "Upgrade to $major.minor" page if it does not already exist ([example](https://velero.io/docs/v1.5/upgrade-to-1.5/)).
If it already exists, update any usage of the previous version string within this file to use the new version string instead ([example](https://github.com/vmware-tanzu/velero/pull/2941/files#diff-d594f8fd0901fed79c39aab4b348193d)).
This needs to be done in both the versioned and the `main` folders.
1. Review and submit PR
- Follow the additional instructions at `site/README-HUGO.md` to complete the docs generation process.
- Do a review of the diffs, and/or run `make serve-docs` and review the site.
- Submit a PR containing the changelog and the version-tagged docs.
## Velero release
### Notes
- Pre-requisite: PR with the changelog and docs is merged, so that it's included in the release tag.
- This process is the same for both pre-release and GA.
- Refer to the [General notes](#general-notes) above for instructions.
#### Troubleshooting
- If the dry-run fails with random errors, try running it again.
#### Steps
1. Create a tagged release in dry-run mode
- This won't push anything to GitHub.
- Run `VELERO_VERSION=v1.0.0-rc.1 REMOTE=<upstream-remote> GITHUB_TOKEN=REDACTED ./hack/release-tools/tag-release.sh`.
- Fix any issue.
1. Create a tagged release and push it to GitHub
- Run `VELERO_VERSION=v1.0.0-rc.1 REMOTE=<upstream-remote> GITHUB_TOKEN=REDACTED ./hack/release-tools/tag-release.sh publish`.
1. Publish the release
- Navigate to the draft GitHub release at https://github.com/vmware-tanzu/velero/releases and edit the release.
- If this is a patch release (e.g. `v1.4.1`), note that the full `CHANGELOG-1.4.md` contents will be included in the body of the GitHub release. You need to delete the previous releases' content (e.g. `v1.2.0`'s changelog) so that only the latest patch release's changelog shows.
- Do a quick review for formatting.
- **Note:** the `goreleaser` process should have detected if it's a pre-release version and, if so, checked the box at the bottom of the GitHub release page appropriately, but it's always worth double-checking.
- Verify that GitHub has built and pushed all the images (it takes a while): https://github.com/vmware-tanzu/velero/actions
- Verify that the images are on Docker Hub: https://hub.docker.com/r/velero/velero/tags
- Verify that the assets were published to the GitHub release
- Publish the release.
1. Test the release
- By now, the Docker images should have been published.
- Perform a smoke-test - for example:
- Download the CLI from the GitHub release
- Use it to install Velero into a cluster (or manually update an existing deployment to use the new images)
- Verify that `velero version` shows the expected output
- Run a backup/restore and ensure it works
## Homebrew release (GA only)
These are the steps to update the Velero Homebrew version.
### Steps
- If you don't already have one, create a [GitHub access token for Homebrew](https://github.com/settings/tokens/new?scopes=gist,public_repo&description=Homebrew)
- Run `export HOMEBREW_GITHUB_API_TOKEN=your_token_here` on your command line to make sure that `brew` can work on GitHub on your behalf.
- Run `hack/release-tools/brew-update.sh`. This script will download the necessary files, do the checks, and invoke the brew helper to submit the PR, which will open in your browser.
- Update Windows Chocolatey version. From a Windows computer, follow the step-by-step instructions to [create the Windows Chocolatey package for Velero CLI](https://github.com/adamrushuk/velero-choco/blob/main/README.md)
### Plugins
To release plugins maintained by the Velero team, follow the [plugin release instructions](plugin-release-instructions.md).
After the plugin images are built, be sure to update any [e2e tests][3] that use these plugins.
## How to write and release a blog post
What to include in a release blog:
* Thank all contributors for their involvement in the release.
* Where possible, shout out folks by name or consider spotlighting new maintainers.
* Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. See past Velero [release blog posts][1] for more examples.
* Include summaries of new features or workflows introduced in a release.
* This can also include new project initiatives, like a code-of-conduct update.
* Consider creating additional blog posts that go through new features in more detail. Plan to publish additional blogs after the release blog (all blogs don't have to be published at once).
Release blog post PR:
* Prepare a PR containing the release blog post. Read the [website guidelines][2] for more information on creating a blog post. It's usually easiest to make a copy of the most recent existing post, then replace the content as appropriate.
* You also need to update `site/index.html` to have "Latest Release Information" contain a link to the new post.
* Plan to publish the blog post the same day as the release.
## Announce a release
Once you are finished doing the release, let the rest of the world know it's available by posting messages in the following places.
1. GA Only: Merge the blog post PR.
1. Velero's Twitter account. Maintainers are encouraged to help spread the word by posting or reposting on social media.
1. Community Slack channel.
1. Google group message.
What to include:
* Thank all contributors
* A brief list of highlights in the release
* Link to the release blog post, release notes, and/or github release page
[1]: https://velero.io/blog
[2]: website-guidelines.md
[3]: test/e2e/velero_utils.go

View File

@ -0,0 +1,132 @@
---
title: "Resource filtering"
layout: docs
---
*Filter objects by namespace, type, or labels.*
Velero includes all objects in a backup or restore when no filtering options are used.
## Includes
When includes are used, only the specified resources are included and everything else is excluded.
A wildcard takes precedence when both a wildcard and a specific resource are included.
### --include-namespaces
* Backup a namespace and its objects.
```bash
velero backup create <backup-name> --include-namespaces <namespace>
```
* Restore two namespaces and their objects.
```bash
velero restore create <backup-name> --include-namespaces <namespace1>,<namespace2>
```
### --include-resources
* Backup all deployments in the cluster.
```bash
velero backup create <backup-name> --include-resources deployments
```
* Restore all deployments and configmaps in the cluster.
```bash
velero restore create <backup-name> --include-resources deployments,configmaps
```
* Backup the deployments in a namespace.
```bash
velero backup create <backup-name> --include-resources deployments --include-namespaces <namespace>
```
### --include-cluster-resources
This option can have three possible values:
* `true`: all cluster-scoped resources are included.
* `false`: no cluster-scoped resources are included.
* `nil` ("auto" or not supplied):
- Cluster-scoped resources are included when backing up or restoring all namespaces. Default: `true`.
- Cluster-scoped resources are not included when namespace filtering is used. Default: `false`.
* Some related cluster-scoped resources may still be backed up or restored if triggered by a custom action (for example, PVC->PV), unless `--include-cluster-resources=false` is set.
* Backup entire cluster including cluster-scoped resources.
```bash
velero backup create <backup-name>
```
* Restore only namespaced resources in the cluster.
```bash
velero restore create <backup-name> --include-cluster-resources=false
```
* Backup a namespace and include cluster-scoped resources.
```bash
velero backup create <backup-name> --include-namespaces <namespace> --include-cluster-resources=true
```
### --selector
* Include resources matching the label selector.
```bash
velero backup create <backup-name> --selector <key>=<value>
```
## Excludes
Exclude specific resources from the backup.
Wildcard excludes are ignored.
### --exclude-namespaces
* Exclude kube-system from the cluster backup.
```bash
velero backup create <backup-name> --exclude-namespaces kube-system
```
* Exclude two namespaces during a restore.
```bash
velero restore create <backup-name> --exclude-namespaces <namespace1>,<namespace2>
```
### --exclude-resources
* Exclude secrets from the backup.
```bash
velero backup create <backup-name> --exclude-resources secrets
```
* Exclude secrets and rolebindings.
```bash
velero backup create <backup-name> --exclude-resources secrets,rolebindings
```
### velero.io/exclude-from-backup=true
* Resources with the label `velero.io/exclude-from-backup=true` are not included in the backup, even if they contain a matching selector label.
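As a sketch, labeling a resource so that Velero skips it might look like this; the namespace and secret name are placeholders:
```bash
kubectl -n <namespace> label secret <secret-name> velero.io/exclude-from-backup=true
```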

View File

@ -0,0 +1,528 @@
---
title: "Restic Integration"
layout: docs
---
Velero supports backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic][1]. This support is considered beta quality. Please see the list of [limitations](#limitations) to understand if it fits your use case.
Velero allows you to take snapshots of persistent volumes as part of your backups if you're using one of
the supported cloud providers' block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks).
It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.
The restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations.
**NOTE:** hostPath volumes are not supported, but the [local volume type][4] is supported.
## Setup restic
### Prerequisites
- Understand how Velero performs [backups with the restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.10.0 and later. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
### Install restic
To install restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
```
velero install --use-restic
```
When using restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
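A hedged sketch combining the two flags; the provider, plugin image, bucket, and credentials values are placeholders:
```bash
velero install \
    --use-restic \
    --use-volume-snapshots=false \
    --provider <YOUR_PROVIDER> \
    --plugins <PLUGIN_CONTAINER_IMAGE> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_CREDENTIALS_FILE>
```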
### Configure restic DaemonSet spec
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications to the restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
**RancherOS**
Update the host path for volumes in the restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.
```yaml
hostPath:
path: /var/lib/kubelet/pods
```
to
```yaml
hostPath:
path: /opt/rke/var/lib/kubelet/pods
```
**OpenShift**
To mount the correct hostpath to pod volumes, run the restic pod in `privileged` mode.
1. Add the `velero` ServiceAccount to the `privileged` SCC:
```
$ oc adm policy add-scc-to-user privileged -z velero -n velero
```
2. For OpenShift version >= `4.1`, modify the DaemonSet yaml to request a privileged mode:
```diff
@@ -67,3 +67,5 @@ spec:
value: /credentials/cloud
- name: VELERO_SCRATCH_DIR
value: /scratch
+ securityContext:
+ privileged: true
```
or
```shell
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value": { "privileged": true}}]'
```
3. For OpenShift version < `4.1`, modify the DaemonSet yaml to request a privileged mode and mount the correct hostpath to pod volumes.
```diff
@@ -35,7 +35,7 @@ spec:
secretName: cloud-credentials
- name: host-pods
hostPath:
- path: /var/lib/kubelet/pods
+ path: /var/lib/origin/openshift.local.volumes/pods
- name: scratch
emptyDir: {}
containers:
@@ -67,3 +67,5 @@ spec:
value: /credentials/cloud
- name: VELERO_SCRATCH_DIR
value: /scratch
+ securityContext:
+ privileged: true
```
or
```shell
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value": { "privileged": true}}]'
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"replace","path":"/spec/template/spec/volumes/0/hostPath","value": { "path": "/var/lib/origin/openshift.local.volumes/pods"}}]'
```
If restic is not running in privileged mode, it will not be able to access pod volumes within the mounted hostpath directory because of the default enforced SELinux mode configured at the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
By default, a user-created OpenShift namespace will not schedule pods on all nodes in the cluster.
To schedule pods on all nodes, the namespace needs an annotation:
```
oc annotate namespace <velero namespace> openshift.io/node-selector=""
```
This should be done before the Velero installation.
Otherwise, the DaemonSet needs to be deleted and recreated:
```
oc get ds restic -o yaml -n <velero namespace> > ds.yaml
oc annotate namespace <velero namespace> openshift.io/node-selector=""
oc create -n <velero namespace> -f ds.yaml
```
**VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS)**
You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the hostpath.
The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`:
```yaml
hostPath:
path: /var/vcap/data/kubelet/pods
```
**Microsoft Azure**
If you are using [Azure Files][8], you need to add `nouser_xattr` to your storage class's `mountOptions`. See [this restic issue][9] for more details.
You can use the following command to patch the storage class:
```bash
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
--type json \
--patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
```
## To back up
Velero supports two approaches for discovering pod volumes that need to be backed up using restic:
- Opt-in approach: Every pod containing a volume to be backed up using restic must be annotated with the volume's name.
- Opt-out approach: All pod volumes are backed up using restic, with the ability to opt out any volumes that should not be backed up.
The following sections provide more details on the two approaches.
### Using the opt-out approach
In this approach, Velero will back up all pod volumes using restic with the exception of:
- Volumes mounting the default service account token, kubernetes secrets, and config maps
- Hostpath volumes
It is possible to exclude volumes from being backed up using the `backup.velero.io/backup-volumes-excludes` annotation on the pod.
Instructions to back up using this approach are as follows:
1. Run the following command on each pod that contains volumes that should **not** be backed up using restic
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes-excludes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod spec.
For example, in the following pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: app1
namespace: sample
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-webserver
volumeMounts:
- name: pvc1-vm
mountPath: /volume-1
- name: pvc2-vm
mountPath: /volume-2
volumes:
- name: pvc1-vm
persistentVolumeClaim:
claimName: pvc1
  - name: pvc2-vm
    persistentVolumeClaim:
      claimName: pvc2
```
to exclude restic backup of volume `pvc1-vm`, you would run:
```bash
kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm
```
2. Take a Velero backup:
```bash
velero backup create BACKUP_NAME --default-volumes-to-restic OTHER_OPTIONS
```
The above steps use the opt-out approach on a per-backup basis.
Alternatively, this behavior may be enabled for all Velero backups by running the `velero install` command with the `--default-volumes-to-restic` flag (see the sketch after these steps). Refer to the [install overview][11] for details.
3. When the backup completes, view information about the backups:
```bash
velero backup describe YOUR_BACKUP_NAME
```
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
```
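As referenced above, a hedged sketch of enabling the opt-out behavior for every backup at install time; the provider, plugin image, bucket, and credentials values are placeholders:
```bash
velero install \
    --use-restic \
    --default-volumes-to-restic \
    --provider <YOUR_PROVIDER> \
    --plugins <PLUGIN_CONTAINER_IMAGE> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_CREDENTIALS_FILE>
```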
### Using opt-in pod volume backup
By default, Velero uses this approach to discover pod volumes that need to be backed up using restic: every pod containing a volume to be backed up must be annotated with the volume's name.
Instructions to back up using this approach are as follows:
1. Run the following for each pod that contains a volume to back up:
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod spec.
For example, for the following pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: sample
namespace: foo
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-webserver
volumeMounts:
- name: pvc-volume
mountPath: /volume-1
- name: emptydir-volume
mountPath: /volume-2
volumes:
- name: pvc-volume
persistentVolumeClaim:
claimName: test-volume-claim
- name: emptydir-volume
emptyDir: {}
```
You'd run:
```bash
kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume
```
This annotation can also be provided in a pod template spec if you use a controller to manage your pods.
1. Take a Velero backup:
```bash
velero backup create NAME OPTIONS...
```
1. When the backup completes, view information about the backups:
```bash
velero backup describe YOUR_BACKUP_NAME
```
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
```
## To restore
Regardless of how volumes are discovered for backup using restic, the process of restoring remains the same.
1. Restore from your Velero backup:
```bash
velero restore create --from-backup BACKUP_NAME OPTIONS...
```
1. When the restore completes, view information about your pod volume restores:
```bash
velero restore describe YOUR_RESTORE_NAME
```
```bash
kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOUR_RESTORE_NAME -o yaml
```
## Limitations
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. Velero uses a static,
common encryption key for all restic repositories it creates. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
appropriately.
- An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those
volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
difference is small.
- If you plan to use the Velero restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
- Velero's restic integration backs up data from volumes by accessing the filesystem of the node on which the pod is running. For this reason, the restic integration can only back up volumes that are mounted by a pod, and not directly from the PVC.
## Customize Restore Helper Container
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image.
In addition, you can customize the resource requirements for the init container, should you need.
The ConfigMap must look like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: restic-restore-action-config
# must be in the velero namespace
namespace: velero
# the below labels should be used verbatim in your
# ConfigMap.
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in restic restore
# item action plugin)
velero.io/plugin-config: ""
# this label identifies the name and kind of plugin
# that this ConfigMap is for.
velero.io/restic: RestoreItemAction
data:
# The value for "image" can either include a tag or not;
# if the tag is *not* included, the tag from the main Velero
# image will automatically be used.
image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
# "cpuRequest" sets the request.cpu value on the restic init containers during restore.
# If not set, it will default to "100m". A value of "0" is treated as unbounded.
cpuRequest: 200m
# "memRequest" sets the request.memory value on the restic init containers during restore.
# If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
memRequest: 128Mi
# "cpuLimit" sets the request.cpu value on the restic init containers during restore.
# If not set, it will default to "100m". A value of "0" is treated as unbounded.
cpuLimit: 200m
# "memLimit" sets the request.memory value on the restic init containers during restore.
# If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
memLimit: 128Mi
# "secCtxRunAsUser sets the securityContext.runAsUser value on the restic init containers during restore."
secCtxRunAsUser: 1001
# "secCtxRunAsGroup sets the securityContext.runAsGroup value on the restic init containers during restore."
secCtxRunAsGroup: 999
```
## Troubleshooting
Run the following checks:
Are your Velero server and daemonset pods running?
```bash
kubectl get pods -n velero
```
Does your restic repository exist, and is it ready?
```bash
velero restic repo get
velero restic repo get REPO_NAME -o yaml
```
Are there any errors in your Velero backup/restore?
```bash
velero backup describe BACKUP_NAME
velero backup logs BACKUP_NAME
velero restore describe RESTORE_NAME
velero restore logs RESTORE_NAME
```
What is the status of your pod volume backups/restores?
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml
kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml
```
Is there any useful information in the Velero server or daemon pod logs?
```bash
kubectl -n velero logs deploy/velero
kubectl -n velero logs DAEMON_POD_NAME
```
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.
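For the server deployment, one hedged way to do this is a JSON patch like the following sketch; it assumes the Velero server is the first container in the pod template and that its flags live in `args`:
```bash
# Append --log-level=debug to the Velero server container's arguments.
kubectl -n velero patch deployment/velero --type json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--log-level=debug"}]'
```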
## How backup and restore work with restic
Velero has three custom resource definitions and associated controllers:
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.
You can see information about your Velero restic repositories by running `velero restic repo get`.
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to backup pod volume data.
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.
### Backup
1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using restic.
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
1. The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail
1. Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- finds the pod volume's subdirectory within the above volume
- runs `restic backup`
- updates the status of the custom resource to `Completed` or `Failed`
1. As each `PodVolumeBackup` finishes, the main Velero process adds it to the Velero backup in a file named `<backup-name>-podvolumebackups.json.gz`. This file gets uploaded to object storage alongside the backup tarball. It will be used for restores, as seen in the next section.
### Restore
1. The main Velero restore process checks each `PodVolumeBackup` custom resource associated with the backup it is restoring from.
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- waits for the pod to be running the init container
- finds the pod volume's subdirectory within the above volume
- runs `restic restore`
- on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore
that this pod volume restore is for
- updates the status of the custom resource to `Completed` or `Failed`
1. The init container that was added to the pod is running a process that waits until it finds a file
within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run
1. Once all such files are found, the init container's process terminates successfully and the pod moves
on to running other init containers/the main containers.
## 3rd party controllers
### Monitor backup annotation
Velero does not provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
[1]: https://github.com/restic/restic
[2]: customize-installation.md#enable-restic-integration
[3]: https://github.com/vmware-tanzu/velero/releases/
[4]: https://kubernetes.io/docs/concepts/storage/volumes/#local
[5]: http://restic.readthedocs.io/en/latest/100_references.html#terminology
[6]: https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
[7]: https://github.com/bitsbeats/velero-pvc-watcher
[8]: https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
[9]: https://github.com/restic/restic/issues/1800
[11]: customize-installation.md#default-pod-volume-backup-to-restic

View File

@ -0,0 +1,261 @@
---
title: "Restore Hooks"
layout: docs
---
Velero supports Restore Hooks, custom actions that can be executed during or after the restore process. There are two kinds of Restore Hooks:
1. InitContainer Restore Hooks: These will add init containers into restored pods to perform any necessary setup before the application containers of the restored pod can start.
1. Exec Restore Hooks: These can be used to execute custom commands or scripts in containers of a restored Kubernetes pod.
## InitContainer Restore Hooks
Use an `InitContainer` hook to add init containers into a pod before it's restored. You can use these init containers to run any setup needed for the pod to resume running from its backed-up state.
The InitContainer added by the restore hook will be the first init container in the `podSpec` of the restored pod.
If the pod had volumes backed up using restic, the restore hook InitContainer will be added after the `restic-wait` InitContainer.
NOTE: This ordering can be altered by any mutating webhooks that may be installed in the cluster.
There are two ways to specify `InitContainer` restore hooks:
1. Specifying restore hooks in annotations
1. Specifying restore hooks in the restore spec
### Specifying Restore Hooks As Pod Annotations
Below are the annotations that can be added to a pod to specify restore hooks:
* `init.hook.restore.velero.io/container-image`
* The container image for the init container to be added.
* `init.hook.restore.velero.io/container-name`
* The name for the init container that is being added.
* `init.hook.restore.velero.io/command`
* This is the `ENTRYPOINT` for the init container being added. This command is not executed within a shell and the container image's `ENTRYPOINT` is used if this is not provided.
#### Example
Use the below commands to add annotations to the pods before taking a backup.
```bash
$ kubectl annotate pod -n <POD_NAMESPACE> <POD_NAME> \
init.hook.restore.velero.io/container-name=restore-hook \
init.hook.restore.velero.io/container-image=alpine:latest \
init.hook.restore.velero.io/command='["/bin/ash", "-c", "date"]'
```
With the annotation above, Velero will add the following init container to the pod when it's restored.
```json
{
"command": [
"/bin/ash",
"-c",
"date"
],
"image": "alpine:latest",
"imagePullPolicy": "Always",
"name": "restore-hook"
...
}
```
### Specifying Restore Hooks In Restore Spec
Init container restore hooks can also be specified using the `RestoreSpec`.
Please refer to the documentation on the [Restore API Type][1] for how to specify hooks in the Restore spec.
#### Example
Below is an example of specifying restore hooks in `RestoreSpec`
```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
name: r2
namespace: velero
spec:
backupName: b2
excludedResources:
...
includedNamespaces:
- '*'
hooks:
resources:
- name: restore-hook-1
includedNamespaces:
- app
postHooks:
- init:
initContainers:
- name: restore-hook-init1
image: alpine:latest
volumeMounts:
- mountPath: /restores/pvc1-vm
name: pvc1-vm
command:
- /bin/ash
- -c
- echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
- name: restore-hook-init2
image: alpine:latest
volumeMounts:
- mountPath: /restores/pvc2-vm
name: pvc2-vm
command:
- /bin/ash
- -c
- echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
```
When the restore runs, the `hooks` in the above `RestoreSpec` will add two init containers to every pod in the `app` namespace:
```json
{
"command": [
"/bin/ash",
"-c",
"echo -n \"FOOBARBAZ\" >> /restores/pvc1-vm/foobarbaz"
],
"image": "alpine:latest",
"imagePullPolicy": "Always",
"name": "restore-hook-init1",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/restores/pvc1-vm",
"name": "pvc1-vm"
}
]
...
}
```
and
```json
{
"command": [
"/bin/ash",
"-c",
"echo -n \"DEADFEED\" >> /restores/pvc2-vm/deadfeed"
],
"image": "alpine:latest",
"imagePullPolicy": "Always",
"name": "restore-hook-init2",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/restores/pvc2-vm",
"name": "pvc2-vm"
}
]
...
}
```
## Exec Restore Hooks
Use an Exec Restore hook to execute commands in a restored pod's containers after they start.
There are two ways to specify `Exec` restore hooks:
1. Specifying exec restore hooks in annotations
1. Specifying exec restore hooks in the restore spec
If a pod has the annotation `post.hook.restore.velero.io/command` then that is the only hook that will be executed in the pod.
No hooks from the restore spec will be executed in that pod.
### Specifying Exec Restore Hooks As Pod Annotations
Below are the annotations that can be added to a pod to specify exec restore hooks:
* `post.hook.restore.velero.io/container`
* The container name where the hook will be executed. Defaults to the first container. Optional.
* `post.hook.restore.velero.io/command`
* The command that will be executed in the container. Required.
* `post.hook.restore.velero.io/on-error`
* How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode, no more restore hooks will be executed in any container in any pod and the status of the Restore will be `PartiallyFailed`. Optional.
* `post.hook.restore.velero.io/exec-timeout`
* How long to wait once execution begins. Defaults to 30 seconds. Optional.
* `post.hook.restore.velero.io/wait-timeout`
* How long to wait for a container to become ready. This should be long enough for the container to start plus any preceding hooks in the same container to complete. The wait timeout begins when the container is restored and may require time for the image to pull and volumes to mount. If not set the restore will wait indefinitely. Optional.
#### Example
Use the below commands to add annotations to the pods before taking a backup.
```bash
$ kubectl annotate pod -n <POD_NAMESPACE> <POD_NAME> \
post.hook.restore.velero.io/container=postgres \
post.hook.restore.velero.io/command='["/bin/bash", "-c", "psql < /backup/backup.sql"]' \
post.hook.restore.velero.io/wait-timeout=5m \
post.hook.restore.velero.io/exec-timeout=45s \
post.hook.restore.velero.io/on-error=Continue
```
### Specifying Exec Restore Hooks in Restore Spec
Exec restore hooks can also be specified using the `RestoreSpec`.
Please refer to the documentation on the [Restore API Type][1] for how to specify hooks in the Restore spec.
#### Multiple Exec Restore Hooks Example
Below is an example of specifying restore hooks in a `RestoreSpec`.
When using the restore spec it is possible to specify multiple hooks for a single pod, as this example demonstrates.
All hooks applicable to a single container will be executed sequentially in that container once it starts.
The ordering of hooks executed in a single container follows the order of the restore spec.
In this example, the `pg_isready` hook is guaranteed to run before the `psql` hook because they both apply to the same container and the `pg_isready` hook is defined first.
If a pod has multiple containers with applicable hooks, all hooks for a single container will be executed before executing hooks in another container.
In this example, if the postgres container starts before the sidecar container, both postgres hooks will run before the hook in the sidecar.
This means the sidecar container may be running for several minutes before its hook is executed.
Velero guarantees that no two hooks for a single pod are executed in parallel, but hooks executing in different pods may run in parallel.
```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
name: r2
namespace: velero
spec:
backupName: b2
excludedResources:
...
includedNamespaces:
- '*'
hooks:
resources:
- name: restore-hook-1
includedNamespaces:
- app
postHooks:
- exec:
execTimeout: 1m
waitTimeout: 5m
onError: Fail
container: postgres
command:
- /bin/bash
- '-c'
- 'while ! pg_isready; do sleep 1; done'
- exec:
container: postgres
waitTimeout: 6m
execTimeout: 1m
command:
- /bin/bash
- '-c'
- 'psql < /backup/backup.sql'
- exec:
container: sidecar
command:
- /bin/bash
- '-c'
- 'date > /start'
```
[1]: api-types/restore.md

View File

@ -0,0 +1,144 @@
---
title: "Restore Reference"
layout: docs
---
## Restoring Into a Different Namespace
Velero can restore resources into a different namespace than the one they were backed up from. To do this, use the `--namespace-mappings` flag:
```bash
velero restore create RESTORE_NAME \
--from-backup BACKUP_NAME \
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
```
## What happens when a user removes restore objects
A **restore** object represents the restore operation. There are two types of deletion for restore objects:
1. Deleting with **`velero restore delete`**.
This command will delete the custom resource representing the restore, along with its individual log and results files. However, it will not delete any objects that the restore created in your cluster (see the examples after this list).
2. Deleting with **`kubectl -n velero delete restore`**.
This command will delete the custom resource representing the restore, but will not delete the log/results files from object storage or any objects that were created in your cluster during the restore.
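For example, the two deletion modes above look like the following; the restore name is a placeholder:
```bash
# Deletes the restore custom resource and its log/results files.
velero restore delete <RESTORE_NAME>

# Deletes only the in-cluster custom resource; files in object storage remain.
kubectl -n velero delete restore <RESTORE_NAME>
```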
## Restore command-line options
To see all commands for restores, run: `velero restore --help`
To see all options associated with a specific command, provide the --help flag to that command. For example, **`velero restore create --help`** shows all options associated with the **create** command.
To list all options of restore, use **`velero restore --help`**
```
Usage:
velero restore [command]
Available Commands:
create Create a restore
delete Delete restores
describe Describe restores
get Get restores
logs Get restore logs
```
## What happens to NodePorts when restoring Services
**Auto assigned** NodePorts are **deleted** by default and Services get new **auto assigned** nodePorts after restore.
**Explicitly specified** NodePorts are auto detected using the **`last-applied-config`** annotation and are **preserved** after restore. NodePorts can be explicitly specified in the `.spec.ports[*].nodePort` field of the Service definition.
#### Always Preserve NodePorts
It is not always possible to set nodePorts explicitly on big clusters because of operational complexity. The official Kubernetes documentation states that preventing port collisions is the responsibility of the user when explicitly specifying nodePorts:
```
If you want a specific port number, you can specify a value in the `nodePort` field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
```
Clusters that do not explicitly specify nodePorts may still need to restore the original nodePorts after a disaster. Auto assigned nodePorts are most likely configured on load balancers sitting in front of the cluster, and changing all of those nodePorts on the load balancers adds operational complexity after a disaster if the nodePorts change.
Velero has a flag that lets the user decide whether to preserve nodePorts. The **`velero restore create`** sub command has a **`--preserve-nodeports`** flag to **always preserve** Service nodePorts, regardless of whether the nodePorts were **explicitly specified** or **not**. This flag preserves the original nodePorts from the backup and can be used as **`--preserve-nodeports`** or **`--preserve-nodeports=true`**.
If this flag is given and/or set to true, Velero does not remove the nodePorts when restoring a Service and tries to use the nodePorts that were written to the backup.
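For example, the backup name below is a placeholder:
```bash
velero restore create --from-backup <BACKUP_NAME> --preserve-nodeports
```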
Trying to preserve nodePorts may cause **port conflicts** when restoring in the situations below:
- If the nodePort from the backup is already allocated on the target cluster, Velero prints an error log as shown below and continues the restore operation.
```
time="2020-11-23T12:58:31+03:00" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/test-with-3-svc-20201123125825
time="2020-11-23T12:58:31+03:00" level=info msg="Restoring Services with original NodePort(s)" cmd=_output/bin/linux/amd64/velero logSource="pkg/restore/service_action.go:61" pluginName=velero restore=velero/test-with-3-svc-20201123125825
time="2020-11-23T12:58:31+03:00" level=info msg="Attempting to restore Service: hello-service" logSource="pkg/restore/restore.go:1107" restore=velero/test-with-3-svc-20201123125825
time="2020-11-23T12:58:31+03:00" level=error msg="error restoring hello-service: Service \"hello-service\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is already allocated" logSource="pkg/restore/restore.go:1170" restore=velero/test-with-3-svc-20201123125825
```
- If the nodePort from the backup is not in the nodePort range of the target cluster, Velero prints an error log as shown below and continues the restore operation. The default Kubernetes nodePort range is 30000-32767, but in this example the cluster's nodePort range is 20000-22767 and the restore tried to use nodePort 31536.
```
time="2020-11-23T13:09:17+03:00" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/test-with-3-svc-20201123130915
time="2020-11-23T13:09:17+03:00" level=info msg="Restoring Services with original NodePort(s)" cmd=_output/bin/linux/amd64/velero logSource="pkg/restore/service_action.go:61" pluginName=velero restore=velero/test-with-3-svc-20201123130915
time="2020-11-23T13:09:17+03:00" level=info msg="Attempting to restore Service: hello-service" logSource="pkg/restore/restore.go:1107" restore=velero/test-with-3-svc-20201123130915
time="2020-11-23T13:09:17+03:00" level=error msg="error restoring hello-service: Service \"hello-service\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is not in the valid range. The range of valid ports is 20000-22767" logSource="pkg/restore/restore.go:1170" restore=velero/test-with-3-svc-20201123130915
```
## Changing PV/PVC Storage Classes
Velero can change the storage class of persistent volumes and persistent volume claims during restores. To configure a storage class mapping, create a config map in the Velero namespace like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: change-storage-class-config
# must be in the velero namespace
namespace: velero
# the below labels should be used verbatim in your
# ConfigMap.
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in restore item action plugin)
velero.io/plugin-config: ""
# this label identifies the name and kind of plugin
# that this ConfigMap is for.
velero.io/change-storage-class: RestoreItemAction
data:
# add 1+ key-value pairs here, where the key is the old
# storage class name and the value is the new storage
# class name.
<old-storage-class>: <new-storage-class>
```
## Changing PVC selected-node
Velero can update the selected-node annotation of a persistent volume claim during restores. If the selected node doesn't exist in the cluster, Velero removes the selected-node annotation from the PersistentVolumeClaim. To configure a node mapping, create a config map in the Velero namespace like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: change-pvc-node-selector-config
# must be in the velero namespace
namespace: velero
# the below labels should be used verbatim in your
# ConfigMap.
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in restore item action plugin)
velero.io/plugin-config: ""
# this label identifies the name and kind of plugin
# that this ConfigMap is for.
velero.io/change-pvc-node-selector: RestoreItemAction
data:
# add 1+ key-value pairs here, where the key is the old
# node name and the value is the new node name.
<old-node-name>: <new-node-name>
```

View File

@ -0,0 +1,53 @@
---
title: "Run Velero locally in development"
layout: docs
---
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
image and redeploy it to the cluster with each change.
## Run Velero locally with a remote cluster
Velero runs against the Kubernetes API server as the endpoint (as per the `kubeconfig` configuration), so both the Velero server and client use the same `client-go` to communicate with Kubernetes. This means the Velero server can be run locally just as functionally as if it was running in the remote cluster.
### Prerequisites
When running Velero, you will need to ensure that you set up all of the following:
* Appropriate RBAC permissions in the cluster
* Read access for all data from the source cluster and namespaces
* Write access to the target cluster and namespaces
* Cloud provider credentials
* Read/write access to volumes
* Read/write access to object storage for backup data
* A [BackupStorageLocation][20] object definition for the Velero server
* (Optional) A [VolumeSnapshotLocation][21] object definition for the Velero server, to take PV snapshots
### 1. Install Velero
See documentation on how to install Velero in some specific providers: [Install overview][22]
### 2. Scale deployment down to zero
After you use the `velero install` command to install Velero into your cluster, scale the Velero deployment down to 0 so that the server is not also running in the remote cluster, which could cause the two instances to get out of sync:
`kubectl scale --replicas=0 deployment velero -n velero`
### 3. Start the Velero server locally
* To run the server locally, use the full path to the binary you built. For example, if you are on a Mac and using `AWS` as a provider, this is how to run the binary you built from source: `AWS_SHARED_CREDENTIALS_FILE=<path-to-credentials-file> ./_output/bin/darwin/amd64/velero`. Alternatively, you may add the `velero` binary to your `PATH`.
* Start the server: `velero server [CLI flags]`. The following CLI flags may be useful to customize, but see `velero server --help` for full details (an example invocation follows this list):
* `--log-level`: set the Velero server's log level (default `info`, use `debug` for the most logging)
* `--kubeconfig`: set the path to the kubeconfig file the Velero server uses to talk to the Kubernetes apiserver (default `$KUBECONFIG`)
* `--namespace`: set the namespace in which the Velero server looks for backups, schedules, and restores (default `velero`)
* `--plugin-dir`: set the directory where the Velero server looks for plugins (default `/plugins`)
* The `--plugin-dir` flag requires the plugin binary to be present locally, and should be set to the directory containing this built binary.
* `--metrics-address`: set the bind address and port where Prometheus metrics are exposed (default `:8085`)
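For example, on a Mac with the AWS provider, a local run combining these flags might look like the following (paths are illustrative):
```bash
# Run a locally built server against the current kubeconfig context,
# with debug logging and a local plugin directory.
AWS_SHARED_CREDENTIALS_FILE=<path-to-credentials-file> \
  ./_output/bin/darwin/amd64/velero server \
  --log-level debug \
  --namespace velero \
  --plugin-dir <path-to-local-plugin-binaries>
```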
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
[18]: https://eksctl.io/
[20]: api-types/backupstoragelocation.md
[21]: api-types/volumesnapshotlocation.md
[22]: basic-install.md
View File
@ -0,0 +1,34 @@
---
title: "Use Velero with a storage provider secured by a self-signed certificate"
layout: docs
---
If you are using an S3-Compatible storage provider that is secured with a self-signed certificate, connections to the object store may fail with a `certificate signed by unknown authority` message.
To proceed, provide a certificate bundle when adding the storage provider.
## Trusting a self-signed certificate during installation
When using the `velero install` command, you can use the `--cacert` flag to provide a path
to a PEM-encoded certificate bundle to trust.
```bash
velero install \
--plugins <PLUGIN_CONTAINER_IMAGE [PLUGIN_CONTAINER_IMAGE]> \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
--secret-file <PATH_TO_FILE> \
--cacert <PATH_TO_CA_BUNDLE>
```
Velero will then automatically use the provided CA bundle to verify TLS connections to
that storage provider when backing up and restoring.
## Trusting a self-signed certificate with the Velero client
To use the describe, download, or logs commands to access a backup or restore contained
in storage secured by a self-signed certificate as in the above example, you must use
the `--cacert` flag to provide a path to the certificate to be trusted.
```bash
velero backup describe my-backup --cacert <PATH_TO_CA_BUNDLE>
```
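The same flag applies to the other read commands mentioned above, for example:
```bash
velero backup logs my-backup --cacert <PATH_TO_CA_BUNDLE>
velero backup download my-backup --cacert <PATH_TO_CA_BUNDLE>
```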
View File
@ -0,0 +1,34 @@
---
title: "Start contributing"
layout: docs
---
## Before you start
* Please familiarize yourself with the [Code of Conduct][1] before contributing.
* Also, see [CONTRIBUTING.md][2] for instructions on the developer certificate of origin that we require.
## Creating a design doc
Having a high-level design document that describes the proposed change and its impact helps the maintainers evaluate whether a major change should be incorporated.
To make a design pull request, you can copy the template found in the `design/_template.md` file into a new Markdown file.
## Finding your way around
You may join the Velero community and contribute in many different ways, including helping us design or test new features. For any significant feature we consider adding, we start with a design document. You may find a list of designs currently in progress here: https://github.com/vmware-tanzu/velero/pulls?q=is%3Aopen+is%3Apr+label%3ADesign. Feel free to review and help us with your input.
You can also vote on issues using :+1: and :-1:, as explained in our [Feature enhancement request][3] and [Bug issue][4] templates. This will help us quantify importance and prioritize issues.
For information on how to connect with our maintainers and community, join our online meetings, or find good first issues, start on our [Velero community](https://velero.io/community/) page.
Please browse our list of resources, including a playlist of past online community meetings, blog posts, and other resources to help you get familiar with our project: [Velero resources](https://velero.io/resources/).
## Contributing
If you are ready to jump in and test, add code, or help with documentation, please use the navigation on the left under `Contribute`.
[1]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/CODE_OF_CONDUCT.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/CONTRIBUTING.md
[3]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/.github/ISSUE_TEMPLATE/feature-enhancement-request.md
[4]: https://github.com/vmware-tanzu/velero/blob/v1.6.0-rc.1/.github/ISSUE_TEMPLATE/bug_report.md
View File
@ -0,0 +1,344 @@
---
title: "Documentation Style Guide"
layout: docs
---
_This style guide is adapted from the [Kubernetes style guide](https://kubernetes.io/docs/contribute/style/style-guide/)._
This page outlines writing style guidelines for the Velero documentation; use it as a reference as you write or edit content. Note that these are guidelines, not rules. Use your best judgment as you write documentation, and feel free to propose changes to these guidelines. Changes to the style guide are made by the Velero maintainers as a group. To propose a change or addition, create an issue/PR, or add a suggestion to the [community meeting agenda](https://hackmd.io/Jq6F5zqZR7S80CeDWUklkA) and attend the meeting to participate in the discussion.
The Velero documentation uses the [kramdown](https://kramdown.gettalong.org/) Markdown renderer.
## Content best practices
### Use present tense
{{< table caption="Do and Don't - Use present tense" >}}
|Do|Don't|
|--- |--- |
|This `command` starts a proxy.|This command will start a proxy.|
{{< /table >}}
Exception: Use future or past tense if it is required to convey the correct meaning.
### Use active voice
{{< table caption="Do and Don't - Use active voice" >}}
|Do|Don't|
|--- |--- |
|You can explore the API using a browser.|The API can be explored using a browser.|
|The YAML file specifies the replica count.|The replica count is specified in the YAML file.|
{{< /table >}}
Exception: Use passive voice if active voice leads to an awkward sentence construction.
### Use simple and direct language
Use simple and direct language. Avoid using unnecessary phrases, such as saying "please."
{{< table caption="Do and Don't - Use simple and direct language" >}}
|Do|Don't|
|--- |--- |
|To create a ReplicaSet, ...|In order to create a ReplicaSet, ...|
|See the configuration file.|Please see the configuration file.|
|View the Pods.|With this next command, we'll view the Pods.|
{{< /table >}}
### Address the reader as "you"
{{< table caption="Do and Don't - Addressing the reader" >}}
|Do|Don't|
|--- |--- |
|You can create a Deployment by ...|We'll create a Deployment by ...|
|In the preceding output, you can see...|In the preceding output, we can see ...|
{{< /table >}}
### Avoid Latin phrases
Prefer English terms over Latin abbreviations.
{{< table caption="Do and Don't - Avoid Latin phrases" >}}
|Do|Don't|
|--- |--- |
|For example, ...|e.g., ...|
|That is, ...|i.e., ...|
{{< /table >}}
Exception: Use "etc." for et cetera.
## Patterns to avoid
### Avoid using "we"
Using "we" in a sentence can be confusing, because the reader might not know
whether they're part of the "we" you're describing.
{{< table caption="Do and Don't - Avoid using we" >}}
|Do|Don't|
|--- |--- |
|Version 1.4 includes ...|In version 1.4, we have added ...|
|Kubernetes provides a new feature for ...|We provide a new feature ...|
|This page teaches you how to use Pods.|In this page, we are going to learn about Pods.|
{{< /table >}}
### Avoid jargon and idioms
Many readers speak English as a second language. Avoid jargon and idioms to help them understand better.
{{< table caption="Do and Don't - Avoid jargon and idioms" >}}
|Do|Don't|
|--- |--- |
|Internally, ...|Under the hood, ...|
|Create a new cluster.|Turn up a new cluster.|
{{< /table >}}
### Avoid statements about the future or that will soon be out of date
Avoid making promises or giving hints about the future. If you need to talk about
a beta feature, put the text under a heading that identifies it as beta
information.
Also avoid words like “recently”, "currently" and "new." A feature that is new today might not be
considered new in a few months.
{{< table caption="Do and Don't - Avoid statements that will soon be out of date" >}}
|Do|Don't|
|--- |--- |
|In version 1.4, ...|In the current version, ...|
|The Federation feature provides ...|The new Federation feature provides ...|
{{< /table >}}
### Language
This documentation uses U.S. English spelling and grammar.
## Documentation formatting standards
### Use camel case for API objects
When you refer to an API object, use the same uppercase and lowercase letters
that are used in the actual object name. Typically, the names of API
objects use
[camel case](https://en.wikipedia.org/wiki/Camel_case).
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
Refer to API objects without saying "object," unless omitting "object"
leads to an awkward sentence construction.
{{< table caption="Do and Don't - Do and Don't - API objects" >}}
|Do|Don't|
|--- |--- |
|The Pod has two containers.|The pod has two containers.|
|The Deployment is responsible for ...|The Deployment object is responsible for ...|
|A PodList is a list of Pods.|A Pod List is a list of pods.|
|The two ContainerPorts ...|The two ContainerPort objects ...|
|The two ContainerStateTerminated objects ...|The two ContainerStateTerminateds ...|
{{< /table >}}
### Use angle brackets for placeholders
Use angle brackets for placeholders. Tell the reader what a placeholder represents.
1. Display information about a Pod:

        kubectl describe pod <pod-name> -n <namespace>

    If the pod is in the default namespace, you can omit the '-n' parameter.
### Use bold for user interface elements
{{< table caption="Do and Don't - Bold interface elements" >}}
|Do|Don't|
|--- |--- |
|Click **Fork**.|Click "Fork".|
|Select **Other**.|Select "Other".|
{{< /table >}}
### Use italics to define or introduce new terms
{{< table caption="Do and Don't - Use italics for new terms" >}}
|Do|Don't|
|--- |--- |
|A _cluster_ is a set of nodes ...|A "cluster" is a set of nodes ...|
|These components form the _control plane_.|These components form the **control plane**.|
{{< /table >}}
### Use code style for filenames, directories, paths, object field names and namespaces
{{< table caption="Do and Don't - Use code style for filenames, directories, paths, object field names and namespaces" >}}
|Do|Don't|
|--- |--- |
|Open the `envars.yaml` file.|Open the envars.yaml file.|
|Go to the `/docs/tutorials` directory.|Go to the /docs/tutorials directory.|
|Open the `/_data/concepts.yaml` file.|Open the /\_data/concepts.yaml file.|
{{< /table >}}
### Use punctuation inside quotes
{{< table caption="Do and Don't - Use code style for filenames, directories, paths, object field names and namespaces" >}}
|Do|Don't|
|--- |--- |
|events are recorded with an associated "stage."|events are recorded with an associated "stage".|
|The copy is called a "fork."|The copy is called a "fork".|
{{< /table >}}
Exception: When the quoted word is a user input.
Example:
* My user ID is “IM47g”.
* Did you try the password “mycatisawesome”?
## Inline code formatting
### Use code style for inline code and commands
For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).
{{< table caption="Do and Don't - Use code style for filenames, directories, paths, object field names and namespaces" >}}
|Do|Don't|
|--- |--- |
|The `kubectl run` command creates a Deployment.|The "kubectl run" command creates a Deployment.|
|For declarative management, use `kubectl apply`.|For declarative management, use "kubectl apply".|
|Use single backticks to enclose inline code. For example, `var example = true`.|Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.|
|Use triple backticks (\`\`\`) before and after a multi-line block of code for fenced code blocks.|Use multi-line blocks of code to create diagrams, flowcharts, or other illustrations.|
|Use meaningful variable names that have a context.|Use variable names such as 'foo','bar', and 'baz' that are not meaningful and lack context.|
|Remove trailing spaces in the code.|Add trailing spaces in the code, where these are important, because a screen reader will read out the spaces as well.|
{{< /table >}}
### Starting a sentence with a component tool or component name
{{< table caption="Do and Don't - Starting a sentence with a component tool or component name" >}}
|Do|Don't|
|--- |--- |
|The `kubeadm` tool bootstraps and provisions machines in a cluster.|`kubeadm` tool bootstraps and provisions machines in a cluster.|
|The kube-scheduler is the default scheduler for Kubernetes.|kube-scheduler is the default scheduler for Kubernetes.|
{{< /table >}}
### Use normal style for string and integer field values
For field values of type string or integer, use normal style without quotation marks.
{{< table caption="Do and Don't - Use normal style for string and integer field values" >}}
|Do|Don't|
|--- |--- |
|Set the value of `imagePullPolicy` to `Always`.|Set the value of `imagePullPolicy` to "Always".|
|Set the value of `image` to `nginx:1.16`.|Set the value of `image` to nginx:1.16.|
|Set the value of the `replicas` field to `2`.|Set the value of the `replicas` field to 2.|
{{< /table >}}
## Code snippet formatting
### Don't include the command prompt
{{< table caption="Do and Don't - Don't include the command prompt" >}}
|Do|Don't|
|--- |--- |
|kubectl get pods|$ kubectl get pods|
{{< /table >}}
### Separate commands from output
Verify that the Pod is running on your chosen node:
```
kubectl get pods --output=wide
```
The output is similar to this:
```
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 13s 10.200.0.4 worker0
```
## Velero.io word list
A list of Velero-specific terms and words to be used consistently across the site.
{{< table caption="Velero.io word list" >}}
|Term|Usage|
|--- |--- |
|Kubernetes|Kubernetes should always be capitalized.|
|Docker|Docker should always be capitalized.|
|Velero|Velero should always be capitalized.|
|VMware|VMware should always be correctly capitalized.|
|On-premises|On-premises or on-prem rather than on-premise or other variations.|
|Backup|Backup rather than back up, back-up or other variations.|
|Plugin|Plugin rather than plug-in or other variations.|
|Allowlist|Use allowlist instead of whitelist.|
|Denylist|Use denylist instead of blacklist.|
{{< /table >}}
## Markdown elements
### Headings
People accessing this documentation may use a screen reader or other assistive technology (AT). [Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices: they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest.
{{< table caption="Do and Don't - Headings" >}}
|Do|Don't|
|--- |--- |
|Include a title on each page or blog post.|Include more than one title headings (#) in a page.|
|Use ordered headings to provide a meaningful high-level outline of your content.|Use headings level 4 through 6, unless it is absolutely necessary. If your content is that detailed, it may need to be broken into separate articles.|
|Use sentence case for headings. For example, **Extend kubectl with plugins**|Use title case for headings. For example, **Extend Kubectl With Plugins**|
{{< /table >}}
### Paragraphs
{{< table caption="Do and Don't - Paragraphs" >}}
|Do|Don't|
|--- |--- |
|Try to keep paragraphs under 6 sentences.|Write long-winded paragraphs.|
|Use three hyphens (`---`) to create a horizontal rule for breaks in paragraph content.|Use horizontal rules for decoration.|
{{< /table >}}
### Links
{{< table caption="Do and Don't - Links" >}}
|Do|Don't|
|--- |--- |
|Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See [check required ports](#check-required-ports) for more details.|Use ambiguous terms such as “click here”. For example: Certain ports are open on your machines. See [here](#check-required-ports) for more details.|
|Write Markdown-style links: `[link text](URL)`. For example: `[community meeting agenda](https://hackmd.io/Jq6F5zqZR7S80CeDWUklkA)` and the output is [community meeting agenda](https://hackmd.io/Jq6F5zqZR7S80CeDWUklkA).|Write HTML-style links: `<a href="URL" target="_blank">Visit our tutorial!</a>`|
{{< /table >}}
### Lists
Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list.
Website navigation links can also be marked up as list items; after all they are nothing but a group of related links.
- End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences.
- Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.
- Use the number one (`1.`) for ordered lists.
- Use (`+`), (`*`), or (`-`) for unordered lists - be consistent within the same document.
- Leave a blank line after each list.
- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab.
### Tables
The semantic purpose of a data table is to present tabular data. Sighted users can quickly scan the table but a screen reader goes through line by line. A table [caption](https://www.w3schools.com/tags/tag_caption.asp) is used to create a descriptive title for a data table. Assistive technologies (AT) use the HTML table caption element to identify the table contents to the user within the page structure.
If you need to create a table, create the table in markdown and use the table [Hugo shortcode](https://gohugo.io/content-management/shortcodes/) to include a caption.
```
{{</* table caption="Configuration parameters" >}}
Parameter | Description | Default
:---------|:------------|:-------
`timeout` | The timeout for requests | `30s`
`logLevel` | The log level for log output | `INFO`
{{< /table */>}}
```
**Note:** This shortcode does not support markdown reference-style links. Use inline-style links in tables. See more information about [markdown link styles](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#links).
View File
@ -0,0 +1,44 @@
---
title: "Support Process"
layout: docs
---
## Weekly Rotation
The Velero maintainers use a weekly rotation to manage community support. Each week, a different maintainer is the point person for responding to incoming support issues via Slack, GitHub, and the Google group. The point person is *not* expected to be on-call 24x7. Instead, they choose one or more hour(s) per day to be available/responding to incoming issues. They will communicate to the community what that time slot will be each week.
## Start of Week
We will update the public Slack channel's topic to indicate that you are the point person for the week, and what hours you'll be available.
## During the Week
### Where we will monitor
- `#velero` public Slack channel in Kubernetes org
- [all Velero-related repos][0] in GitHub (`velero`, `velero-plugin-for-[aws|gcp|microsoft-azure|csi]`, `helm-charts`)
- [Project Velero Google Group][1]
### GitHub issue flow
Generally speaking, new GitHub issues will fall into one of several categories. We use the following process for each:
1. **Feature request**
- Label the issue with `Enhancement/User` or `Enhancement/Dev`
- Leave the issue in the `New Issues` swimlane for triage by product mgmt
1. **Bug**
- Label the issue with `Bug`
- Leave the issue in the `New Issues` swimlane for triage by product mgmt
1. **User question/problem** that does not clearly fall into one of the previous categories
- When you start investigating/responding, label the issue with `Investigating`
- Add comments as you go, so both the user and future support people have as much context as possible
- Use the `Needs Info` label to indicate an issue is waiting for information from the user. Remove/re-add the label as needed.
- If you resolve the issue with the user, close it out
- If the issue ends up being a feature request or a bug, update the title and follow the appropriate process for it
- If the reporter becomes unresponsive after multiple pings, close out the issue due to inactivity and comment that the user can always reach out again as needed
## End of Week
We ensure all GitHub issues worked on during the week are labeled with `Investigating` and `Needs Info` (if appropriate), and have updated comments so the next person can pick them up.
[0]: https://github.com/vmware-tanzu?q=velero&type=&language=
[1]: https://groups.google.com/forum/#!forum/projectvelero
View File
@ -0,0 +1,69 @@
---
title: "Providers"
layout: docs
---
Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Velero supported providers
{{< table caption="Velero supported providers" >}}
| Provider | Object Store | Volume Snapshotter | Plugin Provider Repo | Setup Instructions |
|-----------------------------------|---------------------|------------------------------|-----------------------------------------|-------------------------------|
| [Amazon Web Services (AWS)](https://aws.amazon.com) | AWS S3 | AWS EBS | [Velero plugin for AWS](https://github.com/vmware-tanzu/velero-plugin-for-aws) | [AWS Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-aws#setup) |
| [Google Cloud Platform (GCP)](https://cloud.google.com) | Google Cloud Storage| Google Compute Engine Disks | [Velero plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp) | [GCP Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) |
| [Microsoft Azure](https://azure.com) | Azure Blob Storage | Azure Managed Disks | [Velero plugin for Microsoft Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure) | [Azure Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure#setup) |
| [VMware vSphere](https://github.com/vmware-tanzu/velero-plugin-for-vsphere) | 🚫 | vSphere Volumes | [VMware vSphere](https://github.com/vmware-tanzu/velero-plugin-for-vsphere) | [vSphere Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-vsphere#installing-the-plugin) |
| [Container Storage Interface (CSI)](https://github.com/vmware-tanzu/velero-plugin-for-csi/)| 🚫 | CSI Volumes | [Velero plugin for CSI](https://github.com/vmware-tanzu/velero-plugin-for-csi/) | [CSI Plugin Setup](csi.md) |
{{< /table >}}
Contact: [#Velero Slack](https://kubernetes.slack.com/messages/velero), [GitHub Issues](https://github.com/vmware-tanzu/velero/issues)
## Community supported providers
{{< table caption="Community supported providers" >}}
| Provider | Object Store | Volume Snapshotter | Plugin Documentation | Contact |
|---------------------------|------------------------------|------------------------------------|------------------------|---------------------------------|
| [AlibabaCloud](https://www.alibabacloud.com/) | Alibaba Cloud OSS | Alibaba Cloud | [AlibabaCloud](https://github.com/AliyunContainerService/velero-plugin) | [GitHub Issue](https://github.com/AliyunContainerService/velero-plugin/issues) |
| [DigitalOcean](https://www.digitalocean.com/) | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | [StackPointCloud](https://github.com/StackPointCloud/ark-plugin-digitalocean) | |
| [Hewlett Packard](https://www.hpe.com/us/en/storage.html) | 🚫 | HPE Storage | [Hewlett Packard](https://github.com/hpe-storage/velero-plugin) | [Slack](https://slack.hpedev.io/), [GitHub Issue](https://github.com/hpe-storage/velero-plugin/issues) |
| [OpenEBS](https://openebs.io/) | 🚫 | OpenEBS CStor Volume | [OpenEBS](https://github.com/openebs/velero-plugin) | [Slack](https://openebs-community.slack.com/), [GitHub Issue](https://github.com/openebs/velero-plugin/issues) |
| [Portworx](https://portworx.com/) | 🚫 | Portworx Volume | [Portworx](https://docs.portworx.com/scheduler/kubernetes/ark.html) | [Slack](https://portworx.slack.com/messages/px-k8s), [GitHub Issue](https://github.com/portworx/ark-plugin/issues) |
| [Storj](https://storj.io) | Storj Object Storage | 🚫 | [Storj](https://github.com/storj-thirdparty/velero-plugin) | [GitHub Issue](https://github.com/storj-thirdparty/velero-plugin/issues) |
{{< /table >}}
## S3-Compatible object store providers
Velero's AWS Object Store plugin uses [Amazon's Go SDK][0] to connect to the AWS S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
_Note that these storage providers are not regularly tested by the Velero team._
* [IBM Cloud][1]
* [Oracle Cloud][2]
* [Minio][3]
* [DigitalOcean][4]
* [NooBaa][5]
* [Tencent Cloud][7]
* Ceph RADOS v12.2.7
* Quobyte
* [Cloudian HyperStore][38]
_Some storage providers, like Quobyte, may need a different [signature algorithm version][6]._
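As a minimal sketch (all values below are placeholders), a `BackupStorageLocation` for an S3-compatible store uses the AWS plugin as its provider and points `s3Url` at the store's endpoint:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: <your-bucket>
  config:
    region: <region>
    s3ForcePathStyle: "true"
    s3Url: https://<s3-compatible-endpoint>
```
See your provider's documentation and the AWS plugin's [configuration reference][6] for the exact keys required.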
## Non-supported volume snapshots
If you want to take volume snapshots but there is no plugin for your provider, Velero can back up volumes using restic. Please see the [restic integration][30] documentation.
[0]: https://github.com/aws/aws-sdk-go/aws
[1]: contributions/ibm-config.md
[2]: contributions/oracle-config.md
[3]: contributions/minio.md
[4]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[5]: http://www.noobaa.com/
[6]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/backupstoragelocation.md
[7]: contributions/tencent-config.md
[25]: https://github.com/hpe-storage/velero-plugin
[30]: restic.md
[36]: https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup
[38]: https://www.cloudian.com/
View File
@ -0,0 +1,210 @@
---
title: "Rapid iterative Velero development with Tilt "
layout: docs
---
## Overview
This document describes how to use [Tilt](https://tilt.dev) with any cluster for a simplified
workflow that offers easy deployments and rapid iterative builds.
This setup allows for continuous redeployment of the Velero server and, if specified, any provider plugin or the restic DaemonSet.
It does this by:
1. Deploying the necessary Kubernetes resources, such as the Velero CRDs and Velero deployment
1. Building a local binary for Velero and (if specified) provider plugins as a `local_resource`
1. Invoking `docker_build` to live update any binary into the container/init container and trigger a re-start
Tilt will look for configuration files under `velero/tilt-resources`. Most of the
files in this directory are gitignored so you may configure your setup according to your needs.
## Prerequisites
1. [Docker](https://docs.docker.com/install/) v19.03 or newer
1. A Kubernetes cluster v1.10 or greater (does not have to be Kind)
1. [Tilt](https://docs.tilt.dev/install.html) v0.12.0 or newer
1. Clone the [Velero project](https://github.com/vmware-tanzu/velero) repository
locally
1. Access to an S3 object storage
1. Clone any [provider plugin(s)](https://velero.io/plugins/) you want to make changes to and deploy (optional, must be configured to be deployed by the Velero Tilt's setup, [more info below](#provider-plugins))
Note: To properly configure any plugin you use, please follow the plugin's documentation.
## Getting started
### tl;dr
- Copy all sample files under `velero/tilt-resources/examples` into `velero/tilt-resources`.
- Configure the `velero_v1_backupstoragelocation.yaml` file, and the `cloud` file for the storage credentials/secret.
- Run `tilt up`.
### Create a Tilt settings file
Create a configuration file named `tilt-settings.json` and place it in your local copy of `velero/tilt-resources`. Alternatively,
you may copy and paste the sample file found in `velero/tilt-resources/examples`.
Here is an example:
```json
{
"default_registry": "",
"enable_providers": [
"aws",
"gcp",
"azure",
"csi"
],
"providers": {
"aws": "../velero-plugin-for-aws",
"gcp": "../velero-plugin-for-gcp",
"azure": "../velero-plugin-for-microsoft-azure",
"csi": "../velero-plugin-for-csi"
},
"allowed_contexts": [
"development"
],
"enable_restic": false,
"create_backup_locations": true,
"setup-minio": true,
"enable_debug": false,
"debug_continue_on_start": true
}
```
#### tilt-settings.json fields
**default_registry** (String, default=""): The image registry to use if you need to push images. See the [Tilt
documentation](https://docs.tilt.dev/api.html#api.default_registry) for more details.
**provider_repos** (Array[]String, default=[]): A list of paths to all the provider plugins you want to make changes to. Each provider must have a
`tilt-provider.json` file describing how to build the provider.
**enable_providers** (Array[]String, default=[]): A list of the provider plugins to enable. See [provider plugins](#provider-plugins)
for more details. Note: when not making changes to a plugin, it is not necessary to load them into
Tilt: an existing image and version might be specified in the Velero deployment instead, and Tilt will load that.
**allowed_contexts** (Array, default=[]): A list of kubeconfig contexts Tilt is allowed to use. See the Tilt documentation on
[allow_k8s_contexts](https://docs.tilt.dev/api.html#api.allow_k8s_contexts) for more details. Note: Kind is automatically allowed.
**enable_restic** (Bool, default=false): Indicate whether to deploy the restic Daemonset. If set to `true`, Tilt will look for a `velero/tilt-resources/restic.yaml` file
containing the configuration of the Velero restic DaemonSet.
**create_backup_locations** (Bool, default=false): Indicate whether to create one or more backup storage locations. If set to `true`, Tilt will look for a `velero/tilt-resources/velero_v1_backupstoragelocation.yaml` file
containing at least one configuration for a Velero backup storage location.
**setup-minio** (Bool, default=false): Set this to `true` if you want to configure backup storage locations in a MinIO instance running inside your cluster.
**enable_debug** (Bool, default=false): Configure this to `true` if you want to debug the velero process using [Delve](https://github.com/go-delve/delve).
**debug_continue_on_start** (Bool, default=true): Configure this to `true` if you want the velero process to continue on start when in debug mode. See [Delve CLI documentation](https://github.com/go-delve/delve/blob/master/Documentation/usage/dlv.md).
### Create Kubernetes resource files to deploy
All needed Kubernetes resource files are provided as ready-to-use samples in the `velero/tilt-resources/examples` directory. You only have to move them to the `velero/tilt-resources` level.
Because the Velero Kubernetes deployment and the restic DaemonSet contain the configuration for any plugin to be used, files for these resources are expected to be provided by the user, so you may choose which provider plugin to load as an init container. Currently, the sample files are configured with all the plugins supported by Velero; feel free to remove any of them as needed.
For Velero to operate fully, it also needs at least one backup
storage location. A sample file is provided that needs to be modified with the specific
configuration for your object storage. See the next sub-section for more details on this.
### Configure a backup storage location
You will have to configure the `velero/tilt-resources/velero_v1_backupstoragelocation.yaml` with the proper values according to your storage provider. Read the [plugin documentation](https://velero.io/plugins/)
to learn what field/value pairs are required for your particular provider's backup storage location configuration.
Below are some ways to configure a backup storage location for Velero.
#### As a storage with a service provider
Follow the provider documentation to provision the storage. We have a [list of all known object storage providers](supported-providers/) with corresponding plugins for Velero.
#### Using MinIO as an object storage
Note: to use MinIO as an object store, you will need to use the [`AWS` plugin](https://github.com/vmware-tanzu/velero-plugin-for-aws), and configure the storage location with the `spec.provider` set to `aws` and the `spec.config.region` set to `minio`. Example:
```
spec:
config:
region: minio
s3ForcePathStyle: "true"
s3Url: http://minio.velero.svc:9000
objectStorage:
bucket: velero
provider: aws
```
Here are two ways to use MinIO as the storage:
1) As a MinIO instance running inside your cluster (don't do this for production!)
In the `tilt-settings.json` file, set `"setup-minio": true`. This will configure a Kubernetes deployment containing a running
instance of MinIO inside your cluster. There are [extra steps](contributions/minio/#expose-minio-outside-your-cluster-with-a-service)
necessary to expose MinIO outside the cluster.
To access this storage, you will need to expose MinIO outside the cluster by forwarding the MinIO port to the local machine using `kubectl port-forward -n <velero-namespace> svc/minio 9000`. Update the BSL configuration to use that as its "public URL" by adding `publicUrl: http://localhost:9000` to the BSL config. This is necessary to do things like download a backup file.
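For example, assuming Velero and MinIO are installed in the `velero` namespace:
```bash
kubectl port-forward -n velero svc/minio 9000
```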
Note: with this setup, when your cluster is terminated so is the storage and any backup/restore in it.
1) As a standalone MinIO instance running locally in a Docker container
See [these instructions](https://github.com/vmware-tanzu/velero/discussions/3381) to run MinIO locally on your computer, as a standalone as opposed to running it on a Pod.
Please see our [locations documentation](locations/) to learn more how backup locations work.
### Configure the provider credentials (secret)
Whatever object storage provider you use, configure its credentials in the `velero/tilt-resources/cloud` file. Read the [plugin documentation](https://velero.io/plugins/)
to learn what field/value pairs are required for your provider's credentials. The Tilt file will invoke Kustomize to create the secret under the hard-coded key `secret.cloud-credentials.data.cloud` in the Velero namespace.
There is a sample credentials file, properly formatted for MinIO storage credentials, in `velero/tilt-resources/examples/cloud`.
### Configure debugging with Delve
If you would like to debug the Velero process, you can enable debug mode by setting the field `enable_debug` to `true` in your `tilt-resources/tilt-settings.json` file.
This will enable you to debug the process using [Delve](https://github.com/go-delve/delve).
By enabling debug mode, the Velero executable will be built in debug mode (using the flags `-gcflags="-N -l"` which disables optimizations and inlining), and the process will be started in the Velero deployment using [`dlv exec`](https://github.com/go-delve/delve/blob/master/Documentation/usage/dlv_exec.md).
The debug server will accept connections on port 2345 and Tilt is configured to forward this port to the local machine.
Once Tilt is [running](#run-tilt) and the Velero resource is ready, you can connect to the debug server to begin debugging.
To connect to the session, you can use the Delve CLI locally by running `dlv connect 127.0.0.1:2345`. See the [Delve CLI documentation](https://github.com/go-delve/delve/tree/master/Documentation/cli) for more guidance on how to use Delve.
Delve can also be used within a number of [editors and IDEs](https://github.com/go-delve/delve/blob/master/Documentation/EditorIntegration.md).
By default, the Velero process will continue on start when in debug mode.
This means that the process will run until a breakpoint is set.
You can disable this by setting the field `debug_continue_on_start` to `false` in your `tilt-resources/tilt-settings.json` file.
When this setting is disabled, the Velero process will not continue to run until a `continue` instruction is issued through your Delve session.
When exiting your debug session, the CLI and editor integrations will typically ask if the remote process should be stopped.
It is important to leave the remote process running and just disconnect from the debugging session.
By stopping the remote process, that will cause the Velero container to stop and the pod to restart.
If backups are in progress, these will be left in a stale state as they are not resumed when the Velero pod restarts.
### Run Tilt!
To launch your development environment, run:
``` bash
tilt up
```
This will output the address to a web browser interface where you can monitor Tilt's status and the logs for each Tilt resource. After a brief amount of time, you should have a running development environment, and you should now be able to
create backups/restores and fully operate Velero.
Note: Running `tilt down` after exiting out of Tilt [will delete all resources](https://docs.tilt.dev/cli/tilt_down.html) specified in the Tiltfile.
Tip: Create an alias to `velero/_tiltbuild/local/velero` and you won't have to run `make local` to get a refreshed version of the Velero CLI; just use the alias.
Please see the documentation for [how Velero works](how-velero-works/).
## Provider plugins
A provider must supply a `tilt-provider.json` file describing how to build it. Here is an example:
```json
{
"plugin_name": "velero-plugin-for-aws",
"context": ".",
"image": "velero/velero-plugin-for-aws",
"live_reload_deps": [
"velero-plugin-for-aws"
],
"go_main": "./velero-plugin-for-aws"
}
```
## Live updates
Each provider plugin configured to be deployed by Velero's Tilt setup has a `live_reload_deps` list. This defines the files and/or directories that Tilt
should monitor for changes. When a dependency is modified, Tilt rebuilds the provider's binary **on your local
machine**, copies the binary to the init container, and triggers a restart of the Velero container. This is significantly faster
than rebuilding the container image for each change. It also helps keep the size of each development image as small as
possible (the container images do not need the entire go toolchain, source code, module dependencies, etc.).
View File
@ -0,0 +1,219 @@
---
title: "Troubleshooting"
layout: docs
---
These tips can help you troubleshoot known issues. If they don't help, you can [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Debug installation/setup issues
- [Debug installation/setup issues][2]
## Debug restores
- [Debug restores][1]
## General troubleshooting information
You can use the `velero bug` command to open a [GitHub issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available) and the `velero` client version. This information isn't submitted to GitHub until you click the `Submit new issue` button in the GitHub UI, so feel free to add, remove or update whatever information you like.
Some general commands for troubleshooting that may be helpful:
* `velero backup describe <backupName>` - describe the details of a backup
* `velero backup logs <backupName>` - fetch the logs for this specific backup. Useful for viewing failures and warnings, including resources that could not be backed up.
* `velero restore describe <restoreName>` - describe the details of a restore
* `velero restore logs <restoreName>` - fetch the logs for this specific restore. Useful for viewing failures and warnings, including resources that could not be restored.
* `kubectl logs deployment/velero -n velero` - fetch the logs of the Velero server pod. This provides the output of the Velero server processes.
### Getting Velero debug logs
You can increase the verbosity of the Velero server by editing your Velero deployment to look like this:
```
kubectl edit deployment/velero -n velero
...
containers:
- name: velero
image: velero/velero:latest
command:
- /velero
args:
- server
- --log-level # Add this line
- debug # Add this line
...
```
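If you prefer not to open an editor, the same change can be made with a JSON patch, for example:
```bash
kubectl patch deployment/velero -n velero --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--log-level=debug"}]'
```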
## Known issue with restoring LoadBalancer Service
Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.
Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See [the Kubernetes documentation about Services of Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
## Miscellaneous issues
### Velero reports `custom resource not found` errors when starting up.
Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Run `velero install` again to install any missing custom resource definitions.
### `velero backup logs` returns a `SignatureDoesNotMatch` error
Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible
providers, such as Ceph, there may be differences between their implementation and the official S3
API that cause errors.
Here are some things to verify if you receive `SignatureDoesNotMatch` errors:
* Make sure your S3-compatible layer is using [signature version 4][5] (such as Ceph RADOS v12.2.7)
* For Ceph, try using a native Ceph account for credentials instead of external providers such as OpenStack Keystone
## Velero (or a pod it was backing up) restarted during a backup and the backup is stuck InProgress
Velero cannot resume backups that were interrupted. Backups stuck in the `InProgress` phase can be deleted with `kubectl delete backup <name> -n <velero-namespace>`.
Backups in the `InProgress` phase have not uploaded any files to object storage.
## Velero is not publishing prometheus metrics
Steps to troubleshoot:
- Confirm that your Velero deployment has metrics publishing enabled. The [latest Velero helm charts][6] have been set up with [metrics enabled by default][7].
- Confirm that the Velero server pod exposes the port on which the metrics server listens. By default, this value is 8085.
```yaml
ports:
- containerPort: 8085
name: metrics
protocol: TCP
```
- Confirm that the metrics server is listening for and responding to connections on this port. This can be done using [port-forwarding][9] as shown below:
```bash
$ kubectl -n <YOUR_VELERO_NAMESPACE> port-forward <YOUR_VELERO_POD> 8085:8085
Forwarding from 127.0.0.1:8085 -> 8085
Forwarding from [::1]:8085 -> 8085
.
.
.
```
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
## Is Velero using the correct cloud credentials?
Cloud provider credentials are given to Velero to store and retrieve backups from the object store and to perform volume snapshotting operations.
These credentials are either passed to Velero at install time using:
1. `--secret-file` flag to the `velero install` command. OR
1. `--set-file credentials.secretContents.cloud` flag to the `helm install` command.
Or, they are specified when creating a `BackupStorageLocation` using the `--credential` flag.
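For example, a sketch of associating an existing secret with a location at creation time (names are placeholders):
```bash
velero backup-location create <bsl-name> \
  --provider <provider> \
  --bucket <bucket> \
  --credential=<secret-name>=<key-within-secret>
```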
### Troubleshooting credentials provided during install
If using the credentials provided at install time, they are stored in the cluster as a Kubernetes secret named `cloud-credentials` in the same namespace in which Velero is installed.
Follow the below troubleshooting steps to confirm that Velero is using the correct credentials:
1. Confirm that the `cloud-credentials` secret exists and has the correct content.
```bash
$ kubectl -n velero get secrets cloud-credentials
NAME TYPE DATA AGE
cloud-credentials Opaque 1 11h
$ kubectl -n velero get secrets cloud-credentials -ojsonpath={.data.cloud} | base64 --decode
<Output should be your credentials>
```
1. Confirm that the velero deployment is mounting the `cloud-credentials` secret.
```bash
$ kubectl -n velero get deploy velero -ojson | jq .spec.template.spec.containers[0].volumeMounts
[
{
"mountPath": "/plugins",
"name": "plugins"
},
{
"mountPath": "/scratch",
"name": "scratch"
},
{
"mountPath": "/credentials",
"name": "cloud-credentials"
}
]
```
If [restic integration][3] is enabled, confirm that the restic DaemonSet is also mounting the `cloud-credentials` secret.
```bash
$ kubectl -n velero get ds restic -ojson |jq .spec.template.spec.containers[0].volumeMounts
[
{
"mountPath": "/host_pods",
"mountPropagation": "HostToContainer",
"name": "host-pods"
},
{
"mountPath": "/scratch",
"name": "scratch"
},
{
"mountPath": "/credentials",
"name": "cloud-credentials"
}
]
```
1. Confirm if the correct credentials are mounted into the Velero pod.
```bash
$ kubectl -n velero exec -ti deploy/velero -- bash
nobody@velero-69f9c874c-l8mqp:/$ cat /credentials/cloud
<Output should be your credentials>
```
### Troubleshooting `BackupStorageLocation` credentials
Follow the below troubleshooting steps to confirm that Velero is using the correct credentials if using credentials specific to a [`BackupStorageLocation`][10]:
1. Confirm that the object storage provider plugin being used supports multiple credentials.
If the logs from the Velero deployment contain the error message `"config has invalid keys credentialsFile"`, the version of your object storage plugin does not yet support multiple credentials.
The object storage plugins [maintained by the Velero team][11] support this feature, so please update your plugin to the latest version if you see the above error message.
If you are using a plugin from a different provider, please contact them for further advice.
1. Confirm that the secret and key referenced by the `BackupStorageLocation` exists in the Velero namespace and has the correct content:
```bash
# Determine which secret and key the BackupStorageLocation is using
BSL_SECRET=$(kubectl get backupstoragelocations.velero.io -n velero <bsl-name> -o jsonpath={.spec.credential.name})
BSL_SECRET_KEY=$(kubectl get backupstoragelocations.velero.io -n velero <bsl-name> -o jsonpath={.spec.credential.key})
# Confirm that the secret exists
kubectl -n velero get secret $BSL_SECRET
# Print the content of the secret and ensure it is correct
kubectl -n velero get secret $BSL_SECRET -ojsonpath={.data.$BSL_SECRET_KEY} | base64 --decode
```
If the secret can't be found, the secret does not exist within the Velero namespace and must be created.
If no output is produced when printing the contents of the secret, the key within the secret may not exist or may have no content.
Ensure that the key exists within the secret's data by checking the output from `kubectl -n velero describe secret $BSL_SECRET`.
If it does not exist, follow the instructions for [editing a Kubernetes secret][12] to add the base64 encoded credentials data.
[1]: debugging-restores.md
[2]: debugging-install.md
[3]: restic.md
[4]: https://github.com/vmware-tanzu/velero/issues
[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
[6]: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero
[7]: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml#L44
[8]: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml#L49-L52
[9]: https://kubectl.docs.kubernetes.io/pages/container_debugging/port_forward_to_pods.html
[10]: locations.md
[11]: /plugins
[12]: https://kubernetes.io/docs/concepts/configuration/secret/#editing-a-secret
[25]: https://kubernetes.slack.com/messages/velero
View File
@ -0,0 +1,11 @@
---
title: "Uninstalling Velero"
layout: docs
---
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`:
```bash
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
View File
@ -0,0 +1,89 @@
---
title: "Upgrading to Velero 1.6"
layout: docs
---
## Prerequisites
- Velero [v1.5.x][5] installed.
If you're not yet running at least Velero v1.5, see the following:
- [Upgrading to v1.1][1]
- [Upgrading to v1.2][2]
- [Upgrading to v1.3][3]
- [Upgrading to v1.4][4]
- [Upgrading to v1.5][5]
## Instructions
1. Install the Velero v1.6 command-line interface (CLI) by following the [instructions here][0].
Verify that you've properly installed it by running:
```bash
velero version --client-only
```
You should see the following output:
```bash
Client:
Version: v1.6.0-rc.1
Git commit: <git SHA>
```
1. Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release:
```bash
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```
**NOTE:** If you are upgrading Velero in Kubernetes 1.14.x or earlier, you will need to use `kubectl apply`'s `--validate=false` option when applying the CRD configuration above. See [issue 2077][7] and [issue 2311][8] for more context.
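On those Kubernetes versions, the apply step above becomes:
```bash
velero install --crds-only --dry-run -o yaml | kubectl apply --validate=false -f -
```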
1. Update the container image used by the Velero deployment and, optionally, the restic daemon set:
```bash
kubectl set image deployment/velero \
velero=velero/velero:v1.6.0-rc.1 \
--namespace velero
# optional, if using the restic daemon set
kubectl set image daemonset/restic \
restic=velero/velero:v1.6.0-rc.1 \
--namespace velero
```
1. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
```
You should see the following output:
```bash
Client:
Version: v1.6.0-rc.1
Git commit: <git SHA>
Server:
Version: v1.6.0-rc.1
```
## Notes
### Default backup storage location
We have deprecated the old way of indicating the default backup storage location. Previously, the default was indicated by the backup storage location name set on the Velero server side via the `velero server --default-backup-storage-location` flag. Now, the default backup storage location is configured on the Velero client side. Please refer to [About locations][9] for how to indicate which backup storage location is the default one.
After upgrading, if a previously created backup storage location has a name that matches what was defined on the server side as the default, it will automatically be set as the `default`.
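For example, a sketch of marking a location as the default from the client (check `velero backup-location set --help` for the exact flags):
```bash
velero backup-location set <bsl-name> --default
```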
[0]: basic-install.md#install-the-cli
[1]: https://velero.io/docs/v1.1.0/upgrade-to-1.1/
[2]: https://velero.io/docs/v1.2.0/upgrade-to-1.2/
[3]: https://velero.io/docs/v1.3.2/upgrade-to-1.3/
[4]: https://velero.io/docs/v1.4/upgrade-to-1.4/
[5]: https://velero.io/docs/v1.5/upgrade-to-1.5
[6]: https://github.com/vmware-tanzu/velero/releases/tag/v1.4.2
[7]: https://github.com/vmware-tanzu/velero/issues/2077
[8]: https://github.com/vmware-tanzu/velero/issues/2311
[9]: https://velero.io/docs/v1.6/locations
View File
@ -0,0 +1,49 @@
---
title: "Velero Install CLI"
layout: docs
---
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your Kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage
This section explains some of the basic flags supported by the `velero install` CLI command. For a complete explanation of the flags, please run `velero install --help`.
```bash
velero install \
--plugins <PLUGIN_CONTAINER_IMAGE [PLUGIN_CONTAINER_IMAGE]> \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
--secret-file <PATH_TO_FILE> \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-restic] \
[--default-volumes-to-restic] \
[--restic-pod-cpu-request <CPU_REQUEST>] \
[--restic-pod-mem-request <MEMORY_REQUEST>] \
[--restic-pod-cpu-limit <CPU_LIMIT>] \
[--restic-pod-mem-limit <MEMORY_LIMIT>]
```
The values for the resource requests and limits flags follow the same format as [Kubernetes resource requirements][3].
For plugin container images, please refer to our [supported providers][2] page.
## Examples
This section provides examples that serve as a starting point for more customized installations.
```bash
velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket mybucket --secret-file ./gcp-service-account.json
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket backups --secret-file ./aws-iam-creds --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 --use-restic
velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.0 --bucket $BLOB_CONTAINER --secret-file ./credentials-velero --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID[,subscriptionId=$AZURE_BACKUP_SUBSCRIPTION_ID] --snapshot-location-config apiTimeout=<YOUR_TIMEOUT>[,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,subscriptionId=$AZURE_BACKUP_SUBSCRIPTION_ID]
```
[1]: build-from-source.md#making-images-and-updating-velero
[2]: supported-providers.md
[3]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
View File
@ -0,0 +1,21 @@
---
title: "Vendoring dependencies"
layout: docs
---
## Overview
We are using [dep][0] to manage dependencies. You can install it by following [these
instructions][1].
## Adding a new dependency
Run `dep ensure`. If you want to see verbose output, you can append `-v` as in
`dep ensure -v`.
## Updating an existing dependency
Run `dep ensure -update <pkg> [<pkg> ...]` to update one or more dependencies.
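For example, to sync dependencies verbosely and then update a single package (the package path is illustrative):
```bash
dep ensure -v
dep ensure -update github.com/pkg/errors
```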
[0]: https://github.com/golang/dep
[1]: https://golang.github.io/dep/docs/installation.html
View File
@ -0,0 +1,45 @@
---
title: "Website Guidelines"
layout: docs
---
## Running the website locally
When making changes to the website, please run the site locally before submitting a PR and manually verify your changes.
At the root of the project, run:
```bash
make serve-docs
```
This runs all the Hugo dependencies in a container.
Alternatively, to quickly load the website, run the following under the `velero/site/` directory:
```bash
hugo serve
```
For more information on how to run the website locally, please see the [Hugo documentation](https://gohugo.io/getting-started/).
## Adding a blog post
To add a blog post, create a new Markdown (`.md`) file in the `/site/content/posts/` folder. A blog post requires the following front matter.
```yaml
title: "Title of the blog"
excerpt: Brief summary of the blog post that appears as a preview on velero.io/blogs
author_name: Jane Smith
slug: URL-For-Blog
# Use different categories that apply to your blog. This is used to connect related blogs on the site
categories: ['velero','release']
# Image to use for blog. The path is relative to the site/static/ folder
image: /img/posts/example-image.jpg
# Tag should match author to drive author pages. Tags can have multiple values.
tags: ['Velero Team', 'Nolan Brubaker']
```
Include the `author_name` value in the `tags` field so that the page listing the author's posts works properly, for example https://velero.io/tags/carlisia-thompson/.
Ideally each blog will have a unique image to use on the blog home page, but if you do not include an image, the default Velero logo will be used instead. Use an image that is less than 70KB and add it to the `/site/static/img/posts` folder.
View File
@ -3,6 +3,7 @@
# that the navigation for older versions still work.
main: main-toc
v1.6.0-rc.1: v1-6-0-rc-1-toc
v1.5: v1-5-toc
v1.4: v1-4-toc
v1.3.2: v1-3-2-toc
View File
@ -0,0 +1,97 @@
toc:
- title: Introduction
subfolderitems:
- page: About Velero
url: /index.html
- page: How Velero works
url: /how-velero-works
- page: About locations
url: /locations
- title: Install
subfolderitems:
- page: Basic Install
url: /basic-install
- page: Customize Installation
url: /customize-installation
- page: Upgrade to 1.6
url: /upgrade-to-1.6
- page: Supported providers
url: /supported-providers
- page: Evaluation install
url: /contributions/minio
- page: Restic integration
url: /restic
- page: Examples
url: /examples
- page: Uninstalling
url: /uninstalling
- title: Use
subfolderitems:
- page: Disaster recovery
url: /disaster-case
- page: Cluster migration
url: /migration-case
- page: Enable API group versions
url: /enable-api-group-versions-feature
- page: Resource filtering
url: /resource-filtering
- page: Backup reference
url: /backup-reference
- page: Backup hooks
url: /backup-hooks
- page: Restore reference
url: /restore-reference
- page: Restore hooks
url: /restore-hooks
- page: Run in any namespace
url: /namespace
- page: CSI Support (beta)
url: /csi
- page: Verifying Self-signed Certificates
url: /self-signed-certificates
- page: Changing RBAC permissions
url: /rbac
- title: Plugins
subfolderitems:
- page: Overview
url: /overview-plugins
- page: Custom plugins
url: /custom-plugins
- title: Troubleshoot
subfolderitems:
- page: Troubleshooting
url: /troubleshooting
- page: Troubleshoot an install or setup
url: /debugging-install
- page: Troubleshoot a restore
url: /debugging-restores
- page: Troubleshoot Restic
url: /restic#troubleshooting
- title: Contribute
subfolderitems:
- page: Start Contributing
url: /start-contributing
- page: Development
url: /development
- page: Rapid development with Tilt
url: /tilt
- page: Build from source
url: /build-from-source
- page: Run locally
url: /run-locally
- page: Code standards
url: /code-standards
- page: Website guidelines
url: /website-guidelines
- page: Documentation style guide
url: /style-guide
- title: More information
subfolderitems:
- page: Backup file format
url: /output-file-format
- page: API types
url: /api-types
- page: Support process
url: /support-process
- page: For maintainers
url: /maintainers