v1.3.0-beta.1 docs site

Signed-off-by: Steve Kriss <krisss@vmware.com>
pull/2239/head
Steve Kriss 2020-02-04 09:32:59 -07:00
parent 88d123fcdc
commit f3409c406a
51 changed files with 3898 additions and 0 deletions


@ -55,6 +55,12 @@ defaults:
version: master
gh: https://github.com/vmware-tanzu/velero/tree/master
layout: "docs"
- scope:
path: docs/v1.3.0-beta.1
values:
version: v1.3.0-beta.1
gh: https://github.com/vmware-tanzu/velero/tree/v1.3.0-beta.1
layout: "docs"
- scope:
path: docs/v1.2.0
values:
@ -154,6 +160,7 @@ versioning: true
latest: v1.2.0
versions:
- master
- v1.3.0-beta.1
- v1.2.0
- v1.1.0
- v1.0.0


@ -3,6 +3,7 @@
# that the navigation for older versions still work.
master: master-toc
v1.3.0-beta.1: v1-3-0-beta-1-toc
v1.2.0: v1-2-0-toc
v1.1.0: v1-1-0-toc
v1.0.0: v1-0-0-toc


@ -0,0 +1,83 @@
toc:
  - title: Introduction
    subfolderitems:
      - page: About Velero
        url: /index.html
      - page: How Velero works
        url: /how-velero-works
      - page: About locations
        url: /locations
  - title: Install
    subfolderitems:
      - page: Basic Install
        url: /basic-install
      - page: Customize Installation
        url: /customize-installation
      - page: Upgrade to 1.2
        url: /upgrade-to-1.2
      - page: Supported providers
        url: /supported-providers
      - page: Evaluation install
        url: /contributions/minio
      - page: Restic integration
        url: /restic
      - page: Examples
        url: /examples
      - page: Uninstalling
        url: /uninstalling
  - title: Use
    subfolderitems:
      - page: Disaster recovery
        url: /disaster-case
      - page: Cluster migration
        url: /migration-case
      - page: Backup reference
        url: /backup-reference
      - page: Restore reference
        url: /restore-reference
      - page: Run in any namespace
        url: /namespace
      - page: Extend with hooks
        url: /hooks
  - title: Plugins
    subfolderitems:
      - page: Overview
        url: /overview-plugins
      - page: Custom plugins
        url: /custom-plugins
  - title: Troubleshoot
    subfolderitems:
      - page: Troubleshooting
        url: /troubleshooting
      - page: Troubleshoot an install or setup
        url: /debugging-install
      - page: Troubleshoot a restore
        url: /debugging-restores
      - page: Troubleshoot Restic
        url: /restic#troubleshooting
  - title: Contribute
    subfolderitems:
      - page: Start Contributing
        url: /start-contributing
      - page: Development
        url: /development
      - page: Build from source
        url: /build-from-source
      - page: Run locally
        url: /run-locally
      - page: Code standards
        url: /code-standards
      - page: Website guidelines
        url: /website-guidelines
  - title: More information
    subfolderitems:
      - page: Backup file format
        url: /output-file-format
      - page: API types
        url: /api-types
      - page: FAQ
        url: /faq
      - page: ZenHub
        url: /zenhub
      - page: Support Process
        url: /support-process


@ -0,0 +1,52 @@
![100]
[![Build Status][1]][2]
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
## Documentation
This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Contributing
If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our [Start contributing](https://velero.io/docs/v1.3.0-beta.1/start-contributing/) documentation for guidance on how to set up Velero for development.
## Changelog
See [the list of releases][6] to find out about feature changes.
[1]: https://travis-ci.org/vmware-tanzu/velero.svg?branch=master
[2]: https://travis-ci.org/vmware-tanzu/velero
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
[30]: troubleshooting.md
[100]: img/velero.png


@ -0,0 +1,16 @@
# Table of Contents
## API types
Here we list the API types that have some functionality that you can only configure via JSON/YAML, rather than the `velero` CLI (for example, hooks):
* [Backup][1]
* [Schedule][2]
* [BackupStorageLocation][3]
* [VolumeSnapshotLocation][4]
[1]: backup.md
[2]: schedule.md
[3]: backupstoragelocation.md
[4]: volumesnapshotlocation.md


@ -0,0 +1,143 @@
# Backup API Type
## Use
The `Backup` API type is used as a request for the Velero server to perform a backup. Once created, the
Velero Server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
  # Backup name. May be any valid Kubernetes object name. Required.
  name: a
  # Backup namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the backup. Required.
spec:
  # Array of namespaces to include in the backup. If unspecified, all namespaces are included.
  # Optional.
  includedNamespaces:
    - '*'
  # Array of namespaces to exclude from the backup. Optional.
  excludedNamespaces:
    - some-namespace
  # Array of resources to include in the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
    - '*'
  # Array of resources to exclude from the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
    - storageclasses.storage.k8s.io
  # Whether or not to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
  # up are those associated with namespace-scoped resources included in the backup. For example, if a
  # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
  # cluster-scoped) would also be backed up.
  includeClusterResources: null
  # Individual objects must match this label selector to be included in the backup. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
  # AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
  # a persistent volume provider is configured for Velero.
  snapshotVolumes: null
  # Where to store the tarball and logs.
  storageLocation: aws-primary
  # The list of locations in which to store volume snapshots created for this backup.
  volumeSnapshotLocations:
    - aws-primary
    - gcp-primary
  # The amount of time before this backup is eligible for garbage collection. If not specified,
  # a default value of 30 days will be used. The default can be configured on the velero server
  # by passing the flag --default-backup-ttl.
  ttl: 24h0m0s
  # Actions to perform at different times during a backup. The only hook currently supported is
  # executing a command in a container in a pod using the pod exec API. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
      -
        # Name of the hook. Will be displayed in backup log.
        name: my-hook
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
          - '*'
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
          - some-namespace
        # Array of resources to which this hook applies. The only resource supported at this time is
        # pods.
        includedResources:
          - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
        pre:
          -
            # The type of hook. This must be "exec".
            exec:
              # The name of the container where the command will be executed. If unspecified, the
              # first container in the pod will be used. Optional.
              container: my-container
              # The command to execute, specified as an array. Required.
              command:
                - /bin/uname
                - -a
              # How to handle an error executing the command. Valid values are Fail and Continue.
              # Defaults to Fail. Optional.
              onError: Fail
              # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
              timeout: 10s
        # An array of hooks to run after all custom actions and additional items have been
        # processed. Currently only "exec" hooks are supported.
        post:
          # Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
  # The version of this Backup. The only version currently supported is 1.
  version: 1
  # The date and time when the Backup is eligible for garbage collection.
  expiration: null
  # The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
  phase: ""
  # An array of any validation errors encountered.
  validationErrors: null
  # Date/time when the backup started being processed.
  startTimestamp: 2019-04-29T15:58:43Z
  # Date/time when the backup finished being processed.
  completionTimestamp: 2019-04-29T15:58:56Z
  # Number of volume snapshots that Velero tried to create for this backup.
  volumeSnapshotsAttempted: 2
  # Number of volume snapshots that Velero successfully created for this backup.
  volumeSnapshotsCompleted: 1
  # Number of warnings that were logged by the backup.
  warnings: 2
  # Number of errors that were logged by the backup.
  errors: 0
```
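Most of the `spec` fields above map to flags on `velero backup create`; a rough CLI sketch using the sample values from this example (fields such as hooks are not exposed as CLI flags and still require the YAML form):
```bash
# Approximate CLI equivalent of the sample spec above (values are the doc's placeholders).
velero backup create a \
  --exclude-namespaces some-namespace \
  --exclude-resources storageclasses.storage.k8s.io \
  --selector 'app=velero,component=server' \
  --storage-location aws-primary \
  --volume-snapshot-locations aws-primary,gcp-primary \
  --ttl 24h0m0s
```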


@ -0,0 +1,44 @@
# Velero Backup Storage Locations
## Backup Storage Location
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
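For example, to send a particular backup to a non-default location, pass `--storage-location` when creating it (this assumes a `BackupStorageLocation` named `secondary` already exists):
```bash
velero backup create my-backup --storage-location secondary
```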
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: aws
  objectStorage:
    bucket: myBucket
  config:
    region: us-west-2
    profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation][0] for the appropriate value to use. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation][0] for details. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
[0]: ../supported-providers.md


@ -0,0 +1,132 @@
# Schedule API Type
## Use
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup for a given cron notation. Once created, the
Velero Server will start the backup process. It will then wait for the next valid point of the given cron expression and execute the backup
process on a repeating basis.
## API GroupVersion
Schedule belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Schedule` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
  # Schedule name. May be any valid Kubernetes object name. Required.
  name: a
  # Schedule namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the scheduled backup. Required.
spec:
  # Schedule is a Cron expression defining when to run the Backup
  schedule: 0 7 * * *
  # Template is the spec that should be used for each backup triggered by this schedule.
  template:
    # Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
    # Optional.
    includedNamespaces:
      - '*'
    # Array of namespaces to exclude from the scheduled backup. Optional.
    excludedNamespaces:
      - some-namespace
    # Array of resources to include in the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
    # or fully-qualified. If unspecified, all resources are included. Optional.
    includedResources:
      - '*'
    # Array of resources to exclude from the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
    # or fully-qualified. Optional.
    excludedResources:
      - storageclasses.storage.k8s.io
    # Whether or not to include cluster-scoped resources. Valid values are true, false, and
    # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
    # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
    # all cluster-scoped resources are included if and only if all namespaces are included and there are
    # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
    # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
    # up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
    # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
    # cluster-scoped) would also be backed up.
    includeClusterResources: null
    # Individual objects must match this label selector to be included in the scheduled backup. Optional.
    labelSelector:
      matchLabels:
        app: velero
        component: server
    # Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
    # AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
    # a persistent volume provider is configured for Velero.
    snapshotVolumes: null
    # Where to store the tarball and logs.
    storageLocation: aws-primary
    # The list of locations in which to store volume snapshots created for backups under this schedule.
    volumeSnapshotLocations:
      - aws-primary
      - gcp-primary
    # The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
    # a default value of 30 days will be used. The default can be configured on the velero server
    # by passing the flag --default-backup-ttl.
    ttl: 24h0m0s
    # Actions to perform at different times during a backup. The only hook currently supported is
    # executing a command in a container in a pod using the pod exec API. Optional.
    hooks:
      # Array of hooks that are applicable to specific resources. Optional.
      resources:
        -
          # Name of the hook. Will be displayed in backup log.
          name: my-hook
          # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
          # namespaces. Optional.
          includedNamespaces:
            - '*'
          # Array of namespaces to which this hook does not apply. Optional.
          excludedNamespaces:
            - some-namespace
          # Array of resources to which this hook applies. The only resource supported at this time is
          # pods.
          includedResources:
            - pods
          # Array of resources to which this hook does not apply. Optional.
          excludedResources: []
          # This hook only applies to objects matching this label selector. Optional.
          labelSelector:
            matchLabels:
              app: velero
              component: server
          # An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
          pre:
            -
              # The type of hook. This must be "exec".
              exec:
                # The name of the container where the command will be executed. If unspecified, the
                # first container in the pod will be used. Optional.
                container: my-container
                # The command to execute, specified as an array. Required.
                command:
                  - /bin/uname
                  - -a
                # How to handle an error executing the command. Valid values are Fail and Continue.
                # Defaults to Fail. Optional.
                onError: Fail
                # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
                timeout: 10s
          # An array of hooks to run after all custom actions and additional items have been
          # processed. Currently only "exec" hooks are supported.
          post:
            # Same content as pre above.
status:
  # The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
  phase: ""
  # Date/time of the last backup for a given schedule
  lastBackup:
  # An array of any validation errors encountered.
  validationErrors:
```
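Most of these fields can also be set from the CLI; a minimal sketch using the sample values above (as with backups, hooks still require the YAML form):
```bash
velero schedule create a \
  --schedule="0 7 * * *" \
  --exclude-namespaces some-namespace \
  --storage-location aws-primary \
  --ttl 24h0m0s
```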


@ -0,0 +1,38 @@
# Velero Volume Snapshot Location
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-west-2
    profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation][0] for the appropriate value to use. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation][0] for details. |
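To use a specific location for a backup's snapshots, pass `--volume-snapshot-locations` (one location per provider) at backup time; for example, with the `aws-default` location defined above:
```bash
velero backup create my-backup --volume-snapshot-locations aws-default
```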
[0]: ../supported-providers.md


@ -0,0 +1,9 @@
# Backup Reference
## Exclude Specific Items from Backup
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```
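For example, to exclude a hypothetical ConfigMap named `my-config` in the `apps` namespace from future backups:
```bash
kubectl label -n apps configmap/my-config velero.io/exclude-from-backup=true
```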


@ -0,0 +1,63 @@
# Basic Install
- [Basic Install](#basic-install)
- [Prerequisites](#prerequisites)
- [Install the CLI](#install-the-cli)
- [Option 1: macOS - Homebrew](#option-1-macos---homebrew)
- [Option 2: GitHub release](#option-2-github-release)
- [Install and configure the server components](#install-and-configure-the-server-components)
Use this doc to get a basic installation of Velero.
Refer to [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
There are supported storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][2].
## Install the CLI
### Option 1: macOS - Homebrew
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Option 2: GitHub release
1. Download the [latest release][1]'s tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
```
1. Move the extracted `velero` binary to somewhere in your `$PATH` (e.g. `/usr/local/bin` for most users).
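Putting those steps together, a minimal sketch (the tarball and directory names below are placeholders; use the file you actually downloaded):
```bash
tar -xvf velero-v1.3.0-beta.1-linux-amd64.tar.gz
sudo mv velero-v1.3.0-beta.1-linux-amd64/velero /usr/local/bin/
velero version   # verifies the client is on your PATH; it may warn that it cannot reach a server yet
```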
## Install and configure the server components
There are two supported methods for installing the Velero server components:
- the `velero install` CLI command
- the [Helm chart](https://github.com/helm/charts/tree/master/stable/velero)
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our [supported storage providers][0] page
_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider][4]._
## Command line Autocompletion
Please refer to [this part of the documentation][5].
[0]: supported-providers.md
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations


@ -0,0 +1,158 @@
# Build from source
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Get the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/vmware-tanzu/velero
```
Where `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
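For example, in your shell profile; this is a common Go convention rather than a Velero requirement, and it assumes the `GOPATH` set above:
```bash
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin   # so binaries installed with `go get`/`go install` are found
```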
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
When building by using `make`, it will place the binaries under `_output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin here: `_output/bin/darwin/amd64/velero`, and the binary for linux here: `_output/bin/linux/amd64/velero`. `make` will also splice version and git commit information in so that `velero version` displays proper output.
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to overwrite what image gets deployed, use the `--image` flag (see below for instructions on how to build images).
### Build the binary
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
```bash
go build ./cmd/velero
```
```bash
make local
```
### Cross compiling
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
```bash
make build
```
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for the Mac, run `make build-darwin-amd64`.
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
* linux-arm64
* linux-ppc64le
* darwin-amd64
* windows-amd64
## Making images and updating Velero
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
```bash
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
```
To build a Velero container image, first set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero-amd64:master` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `velero`.
Optionally, set the `$VERSION` environment variable to change the image tag. Then, run:
```bash
make container
```
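For example, to produce an image tagged `gcr.io/my-registry/velero-amd64:mytag` (registry and tag here are illustrative):
```bash
REGISTRY=gcr.io/my-registry VERSION=mytag make container
```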
For any specific platform, run `ARCH=<GOOS>-<GOARCH> make container`
For example, to build an image for the Power (ppc64le), run:
```bash
ARCH=linux-ppc64le make container
```
_Note: By default, ARCH is set to linux-amd64_
To push your image to the registry: for example, to push the `gcr.io/my-registry/velero-amd64:master` image, run:
```bash
make push
```
For any specific platform, run `ARCH=<GOOS>-<GOARCH> make push`
For example, to push image for the Power (ppc64le), run:
```bash
ARCH=linux-ppc64le make push
```
_Note: By default, ARCH is set to linux-amd64_
To create and push your manifest to the registry: for example, to create and push the `gcr.io/my-registry/velero:master` manifest, run:
```bash
make manifest
```
For any specific platform, run `MANIFEST_PLATFORMS=<GOARCH> make manifest`
For example, to create and push manifest only for amd64, run:
```bash
MANIFEST_PLATFORMS=amd64 make manifest
```
_Note: By default, MANIFEST_PLATFORMS is set to amd64, ppc64le_
To run the entire workflow, run:
`REGISTRY=<$REGISTRY> VERSION=<$VERSION> ARCH=<GOOS>-<GOARCH> MANIFEST_PLATFORMS=<GOARCH> make container push manifest`
For example, to run the workflow only for amd64:
```bash
REGISTRY=myrepo VERSION=foo MANIFEST_PLATFORMS=amd64 make container push manifest
```
_Note: By default, ARCH is set to linux-amd64_
For example, to run the workflow only for ppc64le:
```bash
REGISTRY=myrepo VERSION=foo ARCH=linux-ppc64le MANIFEST_PLATFORMS=ppc64le make container push manifest
```
For example, to run the workflow for all supported platforms:
```bash
REGISTRY=myrepo VERSION=foo make all-containers all-push all-manifests
```
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod:
```bash
kubectl -n velero delete pods -l deploy=velero
```
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[22]: https://github.com/vmware-tanzu/velero/releases


@ -0,0 +1,123 @@
# Code Standards
## Adding a changelog
Authors are expected to include a changelog file with their pull requests. The changelog file
should be a new file created in the `changelogs/unreleased` folder. The file should follow the
naming convention of `pr-username` and the contents of the file should be your text for the
changelog.
    velero/changelogs/unreleased    <- folder
        000-username                <- file

Add that to the PR.
## Code
- Log messages are capitalized.
- Error messages are kept lower-cased.
- Wrap/add a stack only to errors that are being directly returned from non-velero code, such as an API call to the Kubernetes server.
```bash
errors.WithStack(err)
```
- Prefer to use the utilities in the Kubernetes package [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets).
```bash
k8s.io/apimachinery/pkg/util/sets
```
## Imports
For imports, we use the following convention:

    <group><version><api | client | informer | ...>

Example:

    import (
        corev1api "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
        corev1listers "k8s.io/client-go/listers/core/v1"

        velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
        velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
    )
## Mocks
We use a package to generate mocks for our interfaces.
Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/pkg/restic/mocks/restorer.go
Run:
```bash
go get github.com/vektra/mockery/.../
cd pkg/restic
mockery -name=Restorer
```
You might need to run `make update` to update the imports.
## DCO Sign off
All authors to the project retain copyright to their work. However, to ensure
that they are only submitting work that they have rights to, we are requiring
everyone to acknowledge this by signing their work.
Any copyright notices in this repo should specify the authors as "the Velero contributors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Joe Beda <joe@heptio.com>
```
This can easily be done with the `--signoff` option to `git commit`.
By doing this you state that you can certify the following (from https://developercertificate.org/):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```


@ -0,0 +1,96 @@
# Use IBM Cloud Object Storage as Velero's storage destination.
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and use IBM Cloud Object Storage as a destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Velero server
## Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create COS instance
If you don't have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket.
The process of creating service credentials is described in [Service credentials][3].
Several comments:
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
2. Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
```
where the access key id and secret are the values that we got above.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
```
Velero does not currently have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
```bash
kubectl -n velero delete volumesnapshotlocation.velero.io default
```
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
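For example, to render the resources that `velero install` would create without applying them (same flags as above; the output file name is arbitrary):
```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT> \
    --dry-run -o yaml > velero-install.yaml
```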
## Installing the nginx example (optional)
If you run the nginx example, in the file `examples/nginx-app/with-pv.yaml`, uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace it with your `StorageClass` name.
[0]: namespace.md
[1]: https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#creating-a-new-resource-instance
[2]: https://console.bluemix.net/docs/services/cloud-object-storage/getting-started.html#create-buckets
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: customize-installation.md#customize-resource-requests-and-limits


@ -0,0 +1,267 @@
## Quick start evaluation install with Minio
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
### Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
* A DNS server on the cluster
* `kubectl` installed
### Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
#### macOS Installation
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Set up server
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][31] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
1. Start the server and the local storage service. In the Velero directory, run:
```
kubectl apply -f examples/minio/00-minio-deployment.yaml
```
```
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`).
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
1. Deploy the example nginx application:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
### Back up
1. Create a backup for any object that matches the `app=nginx` label selector:
```
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
1. Simulate a disaster:
```
kubectl delete namespace nginx-example
```
1. To check that the nginx deployment and service are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
You should get no results.
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
### Restore
1. Run:
```
velero restore create --from-backup nginx-backup
```
1. Run:
```
velero restore get
```
After the restore finishes, the output looks like the following:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, you can look at them in detail:
```
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
### Clean up
If you want to delete any backups you created, including data in object storage and persistent
volume snapshots, you can run:
```
velero backup delete BACKUP_NAME
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run:
```
velero backup get BACKUP_NAME
```
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
kubectl delete -f examples/nginx-app/base.yaml
```
## Expose Minio outside your cluster with a Service
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
1. Get the Minio URL:
- if you're running Minikube:
```shell
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
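One way to set that field without opening an editor is a merge patch against the `default` location (a sketch; substitute the URL you built in the previous steps):
```bash
kubectl -n velero patch backupstoragelocation default --type merge \
  -p '{"spec":{"config":{"publicUrl":"http://<NODE_ADDRESS>:<NODE_PORT>"}}}'
```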
## Expose Minio outside your cluster with Kubernetes in Docker (KinD)
Kubernetes in Docker currently does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
In a terminal, run the following:
```shell
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $MINIO_POD -n velero 9000:9000
```
Then, in another terminal:
```shell
kubectl edit backupstoragelocation default -n velero
```
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
### Work with Ingress
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: #expose-minio-with-service-of-type-nodeport
[3]: ../customize-installation.md
[17]: ../restic.md
[18]: ../debugging-restores.md
[26]: https://github.com/vmware-tanzu/velero/releases
[30]: https://godoc.org/github.com/robfig/cron


@ -0,0 +1,245 @@
# Use Oracle Cloud as a Backup Storage Provider for Velero
## Introduction
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
1. [Download Velero](#download-velero)
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
4. [Install Velero](#install-velero)
5. [Clean Up](#clean-up)
6. [Examples](#examples)
7. [Additional Reading](#additional-reading)
## Download Velero
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
```
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
```
*We strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the master branch of the Velero repository is under active development and is not guaranteed to be stable!*
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
```
$ velero
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
```
## Create A Customer Secret Key
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
2. Create a Velero credentials file with your Customer Secret Key:
```
$ vi credentials-velero
[default]
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
```
## Create An Oracle Object Storage Bucket
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
## Install Velero
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
```
velero install \
--provider [provider name] \
--bucket [bucket name] \
--prefix [tenancy name] \
--use-volume-snapshots=false \
--secret-file [secret file location] \
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
```
- `--provider` Because we are using the S3-compatible API, we will use `aws` as our provider.
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
- `--use-volume-snapshots=false` Velero does not currently have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
- `--secret-file` The path to your `credentials-velero` file.
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
For example:
```
velero install \
--provider aws \
--bucket velero \
--prefix oracle-cloudnative \
--use-volume-snapshots=false \
--secret-file /Users/mboxell/bin/velero/credentials-velero \
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
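For example (the pod name is generated, so list the pods first or go through the deployment):
```bash
kubectl -n velero get pods
kubectl -n velero logs deployment/velero
```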
## Clean Up
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment, and delete the CRDs by running:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
This will remove all resources created by `velero install`.
## Examples
After creating the Velero server in your cluster, try this example:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
```
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
You can see the created resources by running `kubectl get all`
```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2 2 2 2 21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
```
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
```
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
```
At this point you can navigate to the appropriate bucket, which we called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
```
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
```
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
This should return: `No resources found.`
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
```
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20190604102710" submitted successfully.
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
```
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
```
$ velero restore get
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `InProgress`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
```
$ kubectl delete -f examples/nginx-app/base.yaml
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted
```
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
```
$ velero backup delete nginx-backup
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
```
$ velero backup get nginx-backup
An error occurred: backups.velero.io "nginx-backup" not found
```
```
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.3.0-beta.1/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)


@ -0,0 +1,91 @@
# Plugins
Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
Multiple plugins, of any type, can be implemented in this binary.
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
## Plugin Naming
When naming your plugin, keep in mind that the name must conform to these rules:
- have two parts, a prefix and a name, separated by '/'
- neither part can be empty
- the prefix must be a valid DNS subdomain name
- a plugin with the same name must not already exist
### Some examples:
```
- example.io/azure
- 1.2.3.4/5678
- example-with-dash.io/azure
```
You will need to give your plugin(s) a name when registering them by calling the appropriate `RegisterX` function: <https://github.com/vmware-tanzu/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
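As a minimal sketch of what such a registration might look like, modeled on the [sample plugin repository][1] (the plugin name, type names, and constructor here are placeholders, and import paths and method signatures should be verified against the Velero version you build against):

```go
package main

import (
	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/runtime"

	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

func main() {
	// Register one or more plugin implementations under a
	// "<prefix>/<name>" identifier, then serve them over gRPC.
	framework.NewServer().
		RegisterBackupItemAction("example.io/my-backup-plugin", newBackupItemAction).
		Serve()
}

// newBackupItemAction is the constructor the framework invokes; it receives
// a logger (see "Plugin Logging" below) and returns the implementation.
func newBackupItemAction(logger logrus.FieldLogger) (interface{}, error) {
	return &myBackupItemAction{log: logger}, nil
}

type myBackupItemAction struct {
	log logrus.FieldLogger
}

// AppliesTo restricts the action to the resources it should handle.
func (a *myBackupItemAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{IncludedResources: []string{"pods"}}, nil
}

// Execute is called for each matching item during a backup; this sketch
// returns the item unchanged and adds no additional items.
func (a *myBackupItemAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
	a.log.Info("executing my backup item action")
	return item, nil, nil
}
```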
## Plugin Kinds
Velero currently supports the following kinds of plugins:
- **Object Store** - persists and retrieves backups, backup logs and restore logs
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
## Plugin Logging
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
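Continuing the registration sketch above, attaching structured fields to that logger might look like the following (the method and field names are purely illustrative):

```go
// a.log is the logrus.FieldLogger the framework passed to the plugin's
// constructor. Structured fields end up in the main Velero server log and
// in the per-backup/restore logs.
func (a *myBackupItemAction) logItem(namespace, name string) {
	a.log.WithFields(logrus.Fields{
		"namespace": namespace,
		"name":      name,
	}).Debug("processing item in my backup item action")
}
```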
## Plugin Configuration
Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime,
define a ConfigMap like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: my-plugin-config

  # must be in the namespace where the velero deployment
  # is running
  namespace: velero

  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in change storageclass
    # restore item action plugin)
    velero.io/plugin-config: ""

    # add a label whose key corresponds to the fully-qualified
    # plugin name (e.g. mydomain.io/my-plugin-name), and whose
    # value is the plugin type (BackupItemAction, RestoreItemAction,
    # ObjectStore, or VolumeSnapshotter)
    <fully-qualified-plugin-name>: <plugin-type>

data:
  # add your configuration data here as key-value pairs
```
Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration. See the [restic restore action][3]
for an example of this -- in particular, the `getPluginConfig(...)` function.
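As a rough sketch of what fetching such a ConfigMap from a plugin could look like, in the spirit of that function (the plugin name in the label selector is a placeholder, and the exact `List` signature depends on the client-go version you vendor -- newer versions also take a `context.Context`):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

// getPluginConfig looks up the ConfigMap labeled for this plugin, following
// the convention described above. It returns nil (and no error) when no
// matching ConfigMap exists, i.e. the plugin has no runtime configuration.
func getPluginConfig(client corev1client.ConfigMapInterface) (*corev1.ConfigMap, error) {
	opts := metav1.ListOptions{
		// velero.io/plugin-config identifies plugin config in general;
		// <plugin-name>=<plugin-type> identifies config for this plugin.
		LabelSelector: "velero.io/plugin-config,mydomain.io/my-plugin-name=RestoreItemAction",
	}

	list, err := client.List(opts)
	if err != nil {
		return nil, err
	}
	if len(list.Items) == 0 {
		return nil, nil
	}
	if len(list.Items) > 1 {
		return nil, fmt.Errorf("found more than one ConfigMap matching label selector %q", opts.LabelSelector)
	}

	return &list.Items[0], nil
}
```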
## Feature Flags
Velero will pass any known features flags as a comma-separated list of strings to the `--features` argument.
Once parsed into a `[]string`, the features can then be registered using the `NewFeatureFlagSet` function and queried with `features.Enabled(<featureName>)`.
## Environment Variables
Velero adds `LD_LIBRARY_PATH` to the list of environment variables as a convenience for plugins that require C libraries/extensions at runtime.
[1]: https://github.com/vmware-tanzu/velero-plugin-example
[2]: https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/pkg/plugin/logger.go
[3]: https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/pkg/restore/restic_restore_action.go


@ -0,0 +1,241 @@
# Customize Velero Install
- [Customize Velero Install](#customize-velero-install)
- [Plugins](#plugins)
- [Install in any namespace](#install-in-any-namespace)
- [Use non-file-based identity mechanisms](#use-non-file-based-identity-mechanisms)
- [Enable restic integration](#enable-restic-integration)
- [Customize resource requests and limits](#customize-resource-requests-and-limits)
- [Configure more than one storage location for backups or volume snapshots](#configure-more-than-one-storage-location-for-backups-or-volume-snapshots)
- [Do not configure a backup storage location during install](#do-not-configure-a-backup-storage-location-during-install)
- [Install an additional volume snapshot provider](#install-an-additional-volume-snapshot-provider)
- [Generate YAML only](#generate-yaml-only)
- [Additional options](#additional-options)
- [Optional Velero CLI configurations](#optional-velero-cli-configurations)
## Plugins
During install, Velero requires that at least one plugin is added (with the `--plugins` flag). See the [Plugins](overview-plugins.md) documentation for details.
## Install in any namespace
Velero is installed in the `velero` namespace by default. However, you can install Velero in any namespace. See [run in custom namespace][2] for details.
## Use non-file-based identity mechanisms
By default, `velero install` expects a credentials file for your `velero` IAM account to be provided via the `--secret-file` flag.
If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the `--no-secret` flag instead of `--secret-file`.
## Enable restic integration
By default, `velero install` does not install Velero's [restic integration][3]. To enable it, specify the `--use-restic` flag.
If you've already run `velero install` without the `--use-restic` flag, you can run the same command again, including the `--use-restic` flag, to add the restic integration to your existing install.
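For example, on AWS the command might look like the following (the plugin image, bucket, region, and credentials file are placeholders for your own values):

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket <YOUR_BUCKET> \
    --backup-location-config region=<YOUR_REGION> \
    --snapshot-location-config region=<YOUR_REGION> \
    --secret-file ./credentials-velero \
    --use-restic
```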
## Customize resource requests and limits
By default, the Velero deployment requests 500m CPU and 128Mi memory, and sets a limit of 1000m CPU and 256Mi memory.
Default requests and limits are not set for the restic pods as CPU/Memory usage can depend heavily on the size of volumes being backed up.
Customization of these resource requests and limits may be performed using the [velero install][6] CLI command.
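For illustration, the relevant flags look roughly like this (a hedged sketch; confirm the exact flag names, defaults, and values appropriate for your environment with `velero install --help`):

```bash
velero install \
    <flags for your provider> \
    --velero-pod-cpu-request 500m \
    --velero-pod-mem-request 128Mi \
    --velero-pod-cpu-limit 1000m \
    --velero-pod-mem-limit 512Mi \
    --restic-pod-cpu-request 500m \
    --restic-pod-mem-request 512Mi \
    --restic-pod-cpu-limit 1000m \
    --restic-pod-mem-limit 1Gi
```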
## Configure more than one storage location for backups or volume snapshots
Velero supports any number of backup storage locations and volume snapshot locations. For more details, see [about locations](locations.md).
However, `velero install` only supports configuring at most one backup storage location and one volume snapshot location.
To configure additional locations after running `velero install`, use the `velero backup-location create` and/or `velero snapshot-location create` commands along with provider-specific configuration. Use the `--help` flag on each of these commands for more details.
## Do not configure a backup storage location during install
If you need to install Velero without a default backup storage location (without specifying `--bucket` or `--provider`), the `--no-default-backup-location` flag is required for confirmation.
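A hedged example of such an install, deferring all location configuration to later `velero backup-location create` / `velero snapshot-location create` commands (the plugin image is illustrative):

```bash
velero install \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --no-default-backup-location \
    --no-secret \
    --use-volume-snapshots=false
```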
## Install an additional volume snapshot provider
Velero supports using different providers for volume snapshots than for object storage -- for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots.
However, `velero install` only supports configuring a single matching provider for both object storage and volume snapshots.
To use a different volume snapshot provider:
1. Install the Velero server components by following the instructions for your **object storage** provider
1. Add your volume snapshot provider's plugin to Velero (look in [your provider][0]'s documentation for the image name):
```bash
velero plugin add <registry/image:version>
```
1. Add a volume snapshot location for your provider, following [your provider][0]'s documentation for configuration:
```bash
velero snapshot-location create <NAME> \
--provider <PROVIDER-NAME> \
[--config <PROVIDER-CONFIG>]
```
## Generate YAML only
By default, `velero install` generates and applies a customized set of Kubernetes configuration (YAML) to your cluster.
To generate the YAML without applying it to your cluster, use the `--dry-run -o yaml` flags.
This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc.
If you are installing Velero in Kubernetes 1.13.x or earlier, you need to use `kubectl apply`'s `--validate=false` option when applying the generated configuration to your cluster. See [issue 2077][7] for more context.
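For example (the provider-specific flags and output file name are placeholders):

```bash
# Generate the manifests without applying them
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --dry-run -o yaml > velero-install.yaml

# Review or customize the YAML, then apply it yourself
# (add --validate=false on Kubernetes 1.13.x or earlier)
kubectl apply -f velero-install.yaml
```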
## Additional options
Run `velero install --help` or see the [Helm chart documentation](https://github.com/helm/charts/tree/master/stable/velero) for the full set of installation options.
## Optional Velero CLI configurations
### Enabling shell autocompletion
**Velero CLI** provides autocompletion support for `Bash` and `Zsh`, which can save you a lot of typing.
Below are the procedures to set up autocompletion for `Bash` (including the difference between `Linux` and `macOS`) and `Zsh`.
## Bash on Linux
The **Velero CLI** completion script for `Bash` can be generated with the command `velero completion bash`. Sourcing the completion script in your shell enables velero autocompletion.
However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).
### Install bash-completion
`bash-completion` is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.
The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file.
To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set, otherwise add the following to your `~/.bashrc` file:
```shell
source /usr/share/bash-completion/bash_completion
```
Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.
### Enable Velero CLI autocompletion for Bash on Linux
You now need to ensure that the **Velero CLI** completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
- Source the completion script in your `~/.bashrc` file:
```shell
echo 'source <(velero completion bash)' >>~/.bashrc
```
- Add the completion script to the `/etc/bash_completion.d` directory:
```shell
velero completion bash >/etc/bash_completion.d/velero
```
- If you have an alias for velero, you can extend shell completion to work with that alias:
```shell
echo 'alias v=velero' >>~/.bashrc
echo 'complete -F __start_velero v' >>~/.bashrc
```
> `bash-completion` sources all completion scripts in `/etc/bash_completion.d`.
Both approaches are equivalent. After reloading your shell, velero autocompletion should be working.
## Bash on macOS
The **Velero CLI** completion script for Bash can be generated with `velero completion bash`. Sourcing this script in your shell enables velero completion.
However, the velero completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which you therefore have to install first.
> There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The velero completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use velero completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
### Install bash-completion
> As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case velero completion won't work).
You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew:
```shell
brew install bash-completion@2
```
As stated in the output of this command, add the following to your `~/.bashrc` file:
```shell
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```
Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.
### Enable Velero CLI autocompletion for Bash on macOS
You now have to ensure that the velero completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
- Source the completion script in your `~/.bashrc` file:
```shell
echo 'source <(velero completion bash)' >>~/.bashrc
```
- Add the completion script to the `/usr/local/etc/bash_completion.d` directory:
```shell
velero completion bash >/usr/local/etc/bash_completion.d/velero
```
- If you have an alias for velero, you can extend shell completion to work with that alias:
```shell
echo 'alias v=velero' >>~/.bashrc
echo 'complete -F __start_velero v' >>~/.bashrc
```
- If you installed velero with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the velero completion script should already be in `/usr/local/etc/bash_completion.d/velero`. In that case, you don't need to do anything.
> The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, which is why the latter two methods work.
In any case, after reloading your shell, velero completion should be working.
## Autocompletion on Zsh
The velero completion script for Zsh can be generated with the command `velero completion zsh`. Sourcing the completion script in your shell enables velero autocompletion.
To do so in all your shell sessions, add the following to your `~/.zshrc` file:
```shell
source <(velero completion zsh)
```
If you have an alias for velero, you can extend shell completion to work with that alias:
```shell
echo 'alias v=velero' >>~/.zshrc
echo 'complete -F __start_velero v' >>~/.zshrc
```
After reloading your shell, velero autocompletion should be working.
If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:
```shell
autoload -Uz compinit
compinit
```
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: namespace.md
[3]: restic.md
[4]: on-premises.md
[6]: velero-install.md#usage
[7]: https://github.com/vmware-tanzu/velero/issues/2077


@ -0,0 +1,71 @@
# Debugging Installation Issues
## General
### `invalid configuration: no configuration has been provided`
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the
following locations:
* the path specified by the `--kubeconfig` flag, if any
* the path specified by the `$KUBECONFIG` environment variable, if any
* `~/.kube/config`
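For example, you can point the client at a specific kubeconfig explicitly (the path is illustrative):

```bash
velero --kubeconfig /path/to/kubeconfig backup get

# or, equivalently, via the environment variable
KUBECONFIG=/path/to/kubeconfig velero backup get
```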
### Backups or restores stuck in `New` phase
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
```
kubectl -n velero describe pods
kubectl -n velero logs deployment/velero
```
## AWS
### `NoCredentialProviders: no valid providers in chain`
#### Using credentials
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `credentials-velero` file is formatted properly and has the correct values:
```
[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>
```
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
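A few hedged commands for verifying the items above (assuming the default `velero` namespace):

```bash
# The secret exists and has a single "cloud" key
kubectl -n velero get secret cloud-credentials -o jsonpath='{.data}'

# The decoded credentials file is formatted as expected
kubectl -n velero get secret cloud-credentials -o jsonpath='{.data.cloud}' | base64 --decode

# The secret is defined as a volume and mounted at /credentials
kubectl -n velero get deployment velero -o jsonpath='{.spec.template.spec.volumes}'
kubectl -n velero get deployment velero -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'
```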
#### Using kube2iam
This means that Velero can't read the content of the S3 bucket. Ensure the following:
* There is a Trust Policy document allowing the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
* The new Velero role has all the permissions listed in the documentation regarding S3.
## Azure
### `Failed to refresh the Token` or `adal: Refresh request failed`
This means that the secret containing the Azure service principal credentials for Velero has not been created/mounted
properly into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
## GCE/GKE
### `open credentials/cloud: no such file or directory`
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
[0]: azure-config.md#create-service-principal


@ -0,0 +1,106 @@
# Debugging Restores
* [Example][0]
* [Structure][1]
## Example
When Velero finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `velero restore get`:
```
NAME                           BACKUP          STATUS      WARNINGS   ERRORS    CREATED                         SELECTOR
backup-test-20170726180512     backup-test     Completed   155        76        2017-07-26 11:41:14 -0400 EDT   <none>
backup-test-20170726180513     backup-test     Completed   121        14        2017-07-26 11:48:24 -0400 EDT   <none>
backup-test-2-20170726180514   backup-test-2   Completed   0          0         2017-07-26 13:31:21 -0400 EDT   <none>
backup-test-2-20170726180515   backup-test-2   Completed   0          1         2017-07-26 13:32:59 -0400 EDT   <none>
```
To see the warnings and errors in more detail, you can use `velero restore describe`:
```bash
velero restore describe backup-test-20170726180512
```
The output looks like this:
```
Name:         backup-test-20170726180512
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Backup:  backup-test

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        serviceaccounts
  Excluded:        nodes, events, events.events.k8s.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto

Phase:  Completed

Validation errors:  <none>

Warnings:
  Velero:   <none>
  Cluster:  <none>
  Namespaces:
    velero:       serviceaccounts "velero" already exists
                  serviceaccounts "default" already exists
    kube-public:  serviceaccounts "default" already exists
    kube-system:  serviceaccounts "attachdetach-controller" already exists
                  serviceaccounts "certificate-controller" already exists
                  serviceaccounts "cronjob-controller" already exists
                  serviceaccounts "daemon-set-controller" already exists
                  serviceaccounts "default" already exists
                  serviceaccounts "deployment-controller" already exists
                  serviceaccounts "disruption-controller" already exists
                  serviceaccounts "endpoint-controller" already exists
                  serviceaccounts "generic-garbage-collector" already exists
                  serviceaccounts "horizontal-pod-autoscaler" already exists
                  serviceaccounts "job-controller" already exists
                  serviceaccounts "kube-dns" already exists
                  serviceaccounts "namespace-controller" already exists
                  serviceaccounts "node-controller" already exists
                  serviceaccounts "persistent-volume-binder" already exists
                  serviceaccounts "pod-garbage-collector" already exists
                  serviceaccounts "replicaset-controller" already exists
                  serviceaccounts "replication-controller" already exists
                  serviceaccounts "resourcequota-controller" already exists
                  serviceaccounts "service-account-controller" already exists
                  serviceaccounts "service-controller" already exists
                  serviceaccounts "statefulset-controller" already exists
                  serviceaccounts "ttl-controller" already exists
    default:      serviceaccounts "default" already exists

Errors:
  Velero:      <none>
  Cluster:     <none>
  Namespaces:  <none>
```
## Structure
Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues (e.g. the
restore looks "normal" and all resources referenced in the backup exist in some form, although some
of them may have been pre-existing).
Both errors and warnings are structured in the same way:
* `Velero`: A list of system-related issues encountered by the Velero server (e.g. couldn't read directory).
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.
* `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.
[0]: #example
[1]: #structure


@ -0,0 +1,30 @@
# Development
## Update generated files
Run `make update` to regenerate files if you make the following changes:
* Add/edit/remove command line flags and/or their help text
* Add/edit/remove commands or subcommands
* Add new API types
* Add/edit/remove plugin protobuf message or service definitions
The following files are automatically generated from the source code:
* The clientset
* Listers
* Shared informers
* Documentation
* Protobuf/gRPC types
You can run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date.
## Test
To run unit tests, use `make test`.
## Vendor dependencies
If you need to add or update the vendored dependencies, see [Vendoring dependencies][11].
[11]: vendoring-dependencies.md


@ -0,0 +1,39 @@
# Disaster recovery
*Using Schedules and Read-Only Backup Storage Locations*
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Velero looks like the following:
1. After you first run the Velero server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
```
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
```
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`.
1. A disaster happens and you need to recreate your resources.
1. Update your backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
```bash
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
1. Create a restore with your most recent Velero Backup:
```
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
```
1. When ready, revert your backup storage location to read-write mode:
```bash
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
```


@ -0,0 +1,63 @@
## Examples
After you set up the Velero server, try these examples:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Create a backup:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Wait for the namespace to be deleted.
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
### Snapshot example (with PersistentVolumes)
> NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks.
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/with-pv.yaml
```
1. Create a backup with PV snapshotting:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Because the default [reclaim policy][1] for dynamically-provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. **Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists.**
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
[1]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming


@ -0,0 +1,44 @@
# FAQ
## When is it appropriate to use Velero instead of etcd's built in backup/restore?
Etcd's backup/restore tooling is good for recovering from data loss in a single etcd cluster. For
example, it is a good idea to take a backup of etcd prior to upgrading etcd itself. For more
sophisticated management of your Kubernetes cluster backups and restores, we feel that Velero is
generally a better approach. It gives you the ability to throw away an unstable cluster and restore
your Kubernetes resources and data into a new cluster, which you can't do easily just by backing up
and restoring etcd.
Examples of cases where Velero is useful:
* you don't have access to etcd (e.g. you're running on GKE)
* backing up both Kubernetes resources and persistent volume state
* cluster migrations
* backing up a subset of your Kubernetes resources
* backing up Kubernetes resources that are stored across multiple etcd clusters (for example if you
run a custom apiserver)
## Will Velero restore my Kubernetes resources exactly the way they were before?
Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the
pod so that it can be scheduled onto a new node. You can see some more examples of the differences
in [pod_action.go](https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/pkg/restore/pod_action.go).
## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?
We **strongly** recommend that each Velero instance use a distinct bucket/prefix combination to store backups.
Having multiple Velero instances write backups to the same bucket/prefix combination can lead to numerous
problems - failed backups, overwritten backups, inadvertently deleted backups, etc., all of which can be
avoided by using a separate bucket + prefix per Velero instance.
It's fine to have multiple Velero instances back up to the same bucket if each instance uses its own
prefix within the bucket. This can be configured in your `BackupStorageLocation`, by setting the
`spec.objectStorage.prefix` field. It's also fine to use a distinct bucket for each Velero instance,
and not to use prefixes at all.
Related to this, if you need to restore a backup that was created in cluster A into cluster B, you may
configure cluster B with a backup storage location that points to cluster A's bucket/prefix. If you do
this, you should configure the storage location pointing to cluster A's bucket/prefix in `ReadOnly` mode
via the `--access-mode=ReadOnly` flag on the `velero backup-location create` command. This will ensure no
new backups are created from Cluster B in Cluster A's bucket/prefix, and no existing backups are deleted
or overwritten.


@ -0,0 +1,81 @@
# Hooks
Velero currently supports executing commands in containers in pods during a backup.
## Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up. The commands can be configured to run *before* any custom action
processing ("pre" hooks), or after all custom actions have been completed and any additional items
specified by custom action have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
on the containers.
There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
### Specifying Hooks As Pod Annotations
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
#### Pre hooks
* `pre.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `pre.hook.backup.velero.io/command`
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
* `pre.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
* `pre.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
#### Post hooks
* `post.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `post.hook.backup.velero.io/command`
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
* `post.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
* `post.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
### Specifying Hooks in the Backup Spec
Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
spec.
## Hook Example with fsfreeze
We are going to walk through using both pre and post hooks for freezing a file system. Freezing the
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
We will be using [examples/nginx-app/with-pv.yaml][2] for this example. Follow the [steps for your provider][3] to
set up this example.
### Annotations
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.
```shell
kubectl annotate pod -n nginx-example -l app=nginx \
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.velero.io/container=fsfreeze \
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.velero.io/container=fsfreeze
```
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.
```shell
velero backup create nginx-hook-test
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```
[1]: api-types/backup.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/examples/nginx-app/with-pv.yaml
[3]: cloud-common.md


@ -0,0 +1,80 @@
# How Velero Works
Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes [Custom Resource Definition (CRD)][20] and stored in [etcd][22]. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label.
Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades).
## On-demand backups
The **backup** operation:
1. Uploads a tarball of copied Kubernetes objects into cloud object storage.
1. Calls the cloud provider API to make disk snapshots of persistent volumes, if specified.
You can optionally specify hooks to be executed during the backup. For example, you might
need to tell a database to flush its in-memory buffers to disk before taking a snapshot. [More about hooks][10].
Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible.
## Scheduled backups
The **schedule** operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression.
Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*.
## Restores
The **restore** operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Velero supports multiple namespace remapping--for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the objects in namespace "123" under "456".
The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`.
By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.
## Backup workflow
When you run `velero backup create test-backup`:
1. The Velero client makes a call to the Kubernetes API server to create a `Backup` object.
1. The `BackupController` notices the new `Backup` object and performs validation.
1. The `BackupController` begins the backup process. It collects the data to back up by querying the API server for resources.
1. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file.
By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`.
![19]
## Backed-up API versions
Velero backs up resources using the Kubernetes API server's *preferred version* for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful.
For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster **must** have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` **does not** need to be the preferred version in the target cluster; it just needs to exist.
## Set a backup to expire
When you create a backup, you can specify a TTL by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes:
* The backup resource
* The backup file from cloud object storage
* All PersistentVolume snapshots
* All associated Restores
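For example (the backup name, namespace, and duration are illustrative):

```bash
# Keep this backup for 24 hours instead of the default TTL
velero backup create nginx-backup --include-namespaces nginx-example --ttl 24h0m0s
```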
## Object storage sync
Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes.
This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster.
Likewise, if a backup object exists in Kubernetes but not in object storage, it will be deleted from Kubernetes since the backup tarball no longer exists.
[10]: hooks.md
[19]: img/backup-process.png
[20]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions
[21]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers
[22]: https://github.com/coreos/etcd


@ -0,0 +1,21 @@
# Image tagging policy
This document describes Velero's image tagging policy.
## Released versions
`velero/velero:<SemVer>`
Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/vmware-tanzu/velero` repository has a matching image, e.g. `velero/velero:v1.0.0`.
### Latest
`velero/velero:latest`
The `latest` tag follows the most recently released version of Velero.
## Development
`velero/velero:master`
The `master` tag follows the latest commit to land on the `master` branch.


@ -0,0 +1 @@
Some of these diagrams (for instance backup-process.png) have been created on [draw.io](https://www.draw.io), using the "Include a copy of my diagram" option. If you want to make changes to these diagrams, try importing them into draw.io, and you should have access to the original shapes/text that went into the originals.


@ -0,0 +1,164 @@
# Backup Storage Locations and Volume Snapshot Locations
## Overview
Velero has two custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`, that are used to configure where Velero backups and their associated persistent volume snapshots are stored.
A `BackupStorageLocation` is defined as a bucket, a prefix within that bucket under which all Velero data should be stored, and a set of additional provider-specific fields (e.g. AWS region, Azure storage account, etc.). The [API documentation][1] captures the configurable parameters for each in-tree provider.
A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (e.g. AWS region, Azure resource group, Portworx snapshot type, etc.). The [API documentation][2] captures the configurable parameters for each in-tree provider.
The user can pre-configure one or more possible `BackupStorageLocations` and one or more `VolumeSnapshotLocations`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
This configuration design enables a number of different use cases, including:
- Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
- For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
## Limitations / Caveats
- Velero only supports a single set of credentials *per provider*. It's not yet possible to use different credentials for different locations, if they're for the same provider.
- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
- Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important.
- Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. EBS and Portworx), but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will **only** snapshot the EBS volumes.
- Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
## Examples
Let's look at some examples of how we can use this configuration mechanism to address some common use cases:
#### Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
During server configuration:
```shell
velero snapshot-location create ebs-us-east-1 \
--provider aws \
--config region=us-east-1
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
During backup creation:
```shell
velero backup create full-cluster-backup \
--volume-snapshot-locations ebs-us-east-1,portworx-cloud
```
Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Velero doesn't require them to be explicitly specified when creating the backup:
```shell
velero backup create full-cluster-backup
```
#### Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
During server configuration:
```shell
velero backup-location create default \
--provider aws \
--bucket velero-backups \
--config region=us-east-1
velero backup-location create s3-alt-region \
--provider aws \
--bucket velero-backups-alt \
--config region=us-west-1
```
During backup creation:
```shell
# The Velero server will automatically store backups in the backup storage location named "default" if
# one is not specified when creating the backup. You can alter which backup storage location is used
# by default by setting the --default-backup-storage-location flag on the `velero server` command (run
# by the Velero deployment) to the name of a different backup storage location.
velero backup create full-cluster-backup
```
Or:
```shell
velero backup create full-cluster-alternate-location-backup \
--storage-location s3-alt-region
```
#### For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
During server configuration:
```shell
velero snapshot-location create portworx-local \
--provider portworx \
--config type=local
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
During backup creation:
```shell
# Note that since in this example we have two possible volume snapshot locations for the Portworx
# provider, we need to explicitly specify which one to use when creating a backup. Alternately,
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
# the Velero deployment) to specify which location should be used for each provider by default, in
# which case you don't need to specify it when creating a backup.
velero backup create local-snapshot-backup \
--volume-snapshot-locations portworx-local
```
Or:
```shell
velero backup create cloud-snapshot-backup \
--volume-snapshot-locations portworx-cloud
```
#### Use a single location
If you don't have a use case for more than one location, it's still easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
During server configuration:
```shell
velero backup-location create default \
--provider aws \
--bucket velero-backups \
--config region=us-west-1
velero snapshot-location create ebs-us-west-1 \
--provider aws \
--config region=us-west-1
```
During backup creation:
```shell
# Velero will automatically use your configured backup storage location and volume snapshot location.
# Nothing needs to be specified when creating a backup.
velero backup create full-cluster-backup
```
## Additional Use Cases
1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups/subscriptions. This is possible using a `BackupStorageLocation`, by specifying a `storageAccount`, `resourceGroup` and/or `subscriptionId`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
[1]: api-types/backupstoragelocation.md
[2]: api-types/volumesnapshotlocation.md
[3]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/volumesnapshotlocation.md
[4]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/backupstoragelocation.md


@ -0,0 +1,48 @@
# Cluster migration
*Using Backups and Restores*
Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. **Note that Velero does not support the migration of persistent volumes across cloud providers.**
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
```
velero backup create <BACKUP-NAME>
```
The default TTL is 30 days (720 hours); you can use the `--ttl` flag to change this as necessary.
1. *(Cluster 2)* Configure `BackupStorageLocations` and `VolumeSnapshotLocations`, pointing to the locations used by *Cluster 1*, using `velero backup-location create` and `velero snapshot-location create`. Make sure to configure the `BackupStorageLocations` as read-only
by using the `--access-mode=ReadOnly` flag for `velero backup-location create`.
1. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.
```
velero backup describe <BACKUP-NAME>
```
**Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
```
velero restore create --from-backup <BACKUP-NAME>
```
## Verify both clusters
Check that the second cluster is behaving as expected:
1. *(Cluster 2)* Run:
```
velero restore get
```
1. Then run:
```
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
```
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.


@ -0,0 +1,23 @@
# Run in custom namespace
You can run Velero in any namespace.
First, ensure you've [downloaded & extracted the latest release][0].
Then, install Velero using the `--namespace` flag:
```bash
velero install --bucket <YOUR_BUCKET> --provider <YOUR_PROVIDER> --namespace <YOUR_NAMESPACE>
```
## Specify the namespace in client commands
To specify the namespace for all Velero client commands, run:
```bash
velero client config set namespace=<NAMESPACE_VALUE>
```
[0]: basic-install.md#install-the-cli


@ -0,0 +1,24 @@
# On-Premises Environments
You can run Velero in an on-premises cluster in different ways depending on your requirements.
### Selecting an object storage provider
You must select an object storage backend that Velero can use to store backup data. [Supported providers][0] contains information on various
options that are supported or have been reported to work by users.
If you do not already have an object storage system, [MinIO][2] is an open-source S3-compatible object storage system that can be installed on-premises and is compatible with Velero. The details of configuring it for production usage are out of scope for Velero's documentation, but an [evaluation install guide][3] using MinIO is provided for convenience.
### (Optional) Selecting volume snapshot providers
If you need to back up persistent volume data, you must select a volume backup solution. [Supported providers][0] contains information on the supported options.
For example, if you use [Portworx][4] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part of your Velero backups.
If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][1], which provides a platform-agnostic file-level backup solution for volume data.
[0]: supported-providers.md
[1]: restic.md
[2]: https://min.io
[3]: contributions/minio.md
[4]: https://portworx.com


@ -0,0 +1,99 @@
# Output file format
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `velero backup create <NAME>`).
In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Velero server configuration. This subdirectory includes an additional file called `velero-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format.
The directory structure in your cloud storage looks something like:
```
rootBucket/
    backup1234/
        velero-backup.json
        backup1234.tar.gz
```
## Example backup JSON file
```json
{
  "kind": "Backup",
  "apiVersion": "velero.io/v1",
  "metadata": {
    "name": "test-backup",
    "namespace": "velero",
    "selfLink": "/apis/velero.io/v1/namespaces/velero/backups/testtest",
    "uid": "a12345cb-75f5-11e7-b4c2-abcdef123456",
    "resourceVersion": "337075",
    "creationTimestamp": "2017-07-31T13:39:15Z"
  },
  "spec": {
    "includedNamespaces": [
      "*"
    ],
    "excludedNamespaces": null,
    "includedResources": [
      "*"
    ],
    "excludedResources": null,
    "labelSelector": null,
    "snapshotVolumes": true,
    "ttl": "24h0m0s"
  },
  "status": {
    "version": 1,
    "expiration": "2017-08-01T13:39:15Z",
    "phase": "Completed",
    "volumeBackups": {
      "pvc-e1e2d345-7583-11e7-b4c2-abcdef123456": {
        "snapshotID": "snap-04b1a8e11dfb33ab0",
        "type": "gp2",
        "iops": 100
      }
    },
    "validationErrors": null
  }
}
```
Note that this file includes detailed info about your volume snapshots in the `status.volumeBackups` field, which can be helpful if you want to manually check them in your cloud provider GUI.
## file format version: 1
When unzipped, a typical backup directory (e.g. `backup1234.tar.gz`) looks like the following:
```
resources/
    persistentvolumes/
        cluster/
            pv01.json
            ...
    configmaps/
        namespaces/
            namespace1/
                myconfigmap.json
                ...
            namespace2/
                ...
    pods/
        namespaces/
            namespace1/
                mypod.json
                ...
            namespace2/
                ...
    jobs/
        namespaces/
            namespace1/
                awesome-job.json
                ...
            namespace2/
                ...
    deployments/
        namespaces/
            namespace1/
                cool-deployment.json
                ...
            namespace2/
                ...
    ...
```


@ -0,0 +1,27 @@
# Velero plugin system
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations.
For server installation, Velero requires that at least one plugin is added (with the `--plugins` flag). The plugin must be an object store plugin, a volume snapshotter plugin, or one that contains both. The one exception: if you are not configuring a backup storage location or a volume snapshot location at install time, this flag is optional.
Any plugin can be added after Velero has been installed by using the command `velero plugin add <registry/image:version>`.
Example with a dockerhub image: `velero plugin add velero/velero-plugin-for-aws:v1.0.0`.
In the same way, any plugin can be removed by using the command `velero plugin remove <registry/image:version>`.
## Creating a new plugin
Anyone can add integrations for any platform to provide additional backup and volume storage without modifying the Velero codebase. To write a plugin for a new backup or volume storage platform, take a look at our [example repo][1] and at our documentation for [Custom plugins][2].
## Adding a new plugin
After you publish your plugin on your own repository, open a PR that adds a link to it under the appropriate list of [supported providers][3] page in our documentation.
You can also add the [`velero-plugin` GitHub Topic][4] to your repo, and it will be shown under the aggregated list of repositories automatically.
[1]: https://github.com/vmware-tanzu/velero-plugin-example/
[2]: custom-plugins.md
[3]: supported-providers.md
[4]: https://github.com/topics/velero-plugin


@ -0,0 +1,47 @@
# Run Velero more securely with restrictive RBAC settings
By default Velero runs with an RBAC policy of ClusterRole `cluster-admin`. This is to make sure that Velero can back up or restore anything in your cluster. But `cluster-admin` access is wide open -- it gives Velero components access to everything in your cluster. Depending on your environment and your security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
**Note:** Roles and RoleBindings are associated with a single namespace, not with an entire cluster. PersistentVolume backups are associated only with an entire cluster. This means that any backups or restores that use a restrictive Role and RoleBinding pair can manage only the resources that belong to that namespace. You do not need a wide-open RBAC policy to manage PersistentVolumes, however. You can configure a ClusterRole and ClusterRoleBinding that allow backups and restores only of PersistentVolumes, not of all objects in the cluster.
For more information about RBAC and access control generally in Kubernetes, see the Kubernetes documentation about [access control][1], [managing service accounts][2], and [RBAC authorization][3].
## Set up Roles and RoleBindings
Here's a sample Role and RoleBinding pair.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: YOUR_NAMESPACE_HERE
  name: ROLE_NAME_HERE
  labels:
    component: velero
rules:
  - apiGroups:
      - velero.io
    verbs:
      - "*"
    resources:
      - "*"
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ROLEBINDING_NAME_HERE
subjects:
  - kind: ServiceAccount
    name: YOUR_SERVICEACCOUNT_HERE
roleRef:
  kind: Role
  name: ROLE_NAME_HERE
  apiGroup: rbac.authorization.k8s.io
```
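A hedged sketch of putting these to use (the file names, namespace, and service account name are placeholders):

```bash
# Create the service account referenced by the RoleBinding
kubectl create serviceaccount YOUR_SERVICEACCOUNT_HERE --namespace YOUR_NAMESPACE_HERE

# Apply the Role and RoleBinding shown above
kubectl apply --namespace YOUR_NAMESPACE_HERE -f role.yaml -f rolebinding.yaml
```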
[1]: https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
[2]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
[3]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[4]: namespace.md


@ -0,0 +1,80 @@
# Release Instructions
## Ahead of Time
### (GA Only) Release Blog Post PR
Prepare a PR containing the release blog post. It's usually easiest to make a copy of the most recent existing post, then replace the content as appropriate.
You also need to update `site/index.html` so that the "Latest Release Information" section links to the new post.
### (Pre-Release and GA) Changelog and Docs PR
1. In a branch, create the file `changelogs/CHANGELOG-<major>.<minor>.md` (if it doesn't already exist) by copying the most recent one.
1. Run `make changelog` to generate a list of all unreleased changes. Copy/paste the output into `CHANGELOG-<major>.<minor>.md`, under the "All Changes" section for the release.
- You *may* choose to tweak formatting on the list of changes by adding code blocks, etc.
1. (GA Only) Remove all changelog files from `changelogs/unreleased`.
1. Update the main `CHANGELOG.md` file to properly reference the release-specific changelog file:
- (Pre-Release) List the release under "Development release"
- (GA) List the release under "Current release", remove any pre-releases from "Development release", and move the previous release into "Older releases".
1. If there is an existing set of pre-release versioned docs for the version you are releasing (i.e. `site/docs/v1.2.0-beta.1` exists, and you're releasing `v1.2.0-beta.2` or `v1.2.0`):
- Remove the directory containing the pre-release docs, i.e. `site/docs/<pre-release-version>`.
- Delete the pre-release docs table of contents file, i.e. `site/_data/<pre-release-version>-toc.yml`.
- Remove the pre-release docs table of contents mapping entry from `site/_data/toc-mapping.yml`.
- Remove all references to the pre-release docs from `site/_config.yml`.
1. Run `NEW_DOCS_VERSION=<VERSION> make gen-docs` (e.g. `NEW_DOCS_VERSION=v1.2.0 make gen-docs` or `NEW_DOCS_VERSION=v1.2.0-beta.1 make gen-docs`).
1. Follow the additional instructions at `site/README-JEKYLL.md` to complete the docs generation process.
1. Do a review of the diffs, and/or run `make serve-docs` and review the site.
1. Submit a PR containing the changelog and the version-tagged docs.
### (Pre-Release and GA) GitHub Token
To run the `goreleaser` process to generate a GitHub release, you'll need to have a GitHub token. See https://goreleaser.com/environment/ for more details.
You may regenerate the token for every release if you prefer.
#### If you don't already have a token
1. Go to https://github.com/settings/tokens/new.
1. Choose a name for your token.
1. Check the "repo" scope.
1. Click "Generate token".
1. Save the token value somewhere - you'll need it during the release, in the `GITHUB_TOKEN` environment variable.
#### If you do already have a token, but need to regenerate it
1. Go to https://github.com/settings/tokens.
1. Click on the name of the relevant token.
1. Click "Regenerate token".
1. Save the token value somewhere - you'll need it during the release, in the `GITHUB_TOKEN` environment variable.
## During Release
This process is the same for both pre-release and GA versions, except that there is no blog post PR to merge for pre-releases.
1. Merge the changelog + docs PR, so that it's included in the release tag.
1. Make sure your working directory is clean: `git status` should show `nothing to commit, working tree clean`.
1. Run `git fetch upstream master && git checkout upstream/master`.
1. Run `git tag <VERSION>` (e.g. `git tag v1.2.0` or `git tag v1.2.0-beta.1`).
1. Run `git push upstream <VERSION>` (e.g. `git push upstream v1.2.0` or `git push upstream v1.2.0-beta.1`). This will trigger the Travis CI job that builds/publishes the Docker images.
1. Generate the GitHub release (it will be created in "Draft" status, which means it's not visible to the outside world until you click "Publish"):
```bash
GITHUB_TOKEN=your-github-token \
RELEASE_NOTES_FILE=changelogs/CHANGELOG-<major>.<minor>.md \
PUBLISH=true \
make release
```
1. Navigate to the draft GitHub release, at https://github.com/vmware-tanzu/velero/releases.
1. If this is a patch release (e.g. `v1.2.1`), note that the full `CHANGELOG-1.2.md` contents will be included in the body of the GitHub release. You need to delete the previous releases' content (e.g. `v1.2.0`'s changelog) so that only the latest patch release's changelog shows.
1. Do a quick review for formatting. **Note:** the `goreleaser` process should detect if it's a pre-release version, and check that box in the GitHub release appropriately, but it's always worth double-checking.
1. Publish the release.
1. By now, the Docker images should have been published. Perform a smoke-test - for example:
- Download the CLI from the GitHub release
- Use it to install Velero into a cluster (or manually update an existing deployment to use the new images)
- Verify that `velero version` shows the expected output
- Run a backup/restore and ensure it works
1. (GA Only) Merge the blog post PR.
1. Announce the release:
- Twitter (mention a few highlights, link to the blog post)
- Slack channel
- Google group (this doesn't get a lot of traffic, and recent releases may not have been posted here)

View File

@ -0,0 +1,429 @@
# Restic Integration
Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic][1]. This support is considered beta quality. Please see the list of [limitations](#limitations) to understand if it currently fits your use case.
Velero has always allowed you to take snapshots of persistent volumes as part of your backups if you're using one of
the supported cloud providers' block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks).
We also provide a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.
We integrated restic with Velero so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes
volume*. This is a new capability for Velero, not a replacement for existing functionality. If you're running on AWS, and
taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you've
been waiting for a snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations. Stay tuned as this evolves!
\* hostPath volumes are not supported, but the [new local volume type][4] is supported.
## Setup
### Prerequisites
- Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
### Instructions
Ensure you've [downloaded the latest release][3].
To install restic, use the `--use-restic` flag on the `velero install` command. See the [install overview][2] for more details. When using restic on a storage provider that doesn't currently have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
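For example, an install with restic enabled and no volume snapshot location might look like the following sketch; the provider, plugin image, bucket, and credentials file are placeholders for your environment:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket YOUR_BUCKET \
    --secret-file ./credentials-velero \
    --use-restic \
    --use-volume-snapshots=false
```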
Please note: For some PaaS/CaaS platforms based on Kubernetes such as RancherOS, OpenShift and Enterprise PKS, some modifications are required to the restic DaemonSet spec.
**RancherOS**
The host path for volumes is not `/var/lib/kubelet/pods`; rather, it is `/opt/rke/var/lib/kubelet/pods`. Change the `hostPath` in the restic DaemonSet from:
```yaml
hostPath:
  path: /var/lib/kubelet/pods
```
to
```yaml
hostPath:
  path: /opt/rke/var/lib/kubelet/pods
```
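If Velero is already installed, one way to apply this change is with a JSON patch, assuming the default DaemonSet layout where `host-pods` is the first volume:

```bash
kubectl patch ds/restic \
  --namespace velero \
  --type json \
  -p '[{"op":"replace","path":"/spec/template/spec/volumes/0/hostPath","value":{"path":"/opt/rke/var/lib/kubelet/pods"}}]'
```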
**OpenShift**
The restic containers should run in `privileged` mode to be able to mount the correct host path to pod volumes.
1. Add the `velero` ServiceAccount to the `privileged` SCC:
```
$ oc adm policy add-scc-to-user privileged -z velero -n velero
```
2. For OpenShift version >= `4.1`, modify the DaemonSet yaml to request privileged mode:
```diff
@@ -67,3 +67,5 @@ spec:
          value: /credentials/cloud
        - name: VELERO_SCRATCH_DIR
          value: /scratch
+        securityContext:
+          privileged: true
```
or
```shell
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value": { "privileged": true}}]'
```
3. For OpenShift version < `4.1`, modify the DaemonSet yaml to request privileged mode and mount the correct host path to pod volumes.
```diff
@@ -35,7 +35,7 @@ spec:
          secretName: cloud-credentials
      - name: host-pods
        hostPath:
-          path: /var/lib/kubelet/pods
+          path: /var/lib/origin/openshift.local.volumes/pods
      - name: scratch
        emptyDir: {}
      containers:
@@ -67,3 +67,5 @@ spec:
          value: /credentials/cloud
        - name: VELERO_SCRATCH_DIR
          value: /scratch
+        securityContext:
+          privileged: true
```
or
```shell
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value": { "privileged": true}}]'
oc patch ds/restic \
--namespace velero \
--type json \
-p '[{"op":"replace","path":"/spec/template/spec/volumes/0/hostPath","value": { "path": "/var/lib/origin/openshift.local.volumes/pods"}}]'
```
If restic is not running in privileged mode, it will not be able to access pod volumes within the mounted hostPath directory because of the default enforced SELinux mode configured at the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) in order to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
By default, user namespaces in OpenShift do not schedule pods on all nodes in the cluster.
To schedule the restic pods on all nodes, the namespace needs an annotation:
```
oc annotate namespace <velero namespace> openshift.io/node-selector=""
```
This should be done before installing Velero.
Alternatively, the restic DaemonSet needs to be deleted and re-created after the namespace is annotated:
```
oc get ds restic -o yaml -n <velero namespace> > ds.yaml
oc annotate namespace <velero namespace> openshift.io/node-selector=""
oc create -n <velero namespace> -f ds.yaml
```
**Enterprise PKS**
You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the host path.
The `hostPath` should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`:
```yaml
hostPath:
  path: /var/vcap/data/kubelet/pods
```
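As with RancherOS, one way to apply the change on an existing install is a JSON patch, assuming the default DaemonSet layout where `host-pods` is the first volume:

```bash
kubectl patch ds/restic \
  --namespace velero \
  --type json \
  -p '[{"op":"replace","path":"/spec/template/spec/volumes/0/hostPath","value":{"path":"/var/vcap/data/kubelet/pods"}}]'
```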
**Microsoft Azure**
If you are using [Azure Files][8], you need to add `nouser_xattr` to your storage class's `mountOptions`. See [this restic issue][9] for more details.
You can use the following command to patch the storage class:
```bash
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
--type json \
--patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
```
You're now ready to use Velero with restic.
## Back up
1. Run the following for each pod that contains a volume to back up:
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod spec.
For example, for the following pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample
  namespace: foo
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-webserver
    volumeMounts:
    - name: pvc-volume
      mountPath: /volume-1
    - name: emptydir-volume
      mountPath: /volume-2
  volumes:
  - name: pvc-volume
    persistentVolumeClaim:
      claimName: test-volume-claim
  - name: emptydir-volume
    emptyDir: {}
```
You'd run:
```bash
kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume
```
This annotation can also be provided in a pod template spec if you use a controller to manage your pods; see the sketch after these steps for one way to add it.
1. Take a Velero backup:
```bash
velero backup create NAME OPTIONS...
```
1. When the backup completes, view information about the backups:
```bash
velero backup describe YOUR_BACKUP_NAME
```
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
```
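As noted in step 1, if a controller manages your pods you can set the annotation on the pod template instead. A sketch using a hypothetical Deployment named `sample-deployment` in the `foo` namespace:

```bash
# Illustrative: patch a Deployment so pods it creates carry the
# backup.velero.io/backup-volumes annotation automatically.
kubectl -n foo patch deployment/sample-deployment --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"pvc-volume,emptydir-volume"}}}}}'
```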
## Restore
1. Restore from your Velero backup:
```bash
velero restore create --from-backup BACKUP_NAME OPTIONS...
```
1. When the restore completes, view information about your pod volume restores:
```bash
velero restore describe YOUR_RESTORE_NAME
```
```bash
kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOUR_RESTORE_NAME -o yaml
```
## Limitations
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. We've decided to use a static,
common encryption key for all restic repositories created by Velero. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
appropriately. We plan to implement full Velero backup encryption, including securing the restic encryption keys, in
a future release.
- An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (e.g. by a ReplicaSet/Deployment), the next backup of those
volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
difference is small.
## Customize Restore Helper Container
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image.
In addition, you can customize the resource requirements for the init container, should you need to.
The ConfigMap must look like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: restic-restore-action-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restic restore
    # item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/restic: RestoreItemAction
data:
  # The value for "image" can either include a tag or not;
  # if the tag is *not* included, the tag from the main Velero
  # image will automatically be used.
  image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
  # "cpuRequest" sets the resources.requests.cpu value on the restic init containers during restore.
  # If not set, it will default to "100m". A value of "0" is treated as unbounded.
  cpuRequest: 200m
  # "memRequest" sets the resources.requests.memory value on the restic init containers during restore.
  # If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
  memRequest: 128Mi
  # "cpuLimit" sets the resources.limits.cpu value on the restic init containers during restore.
  # If not set, it will default to "100m". A value of "0" is treated as unbounded.
  cpuLimit: 200m
  # "memLimit" sets the resources.limits.memory value on the restic init containers during restore.
  # If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
  memLimit: 128Mi
```
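Once you've written the ConfigMap, apply it to the Velero namespace and verify it is in place; the local file name here is hypothetical:

```bash
kubectl -n velero apply -f restic-restore-action-config.yaml
kubectl -n velero get configmap restic-restore-action-config -o yaml
```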
## Troubleshooting
Run the following checks:
Are your Velero server and daemonset pods running?
```bash
kubectl get pods -n velero
```
Does your restic repository exist, and is it ready?
```bash
velero restic repo get
velero restic repo get REPO_NAME -o yaml
```
Are there any errors in your Velero backup/restore?
```bash
velero backup describe BACKUP_NAME
velero backup logs BACKUP_NAME
velero restore describe RESTORE_NAME
velero restore logs RESTORE_NAME
```
What is the status of your pod volume backups/restores?
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml
kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml
```
Is there any useful information in the Velero server or daemon pod logs?
```bash
kubectl -n velero logs deploy/velero
kubectl -n velero logs DAEMON_POD_NAME
```
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.
## How backup and restore work with restic
We introduced three custom resource definitions and associated controllers:
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.
You can see information about your Velero restic repositories by running `velero restic repo get`.
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to backup pod volume data.
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.
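To inspect these resources on a running cluster, you can list them directly; the restic repositories can also be viewed with `velero restic repo get`:

```bash
# Inspect the restic-related custom resources in the Velero namespace:
kubectl -n velero get resticrepositories
kubectl -n velero get podvolumebackups
kubectl -n velero get podvolumerestores
```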
### Backup
1. The main Velero backup process checks each pod that it's backing up for the annotation specifying a restic backup
should be taken (`backup.velero.io/backup-volumes`)
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
1. The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail
1. Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- finds the pod volume's subdirectory within the above volume
- runs `restic backup`
- updates the status of the custom resource to `Completed` or `Failed`
1. As each `PodVolumeBackup` finishes, the main Velero process adds it to the Velero backup in a file named `<backup-name>-podvolumebackups.json.gz`. This file gets uploaded to object storage alongside the backup tarball. It will be used for restores, as seen in the next section.
### Restore
1. The main Velero restore process checks each `PodVolumeBackup` custom resource associated with the backup being restored to determine which pod volumes to restore from restic.
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- waits for the pod to be running the init container
- finds the pod volume's subdirectory within the above volume
- runs `restic restore`
- on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore
that this pod volume restore is for
- updates the status of the custom resource to `Completed` or `Failed`
1. The init container that was added to the pod is running a process that waits until it finds a file
within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run
1. Once all such files are found, the init container's process terminates successfully and the pod moves
on to running other init containers/the main containers.
## 3rd party controller
### Monitor backup annotation
Velero does not currently provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
[1]: https://github.com/restic/restic
[2]: customize-installation.md#enable-restic-integration
[3]: https://github.com/vmware-tanzu/velero/releases/
[4]: https://kubernetes.io/docs/concepts/storage/volumes/#local
[5]: http://restic.readthedocs.io/en/latest/100_references.html#terminology
[6]: https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
[7]: https://github.com/bitsbeats/velero-pvc-watcher
[8]: https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
[9]: https://github.com/restic/restic/issues/1800

View File

@ -0,0 +1,64 @@
# Restore Reference
## Restoring Into a Different Namespace
Velero can restore resources into a different namespace than the one they were backed up from. To do this, use the `--namespace-mappings` flag:
```bash
velero restore create RESTORE_NAME \
--from-backup BACKUP_NAME \
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
```
## What happens when user removes restore objects
A **restore** object represents the restore operation. There are two types of deletion for restore objects:
### 1. Deleting with **`velero restore delete`**
This command will delete the custom resource representing the restore, along with its individual log and results files. However, it will not delete any objects that were created by the restore from your cluster.
### 2. Deleting with **`kubectl -n velero delete restore`**
This command will delete the custom resource representing the restore, but will not delete log/results files from object storage, or any objects that were created during the restore in your cluster.
## Restore command-line options
To see all commands for restores, run `velero restore --help`.
To see all options associated with a specific command, provide the --help flag to that command. For example, **`velero restore create --help`** shows all options associated with the **create** command.
### To list all options for restores, use **`velero restore --help`**
```
Usage:
  velero restore [command]

Available Commands:
  create      Create a restore
  delete      Delete restores
  describe    Describe restores
  get         Get restores
  logs        Get restore logs
```
## Changing PV/PVC Storage Classes
Velero can change the storage class of persistent volumes and persistent volume claims during restores. To configure a storage class mapping, create a config map in the Velero namespace like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in change storage
    # class restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-storage-class: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # storage class name and the value is the new storage
  # class name.
  <old-storage-class>: <new-storage-class>
```
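If you prefer not to write the YAML by hand, a ConfigMap with the same labels and data can be created imperatively; the storage class names below are hypothetical:

```bash
# A hypothetical mapping from a storage class named "gp2" to one named
# "premium-rwo"; adjust the names for your cluster.
kubectl -n velero create configmap change-storage-class-config \
  --from-literal=gp2=premium-rwo
kubectl -n velero label configmap change-storage-class-config \
  velero.io/plugin-config="" \
  velero.io/change-storage-class=RestoreItemAction
```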

View File

@ -0,0 +1,49 @@
# Run Velero locally in development
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
image and redeploy it to the cluster with each change.
## Run Velero locally with a remote cluster
Velero runs against the Kubernetes API server as the endpoint (as per the `kubeconfig` configuration), so both the Velero server and client use the same `client-go` to communicate with Kubernetes. This means the Velero server can be run locally just as functionally as if it was running in the remote cluster.
### Prerequisites
When running Velero, you will need to ensure that you set up all of the following:
* Appropriate RBAC permissions in the cluster
* Read access for all data from the source cluster and namespaces
* Write access to the target cluster and namespaces
* Cloud provider credentials
* Read/write access to volumes
* Read/write access to object storage for backup data
* A [BackupStorageLocation][20] object definition for the Velero server
* (Optional) A [VolumeSnapshotLocation][21] object definition for the Velero server, to take PV snapshots
### 1. Install Velero
See documentation on how to install Velero in some specific providers: [Install overview][22]
### 2. Scale deployment down to zero
After you use the `velero install` command to install Velero into your cluster, scale the Velero deployment down to 0 so that it is not running on the remote cluster at the same time as your local copy, which could cause the two to get out of sync:
`kubectl scale --replicas=0 deployment velero -n velero`
### 3. Start the Velero server locally
* To run the server locally, use the full path to the binary you need. For example, if you are on a Mac and using `AWS` as a provider, you can run the binary you built from source using the full path: `AWS_SHARED_CREDENTIALS_FILE=<path-to-credentials-file> ./_output/bin/darwin/amd64/velero`. Alternatively, you may add the `velero` binary to your `PATH`.
* Start the server: `velero server [CLI flags]`. The following CLI flags may be useful to customize, but see `velero server --help` for full details:
* `--log-level`: set the Velero server's log level (default `info`, use `debug` for the most logging)
* `--kubeconfig`: set the path to the kubeconfig file the Velero server uses to talk to the Kubernetes apiserver (default `$KUBECONFIG`)
* `--namespace`: set the namespace where the Velero server should look for backups, schedules, and restores (default `velero`)
* `--plugin-dir`: set the directory where the Velero server looks for plugins (default `/plugins`)
* `--metrics-address`: set the bind address and port where Prometheus metrics are exposed (default `:8085`)
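Putting the flags above together, a local run might look like the following sketch; the binary path, credentials file, and kubeconfig location are illustrative:

```bash
# Run the locally built server against a remote cluster.
AWS_SHARED_CREDENTIALS_FILE=./credentials-velero \
  ./_output/bin/darwin/amd64/velero server \
  --kubeconfig=$HOME/.kube/config \
  --namespace=velero \
  --log-level=debug
```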
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
[18]: https://eksctl.io/
[20]: api-types/backupstoragelocation.md
[21]: api-types/volumesnapshotlocation.md
[22]: basic-install.md

View File

@ -0,0 +1,21 @@
## Start contributing
### Before you start
* Please familiarize yourself with the [Code of Conduct][1] before contributing.
* Also, see [CONTRIBUTING.md][2] for instructions on the developer certificate of origin that we require.
### Finding your way around
You may join the Velero community and contribute in many different ways, including helping us design or test new features. For any significant feature we consider adding, we start with a design document. You can find a list of designs currently in progress here: https://github.com/vmware-tanzu/velero/pulls?q=is%3Aopen+is%3Apr+label%3ADesign. Feel free to review them and help us with your input.
For information on how to connect with our maintainers and community, join our online meetings, or find good first issues, start on our [Velero community](https://velero.io/community/) page.
Please browse our list of resources, including a playlist of past online community meetings, blog posts, and other resources to help you get familiar with our project: [Velero resources](https://velero.io/resources/).
### Contributing
If you are ready to jump in and test, add code, or help with documentation, please use the navigation on the left under `Contribute`.
[1]: https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/CODE_OF_CONDUCT.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/CONTRIBUTING.md

View File

@ -0,0 +1,41 @@
# Support Process
## Weekly Rotation
The Velero maintainers use a weekly rotation to manage community support. Each week, a different maintainer is the point person for responding to incoming support issues via Slack, GitHub, and the Google group. The point person is *not* expected to be on-call 24x7. Instead, they choose one or more hour(s) per day to be available/responding to incoming issues. They will communicate to the community what that time slot will be each week.
## Start of Week
We will update the public Slack channel's topic to indicate that you are the point person for the week, and what hours you'll be available.
## During the Week
### Where we will monitor
- `#velero` public Slack channel in Kubernetes org
- [all Velero-related repos][0] in GitHub (`velero`, `velero-plugin-for-[aws|gcp|microsoft-azure|csi]`, `helm-charts`)
- [Project Velero Google Group][1]
### GitHub issue flow
Generally speaking, new GitHub issues will fall into one of several categories. We use the following process for each:
1. **Feature request**
- Label the issue with `Enhancement/User` or `Enhancement/Dev`
- Leave the issue in the `New Issues` swimlane for triage by product mgmt
1. **Bug**
- Label the issue with `Bug`
- Leave the issue in the `New Issues` swimlane for triage by product mgmt
1. **User question/problem** that does not clearly fall into one of the previous categories
- When you start investigating/responding, label the issue with `Investigating`
- Add comments as you go, so both the user and future support people have as much context as possible
- Use the `Needs Info` label to indicate an issue is waiting for information from the user. Remove/re-add the label as needed.
- If you resolve the issue with the user, close it out
- If the issue ends up being a feature request or a bug, update the title and follow the appropriate process for it
- If the reporter becomes unresponsive after multiple pings, close out the issue due to inactivity and comment that the user can always reach out again as needed
## End of Week
We ensure all GitHub issues worked on during the week are labeled with `Investigating` and `Needs Info` (if appropriate) and have updated comments, so the next person can pick them up.
[0]: https://app.zenhub.com/workspaces/velero-5c59c15e39d47b774b5864e3/board?repos=99143276,112385197,213946861,190224441,214524700,214524630
[1]: https://groups.google.com/forum/#!forum/projectvelero

View File

@ -0,0 +1,80 @@
# Providers
Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Velero supported providers
| Provider | Object Store | Volume Snapshotter | Plugin Provider Repo | Setup Instructions |
|-----------------------------------|---------------------|------------------------------|-----------------------------------------|-------------------------------|
| [Amazon Web Services (AWS)][7] | AWS S3 | AWS EBS | [Velero plugin for AWS][8] | [AWS Plugin Setup][35] |
| [Google Cloud Platform (GCP)][11] | Google Cloud Storage| Google Compute Engine Disks | [Velero plugin for GCP][12] | [GCP Plugin Setup][36] |
| [Microsoft Azure][9] | Azure Blob Storage | Azure Managed Disks | [Velero plugin for Microsoft Azure][10] | [Azure Plugin Setup][37] |
Contact: [#Velero Slack][28], [GitHub Issues][29]
## Community supported providers
| Provider | Object Store | Volume Snapshotter | Plugin Documentation | Contact |
|---------------------------|------------------------------|------------------------------------|------------------------|---------------------------------|
| [Portworx][31] | 🚫 | Portworx Volume | [Portworx][32] | [Slack][33], [GitHub Issue][34] |
| [DigitalOcean][15] | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | [StackPointCloud][16] | |
| [OpenEBS][17] | 🚫 | OpenEBS CStor Volume | [OpenEBS][18] | [Slack][19], [GitHub Issue][20] |
| [AlibabaCloud][21] | Alibaba Cloud OSS | Alibaba Cloud | [AlibabaCloud][22] | [GitHub Issue][23] |
| [Hewlett Packard][24] | 🚫 | HPE Storage | [Hewlett Packard][25] | [Slack][26], [GitHub Issue][27] |
## S3-Compatible object store providers
Velero's AWS Object Store plugin uses [Amazon's Go SDK][0] to connect to the AWS S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
_Note that these storage providers are not regularly tested by the Velero team._
* [IBM Cloud][1]
* [Oracle Cloud][2]
* [Minio][3]
* [DigitalOcean][4]
* [NooBaa][5]
* Ceph RADOS v12.2.7
* Quobyte
_Some storage providers, like Quobyte, may need a different [signature algorithm version][6]._
## Non-supported volume snapshots
If you want to take volume backups but there is no volume snapshotter plugin for your provider, Velero supports backing up volume data using restic. Please see the [restic integration][30] documentation.
[0]: https://github.com/aws/aws-sdk-go/aws
[1]: contributions/ibm-config.md
[2]: contributions/oracle-config.md
[3]: contributions/minio.md
[4]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[5]: http://www.noobaa.com/
[6]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/master/backupstoragelocation.md
[7]: https://aws.amazon.com
[8]: https://github.com/vmware-tanzu/velero-plugin-for-aws
[9]: https://azure.com
[10]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure
[11]: https://cloud.google.com
[12]: https://github.com/vmware-tanzu/velero-plugin-for-gcp
[15]: https://www.digitalocean.com/
[16]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[17]: https://openebs.io/
[18]: https://github.com/openebs/velero-plugin
[19]: https://openebs-community.slack.com/
[20]: https://github.com/openebs/velero-plugin/issues
[21]: https://www.alibabacloud.com/
[22]: https://github.com/AliyunContainerService/velero-plugin
[23]: https://github.com/AliyunContainerService/velero-plugin/issues
[24]: https://www.hpe.com/us/en/storage.html
[25]: https://github.com/hpe-storage/velero-plugin
[26]: https://slack.hpedev.io/
[27]: https://github.com/hpe-storage/velero-plugin/issues
[28]: https://kubernetes.slack.com/messages/velero
[29]: https://github.com/vmware-tanzu/velero/issues
[30]: restic.md
[31]: https://portworx.com/
[32]: https://docs.portworx.com/scheduler/kubernetes/ark.html
[33]: https://portworx.slack.com/messages/px-k8s
[34]: https://github.com/portworx/ark-plugin/issues
[35]: https://github.com/vmware-tanzu/velero-plugin-for-aws#setup
[36]: https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup
[37]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure#setup

View File

@ -0,0 +1,123 @@
# Troubleshooting
These tips can help you troubleshoot known issues. If they don't help, you can [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
- [Troubleshooting](#troubleshooting)
- [Debug installation/ setup issues](#debug-installation-setup-issues)
- [Debug restores](#debug-restores)
- [General troubleshooting information](#general-troubleshooting-information)
- [Getting velero debug logs](#getting-velero-debug-logs)
- [Known issue with restoring LoadBalancer Service](#known-issue-with-restoring-loadbalancer-service)
- [Miscellaneous issues](#miscellaneous-issues)
- [Velero reports `custom resource not found` errors when starting up.](#velero-reports-custom-resource-not-found-errors-when-starting-up)
- [`velero backup logs` returns a `SignatureDoesNotMatch` error](#velero-backup-logs-returns-a-signaturedoesnotmatch-error)
- [Velero (or a pod it was backing up) restarted during a backup and the backup is stuck InProgress](#velero-or-a-pod-it-was-backing-up-restarted-during-a-backup-and-the-backup-is-stuck-inprogress)
- [Velero is not publishing prometheus metrics](#velero-is-not-publishing-prometheus-metrics)
## Debug installation/ setup issues
- [Debug installation/setup issues][2]
## Debug restores
- [Debug restores][1]
## General troubleshooting information
You can use the `velero bug` command to open a [GitHub issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available), and the `velero` client version. This information isn't submitted to GitHub until you click the `Submit new issue` button in the GitHub UI, so feel free to add, remove, or update whatever information you like.
Some general commands for troubleshooting that may be helpful:
* `velero backup describe <backupName>` - describe the details of a backup
* `velero backup logs <backupName>` - fetch the logs for this specific backup. Useful for viewing failures and warnings, including resources that could not be backed up.
* `velero restore describe <restoreName>` - describe the details of a restore
* `velero restore logs <restoreName>` - fetch the logs for this specific restore. Useful for viewing failures and warnings, including resources that could not be restored.
* `kubectl logs deployment/velero -n velero` - fetch the logs of the Velero server pod. This provides the output of the Velero server processes.
### Getting velero debug logs
You can increase the verbosity of the Velero server by editing your Velero deployment to look like this:
```
kubectl edit deployment/velero -n velero
...
    containers:
    - name: velero
      image: velero/velero:latest
      command:
      - /velero
      args:
      - server
      - --log-level # Add this line
      - debug # Add this line
...
```
## Known issue with restoring LoadBalancer Service
Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.
Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See [the Kubernetes documentation about Services of Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
## Miscellaneous issues
### Velero reports `custom resource not found` errors when starting up.
Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Run `velero install` again to install any missing custom resource definitions.
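If only the CRDs are missing, reapplying them with the same command used in the upgrade instructions should be sufficient:

```bash
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```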
### `velero backup logs` returns a `SignatureDoesNotMatch` error
Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible
providers, such as Ceph, there may be differences between their implementation and the official S3
API that cause errors.
Here are some things to verify if you receive `SignatureDoesNotMatch` errors:
* Make sure your S3-compatible layer is using [signature version 4][5] (such as Ceph RADOS v12.2.7)
* For Ceph, try using a native Ceph account for credentials instead of external providers such as OpenStack Keystone
## Velero (or a pod it was backing up) restarted during a backup and the backup is stuck InProgress
Velero cannot currently resume backups that were interrupted. Backups stuck in the `InProgress` phase can be deleted with `kubectl delete backup <name> -n <velero-namespace>`.
Backups in the `InProgress` phase have not uploaded any files to object storage.
## Velero is not publishing prometheus metrics
Steps to troubleshoot:
- Confirm that your Velero deployment has metrics publishing enabled. The [latest Velero helm charts][6] have been set up with [metrics enabled by default][7].
- Confirm that the Velero server pod exposes the port on which the metrics server listens. By default, this is port 8085.
```yaml
ports:
- containerPort: 8085
  name: metrics
  protocol: TCP
```
- Confirm that the metric server is listening for and responding to connections on this port. This can be done using [port-forwarding][9] as shown below
```bash
$ kubectl -n <YOUR_VELERO_NAMESPACE> port-forward <YOUR_VELERO_POD> 8085:8085
Forwarding from 127.0.0.1:8085 -> 8085
Forwarding from [::1]:8085 -> 8085
.
.
.
```
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the necessary [annotations][8] for Prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md
[2]: debugging-install.md
[4]: https://github.com/vmware-tanzu/velero/issues
[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
[6]: https://github.com/vmware-tanzu/helm-charts/blob/master/charts/velero
[7]: https://github.com/vmware-tanzu/helm-charts/blob/master/charts/velero/values.yaml#L44
[8]: https://github.com/vmware-tanzu/helm-charts/blob/master/charts/velero/values.yaml#L49-L52
[9]: https://kubectl.docs.kubernetes.io/pages/container_debugging/port_forward_to_pods.html
[25]: https://kubernetes.slack.com/messages/velero

View File

@ -0,0 +1,8 @@
# Uninstalling Velero
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`:
```bash
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```

View File

@ -0,0 +1,102 @@
# Upgrading to Velero 1.2
## Prerequisites
- Velero [v1.1][0] or [v1.0][1] installed.
_Note: if you're upgrading from v1.0, follow the [upgrading to v1.1][2] instructions first._
## Instructions
1. Install the Velero v1.2 command-line interface (CLI) by following the [instructions here][3].
Verify that you've properly installed it by running:
```bash
velero version --client-only
```
You should see the following output:
```bash
Client:
Version: v1.2.0
Git commit: <git SHA>
```
1. Scale down the existing Velero deployment:
```bash
kubectl scale deployment/velero \
--namespace velero \
--replicas 0
```
1. Update the container image used by the Velero deployment and, optionally, the restic daemon set:
```bash
kubectl set image deployment/velero \
velero=velero/velero:v1.2.0 \
--namespace velero
# optional, if using the restic daemon set
kubectl set image daemonset/restic \
restic=velero/velero:v1.2.0 \
--namespace velero
```
1. If using AWS, Azure, or GCP, add the respective plugin to your Velero deployment:
For AWS:
```bash
velero plugin add velero/velero-plugin-for-aws:v1.0.0
```
For Azure:
```bash
velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0
```
For GCP:
```bash
velero plugin add velero/velero-plugin-for-gcp:v1.0.0
```
1. Update the Velero custom resource definitions (CRDs) to include the structural schemas:
```bash
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```
1. Scale back up the existing Velero deployment:
```bash
kubectl scale deployment/velero \
--namespace velero \
--replicas 1
```
1. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
```
You should see the following output:
```bash
Client:
Version: v1.2.0
Git commit: <git SHA>
Server:
Version: v1.2.0
```
[0]: https://github.com/vmware-tanzu/velero/releases/tag/v1.1.0
[1]: https://github.com/vmware-tanzu/velero/releases/tag/v1.0.0
[2]: https://velero.io/docs/v1.1.0/upgrade-to-1.1/
[3]: basic-install.md#install-the-cli

View File

@ -0,0 +1,45 @@
# Velero Install CLI
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your Kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage
This section explains some of the basic flags supported by the `velero install` CLI command. For a complete explanation of the flags, please run `velero install --help`
```bash
velero install \
--plugins <PLUGIN_CONTAINER_IMAGE [PLUGIN_CONTAINER_IMAGE]> \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
--secret-file <PATH_TO_FILE> \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-restic] \
[--restic-pod-cpu-request <CPU_REQUEST>] \
[--restic-pod-mem-request <MEMORY_REQUEST>] \
[--restic-pod-cpu-limit <CPU_LIMIT>] \
[--restic-pod-mem-limit <MEMORY_LIMIT>]
```
The values for the resource requests and limits flags follow the same format as [Kubernetes resource requirements][3]
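For example, resource requests and limits might be passed like this; the quantities are illustrative, not recommendations:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket YOUR_BUCKET \
    --secret-file ./credentials-velero \
    --velero-pod-cpu-request 500m \
    --velero-pod-mem-request 128Mi \
    --velero-pod-cpu-limit 1 \
    --velero-pod-mem-limit 512Mi
```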
For plugin container images, please refer to our [supported providers][2] page.
## Examples
This section provides examples that serve as a starting point for more customized installations.
```bash
velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket mybucket --secret-file ./gcp-service-account.json
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket backups --secret-file ./aws-iam-creds --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 --use-restic
velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.0 --bucket $BLOB_CONTAINER --secret-file ./credentials-velero --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID[,subscriptionId=$AZURE_BACKUP_SUBSCRIPTION_ID] --snapshot-location-config apiTimeout=<YOUR_TIMEOUT>[,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,subscriptionId=$AZURE_BACKUP_SUBSCRIPTION_ID]
```
[1]: build-from-source.md#making-images-and-updating-velero
[2]: supported-providers.md
[3]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

View File

@ -0,0 +1,18 @@
# Vendoring dependencies
## Overview
We are using [dep][0] to manage dependencies. You can install it by following [these
instructions][1].
## Adding a new dependency
Run `dep ensure`. If you want to see verbose output, you can append `-v` as in
`dep ensure -v`.
## Updating an existing dependency
Run `dep ensure -update <pkg> [<pkg> ...]` to update one or more dependencies.
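For example (the package path is hypothetical):

```bash
# Update a single dependency with verbose output:
dep ensure -v -update github.com/pkg/errors
```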
[0]: https://github.com/golang/dep
[1]: https://golang.github.io/dep/docs/installation.html

View File

@ -0,0 +1,46 @@
# Website Guidelines
## Running the website locally
When making changes to the website, please run the site locally before submitting a PR and manually verify your changes.
At the root of the project, run:
```bash
make serve-docs
```
This runs the site, with all of its Ruby dependencies, in a container.
Alternatively, for quickly loading the website, under the `velero/site/` directory run:
```bash
bundle exec jekyll serve --livereload --future
```
For more information on how to run the website locally, please see our [jekyll documentation](https://github.com/vmware-tanzu/velero/blob/v1.3.0-beta.1/site/README-JEKYLL.md).
## Adding a blog post
The `author_name` value must also be included in the tags field so the page that lists the author's posts will work properly (Ex: https://velero.io/tags/carlisia%20campos/).
Note that the tags field can have multiple values.
Example:
```text
author_name: Carlisia Campos
tags: ['Carlisia Campos', "release", "how-to"]
```
### Please add an image
If there is no image added to the header of the post, the default Velero logo will be used. This is fine, but not ideal.
If there's an image that can be used as the blog post icon, the image field must be set to:
```text
image: /img/posts/<your_image_name.png>
```
This image file must be added to the `/site/img/posts` folder.

View File

@ -0,0 +1,15 @@
# ZenHub
As an Open Source community, it is necessary for our work, communication, and collaboration to be done in the open.
GitHub provides a central repository for code, pull requests, issues, and documentation. When applicable, we will use Google Docs for design reviews, proposals, and other working documents.
While GitHub issues, milestones, and labels generally work pretty well, the Velero team has found that product planning requires some additional tooling that GitHub projects do not offer.
In our effort to minimize tooling while enabling product management insights, we have decided to use [ZenHub Open-Source](https://www.zenhub.com/blog/open-source/) to overlay product and project tracking on top of GitHub.
ZenHub is a GitHub application that provides Kanban visualization, Epic tracking, fine-grained prioritization, and more. Its primary backing store is existing GitHub issues, along with additional metadata stored in ZenHub's database.
If you are a Velero user or developer, you do not _need_ to use ZenHub for your regular workflow (e.g., to see open bug reports or feature requests, or to work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
## Using ZenHub
ZenHub can be integrated within the GitHub interface using their [Chrome or FireFox extensions](https://www.zenhub.com/extension). In addition, you can use their dedicated [web application](https://app.zenhub.com/workspace/o/vmware-tanzu/velero/boards?filterLogic=all&repos=99143276).