generate v1.1.0-beta.1 docs

Signed-off-by: Steve Kriss <krisss@vmware.com>
pull/1732/head
Steve Kriss 2019-08-05 15:03:07 -06:00
parent 80692a8a39
commit d881a10fba
42 changed files with 3877 additions and 0 deletions

View File

@ -52,6 +52,12 @@ defaults:
    version: master
    gh: https://github.com/heptio/velero/tree/master
    layout: "docs"
- scope:
    path: docs/v1.1.0-beta.1
  values:
    version: v1.1.0-beta.1
    gh: https://github.com/heptio/velero/tree/v1.1.0-beta.1
    layout: "docs"
- scope:
    path: docs/v1.0.0
  values:
@ -139,6 +145,7 @@ versioning: true
latest: v1.0.0
versions:
- master
- v1.1.0-beta.1
- v1.0.0
- v0.11.0
- v0.10.0

View File

@ -3,6 +3,7 @@
# that the navigation for older versions still works.
master: master-toc
v1.1.0-beta.1: v1-1-0-beta-1-toc
v1.0.0: v1-0-0-toc
v0.11.0: v011-toc
v0.10.0: v010-toc

View File

@ -0,0 +1,67 @@
toc:
  - title: Introduction
    subfolderitems:
      - page: About Velero
        url: /index.html
      - page: How Velero works
        url: /about
      - page: About locations
        url: /locations
      - page: Supported platforms
        url: /support-matrix
  - title: Install
    subfolderitems:
      - page: Overview
        url: /install-overview
      - page: Quick start with in-cluster MinIO
        url: /get-started
      - page: Run on AWS
        url: /aws-config
      - page: Run on Azure
        url: /azure-config
      - page: Run on GCP
        url: /gcp-config
      - page: Restic setup
        url: /restic
  - title: Use
    subfolderitems:
      - page: Disaster recovery
        url: /disaster-case
      - page: Cluster migration
        url: /migration-case
      - page: Backup reference
        url: /backup-reference
      - page: Restore reference
        url: /restore-reference
  - title: Troubleshoot
    subfolderitems:
      - page: Troubleshooting
        url: /troubleshooting
      - page: Troubleshoot an install or setup
        url: /debugging-install
      - page: Troubleshoot a restore
        url: /debugging-restores
      - page: Troubleshoot Restic
        url: /restic#troubleshooting
  - title: Customize
    subfolderitems:
      - page: Build from source
        url: /build-from-source
      - page: Run in any namespace
        url: /namespace
      - page: Extend
        url: /extend
      - page: Extend with plugins
        url: /plugins
      - page: Extend with hooks
        url: /hooks
  - title: More information
    subfolderitems:
      - page: Backup file format
        url: /output-file-format
      - page: API types
        url: /api-types
      - page: FAQ
        url: /faq
      - page: ZenHub
        url: /zenhub

View File

@ -0,0 +1,81 @@
![100]
[![Build Status][1]][2]
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
You can run Velero in clusters on a cloud provider or on-premises. For detailed information, see [Compatible Storage Providers][99].
## Installation
We strongly recommend that you use an [official release][6] of Velero. The tarballs for each release contain the
`velero` command-line client. Follow the [installation instructions][28] to get started.
_The code and sample YAML files in the master branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## More information
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Velero, and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Contributing
Thanks for taking the time to join our community and start contributing!
Feedback and discussion are available on [the mailing list][24].
### Before you start
* Please familiarize yourself with the [Code of Conduct][8] before contributing.
* See [CONTRIBUTING.md][5] for instructions on the developer certificate of origin that we require.
* Read how [we're using ZenHub][26] for project and roadmap planning.
### Pull requests
* We welcome pull requests. Feel free to dig through the [issues][4] and jump in.
## Changelog
See [the list of releases][6] to find out about feature changes.
[1]: https://travis-ci.org/heptio/velero.svg?branch=master
[2]: https://travis-ci.org/heptio/velero
[4]: https://github.com/heptio/velero/issues
[5]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/CONTRIBUTING.md
[6]: https://github.com/heptio/velero/releases
[8]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/CODE_OF_CONDUCT.md
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
[26]: /zenhub.md
[28]: install-overview.md
[29]: https://velero.io/docs/v1.1.0-beta.1/
[30]: troubleshooting.md
[99]: support-matrix.md
[100]: img/velero.png

View File

@ -0,0 +1,80 @@
# How Velero Works
Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes [Custom Resource Definition (CRD)][20] and stored in [etcd][22]. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label.
Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state prior to performing system operations on your cluster (e.g. upgrades).
## On-demand backups
The **backup** operation:
1. Uploads a tarball of copied Kubernetes objects into cloud object storage.
1. Calls the cloud provider API to make disk snapshots of persistent volumes, if specified.
You can optionally specify hooks to be executed during the backup. For example, you might
need to tell a database to flush its in-memory buffers to disk before taking a snapshot. [More about hooks][10].
Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible.
## Scheduled backups
The **schedule** operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression.
Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*.
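For example, the following command creates a daily backup schedule (the schedule name is illustrative):
```bash
velero schedule create daily-backup --schedule "0 7 * * *"
```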
## Restores
The **restore** operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Velero supports multiple namespace remapping -- for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the objects in namespace "123" under "456".
The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`.
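For example, a restore with a custom name and namespace remapping might look like the following (the backup and restore names are illustrative; `--namespace-mappings` is the flag that controls remapping):
```bash
velero restore create restore-to-def \
    --from-backup nightly-backup-20190429150000 \
    --namespace-mappings abc:def,123:456
```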
By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.
## Backup workflow
When you run `velero backup create test-backup`:
1. The Velero client makes a call to the Kubernetes API server to create a `Backup` object.
1. The `BackupController` notices the new `Backup` object and performs validation.
1. The `BackupController` begins the backup process. It collects the data to back up by querying the API server for resources.
1. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file.
By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`.
![19]
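For example, to create a backup without any volume snapshots:
```bash
velero backup create test-backup --snapshot-volumes=false
```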
## Backed-up API versions
Velero backs up resources using the Kubernetes API server's *preferred version* for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful.
For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster **must** have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` **does not** need to be the preferred version in the target cluster; it just needs to exist.
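To check whether a target cluster serves a required group/version before restoring, you can list the API versions it exposes (the `things` group below is the hypothetical example from above):
```bash
kubectl api-versions | grep '^things/'
```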
## Set a backup to expire
When you create a backup, you can specify a TTL by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes:
* The backup resource
* The backup file from cloud object storage
* All PersistentVolume snapshots
* All associated Restores
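For example, to create a backup that expires after 24 hours (the backup name is illustrative):
```bash
velero backup create nginx-backup --ttl 24h0m0s
```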
## Object storage sync
Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes.
This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster.
Likewise, if a backup object exists in Kubernetes but not in object storage, it will be deleted from Kubernetes since the backup tarball no longer exists.
[10]: hooks.md
[19]: img/backup-process.png
[20]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions
[21]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers
[22]: https://github.com/coreos/etcd

View File

@ -0,0 +1,14 @@
# Table of Contents
## API types
This page lists the API types that have some functionality you can only configure via JSON/YAML, rather than through the `velero` CLI (hooks, for example):
* [Backup][1]
* [BackupStorageLocation][2]
* [VolumeSnapshotLocation][3]
[1]: backup.md
[2]: backupstoragelocation.md
[3]: volumesnapshotlocation.md

View File

@ -0,0 +1,143 @@
# Backup API Type
## Use
The `Backup` API type is used as a request for the Velero server to perform a backup. Once created, the
Velero server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
  # Backup name. May be any valid Kubernetes object name. Required.
  name: a
  # Backup namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the backup. Required.
spec:
  # Array of namespaces to include in the backup. If unspecified, all namespaces are included.
  # Optional.
  includedNamespaces:
    - '*'
  # Array of namespaces to exclude from the backup. Optional.
  excludedNamespaces:
    - some-namespace
  # Array of resources to include in the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
    - '*'
  # Array of resources to exclude from the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
    - storageclasses.storage.k8s.io
  # Whether or not to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
  # up are those associated with namespace-scoped resources included in the backup. For example, if a
  # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
  # cluster-scoped) would also be backed up.
  includeClusterResources: null
  # Individual objects must match this label selector to be included in the backup. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
  # AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
  # a persistent volume provider is configured for Velero.
  snapshotVolumes: null
  # Where to store the tarball and logs.
  storageLocation: aws-primary
  # The list of locations in which to store volume snapshots created for this backup.
  volumeSnapshotLocations:
    - aws-primary
    - gcp-primary
  # The amount of time before this backup is eligible for garbage collection. If not specified,
  # a default value of 30 days will be used. The default can be configured on the velero server
  # by passing the flag --default-backup-ttl.
  ttl: 24h0m0s
  # Actions to perform at different times during a backup. The only hook currently supported is
  # executing a command in a container in a pod using the pod exec API. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
      -
        # Name of the hook. Will be displayed in backup log.
        name: my-hook
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
          - '*'
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
          - some-namespace
        # Array of resources to which this hook applies. The only resource supported at this time is
        # pods.
        includedResources:
          - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
        pre:
          -
            # The type of hook. This must be "exec".
            exec:
              # The name of the container where the command will be executed. If unspecified, the
              # first container in the pod will be used. Optional.
              container: my-container
              # The command to execute, specified as an array. Required.
              command:
                - /bin/uname
                - -a
              # How to handle an error executing the command. Valid values are Fail and Continue.
              # Defaults to Fail. Optional.
              onError: Fail
              # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
              timeout: 10s
        # An array of hooks to run after all custom actions and additional items have been
        # processed. Currently only "exec" hooks are supported.
        post:
          # Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
  # The version of this Backup. The only version currently supported is 1.
  version: 1
  # The date and time when the Backup is eligible for garbage collection.
  expiration: null
  # The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
  phase: ""
  # An array of any validation errors encountered.
  validationErrors: null
  # Date/time when the backup started being processed.
  startTimestamp: 2019-04-29T15:58:43Z
  # Date/time when the backup finished being processed.
  completionTimestamp: 2019-04-29T15:58:56Z
  # Number of volume snapshots that Velero tried to create for this backup.
  volumeSnapshotsAttempted: 2
  # Number of volume snapshots that Velero successfully created for this backup.
  volumeSnapshotsCompleted: 1
  # Number of warnings that were logged by the backup.
  warnings: 2
  # Number of errors that were logged by the backup.
  errors: 0
```

View File

@ -0,0 +1,74 @@
# Velero Backup Storage Locations
## Backup Storage Location
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: myBucket
  config:
    region: us-west-2
    profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the backups. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for backup storage. |
#### AWS
**(Or other S3-compatible storage)**
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Queried from the AWS S3 API if not provided. |
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
| `s3Url` | string | Required field for non-AWS-hosted storage| *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Velero can already generate it from `region` and `bucket`. This field is primarily for local storage services like Minio. |
| `publicUrl` | string | Empty | *Example*: https://minio.mycluster.com<br><br>If specified, use this instead of `s3Url` when generating download URLs (e.g., for logs). This field is primarily for local storage services like Minio.|
| `kmsKeyId` | string | Empty | *Example*: "502b409c-4da1-419f-a16e-eif453b3i49f" or "alias/`<KMS-Key-Alias-Name>`"<br><br>Specify an [AWS KMS key][10] id or alias to enable encryption of the backups stored in S3. Only works with AWS S3 and may require explicitly granting key usage rights.|
| `signatureVersion` | string | `"4"` | Version of the signature algorithm used to create signed URLs that are used by the `velero` CLI to download backups or fetch logs. Possible versions are "1" and "4". Usually the default version 4 is correct, but some S3-compatible providers like Quobyte only support version 1. |
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
#### Azure
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `resourceGroup` | string | Required Field | Name of the resource group containing the storage account for this backup storage location. |
| `storageAccount` | string | Required Field | Name of the storage account for this backup storage location. |
#### GCP
No parameters required.
[0]: #aws
[1]: #gcp
[2]: #azure
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
[10]: http://docs.aws.amazon.com/kms/latest/developerguide/overview.html

View File

@ -0,0 +1,69 @@
# Velero Volume Snapshot Location
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
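For example, with locations configured for two providers, a backup can select one location per provider at creation time (the location names are illustrative; `--volume-snapshot-locations` is the flag used to select them):
```bash
velero backup create full-backup \
    --volume-snapshot-locations aws-default,gcp-default
```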
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-west-2
    profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the volume snapshots. |
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for volume snapshots. |
#### AWS
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Required. |
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
#### Azure
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `apiTimeout` | metav1.Duration | 2m0s | How long to wait for an Azure API request to complete before timeout. |
| `resourceGroup` | string | Optional | The name of the resource group where volume snapshots should be stored, if different from the cluster's resource group. |
#### GCP
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `snapshotLocation` | string | Empty | *Example*: "us-central1"<br><br>See [GCP documentation][4] for the full list.<br><br>If not specified, the snapshots are stored in the [default location][5]. |
| `project` | string | Empty | The project ID where snapshots should be stored, if different from the project that your IAM account is in. Optional. |
[0]: #aws
[1]: #gcp
[2]: #azure
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
[4]: https://cloud.google.com/storage/docs/locations#available_locations
[5]: https://cloud.google.com/compute/docs/disks/create-snapshots#default_location

View File

@ -0,0 +1,306 @@
# Run Velero on AWS
To set up Velero on AWS, you:
* Download an official release of Velero
* Create your S3 bucket
* Create an AWS IAM user for Velero
* Install the server
If you do not have the `aws` CLI locally installed, follow the [user guide][5] to set it up.
## Download Velero
1. Download the [latest official release's](https://github.com/heptio/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
2. Extract the tarball:
```
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
3. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create S3 bucket
Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
BUCKET=<YOUR_BUCKET>
REGION=<YOUR_REGION>
aws s3api create-bucket \
--bucket $BUCKET \
--region $REGION \
--create-bucket-configuration LocationConstraint=$REGION
```
NOTE: us-east-1 does not support a `LocationConstraint`. If your region is `us-east-1`, omit the bucket configuration:
```bash
aws s3api create-bucket \
--bucket $BUCKET \
--region us-east-1
```
## Create IAM user
For more information, see [the AWS documentation on IAM users][14].
1. Create the IAM user:
```bash
aws iam create-user --user-name velero
```
If you'll be using Velero to back up multiple clusters with multiple S3 buckets, it may be desirable to create a unique username per cluster rather than the default `velero`.
2. Attach policies to give `velero` the necessary permissions:
```
cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}"
            ]
        }
    ]
}
EOF
```
```bash
aws iam put-user-policy \
--user-name velero \
--policy-name velero \
--policy-document file://velero-policy.json
```
3. Create an access key for the user:
```bash
aws iam create-access-key --user-name velero
```
The result should look like:
```json
{
  "AccessKey": {
    "UserName": "velero",
    "Status": "Active",
    "CreateDate": "2017-07-31T22:24:41.576Z",
    "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
    "AccessKeyId": <AWS_ACCESS_KEY_ID>
  }
}
```
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```bash
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
```
where the access key id and secret are the values returned from the `create-access-key` request.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider aws \
--bucket $BUCKET \
--secret-file ./credentials-velero \
--backup-location-config region=$REGION \
--snapshot-location-config region=$REGION
```
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify [additional configurable parameters][21] for the `--backup-location-config` flag.
(Optional) Specify [additional configurable parameters][6] for the `--snapshot-location-config` flag.
(Optional) Specify [CPU and memory resource requests and limits][22] for the Velero/restic pods.
For more complex installation needs, use either the Helm chart, or add the `--dry-run -o yaml` options to generate the YAML representation of the installation.
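For example, to generate the installation YAML without applying it (a sketch reusing the flags described above):
```bash
velero install \
    --provider aws \
    --bucket $BUCKET \
    --secret-file ./credentials-velero \
    --backup-location-config region=$REGION \
    --snapshot-location-config region=$REGION \
    --dry-run -o yaml > velero-install.yaml
```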
## Setting AWS_CLUSTER_NAME (Optional)
If you have multiple clusters and you want to support migration of resources between them, you can use `kubectl edit deploy/velero -n velero` to edit your deployment:
Add the environment variable `AWS_CLUSTER_NAME` to the Velero container's `env` section (under `spec.template.spec.containers`), with the current cluster's name as the value. When restoring a backup, this makes Velero (and the cluster it's running on) claim ownership of AWS volumes created from snapshots taken on a different cluster.
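The relevant part of the edited deployment might look like the following sketch (the cluster name is illustrative):
```yaml
spec:
  template:
    spec:
      containers:
        - name: velero
          # ...existing container fields...
          env:
            - name: AWS_CLUSTER_NAME
              value: my-cluster-name
```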
The best way to get the current cluster's name is either to check it with the deployment tool you used or to read it directly from the EC2 instance tags.
The following commands show how to get the cluster nodes' EC2 tags. First, get the nodes' external IDs (EC2 instance IDs):
```bash
kubectl get nodes -o jsonpath='{.items[*].spec.externalID}'
```
Copy one of the returned IDs `<ID>` and use it with the `aws` CLI tool to search for one of the following:
* The `kubernetes.io/cluster/<AWS_CLUSTER_NAME>` tag with the value `owned`. The `<AWS_CLUSTER_NAME>` is then your cluster's name:
```bash
aws ec2 describe-tags --filters "Name=resource-id,Values=<ID>" "Name=value,Values=owned"
```
* If the first command returns nothing, check for the legacy `KubernetesCluster` tag, whose value is `<AWS_CLUSTER_NAME>`:
```bash
aws ec2 describe-tags --filters "Name=resource-id,Values=<ID>" "Name=key,Values=KubernetesCluster"
```
## ALTERNATIVE: Setup permissions using kube2iam
[Kube2iam](https://github.com/jtblin/kube2iam) is a Kubernetes application that manages AWS IAM permissions for pods via annotations rather than by distributing API keys.
> This path assumes you have `kube2iam` already running in your Kubernetes cluster. If that is not the case, please install it first, following the docs here: [https://github.com/jtblin/kube2iam](https://github.com/jtblin/kube2iam)
To set it up for Velero, create a role with the required permissions, then add an annotation to the Velero deployment that tells it which role to use.
1. Create a trust policy document that allows the role to be used for EC2 management and to be assumed via the kube2iam role:
```
cat > velero-trust-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ROLE_CREATED_WHEN_INITIALIZING_KUBE2IAM>"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF
```
2. Create the IAM role:
```bash
aws iam create-role --role-name velero --assume-role-policy-document file://./velero-trust-policy.json
```
3. Attach policies to give `velero` the necessary permissions:
```
BUCKET=<YOUR_BUCKET>
cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}"
            ]
        }
    ]
}
EOF
```
```bash
aws iam put-role-policy \
--role-name velero \
--policy-name velero-policy \
--policy-document file://./velero-policy.json
```
4. Use the `--pod-annotations` argument on `velero install` to add the following annotation:
```bash
velero install \
--pod-annotations iam.amazonaws.com/role=arn:aws:iam::<AWS_ACCOUNT_ID>:role/<VELERO_ROLE_NAME> \
--provider aws \
--bucket $BUCKET \
--backup-location-config region=$REGION \
--snapshot-location-config region=$REGION \
--no-secret
```
[0]: namespace.md
[5]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
[6]: api-types/volumesnapshotlocation.md#aws
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[20]: faq.md
[21]: api-types/backupstoragelocation.md#aws
[22]: install-overview.md#velero-resource-requirements

View File

@ -0,0 +1,172 @@
# Run Velero on Azure
To configure Velero on Azure, you:
* Download an official release of Velero
* Create your Azure storage account and blob container
* Create Azure service principal for Velero
* Install the server
If you do not have the `az` Azure CLI 2.0 installed locally, follow the [install guide][18] to set it up.
Run:
```bash
az login
```
## Kubernetes cluster prerequisites
Ensure that the VMs for your agent pool allow Managed Disks. If I/O performance is critical,
consider using Premium Managed Disks, which are SSD backed.
## Download Velero
1. Download the [latest official release's](https://github.com/heptio/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create Azure storage account and blob container
Velero requires a storage account and blob container in which to store backups.
The storage account can be created in the same Resource Group as your Kubernetes cluster or
separated into its own Resource Group. The example below shows the storage account created in a
separate `Velero_Backups` Resource Group.
The storage account needs to be created with a globally unique ID, since this is used for DNS. In
the sample script below, we're generating a random name using `uuidgen`, but you can come up with
this name however you'd like, following the [Azure naming rules for storage accounts][19]. The
storage account is created with encryption-at-rest capabilities (Microsoft managed keys) and is
configured to only allow access via HTTPS.
Create a resource group for the backups storage account. Change the location as needed.
```bash
AZURE_BACKUP_RESOURCE_GROUP=Velero_Backups
az group create -n $AZURE_BACKUP_RESOURCE_GROUP --location WestUS
```
Create the storage account.
```bash
AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
az storage account create \
--name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_BACKUP_RESOURCE_GROUP \
--sku Standard_GRS \
--encryption-services blob \
--https-only true \
--kind BlobStorage \
--access-tier Hot
```
Create the blob container named `velero`. Feel free to use a different name, preferably unique to a single Kubernetes cluster. See the [FAQ][20] for more details.
```bash
BLOB_CONTAINER=velero
az storage container create -n $BLOB_CONTAINER --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID
```
## Get resource group for persistent volume snapshots
1. Set the name of the Resource Group that contains your Kubernetes cluster's virtual machines/disks.
**WARNING**: If you're using [AKS][22], `AZURE_RESOURCE_GROUP` must be set to the name of the auto-generated resource group that is created
when you provision your cluster in Azure, since this is the resource group that contains your cluster's virtual machines/disks.
```bash
AZURE_RESOURCE_GROUP=<NAME_OF_RESOURCE_GROUP>
```
If you are unsure of the Resource Group name, run the following command to get a list that you can select from. Then set the `AZURE_RESOURCE_GROUP` environment variable to the appropriate value.
```bash
az group list --query '[].{ ResourceGroup: name, Location:location }'
```
Get your cluster's Resource Group name from the `ResourceGroup` value in the response, and use it to set `$AZURE_RESOURCE_GROUP`.
## Create service principal
To integrate Velero with Azure, you must create a Velero-specific [service principal][17].
1. Obtain your Azure Account Subscription ID and Tenant ID:
```bash
AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`
```
1. Create a service principal with `Contributor` role. This will have subscription-wide access, so protect this credential.
If you'll be using Velero to back up multiple clusters with multiple blob containers, it may be desirable to create a unique username per cluster rather than the default `velero`.
Create the service principal and let the CLI generate a password for you. Make sure to capture the password.
```bash
AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv`
```
After creating the service principal, obtain the client id.
```bash
AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
```
1. Now you need to create a file that contains all the environment variables you just set. The command looks like the following:
```
cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
EOF
```
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider azure \
--bucket $BLOB_CONTAINER \
--secret-file ./credentials-velero \
--backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
--snapshot-location-config apiTimeout=<YOUR_TIMEOUT>
```
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify [additional configurable parameters][21] for the `--backup-location-config` flag.
(Optional) Specify [additional configurable parameters][8] for the `--snapshot-location-config` flag.
(Optional) Specify [CPU and memory resource requests and limits][23] for the Velero/restic pods.
For more complex installation needs, use either the Helm chart, or add the `--dry-run -o yaml` options to generate the YAML representation of the installation.
[0]: namespace.md
[8]: api-types/volumesnapshotlocation.md#azure
[17]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects
[18]: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
[19]: https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions#storage
[20]: faq.md
[21]: api-types/backupstoragelocation.md#azure
[22]: https://azure.microsoft.com/en-us/services/kubernetes-service/
[23]: install-overview.md#velero-resource-requirements

View File

@ -0,0 +1,9 @@
# Backup Reference
## Exclude Specific Items from Backup
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```

View File

@ -0,0 +1,250 @@
# Build from source
* [Prerequisites][1]
* [Getting the source][2]
* [Build][3]
* [Test][12]
* [Run][7]
* [Vendoring dependencies][10]
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Getting the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/heptio/velero
```
Where `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/heptio/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
To build the `velero` binary on your local machine, compiled for your OS and architecture, run:
```bash
go build ./cmd/velero
```
or:
```bash
make local
```
The latter will place the compiled binary under `$PWD/_output/bin/$GOOS/$GOARCH`, and will splice version and git commit information in so that `velero version` displays proper output. `velero install` will also use the version information to determine which tagged image to deploy.
To build the `velero` binary targeting `linux/amd64` within a build container on your local machine, run:
```bash
make build
```
See the **Cross compiling** section below for details on building for alternate OS/architecture combinations.
To build a Velero container image, first set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:master` image, set `$REGISTRY` to `gcr.io/my-registry`. Optionally, set the `$VERSION` environment variable to change the image tag. Then, run:
```bash
make container
```
To push your image to a registry, run:
```bash
make push
```
### Update generated files
The following files are automatically generated from the source code:
* The clientset
* Listers
* Shared informers
* Documentation
* Protobuf/gRPC types
Run `make update` to regenerate files if you make the following changes:
* Add/edit/remove command line flags and/or their help text
* Add/edit/remove commands or subcommands
* Add new API types
Run [generate-proto.sh][13] to regenerate files if you make the following changes:
* Add/edit/remove protobuf message or service definitions. These changes require the [proto compiler][14] and compiler plugin `protoc-gen-go` version v1.0.0.
### Cross compiling
By default, `make build` builds a `velero` binary for `linux-amd64`.
To build for another platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for the Mac, run `make build-darwin-amd64`.
All binaries are placed in `_output/bin/<GOOS>/<GOARCH>` -- for example, `_output/bin/darwin/amd64/velero`.
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
* linux-arm64
* darwin-amd64
* windows-amd64
## Test
To run unit tests, use `make test`. You can also run `make verify` to ensure that all generated
files (clientset, listers, shared informers, docs) are up to date.
## Run
### Prerequisites
When running Velero, you will need to ensure that you set up all of the following:
* Appropriate RBAC permissions in the cluster
* Read access for all data from the source cluster and namespaces
* Write access to the target cluster and namespaces
* Cloud provider credentials
* Read/write access to volumes
* Read/write access to object storage for backup data
* A [BackupStorageLocation][20] object definition for the Velero server
* (Optional) A [VolumeSnapshotLocation][21] object definition for the Velero server, to take PV snapshots
### Create a cluster
To provision a cluster on AWS using Amazon's official CloudFormation templates, here are two options:
* EC2 [Quick Start for Kubernetes][17]
* eksctl - [a CLI for Amazon EKS][18]
### Option 1: Run your Velero server locally
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
image and redeploy it to the cluster with each change.
#### 1. Set environment variables
Set the appropriate environment variable for your cloud provider:
AWS: [AWS_SHARED_CREDENTIALS_FILE][15]
GCP: [GOOGLE_APPLICATION_CREDENTIALS][16]
Azure:
1. AZURE_CLIENT_ID
2. AZURE_CLIENT_SECRET
3. AZURE_SUBSCRIPTION_ID
4. AZURE_TENANT_ID
5. AZURE_STORAGE_ACCOUNT_ID
6. AZURE_STORAGE_KEY
7. AZURE_RESOURCE_GROUP
#### 2. Create required Velero resources in the cluster
You can use the `velero install` command to install Velero into your cluster, then remove the deployment from the cluster, leaving you
with all of the required in-cluster resources.
##### Example
This example assumes you are using an existing cluster in AWS.
Using the `velero` binary that you've built, run `velero install`:
```bash
# velero install requires a credentials file to exist, but we will
# not be using it since we're running the server locally, so just
# create an empty file to pass to the install command.
touch fake-credentials-file
velero install \
--provider aws \
--bucket $BUCKET \
--backup-location-config region=$REGION \
--snapshot-location-config region=$REGION \
--secret-file ./fake-credentials-file
# 'velero install' creates an in-cluster deployment of the
# velero server using an official velero image, but we want
# to run the velero server on our local machine using the
# binary we built, so delete the in-cluster deployment.
kubectl --namespace velero delete deployment velero
rm fake-credentials-file
```
#### 3. Start the Velero server locally
* Make sure the `velero` binary you built is in your `PATH`, or specify the full path.
* Start the server: `velero server [CLI flags]`. The following CLI flags may be useful to customize, but see `velero server --help` for full details:
* `--kubeconfig`: set the path to the kubeconfig file the Velero server uses to talk to the Kubernetes apiserver (default `$KUBECONFIG`)
* `--namespace`: the namespace where the Velero server should look for backups, schedules, and restores (default `velero`)
* `--log-level`: set the Velero server's log level (default `info`)
* `--plugin-dir`: set the directory where the Velero server looks for plugins (default `/plugins`)
* `--metrics-address`: set the bind address and port where Prometheus metrics are exposed (default `:8085`)
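For example, to run the locally built server against a specific kubeconfig with verbose logging (values are illustrative):
```bash
velero server --kubeconfig ~/.kube/config --namespace velero --log-level debug
```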
### Option 2: Run your Velero server in a deployment
1. Ensure you've built a `velero` container image and either loaded it onto your cluster's nodes, or pushed it to a registry (see [build][3]).
1. Install Velero into the cluster (the example below assumes you're using AWS):
```bash
velero install \
--provider aws \
--image $YOUR_CONTAINER_IMAGE \
--bucket $BUCKET \
--backup-location-config region=$REGION \
--snapshot-location-config region=$REGION \
--secret-file $YOUR_AWS_CREDENTIALS_FILE
```
## Vendoring dependencies
If you need to add or update the vendored dependencies, see [Vendoring dependencies][11].
[0]: ../README.md
[1]: #prerequisites
[2]: #getting-the-source
[3]: #build
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[6]: https://github.com/heptio/velero/tree/master/examples
[7]: #run
[8]: config-definition.md
[10]: #vendoring-dependencies
[11]: vendoring-dependencies.md
[12]: #test
[13]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/hack/generate-proto.sh
[14]: https://grpc.io/docs/quickstart/go.html#install-protocol-buffers-v3
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
[17]: https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/
[18]: https://eksctl.io/
[19]: ../examples/README.md
[20]: api-types/backupstoragelocation.md
[21]: api-types/volumesnapshotlocation.md
[22]: https://github.com/heptio/velero/releases

View File

@ -0,0 +1,71 @@
# Debugging Installation Issues
## General
### `invalid configuration: no configuration has been provided`
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the
following locations:
* the path specified by the `--kubeconfig` flag, if any
* the path specified by the `$KUBECONFIG` environment variable, if any
* `~/.kube/config`
### Backups or restores stuck in `New` phase
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
```
kubectl -n velero describe pods
kubectl -n velero logs deployment/velero
```
## AWS
### `NoCredentialProviders: no valid providers in chain`
#### Using credentials
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `credentials-velero` file is formatted properly and has the correct values:
```
[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>
```
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
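One way to spot-check the secret and its contents (a sketch, assuming the default namespace and secret name):
```bash
# Confirm the secret exists in the velero namespace
kubectl -n velero get secret cloud-credentials
# Decode the 'cloud' key and compare it to your credentials-velero file
kubectl -n velero get secret cloud-credentials -o jsonpath='{.data.cloud}' | base64 --decode
```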
#### Using kube2iam
This means that Velero can't read the content of the S3 bucket. Ensure the following:
* There is a Trust Policy document allowing the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
* The new Velero role has all the permissions listed in the documentation regarding S3.
## Azure
### `Failed to refresh the Token` or `adal: Refresh request failed`
This means that the secret containing the Azure service principal credentials for Velero has not been created/mounted
properly into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
## GCE/GKE
### `open credentials/cloud: no such file or directory`
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
[0]: azure-config#credentials-and-configuration

View File

@ -0,0 +1,106 @@
# Debugging Restores
* [Example][0]
* [Structure][1]
## Example
When Velero finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors is indicated in the output columns from `velero restore get`:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none>
backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none>
backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none>
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
```
To examine the warnings and errors in more detail, you can use `velero restore describe`:
```bash
velero restore describe backup-test-20170726180512
```
The output looks like this:
```
Name: backup-test-20170726180512
Namespace: velero
Labels: <none>
Annotations: <none>
Backup: backup-test
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: serviceaccounts
Excluded: nodes, events, events.events.k8s.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Phase: Completed
Validation errors: <none>
Warnings:
Velero: <none>
Cluster: <none>
Namespaces:
velero: serviceaccounts "velero" already exists
serviceaccounts "default" already exists
kube-public: serviceaccounts "default" already exists
kube-system: serviceaccounts "attachdetach-controller" already exists
serviceaccounts "certificate-controller" already exists
serviceaccounts "cronjob-controller" already exists
serviceaccounts "daemon-set-controller" already exists
serviceaccounts "default" already exists
serviceaccounts "deployment-controller" already exists
serviceaccounts "disruption-controller" already exists
serviceaccounts "endpoint-controller" already exists
serviceaccounts "generic-garbage-collector" already exists
serviceaccounts "horizontal-pod-autoscaler" already exists
serviceaccounts "job-controller" already exists
serviceaccounts "kube-dns" already exists
serviceaccounts "namespace-controller" already exists
serviceaccounts "node-controller" already exists
serviceaccounts "persistent-volume-binder" already exists
serviceaccounts "pod-garbage-collector" already exists
serviceaccounts "replicaset-controller" already exists
serviceaccounts "replication-controller" already exists
serviceaccounts "resourcequota-controller" already exists
serviceaccounts "service-account-controller" already exists
serviceaccounts "service-controller" already exists
serviceaccounts "statefulset-controller" already exists
serviceaccounts "ttl-controller" already exists
default: serviceaccounts "default" already exists
Errors:
Velero: <none>
Cluster: <none>
Namespaces: <none>
```
## Structure
Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues (e.g. the
restore looks "normal" and all resources referenced in the backup exist in some form, although some
of them may have been pre-existing).
Both errors and warnings are structured in the same way:
* `Velero`: A list of system-related issues encountered by the Velero server (e.g. couldn't read directory).
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.
* `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.
[0]: #example
[1]: #structure

View File

@ -0,0 +1,39 @@
# Disaster recovery
*Using Schedules and Read-Only Backup Storage Locations*
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Velero looks like the following:
1. After you first run the Velero server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
```
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
```
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`.
1. A disaster happens and you need to recreate your resources.
1. Update your backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
```bash
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
1. Create a restore with your most recent Velero Backup:
```
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
```
1. When ready, revert your backup storage location to read-write mode:
```bash
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
```

View File

@ -0,0 +1,9 @@
# Extend Velero
Velero includes mechanisms for extending the core functionality to meet your individual backup/restore needs:
* [Hooks][27] allow you to specify commands to be executed within running pods during a backup. This is useful if you need to run a workload-specific command prior to taking a backup (for example, to flush disk buffers or to freeze a database).
* [Plugins][28] allow you to develop custom object/block storage back-ends or per-item backup/restore actions that can execute arbitrary logic, including modifying the items being backed up/restored. Plugins can be used by Velero without needing to be compiled into the core Velero binary.
[27]: hooks.md
[28]: plugins.md

View File

@ -0,0 +1,44 @@
# FAQ
## When is it appropriate to use Velero instead of etcd's built in backup/restore?
Etcd's backup/restore tooling is good for recovering from data loss in a single etcd cluster. For
example, it is a good idea to take a backup of etcd prior to upgrading etcd itself. For more
sophisticated management of your Kubernetes cluster backups and restores, we feel that Velero is
generally a better approach. It gives you the ability to throw away an unstable cluster and restore
your Kubernetes resources and data into a new cluster, which you can't do easily just by backing up
and restoring etcd.
Examples of cases where Velero is useful:
* you don't have access to etcd (e.g. you're running on GKE)
* backing up both Kubernetes resources and persistent volume state
* cluster migrations
* backing up a subset of your Kubernetes resources
* backing up Kubernetes resources that are stored across multiple etcd clusters (for example if you
run a custom apiserver)
## Will Velero restore my Kubernetes resources exactly the way they were before?
Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the
pod so that it can be scheduled onto a new node. You can see some more examples of the differences
in [pod_action.go](https://github.com/heptio/velero/blob/v1.1.0-beta.1/pkg/restore/pod_action.go)
## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?
We **strongly** recommend that each Velero instance use a distinct bucket/prefix combination to store backups.
Having multiple Velero instances write backups to the same bucket/prefix combination can lead to numerous
problems - failed backups, overwritten backups, inadvertently deleted backups, etc., all of which can be
avoided by using a separate bucket + prefix per Velero instance.
It's fine to have multiple Velero instances back up to the same bucket if each instance uses its own
prefix within the bucket. This can be configured in your `BackupStorageLocation`, by setting the
`spec.objectStorage.prefix` field. It's also fine to use a distinct bucket for each Velero instance,
and not to use prefixes at all.
Related to this, if you need to restore a backup that was created in cluster A into cluster B, you may
configure cluster B with a backup storage location that points to cluster A's bucket/prefix. If you do
this, you should configure the storage location pointing to cluster A's bucket/prefix in `ReadOnly` mode
via the `--access-mode=ReadOnly` flag on the `velero backup-location create` command. This will ensure no
new backups are created from Cluster B in Cluster A's bucket/prefix, and no existing backups are deleted
or overwritten.
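As a sketch of what that looks like, assuming cluster A's backups live under a prefix in an AWS S3 bucket (the bucket, prefix, and region values below are placeholders):
```bash
velero backup-location create cluster-a-backups \
    --provider aws \
    --bucket <CLUSTER_A_BUCKET> \
    --prefix <CLUSTER_A_PREFIX> \
    --access-mode=ReadOnly \
    --config region=<CLUSTER_A_REGION>
```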

View File

@ -0,0 +1,146 @@
# Run Velero on GCP
You can run Kubernetes on Google Cloud Platform in either:
* Kubernetes on Google Compute Engine virtual machines
* Google Kubernetes Engine
If you do not have the `gcloud` and `gsutil` CLIs locally installed, follow the [user guide][16] to set them up.
## Download Velero
1. Download the [latest official release's](https://github.com/heptio/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create GCS bucket
Velero requires an object storage bucket in which to store backups, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create a GCS bucket, replacing the `<YOUR_BUCKET>` placeholder with the name of your bucket:
```bash
BUCKET=<YOUR_BUCKET>
gsutil mb gs://$BUCKET/
```
## Create service account
To integrate Velero with GCP, create a Velero-specific [Service Account][15]:
1. View your current config settings:
```bash
gcloud config list
```
Store the `project` value from the results in the environment variable `$PROJECT_ID`.
```bash
PROJECT_ID=$(gcloud config get-value project)
```
2. Create a service account:
```bash
gcloud iam service-accounts create velero \
--display-name "Velero service account"
```
> If you'll be using Velero to back up multiple clusters with multiple GCS buckets, it may be desirable to create a unique username per cluster rather than the default `velero`.
Then list all accounts and find the `velero` account you just created:
```bash
gcloud iam service-accounts list
```
Set the `$SERVICE_ACCOUNT_EMAIL` variable to match its `email` value.
```bash
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:Velero service account" \
--format 'value(email)')
```
3. Attach policies to give `velero` the necessary permissions to function:
```bash
ROLE_PERMISSIONS=(
compute.disks.get
compute.disks.create
compute.disks.createSnapshot
compute.snapshots.get
compute.snapshots.create
compute.snapshots.useReadOnly
compute.snapshots.delete
compute.zones.get
)
gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
```
4. Create a service account key, specifying an output file (`credentials-velero`) in your local directory:
```bash
gcloud iam service-accounts keys create credentials-velero \
--iam-account $SERVICE_ACCOUNT_EMAIL
```
## Credentials and configuration
If you run Google Kubernetes Engine (GKE), make sure that your current IAM user is a cluster-admin. This role is required to create RBAC objects.
See [the GKE documentation][22] for more information.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider gcp \
--bucket $BUCKET \
--secret-file ./credentials-velero
```
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify `--snapshot-location-config snapshotLocation=<YOUR_LOCATION>` to keep snapshots in a specific availability zone. See the [VolumeSnapshotLocation definition][8] for details.
(Optional) Specify [additional configurable parameters][7] for the `--backup-location-config` flag.
(Optional) Specify [additional configurable parameters][8] for the `--snapshot-location-config` flag.
(Optional) Specify [CPU and memory resource requests and limits][23] for the Velero/restic pods.
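Putting the optional flags together, a fuller install command might look like the following sketch (keep only the flags you need; the zone shown is just an example):
```bash
velero install \
    --provider gcp \
    --bucket $BUCKET \
    --secret-file ./credentials-velero \
    --use-restic \
    --snapshot-location-config snapshotLocation=us-central1-a \
    --velero-pod-cpu-request 500m \
    --velero-pod-mem-request 128Mi \
    --wait
```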
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
[0]: namespace.md
[7]: api-types/backupstoragelocation.md#gcp
[8]: api-types/volumesnapshotlocation.md#gcp
[15]: https://cloud.google.com/compute/docs/access/service-accounts
[16]: https://cloud.google.com/sdk/docs/
[20]: faq.md
[22]: https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#iam-rolebinding-bootstrap
[23]: install-overview.md#velero-resource-requirements

View File

@ -0,0 +1,270 @@
## Getting started
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the docs on how to [expose Minio outside your cluster][31].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
### Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
* A DNS server on the cluster
* `kubectl` installed
### Download Velero
1. Download the [latest official release's](https://github.com/heptio/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
#### MacOS Installation
On Mac, you can use [HomeBrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Set up server
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][31] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
1. Start the server and the local storage service. In the Velero directory, run:
```
kubectl apply -f examples/minio/00-minio-deployment.yaml
```
```
velero install \
--provider aws \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`).
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
1. Deploy the example nginx application:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
### Back up
1. Create a backup for any object that matches the `app=nginx` label selector:
```
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
1. Simulate a disaster:
```
kubectl delete namespace nginx-example
```
1. To check that the nginx deployment and service are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
You should get no results.
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
### Restore
1. Run:
```
velero restore create --from-backup nginx-backup
```
1. Run:
```
velero restore get
```
After the restore finishes, the output looks like the following:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, you can look at them in detail:
```
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
### Clean up
If you want to delete any backups you created, including data in object storage and persistent
volume snapshots, you can run:
```
velero backup delete BACKUP_NAME
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run:
```
velero backup get BACKUP_NAME
```
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
kubectl delete -f examples/nginx-app/base.yaml
```
## Expose Minio outside your cluster with a Service
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
For basic instructions on how to install the Velero server and client, see [the getting started example][1].
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
1. Get the Minio URL:
- if you're running Minikube:
```shell
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
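If you prefer not to edit the YAML by hand, the same change can be applied with `kubectl patch` (a sketch; replace the placeholder with the node address and NodePort collected above, and adjust the location name if yours is not `default`):
```shell
kubectl patch backupstoragelocation default \
    --namespace velero \
    --type merge \
    --patch '{"spec":{"config":{"publicUrl":"http://<NODE_ADDRESS>:<NODE_PORT>"}}}'
```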
### Expose Minio outside your cluster with Kubernetes in Docker (KinD)
Kubernetes in Docker currently does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
In a terminal, run the following:
```shell
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $MINIO_POD -n velero 9000:9000
```
Then, in another terminal:
```shell
kubectl edit backupstoragelocation default -n velero
```
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
### Work with Ingress
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: get-started.md
[3]: install-overview.md
[17]: restic.md
[18]: debugging-restores.md
[26]: https://github.com/heptio/velero/releases
[30]: https://godoc.org/github.com/robfig/cron
[31]: #expose-minio-outside-your-cluster

View File

@ -0,0 +1,81 @@
# Hooks
Velero currently supports executing commands in containers in pods during a backup.
## Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up. The commands can be configured to run *before* any custom action
processing ("pre" hooks), or after all custom actions have been completed and any additional items
specified by custom action have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
on the containers.
There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
### Specifying Hooks As Pod Annotations
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
#### Pre hooks
* `pre.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `pre.hook.backup.velero.io/command`
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
* `pre.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
* `pre.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
#### Post hooks
* `post.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `post.hook.backup.velero.io/command`
* The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`
* `post.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional.
* `post.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
### Specifying Hooks in the Backup Spec
Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
spec.
## Hook Example with fsfreeze
We are going to walk through using both pre and post hooks for freezing a file system. Freezing the
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
We will be using [examples/nginx-app/with-pv.yaml][2] for this example. Follow the [steps for your provider][3] to
set up this example.
### Annotations
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.
```shell
kubectl annotate pod -n nginx-example -l app=nginx \
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.velero.io/container=fsfreeze \
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.velero.io/container=fsfreeze
```
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.
```shell
velero backup create nginx-hook-test
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```
[1]: api-types/backup.md
[2]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/examples/nginx-app/with-pv.yaml
[3]: cloud-common.md

View File

@ -0,0 +1,97 @@
# Use IBM Cloud Object Storage as Velero's storage destination.
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or even on any other Kubernetes cluster, and still use IBM Cloud Object Storage as a destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Velero server
## Download Velero
1. Download the [latest official release's](https://github.com/heptio/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create COS instance
If you don't have a COS instance, you can create a new one by following the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket.
The process of creating service credentials is described in [Service credentials][3].
Several comments:
1. The Velero service will write its backup data into the bucket, so it requires the "Writer" access role.
2. Velero uses an AWS S3 compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
```
where the access key id and secret are the values that we got above.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
```
Velero does not currently have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
```bash
kubectl -n velero delete volumesnapshotlocation.velero.io default
```
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
## Installing the nginx example (optional)
If you run the nginx example, in the file `examples/nginx-app/with-pv.yaml`, uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace the placeholder with your `StorageClass` name.
[0]: namespace.md
[1]: https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#creating-a-new-resource-instance
[2]: https://console.bluemix.net/docs/services/cloud-object-storage/getting-started.html#create-buckets
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
[6]: api-types/backupstoragelocation.md#aws
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: install-overview.md#velero-resource-requirements

View File

@ -0,0 +1,21 @@
# Image tagging policy
This document describes Velero's image tagging policy.
## Released versions
`gcr.io/heptio-images/velero:<SemVer>`
Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/heptio/velero` repository has a matching image, e.g. `gcr.io/heptio-images/velero:v1.0.0`.
### Latest
`gcr.io/heptio-images/velero:latest`
The `latest` tag follows the most recently released version of Velero.
## Development
`gcr.io/heptio-images/velero:master`
The `master` tag follows the latest commit to land on the `master` branch.
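For example, pulling a pinned release image versus the tip of development looks like this (choose the tag that matches your needs):
```bash
# A specific released version (recommended for production use)
docker pull gcr.io/heptio-images/velero:v1.0.0

# The latest commit on the master branch (for development and testing only)
docker pull gcr.io/heptio-images/velero:master
```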

View File

@ -0,0 +1 @@
Some of these diagrams (for instance backup-process.png), have been created on [draw.io](https://www.draw.io), using the "Include a copy of my diagram" option. If you want to make changes to these diagrams, try importing them into draw.io, and you should have access to the original shapes/text that went into the originals.

Binary files not shown (two images added: 33 KiB and 44 KiB).
View File

@ -0,0 +1,180 @@
# Set up Velero on your platform
You can run Velero with a cloud provider or on-premises. For detailed information about the platforms that Velero supports, see [Compatible Storage Providers][99].
You can run Velero in any namespace, which requires additional customization. See [Run in custom namespace][3].
You can also use Velero's integration with restic, which requires additional setup. See [restic instructions][20].
## Cloud provider
The Velero client includes an `install` command to specify the settings for each supported cloud provider. You can install Velero for the included cloud providers using the following command:
```bash
velero install \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
[--secret-file <PATH_TO_FILE>] \
[--no-secret] \
[--backup-location-config] \
[--snapshot-location-config] \
[--namespace] \
[--use-volume-snapshots] \
[--use-restic] \
[--pod-annotations] \
```
When using node-based IAM policies, `--secret-file` is not required, but `--no-secret` is required for confirmation.
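For example, on a cluster whose nodes already carry the required permissions through an instance profile, the install command might look like this sketch (assuming AWS; adjust the provider and region for your environment):
```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --no-secret \
    --backup-location-config region=<YOUR_REGION>
```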
For provider-specific instructions, see:
* [Run Velero on AWS][0]
* [Run Velero on GCP][1]
* [Run Velero on Azure][2]
* [Use IBM Cloud Object Store as Velero's storage destination][4]
When using restic on a storage provider that doesn't currently have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
To see the YAML applied by the `velero install` command, use the `--dry-run -o yaml` arguments.
For more complex installation needs, use either the generated YAML, or the Helm chart.
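For example, to capture the generated manifests for review or for use with your own tooling (a sketch; substitute your provider values):
```bash
velero install \
    --provider <YOUR_PROVIDER> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_FILE> \
    --dry-run -o yaml > velero-install.yaml
```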
## On-premises
You can run Velero in an on-premises cluster in different ways depending on your requirements.
First, you must select an object storage backend that Velero can use to store backup data. [Compatible Storage Providers][99] contains information on various
options that are supported or have been reported to work by users. [Minio][101] is an option if you want to keep your backup data on-premises and you are
not using another storage platform that offers an S3-compatible object storage API.
Second, if you need to back up persistent volume data, you must select a volume backup solution. [Volume Snapshot Providers][100] contains information on
the supported options. For example, if you use [Portworx][102] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part
of your Velero backups. If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][20], which provides a
platform-agnostic backup solution for volume data.
## Customize configuration
Whether you run Velero on a cloud provider or on-premises, if you have more than one volume snapshot location for a given volume provider, you can specify its default location for backups by setting a server flag in your Velero deployment YAML.
For details, see the documentation topics for individual cloud providers.
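As a rough sketch, assuming an AWS volume snapshot location named `ebs-us-east-1` and the `--default-volume-snapshot-locations` server flag (check your Velero version's server flags for the exact value format):
```bash
# Open the Velero deployment for editing:
kubectl edit deployment/velero -n velero

# Then add the flag to the velero server container's args, for example:
#
#   args:
#     - server
#     - --default-volume-snapshot-locations=aws:ebs-us-east-1
```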
## Velero resource requirements
By default, the Velero deployment requests 500m CPU and 128Mi memory, and sets a limit of 1000m CPU and 256Mi memory.
Default requests and limits are not set for the restic pods as CPU/Memory usage can depend heavily on the size of volumes being backed up.
If you need to customize these resource requests and limits, you can set the following flags in your `velero install` command:
```
velero install \
--provider <YOUR_PROVIDER> \
--bucket <YOUR_BUCKET> \
--secret-file <PATH_TO_FILE> \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-restic] \
[--restic-pod-cpu-request <CPU_REQUEST>] \
[--restic-pod-mem-request <MEMORY_REQUEST>] \
[--restic-pod-cpu-limit <CPU_LIMIT>] \
[--restic-pod-mem-limit <MEMORY_LIMIT>]
```
Values for these flags follow the same format as [Kubernetes resource requirements][103].
## Removing Velero
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`:
```bash
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
## Installing with the Helm chart
When installing with the Helm chart, the provider's credential information needs to be added to your values.
The easiest way to do this is with the `--set-file` argument, available in Helm 2.10 and higher.
```bash
helm install --set-file credentials.secretContents.cloud=./credentials-velero
```
See your cloud provider's documentation for the contents and creation of the `credentials-velero` file.
## Examples
After you set up the Velero server, try these examples:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Create a backup:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Wait for the namespace to be deleted.
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
### Snapshot example (with PersistentVolumes)
> NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks.
1. Start the sample nginx app:
```bash
kubectl apply -f examples/nginx-app/with-pv.yaml
```
1. Create a backup with PV snapshotting:
```bash
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
```bash
kubectl delete namespaces nginx-example
```
Because the default [reclaim policy][19] for dynamically-provisioned PVs is "Delete", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. **Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists.**
1. Restore your lost resources:
```bash
velero restore create --from-backup nginx-backup
```
[0]: aws-config.md
[1]: gcp-config.md
[2]: azure-config.md
[3]: namespace.md
[4]: ibm-config.md
[19]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
[20]: restic.md
[99]: support-matrix.md
[100]: support-matrix.md#volume-snapshot-providers
[101]: https://www.minio.io
[102]: https://portworx.com
[103]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu

View File

@ -0,0 +1,164 @@
# Backup Storage Locations and Volume Snapshot Locations
## Overview
Velero has two custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`, that are used to configure where Velero backups and their associated persistent volume snapshots are stored.
A `BackupStorageLocation` is defined as a bucket, a prefix within that bucket under which all Velero data should be stored, and a set of additional provider-specific fields (e.g. AWS region, Azure storage account). The [API documentation][1] captures the configurable parameters for each in-tree provider.
A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (e.g. AWS region, Azure resource group, Portworx snapshot type). The [API documentation][2] captures the configurable parameters for each in-tree provider.
The user can pre-configure one or more possible `BackupStorageLocations` and one or more `VolumeSnapshotLocations`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
This configuration design enables a number of different use cases, including:
- Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
- For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
## Limitations / Caveats
- Velero only supports a single set of credentials *per provider*. It's not yet possible to use different credentials for different locations, if they're for the same provider.
- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
- Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important.
- Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. EBS and Portworx), but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will **only** snapshot the EBS volumes.
- Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
## Examples
Let's look at some examples of how we can use this configuration mechanism to address some common use cases:
#### Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
During server configuration:
```shell
velero snapshot-location create ebs-us-east-1 \
--provider aws \
--config region=us-east-1
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
During backup creation:
```shell
velero backup create full-cluster-backup \
--volume-snapshot-locations ebs-us-east-1,portworx-cloud
```
Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Velero doesn't require them to be explicitly specified when creating the backup:
```shell
velero backup create full-cluster-backup
```
#### Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
During server configuration:
```shell
velero backup-location create default \
--provider aws \
--bucket velero-backups \
--config region=us-east-1
velero backup-location create s3-alt-region \
--provider aws \
--bucket velero-backups-alt \
--config region=us-west-1
```
During backup creation:
```shell
# The Velero server will automatically store backups in the backup storage location named "default" if
# one is not specified when creating the backup. You can alter which backup storage location is used
# by default by setting the --default-backup-storage-location flag on the `velero server` command (run
# by the Velero deployment) to the name of a different backup storage location.
velero backup create full-cluster-backup
```
Or:
```shell
velero backup create full-cluster-alternate-location-backup \
--storage-location s3-alt-region
```
#### For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
During server configuration:
```shell
velero snapshot-location create portworx-local \
--provider portworx \
--config type=local
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
During backup creation:
```shell
# Note that since in this example we have two possible volume snapshot locations for the Portworx
# provider, we need to explicitly specify which one to use when creating a backup. Alternately,
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
# the Velero deployment) to specify which location should be used for each provider by default, in
# which case you don't need to specify it when creating a backup.
velero backup create local-snapshot-backup \
--volume-snapshot-locations portworx-local
```
Or:
```shell
velero backup create cloud-snapshot-backup \
--volume-snapshot-locations portworx-cloud
```
#### Use a single location
If you don't have a use case for more than one location, it's still easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
During server configuration:
```shell
velero backup-location create default \
--provider aws \
--bucket velero-backups \
--config region=us-west-1
velero snapshot-location create ebs-us-west-1 \
--provider aws \
--config region=us-west-1
```
During backup creation:
```shell
# Velero will automatically use your configured backup storage location and volume snapshot location.
# Nothing needs to be specified when creating a backup.
velero backup create full-cluster-backup
```
## Additional Use Cases
1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups. This is possible using a `BackupStorageLocation`, by specifying a `storageAccount` and/or `resourceGroup`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
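For the first case, creating such a snapshot location might look like the following sketch (`<YOUR_RESOURCE_GROUP>` is a placeholder; see the linked Azure documentation for the full set of config keys):
```shell
velero snapshot-location create azure-custom-rg \
    --provider azure \
    --config resourceGroup=<YOUR_RESOURCE_GROUP>
```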
[1]: api-types/backupstoragelocation.md
[2]: api-types/volumesnapshotlocation.md
[3]: api-types/volumesnapshotlocation.md#azure
[4]: api-types/backupstoragelocation.md#azure

View File

@ -0,0 +1,82 @@
# Migrating from Heptio Ark to Velero
As of v0.11.0, Heptio Ark has become Velero. This means the following changes have been made:
* The `ark` CLI client is now `velero`.
* The default Kubernetes namespace and ServiceAccount are now named `velero` (formerly `heptio-ark`).
* The container image name is now `gcr.io/heptio-images/velero` (formerly `gcr.io/heptio-images/ark`).
* CRDs are now under the new `velero.io` API group name (formerly `ark.heptio.com`).
The following instructions will help you migrate your existing Ark installation to Velero.
# Prerequisites
* Ark v0.10.x installed. See the v0.10.x [upgrade instructions][1] to upgrade from older versions.
* `kubectl` installed.
* `cluster-admin` permissions.
# Migration process
At a high level, the migration process involves the following steps:
* Scale down the `ark` deployment, so it will not process schedules, backups, or restores during the migration period.
* Create a new namespace (named `velero` by default).
* Apply the new CRDs.
* Migrate existing Ark CRD objects, labels, and annotations to the new Velero equivalents.
* Recreate the existing cloud credentials secret(s) in the velero namespace.
* Apply the updated Kubernetes deployment and daemonset (for restic support) to use the new container images and namespace.
* Remove the existing Ark namespace (which includes the deployment), CRDs, and ClusterRoleBinding.
These steps are provided in a script here:
```bash
kubectl scale --namespace heptio-ark deployment/ark --replicas 0
OS=$(uname | tr '[:upper:]' '[:lower:]') # Determine if the OS is Linux or macOS
ARCH="amd64"
# Download the velero client/example tarball and unpack it
curl -L https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-${OS}-${ARCH}.tar.gz --output velero-v0.11.0-${OS}-${ARCH}.tar.gz
tar xvf velero-v0.11.0-${OS}-${ARCH}.tar.gz
# Create the prerequisite CRDs and namespace
kubectl apply -f config/common/00-prereqs.yaml
# Download and unpack the crd-migrator tool
curl -L https://github.com/vmware/crd-migration-tool/releases/download/v1.0.0/crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz --output crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz
tar xvf crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz
# Run the tool against your cluster.
./crd-migrator \
--from ark.heptio.com/v1 \
--to velero.io/v1 \
--label-mappings ark.heptio.com:velero.io,ark-schedule:velero.io/schedule-name \
--annotation-mappings ark.heptio.com:velero.io \
--namespace-mappings heptio-ark:velero
# Copy the necessary secret from the ark namespace
kubectl get secret --namespace heptio-ark cloud-credentials --export -o yaml | kubectl apply --namespace velero -f -
# Apply the Velero deployment and restic DaemonSet for your platform
## GCP
#kubectl apply -f config/gcp/10-deployment.yaml
#kubectl apply -f config/gcp/20-restic-daemonset.yaml
## AWS
#kubectl apply -f config/aws/10-deployment.yaml
#kubectl apply -f config/aws/20-restic-daemonset.yaml
## Azure
#kubectl apply -f config/azure/00-deployment.yaml
#kubectl apply -f config/azure/20-restic-daemonset.yaml
# Verify your data is still present
./velero get backup
./velero get restore
# Remove old Ark data
kubectl delete namespace heptio-ark
kubectl delete crds -l component=ark
kubectl delete clusterrolebindings -l component=ark
```
[1]: https://velero.io/docs/v0.10.0/upgrading-to-v0.10

View File

@ -0,0 +1,48 @@
# Cluster migration
*Using Backups and Restores*
Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. **Note that Velero does not support the migration of persistent volumes across cloud providers.**
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
```
velero backup create <BACKUP-NAME>
```
The default TTL is 30 days (720 hours); you can use the `--ttl` flag to change this as necessary.
1. *(Cluster 2)* Configure `BackupStorageLocations` and `VolumeSnapshotLocations`, pointing to the locations used by *Cluster 1*, using `velero backup-location create` and `velero snapshot-location create`. Make sure to configure the `BackupStorageLocations` as read-only
by using the `--access-mode=ReadOnly` flag for `velero backup-location create`.
1. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.
```
velero backup describe <BACKUP-NAME>
```
**Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
```
velero restore create --from-backup <BACKUP-NAME>
```
## Verify both clusters
Check that the second cluster is behaving as expected:
1. *(Cluster 2)* Run:
```
velero restore get
```
1. Then run:
```
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
```
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.

View File

@ -0,0 +1,25 @@
# Run in custom namespace
You can run Velero in any namespace.
First, ensure you've [downloaded & extracted the latest release][0].
Then, install Velero using the `--namespace` flag:
```bash
velero install --bucket <YOUR_BUCKET> --provider <YOUR_PROVIDER> --namespace <YOUR_NAMESPACE>
```
## Specify the namespace in client commands
To specify the namespace for all Velero client commands, run:
```bash
velero client config set namespace=<NAMESPACE_VALUE>
```
[0]: get-started.md#download

View File

@ -0,0 +1,245 @@
# Use Oracle Cloud as a Backup Storage Provider for Velero
## Introduction
[Velero](https://velero.io/) is a tool used to back up and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
1. [Download Velero](#download-velero)
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
4. [Install Velero](#install-velero)
5. [Clean Up](#clean-up)
6. [Examples](#examples)
7. [Additional Reading](#additional-reading)
## Download Velero
1. Download the [latest release](https://github.com/heptio/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
```
wget https://github.com/heptio/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
```
*We strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the master branch of the Velero repository is under active development and is not guaranteed to be stable!*
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory to `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
```
$ velero
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
```
## Create A Customer Secret Key
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
2. Create a Velero credentials file with your Customer Secret Key:
```
$ vi credentials-velero
[default]
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
```
## Create An Oracle Object Storage Bucket
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
## Install Velero
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
```
velero install \
--provider [provider name] \
--bucket [bucket name] \
--prefix [tenancy name] \
--use-volume-snapshots=false \
--secret-file [secret file location] \
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
```
- `--provider` Because we are using the S3-compatible API, we will use `aws` as our provider.
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
- `--use-volume-snapshots=false` Velero does not currently have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
- `--secret-file` The path to your `credentials-velero` file.
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
For example:
```
velero install \
--provider aws \
--bucket velero \
--prefix oracle-cloudnative \
--use-volume-snapshots=false \
--secret-file /Users/mboxell/bin/velero/credentials-velero \
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
## Clean Up
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment, and delete the CRDs by running:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
This will remove all resources created by `velero install`.
## Examples
After creating the Velero server in your cluster, try this example:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
This will create an `nginx-example` namespace with an `nginx-deployment` deployment and a `my-nginx` service.
```
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
You can see the created resources by running `kubectl get all`
```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2 2 2 2 21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
```
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
```
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
```
At this point you can navigate to the appropriate bucket, which we called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
```
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
```
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
This should return: `No resources found.`
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
```
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20190604102710" submitted successfully.
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
```
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
```
$ velero restore get
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `InProgress`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`.
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
```
$ kubectl delete -f examples/nginx-app/base.yaml
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted
```
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
```
$ velero backup delete nginx-backup
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
```
$ velero backup get nginx-backup
An error occurred: backups.velero.io "nginx-backup" not found
```
```
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.1.0-beta.1/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)

View File

@ -0,0 +1,99 @@
# Output file format
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `velero backup create <NAME>`).
In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Velero server configuration. This subdirectory includes an additional file called `velero-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format.
The directory structure in your cloud storage looks something like:
```
rootBucket/
backup1234/
velero-backup.json
backup1234.tar.gz
```
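If you'd like to inspect a backup's contents locally instead of browsing the bucket, one approach is to download the archive with the Velero CLI and list the tarball. This is a sketch: the backup name is hypothetical and the downloaded filename may differ in your environment.
```bash
# Download the backup contents tarball from object storage, then list its entries.
velero backup download backup1234
tar -tzf backup1234-data.tar.gz | head -n 20
```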
## Example backup JSON file
```json
{
"kind": "Backup",
"apiVersion": "velero.io/v1",
"metadata": {
"name": "test-backup",
"namespace": "velero",
"selfLink": "/apis/velero.io/v1/namespaces/velero/backups/testtest",
"uid": "a12345cb-75f5-11e7-b4c2-abcdef123456",
"resourceVersion": "337075",
"creationTimestamp": "2017-07-31T13:39:15Z"
},
"spec": {
"includedNamespaces": [
"*"
],
"excludedNamespaces": null,
"includedResources": [
"*"
],
"excludedResources": null,
"labelSelector": null,
"snapshotVolumes": true,
"ttl": "24h0m0s"
},
"status": {
"version": 1,
"expiration": "2017-08-01T13:39:15Z",
"phase": "Completed",
"volumeBackups": {
"pvc-e1e2d345-7583-11e7-b4c2-abcdef123456": {
"snapshotID": "snap-04b1a8e11dfb33ab0",
"type": "gp2",
"iops": 100
}
},
"validationErrors": null
}
}
```
Note that this file includes detailed info about your volume snapshots in the `status.volumeBackups` field, which can be helpful if you want to manually check them in your cloud provider GUI.
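For example, assuming you have copied `velero-backup.json` out of your bucket and have `jq` installed, you can pull out just the snapshot metadata:
```bash
# Print the volume snapshot info recorded for this backup.
jq '.status.volumeBackups' velero-backup.json
```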
## file format version: 1
When extracted, a typical backup archive (e.g. `backup1234.tar.gz`) looks like the following:
```
resources/
persistentvolumes/
cluster/
pv01.json
...
configmaps/
namespaces/
namespace1/
myconfigmap.json
...
namespace2/
...
pods/
namespaces/
namespace1/
mypod.json
...
namespace2/
...
jobs/
namespaces/
namespace1/
awesome-job.json
...
namespace2/
...
deployments/
namespaces/
namespace1/
cool-deployment.json
...
namespace2/
...
...
```
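As a quick illustration of working with this layout (a sketch; the paths mirror the example tree above), you could extract the archive and read back a single captured resource:
```bash
# Extract the backup archive and pretty-print one captured resource definition.
mkdir backup1234 && tar -xzf backup1234.tar.gz -C backup1234
jq '.metadata.name' backup1234/resources/deployments/namespaces/namespace1/cool-deployment.json
```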

View File

@@ -0,0 +1,82 @@
# Plugins
Velero has a plugin architecture that lets users add custom functionality to Velero backups and restores without modifying or recompiling the core Velero binary. To add custom functionality, you create a binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code that exposes those implementations to Velero. The binary is packaged into a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
Multiple plugins, of any type, can be implemented in this binary.
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
## Plugin Naming
When naming your plugin, keep in mind that the name needs to conform to these rules:
- have two parts separated by '/'
- neither part can be empty
- the prefix must be a valid DNS subdomain name
- a plugin with the same name cannot already exist
### Some examples:
```
- example.io/azure
- 1.2.3.4/5678
- example-with-dash.io/azure
```
You will need to give your plugin(s) a name when registering them by calling the appropriate `RegisterX` function: <https://github.com/heptio/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
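As a rough sketch of what registration looks like (loosely based on the framework linked above -- the plugin name, the `newMyRestorePlugin` constructor, and the omitted interface methods are hypothetical, and the exact package path or method set may differ in your Velero version):
```go
package main

import (
	"github.com/heptio/velero/pkg/plugin/framework"
	"github.com/sirupsen/logrus"
)

// MyRestorePlugin is a placeholder implementation. A real plugin must also
// implement the RestoreItemAction interface (AppliesTo and Execute); those
// methods are omitted from this sketch.
type MyRestorePlugin struct {
	log logrus.FieldLogger
}

func newMyRestorePlugin(logger logrus.FieldLogger) (interface{}, error) {
	return &MyRestorePlugin{log: logger}, nil
}

func main() {
	// Register one or more plugins under a "prefix/name" identifier, then serve.
	framework.NewServer().
		RegisterRestoreItemAction("example.io/my-restore-plugin", newMyRestorePlugin).
		Serve()
}
```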
## Plugin Kinds
Velero currently supports the following kinds of plugins:
- **Object Store** - persists and retrieves backups, backup logs and restore logs
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
## Plugin Logging
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
## Plugin Configuration
Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime,
define a ConfigMap like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: my-plugin-config
# must be in the namespace where the velero deployment
# is running
namespace: velero
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in change storageclass
# restore item action plugin)
velero.io/plugin-config: ""
# add a label whose key corresponds to the fully-qualified
# plugin name (e.g. mydomain.io/my-plugin-name), and whose
# value is the plugin type (BackupItemAction, RestoreItemAction,
# ObjectStore, or VolumeSnapshotter)
<fully-qualified-plugin-name>: <plugin-type>
data:
# add your configuration data here as key-value pairs
```
Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration. See the [restic restore action][3]
for an example of this -- in particular, the `getPluginConfig(...)` function.
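To double-check that Velero can discover your ConfigMap, you can list the ConfigMaps carrying the plugin-config label (a sketch assuming Velero runs in the `velero` namespace):
```bash
# List plugin configuration ConfigMaps and show their labels.
kubectl -n velero get configmaps -l velero.io/plugin-config --show-labels
```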
[1]: https://github.com/heptio/velero-plugin-example
[2]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/pkg/plugin/logger.go
[3]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/pkg/restore/restic_restore_action.go

View File

@@ -0,0 +1,47 @@
# Run Velero more securely with restrictive RBAC settings
By default Velero runs with an RBAC policy of ClusterRole `cluster-admin`. This is to make sure that Velero can back up or restore anything in your cluster. But `cluster-admin` access is wide open -- it gives Velero components access to everything in your cluster. Depending on your environment and your security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
**Note:** Roles and RoleBindings are associated with a single namespace, not with an entire cluster. PersistentVolume backups are associated only with an entire cluster. This means that any backups or restores that use a restrictive Role and RoleBinding pair can manage only the resources that belong to the namespace. You do not need a wide open RBAC policy to manage PersistentVolumes, however. You can configure a ClusterRole and ClusterRoleBinding that allow backups and restores only of PersistentVolumes, not of all objects in the cluster.
For more information about RBAC and access control generally in Kubernetes, see the Kubernetes documentation about [access control][1], [managing service accounts][2], and [RBAC authorization][3].
## Set up Roles and RoleBindings
Here's a sample Role and RoleBinding pair.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: YOUR_NAMESPACE_HERE
name: ROLE_NAME_HERE
labels:
component: velero
rules:
- apiGroups:
- velero.io
verbs:
- "*"
resources:
- "*"
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ROLEBINDING_NAME_HERE
subjects:
- kind: ServiceAccount
name: YOUR_SERVICEACCOUNT_HERE
roleRef:
kind: Role
name: ROLE_NAME_HERE
apiGroup: rbac.authorization.k8s.io
```
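A minimal sequence for putting these to use might look like the following sketch; the namespace, service account, and file names are placeholders you would substitute with your own values.
```bash
# Create the namespace and service account that the Role/RoleBinding refer to,
# then apply the manifests above (saved locally as role.yaml and rolebinding.yaml).
kubectl create namespace YOUR_NAMESPACE_HERE
kubectl create serviceaccount YOUR_SERVICEACCOUNT_HERE -n YOUR_NAMESPACE_HERE
kubectl apply -n YOUR_NAMESPACE_HERE -f role.yaml -f rolebinding.yaml
```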
[1]: https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
[2]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
[3]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[4]: namespace.md

View File

@@ -0,0 +1,379 @@
# Restic Integration
Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic][1]. This support is considered beta quality. Please see the list of [limitations](#limitations) to understand if it currently fits your use case.
Velero has always allowed you to take snapshots of persistent volumes as part of your backups if you're using one of
the supported cloud providers' block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks).
We also provide a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.
We integrated restic with Velero so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes
volume*. This is a new capability for Velero, not a replacement for existing functionality. If you're running on AWS, and
taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you've
been waiting for a snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations. Stay tuned as this evolves!
\* hostPath volumes are not supported, but the [new local volume type][4] is supported.
## Setup
### Prerequisites
- Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
### Instructions
Ensure you've [downloaded the latest release][3].
To install restic, use the `--use-restic` flag on the `velero install` command. See the [install overview][2] for more details.
Please note: for some Kubernetes-based PaaS/CaaS platforms, such as RancherOS, OpenShift, and Enterprise PKS, some modifications to the restic DaemonSet spec are required, as described below.
**RancherOS**
The host path for pod volumes is not `/var/lib/kubelet/pods`, but `/opt/rke/var/lib/kubelet/pods`. Modify the restic DaemonSet spec, changing:
```yaml
hostPath:
path: /var/lib/kubelet/pods
```
to
```yaml
hostPath:
path: /opt/rke/var/lib/kubelet/pods
```
**OpenShift**
The restic containers need to run in privileged mode to be able to mount the correct host path for pod volumes.
1. Add the `velero` ServiceAccount to the `privileged` SCC:
```
$ oc adm policy add-scc-to-user privileged -z velero -n velero
```
2. Modify the DaemonSet YAML to request privileged mode and to mount the correct host path for pod volumes.
```diff
@@ -35,7 +35,7 @@ spec:
secretName: cloud-credentials
- name: host-pods
hostPath:
- path: /var/lib/kubelet/pods
+ path: /var/lib/origin/openshift.local.volumes/pods
- name: scratch
emptyDir: {}
containers:
@@ -67,3 +67,5 @@ spec:
value: /credentials/cloud
- name: VELERO_SCRATCH_DIR
value: /scratch
+ securityContext:
+ privileged: true
```
If restic is not running in privileged mode, it will not be able to access pod volumes within the mounted hostPath directory because of the SELinux mode enforced by default at the host level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
By default, a user namespace in OpenShift does not schedule pods on all nodes in the cluster.
To schedule pods on all nodes, the namespace needs an annotation:
```
oc annotate namespace <velero namespace> openshift.io/node-selector=""
```
This should be done before installing Velero.
Alternatively, delete and re-create the restic DaemonSet after annotating the namespace:
```
oc get ds restic -o yaml -n <velero namespace> > ds.yaml
oc annotate namespace <velero namespace> openshift.io/node-selector=""
oc create -n <velero namespace> -f ds.yaml
```
**Enterprise PKS**
You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the host path.
The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`:
```yaml
hostPath:
path: /var/vcap/data/kubelet/pods
```
You're now ready to use Velero with restic.
## Back up
1. Run the following for each pod that contains a volume to back up:
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod spec.
For example, for the following pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: sample
namespace: foo
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-webserver
volumeMounts:
- name: pvc-volume
mountPath: /volume-1
- name: emptydir-volume
mountPath: /volume-2
volumes:
- name: pvc-volume
persistentVolumeClaim:
claimName: test-volume-claim
- name: emptydir-volume
emptyDir: {}
```
You'd run:
```bash
kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume
```
This annotation can also be provided in a pod template spec if you use a controller to manage your pods; see the example Deployment after these steps.
1. Take a Velero backup:
```bash
velero backup create NAME OPTIONS...
```
1. When the backup completes, view information about the backup and its pod volume backups:
```bash
velero backup describe YOUR_BACKUP_NAME
```
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
```
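As noted in the first step, the `backup.velero.io/backup-volumes` annotation can also live in a controller's pod template. A sketch for a Deployment (the names, image, and volumes are hypothetical, reusing the earlier pod example) might look like:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
  namespace: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
      annotations:
        # Velero's restic integration picks this up on the pods the Deployment creates.
        backup.velero.io/backup-volumes: pvc-volume
    spec:
      containers:
      - name: test-webserver
        image: k8s.gcr.io/test-webserver
        volumeMounts:
        - name: pvc-volume
          mountPath: /volume-1
      volumes:
      - name: pvc-volume
        persistentVolumeClaim:
          claimName: test-volume-claim
```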
## Restore
1. Restore from your Velero backup:
```bash
velero restore create --from-backup BACKUP_NAME OPTIONS...
```
1. When the restore completes, view information about your pod volume restores:
```bash
velero restore describe YOUR_RESTORE_NAME
```
```bash
kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOUR_RESTORE_NAME -o yaml
```
## Limitations
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. We've decided to use a static,
common encryption key for all restic repositories created by Velero. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
appropriately. We plan to implement full Velero backup encryption, including securing the restic encryption keys, in
a future release.
- The current Velero/restic integration relies on using pod names to associate restic backups with their parents. If a pod is restarted, such as with a Deployment,
the next restic backup taken will be treated as a completely new backup, not an incremental one.
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
difference is small.
## Customize Restore Helper Container
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `gcr.io/heptio-images/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image.
In addition, you can customize the resource requirements for the init container, should you need to.
The ConfigMap must look like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: restic-restore-action-config
# must be in the velero namespace
namespace: velero
# the below labels should be used verbatim in your
# ConfigMap.
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in restic restore
# item action plugin)
velero.io/plugin-config: ""
# this label identifies the name and kind of plugin
# that this ConfigMap is for.
velero.io/restic: RestoreItemAction
data:
# The value for "image" can either include a tag or not;
# if the tag is *not* included, the tag from the main Velero
# image will automatically be used.
image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
# "cpuRequest" sets the request.cpu value on the restic init containers during restore.
# If not set, it will default to "100m".
cpuRequest: 200m
# "memRequest" sets the request.memory value on the restic init containers during restore.
# If not set, it will default to "128Mi".
memRequest: 128Mi
# "cpuLimit" sets the request.cpu value on the restic init containers during restore.
# If not set, it will default to "100m".
cpuLimit: 200m
# "memLimit" sets the request.memory value on the restic init containers during restore.
# If not set, it will default to "128Mi".
memLimit: 128Mi
```
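For instance, after saving the ConfigMap to a file (the filename here is hypothetical), you could apply and verify it with:
```bash
kubectl -n velero apply -f restic-restore-action-config.yaml
kubectl -n velero get configmap restic-restore-action-config -o yaml
```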
## Troubleshooting
Run the following checks:
Are your Velero server and daemonset pods running?
```bash
kubectl get pods -n velero
```
Does your restic repository exist, and is it ready?
```bash
velero restic repo get
velero restic repo get REPO_NAME -o yaml
```
Are there any errors in your Velero backup/restore?
```bash
velero backup describe BACKUP_NAME
velero backup logs BACKUP_NAME
velero restore describe RESTORE_NAME
velero restore logs RESTORE_NAME
```
What is the status of your pod volume backups/restores?
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml
kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml
```
Is there any useful information in the Velero server or daemon pod logs?
```bash
kubectl -n velero logs deploy/velero
kubectl -n velero logs DAEMON_POD_NAME
```
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.
## How backup and restore work with restic
We introduced three custom resource definitions and associated controllers:
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.
You can see information about your Velero restic repositories by running `velero restic repo get`.
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to backup pod volume data.
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.
### Backup
1. The main Velero backup process checks each pod that it's backing up for the annotation specifying a restic backup
should be taken (`backup.velero.io/backup-volumes`)
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
1. The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail
1. Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- finds the pod volume's subdirectory within the above volume
- runs `restic backup`
- updates the status of the custom resource to `Completed` or `Failed`
1. As each `PodVolumeBackup` finishes, the main Velero process adds it to the Velero backup in a file named `<backup-name>-podvolumebackups.json.gz`. This file gets uploaded to object storage alongside the backup tarball. It will be used for restores, as seen in the next section.
### Restore
1. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to find the volumes to restore from.
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- waits for the pod to be running the init container
- finds the pod volume's subdirectory within the above volume
- runs `restic restore`
- on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore
that this pod volume restore is for
- updates the status of the custom resource to `Completed` or `Failed`
1. The init container that was added to the pod is running a process that waits until it finds a file
within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run
1. Once all such files are found, the init container's process terminates successfully and the pod moves
on to running other init containers/the main containers.
## 3rd party controller
### Monitor backup annotation
Velero does not currently provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
[1]: https://github.com/restic/restic
[2]: install-overview.md
[3]: https://github.com/heptio/velero/releases/
[4]: https://kubernetes.io/docs/concepts/storage/volumes/#local
[5]: http://restic.readthedocs.io/en/latest/100_references.html#terminology
[6]: https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
[7]: https://github.com/bitsbeats/velero-pvc-watcher

View File

@@ -0,0 +1,41 @@
# Restore Reference
## Restoring Into a Different Namespace
Velero can restore resources into a different namespace than the one they were backed up from. To do this, use the `--namespace-mappings` flag:
```bash
velero restore create RESTORE_NAME \
--from-backup BACKUP_NAME \
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
```
## Changing PV/PVC Storage Classes
Velero can change the storage class of persistent volumes and persistent volume claims during restores. To configure a storage class mapping, create a config map in the Velero namespace like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: change-storage-class-config
# must be in the velero namespace
namespace: velero
# the below labels should be used verbatim in your
# ConfigMap.
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in change storage
# class restore item action plugin)
velero.io/plugin-config: ""
# this label identifies the name and kind of plugin
# that this ConfigMap is for.
velero.io/change-storage-class: RestoreItemAction
data:
# add 1+ key-value pairs here, where the key is the old
# storage class name and the value is the new storage
# class name.
<old-storage-class>: <new-storage-class>
```
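As a concrete illustration (the storage class names and the filename are hypothetical), a filled-in mapping and its application could look like:
```bash
# change-storage-class.yaml follows the template above, with for example:
#   data:
#     gp2: premium-ssd
kubectl -n velero apply -f change-storage-class.yaml
kubectl -n velero get configmap change-storage-class-config --show-labels
```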

View File

@@ -0,0 +1,75 @@
# Supported Kubernetes Versions
- In general, Velero works on Kubernetes version 1.7 or later (when Custom Resource Definitions were introduced).
- Restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. See [Restic Integration][17].
# Compatible Storage Providers
Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Backup Storage Providers
| Provider | Owner | Contact |
|---------------------------|----------|---------------------------------|
| [AWS S3][2] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Azure Blob Storage][3] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Google Cloud Storage][4] | Velero Team | [Slack][10], [GitHub Issue][11] |
## S3-Compatible Backup Storage Providers
Velero uses [Amazon's Go SDK][12] to connect to the S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
_Note that these providers are not regularly tested by the Velero team._
* [IBM Cloud][5]
* [Minio][9]
* Ceph RADOS v12.2.7
* [DigitalOcean][7]
* Quobyte
* [NooBaa][16]
* [Oracle Cloud][23]
_Some storage providers, like Quobyte, may need a different [signature algorithm version][15]._
## Volume Snapshot Providers
| Provider | Owner | Contact |
|----------------------------------|-----------------|---------------------------------|
| [AWS EBS][2] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Azure Managed Disks][3] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Google Compute Engine Disks][4] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Restic][1] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Portworx][6] | Portworx | [Slack][13], [GitHub Issue][14] |
| [DigitalOcean][7] | StackPointCloud | |
| [OpenEBS][18] | OpenEBS | [Slack][19], [GitHub Issue][20] |
| [AlibabaCloud][21] | AlibabaCloud | [GitHub Issue][22] |
### Adding a new plugin
To write a plugin for a new backup or volume storage system, take a look at the [example repo][8].
After you publish your plugin, open a PR that adds your plugin to the appropriate list.
[1]: restic.md
[2]: aws-config.md
[3]: azure-config.md
[4]: gcp-config.md
[5]: ibm-config.md
[6]: https://docs.portworx.com/scheduler/kubernetes/ark.html
[7]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[8]: https://github.com/heptio/velero-plugin-example/
[9]: get-started.md
[10]: https://kubernetes.slack.com/messages/velero
[11]: https://github.com/heptio/velero/issues
[12]: https://github.com/aws/aws-sdk-go/aws
[13]: https://portworx.slack.com/messages/px-k8s
[14]: https://github.com/portworx/ark-plugin/issues
[15]: api-types/backupstoragelocation.md#aws
[16]: http://www.noobaa.com/
[17]: restic.md
[18]: https://github.com/openebs/velero-plugin
[19]: https://openebs-community.slack.com/
[20]: https://github.com/openebs/velero-plugin/issues
[21]: https://github.com/AliyunContainerService/velero-plugin
[22]: https://github.com/AliyunContainerService/velero-plugin/issues
[23]: oracle-config.md

View File

@@ -0,0 +1,75 @@
# Troubleshooting
These tips can help you troubleshoot known issues. If they don't help, you can [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
See also:
- [Debug installation/setup issues][2]
- [Debug restores][1]
## General troubleshooting information
You can use the `velero bug` command to open a [GitHub issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available), and the `velero` client version. This information isn't submitted to GitHub until you click the `Submit new issue` button in the GitHub UI, so feel free to add, remove, or update whatever information you like.
Some general commands for troubleshooting that may be helpful:
* `velero backup describe <backupName>` - describe the details of a backup
* `velero backup logs <backupName>` - fetch the logs for this specific backup. Useful for viewing failures and warnings, including resources that could not be backed up.
* `velero restore describe <restoreName>` - describe the details of a restore
* `velero restore logs <restoreName>` - fetch the logs for this specific restore. Useful for viewing failures and warnings, including resources that could not be restored.
* `kubectl logs deployment/velero -n velero` - fetch the logs of the Velero server pod. This provides the output of the Velero server processes.
### Getting velero debug logs
You can increase the verbosity of the Velero server by editing your Velero deployment to look like this:
```
kubectl edit deployment/velero -n velero
...
containers:
- name: velero
image: gcr.io/heptio-images/velero:latest
command:
- /velero
args:
- server
- --log-level # Add this line
- debug # Add this line
...
```
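If you prefer not to edit the deployment interactively, a one-liner along these lines should have the same effect (a sketch; it appends a combined `--log-level=debug` argument to the first container):
```bash
kubectl -n velero patch deployment/velero --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--log-level=debug"}]'
```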
## Known issue with restoring LoadBalancer Service
Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.
Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See [the Kubernetes documentation about Services of Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
## Miscellaneous issues
### Velero reports `custom resource not found` errors when starting up.
Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Run `velero install` again to install any missing custom resource definitions.
### `velero backup logs` returns a `SignatureDoesNotMatch` error
Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible
providers, such as Ceph, there may be differences between their implementation and the official S3
API that cause errors.
Here are some things to verify if you receive `SignatureDoesNotMatch` errors:
* Make sure your S3-compatible layer is using [signature version 4][5] (such as Ceph RADOS v12.2.7)
* For Ceph, try using a native Ceph account for credentials instead of external providers such as OpenStack Keystone
### Velero (or a pod it was backing up) restarted during a backup and the backup is stuck InProgress
Velero cannot currently resume backups that were interrupted. Backups stuck in the `InProgress` phase can be deleted with `kubectl delete backup <name> -n <velero-namespace>`.
Backups in the `InProgress` phase have not uploaded any files to object storage.
[1]: debugging-restores.md
[2]: debugging-install.md
[4]: https://github.com/heptio/velero/issues
[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
[25]: https://kubernetes.slack.com/messages/velero

View File

@@ -0,0 +1,144 @@
# Upgrading to Velero 1.0
## Prerequisites
- Velero v0.11 installed. If you're not already on v0.11, see the [instructions for upgrading to v0.11][0]. **Upgrading directly from v0.10.x or earlier to v1.0 is not supported!**
- (Optional, but strongly recommended) Create a full copy of the object storage bucket(s) Velero is using. Part 1 of the upgrade procedure will modify the contents of the bucket, so we recommend creating a backup copy of it prior to upgrading.
## Instructions
### Part 1 - Rewrite Legacy Metadata
#### Overview
You need to replace legacy metadata in object storage with updated versions **for any backups that were originally taken with a version prior to v0.11 (i.e. when the project was named Ark)**. While Velero v0.11 is backwards-compatible with these legacy files, Velero v1.0 is not.
_If you're sure that you do not have any backups that were originally created prior to v0.11 (with Ark), you can proceed directly to Part 2._
We've added a CLI command to [Velero v0.11.1][1], `velero migrate-backups`, to help you with this. This command will:
- Replace `ark-backup.json` files in object storage with equivalent `velero-backup.json` files.
- Create `<backup-name>-volumesnapshots.json.gz` files in object storage if they don't already exist, containing snapshot metadata populated from the backups' `status.volumeBackups` field*.
_*backups created prior to v0.10 stored snapshot metadata in the `status.volumeBackups` field, but it has subsequently been replaced with the `<backup-name>-volumesnapshots.json.gz` file._
#### Instructions
1. Download the [v0.11.1 release tarball][1] for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
1. Scale down your existing Velero deployment:
```bash
kubectl -n velero scale deployment/velero --replicas 0
```
1. Fetch velero's credentials for accessing your object storage bucket and store them locally for use by `velero migrate-backups`:
For AWS:
```bash
export AWS_SHARED_CREDENTIALS_FILE=./velero-migrate-backups-credentials
kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.cloud}" | base64 --decode > $AWS_SHARED_CREDENTIALS_FILE
````
For Azure:
```bash
export AZURE_SUBSCRIPTION_ID=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_SUBSCRIPTION_ID}" | base64 --decode)
export AZURE_TENANT_ID=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_TENANT_ID}" | base64 --decode)
export AZURE_CLIENT_ID=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_CLIENT_ID}" | base64 --decode)
export AZURE_CLIENT_SECRET=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_CLIENT_SECRET}" | base64 --decode)
export AZURE_RESOURCE_GROUP=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_RESOURCE_GROUP}" | base64 --decode)
```
For GCP:
```bash
export GOOGLE_APPLICATION_CREDENTIALS=./velero-migrate-backups-credentials
kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.cloud}" | base64 --decode > $GOOGLE_APPLICATION_CREDENTIALS
```
1. List all of your backup storage locations:
```bash
velero backup-location get
```
1. For each backup storage location that you want to use with Velero 1.0, replace any legacy pre-v0.11 backup metadata with the equivalent current formats:
```
# - BACKUP_LOCATION_NAME is the name of a backup location from the previous step, whose
# backup metadata will be updated in object storage
# - SNAPSHOT_LOCATION_NAME is the name of the volume snapshot location that Velero should
# record volume snapshots as existing in (this is only relevant if you have backups that
# were originally taken with a pre-v0.10 Velero/Ark.)
velero migrate-backups \
--backup-location <BACKUP_LOCATION_NAME> \
--snapshot-location <SNAPSHOT_LOCATION_NAME>
```
1. Scale up your deployment:
```bash
kubectl -n velero scale deployment/velero --replicas 1
```
1. Remove the local `velero` credentials:
For AWS:
```
rm $AWS_SHARED_CREDENTIALS_FILE
unset AWS_SHARED_CREDENTIALS_FILE
```
For Azure:
```
unset AZURE_SUBSCRIPTION_ID
unset AZURE_TENANT_ID
unset AZURE_CLIENT_ID
unset AZURE_CLIENT_SECRET
unset AZURE_RESOURCE_GROUP
```
For GCP:
```
rm $GOOGLE_APPLICATION_CREDENTIALS
unset GOOGLE_APPLICATION_CREDENTIALS
```
### Part 2 - Upgrade Components to Velero 1.0
#### Overview
#### Instructions
1. Download the [v1.0 release tarball][2] for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
1. Move the `velero` binary from the Velero directory to somewhere in your PATH, replacing any existing pre-1.0 `velero` binaries.
1. Update the image for the Velero deployment and daemon set (if applicable):
```bash
kubectl -n velero set image deployment/velero velero=gcr.io/heptio-images/velero:v1.0.0
kubectl -n velero set image daemonset/restic restic=gcr.io/heptio-images/velero:v1.0.0
```
[0]: https://velero.io/docs/v0.11.0/migrating-to-velero
[1]: https://github.com/heptio/velero/releases/tag/v0.11.1
[2]: https://github.com/heptio/velero/releases/tag/v1.0.0

View File

@@ -0,0 +1,18 @@
# Vendoring dependencies
## Overview
We are using [dep][0] to manage dependencies. You can install it by following [these
instructions][1].
## Adding a new dependency
Run `dep ensure`. If you want to see verbose output, you can append `-v` as in
`dep ensure -v`.
## Updating an existing dependency
Run `dep ensure -update <pkg> [<pkg> ...]` to update one or more dependencies.
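For example, to bump a single dependency with verbose output (the package path here is only an illustration):
```bash
dep ensure -update github.com/pkg/errors -v
```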
[0]: https://github.com/golang/dep
[1]: https://golang.github.io/dep/docs/installation.html

View File

@@ -0,0 +1,15 @@
# ZenHub
As an Open Source community, it is necessary for our work, communication, and collaboration to be done in the open.
GitHub provides a central repository for code, pull requests, issues, and documentation. When applicable, we will use Google Docs for design reviews, proposals, and other working documents.
While GitHub issues, milestones, and labels generally work pretty well, the Velero team has found that product planning requires some additional tooling that GitHub projects do not offer.
In our effort to minimize tooling while enabling product management insights, we have decided to use [ZenHub Open-Source](https://www.zenhub.com/blog/open-source/) to overlay product and project tracking on top of GitHub.
ZenHub is a GitHub application that provides Kanban visualization, Epic tracking, fine-grained prioritization, and more. Its primary backing storage is existing GitHub issues, along with additional metadata stored in ZenHub's database.
If you are a Velero user or Velero developer, you do not _need_ to use ZenHub for your regular workflow (e.g. to see open bug reports or feature requests, or to work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
## Using ZenHub
ZenHub can be integrated within the GitHub interface using their [Chrome or Firefox extensions](https://www.zenhub.com/extension). In addition, you can use their dedicated [web application](https://app.zenhub.com/workspace/o/heptio/velero/boards?filterLogic=all&repos=99143276).