remove refs to pre-v1.0 velero versions from docs (#1425)

* remove refs to pre-v1.0 velero versions from docs

Signed-off-by: Steve Kriss <krisss@vmware.com>
pull/1429/head
Steve Kriss 2019-05-01 14:54:51 -06:00 committed by KubeKween
parent 0205a43028
commit f3d36afd3a
13 changed files with 50 additions and 62 deletions


@@ -22,7 +22,7 @@ kind: Backup
 metadata:
   # Backup name. May be any valid Kubernetes object name. Required.
   name: a
-  # Backup namespace. Required. In version 0.7.0 and later, can be any string. Must be the namespace of the Velero server.
+  # Backup namespace. Must be the namespace of the Velero server. Required.
   namespace: velero
 # Parameters about the backup. Required.
 spec:


@@ -6,8 +6,6 @@ Velero can store backups in a number of locations. These are represented in the
 Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
-> *NOTE*: `BackupStorageLocation` takes the place of the `Config.backupStorageProvider` key as of v0.10.0
 A sample YAML `BackupStorageLocation` looks like the following:
 ```yaml


@@ -5,7 +5,7 @@ When you run commands to get logs or describe a backup, the Velero server genera
 - Change the Minio Service type from `ClusterIP` to `NodePort`.
 - Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
-In Velero 0.10, you can also specify the value of a new `publicUrl` field for the pre-signed URL in your backup storage config.
+You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
 For basic instructions on how to install the Velero server and client, see [the getting started example][1].
@@ -13,7 +13,7 @@ For basic instructions on how to install the Velero server and client, see [the
 The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
-You must also get the Minio URL, which you can then specify as the value of the new `publicUrl` field in your backup storage config.
+You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
 1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
@@ -35,7 +35,7 @@ You must also get the Minio URL, which you can then specify as the value of the
    kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
    ```
-1. In `examples/minio/05-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide this Minio URL as the value of the `publicUrl` field. You must include the `http://` or `https://` prefix.
+1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
 ## Work with Ingress
@@ -45,6 +45,6 @@ In this case:
 1. Keep the Service type as `ClusterIP`.
-1. In `examples/minio/05-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide the URL and port of your Ingress as the value of the `publicUrl` field.
+1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
 [1]: get-started.md
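Assembled, the `publicUrl` change described above produces a `BackupStorageLocation` along these lines. This is a sketch based on the Minio example; the bucket name, service URL, and node address are illustrative placeholders:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero                    # illustrative bucket name
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000   # in-cluster Minio address (illustrative)
    # publicUrl must include the http:// or https:// prefix
    publicUrl: http://<NODE_IP>:<NODE_PORT>
```

With this in place, the pre-signed URLs in `velero backup describe --details` and log output are rewritten to use `publicUrl`, so they resolve from outside the cluster.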


@@ -26,12 +26,18 @@ in [pod_action.go](https://github.com/heptio/velero/blob/master/pkg/restore/pod_
 ## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?
-We **strongly** recommend that you use a separate bucket per cluster to store backups. Sharing a bucket
-across multiple Velero instances can lead to numerous problems - failed backups, overwritten backups,
-inadvertently deleted backups, etc., all of which can be avoided by using a separate bucket per Velero
-instance.
+We **strongly** recommend that each Velero instance use a distinct bucket/prefix combination to store backups.
+Having multiple Velero instances write backups to the same bucket/prefix combination can lead to numerous
+problems - failed backups, overwritten backups, inadvertently deleted backups, etc., all of which can be
+avoided by using a separate bucket + prefix per Velero instance.
+
+It's fine to have multiple Velero instances back up to the same bucket if each instance uses its own
+prefix within the bucket. This can be configured in your `BackupStorageLocation`, by setting the
+`spec.objectStorage.prefix` field. It's also fine to use a distinct bucket for each Velero instance,
+and not to use prefixes at all.
-Related to this, if you need to restore a backup from cluster A into cluster B, please use restore-only
-mode in cluster B's Velero instance (via the `--restore-only` flag on the `velero server` command specified
-in your Velero deployment) while it's configured to use cluster A's bucket. This will ensure no
-new backups are created, and no existing backups are deleted or overwritten.
+Related to this, if you need to restore a backup that was created in cluster A into cluster B, you may
+configure cluster B with a backup storage location that points to cluster A's bucket/prefix. If you do
+this, you should use restore-only mode in cluster B's Velero instance (via the `--restore-only` flag on
+the `velero server` command specified in your Velero deployment) while it's configured to use cluster A's
+bucket/prefix. This will ensure no new backups are created, and no existing backups are deleted or overwritten.
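The per-instance prefix the FAQ answer describes is set in the `BackupStorageLocation` spec. A minimal sketch (bucket and prefix names are illustrative):

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: shared-velero-backups   # bucket shared by several clusters
    prefix: cluster-a               # unique prefix for this Velero instance
  config:
    region: us-east-1
```

Each cluster's Velero instance would get the same bucket but a different `prefix`, keeping their backup data cleanly separated.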


@@ -5,14 +5,9 @@ Velero currently supports executing commands in containers in pods during a back
 ## Backup Hooks
 When performing a backup, you can specify one or more commands to execute in a container in a pod
-when that pod is being backed up.
-
-Velero versions prior to v0.7.0 only support hooks that execute prior to any custom action processing
-("pre" hooks).
-
-As of version v0.7.0, Velero also supports "post" hooks - these execute after all custom actions have
-completed, as well as after all the additional items specified by custom actions have been backed
-up.
+when that pod is being backed up. The commands can be configured to run *before* any custom action
+processing ("pre" hooks), or after all custom actions have been completed and any additional items
+specified by custom actions have been backed up ("post" hooks).
 There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
@@ -30,7 +25,7 @@ You can use the following annotations on a pod to make Velero execute a hook whe
 | `pre.hook.backup.velero.io/timeout` | How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional. |
-#### Post hooks (v0.7.0+)
+#### Post hooks
 | Annotation Name | Description |
 | --- | --- |
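As an illustration of the annotation-based approach, a pod that freezes its filesystem before backup and unfreezes it afterward could be annotated roughly like this. The container name, command, and mount path are hypothetical; the annotation keys follow the tables above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-hooks
  annotations:
    # "pre" hook: runs before the pod is backed up
    pre.hook.backup.velero.io/container: fsfreeze
    pre.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--freeze", "/var/lib/data"]'
    pre.hook.backup.velero.io/timeout: 30s
    # "post" hook: runs after custom actions and additional items are backed up
    post.hook.backup.velero.io/container: fsfreeze
    post.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--unfreeze", "/var/lib/data"]'
spec:
  containers:
  - name: fsfreeze
    image: busybox
```

Note that the command value is a JSON-array string, so commands with arguments are passed without shell quoting surprises.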


@@ -6,7 +6,7 @@ This document describes Velero's image tagging policy.
 `gcr.io/heptio-images/velero:<SemVer>`
-Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/heptio/velero` repository has a matching image, e.g. `gcr.io/heptio-images/velero:v0.11.0`.
+Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/heptio/velero` repository has a matching image, e.g. `gcr.io/heptio-images/velero:v1.0.0`.
 ### Latest


@@ -2,9 +2,9 @@
 You can run Velero with a cloud provider or on-premises. For detailed information about the platforms that Velero supports, see [Compatible Storage Providers][99].
-In version 0.7.0 and later, you can run Velero in any namespace, which requires additional customization. See [Run in custom namespace][3].
+You can run Velero in any namespace, which requires additional customization. See [Run in custom namespace][3].
-In version 0.9.0 and later, you can use Velero's integration with restic, which requires additional setup. See [restic instructions][20].
+You can also use Velero's integration with restic, which requires additional setup. See [restic instructions][20].
 ## Customize configuration


@@ -1,30 +1,20 @@
 # Backup Storage Locations and Volume Snapshot Locations
-Velero v0.10 introduces a new way of configuring where Velero backups and their associated persistent volume snapshots are stored.
-
-## Motivations
-
-In Velero versions prior to v0.10, the configuration for where to store backups & volume snapshots is specified in a `Config` custom resource. The `backupStorageProvider` section captures the place where all Velero backups should be stored. This is defined by a **provider** (e.g. `aws`, `azure`, `gcp`, `minio`, etc.), a **bucket**, and possibly some additional provider-specific settings (e.g. `region`). Similarly, the `persistentVolumeProvider` section captures the place where all persistent volume snapshots taken as part of Velero backups should be stored, and is defined by a **provider** and additional provider-specific settings (e.g. `region`).
-
-There are a number of use cases that this basic design does not support, such as:
-
-- Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
-- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
-- For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
-
-Additionally, as we look ahead to backup replication, a major feature on our roadmap, we know that we'll need Velero to be able to support multiple possible storage locations.
 ## Overview
-In Velero v0.10 we got rid of the `Config` custom resource, and replaced it with two new custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`. The new resources directly replace the legacy `backupStorageProvider` and `persistentVolumeProvider` sections of the `Config` resource, respectively.
-
-Now, the user can pre-define more than one possible `BackupStorageLocation` and more than one `VolumeSnapshotLocation`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
+Velero has two custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`, that are used to configure where Velero backups and their associated persistent volume snapshots are stored.
 A `BackupStorageLocation` is defined as a bucket, a prefix within that bucket under which all Velero data should be stored, and a set of additional provider-specific fields (e.g. AWS region, Azure storage account, etc.) The [API documentation][1] captures the configurable parameters for each in-tree provider.
 A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (e.g. AWS region, Azure resource group, Portworx snapshot type, etc.) The [API documentation][2] captures the configurable parameters for each in-tree provider.
-Additionally, since multiple `VolumeSnapshotLocations` can be created, the user can now configure locations for more than one volume provider, and if the cluster has volumes from multiple providers (e.g. AWS EBS and Portworx), all of them can be snapshotted in a single Velero backup.
+The user can pre-configure one or more possible `BackupStorageLocations` and one or more `VolumeSnapshotLocations`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
+
+This configuration design enables a number of different use cases, including:
+
+- Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
+- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
+- For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
 ## Limitations / Caveats
@@ -34,11 +24,11 @@ Additionally, since multiple `VolumeSnapshotLocations` can be created, the user
 - Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. EBS and Portworx), but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will **only** snapshot the EBS volumes.
-- Restic data is now stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
+- Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
 ## Examples
-Let's look at some examples of how we can use this new mechanism to address each of our previously unsupported use cases:
+Let's look at some examples of how we can use this configuration mechanism to address some common use cases:
 #### Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
@@ -130,9 +120,9 @@ velero backup create cloud-snapshot-backup \
   --volume-snapshot-locations portworx-cloud
 ```
-#### One location is still easy
+#### Use a single location
-If you don't have a use case for more than one location, it's still just as easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
+If you don't have a use case for more than one location, it's still easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
 During server configuration:
@@ -149,16 +139,16 @@ velero snapshot-location create ebs-us-west-1 \
 During backup creation:
 ```shell
-# Velero's will automatically use your configured backup storage location and volume snapshot location.
-# Nothing new needs to be specified when creating a backup.
+# Velero will automatically use your configured backup storage location and volume snapshot location.
+# Nothing needs to be specified when creating a backup.
 velero backup create full-cluster-backup
 ```
 ## Additional Use Cases
-1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is now possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
+1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
-1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups. This is now possible using a `BackupStorageLocation`, by specifying a `storageAccount` and/or `resourceGroup`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
+1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups. This is possible using a `BackupStorageLocation`, by specifying a `storageAccount` and/or `resourceGroup`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
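The Azure use case in item 1 amounts to a `VolumeSnapshotLocation` along these lines. This is a sketch; the location and resource group names are illustrative, and your provider may require additional `config` fields:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: azure-custom-rg
  namespace: velero
spec:
  provider: azure
  config:
    # Store snapshots in this resource group instead of the
    # AKS-managed "infrastructure" resource group
    resourceGroup: my-snapshot-rg
```

A backup can then select it with `velero backup create <name> --volume-snapshot-locations azure-custom-rg`.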


@@ -21,7 +21,7 @@ Velero can help you port your resources from one cluster to another, as long as
    velero backup describe <BACKUP-NAME>
    ```
-   **Note:** As of version 0.10, the default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
+   **Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
 1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
@@ -45,4 +45,4 @@ Check that the second cluster is behaving as expected:
    velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
    ```
 If you encounter issues, make sure that Velero is running in the same namespace in both clusters.


@@ -1,6 +1,6 @@
 # Run in custom namespace
-In Velero version 0.7.0 and later, you can run Velero in any namespace.
+You can run Velero in any namespace.
 First, ensure you've [downloaded & extracted the latest release][0].


@@ -1,12 +1,11 @@
 # Restic Integration
-As of version 0.9.0, Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called
-[restic][1].
+Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic][1].
 Velero has always allowed you to take snapshots of persistent volumes as part of your backups if you're using one of
 the supported cloud provider's block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks).
-Starting with version 0.6.0, we provide a plugin model that enables anyone to implement additional object and block storage
-backends, outside the main Velero repository.
+We also provide a plugin model that enables anyone to implement additional object and block storage backends, outside the
+main Velero repository.
 We integrated restic with Velero so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes
 volume*. This is a new capability for Velero, not a replacement for existing functionality. If you're running on AWS, and


@ -1,6 +1,6 @@
# Compatible Storage Providers # Compatible Storage Providers
Velero supports a variety of storage providers for different backup and snapshot operations. As of version 0.6.0, a plugin system allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase. Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Backup Storage Providers ## Backup Storage Providers


@@ -9,7 +9,7 @@ See also:
 ## General troubleshooting information
-In `velero` version >= `0.10.0`, you can use the `velero bug` command to open a [Github issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available) and the `velero` client version. This information isn't submitted to Github until you click the `Submit new issue` button in the Github UI, so feel free to add, remove or update whatever information you like.
+You can use the `velero bug` command to open a [Github issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available) and the `velero` client version. This information isn't submitted to Github until you click the `Submit new issue` button in the Github UI, so feel free to add, remove or update whatever information you like.
 Some general commands for troubleshooting that may be helpful: