commit
5d008491bb
|
@ -1,11 +1,11 @@
|
|||
## Current release:
|
||||
* [CHANGELOG-1.1.md][11]
|
||||
* [CHANGELOG-1.2.md][12]
|
||||
|
||||
## Development release:
|
||||
* [v1.2.0-beta.1][12]
|
||||
* [Unreleased Changes][0]
|
||||
|
||||
## Older releases:
|
||||
* [CHANGELOG-1.1.md][11]
|
||||
* [CHANGELOG-1.0.md][10]
|
||||
* [CHANGELOG-0.11.md][9]
|
||||
* [CHANGELOG-0.10.md][8]
|
||||
|
|
|
@ -1,40 +1,60 @@
|
|||
## v1.2.0-beta.1
|
||||
#### 2019-10-24
|
||||
## v1.2.0
|
||||
#### 2019-11-07
|
||||
|
||||
### Download
|
||||
- https://github.com/vmware-tanzu/velero/releases/tag/v1.2.0-beta.1
|
||||
https://github.com/vmware-tanzu/velero/releases/tag/v1.2.0
|
||||
|
||||
### Container Image
|
||||
`velero/velero:v1.2.0-beta.1`
|
||||
`velero/velero:v1.2.0`
|
||||
|
||||
Please note that as of this release we are no longer publishing new container images to `gcr.io/heptio-images`. The existing ones will remain there for the foreseeable future.
|
||||
|
||||
### Documentation
|
||||
https://velero.io/docs/v1.2.0-beta.1/
|
||||
https://velero.io/docs/v1.2.0/
|
||||
|
||||
### Upgrading
|
||||
|
||||
If you're upgrading from a previous version of Velero, there are several changes you'll need to be aware of:
|
||||
|
||||
- Container images are now published to Docker Hub. To upgrade your server, use the new image `velero/velero:v1.2.0-beta.1`.
|
||||
- The AWS, Microsoft Azure, and GCP provider plugins that were previously part of the Velero binary have been extracted to their own standalone repositories/plugin images. If you are using one of these three providers, you will need to explicitly add the appropriate plugin to your Velero install:
|
||||
- [AWS] `velero plugin add velero/velero-plugin-for-aws:v1.0.0-beta.1`
|
||||
- [Azure] `velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0-beta.1`
|
||||
- [GCP] `velero plugin add velero/velero-plugin-for-gcp:v1.0.0-beta.1`
|
||||
https://velero.io/docs/v1.2.0/upgrade-to-1.2/
|
||||
|
||||
### Highlights
|
||||
## Moving Cloud Provider Plugins Out of Tree
|
||||
|
||||
- The AWS, Microsoft Azure, and GCP provider plugins that were previously part of the Velero binary have been extracted to their own standalone repositories/plugin images. They now function like any other provider plugin.
|
||||
- Container images are now published to Docker Hub: `velero/velero:v1.2.0-beta.1`.
|
||||
- Several improvements have been made to the restic integration:
|
||||
- Backup and restore progress is now updated on the `PodVolumeBackup` and `PodVolumeRestore` custom resources and viewable via `velero backup/restore describe` while operations are in progress.
|
||||
- Read-write-many PVCs are now only backed up once.
|
||||
- Backups of PVCs remain incremental across pod reschedules.
|
||||
- A structural schema has been added to the Velero CRDs that are created by `velero install` to enable validation of API fields.
|
||||
- During restores that use the `--namespace-mappings` flag to clone a namespace within a cluster, PVs will now be cloned as needed.
|
||||
Velero has had built-in support for AWS, Microsoft Azure, and Google Cloud Platform (GCP) since day 1. When Velero moved to a plugin architecture for object store providers and volume snapshotters in version 0.6, the code for these three providers was converted to use the plugin interface provided by this new architecture, but the cloud provider code still remained inside the Velero codebase. This put the AWS, Azure, and GCP plugins in a different position compared with other providers’ plugins, since they automatically shipped with the Velero binary and could include documentation in-tree.
|
||||
|
||||
With version 1.2, we’ve extracted the AWS, Azure, and GCP plugins into their own repositories, one per provider. We now also publish one plugin image per provider. This change brings these providers to parity with other providers’ plugin implementations, reduces the size of the core Velero binary by not requiring each provider’s SDK to be included, and opens the door for the plugins to be maintained and released independently of core Velero.
|
||||
|
||||
## Restic Integration Improvements
|
||||
|
||||
We’ve continued to work on improving Velero’s restic integration. With this release, we’ve made the following enhancements:
|
||||
|
||||
- Restic backup and restore progress is now captured during execution and visible to the user through the `velero backup/restore describe --details` command. The details are updated every 10 seconds. This provides a new level of visibility into restic operations for users.
|
||||
- Restic backups of persistent volume claims (PVCs) now remain incremental across the rescheduling of a pod. Previously, if the pod using a PVC was rescheduled, the next restic backup would require a full rescan of the volume’s contents. This improvement potentially makes such backups significantly faster.
|
||||
- Read-write-many volumes are no longer backed up once for every pod using the volume, but instead just once per Velero backup. This improvement speeds up backups and prevents potential restore issues due to multiple copies of the backup being processed simultaneously.
|
||||
|
||||
|
||||
## Clone PVs When Cloning a Namespace
|
||||
|
||||
Before version 1.2, you could clone a Kubernetes namespace by backing it up and then restoring it to a different namespace in the same cluster by using the `--namespace-mappings` flag with the `velero restore create` command. However, in this scenario, Velero was unable to clone persistent volumes used by the namespace, leading to errors for users.
|
||||
|
||||
In version 1.2, Velero automatically detects when you are trying to clone an existing namespace and clones the persistent volumes used by the namespace as well. This doesn’t require the user to specify any additional flags for the `velero restore create` command. This change lets you fully clone namespaces, including their persistent storage, within a cluster.
|
||||
|
||||
## Improved Server-Side Encryption Support
|
||||
|
||||
To help you secure your important backup data, we’ve added support for more forms of server-side encryption of backup data on both AWS and GCP. Specifically:
|
||||
- On AWS, Velero now supports Amazon S3-managed encryption keys (SSE-S3), which uses AES256 encryption, by specifying `serverSideEncryption: AES256` in a backup storage location’s config.
|
||||
- On GCP, Velero now supports using a specific Cloud KMS key for server-side encryption by specifying `kmsKeyName: <key name>` in a backup storage location’s config.
|
||||
|
||||
## CRD Structural Schema
|
||||
|
||||
In Kubernetes 1.16, custom resource definitions (CRDs) reached general availability. Structural schemas are required for CRDs created in the `apiextensions.k8s.io/v1` API group. Velero now defines a structural schema for each of its CRDs and automatically applies it when the user runs the `velero install` command. The structural schemas enable the user to get quicker feedback when their backup, restore, or schedule request is invalid, so they can immediately remediate their request.
|
||||
|
||||
### All Changes
|
||||
* Ensure object store plugin processes are cleaned up after restore and after BSL validation during server start up (#2041, @betta1)
|
||||
* bug fix: don't try to restore pod volume backups that don't have a snapshot ID (#2031, @skriss)
|
||||
* Restore Documentation: updated the restore documentation to clarify the implications of removing a restore object. (#1957, @nainav)
|
||||
* add `--allow-partially-failed` flag to `velero restore create` for use with `--from-schedule` to allow partially-failed backups to be restored (#1994, @skriss)
|
||||
* Allow backup storage locations to specify backup sync period or toggle off sync (#1936, @betta1)
|
||||
* Remove cloud provider code (#1985, @carlisia)
|
||||
* Restore action for cluster/namespace role bindings (#1974, @alexander)
|
||||
* Restore action for cluster/namespace role bindings (#1974, @alexander-demichev)
|
||||
* Add `--no-default-backup-location` flag to `velero install` (#1931, @Frank51)
|
||||
* If includeClusterResources is nil/auto, pull in necessary CRDs in backupResource (#1831, @sseago)
|
||||
* Azure: add support for Azure China/German clouds (#1938, @andyzhangx)
|
||||
|
@ -61,4 +81,3 @@ If you're upgrading from a previous version of Velero, there are several changes
|
|||
* fix error formatting due to interpreting % as printf-formatted strings (#1781, @s12chung)
|
||||
* when using `velero restore create --namespace-mappings ...` to create a second copy of a namespace in a cluster, create copies of the PVs used (#1779, @skriss)
|
||||
* adds --from-schedule flag to the `velero create backup` command to create a Backup from an existing Schedule (#1734, @prydonius)
|
||||
* add `--allow-partially-failed` flag to `velero restore create` for use with `--from-schedule` to allow partially-failed backups to be restored (#1994, @skriss)
|
||||
|
|
|
@ -1 +0,0 @@
|
|||
adds --from-schedule flag to the `velero create backup` command to create a Backup from an existing Schedule
|
|
@ -1 +0,0 @@
|
|||
when using `velero restore create --namespace-mappings ...` to create a second copy of a namespace in a cluster, create copies of the PVs used
|
|
@ -1 +0,0 @@
|
|||
fix error formatting due to interpreting % as printf-formatted strings
|
|
@ -1 +0,0 @@
|
|||
adds `insecureSkipTLSVerify` server config for AWS storage and `--insecure-skip-tls-verify` flag on client for self-signed certs
|
|
@ -1 +0,0 @@
|
|||
remove 'restic check' calls from before/after 'restic prune' since they're redundant
|
|
@ -1 +0,0 @@
|
|||
Add `--features` argument to all velero commands to provide feature flags that can control enablement of pre-release features.
|
|
@ -1 +0,0 @@
|
|||
when backing up PVCs with restic, specify --parent flag to prevent full volume rescans after pod reschedules
|
|
@ -1 +0,0 @@
|
|||
report backup progress in PodVolumeBackups and expose progress in the velero backup describe --details command. Also upgrades restic to v0.9.5
|
|
@ -1 +0,0 @@
|
|||
If includeClusterResources is nil/auto, pull in necessary CRDs in backupResource
|
|
@ -1 +0,0 @@
|
|||
fix excluding additional items with the velero.io/exclude-from-backup=true label
|
|
@ -1 +0,0 @@
|
|||
Jekyll site updates - modifies documentation to use a wider layout; adds better markdown table formatting
|
|
@ -1 +0,0 @@
|
|||
report restore progress in PodVolumeRestores and expose progress in the velero restore describe --details command
|
|
@ -1 +0,0 @@
|
|||
velero install: if `--use-restic` and `--wait` are specified, wait up to a minute for restic daemonset to be ready
|
|
@ -1 +0,0 @@
|
|||
change default `restic prune` interval to 7 days, add `velero server/install` flags for specifying an alternate default value.
|
|
@ -1 +0,0 @@
|
|||
AWS: add support for SSE-S3 AES256 encryption via `serverSideEncryption` config field in BackupStorageLocation
|
|
@ -1 +0,0 @@
|
|||
GCP: add support for specifying a Cloud KMS key name to use for encrypting backups in a storage location.
|
|
@ -1 +0,0 @@
|
|||
backup sync controller: stop using `metadata/revision` file, do a full diff of bucket contents vs. cluster contents each sync interval
|
|
@ -1 +0,0 @@
|
|||
Add LD_LIBRARY_PATH (=/plugins) to the env variables of the Velero deployment.
|
|
@ -1 +0,0 @@
|
|||
Azure: add support for cross-subscription backups
|
|
@ -1 +0,0 @@
|
|||
restic: only backup read-write-many PVCs at most once, even if they're annotated for backup from multiple pods.
|
|
@ -1 +0,0 @@
|
|||
adds structural schema to Velero CRDs created on Velero install, enabling validation of Velero API fields
|
|
@ -1 +0,0 @@
|
|||
Add check to update resource field during backupItem
|
|
@ -1 +0,0 @@
|
|||
bug fix: during restore, check item's original namespace, not the remapped one, for inclusion/exclusion
|
|
@ -1 +0,0 @@
|
|||
Add a new required --plugins flag for velero install command. --plugins takes a list of container images to add as initcontainers.
|
|
@ -1 +0,0 @@
|
|||
Add --no-default-backup-location flag to velero install
|
|
@ -1 +0,0 @@
|
|||
Allow backup storage locations to specify backup sync period or toggle off sync
|
|
@ -1 +0,0 @@
|
|||
Azure: add support for Azure China/German clouds
|
|
@ -1 +0,0 @@
|
|||
Restore Documentation: updated the restore documentation to clarify the implications of removing a restore object.
|
|
@ -1 +0,0 @@
|
|||
Restore action for cluster/namespace role bindings
|
|
@ -1 +0,0 @@
|
|||
Remove cloud provider code
|
|
@ -1 +0,0 @@
|
|||
add `--allow-partially-failed` flag to `velero restore create` for use with `--from-schedule` to allow partially-failed backups to be restored
|
|
@ -1 +0,0 @@
|
|||
bug fix: don't try to restore pod volume backups that don't have a snapshot ID
|
|
@ -1 +0,0 @@
|
|||
Ensure object store plugin processes are cleaned up after restore and after BSL validation during server start up
|
|
@ -56,10 +56,10 @@ defaults:
|
|||
gh: https://github.com/vmware-tanzu/velero/tree/master
|
||||
layout: "docs"
|
||||
- scope:
|
||||
path: docs/v1.2.0-beta.1
|
||||
path: docs/v1.2.0
|
||||
values:
|
||||
version: v1.2.0-beta.1
|
||||
gh: https://github.com/vmware-tanzu/velero/tree/v1.2.0-beta.1
|
||||
version: v1.2.0
|
||||
gh: https://github.com/vmware-tanzu/velero/tree/v1.2.0
|
||||
layout: "docs"
|
||||
- scope:
|
||||
path: docs/v1.1.0
|
||||
|
@ -151,10 +151,10 @@ collections:
|
|||
- casestudies
|
||||
|
||||
versioning: true
|
||||
latest: v1.1.0
|
||||
latest: v1.2.0
|
||||
versions:
|
||||
- master
|
||||
- v1.2.0-beta.1
|
||||
- v1.2.0
|
||||
- v1.1.0
|
||||
- v1.0.0
|
||||
- v0.11.0
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
# that the navigation for older versions still work.
|
||||
|
||||
master: master-toc
|
||||
v1.2.0-beta.1: v1-2-0-beta-1-toc
|
||||
v1.2.0: v1-2-0-toc
|
||||
v1.1.0: v1-1-0-toc
|
||||
v1.0.0: v1-0-0-toc
|
||||
v0.11.0: v011-toc
|
||||
|
|
|
@ -11,8 +11,8 @@ toc:
|
|||
subfolderitems:
|
||||
- page: Overview
|
||||
url: /install-overview
|
||||
- page: Upgrade to 1.1
|
||||
url: /upgrade-to-1.1
|
||||
- page: Upgrade to 1.2
|
||||
url: /upgrade-to-1.2
|
||||
- page: Supported providers
|
||||
url: /supported-providers
|
||||
- page: Evaluation install
|
||||
|
@ -63,6 +63,10 @@ toc:
|
|||
url: /build-from-source
|
||||
- page: Run locally
|
||||
url: /run-locally
|
||||
- page: Code standards
|
||||
url: /code-standards
|
||||
- page: Website guidelines
|
||||
url: /website-guidelines
|
||||
- title: More information
|
||||
subfolderitems:
|
||||
- page: Backup file format
|
|
@ -0,0 +1,81 @@
|
|||
---
|
||||
title: Velero 1.2 Sets Sail by Shifting Plugins Out of Tree, Adding a Structural Schema, and Sharpening Usability
|
||||
excerpt: With this release, we’ve focused on extracting in-tree cloud provider plugins into their own repositories, making further usability improvements to the restic integration, preparing for the general availability of Kubernetes custom resource definitions (CRDs) by adding a structural schema to our CRDs, and many other new features and usability improvements.
|
||||
author_name: Steve Kriss
|
||||
categories: ['velero','release']
|
||||
image: /img/posts/sailboat.jpg
|
||||
# Tag should match author to drive author pages
|
||||
tags: ['Velero Team', 'Steve Kriss']
|
||||
---
|
||||
Velero continues to evolve with the release of version 1.2. With this release, we’ve focused on extracting in-tree cloud provider plugins into their own repositories, making further usability improvements to the restic integration, preparing for the general availability of Kubernetes custom resource definitions (CRDs) by adding a structural schema to our CRDs, and many other new features and usability improvements.
|
||||
|
||||
Let’s take a look at the highlights for this release.
|
||||
|
||||
## Moving Cloud Provider Plugins Out of Tree
|
||||
|
||||
Velero has had built-in support for AWS, Microsoft Azure, and Google Cloud Platform (GCP) since day 1. When Velero moved to a plugin architecture for object store providers and volume snapshotters in version 0.6, the code for these three providers was converted to use the plugin interface provided by this new architecture, but the cloud provider code still remained inside the Velero codebase. This put the AWS, Azure, and GCP plugins in a different position compared with other providers’ plugins, since they automatically shipped with the Velero binary and could include documentation in-tree.
|
||||
|
||||
With version 1.2, we’ve extracted the AWS, Azure, and GCP plugins into their own repositories, one per provider. We now also publish one plugin image per provider. This change brings these providers to parity with other providers’ plugin implementations, reduces the size of the core Velero binary by not requiring each provider’s SDK to be included, and opens the door for the plugins to be maintained and released independently of core Velero.
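If you use one of these providers, you now add the plugin to your Velero installation explicitly. For example, for AWS (the Azure and GCP commands are analogous; see the upgrade documentation for the exact image names):

```bash
velero plugin add velero/velero-plugin-for-aws:v1.0.0
```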
|
||||
|
||||
## Restic Integration Improvements
|
||||
|
||||
We’ve continued to work on improving Velero’s restic integration. With this release, we’ve made the following enhancements:
|
||||
|
||||
- Restic backup and restore progress is now captured during execution and visible to the user through the `velero backup/restore describe --details` command. The details are updated every 10 seconds. This provides a new level of visibility into restic operations for users.
|
||||
- Restic backups of persistent volume claims (PVCs) now remain incremental across the rescheduling of a pod. Previously, if the pod using a PVC was rescheduled, the next restic backup would require a full rescan of the volume’s contents. This improvement potentially makes such backups significantly faster.
|
||||
- Read-write-many volumes are no longer backed up once for every pod using the volume, but instead just once per Velero backup. This improvement speeds up backups and prevents potential restore issues due to multiple copies of the backup being processed simultaneously.
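For example, you can check restic progress on an in-flight backup with the following (the backup name is illustrative):

```bash
velero backup describe nginx-backup --details
```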
|
||||
|
||||
|
||||
## Clone PVs When Cloning a Namespace
|
||||
|
||||
Before version 1.2, you could clone a Kubernetes namespace by backing it up and then restoring it to a different namespace in the same cluster by using the `--namespace-mappings` flag with the `velero restore create` command. However, in this scenario, Velero was unable to clone persistent volumes used by the namespace, leading to errors for users.
|
||||
|
||||
In version 1.2, Velero automatically detects when you are trying to clone an existing namespace and clones the persistent volumes used by the namespace as well. This doesn’t require the user to specify any additional flags for the `velero restore create` command. This change lets you fully clone namespaces, including their persistent storage, within a cluster.
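As a sketch, cloning a hypothetical namespace `web` to `web-copy` looks like this (the backup and restore names are illustrative):

```bash
velero backup create web-backup --include-namespaces web
velero restore create web-copy \
    --from-backup web-backup \
    --namespace-mappings web:web-copy
```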
|
||||
|
||||
## Improved Server-Side Encryption Support
|
||||
|
||||
To help you secure your important backup data, we’ve added support for more forms of server-side encryption of backup data on both AWS and GCP. Specifically:
|
||||
|
||||
- On AWS, Velero now supports Amazon S3-managed encryption keys (SSE-S3), which uses AES256 encryption, by specifying `serverSideEncryption: AES256` in a backup storage location’s config.
|
||||
- On GCP, Velero now supports using a specific Cloud KMS key for server-side encryption by specifying `kmsKeyName: <key name>` in a backup storage location’s config.
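As a sketch, the relevant `config` entries in a `BackupStorageLocation` spec look like the following (bucket names and the KMS key are illustrative):

```yaml
# AWS: S3-managed encryption keys (SSE-S3)
spec:
  provider: aws
  objectStorage:
    bucket: my-velero-bucket
  config:
    serverSideEncryption: AES256
---
# GCP: a specific Cloud KMS key
spec:
  provider: gcp
  objectStorage:
    bucket: my-velero-bucket
  config:
    kmsKeyName: projects/P/locations/L/keyRings/R/cryptoKeys/K
```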
|
||||
|
||||
## CRD Structural Schema
|
||||
|
||||
In Kubernetes 1.16, custom resource definitions (CRDs) reached general availability. Structural schemas are required for CRDs created in the `apiextensions.k8s.io/v1` API group. Velero now defines a structural schema for each of its CRDs and automatically applies it when the user runs the `velero install` command. The structural schemas enable the user to get quicker feedback when their backup, restore, or schedule request is invalid, so they can immediately remediate their request.
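If you're upgrading an existing installation, you can apply the new schemas with the command from the upgrade documentation:

```bash
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```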
|
||||
|
||||
## And More
|
||||
|
||||
There are too many new features and improvements to cover in this short blog post. For full details on all of the changes, see the [full changelog](https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.2.md).
|
||||
|
||||
## Community Contributors
|
||||
|
||||
Velero’s user and contributor community continues to grow, and it is a huge part of this project’s success. This release includes many community contributions, including from (GitHub handles listed):
|
||||
|
||||
- [@betta1](https://github.com/betta1)
|
||||
- [@lintongj](https://github.com/lintongj)
|
||||
- [@spiffcs](https://github.com/spiffcs)
|
||||
- [@s12chung](https://github.com/s12chung)
|
||||
- [@boxcee](https://github.com/boxcee)
|
||||
- [@andyzhangx](https://github.com/andyzhangx)
|
||||
- [@sseago](https://github.com/sseago)
|
||||
- [@Frank51](https://github.com/Frank51)
|
||||
- [@alexander-demichev](https://github.com/alexander-demichev)
|
||||
|
||||
**Thank you for helping improve the Velero project!**
|
||||
|
||||
## Catch us at KubeCon
|
||||
|
||||
If you’re going to KubeCon + CloudNativeCon North America 2019 in San Diego, come hang out with us! The Velero maintainers will all be attending and would love to chat with you. We’ll be having a Velero community lunch on Wednesday, November 20, at 12:30 PM in the convention center. Come to the VMware booth or look for the Velero signs in the lunch area.
|
||||
|
||||
Check out these talks related to Velero:
|
||||
|
||||
- [CSI Volume Snapshots: On the Way to Faster and Better Backups](https://sched.co/UaXR), by Adnan Abdulhussein and Nolan Brubaker, both from VMware (and core maintainers)
|
||||
- [How to Backup and Restore Your Kubernetes Cluster](https://sched.co/UaZN), by Annette Clewett and Dylan Murray, both from Red Hat (Dylan is a Velero contributor)
|
||||
|
||||
## Join the Movement – Contribute!
|
||||
|
||||
Velero is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our [online community meetings every Tuesday](https://velero.io/community/) and catch up with past meetings on YouTube on the [Velero Community Meetings playlist](https://www.youtube.com/watch?v=nc48ocI-6go&list=PL7bmigfV0EqQRysvqvqOtRNk4L5S7uqwM).
|
||||
|
||||
You can always find the latest project information at [velero.io](https://velero.io). Look for issues on GitHub marked [Good first issue](https://github.com/vmware-tanzu/velero/issues?q=is:open+is:issue+label:%22Good+first+issue%22) or [Help wanted](https://github.com/vmware-tanzu/velero/issues?utf8=✓&q=is:open+is:issue+label:%22Help+wanted%22+) if you want to roll up your sleeves and write some code with us.
|
||||
|
||||
You can chat with us on [Kubernetes Slack in the #velero channel](https://kubernetes.slack.com/messages/C6VCGP4MT) and follow us on Twitter at [@projectvelero](https://twitter.com/projectvelero).
|
|
@ -6,91 +6,93 @@
|
|||
_Note: if you're upgrading from v1.0, follow the [upgrading to v1.1][2] instructions first._
|
||||
|
||||
## Instructions
|
||||
|
||||
1. Install the Velero v1.2 command-line interface (CLI) by following the [instructions here][3].
|
||||
|
||||
Verify that you've properly installed it by running:
|
||||
Verify that you've properly installed it by running:
|
||||
|
||||
```bash
|
||||
velero version --client-only
|
||||
```
|
||||
```bash
|
||||
velero version --client-only
|
||||
```
|
||||
|
||||
You should see the following output:
|
||||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.2.0
|
||||
Git commit: <git SHA>
|
||||
```
|
||||
You should see the following output:
|
||||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.2.0
|
||||
Git commit: <git SHA>
|
||||
```
|
||||
|
||||
1. Scale down the existing Velero deployment:
|
||||
|
||||
```bash
|
||||
kubectl scale deployment/velero \
|
||||
--namespace velero \
|
||||
--replicas 0
|
||||
```
|
||||
```bash
|
||||
kubectl scale deployment/velero \
|
||||
--namespace velero \
|
||||
--replicas 0
|
||||
```
|
||||
|
||||
1. Update the container image used by the Velero deployment and, optionally, the restic daemon set:
|
||||
|
||||
```bash
|
||||
kubectl set image deployment/velero \
|
||||
velero=velero/velero:v1.2.0 \
|
||||
--namespace velero
|
||||
```bash
|
||||
kubectl set image deployment/velero \
|
||||
velero=velero/velero:v1.2.0 \
|
||||
--namespace velero
|
||||
|
||||
# optional, if using the restic daemon set
|
||||
kubectl set image daemonset/restic \
|
||||
restic=velero/velero:v1.2.0 \
|
||||
--namespace velero
|
||||
```
|
||||
# optional, if using the restic daemon set
|
||||
kubectl set image daemonset/restic \
|
||||
restic=velero/velero:v1.2.0 \
|
||||
--namespace velero
|
||||
```
|
||||
|
||||
1. If using AWS, Azure, or GCP, add the respective plugin to your Velero deployment:
|
||||
|
||||
For AWS:
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-aws:v1.0.0
|
||||
```
|
||||
For AWS:
|
||||
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-aws:v1.0.0
|
||||
```
|
||||
|
||||
For Azure:
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0
|
||||
```
|
||||
For Azure:
|
||||
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0
|
||||
```
|
||||
|
||||
For GCP:
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-gcp:v1.0.0
|
||||
```
|
||||
For GCP:
|
||||
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-gcp:v1.0.0
|
||||
```
|
||||
|
||||
1. Update the Velero custom resource definitions (CRDs) to include the structural schemas:
|
||||
|
||||
```bash
|
||||
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
|
||||
```
|
||||
```bash
|
||||
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
|
||||
```
|
||||
|
||||
1. Scale back up the existing Velero deployment:
|
||||
|
||||
```bash
|
||||
kubectl scale deployment/velero \
|
||||
--namespace velero \
|
||||
--replicas 1
|
||||
```
|
||||
```bash
|
||||
kubectl scale deployment/velero \
|
||||
--namespace velero \
|
||||
--replicas 1
|
||||
```
|
||||
|
||||
1. Confirm that the deployment is up and running with the correct version by running:
|
||||
|
||||
```bash
|
||||
velero version
|
||||
```
|
||||
```bash
|
||||
velero version
|
||||
```
|
||||
|
||||
You should see the following output:
|
||||
You should see the following output:
|
||||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.2.0
|
||||
Git commit: <git SHA>
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.2.0
|
||||
Git commit: <git SHA>
|
||||
|
||||
Server:
|
||||
Version: v1.2.0
|
||||
```
|
||||
Server:
|
||||
Version: v1.2.0
|
||||
```
|
||||
|
||||
|
||||
[0]: https://github.com/vmware-tanzu/velero/releases/tag/v1.1.0
|
||||
|
|
|
@ -1,82 +0,0 @@
|
|||
# Velero Backup Storage Locations
|
||||
|
||||
## Backup Storage Location
|
||||
|
||||
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
|
||||
|
||||
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
|
||||
|
||||
A sample YAML `BackupStorageLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: BackupStorageLocation
|
||||
metadata:
|
||||
name: default
|
||||
namespace: velero
|
||||
spec:
|
||||
backupSyncPeriod: 2m0s
|
||||
provider: aws
|
||||
objectStorage:
|
||||
bucket: myBucket
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the backups. |
|
||||
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
|
||||
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
|
||||
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
|
||||
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for backup storage. |
|
||||
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
|
||||
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
|
||||
|
||||
#### AWS
|
||||
|
||||
**(Or other S3-compatible storage)**
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Queried from the AWS S3 API if not provided. |
|
||||
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
|
||||
| `s3Url` | string | Required field for non-AWS-hosted storage | *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Velero can already generate it from `region` and `bucket`. This field is primarily for local storage services like Minio. |
|
||||
| `publicUrl` | string | Empty | *Example*: https://minio.mycluster.com<br><br>If specified, use this instead of `s3Url` when generating download URLs (e.g., for logs). This field is primarily for local storage services like Minio.|
|
||||
| `serverSideEncryption` | string | Empty | The name of the server-side encryption algorithm to use for uploading objects, e.g. `AES256`. If using SSE-KMS and `kmsKeyId` is specified, this field will automatically be set to `aws:kms` so does not need to be specified by the user. |
|
||||
| `kmsKeyId` | string | Empty | *Example*: "502b409c-4da1-419f-a16e-eif453b3i49f" or "alias/`<KMS-Key-Alias-Name>`"<br><br>Specify an [AWS KMS key][10] id or alias to enable encryption of the backups stored in S3. Only works with AWS S3 and may require explicitly granting key usage rights.|
|
||||
| `signatureVersion` | string | `"4"` | Version of the signature algorithm used to create signed URLs that are used by the velero CLI to download backups or fetch logs. Possible versions are "1" and "4". Usually the default version 4 is correct, but some S3-compatible providers like Quobyte only support version 1. |
|
||||
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
|
||||
| `insecureSkipTLSVerify` | bool | `false` | Set this to `true` if you do not want to verify the TLS certificate when connecting to the object store--like self-signed certs in Minio. This is susceptible to man-in-the-middle attacks and is not recommended for production. |
|
||||
|
||||
#### Azure
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `resourceGroup` | string | Required Field | Name of the resource group containing the storage account for this backup storage location. |
|
||||
| `storageAccount` | string | Required Field | Name of the storage account for this backup storage location. |
|
||||
| `subscriptionId` | string | Optional Field | ID of the subscription for this backup storage location. |
|
||||
|
||||
#### GCP
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `kmsKeyName` | string | Empty | Name of the Cloud KMS key to use to encrypt backups stored in this location, in the form `projects/P/locations/L/keyRings/R/cryptoKeys/K`. See [customer-managed Cloud KMS keys](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys) for details. |
|
||||
| `serviceAccount` | string | Empty | Name of the GCP service account to use for this backup storage location. Specify the service account here if you want to use workload identity instead of providing the key file. |
|
||||
|
||||
[0]: #aws
|
||||
[1]: #gcp
|
||||
[2]: #azure
|
||||
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
|
||||
[10]: http://docs.aws.amazon.com/kms/latest/developerguide/overview.html
|
|
@ -1,70 +0,0 @@
|
|||
# Velero Volume Snapshot Location
|
||||
|
||||
## Volume Snapshot Location
|
||||
|
||||
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
|
||||
|
||||
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple `VolumeSnapshotLocation` objects per provider, although you can only select one location per provider at backup time.
|
||||
|
||||
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
|
||||
|
||||
A sample YAML `VolumeSnapshotLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: VolumeSnapshotLocation
|
||||
metadata:
|
||||
name: aws-default
|
||||
namespace: velero
|
||||
spec:
|
||||
provider: aws
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the volume. |
|
||||
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for volume snapshots. |
|
||||
|
||||
#### AWS
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Required. |
|
||||
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
|
||||
|
||||
#### Azure
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `apiTimeout` | metav1.Duration | 2m0s | How long to wait for an Azure API request to complete before timeout. |
|
||||
| `resourceGroup` | string | Optional | The name of the resource group where volume snapshots should be stored, if different from the cluster's resource group. |
|
||||
| `subscriptionId` | string | Optional | The ID of the subscription where volume snapshots should be stored, if different from the cluster's subscription. Requires `resourceGroup` to be set. |
|
||||
|
||||
#### GCP
|
||||
|
||||
##### config
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `snapshotLocation` | string | Empty | *Example*: "us-central1"<br><br>See [GCP documentation][4] for the full list.<br><br>If not specified the snapshots are stored in the [default location][5]. |
|
||||
| `project` | string | Empty | The project ID where snapshots should be stored, if different than the project that your IAM account is in. Optional. |
|
||||
|
||||
[0]: #aws
|
||||
[1]: #gcp
|
||||
[2]: #azure
|
||||
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
|
||||
[4]: https://cloud.google.com/storage/docs/locations#available_locations
|
||||
[5]: https://cloud.google.com/compute/docs/disks/create-snapshots#default_location
|
|
@ -1,25 +0,0 @@
|
|||
# Upgrading to Velero 1.1
|
||||
|
||||
## Prerequisites
|
||||
- [Velero v1.0][1] installed.
|
||||
|
||||
Velero v1.1 only requires user intervention if Velero is running in a namespace other than `velero`.
|
||||
|
||||
## Instructions
|
||||
|
||||
### Adding VELERO_NAMESPACE environment variable to the deployment
|
||||
|
||||
Previous versions of Velero's server detected the namespace in which it was running by inspecting the container's filesystem.
|
||||
With v1.1, this is no longer the case, and the server must be made aware of the namespace it is running in with the `VELERO_NAMESPACE` environment variable.
|
||||
|
||||
`velero install` automatically writes this for new deployments, but existing installations will need to add the environment variable before upgrading.
|
||||
|
||||
You can use the following command to patch the deployment:
|
||||
|
||||
```bash
|
||||
kubectl patch deployment/velero -n <YOUR_NAMESPACE> \
|
||||
--type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/containers/0/env/0","value":{"name":"VELERO_NAMESPACE", "valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'
|
||||
```
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero/releases/tag/v1.0.0
|
|
@ -27,7 +27,7 @@ If you encounter issues, review the [troubleshooting docs][30], [file an issue][
|
|||
|
||||
## Contributing
|
||||
|
||||
If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our [Start contributing](https://velero.io/docs/v1.2.0-beta.1/start-contributing/) documentation for guidance on how to set up Velero for development.
|
||||
If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our [Start contributing](https://velero.io/docs/v1.2.0/start-contributing/) documentation for guidance on how to set up Velero for development.
|
||||
|
||||
## Changelog
|
||||
|
||||
|
@ -48,7 +48,7 @@ See [the list of releases][6] to find out about feature changes.
|
|||
[25]: https://kubernetes.slack.com/messages/velero
|
||||
|
||||
[28]: install-overview.md
|
||||
[29]: https://velero.io/docs/v1.2.0-beta.1/
|
||||
[29]: https://velero.io/docs/v1.2.0/
|
||||
[30]: troubleshooting.md
|
||||
|
||||
[100]: img/velero.png
|
|
@ -0,0 +1,44 @@
|
|||
# Velero Backup Storage Locations
|
||||
|
||||
## Backup Storage Location
|
||||
|
||||
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
|
||||
|
||||
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
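For example, to store a backup in a location other than the default (assuming a second location named `secondary` has been configured):

```bash
velero backup create my-backup --storage-location secondary
```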
|
||||
|
||||
A sample YAML `BackupStorageLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: BackupStorageLocation
|
||||
metadata:
|
||||
name: default
|
||||
namespace: velero
|
||||
spec:
|
||||
backupSyncPeriod: 2m0s
|
||||
provider: aws
|
||||
objectStorage:
|
||||
bucket: myBucket
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation][0] for the appropriate value to use. |
|
||||
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
|
||||
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
|
||||
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
|
||||
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation][0] for details. |
|
||||
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
|
||||
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
|
||||
|
||||
|
||||
[0]: ../supported-providers.md
|
|
@ -0,0 +1,38 @@
|
|||
# Velero Volume Snapshot Location
|
||||
|
||||
## Volume Snapshot Location
|
||||
|
||||
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
|
||||
|
||||
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple `VolumeSnapshotLocation` objects per provider, although you can only select one location per provider at backup time.
|
||||
|
||||
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
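For example, you can select a location at backup time with the `--volume-snapshot-locations` flag (using the `aws-default` location from the sample below):

```bash
velero backup create full-cluster-backup \
    --volume-snapshot-locations aws-default
```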
|
||||
|
||||
A sample YAML `VolumeSnapshotLocation` looks like the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: VolumeSnapshotLocation
|
||||
metadata:
|
||||
name: aws-default
|
||||
namespace: velero
|
||||
spec:
|
||||
provider: aws
|
||||
config:
|
||||
region: us-west-2
|
||||
profile: "default"
|
||||
```
|
||||
|
||||
### Parameter Reference
|
||||
|
||||
The configurable parameters are as follows:
|
||||
|
||||
#### Main config parameters
|
||||
|
||||
| Key | Type | Default | Meaning |
|
||||
| --- | --- | --- | --- |
|
||||
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation][0] for the appropriate value to use. |
|
||||
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation][0] for details. |
|
||||
|
||||
|
||||
[0]: ../supported-providers.md
|
|
@ -0,0 +1,123 @@
|
|||
# Code Standards
|
||||
|
||||
## Adding a changelog
|
||||
|
||||
Authors are expected to include a changelog file with their pull requests. The changelog file
|
||||
should be a new file created in the `changelogs/unreleased` folder. The file should follow the
|
||||
naming convention of `pr-username` and the contents of the file should be your text for the
|
||||
changelog.
|
||||
|
||||
velero/changelogs/unreleased <- folder
|
||||
000-username <- file
|
||||
|
||||
Add that to the PR.
|
||||
|
||||
## Code
|
||||
|
||||
- Log messages are capitalized.
|
||||
|
||||
- Error messages are kept lower-cased.
|
||||
|
||||
- Wrap/add a stack only to errors that are being directly returned from non-velero code, such as an API call to the Kubernetes server.
|
||||
|
||||
```go
|
||||
errors.WithStack(err)
|
||||
```
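A minimal Go sketch combining these conventions (the helper itself is illustrative, not Velero code):

```go
package example

import (
	"os"

	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// readManifest is a hypothetical helper showing the rules above.
func readManifest(path string) ([]byte, error) {
	logrus.Infof("Reading manifest from %s", path) // log message: capitalized

	data, err := os.ReadFile(path)
	if err != nil {
		// err comes from non-velero code, so wrap it with a stack;
		// any error text we add ourselves stays lower-cased
		return nil, errors.WithStack(err)
	}
	return data, nil
}
```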
|
||||
|
||||
- Prefer to use the utilities in the Kubernetes package [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets).
|
||||
|
||||
```go
|
||||
import "k8s.io/apimachinery/pkg/util/sets"
|
||||
```
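For instance, a hypothetical deduplication helper using those utilities:

```go
import "k8s.io/apimachinery/pkg/util/sets"

// uniqueNamespaces returns a sorted, deduplicated copy of namespaces,
// using sets.String instead of a hand-rolled map[string]struct{}.
func uniqueNamespaces(namespaces []string) []string {
	return sets.NewString(namespaces...).List()
}
```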
|
||||
|
||||
## Imports
|
||||
|
||||
For imports, we use the following convention:
|
||||
|
||||
<group><version><api | client | informer | ...>
|
||||
|
||||
Example:
|
||||
|
||||
import (
|
||||
corev1api "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
|
||||
corev1listers "k8s.io/client-go/listers/core/v1"
|
||||
|
||||
velerov1api ""github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
|
||||
velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
|
||||
)
|
||||
|
||||
## Mocks
|
||||
|
||||
We use a package to generate mocks for our interfaces.
|
||||
|
||||
Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.2.0/pkg/restic/mocks/restorer.go
|
||||
|
||||
Run:
|
||||
|
||||
```bash
|
||||
go get github.com/vektra/mockery/.../
|
||||
cd pkg/restic
|
||||
mockery -name=Restorer
|
||||
```
|
||||
|
||||
You might need to run `make update` to update the imports.
|
||||
|
||||
## DCO Sign off
|
||||
|
||||
All authors to the project retain copyright to their work. However, to ensure
|
||||
that they are only submitting work that they have rights to, we are requiring
|
||||
everyone to acknowledge this by signing their work.
|
||||
|
||||
Any copyright notices in this repo should specify the authors as "the Velero contributors".
|
||||
|
||||
To sign your work, just add a line like this at the end of your commit message:
|
||||
|
||||
```
|
||||
Signed-off-by: Joe Beda <joe@heptio.com>
|
||||
```
|
||||
|
||||
This can easily be done with the `--signoff` option to `git commit`.
|
||||
|
||||
By doing this you state that you can certify the following (from https://developercertificate.org/):
|
||||
|
||||
```
|
||||
Developer Certificate of Origin
|
||||
Version 1.1
|
||||
|
||||
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
|
||||
1 Letterman Drive
|
||||
Suite D4700
|
||||
San Francisco, CA, 94129
|
||||
|
||||
Everyone is permitted to copy and distribute verbatim copies of this
|
||||
license document, but changing it is not allowed.
|
||||
|
||||
|
||||
Developer's Certificate of Origin 1.1
|
||||
|
||||
By making a contribution to this project, I certify that:
|
||||
|
||||
(a) The contribution was created in whole or in part by me and I
|
||||
have the right to submit it under the open source license
|
||||
indicated in the file; or
|
||||
|
||||
(b) The contribution is based upon previous work that, to the best
|
||||
of my knowledge, is covered under an appropriate open source
|
||||
license and I have the right under that license to submit that
|
||||
work with modifications, whether created in whole or in part
|
||||
by me, under the same open source license (unless I am
|
||||
permitted to submit under a different license), as indicated
|
||||
in the file; or
|
||||
|
||||
(c) The contribution was provided directly to me by some other
|
||||
person who certified (a), (b) or (c) and I have not modified
|
||||
it.
|
||||
|
||||
(d) I understand and agree that this project and the contribution
|
||||
are public and that a record of the contribution (including all
|
||||
personal information I submit with it, including my sign-off) is
|
||||
maintained indefinitely and may be redistributed consistent with
|
||||
this project or the open source license(s) involved.
|
||||
```
|
|
@ -92,6 +92,5 @@ Uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace with your `S
|
|||
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
|
||||
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
|
||||
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
|
||||
[6]: api-types/backupstoragelocation.md#aws
|
||||
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
|
||||
[15]: install-overview.md#velero-resource-requirements
|
|
@ -241,5 +241,5 @@ After creating the Velero server in your cluster, try this example:
|
|||
|
||||
## Additional Reading
|
||||
|
||||
* [Official Velero Documentation](https://velero.io/docs/v1.2.0-beta.1/)
|
||||
* [Official Velero Documentation](https://velero.io/docs/v1.2.0/)
|
||||
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)
|
|
@ -87,5 +87,5 @@ Once parsed into a `[]string`, the features can then be registered using the `Ne
|
|||
Velero adds the `LD_LIBRARY_PATH` into the list of environment variables to provide the convenience for plugins that requires C libraries/extensions in the runtime.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero-plugin-example
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/plugin/logger.go
|
||||
[3]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/restore/restic_restore_action.go
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0/pkg/plugin/logger.go
|
||||
[3]: https://github.com/vmware-tanzu/velero/blob/v1.2.0/pkg/restore/restic_restore_action.go
|
|
@ -22,7 +22,7 @@ Examples of cases where Velero is useful:
|
|||
|
||||
Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the
|
||||
pod so that it can be scheduled onto a new node. You can see some more examples of the differences
|
||||
in [pod_action.go](https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/restore/pod_action.go)
|
||||
in [pod_action.go](https://github.com/vmware-tanzu/velero/blob/v1.2.0/pkg/restore/pod_action.go)
|
||||
|
||||
## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?
|
||||
|
|
@ -77,5 +77,5 @@ velero backup logs nginx-hook-test | grep hookCommand
|
|||
|
||||
|
||||
[1]: api-types/backup.md
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/examples/nginx-app/with-pv.yaml
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0/examples/nginx-app/with-pv.yaml
|
||||
[3]: cloud-common.md
|
|
@ -160,5 +160,5 @@ velero backup create full-cluster-backup
|
|||
|
||||
[1]: api-types/backupstoragelocation.md
|
||||
[2]: api-types/volumesnapshotlocation.md
|
||||
[3]: api-types/volumesnapshotlocation.md#azure
|
||||
[4]: api-types/backupstoragelocation.md#azure
|
||||
[3]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/volumesnapshotlocation.md
|
||||
[4]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/backupstoragelocation.md
|
|
@ -1,4 +1,4 @@
|
|||
# Restore Reference
|
||||
# Restore Reference
|
||||
|
||||
## Restoring Into a Different Namespace
|
||||
|
||||
|
@ -9,6 +9,29 @@ velero restore create RESTORE_NAME \
|
|||
--from-backup BACKUP_NAME \
|
||||
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
|
||||
```
|
||||
## What happens when a user removes restore objects
|
||||
A **restore** object represents the restore operation. There are two types of deletion for restore objects:
|
||||
### 1. Deleting with **`velero restore delete`**
|
||||
This command will delete the custom resource representing the restore, along with its individual log and results files. It will not, however, delete any objects in your cluster that were created by the restore.
|
||||
### 2. Deleting with **`kubectl -n velero delete restore`**
|
||||
This command will delete the custom resource representing the restore, but will not delete log/results files from object storage, or any objects that were created during the restore in your cluster.
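For example (the restore name is illustrative):

```bash
# Deletes the restore CR and its log/results files from object storage
velero restore delete my-restore

# Deletes only the restore CR; logs/results stay in object storage
kubectl -n velero delete restore my-restore
```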
|
||||
|
||||
## Restore command-line options
|
||||
To see all commands for restores, run `velero restore --help`.
|
||||
To see all options associated with a specific command, provide the `--help` flag to that command. For example, **`velero restore create --help`** shows all options associated with the **create** command.
|
||||
|
||||
### To list all restore options, use **`velero restore --help`**
|
||||
|
||||
```
Usage:
|
||||
velero restore [command]
|
||||
|
||||
Available Commands:
|
||||
create Create a restore
|
||||
delete Delete restores
|
||||
describe Describe restores
|
||||
get Get restores
|
||||
logs Get restore logs
|
||||
```
|
||||
|
||||
## Changing PV/PVC Storage Classes
|
||||
|
|
@ -17,5 +17,5 @@ Please browse our list of resources, including a playlist of past online communi
|
|||
|
||||
If you are ready to jump in and test, add code, or help with documentation, please use the navigation on the left under `Contribute`.
|
||||
|
||||
[1]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/CODE_OF_CONDUCT.md
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/CONTRIBUTING.md
|
||||
[1]: https://github.com/vmware-tanzu/velero/blob/v1.2.0/CODE_OF_CONDUCT.md
|
||||
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0/CONTRIBUTING.md
|
|
@ -4,20 +4,21 @@ Velero supports a variety of storage providers for different backup and snapshot
|
|||
|
||||
## Velero supported providers
|
||||
|
||||
| Provider | Object Store | Volume Snapshotter | Plugin |
|
||||
|----------------------------|---------------------|------------------------------|---------------------------|
|
||||
| [AWS S3][7] | AWS S3 | AWS EBS | [Velero plugin AWS][8] |
|
||||
| [Azure Blob Storage][9] | Azure Blob Storage | Azure Managed Disks | [Velero plugin Azure][10] |
|
||||
| [Google Cloud Storage][11] | Google Cloud Storage| Google Compute Engine Disks | [Velero plugin GCP][12] |
|
||||
| Provider | Object Store | Volume Snapshotter | Plugin Documentation |
|
||||
|-----------------------------------|---------------------|------------------------------|-----------------------------------------|
|
||||
| [Amazon Web Services (AWS)][7] | AWS S3 | AWS EBS | [Velero plugin for AWS][8] |
|
||||
| [Google Cloud Platform (GCP)][11] | Google Cloud Storage| Google Compute Engine Disks | [Velero plugin for GCP][12] |
|
||||
| [Microsoft Azure][9] | Azure Blob Storage | Azure Managed Disks | [Velero plugin for Microsoft Azure][10] |
|
||||
|
||||
|
||||
Contact: [Slack][28], [GitHub Issue][29]
|
||||
|
||||
## Community supported providers
|
||||
|
||||
| Provider | Object Store | Volume Snapshotter | Plugin | Contact |
|
||||
| Provider | Object Store | Volume Snapshotter | Plugin Documentation | Contact |
|
||||
|---------------------------|------------------------------|------------------------------------|------------------------|---------------------------------|
|
||||
| [Portworx][31] | 🚫 | Portworx Volume | [Portworx][32] | [Slack][33], [GitHub Issue][34] |
|
||||
| [DigitalOcean][15] | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | [StackPointCloud][16] | |
|
||||
| [Portworx][31] | 🚫 | Portworx Volume | [Portworx][32] | [Slack][33], [GitHub Issue][34] |
|
||||
| [DigitalOcean][15] | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | [StackPointCloud][16] | |
|
||||
| [OpenEBS][17] | 🚫 | OpenEBS CStor Volume | [OpenEBS][18] | [Slack][19], [GitHub Issue][20] |
|
||||
| [AlibabaCloud][21] | 🚫 | Alibaba Cloud | [AlibabaCloud][22] | [GitHub Issue][23] |
|
||||
| [Hewlett Packard][24] | 🚫 | HPE Storage | [Hewlett Packard][25] | [Slack][26], [GitHub Issue][27] |
|
||||
|
@ -48,12 +49,12 @@ In the case you want to take volume snapshots but didn't find a plugin for your
|
|||
[3]: contributions/minio.md
|
||||
[4]: https://github.com/StackPointCloud/ark-plugin-digitalocean
|
||||
[5]: http://www.noobaa.com/
|
||||
[6]: api-types/backupstoragelocation.md#aws
|
||||
[7]: https://aws.amazon.com/s3/
|
||||
[6]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/master/backupstoragelocation.md
|
||||
[7]: https://aws.amazon.com
|
||||
[8]: https://github.com/vmware-tanzu/velero-plugin-for-aws
|
||||
[9]: https://azure.microsoft.com/en-us/services/storage/blobs
|
||||
[9]: https://azure.com
|
||||
[10]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure
|
||||
[11]: https://cloud.google.com/storage/
|
||||
[11]: https://cloud.google.com
|
||||
[12]: https://github.com/vmware-tanzu/velero-plugin-for-gcp
|
||||
[15]: https://www.digitalocean.com/
|
||||
[16]: https://github.com/StackPointCloud/ark-plugin-digitalocean
|
|
@ -0,0 +1,101 @@
|
|||
# Upgrading to Velero 1.2
|
||||
|
||||
## Prerequisites
|
||||
- Velero [v1.1][0] or [v1.0][1] installed.
|
||||
|
||||
_Note: if you're upgrading from v1.0, follow the [upgrading to v1.1][2] instructions first._
|
||||
|
||||
## Instructions
|
||||
1. Install the Velero v1.2 command-line interface (CLI) by following the [instructions here][3].
|
||||
|
||||
Verify that you've properly installed it by running:
|
||||
|
||||
```bash
|
||||
velero version --client-only
|
||||
```
|
||||
|
||||
You should see the following output:
|
||||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.2.0
|
||||
Git commit: <git SHA>
|
||||
```
|
||||
|
||||
1. Scale down the existing Velero deployment:
|
||||
|
||||
```bash
|
||||
kubectl scale deployment/velero \
|
||||
--namespace velero \
|
||||
--replicas 0
|
||||
```
|
||||
|
||||
1. Update the container image used by the Velero deployment and, optionally, the restic daemon set:
|
||||
|
||||
```bash
|
||||
kubectl set image deployment/velero \
|
||||
velero=velero/velero:v1.2.0 \
|
||||
--namespace velero
|
||||
|
||||
# optional, if using the restic daemon set
|
||||
kubectl set image daemonset/restic \
|
||||
restic=velero/velero:v1.2.0 \
|
||||
--namespace velero
|
||||
```
|
||||
|
||||
1. If using AWS, Azure, or GCP, add the respective plugin to your Velero deployment:
|
||||
|
||||
For AWS:
|
||||
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-aws:v1.0.0
|
||||
```
|
||||
|
||||
For Azure:
|
||||
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0
|
||||
```
|
||||
|
||||
For GCP:
|
||||
|
||||
```bash
|
||||
velero plugin add velero/velero-plugin-for-gcp:v1.0.0
|
||||
```
|
||||
|
||||
1. Update the Velero custom resource definitions (CRDs) to include the structural schemas:
|
||||
|
||||
```bash
|
||||
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
|
||||
```
|
||||
|
||||
1. Scale back up the existing Velero deployment:
|
||||
|
||||
```bash
|
||||
kubectl scale deployment/velero \
|
||||
--namespace velero \
|
||||
--replicas 1
|
||||
```
|
||||
|
||||
1. Confirm that the deployment is up and running with the correct version by running:
|
||||
|
||||
```bash
|
||||
velero version
|
||||
```
|
||||
|
||||
You should see the following output:
|
||||
|
||||
```bash
|
||||
Client:
|
||||
Version: v1.2.0
|
||||
Git commit: <git SHA>
|
||||
|
||||
Server:
|
||||
Version: v1.2.0
|
||||
```
|
||||
|
||||
|
||||
[0]: https://github.com/vmware-tanzu/velero/releases/tag/v1.1.0
|
||||
[1]: https://github.com/vmware-tanzu/velero/releases/tag/v1.0.0
|
||||
[2]: https://velero.io/docs/v1.1.0/upgrade-to-1.1/
|
||||
[3]: install-overview.md#install-the-cli
|
|
@ -0,0 +1,46 @@
|
|||
# Website Guidelines
|
||||
|
||||
## Running the website locally
|
||||
|
||||
When making changes to the website, please run the site locally before submitting a PR and manually verify your changes.
|
||||
|
||||
At the root of the project, run:
|
||||
|
||||
```bash
|
||||
make serve-docs
|
||||
```
|
||||
|
||||
This runs all the Ruby dependencies in a container.
|
||||
|
||||
Alternatively, to load the website quickly, run the following from the `velero/site/` directory:
|
||||
|
||||
```bash
|
||||
bundle exec jekyll serve --livereload --future
|
||||
```
|
||||
|
||||
For more information on how to run the website locally, please see our [jekyll documentation](https://github.com/vmware-tanzu/velero/blob/v1.2.0/site/README-JEKYLL.md).
|
||||
|
||||
## Adding a blog post
|
||||
|
||||
The `author_name` value must also be included in the tags field so the page that lists the author's posts will work properly (for example, https://velero.io/tags/carlisia%20campos/).
|
||||
|
||||
Note that the tags field can have multiple values.
|
||||
|
||||
Example:
|
||||
|
||||
```text
|
||||
author_name: Carlisia Campos
|
||||
tags: ['Carlisia Campos', "release", "how-to"]
|
||||
```
|
||||
|
||||
### Please add an image
|
||||
|
||||
If there is no image added to the header of the post, the default Velero logo will be used. This is fine, but not ideal.
|
||||
|
||||
If there's an image that can be used as the blog post icon, the image field must be set to:
|
||||
|
||||
```text
|
||||
image: /img/posts/<your_image_name.png>
|
||||
```
|
||||
|
||||
This image file must be added to the `/site/img/posts` folder.
|
|
@ -11,7 +11,7 @@ hero:
|
|||
content: Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
|
||||
cta_link1:
|
||||
text: Latest Release Information
|
||||
url: /announcing-velero-1.1/
|
||||
url: /blog/velero-1.2-sets-sail/
|
||||
cta_link2:
|
||||
text: Download Velero
|
||||
url: https://github.com/vmware-tanzu/velero/releases/latest
|
||||
|
|