replace v1.1.0-beta.1 docs with v1.1.0-beta.2 docs

Signed-off-by: Steve Kriss <krisss@vmware.com>
pull/1762/head
Steve Kriss 2019-08-13 13:32:32 -06:00
parent 3c8020e922
commit c80f679802
43 changed files with 45 additions and 162 deletions


@@ -56,10 +56,10 @@ defaults:
gh: https://github.com/heptio/velero/tree/master
layout: "docs"
- scope:
path: docs/v1.1.0-beta.1
path: docs/v1.1.0-beta.2
values:
version: v1.1.0-beta.1
gh: https://github.com/heptio/velero/tree/v1.1.0-beta.1
version: v1.1.0-beta.2
gh: https://github.com/heptio/velero/tree/v1.1.0-beta.2
layout: "docs"
- scope:
path: docs/v1.0.0
@@ -148,7 +148,7 @@ versioning: true
latest: v1.0.0
versions:
- master
- v1.1.0-beta.1
- v1.1.0-beta.2
- v1.0.0
- v0.11.0
- v0.10.0


@@ -3,7 +3,7 @@
# that the navigation for older versions still work.
master: master-toc
v1.1.0-beta.1: v1-1-0-beta-1-toc
v1.1.0-beta.2: v1-1-0-beta-2-toc
v1.0.0: v1-0-0-toc
v0.11.0: v011-toc
v0.10.0: v010-toc


@@ -13,6 +13,8 @@ toc:
subfolderitems:
- page: Overview
url: /install-overview
- page: Upgrade to 1.1
url: /upgrade-to-1.1
- page: Quick start with in-cluster MinIO
url: /get-started
- page: Run on AWS


@@ -1,144 +0,0 @@
# Upgrading to Velero 1.0
## Prerequisites
- Velero v0.11 installed. If you're not already on v0.11, see the [instructions for upgrading to v0.11][0]. **Upgrading directly from v0.10.x or earlier to v1.0 is not supported!**
- (Optional, but strongly recommended) Create a full copy of the object storage bucket(s) Velero is using. Part 1 of the upgrade procedure will modify the contents of the bucket, so we recommend creating a backup copy of it prior to upgrading.
## Instructions
### Part 1 - Rewrite Legacy Metadata
#### Overview
You need to replace legacy metadata in object storage with updated versions **for any backups that were originally taken with a version prior to v0.11 (i.e. when the project was named Ark)**. While Velero v0.11 is backwards-compatible with these legacy files, Velero v1.0 is not.
_If you're sure that you do not have any backups that were originally created prior to v0.11 (with Ark), you can proceed directly to Part 2._
We've added a CLI command to [Velero v0.11.1][1], `velero migrate-backups`, to help you with this. This command will:
- Replace `ark-backup.json` files in object storage with equivalent `velero-backup.json` files.
- Create `<backup-name>-volumesnapshots.json.gz` files in object storage if they don't already exist, containing snapshot metadata populated from the backups' `status.volumeBackups` field*.
_*backups created prior to v0.10 stored snapshot metadata in the `status.volumeBackups` field, but it has subsequently been replaced with the `<backup-name>-volumesnapshots.json.gz` file._
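The `<backup-name>-volumesnapshots.json.gz` files described above are simply gzip-compressed JSON. A minimal sketch of that format (the field names below are hypothetical placeholders for illustration, not Velero's exact schema):

```python
import gzip
import io
import json

# Hypothetical snapshot metadata -- placeholder fields, not Velero's schema.
snapshots = [{"volumeName": "pvc-1234", "snapshotID": "snap-abcd"}]

# Writing the metadata the way a *-volumesnapshots.json.gz file stores it:
# JSON-serialized, then gzipped.
buf = io.BytesIO()
with gzip.open(buf, "wt") as f:
    json.dump(snapshots, f)

# Decompressing recovers the original metadata.
buf.seek(0)
with gzip.open(buf, "rt") as f:
    restored = json.load(f)
assert restored == snapshots
```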
#### Instructions
1. Download the [v0.11.1 release tarball][1] for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
1. Scale down your existing Velero deployment:
```bash
kubectl -n velero scale deployment/velero --replicas 0
```
1. Fetch Velero's credentials for accessing your object storage bucket and store them locally for use by `velero migrate-backups`:
For AWS:
```bash
export AWS_SHARED_CREDENTIALS_FILE=./velero-migrate-backups-credentials
kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.cloud}" | base64 --decode > $AWS_SHARED_CREDENTIALS_FILE
```
For Azure:
```bash
export AZURE_SUBSCRIPTION_ID=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_SUBSCRIPTION_ID}" | base64 --decode)
export AZURE_TENANT_ID=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_TENANT_ID}" | base64 --decode)
export AZURE_CLIENT_ID=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_CLIENT_ID}" | base64 --decode)
export AZURE_CLIENT_SECRET=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_CLIENT_SECRET}" | base64 --decode)
export AZURE_RESOURCE_GROUP=$(kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.AZURE_RESOURCE_GROUP}" | base64 --decode)
```
For GCP:
```bash
export GOOGLE_APPLICATION_CREDENTIALS=./velero-migrate-backups-credentials
kubectl -n velero get secret cloud-credentials -o jsonpath="{.data.cloud}" | base64 --decode > $GOOGLE_APPLICATION_CREDENTIALS
```
1. List all of your backup storage locations:
```bash
velero backup-location get
```
1. For each backup storage location that you want to use with Velero 1.0, replace any legacy pre-v0.11 backup metadata with the equivalent current formats:
```bash
# - BACKUP_LOCATION_NAME is the name of a backup location from the previous step, whose
# backup metadata will be updated in object storage
# - SNAPSHOT_LOCATION_NAME is the name of the volume snapshot location that Velero should
# record volume snapshots as existing in (this is only relevant if you have backups that
# were originally taken with a pre-v0.10 Velero/Ark.)
velero migrate-backups \
--backup-location <BACKUP_LOCATION_NAME> \
--snapshot-location <SNAPSHOT_LOCATION_NAME>
```
1. Scale up your deployment:
```bash
kubectl -n velero scale deployment/velero --replicas 1
```
1. Remove the local `velero` credentials:
For AWS:
```bash
rm $AWS_SHARED_CREDENTIALS_FILE
unset AWS_SHARED_CREDENTIALS_FILE
```
For Azure:
```bash
unset AZURE_SUBSCRIPTION_ID
unset AZURE_TENANT_ID
unset AZURE_CLIENT_ID
unset AZURE_CLIENT_SECRET
unset AZURE_RESOURCE_GROUP
```
For GCP:
```bash
rm $GOOGLE_APPLICATION_CREDENTIALS
unset GOOGLE_APPLICATION_CREDENTIALS
```
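The credential-fetching pipelines in the steps above work because Kubernetes stores `Secret` data base64-encoded: `kubectl get secret ... -o jsonpath` prints the encoded value, and `base64 --decode` recovers the plaintext. A small Python illustration of that decoding step (placeholder values, not real credentials):

```python
import base64

# Sample credentials-file content -- placeholder values only.
plaintext = b"[default]\naws_access_key_id = AKIAEXAMPLE\n"

# A Secret's .data fields hold base64-encoded bytes; this is what
# `kubectl get secret ... -o jsonpath="{.data.cloud}"` prints.
encoded = base64.b64encode(plaintext).decode()

# The shell's `base64 --decode` corresponds to:
decoded = base64.b64decode(encoded)
assert decoded == plaintext
```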
### Part 2 - Upgrade Components to Velero 1.0
#### Overview
Once any legacy metadata has been rewritten, upgrading to v1.0 is a matter of replacing the `velero` client binary and updating the server images.
#### Instructions
1. Download the [v1.0 release tarball][2] for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
1. Move the `velero` binary from the Velero directory to somewhere in your PATH, replacing any existing pre-1.0 `velero` binaries.
1. Update the image for the Velero deployment and daemon set (if applicable):
```bash
kubectl -n velero set image deployment/velero velero=gcr.io/heptio-images/velero:v1.0.0
kubectl -n velero set image daemonset/restic restic=gcr.io/heptio-images/velero:v1.0.0
```
[0]: https://velero.io/docs/v0.11.0/migrating-to-velero
[1]: https://github.com/heptio/velero/releases/tag/v0.11.1
[2]: https://github.com/heptio/velero/releases/tag/v1.0.0


@@ -58,10 +58,10 @@ See [the list of releases][6] to find out about feature changes.
[2]: https://travis-ci.org/heptio/velero
[4]: https://github.com/heptio/velero/issues
[5]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/CONTRIBUTING.md
[5]: https://github.com/heptio/velero/blob/v1.1.0-beta.2/CONTRIBUTING.md
[6]: https://github.com/heptio/velero/releases
[8]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/CODE_OF_CONDUCT.md
[8]: https://github.com/heptio/velero/blob/v1.1.0-beta.2/CODE_OF_CONDUCT.md
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
@@ -74,7 +74,7 @@ See [the list of releases][6] to find out about feature changes.
[28]: install-overview.md
[29]: https://velero.io/docs/v1.1.0-beta.1/
[29]: https://velero.io/docs/v1.1.0-beta.2/
[30]: troubleshooting.md
[99]: support-matrix.md


@@ -238,7 +238,7 @@ If you need to add or update the vendored dependencies, see [Vendoring dependenc
[10]: #vendoring-dependencies
[11]: vendoring-dependencies.md
[12]: #test
[13]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/hack/generate-proto.sh
[13]: https://github.com/heptio/velero/blob/v1.1.0-beta.2/hack/generate-proto.sh
[14]: https://grpc.io/docs/quickstart/go.html#install-protocol-buffers-v3
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable


@@ -22,7 +22,7 @@ Examples of cases where Velero is useful:
Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the
pod so that it can be scheduled onto a new node. You can see some more examples of the differences
in [pod_action.go](https://github.com/heptio/velero/blob/v1.1.0-beta.1/pkg/restore/pod_action.go)
in [pod_action.go](https://github.com/heptio/velero/blob/v1.1.0-beta.2/pkg/restore/pod_action.go)
## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?


@@ -77,5 +77,5 @@ velero backup logs nginx-hook-test | grep hookCommand
[1]: api-types/backup.md
[2]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/examples/nginx-app/with-pv.yaml
[2]: https://github.com/heptio/velero/blob/v1.1.0-beta.2/examples/nginx-app/with-pv.yaml
[3]: cloud-common.md

(Two binary image files changed; sizes 33 KiB and 44 KiB, unchanged before and after.)

@@ -241,5 +241,5 @@ After creating the Velero server in your cluster, try this example:
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.1.0-beta.1/)
* [Official Velero Documentation](https://velero.io/docs/v1.1.0-beta.2/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)


@@ -78,5 +78,5 @@ for an example of this -- in particular, the `getPluginConfig(...)` function.
[1]: https://github.com/heptio/velero-plugin-example
[2]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/pkg/plugin/logger.go
[3]: https://github.com/heptio/velero/blob/v1.1.0-beta.1/pkg/restore/restic_restore_action.go
[2]: https://github.com/heptio/velero/blob/v1.1.0-beta.2/pkg/plugin/logger.go
[3]: https://github.com/heptio/velero/blob/v1.1.0-beta.2/pkg/restore/restic_restore_action.go


@@ -235,19 +235,19 @@ data:
image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
# "cpuRequest" sets the request.cpu value on the restic init containers during restore.
# If not set, it will default to "100m".
# If not set, it will default to "100m". A value of "0" is treated as unbounded.
cpuRequest: 200m
# "memRequest" sets the request.memory value on the restic init containers during restore.
# If not set, it will default to "128Mi".
# If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
memRequest: 128Mi
# "cpuLimit" sets the limit.cpu value on the restic init containers during restore.
# If not set, it will default to "100m".
# If not set, it will default to "100m". A value of "0" is treated as unbounded.
cpuLimit: 200m
# "memLimit" sets the limit.memory value on the restic init containers during restore.
# If not set, it will default to "128Mi".
# If not set, it will default to "128Mi". A value of "0" is treated as unbounded.
memLimit: 128Mi
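The values above use standard Kubernetes resource-quantity suffixes: `m` for milli-CPU and `Mi` for mebibytes. A tiny illustrative converter for just the two suffix forms shown here (not Velero's code; Kubernetes accepts many more suffixes):

```python
def parse_quantity(q: str) -> float:
    """Convert '200m' -> CPUs and '128Mi' -> bytes.

    Handles only the two suffixes used in the ConfigMap above.
    """
    if q.endswith("Mi"):
        return int(q[:-2]) * 1024 ** 2  # mebibytes -> bytes
    if q.endswith("m"):
        return int(q[:-1]) / 1000       # milli-CPU -> whole CPUs
    return float(q)                     # plain number, no suffix

assert parse_quantity("200m") == 0.2      # 200 milli-CPU = 0.2 CPUs
assert parse_quantity("128Mi") == 134217728  # 128 * 1024^2 bytes
```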


@@ -0,0 +1,25 @@
# Upgrading to Velero 1.1
## Prerequisites
- [Velero v1.0][1] installed.
Velero v1.1 only requires user intervention if Velero is running in a namespace other than `velero`.
## Instructions
### Adding VELERO_NAMESPACE environment variable to the deployment
Previous versions of Velero's server detected the namespace in which it was running by inspecting the container's filesystem.
With v1.1, this is no longer the case, and the server must be made aware of the namespace it is running in with the `VELERO_NAMESPACE` environment variable.
`velero install` automatically writes this for new deployments, but existing installations will need to add the environment variable before upgrading.
You can use the following command to patch the deployment:
```bash
kubectl patch deployment/velero -n <YOUR_NAMESPACE> \
--type='json' \
-p='[{"op":"add","path":"/spec/template/spec/containers/0/env/0","value":{"name":"VELERO_NAMESPACE", "valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'
```
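The `--type='json'` patch above is a JSON Patch (RFC 6902) `add` operation targeting index `0` of the first container's `env` array; `add` at an array index inserts the value there and shifts existing entries right. A minimal illustration of the effect on a simplified deployment document (pure Python, no cluster required):

```python
# Simplified deployment fragment -- only the path the patch touches.
deployment = {"spec": {"template": {"spec": {"containers": [
    {"name": "velero", "env": [{"name": "EXISTING", "value": "x"}]}
]}}}}

# The value the kubectl patch inserts: VELERO_NAMESPACE populated
# from the pod's own namespace via the downward API.
patch_value = {"name": "VELERO_NAMESPACE",
               "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}}

# JSON Patch "add" at /spec/template/spec/containers/0/env/0:
# insert at index 0, shifting existing env entries right.
env = deployment["spec"]["template"]["spec"]["containers"][0]["env"]
env.insert(0, patch_value)

assert env[0]["name"] == "VELERO_NAMESPACE"
assert env[1]["name"] == "EXISTING"
```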
[1]: https://github.com/heptio/velero/releases/tag/v1.0.0