Nolan Brubaker 4fc1c70028
v1.4.3 cherry-picks and changelogs (#3025)
* Ensure PVs and PVCs remain bound when doing a restore (#3007)

* Only remove the UID from a PV's claimRef

The UID is the only part of a claimRef that might prevent it from being
rebound correctly on a restore. The namespace and name within the
claimRef should be preserved in order to ensure that the PV is claimed
by the correct PVC on restore.
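
A minimal sketch of that idea, assuming the PV is handled as an
unstructured object the way the restore code processes resources; the
function name resetClaimRefUID is illustrative, not the exact code in
this commit:

```go
package restore

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// resetClaimRefUID clears only spec.claimRef.uid on an unstructured
// PersistentVolume. The claimRef's namespace and name are left intact so the
// restored PV can still be claimed by the intended PVC; the binding
// controller fills in the new UID once the claim is bound.
func resetClaimRefUID(pv *unstructured.Unstructured) {
	// Nothing to do if the PV carries no claimRef at all.
	if _, found, _ := unstructured.NestedMap(pv.Object, "spec", "claimRef"); !found {
		return
	}
	// Drop the stale UID recorded at backup time.
	unstructured.RemoveNestedField(pv.Object, "spec", "claimRef", "uid")
}
```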

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remap PVs claimRef.namespace on relevant restores

When remapping namespaces, any included PVs need their claimRef.namespace
updated to the new namespace name so that they are bound to the correct
PVC in the target namespace.
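
A rough sketch of that remapping step, under the same unstructured-object
assumption; remapClaimRefNamespace and the namespaceMapping parameter are
illustrative names, with the mapping taken from the restore's namespace
mapping:

```go
package restore

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// remapClaimRefNamespace points spec.claimRef.namespace at the restore's
// target namespace when the original namespace is being remapped, so the PV
// binds to the PVC in the new namespace rather than the old one.
// namespaceMapping maps original namespace names to their new names.
func remapClaimRefNamespace(pv *unstructured.Unstructured, namespaceMapping map[string]string) error {
	ns, found, err := unstructured.NestedString(pv.Object, "spec", "claimRef", "namespace")
	if err != nil || !found {
		return err
	}
	if newNS, ok := namespaceMapping[ns]; ok {
		return unstructured.SetNestedField(pv.Object, newNS, "spec", "claimRef", "namespace")
	}
	return nil
}
```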

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update tests and ensure claimRef namespace remaps

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove lowercased uid field from unstructured PV

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix issues that prevented PVs from being restored

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add changelog

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Dynamically reprovision volumes without snapshots

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update test for lower case uid field

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove stray debugging print statement

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix typo, remove extra code, add tests.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* fix: rename the PV if VolumeSnapshotter has modified the PV name (#2835)

When the VolumeSnapshotter sets the PV name via SetVolumeID and the PV is
not present in the cluster, Velero does not rename the PV. This leaves the
PVC in the Lost state, because the PVC still points to the old PV name
while the PV object has been renamed by the VolumeSnapshotter.
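
A hedged sketch of detecting the rename; trackRenamedPV and renamedPVs are
illustrative names, assuming SetVolumeID returns an updated unstructured PV
whose metadata.name may differ from the name recorded in the backup:

```go
package restore

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// trackRenamedPV compares the PV name recorded in the backup with the name on
// the object returned by the VolumeSnapshotter's SetVolumeID call. If the
// plugin renamed the PV, the old -> new mapping is remembered so PVCs whose
// spec.volumeName still references the old name can be pointed at the renamed
// PV instead of ending up Lost against a PV that no longer exists.
func trackRenamedPV(originalName string, updatedPV *unstructured.Unstructured, renamedPVs map[string]string) {
	if newName := updatedPV.GetName(); newName != "" && newName != originalName {
		renamedPVs[originalName] = newName
	}
}
```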

Signed-off-by: Pawan <pawan@mayadata.io>

* adding a test case for pv rename

Signed-off-by: Pawan <pawan@mayadata.io>
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* restore proper lowercase/plural CRD resource (#2949)

* restore proper lowercase/plural CRD resource

This commit restores the proper resource string
"customresourcedefinitions" for CRDs. The prior change to
"CustomResourceDefinition" was made because the resource string was also
being used to populate the CRD "Kind" field in
remap_crd_version_action.go -- there, the correct Kind string is now used
directly instead of being pulled from Resource.
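
A small sketch of the distinction, using apimachinery's
GroupVersionResource; the variable names and the v1beta1 group/version are
shown only as examples:

```go
package restore

import "k8s.io/apimachinery/pkg/runtime/schema"

// The Resource in a GroupVersionResource is the lowercase, plural REST path
// segment, while Kind is the CamelCase type name; the two are not
// interchangeable.
var crdGVR = schema.GroupVersionResource{
	Group:    "apiextensions.k8s.io",
	Version:  "v1beta1", // example version only
	Resource: "customresourcedefinitions",
}

// Where a Kind is needed, such as when filling in TypeMeta on a remapped CRD,
// use the Kind string directly rather than reusing the resource string.
const crdKind = "CustomResourceDefinition"
```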

Signed-off-by: Scott Seago <sseago@redhat.com>

* add changelog

Signed-off-by: Scott Seago <sseago@redhat.com>

* Update changelogs for v1.4.3

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Co-authored-by: Pawan Prakash Sharma <pawanprakash101@gmail.com>
Co-authored-by: Scott Seago <sseago@redhat.com>
2020-10-20 14:20:24 -07:00
.github Backport GitHub Actions Docker image building to 1.4 series (#2706) 2020-07-13 12:35:26 -07:00
changelogs v1.4.3 cherry-picks and changelogs (#3025) 2020-10-20 14:20:24 -07:00
cmd Updates for org move to vmware-tanzu (#1920) 2019-09-30 17:26:56 -04:00
design Fix validation on CRD yamls 2020-04-07 08:46:33 -07:00
examples Use more recent nginx in example 2019-12-17 16:50:36 +01:00
hack Backport GitHub Actions Docker image building to 1.4 series (#2706) 2020-07-13 12:35:26 -07:00
pkg v1.4.3 cherry-picks and changelogs (#3025) 2020-10-20 14:20:24 -07:00
site Backport GitHub Actions Docker image building to 1.4 series (#2706) 2020-07-13 12:35:26 -07:00
third_party/kubernetes/pkg/kubectl/cmd remove hardcoded svc, netpol mappings 2020-01-16 19:16:45 -07:00
.gitignore Add Tiltfile to gitignore 2020-04-16 18:38:26 -07:00
.goreleaser.yml Updates for org move to vmware-tanzu (#1920) 2019-09-30 17:26:56 -04:00
ADOPTERS.md Add Okteto to the list of adopters (#2445) 2020-04-20 08:38:51 -06:00
CHANGELOG.md v1.4.0 changelog and docs (#2577) 2020-05-26 13:27:26 -07:00
CODE_OF_CONDUCT.md Update Code of Conduct (#2229) 2020-01-29 09:48:49 -07:00
CONTRIBUTING.md Add code and website guidelines (#2032) 2019-11-05 11:23:47 -05:00
Dockerfile-velero update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-arm update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-arm64 update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-ppc64le update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-restic-restore-helper update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-restic-restore-helper-arm update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-restic-restore-helper-arm64 update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
Dockerfile-velero-restic-restore-helper-ppc64le update container base images from bionic to focal (#2471) 2020-04-30 15:13:20 -07:00
LICENSE Initial commit 2017-08-02 13:27:17 -04:00
Makefile v1.4.1 cherrypicks (#2704) 2020-07-13 11:02:42 -07:00
README.md Backport GitHub Actions Docker image building to 1.4 series (#2706) 2020-07-13 12:35:26 -07:00
SUPPORT.md change SUPPORT.md to point to community page 2019-12-18 13:09:13 -07:00
go.mod Update dependencies to latest version 2020-04-20 13:49:18 -04:00
go.sum Update dependencies to latest version 2020-04-20 13:49:18 -04:00
netlify.toml add velero.io site content, move docs to under site/docs (#1489) 2019-05-17 10:56:03 -07:00

README.md

Overview

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises. Velero lets you:

  • Take backups of your cluster and restore in case of loss.
  • Migrate cluster resources to other clusters.
  • Replicate your production cluster to development and testing clusters.

Velero consists of:

  • A server that runs on your cluster
  • A command-line client that runs locally

Documentation

The documentation provides a getting started guide and information about building from source, architecture, extending Velero, and more.

Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.

Troubleshooting

If you encounter issues, review the troubleshooting docs, file an issue, or talk to us on the #velero channel on the Kubernetes Slack server.

Contributing

If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our Start contributing documentation for guidance on how to set up Velero for development.

Changelog

See the list of releases to find out about feature changes.