Completed jobs and pods may be useful in the backup for auditing
purposes, but don't recreate them when restoring.
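A minimal sketch of such a check, using upstream Kubernetes API types; the function names here are illustrative and not the actual Ark restore code:

```go
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

// shouldRestorePod returns false for pods that have already reached a
// terminal phase (completed or failed), so they stay in the backup but are
// not recreated on restore.
func shouldRestorePod(pod *corev1.Pod) bool {
	return pod.Status.Phase != corev1.PodSucceeded && pod.Status.Phase != corev1.PodFailed
}

// shouldRestoreJob returns false for jobs whose Complete condition is true.
func shouldRestoreJob(job *batchv1.Job) bool {
	for _, c := range job.Status.Conditions {
		if c.Type == batchv1.JobComplete && c.Status == corev1.ConditionTrue {
			return false
		}
	}
	return true
}
```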
Signed-off-by: Nolan Brubaker <nolan@heptio.com>
Always retrieve the DeleteBackupRequests for a given backup so we can show
failed deletion attempts, for example if you try to delete a backup that has
PV snapshots when Ark doesn't have a persistentVolumeProvider configured.
When creating a DeleteBackupRequest, include a label for the backup's UID so
we can match on both name and UID when associating DeleteBackupRequests
with a given backup.
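A sketch of the labeling and lookup, assuming hypothetical label keys (the exact keys Ark uses may differ):

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/types"
)

// Assumed label keys; illustrative only.
const (
	backupNameLabel = "ark.heptio.com/backup-name"
	backupUIDLabel  = "ark.heptio.com/backup-uid"
)

// newDeleteRequestMeta builds ObjectMeta for a DeleteBackupRequest that
// references a specific backup by name and UID.
func newDeleteRequestMeta(ns, backupName string, backupUID types.UID) metav1.ObjectMeta {
	return metav1.ObjectMeta{
		Namespace:    ns,
		GenerateName: backupName + "-",
		Labels: map[string]string{
			backupNameLabel: backupName,
			backupUIDLabel:  string(backupUID),
		},
	}
}

// selectorForBackup returns a label selector matching every
// DeleteBackupRequest created for the given backup.
func selectorForBackup(backupName string, backupUID types.UID) labels.Selector {
	return labels.SelectorFromSet(labels.Set{
		backupNameLabel: backupName,
		backupUIDLabel:  string(backupUID),
	})
}
```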
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
We ran into a lot of problems using a finalizer on the backup to allow
the Ark server to clean up all associated backup data when deleting a
backup.
Users also found it less than desirable that deleting the heptio-ark
namespace resulted in all the backup data being deleted.
This removes the finalizer and replaces it with an explicit
DeleteBackupRequest that is created as a means of requesting the
deletion of a backup and all its associated data. This is what `ark
backup delete` does.
If you use kubectl to delete a backup or to delete the heptio-ark
namespace, this no longer deletes the associated backup data. Additionally, as
long as the heptio-ark namespace still exists, the Ark server's
BackupSyncController will continually sync backups into the heptio-ark
namespace from object storage.
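A rough sketch of that sync loop; the function parameters stand in for the real object-store and Kubernetes clients, and every name here is illustrative rather than the actual BackupSyncController code:

```go
package example

import "context"

// syncBackups recreates, in the given namespace, every backup present in
// object storage but missing from the cluster.
func syncBackups(
	ctx context.Context,
	namespace string,
	listStored func(ctx context.Context) ([]string, error),
	existsInCluster func(ctx context.Context, namespace, name string) (bool, error),
	create func(ctx context.Context, namespace, name string) error,
) error {
	stored, err := listStored(ctx)
	if err != nil {
		return err
	}
	for _, name := range stored {
		exists, err := existsInCluster(ctx, namespace, name)
		if err != nil {
			return err
		}
		if !exists {
			if err := create(ctx, namespace, name); err != nil {
				return err
			}
		}
	}
	return nil
}
```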
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
If a backup item action errors, log the error as soon as it occurs, so
it's clear when the error happened. Also include information about the
groupResource, namespace, and name of the item in the error.
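A minimal sketch of how such logging could look with logrus structured fields; the surrounding function and field names are assumptions, not the actual Ark code:

```go
package example

import (
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// runItemAction executes an item action, logging any error immediately with
// the item's groupResource, namespace, and name, and wrapping the returned
// error with the same details.
func runItemAction(log logrus.FieldLogger, gr schema.GroupResource, namespace, name string, action func() error) error {
	if err := action(); err != nil {
		// Log as soon as the error occurs so the timestamp reflects when it happened.
		log.WithError(err).
			WithFields(logrus.Fields{
				"groupResource": gr.String(),
				"namespace":     namespace,
				"name":          name,
			}).
			Error("error executing item action")
		return errors.Wrapf(err, "error executing item action for %s %s/%s", gr.String(), namespace, name)
	}
	return nil
}
```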
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
The main Ark code was hard-coding specific support for AWS, GCE, and
Azure volume snapshots and restores, and anything else was considered
unsupported.
Add GetVolumeID and SetVolumeID to the BlockStore interface, to allow
block store plugins to handle volume snapshots and restores.
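A sketch of what the two new methods might look like on the interface; the exact signatures here are illustrative:

```go
package example

import "k8s.io/apimachinery/pkg/runtime"

type BlockStore interface {
	// GetVolumeID returns the provider-specific volume ID referenced by the
	// given PersistentVolume, or an error if it cannot be determined.
	GetVolumeID(pv runtime.Unstructured) (string, error)

	// SetVolumeID updates the given PersistentVolume to reference the
	// provided volume ID and returns the updated object.
	SetVolumeID(pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error)
}
```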
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
When backing up specific namespaces with cluster resources set to "auto", the
namespace objects themselves were not being included in the backup. Restore
would recreate them, but any labels or other metadata would be lost.
Instead, handle namespaces as a special case: a cluster-level resource we may
still need even when excluding most other cluster-level resources.
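A rough sketch of that special-casing, with invented helper and parameter names:

```go
package example

import "k8s.io/apimachinery/pkg/runtime/schema"

var namespacesGR = schema.GroupResource{Group: "", Resource: "namespaces"}

// shouldBackupClusterResource decides whether a cluster-scoped item is
// included. When includeClusterResources is false (covering the case where
// most cluster resources are skipped), namespace objects for explicitly
// included namespaces are still kept so their labels and annotations
// survive a restore.
func shouldBackupClusterResource(gr schema.GroupResource, itemName string, includeClusterResources bool, includedNamespaces map[string]bool) bool {
	if includeClusterResources {
		return true
	}
	if gr == namespacesGR {
		return includedNamespaces[itemName]
	}
	return false
}
```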
Signed-off-by: Devan Goodwin <dgoodwin@redhat.com>
Previously the directory structure separated resources depending on
whether they were cluster or namespace scoped. All cluster resources were
restored first, then all namespace resources. Priority did not apply across
both, so you could not order any namespace resources before any cluster
resources.
This restructure sorts first by resource type, for example:
resources/serviceaccounts/namespaces/ns1.json
resources/nodes/cluster/node1.json
This will break old backups, as the format is no longer compatible; this
change was announced on the Google group.
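A small sketch, with an invented helper name, of how the directory prefix is chosen under the new layout:

```go
package example

import "path"

// scopedResourceDir returns the directory prefix an item is stored under in
// the new layout: resources/<resource>/cluster for cluster-scoped resources,
// resources/<resource>/namespaces for namespace-scoped ones. Items are then
// written beneath this prefix, as in the example paths above.
func scopedResourceDir(resource string, namespaced bool) string {
	scope := "cluster"
	if namespaced {
		scope = "namespaces"
	}
	return path.Join("resources", resource, scope)
}
```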
Signed-off-by: Devan Goodwin <dgoodwin@redhat.com>
- Read PV's AZ info from the fault-domain label of the PV object for snapshotting (see the sketch after this list).
- Store PV's AZ info in the VolumeInfo.
- Add tests for reading the label from the PV object.
- Remove availability zone validation in the AWS and GCP BlockStorageAdapters.
- Add volumeAZ as a parameter to methods in the BlockStorageAdapter interface.
- Get AZ from VolumeInfo when restoring PV snapshot.
- Remove references to PV availability zone in docs.
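A minimal sketch of reading the zone from the PV object; the specific label key used here (the beta failure-domain zone label) is an assumption about which fault-domain label is meant:

```go
package example

import corev1 "k8s.io/api/core/v1"

// Assumed label key for the PV's availability zone.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// volumeAZ returns the PV's availability zone, or "" if the label is absent.
func volumeAZ(pv *corev1.PersistentVolume) string {
	return pv.Labels[zoneLabel]
}
```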
Signed-off-by: Ashish Amarnath <ashish.amarnath@gmail.com>