* Use kubebuilder client for fetching restic secrets
Instead of using a SecretInformer for fetching secrets for restic, use
the cached client provided by the controller-runtime manager.
In order to use this client, the scheme for Secrets must be added to the
scheme used by the manager, so this is done when creating the manager in
both the velero and restic servers.
This change also refactors some of the tests to add a shared utility for
creating a fake controller-runtime client, which is now used by all tests
that need that client. This ensures that all tests use the same client
with the same scheme.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
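A minimal sketch of the wiring described above, assuming a recent controller-runtime; the function names here are illustrative, not Velero's actual code:

```go
package server

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func newManager() (manager.Manager, error) {
	scheme := runtime.NewScheme()
	// Secrets live in core/v1, so those types must be registered with the
	// manager's scheme before the cached client can fetch them.
	if err := corev1.AddToScheme(scheme); err != nil {
		return nil, err
	}
	return ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{Scheme: scheme})
}

func getResticSecret(ctx context.Context, mgr manager.Manager, namespace, name string) (*corev1.Secret, error) {
	secret := &corev1.Secret{}
	// mgr.GetClient() returns the cached client; once the manager has
	// started, reads are served from the shared informer cache.
	if err := mgr.GetClient().Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, secret); err != nil {
		return nil, err
	}
	return secret, nil
}
```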
* Add builder for SecretKeySelector
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
In preparation for modifying the instantiation of `BackupStores` to be
able to load credentials, replace the function `NewObjectBackupStore` with
an interface that is passed in to all controllers.
Previously, the function to get a new backup store was configurable but
for many controllers was fixed to use `NewObjectBackupStore`. This
change introduces an interface for getting the backup store and wraps
the functionality from `NewObjectBackupStore` in a type which implements
this interface. This allows more flexibility when introducing credentials
for a specific backup store, as it lets us create a new
`ObjectBackupStoreGetter` type that can be configured to add credentials
configuration when creating the `ObjectBackupStore` without needing to
change the API used by the controllers.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
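A minimal sketch of the getter pattern described above. The interface name comes from the commit, but the method signature and the `BackupStore` stand-in are illustrative assumptions, not Velero's actual API:

```go
package persistence

// BackupStore is a stand-in for Velero's object backup store interface.
type BackupStore interface {
	ListBackups() ([]string, error)
}

// ObjectBackupStoreGetter abstracts how controllers obtain a BackupStore,
// so credentials handling can be added without changing controller code.
type ObjectBackupStoreGetter interface {
	Get(locationName string) (BackupStore, error)
}

// objectBackupStoreGetter wraps a NewObjectBackupStore-style constructor
// behind the interface.
type objectBackupStoreGetter struct {
	newStore func(locationName string) (BackupStore, error)
}

func NewObjectBackupStoreGetter(newStore func(string) (BackupStore, error)) ObjectBackupStoreGetter {
	return &objectBackupStoreGetter{newStore: newStore}
}

func (g *objectBackupStoreGetter) Get(locationName string) (BackupStore, error) {
	// A credential-aware getter could resolve and inject credentials here
	// before constructing the store, leaving the controllers unchanged.
	return g.newStore(locationName)
}
```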
* Don't fail backup if downloading tarball fails
Previously, we would always attempt to download a backup's tarball in
order to process DeleteItemAction plugins, even if there weren't any.
This caused an issue for some users: if the backup tarball had been
deleted from object storage, the backup deletion would fail.
Now, we only attempt to download the tarball when there are
DeleteItemAction plugins. If downloading the tarball fails, we log the
error, skip processing of the DeleteItemAction plugins, and proceed with
the rest of the deletion.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
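A rough sketch of that control flow, with a hypothetical `deleteItemAction` interface and download callback standing in for Velero's plugin and object store machinery:

```go
package deletion

import (
	"fmt"
	"io"
	"log"
)

// deleteItemAction is a hypothetical stand-in for a DeleteItemAction plugin.
type deleteItemAction interface {
	Execute(backupContents io.Reader) error
}

// processDeleteItemActions skips the tarball download when no
// DeleteItemAction plugins are registered, and tolerates download failures.
func processDeleteItemActions(actions []deleteItemAction, downloadTarball func() (io.ReadCloser, error)) error {
	if len(actions) == 0 {
		// No plugins, so there is no reason to fetch the tarball at all.
		return nil
	}

	tarball, err := downloadTarball()
	if err != nil {
		// Log the error and skip plugin processing; the rest of the
		// backup deletion proceeds.
		log.Printf("unable to download backup tarball, skipping DeleteItemAction plugins: %v", err)
		return nil
	}
	defer tarball.Close()

	for _, action := range actions {
		if err := action.Execute(tarball); err != nil {
			return fmt.Errorf("running DeleteItemAction: %w", err)
		}
	}
	return nil
}
```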
* Skip file removal in closeAndRemoveFile if nil
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
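A minimal sketch of the nil guard, assuming a logrus-style logger; the exact log messages are illustrative:

```go
package controller

import (
	"os"

	"github.com/sirupsen/logrus"
)

// closeAndRemoveFile skips both the close and the removal when the file
// pointer is nil, instead of dereferencing a nil *os.File.
func closeAndRemoveFile(file *os.File, log logrus.FieldLogger) {
	if file == nil {
		log.Debug("Skipping removal of file due to nil file pointer")
		return
	}
	if err := file.Close(); err != nil {
		log.WithError(err).WithField("file", file.Name()).Error("error closing file")
	}
	if err := os.Remove(file.Name()); err != nil {
		log.WithError(err).WithField("file", file.Name()).Error("error removing file")
	}
}
```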
* k8s 1.18 import wip
backup, cmd, controller, generated, restic, restore, serverstatusrequest, test and util
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* go mod tidy
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* add changelog file
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* go fmt
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* update code-generator and controller-gen in CI
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* checkout proper code-generator version, regen
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* fix remaining calls
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* regenerate CRDs with ./hack/update-generated-crd-code.sh
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* use existing context in restic and server
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* fix test cases by resetting resource version
also use the standard library context package, not golang.org/x/net/context, in pkg/restore/restore.go
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* clarify changelog message
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* use github.com/kubernetes-csi/external-snapshotter/v2@v2.2.0-rc1
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* run 'go mod tidy' to remove old external-snapshotter version
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* kubebuilder init - minimalist version
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add back main.go, apparently kb needs it
Signed-off-by: Carlisia <carlisia@vmware.com>
* Tweak makefile to accommodate kubebuilder expectations
Signed-off-by: Carlisia <carlisia@vmware.com>
* Port BSL to kubebuilder api client
Signed-off-by: Carlisia <carlisia@vmware.com>
* s/cache/client because the client fetches from the cache
And other naming improvements
Signed-off-by: Carlisia <carlisia@vmware.com>
* So, .GetAPIReader is how we bypass the cache
In this case, the cache hasn't started yet
Signed-off-by: Carlisia <carlisia@vmware.com>
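The cached client cannot serve reads until the manager's cache has started, so early reads go through `mgr.GetAPIReader()`, which queries the API server directly. A minimal sketch; the function name and the Namespace object are placeholders:

```go
package server

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// fetchBeforeCacheStarts reads an object before mgr.Start() has been called,
// bypassing the informer cache that would otherwise back mgr.GetClient().
func fetchBeforeCacheStarts(ctx context.Context, mgr manager.Manager, key types.NamespacedName) (*corev1.Namespace, error) {
	ns := &corev1.Namespace{}
	if err := mgr.GetAPIReader().Get(ctx, key, ns); err != nil {
		return nil, err
	}
	return ns, nil
}
```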
* Oh that's what this code was for... adding back
We still need to embed the CRDs as binary data in the Velero binary to
access the generated CRDs at runtime.
Signed-off-by: Carlisia <carlisia@vmware.com>
* Tie in CRD/code generation w/ existing scripts
Signed-off-by: Carlisia <carlisia@vmware.com>
* Mostly result of running update-fmt, updated file formatting
Signed-off-by: Carlisia <carlisia@vmware.com>
* Just a copyright fix
Signed-off-by: Carlisia <carlisia@vmware.com>
* All the test fixes
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add changelog + some cleanup
Signed-off-by: Carlisia <carlisia@vmware.com>
* Update backup manifest
Signed-off-by: Carlisia <carlisia@vmware.com>
* Remove unneeded auto-generated files
Signed-off-by: Carlisia <carlisia@vmware.com>
* Keep everything in the same (existing) package
Signed-off-by: Carlisia <carlisia@vmware.com>
* Fix/clean scripts, generated code, and calls
Deleting the entire `generated` directory and running `make update`
works. Modifying an API and running `make verify` works as expected.
Signed-off-by: Carlisia <carlisia@vmware.com>
* Clean up schema and client calls + code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Move all code gen to inside builder container
Signed-off-by: Carlisia <carlisia@vmware.com>
* Address code review
Signed-off-by: Carlisia <carlisia@vmware.com>
* Fix imports/aliases
Signed-off-by: Carlisia <carlisia@vmware.com>
* More code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add WaitForCacheSync
Signed-off-by: Carlisia <carlisia@vmware.com>
* Have manager register ALL controllers
This will allow for proper cache management.
Signed-off-by: Carlisia <carlisia@vmware.com>
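A minimal sketch of registering a controller with the shared manager via the controller-runtime builder, assuming a recent controller-runtime; the ConfigMap reconciler is purely illustrative:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ConfigMapReconciler is an illustrative reconciler; Velero's real
// reconcilers watch its own CRDs, such as BackupStorageLocation.
type ConfigMapReconciler struct {
	Client client.Client
}

func (r *ConfigMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... reconcile logic ...
	return ctrl.Result{}, nil
}

// setupControllers registers the controller with the same manager as every
// other controller, so they all share one cache that the manager starts
// and syncs once.
func setupControllers(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ConfigMapReconciler{Client: mgr.GetClient()})
}
```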
* Status subresource is now enabled; cleanup
Signed-off-by: Carlisia <carlisia@vmware.com>
* More code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Clean up
Signed-off-by: Carlisia <carlisia@vmware.com>
* Manager registers ALL controllers for restic too
Signed-off-by: Carlisia <carlisia@vmware.com>
* More code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add deprecation warning/todo
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add documentation
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add helpful comments
Signed-off-by: Carlisia <carlisia@vmware.com>
* Address code review
Signed-off-by: Carlisia <carlisia@vmware.com>
* More idiomatic Runnable
Signed-off-by: Carlisia <carlisia@vmware.com>
* Clean up imports
Signed-off-by: Carlisia <carlisia@vmware.com>
* update import paths to github.com/vmware-tanzu/...
Signed-off-by: Steve Kriss <krisss@vmware.com>
* update other GH org refs to vmware-tanzu
Signed-off-by: Steve Kriss <krisss@vmware.com>
* site and docs: update GH org to vmware-tanzu
Signed-off-by: Steve Kriss <krisss@vmware.com>
* update travis badge links on docs readmes
Signed-off-by: Steve Kriss <krisss@vmware.com>
This fix initialises an empty map if the request object's Labels map
is nil, allowing the controller to later add and modify labels on the
object.
Signed-off-by: Adnan Abdulhussein <aadnan@vmware.com>
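A minimal sketch of the guard; the helper name is illustrative:

```go
package controller

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ensureLabel initialises the Labels map if it is nil before setting a
// label, so later label writes by the controller do not panic.
func ensureLabel(meta *metav1.ObjectMeta, key, value string) {
	if meta.Labels == nil {
		meta.Labels = make(map[string]string)
	}
	meta.Labels[key] = value
}
```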
Velero should handle cases where the label length exceeds 63 characters:
- If the length of the backup/restore name is <= 63 characters, use it as the value of the label.
- If it's > 63 characters, take the SHA256 hash of the name; the value of the label will be the first 57 characters of the backup/restore name plus the first six characters of the SHA256 hash.
Fixes heptio#1021
Signed-off-by: Anshul Chandra <anshulc@vmware.com>
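A sketch of that scheme; the function name is illustrative and may not match Velero's actual helper exactly:

```go
package label

import (
	"crypto/sha256"
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// GetValidName returns a string usable as a label value: names within the
// 63-character limit are used as-is; longer names are shortened to their
// first 57 characters (63 - 6) plus the first 6 hex characters of their
// SHA256 hash.
func GetValidName(name string) string {
	if len(name) <= validation.DNS1035LabelMaxLength {
		return name
	}
	sha := sha256.Sum256([]byte(name))
	digest := fmt.Sprintf("%x", sha)[:6]
	return name[:validation.DNS1035LabelMaxLength-6] + digest
}
```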
Refactor plugin management:
- support multiple plugins per executable
- support restarting a plugin process in the event it terminates
- simplify plugin lifecycle management by using separate managers for
each scope (server vs backup vs restore)
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
When the BackupDeletionController processes a request, set the request's
backup-name and backup-uid labels if they aren't currently set.
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
Make sure a DeleteBackupRequest has its Spec.BackupName filled in. If
not, record an error in the status and mark the request as processed.
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
Always retrieve DeleteBackupRequests for a given backup so we can show
failed deletion attempts if you try to delete a backup that has PV
snapshots when Ark doesn't have a persistentVolumeProvider configured.
When creating a DeleteBackupRequest, include a label for the UID so we
can match based on name and UID when associating DeleteBackupRequests
with a given backup.
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
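A sketch of labelling a new DeleteBackupRequest and matching requests back to their backup by both name and UID; the label keys and helper names here are illustrative, not necessarily Ark's actual constants:

```go
package controller

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// Illustrative label keys; the real keys live in the Ark/Velero API package.
const (
	backupNameLabel = "ark.heptio.com/backup-name"
	backupUIDLabel  = "ark.heptio.com/backup-uid"
)

// newDeleteBackupRequestMeta labels a DeleteBackupRequest with the backup's
// name and UID at creation time.
func newDeleteBackupRequestMeta(namespace, backupName, backupUID string) metav1.ObjectMeta {
	return metav1.ObjectMeta{
		Namespace:    namespace,
		GenerateName: backupName + "-",
		Labels: map[string]string{
			backupNameLabel: backupName,
			backupUIDLabel:  backupUID,
		},
	}
}

// selectorForBackup matches requests by both name and UID, so requests left
// over from an older backup with the same name are excluded.
func selectorForBackup(backupName, backupUID string) labels.Selector {
	return labels.SelectorFromSet(labels.Set{
		backupNameLabel: backupName,
		backupUIDLabel:  backupUID,
	})
}
```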
We ran into a lot of problems using a finalizer on the backup to allow
the Ark server to clean up all associated backup data when deleting a
backup.
Users also found it less than desirable that deleting the heptio-ark
namespace resulted in all the backup data being deleted.
This removes the finalizer and replaces it with an explicit
DeleteBackupRequest that is created as a means of requesting the
deletion of a backup and all its associated data. This is what `ark
backup delete` does.
If you use kubectl to delete a backup or to delete the heptio-ark
namespace, this no longer deletes the associated backup data. Additionally, as
long as the heptio-ark namespace still exists, the Ark server's
BackupSyncController will continually sync backups into the heptio-ark
namespace from object storage.
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>