Merge pull request #5849 from sseago/async-backup-controller

BIAv2 async operations controller work
pull/5960/head
Shubham Pampattiwar 2023-03-06 18:10:08 -08:00 committed by GitHub
commit 8bed159023
52 changed files with 3078 additions and 564 deletions

View File

@ -0,0 +1 @@
BIAv2 async operations controller work

View File

@ -273,6 +273,11 @@ spec:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations. The default value is
1 hour.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter with
when adding individual objects to the backup. If empty or nil, all
@ -415,6 +420,20 @@ spec:
status:
description: BackupStatus captures the current status of a Velero backup.
properties:
asyncBackupItemOperationsAttempted:
description: AsyncBackupItemOperationsAttempted is the total number
of attempted async BackupItemAction operations for this backup.
type: integer
asyncBackupItemOperationsCompleted:
description: AsyncBackupItemOperationsCompleted is the total number
of successfully completed async BackupItemAction operations for
this backup.
type: integer
asyncBackupItemOperationsFailed:
description: AsyncBackupItemOperationsFailed is the total number of
async BackupItemAction operations for this backup which ended with
an error.
type: integer
completionTimestamp:
description: CompletionTimestamp records the time a backup was completed.
Completion time is recorded even on failed backups. Completion time
@ -457,6 +476,8 @@ spec:
- InProgress
- WaitingForPluginOperations
- WaitingForPluginOperationsPartiallyFailed
- FinalizingAfterPluginOperations
- FinalizingAfterPluginOperationsPartiallyFailed
- Completed
- PartiallyFailed
- Failed

View File

@ -308,6 +308,11 @@ spec:
type: string
nullable: true
type: array
itemOperationTimeout:
description: ItemOperationTimeout specifies the time used to wait
for asynchronous BackupItemAction operations. The default value
is 1 hour.
type: string
labelSelector:
description: LabelSelector is a metav1.LabelSelector to filter
with when adding individual objects to the backup. If empty

File diff suppressed because one or more lines are too long

View File

@ -34,6 +34,7 @@ message ExecuteResponse {
bytes item = 1;
repeated generated.ResourceIdentifier additionalItems = 2;
string operationID = 3;
repeated generated.ResourceIdentifier itemsToUpdate = 4;
}
```
The BackupItemAction service gets two new rpc methods:
@ -78,6 +79,19 @@ message OperationProgress {
}
```
In addition to the two new rpc methods added to the BackupItemAction interface, there is also a new `Name()` method. It is used only internally by Velero to get the name that the plugin was registered with, but it must still be defined in any plugin which implements BackupItemActionV2 in order to satisfy the interface. What it returns does not matter, as this method is not delegated to the plugin via RPC calls. The new (and modified) interface methods for `BackupItemAction` are as follows:
```
type BackupItemAction interface {
...
Name() string
...
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error)
Cancel(operationID string, backup *api.Backup) error
...
}
```
A new PluginKind, `BackupItemActionV2`, will be created, and the backup process will be modified to use this plugin kind.
See [Plugin Versioning](plugin-versioning.md) for more details on implementation plans, including v1 adapters, etc.
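As a rough illustration of the interface shape, here is a minimal no-op sketch. The types `Unstructured`, `ResourceIdentifier`, `OperationProgress`, and `Backup` are simplified local stand-ins (a real plugin would import Velero's plugin framework types), and `example.io/noop-biav2` is a made-up registration name:

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins for the Velero types referenced by the design; a real
// plugin would import these from the velero plugin framework instead.
type Unstructured map[string]interface{}
type ResourceIdentifier struct{ Group, Resource, Namespace, Name string }
type OperationProgress struct {
	Completed            bool
	NCompleted, NTotal   int64
	OperationUnits       string
}
type Backup struct{ Name string }

// noOpBIAv2 sketches the shape of a BackupItemActionV2 plugin: Execute now
// returns an operationID plus a second slice of items to back up after all
// async operations finish; Name is required by the interface but is only
// used by Velero's plugin infrastructure, never called over RPC.
type noOpBIAv2 struct{}

func (a *noOpBIAv2) Name() string { return "example.io/noop-biav2" }

func (a *noOpBIAv2) Execute(item Unstructured, backup *Backup) (Unstructured, []ResourceIdentifier, string, []ResourceIdentifier, error) {
	// No async work is started, so operationID is empty and itemsToUpdate is nil.
	return item, nil, "", nil, nil
}

func (a *noOpBIAv2) Progress(operationID string, backup *Backup) (OperationProgress, error) {
	if operationID == "" {
		return OperationProgress{}, errors.New("invalid operation ID")
	}
	return OperationProgress{Completed: true}, nil
}

func (a *noOpBIAv2) Cancel(operationID string, backup *Backup) error { return nil }

func main() {
	a := &noOpBIAv2{}
	_, _, opID, _, _ := a.Execute(Unstructured{}, &Backup{Name: "b1"})
	fmt.Printf("name=%s opID=%q\n", a.Name(), opID)
}
```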

View File

@ -113,13 +113,21 @@ long as they don't use not-yet-completed backups) to be made without interferenc
all data has been moved before starting the next backup will slow the progress of the system without
adding any actual benefit to the user.
A new backup/restore phase, "WaitingForPluginOperations" will be introduced. When a backup or
restore has entered this phase, Velero is free to start another backup/restore. The backup/restore
New backup/restore phases, "WaitingForPluginOperations" and
"WaitingForPluginOperationsPartiallyFailed" will be introduced. When a backup or restore has
entered one of these phases, Velero is free to start another backup/restore. The backup/restore
will remain in the "WaitingForPluginOperations" phase until all BIA/RIA operations have completed
(for example, for a volume snapshotter, until all data has been successfully moved to persistent
storage). The backup/restore will not fail once it reaches this phase. If the backup is deleted
(cancelled), the plug-ins will attempt to delete the snapshots and stop the data movement - this may
not be possible with all storage systems.
storage). The backup/restore will not fail once it reaches this phase, although an error return
from a plugin could cause a backup or restore to move to "PartiallyFailed". If the backup is
deleted (cancelled), the plug-ins will attempt to delete the snapshots and stop the data movement -
this may not be possible with all storage systems.
In addition, for backups (but not restores), there will also be two additional phases,
"FinalizingAfterPluginOperations" and "FinalizingAfterPluginOperationsPartiallyFailed", which will
handle any steps required after plugin operations have all completed. Initially, this will just
include adding any required resources to the backup that might have changed during asynchronous
operation execution, although eventually other cleanup actions could be added to this phase.
### State progression
@ -143,7 +151,14 @@ In the current implementation, Restic backups will move data during the "InProgr
future, it may be possible to combine a snapshot with a Restic (or equivalent) backup which would
allow for data movement to be handled in the "WaitingForPluginOperations" phase.
The next phase is either "Completed", "WaitingForPluginOperations", "Failed" or "PartiallyFailed".
The next phase would be "WaitingForPluginOperations" for backups or restores which have unfinished
asynchronous plugin operations and no errors so far, "WaitingForPluginOperationsPartiallyFailed" for
backups or restores which have unfinished asynchronous plugin operations and at least one error,
"Completed" for restores with no unfinished asynchronous plugin operations and no errors,
"PartiallyFailed" for restores with no unfinished asynchronous plugin operations and at least one
error, "FinalizingAfterPluginOperations" for backups with no unfinished asynchronous plugin
operations and no errors, "FinalizingAfterPluginOperationsPartiallyFailed" for backups with no
unfinished asynchronous plugin operations and at least one error, or "PartiallyFailed".
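The enumeration above amounts to a small decision table. The sketch below is illustrative only; `nextPhase` is a hypothetical helper, not Velero code:

```go
package main

import "fmt"

// nextPhase is a hypothetical condensation of the next-phase rules described
// above: unfinished operations lead to a Waiting phase, finished backups go
// to a Finalizing phase, and finished restores go directly to a terminal phase.
func nextPhase(isBackup, unfinishedOps, hasErrors bool) string {
	switch {
	case unfinishedOps && !hasErrors:
		return "WaitingForPluginOperations"
	case unfinishedOps && hasErrors:
		return "WaitingForPluginOperationsPartiallyFailed"
	case isBackup && !hasErrors:
		return "FinalizingAfterPluginOperations"
	case isBackup && hasErrors:
		return "FinalizingAfterPluginOperationsPartiallyFailed"
	case hasErrors:
		return "PartiallyFailed"
	default:
		return "Completed"
	}
}

func main() {
	fmt.Println(nextPhase(true, false, false))  // backup, no pending ops, no errors
	fmt.Println(nextPhase(false, true, true))   // restore, pending ops, errors
	fmt.Println(nextPhase(false, false, false)) // restore, no pending ops, no errors
}
```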
Backups/restores which would have a final phase of "Completed" or "PartiallyFailed" may move to the
"WaitingForPluginOperations" or "WaitingForPluginOperationsPartiallyFailed" state. A backup/restore
which will be marked "Failed" will go directly to the "Failed" phase. Uploads may continue in the
@ -157,8 +172,9 @@ any uploads still in progress should be aborted.
The "WaitingForPluginOperations" phase signifies that the main part of the backup/restore, including
snapshotting has completed successfully and uploading and any other asynchronous BIA/RIA plugin
operations are continuing. In the event of an error during this phase, the phase will change to
WaitingForPluginOperationsPartiallyFailed. On success, the phase changes to Completed. Backups
cannot be restored from when they are in the WaitingForPluginOperations state.
WaitingForPluginOperationsPartiallyFailed. On success, the phase changes to
"FinalizingAfterPluginOperations" for backups and "Completed" for restores. Backups cannot be
restored from when they are in the WaitingForPluginOperations state.
### WaitingForPluginOperationsPartiallyFailed (new)
The "WaitingForPluginOperationsPartiallyFailed" phase signifies that the main part of the
@ -166,6 +182,22 @@ backup/restore, including snapshotting has completed, but there were partial fai
the main part or during any async operations, including snapshot uploads. Backups cannot be
restored from when they are in the WaitingForPluginOperationsPartiallyFailed state.
### FinalizingAfterPluginOperations (new)
The "FinalizingAfterPluginOperations" phase signifies that asynchronous backup operations have all
completed successfully and Velero is currently backing up any resources indicated by asynchronous
plugins as items to back up after operations complete. Once this is done, the phase changes to
Completed. Backups cannot be restored from when they are in the FinalizingAfterPluginOperations
state.
### FinalizingAfterPluginOperationsPartiallyFailed (new)
The "FinalizingAfterPluginOperationsPartiallyFailed" phase signifies that, for a backup which had
errors during initial processing or asynchronous plugin operation, asynchronous backup operations
have all completed and Velero is currently backing up any resources indicated by asynchronous
plugins as items to back up after operations complete. Once this is done, the phase changes to
PartiallyFailed. Backups cannot be restored from when they are in the
FinalizingAfterPluginOperationsPartiallyFailed state.
### Failed
When a backup/restore has had fatal errors it is marked as "Failed". Backups in this state cannot be
restored from.
@ -211,12 +243,21 @@ WaitingForPluginOperationsPartiallyFailed phase, another backup/restore may be s
While in the WaitingForPluginOperations or WaitingForPluginOperationsPartiallyFailed phase, the
snapshots and item actions will be periodically polled. When all of the snapshots and item actions
have reported success, the backup/restore will move to the Completed or PartiallyFailed phase,
depending on whether the backup/restore was in the WaitingForPluginOperations or
WaitingForPluginOperationsPartiallyFailed phase.
have reported success, restores will move directly to the Completed or PartiallyFailed phase, and
backups will move to the FinalizingAfterPluginOperations or
FinalizingAfterPluginOperationsPartiallyFailed phase, depending on whether the backup/restore was in
the WaitingForPluginOperations or WaitingForPluginOperationsPartiallyFailed phase.
The Backup resources will not be written to object storage until the backup has entered a final phase:
Completed, Failed or PartiallyFailed
While in the FinalizingAfterPluginOperations or FinalizingAfterPluginOperationsPartiallyFailed
phase, Velero will update the backup with any resources indicated by plugins that they must be added
to the backup after operations are completed, and then the backup will move to the Completed or
PartiallyFailed phase, depending on whether there are any backup errors.
The Backup resources will be written to object storage at the time the backup leaves the InProgress
phase, but they will not be synced to other clusters (or usable for restores in the current cluster)
until the backup has entered a final phase: Completed, Failed, or PartiallyFailed. During the
Finalizing phases, the backup resources will be updated with any required resources related to
asynchronous plugins.
## Reconciliation of InProgress backups
@ -249,8 +290,6 @@ Two new methods will be added to the VolumeSnapshotter interface:
Progress(snapshotID string) (OperationProgress, error)
Cancel(snapshotID string) (error)
Open question: Does VolumeSnapshotter need Cancel, or is that only needed for BIA/RIA?
Progress will report the current status of a snapshot upload. This should be callable at
any time after the snapshot has been taken. In the event a plug-in is restarted, if the operationID
(snapshot ID) continues to be valid it should be possible to retrieve the progress.
@ -281,35 +320,38 @@ progress, Progress will return an InvalidOperationIDError error rather than a po
OperationProgress struct. If the item action does not start an asynchronous operation, then
operationID will be empty.
Two new methods will be added to the BackupItemAction interface, and the Execute() return signature
Three new methods will be added to the BackupItemAction interface, and the Execute() return signature
will be modified:
// Execute allows the ItemAction to perform arbitrary logic with the item being backed up,
// including mutating the item itself prior to backup. The item (unmodified or modified)
// should be returned, an optional operationID, along with an optional slice of ResourceIdentifiers
// specifying additional related items that should be backed up. If operationID is specified
// then velero will wait for this operation to complete before the backup is marked Completed.
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, operationID string,
[]ResourceIdentifier, error)
// Name returns the name of this BIA. Plugins which implement this interface must define Name,
// but its content is unimportant, as it won't actually be called via RPC. Velero's plugin infrastructure
// will implement this directly rather than delegating to the RPC plugin in order to return the name
// that the plugin was registered under. The plugins must implement the method to complete the interface.
Name() string
// Execute allows the BackupItemAction to perform arbitrary logic with the item being backed up,
// including mutating the item itself prior to backup. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying
// additional related items that should be backed up now, an optional operationID for actions which
// initiate asynchronous actions, and a second slice of ResourceIdentifiers specifying related items
// which should be backed up after all asynchronous operations have completed. This last field will be
// ignored if operationID is empty, and should not be filled in unless the resource must be updated in the
// backup after async operations complete (i.e. some of the item's kubernetes metadata will be updated
// during the async operation and will be required during restore)
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
// Progress
Progress(input *BackupItemActionProgressInput) (OperationProgress, error)
Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error)
// Cancel
Cancel(input *BackupItemActionProgressInput) error
// BackupItemActionProgressInput contains the input parameters for the BackupItemAction's Progress function.
type BackupItemActionProgressInput struct {
// Item is the item that was stored in the backup
Item runtime.Unstructured
// OperationID is the operation ID returned by BackupItemAction Execute
operationID string
// Backup is the representation of the backup resource processed by Velero.
Backup *velerov1api.Backup
}
Cancel(operationID string, backup *api.Backup) error
Two new methods will be added to the RestoreItemAction interface, and the
Three new methods will be added to the RestoreItemAction interface, and the
RestoreItemActionExecuteOutput struct will be modified:
// Name returns the name of this RIA. Plugins which implement this interface must define Name,
// but its content is unimportant, as it won't actually be called via RPC. Velero's plugin infrastructure
// will implement this directly rather than delegating to the RPC plugin in order to return the name
// that the plugin was registered under. The plugins must implement the method to complete the interface.
Name() string
// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
// including mutating the item itself prior to restore. The item (unmodified or modified)
// should be returned, an optional OperationID, along with an optional slice of ResourceIdentifiers
@ -321,10 +363,10 @@ RestoreItemActionExecuteOutput struct will be modified:
// Progress
Progress(input *RestoreItemActionProgressInput) (OperationProgress, error)
Progress(operationID string, restore *api.Restore) (velero.OperationProgress, error)
// Cancel
Cancel(input *RestoreItemActionProgressInput) error
Cancel(operationID string, restore *api.Restore) error
// RestoreItemActionExecuteOutput contains the output variables for the ItemAction's Execution function.
type RestoreItemActionExecuteOutput struct {
@ -345,16 +387,6 @@ RestoreItemActionExecuteOutput struct will be modified:
OperationID string
}
// RestoreItemActionProgressInput contains the input parameters for the RestoreItemAction's Progress function.
type RestoreItemActionProgressInput struct {
// Item is the item that was stored in the restore
Item runtime.Unstructured
// OperationID is the operation ID returned by RestoreItemAction Execute
operationID string
// Restore is the representation of the restore resource processed by Velero.
Restore *velerov1api.Restore
}
## Changes in Velero backup format
No changes to the existing format are introduced by this change. As part of the backup workflow changes, a
@ -370,19 +402,37 @@ to select the appropriate Backup/RestoreItemAction plugin to query for progress.
of what a record for a datamover plugin might look like:
```
{
"itemOperation": {
"plugin": "velero.io/datamover-backup",
"itemID": "<VolumeSnapshotContent objectReference>",
"operationID": "<DataMoverBackup objectReference>",
"completed": true,
"err": "",
"NCompleted": 12345,
"NTotal": 12345,
"OperationUnits": "byte",
"Description": "",
"Started": "2022-12-14T12:01:00Z",
"Updated": "2022-12-14T12:11:02Z"
}
"spec": {
"backupName": "backup-1",
"backupUID": "f8c72709-0f73-46e1-a071-116bc4a76b07",
"backupItemAction": "velero.io/volumesnapshotcontent-backup",
"resourceIdentifier": {
"Group": "snapshot.storage.k8s.io",
"Resource": "VolumeSnapshotContent",
"Namespace": "my-app",
"Name": "my-volume-vsc"
},
"operationID": "<DataMoverBackup objectReference>",
"itemsToUpdate": [
{
"Group": "velero.io",
"Resource": "VolumeSnapshotBackup",
"Namespace": "my-app",
"Name": "vsb-1"
}
]
},
"status": {
"operationPhase": "Completed",
"error": "",
"nCompleted": 12345,
"nTotal": 12345,
"operationUnits": "byte",
"description": "",
"Created": "2022-12-14T12:00:00Z",
"Started": "2022-12-14T12:01:00Z",
"Updated": "2022-12-14T12:11:02Z"
},
}
```
@ -425,36 +475,41 @@ progress.
## Backup workflow changes
The backup workflow remains the same until we get to the point where the `velero-backup.json` object
is written. At this point, we will queue the backup to a finalization go-routine. The next backup
may then begin. The finalization routine will run across all of the
VolumeSnapshotter/BackupItemAction operations and call the _Progress_ method on each of them.
is written. At this point, Velero will
run across all of the VolumeSnapshotter/BackupItemAction operations and call the _Progress_ method
on each of them.
If all snapshot and backup item operations have finished (either successfully or failed), the backup
will be completed and the backup will move to the appropriate terminal phase and upload the
`velero-backup.json` object to the object store and the backup will be complete.
If all backup item operations have finished (either successfully or failed), the backup will move to
one of the finalize phases.
If any of the snapshots or backup items are still being processed, the phase of the backup will be
set to the appropriate phase (_WaitingForPluginOperations_ or
_WaitingForPluginOperationsPartiallyFailed_). In the event of any of the progress checks return an
error, the phase will move to _WaitingForPluginOperationsPartiallyFailed_. The backup will then be
requeued and will be rechecked again after some time has passed.
_WaitingForPluginOperationsPartiallyFailed_), and the async backup operations controller will
reconcile periodically and call Progress on any unfinished operations. If any of the progress
checks return an error, the phase will move to _WaitingForPluginOperationsPartiallyFailed_.
Once all operations have completed, the backup will be moved to one of the finalize phases, and the
backup finalizer controller will update the `velero-backup.json` in the object store with any
resources necessary after asynchronous operations are complete and the backup will move to the
appropriate terminal phase.
## Restore workflow changes
The restore workflow remains the same until velero would currently move the backup into one of the
terminal states. At this point, we will queue the restore to a finalization go-routine. The next
restore may then begin. The finalization routine will run across all of the RestoreItemAction
operations and call the _Progress_ method on each of them.
terminal states. At this point, Velero will run across all of the RestoreItemAction operations and
call the _Progress_ method on each of them.
If all restore item operations have finished (either successfully or failed), the restore will be
completed and the restore will move to the appropriate terminal phase and the restore will be
complete.
If any of the restore items are still being processed, the phase of the restore will be set to the
appropriate phase (_WaitingForPluginOperations_ or _WaitingForPluginOperationsPartiallyFailed_). In
the event of any of the progress checks return an error, the phase will move to
_WaitingForPluginOperationsPartiallyFailed_. The restore will then be requeued and will be rechecked
again after some time has passed.
appropriate phase (_WaitingForPluginOperations_ or _WaitingForPluginOperationsPartiallyFailed_), and
the async restore operations controller will reconcile periodically and call Progress on any
unfinished operations. If any of the progress checks return an error, the phase will
move to _WaitingForPluginOperationsPartiallyFailed_. Once all of the operations have completed, the
restore will be moved to the appropriate terminal phase.
## Restart workflow

View File

@ -273,6 +273,20 @@ func (h *DefaultItemHookHandler) HandleHooks(
return nil
}
// NoOpItemHookHandler is an itemHookHandler for the Finalize controller where hooks don't run
type NoOpItemHookHandler struct{}
func (h *NoOpItemHookHandler) HandleHooks(
log logrus.FieldLogger,
groupResource schema.GroupResource,
obj runtime.Unstructured,
resourceHooks []ResourceHook,
phase hookPhase,
) error {
return nil
}
func phasedKey(phase hookPhase, key string) string {
if phase != "" {
return fmt.Sprintf("%v.%v", phase, key)

View File

@ -124,6 +124,11 @@ type BackupSpec struct {
// The default value is 10 minutes.
// +optional
CSISnapshotTimeout metav1.Duration `json:"csiSnapshotTimeout,omitempty"`
// ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations.
// The default value is 1 hour.
// +optional
ItemOperationTimeout metav1.Duration `json:"itemOperationTimeout,omitempty"`
}
// BackupHooks contains custom behaviors that should be executed at different phases of the backup.
@ -221,7 +226,7 @@ const (
// BackupPhase is a string representation of the lifecycle phase
// of a Velero backup.
// +kubebuilder:validation:Enum=New;FailedValidation;InProgress;WaitingForPluginOperations;WaitingForPluginOperationsPartiallyFailed;Completed;PartiallyFailed;Failed;Deleting
// +kubebuilder:validation:Enum=New;FailedValidation;InProgress;WaitingForPluginOperations;WaitingForPluginOperationsPartiallyFailed;FinalizingAfterPluginOperations;FinalizingAfterPluginOperationsPartiallyFailed;Completed;PartiallyFailed;Failed;Deleting
type BackupPhase string
const (
@ -251,6 +256,23 @@ const (
// ongoing. The backup is not usable yet.
BackupPhaseWaitingForPluginOperationsPartiallyFailed BackupPhase = "WaitingForPluginOperationsPartiallyFailed"
// BackupPhaseFinalizingAfterPluginOperations means the backup of
// Kubernetes resources, creation of snapshots, and other
// async plugin operations were successful and snapshot upload and
// other plugin operations are now complete, but the Backup is awaiting
// final update of resources modified during async operations.
// The backup is not usable yet.
BackupPhaseFinalizingAfterPluginOperations BackupPhase = "FinalizingAfterPluginOperations"
// BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed means the backup of
// Kubernetes resources, creation of snapshots, and other
// async plugin operations were successful and snapshot upload and
// other plugin operations are now complete, but one or more errors
// occurred during backup or async operation processing, and the
// Backup is awaiting final update of resources modified during async
// operations. The backup is not usable yet.
BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed BackupPhase = "FinalizingAfterPluginOperationsPartiallyFailed"
// BackupPhaseCompleted means the backup has run successfully without
// errors.
BackupPhaseCompleted BackupPhase = "Completed"
@ -351,6 +373,21 @@ type BackupStatus struct {
// completed CSI VolumeSnapshots for this backup.
// +optional
CSIVolumeSnapshotsCompleted int `json:"csiVolumeSnapshotsCompleted,omitempty"`
// AsyncBackupItemOperationsAttempted is the total number of attempted
// async BackupItemAction operations for this backup.
// +optional
AsyncBackupItemOperationsAttempted int `json:"asyncBackupItemOperationsAttempted,omitempty"`
// AsyncBackupItemOperationsCompleted is the total number of successfully completed
// async BackupItemAction operations for this backup.
// +optional
AsyncBackupItemOperationsCompleted int `json:"asyncBackupItemOperationsCompleted,omitempty"`
// AsyncBackupItemOperationsFailed is the total number of async
// BackupItemAction operations for this backup which ended with an error.
// +optional
AsyncBackupItemOperationsFailed int `json:"asyncBackupItemOperationsFailed,omitempty"`
}
// BackupProgress stores information about the progress of a Backup's execution.

View File

@ -350,6 +350,7 @@ func (in *BackupSpec) DeepCopyInto(out *BackupSpec) {
}
}
out.CSISnapshotTimeout = in.CSISnapshotTimeout
out.ItemOperationTimeout = in.ItemOperationTimeout
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupSpec.

View File

@ -28,11 +28,20 @@ import (
// GetItemFilePath returns an item's file path once extracted from a Velero backup archive.
func GetItemFilePath(rootDir, groupResource, namespace, name string) string {
switch namespace {
case "":
return filepath.Join(rootDir, velerov1api.ResourcesDir, groupResource, velerov1api.ClusterScopedDir, name+".json")
default:
return filepath.Join(rootDir, velerov1api.ResourcesDir, groupResource, velerov1api.NamespaceScopedDir, namespace, name+".json")
return GetVersionedItemFilePath(rootDir, groupResource, namespace, name, "")
}
// GetVersionedItemFilePath returns an item's file path once extracted from a Velero backup archive, with version included.
func GetVersionedItemFilePath(rootDir, groupResource, namespace, name, versionPath string) string {
return filepath.Join(rootDir, velerov1api.ResourcesDir, groupResource, versionPath, GetScopeDir(namespace), namespace, name+".json")
}
// GetScopeDir returns NamespaceScopedDir if namespace is present, or ClusterScopedDir if empty
func GetScopeDir(namespace string) string {
if namespace == "" {
return velerov1api.ClusterScopedDir
}
return velerov1api.NamespaceScopedDir
}

View File

@ -28,4 +28,22 @@ func TestGetItemFilePath(t *testing.T) {
res = GetItemFilePath("root", "resource", "namespace", "item")
assert.Equal(t, "root/resources/resource/namespaces/namespace/item.json", res)
res = GetItemFilePath("", "resource", "", "item")
assert.Equal(t, "resources/resource/cluster/item.json", res)
res = GetVersionedItemFilePath("root", "resource", "", "item", "")
assert.Equal(t, "root/resources/resource/cluster/item.json", res)
res = GetVersionedItemFilePath("root", "resource", "namespace", "item", "")
assert.Equal(t, "root/resources/resource/namespaces/namespace/item.json", res)
res = GetVersionedItemFilePath("root", "resource", "namespace", "item", "v1")
assert.Equal(t, "root/resources/resource/v1/namespaces/namespace/item.json", res)
res = GetVersionedItemFilePath("root", "resource", "", "item", "v1")
assert.Equal(t, "root/resources/resource/v1/cluster/item.json", res)
res = GetVersionedItemFilePath("", "resource", "", "item", "")
assert.Equal(t, "resources/resource/cluster/item.json", res)
}

View File

@ -42,8 +42,10 @@ import (
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
biav2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
vsv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
"github.com/vmware-tanzu/velero/pkg/podexec"
@ -67,6 +69,9 @@ type Backupper interface {
BackupWithResolvers(log logrus.FieldLogger, backupRequest *Request, backupFile io.Writer,
backupItemActionResolver framework.BackupItemActionResolverV2, itemSnapshotterResolver framework.ItemSnapshotterResolver,
volumeSnapshotterGetter VolumeSnapshotterGetter) error
FinalizeBackup(log logrus.FieldLogger, backupRequest *Request, inBackupFile io.Reader, outBackupFile io.Writer,
backupItemActionResolver framework.BackupItemActionResolverV2,
asyncBIAOperations []*itemoperation.BackupOperation) error
}
// kubernetesBackupper implements Backupper.
@ -434,6 +439,25 @@ func (kb *kubernetesBackupper) backupItem(log logrus.FieldLogger, gr schema.Grou
return backedUpItem
}
func (kb *kubernetesBackupper) finalizeItem(log logrus.FieldLogger, gr schema.GroupResource, itemBackupper *itemBackupper, unstructured *unstructured.Unstructured, preferredGVR schema.GroupVersionResource) (bool, []FileForArchive) {
backedUpItem, updateFiles, err := itemBackupper.finalizeItem(log, unstructured, gr, preferredGVR)
if aggregate, ok := err.(kubeerrs.Aggregate); ok {
log.WithField("name", unstructured.GetName()).Infof("%d errors encountered backing up item", len(aggregate.Errors()))
// log each error separately so we get error location info in the log, and an
// accurate count of errors
for _, err = range aggregate.Errors() {
log.WithError(err).WithField("name", unstructured.GetName()).Error("Error backing up item")
}
return false, updateFiles
}
if err != nil {
log.WithError(err).WithField("name", unstructured.GetName()).Error("Error backing up item")
return false, updateFiles
}
return backedUpItem, updateFiles
}
// backupCRD checks if the resource is a custom resource, and if so, backs up the custom resource definition
// associated with it.
func (kb *kubernetesBackupper) backupCRD(log logrus.FieldLogger, gr schema.GroupResource, itemBackupper *itemBackupper) {
@ -492,6 +516,180 @@ func (kb *kubernetesBackupper) writeBackupVersion(tw *tar.Writer) error {
return nil
}
func (kb *kubernetesBackupper) FinalizeBackup(log logrus.FieldLogger,
backupRequest *Request,
inBackupFile io.Reader,
outBackupFile io.Writer,
backupItemActionResolver framework.BackupItemActionResolverV2,
asyncBIAOperations []*itemoperation.BackupOperation) error {
gzw := gzip.NewWriter(outBackupFile)
defer gzw.Close()
tw := tar.NewWriter(gzw)
defer tw.Close()
gzr, err := gzip.NewReader(inBackupFile)
if err != nil {
log.Infof("error creating gzip reader: %v", err)
return err
}
defer gzr.Close()
tr := tar.NewReader(gzr)
backupRequest.ResolvedActions, err = backupItemActionResolver.ResolveActions(kb.discoveryHelper, log)
if err != nil {
log.WithError(errors.WithStack(err)).Debugf("Error from backupItemActionResolver.ResolveActions")
return err
}
backupRequest.BackedUpItems = map[itemKey]struct{}{}
// set up a temp dir for the itemCollector to use to temporarily
// store items as they're scraped from the API.
tempDir, err := ioutil.TempDir("", "")
if err != nil {
return errors.Wrap(err, "error creating temp dir for backup")
}
defer os.RemoveAll(tempDir)
collector := &itemCollector{
log: log,
backupRequest: backupRequest,
discoveryHelper: kb.discoveryHelper,
dynamicFactory: kb.dynamicFactory,
cohabitatingResources: cohabitatingResources(),
dir: tempDir,
pageSize: kb.clientPageSize,
}
// Get item list from itemoperation.BackupOperation.Spec.ItemsToUpdate
var resourceIDs []velero.ResourceIdentifier
for _, operation := range asyncBIAOperations {
if len(operation.Spec.ItemsToUpdate) != 0 {
resourceIDs = append(resourceIDs, operation.Spec.ItemsToUpdate...)
}
}
items := collector.getItemsFromResourceIdentifiers(resourceIDs)
log.WithField("progress", "").Infof("Collected %d items from the async BIA operations ItemsToUpdate list", len(items))
itemBackupper := &itemBackupper{
backupRequest: backupRequest,
tarWriter: tw,
dynamicFactory: kb.dynamicFactory,
discoveryHelper: kb.discoveryHelper,
itemHookHandler: &hook.NoOpItemHookHandler{},
}
updateFiles := make(map[string]FileForArchive)
backedUpGroupResources := map[schema.GroupResource]bool{}
totalItems := len(items)
for i, item := range items {
log.WithFields(map[string]interface{}{
"progress": "",
"resource": item.groupResource.String(),
"namespace": item.namespace,
"name": item.name,
}).Infof("Processing item")
// use an anonymous func so we can defer-close/remove the file
// as soon as we're done with it
func() {
var unstructured unstructured.Unstructured
f, err := os.Open(item.path)
if err != nil {
log.WithError(errors.WithStack(err)).Error("Error opening file containing item")
return
}
defer f.Close()
defer os.Remove(f.Name())
if err := json.NewDecoder(f).Decode(&unstructured); err != nil {
log.WithError(errors.WithStack(err)).Error("Error decoding JSON from file")
return
}
backedUp, itemFiles := kb.finalizeItem(log, item.groupResource, itemBackupper, &unstructured, item.preferredGVR)
if backedUp {
backedUpGroupResources[item.groupResource] = true
for _, itemFile := range itemFiles {
updateFiles[itemFile.FilePath] = itemFile
}
}
}()
// updated total is computed as "how many items we've backed up so far, plus
// how many items we know of that are remaining"
totalItems = len(backupRequest.BackedUpItems) + (len(items) - (i + 1))
log.WithFields(map[string]interface{}{
"progress": "",
"resource": item.groupResource.String(),
"namespace": item.namespace,
"name": item.name,
}).Infof("Updated %d items out of an estimated total of %d (estimate will change throughout the backup finalizer)", len(backupRequest.BackedUpItems), totalItems)
}
// write new tar archive, replacing files in the original with content from updateFiles where paths match
if err := buildFinalTarball(tr, tw, updateFiles); err != nil {
log.WithError(err).Error("Error building final tarball")
return err
}
log.WithField("progress", "").Infof("Updated a total of %d items", len(backupRequest.BackedUpItems))
return nil
}
func buildFinalTarball(tr *tar.Reader, tw *tar.Writer, updateFiles map[string]FileForArchive) error {
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
return errors.WithStack(err)
}
newFile, ok := updateFiles[header.Name]
if ok {
// add updated file to archive, skip over tr file content
if err := tw.WriteHeader(newFile.Header); err != nil {
return errors.WithStack(err)
}
if _, err := tw.Write(newFile.FileBytes); err != nil {
return errors.WithStack(err)
}
delete(updateFiles, header.Name)
// skip over file contents from old tarball
_, err := io.ReadAll(tr)
if err != nil {
return errors.WithStack(err)
}
} else {
// Add original content to new tarball, as item wasn't updated
oldContents, err := io.ReadAll(tr)
if err != nil {
return errors.WithStack(err)
}
if err := tw.WriteHeader(header); err != nil {
return errors.WithStack(err)
}
if _, err := tw.Write(oldContents); err != nil {
return errors.WithStack(err)
}
}
}
// iterate over any remaining map entries, which represent updated items that
// were not in the original backup tarball
for _, newFile := range updateFiles {
if err := tw.WriteHeader(newFile.Header); err != nil {
return errors.WithStack(err)
}
if _, err := tw.Write(newFile.FileBytes); err != nil {
return errors.WithStack(err)
}
}
return nil
}
type tarWriter interface {
io.Closer
Write([]byte) (int, error)


@ -45,6 +45,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
biav2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
@ -1140,17 +1141,18 @@ type recordResourcesAction struct {
backups []velerov1.Backup
additionalItems []velero.ResourceIdentifier
operationID string
itemsToUpdate []velero.ResourceIdentifier
}
func (a *recordResourcesAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (a *recordResourcesAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
metadata, err := meta.Accessor(item)
if err != nil {
return item, a.additionalItems, a.operationID, err
return item, a.additionalItems, a.operationID, a.itemsToUpdate, err
}
a.ids = append(a.ids, kubeutil.NamespaceAndName(metadata))
a.backups = append(a.backups, *backup)
return item, a.additionalItems, a.operationID, nil
return item, a.additionalItems, a.operationID, a.itemsToUpdate, nil
}
func (a *recordResourcesAction) AppliesTo() (velero.ResourceSelector, error) {
@ -1165,6 +1167,10 @@ func (a *recordResourcesAction) Cancel(operationID string, backup *velerov1.Back
return nil
}
func (a *recordResourcesAction) Name() string {
return ""
}
func (a *recordResourcesAction) ForResource(resource string) *recordResourcesAction {
a.selector.IncludedResources = append(a.selector.IncludedResources, resource)
return a
@ -1462,7 +1468,7 @@ func (a *appliesToErrorAction) AppliesTo() (velero.ResourceSelector, error) {
return velero.ResourceSelector{}, errors.New("error calling AppliesTo")
}
func (a *appliesToErrorAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (a *appliesToErrorAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
panic("not implemented")
}
@ -1474,6 +1480,10 @@ func (a *appliesToErrorAction) Cancel(operationID string, backup *velerov1.Backu
panic("not implemented")
}
func (a *appliesToErrorAction) Name() string {
return ""
}
// TestBackupActionModifications runs backups with backup item actions that make modifications
// to items in their Execute(...) methods and verifies that these modifications are
// persisted to the backup tarball. Verification is done by inspecting the file contents
@ -1483,16 +1493,16 @@ func TestBackupActionModifications(t *testing.T) {
// method modifies the item being passed in by calling the 'modify' function on it.
modifyingActionGetter := func(modify func(*unstructured.Unstructured)) *pluggableAction {
return &pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
obj, ok := item.(*unstructured.Unstructured)
if !ok {
return nil, nil, "", errors.Errorf("unexpected type %T", item)
return nil, nil, "", nil, errors.Errorf("unexpected type %T", item)
}
res := obj.DeepCopy()
modify(res)
return res, nil, "", nil
return res, nil, "", nil, nil
},
}
}
@ -1621,13 +1631,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
actions: []biav2.BackupItemAction{
&pluggableAction{
selector: velero.ResourceSelector{IncludedNamespaces: []string{"ns-1"}},
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.Pods, Namespace: "ns-2", Name: "pod-2"},
{GroupResource: kuberesource.Pods, Namespace: "ns-3", Name: "pod-3"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -1652,13 +1662,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
},
actions: []biav2.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.Pods, Namespace: "ns-2", Name: "pod-2"},
{GroupResource: kuberesource.Pods, Namespace: "ns-3", Name: "pod-3"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -1682,13 +1692,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
},
actions: []biav2.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-1"},
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-2"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -1715,13 +1725,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
},
actions: []biav2.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-1"},
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-2"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -1745,13 +1755,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
},
actions: []biav2.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-1"},
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-2"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -1776,13 +1786,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
},
actions: []biav2.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-1"},
{GroupResource: kuberesource.PersistentVolumes, Name: "pv-2"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -1807,13 +1817,13 @@ func TestBackupActionAdditionalItems(t *testing.T) {
actions: []biav2.BackupItemAction{
&pluggableAction{
selector: velero.ResourceSelector{IncludedNamespaces: []string{"ns-1"}},
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
{GroupResource: kuberesource.Pods, Namespace: "ns-4", Name: "pod-4"},
{GroupResource: kuberesource.Pods, Namespace: "ns-5", Name: "pod-5"},
}
return item, additionalItems, "", nil
return item, additionalItems, "", nil, nil
},
},
},
@ -2292,6 +2302,167 @@ func TestBackupWithSnapshots(t *testing.T) {
}
}
// TestBackupWithAsyncOperations runs backups which return operationIDs and
// verifies that the itemoperations are tracked as appropriate. Verification is done by
// looking at the backup request's itemOperationsList field.
func TestBackupWithAsyncOperations(t *testing.T) {
// completedOperationAction is a *pluggableAction, whose Execute(...)
// method returns an operationID which will always be done when calling Progress.
completedOperationAction := &pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
obj, ok := item.(*unstructured.Unstructured)
if !ok {
return nil, nil, "", nil, errors.Errorf("unexpected type %T", item)
}
return obj, nil, obj.GetName() + "-1", nil, nil
},
progressFunc: func(operationID string, backup *velerov1.Backup) (velero.OperationProgress, error) {
return velero.OperationProgress{
Completed: true,
Description: "Done!",
}, nil
},
}
// incompleteOperationAction is a *pluggableAction, whose Execute(...)
// method returns an operationID which will never be done when calling Progress.
incompleteOperationAction := &pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
obj, ok := item.(*unstructured.Unstructured)
if !ok {
return nil, nil, "", nil, errors.Errorf("unexpected type %T", item)
}
return obj, nil, obj.GetName() + "-1", nil, nil
},
progressFunc: func(operationID string, backup *velerov1.Backup) (velero.OperationProgress, error) {
return velero.OperationProgress{
Completed: false,
Description: "Working...",
}, nil
},
}
// noOperationAction is a *pluggableAction, whose Execute(...)
// method does not return an operationID.
noOperationAction := &pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
obj, ok := item.(*unstructured.Unstructured)
if !ok {
return nil, nil, "", nil, errors.Errorf("unexpected type %T", item)
}
return obj, nil, "", nil, nil
},
}
tests := []struct {
name string
req *Request
apiResources []*test.APIResource
actions []biav2.BackupItemAction
want []*itemoperation.BackupOperation
}{
{
name: "action that starts a short-running process records operation",
req: &Request{
Backup: defaultBackup().Result(),
},
apiResources: []*test.APIResource{
test.Pods(
builder.ForPod("ns-1", "pod-1").Result(),
),
},
actions: []biav2.BackupItemAction{
completedOperationAction,
},
want: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-1",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1"},
OperationID: "pod-1-1",
},
Status: itemoperation.OperationStatus{
Phase: "InProgress",
},
},
},
},
{
name: "action that starts a long-running process records operation",
req: &Request{
Backup: defaultBackup().Result(),
},
apiResources: []*test.APIResource{
test.Pods(
builder.ForPod("ns-1", "pod-2").Result(),
),
},
actions: []biav2.BackupItemAction{
incompleteOperationAction,
},
want: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-1",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-2"},
OperationID: "pod-2-1",
},
Status: itemoperation.OperationStatus{
Phase: "InProgress",
},
},
},
},
{
name: "action that has no operation doesn't record one",
req: &Request{
Backup: defaultBackup().Result(),
},
apiResources: []*test.APIResource{
test.Pods(
builder.ForPod("ns-1", "pod-3").Result(),
),
},
actions: []biav2.BackupItemAction{
noOperationAction,
},
want: []*itemoperation.BackupOperation{},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
var (
h = newHarness(t)
backupFile = bytes.NewBuffer([]byte{})
)
for _, resource := range tc.apiResources {
h.addItems(t, resource)
}
err := h.backupper.Backup(h.log, tc.req, backupFile, tc.actions, nil)
assert.NoError(t, err)
resultOper := *tc.req.GetItemOperationsList()
// set want Created times so it won't fail the assert.Equal test
for i, wantOper := range tc.want {
wantOper.Status.Created = resultOper[i].Status.Created
}
assert.Equal(t, tc.want, *tc.req.GetItemOperationsList())
})
}
}
// TestBackupWithInvalidHooks runs backups with invalid hook specifications and verifies
// that an error is returned.
func TestBackupWithInvalidHooks(t *testing.T) {
@ -2767,16 +2938,17 @@ func TestBackupWithPodVolume(t *testing.T) {
}
}
// pluggableAction is a backup item action that can be plugged with an Execute
// function body at runtime.
// pluggableAction is a backup item action that can be plugged with Execute
// and Progress function bodies at runtime.
type pluggableAction struct {
selector velero.ResourceSelector
executeFunc func(runtime.Unstructured, *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error)
selector velero.ResourceSelector
executeFunc func(runtime.Unstructured, *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
progressFunc func(string, *velerov1.Backup) (velero.OperationProgress, error)
}
func (a *pluggableAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (a *pluggableAction) Execute(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
if a.executeFunc == nil {
return item, nil, "", nil
return item, nil, "", nil, nil
}
return a.executeFunc(item, backup)
@ -2787,13 +2959,21 @@ func (a *pluggableAction) AppliesTo() (velero.ResourceSelector, error) {
}
func (a *pluggableAction) Progress(operationID string, backup *velerov1.Backup) (velero.OperationProgress, error) {
return velero.OperationProgress{}, nil
if a.progressFunc == nil {
return velero.OperationProgress{}, nil
}
return a.progressFunc(operationID, backup)
}
func (a *pluggableAction) Cancel(operationID string, backup *velerov1.Backup) error {
return nil
}
func (a *pluggableAction) Name() string {
return ""
}
type harness struct {
*test.APIServer
backupper *kubernetesBackupper


@ -20,7 +20,6 @@ import (
"archive/tar"
"encoding/json"
"fmt"
"path/filepath"
"strings"
"time"
@ -39,10 +38,13 @@ import (
"github.com/vmware-tanzu/velero/internal/hook"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/archive"
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/features"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
vsv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
"github.com/vmware-tanzu/velero/pkg/podvolume"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
@ -68,52 +70,87 @@ type itemBackupper struct {
snapshotLocationVolumeSnapshotters map[string]vsv1.VolumeSnapshotter
}
type FileForArchive struct {
FilePath string
Header *tar.Header
FileBytes []byte
}
// finalizeItem backs up an individual item and returns its content to replace previous content
// in the backup tarball
// In addition to the error return, finalizeItem also returns a bool indicating whether the item
// was actually backed up and a slice of file paths and file contents to replace the data in the original tarball.
func (ib *itemBackupper) finalizeItem(logger logrus.FieldLogger, obj runtime.Unstructured, groupResource schema.GroupResource, preferredGVR schema.GroupVersionResource) (bool, []FileForArchive, error) {
return ib.backupItemInternal(logger, obj, groupResource, preferredGVR, true, true)
}
// backupItem backs up an individual item to tarWriter. The item may be excluded based on the
// namespaces IncludesExcludes list.
// In addition to the error return, backupItem also returns a bool indicating whether the item
// was actually backed up.
func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstructured, groupResource schema.GroupResource, preferredGVR schema.GroupVersionResource, mustInclude bool) (bool, error) {
selectedForBackup, files, err := ib.backupItemInternal(logger, obj, groupResource, preferredGVR, mustInclude, false)
// return if not selected, an error occurred, or there are no files to add
if !selectedForBackup || err != nil || len(files) == 0 {
return selectedForBackup, err
}
for _, file := range files {
if err := ib.tarWriter.WriteHeader(file.Header); err != nil {
return false, errors.WithStack(err)
}
if _, err := ib.tarWriter.Write(file.FileBytes); err != nil {
return false, errors.WithStack(err)
}
}
return true, nil
}
func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runtime.Unstructured, groupResource schema.GroupResource, preferredGVR schema.GroupVersionResource, mustInclude, finalize bool) (bool, []FileForArchive, error) {
var itemFiles []FileForArchive
metadata, err := meta.Accessor(obj)
if err != nil {
return false, err
return false, itemFiles, err
}
namespace := metadata.GetNamespace()
name := metadata.GetName()
log := logger.WithField("name", name)
log = log.WithField("resource", groupResource.String())
log = log.WithField("namespace", namespace)
log := logger.WithFields(map[string]interface{}{
"name": name,
"resource": groupResource.String(),
"namespace": namespace,
})
if mustInclude {
log.Infof("Skipping the exclusion checks for this resource")
} else {
if metadata.GetLabels()[excludeFromBackupLabel] == "true" {
log.Infof("Excluding item because it has label %s=true", excludeFromBackupLabel)
return false, nil
return false, itemFiles, nil
}
// NOTE: we have to re-check namespace & resource includes/excludes because it's possible that
// backupItem can be invoked by a custom action.
if namespace != "" && !ib.backupRequest.NamespaceIncludesExcludes.ShouldInclude(namespace) {
log.Info("Excluding item because namespace is excluded")
return false, nil
return false, itemFiles, nil
}
// NOTE: we specifically allow namespaces to be backed up even if IncludeClusterResources is
// false.
if namespace == "" && groupResource != kuberesource.Namespaces && ib.backupRequest.Spec.IncludeClusterResources != nil && !*ib.backupRequest.Spec.IncludeClusterResources {
log.Info("Excluding item because resource is cluster-scoped and backup.spec.includeClusterResources is false")
return false, nil
return false, itemFiles, nil
}
if !ib.backupRequest.ResourceIncludesExcludes.ShouldInclude(groupResource.String()) {
log.Info("Excluding item because resource is excluded")
return false, nil
return false, itemFiles, nil
}
}
if metadata.GetDeletionTimestamp() != nil {
log.Info("Skipping item because it's being deleted.")
return false, nil
return false, itemFiles, nil
}
key := itemKey{
@ -125,24 +162,23 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
if _, exists := ib.backupRequest.BackedUpItems[key]; exists {
log.Info("Skipping item because it's already been backed up.")
// returning true since this item *is* in the backup, even though we're not backing it up here
return true, nil
return true, itemFiles, nil
}
ib.backupRequest.BackedUpItems[key] = struct{}{}
log.Info("Backing up item")
log.Debug("Executing pre hooks")
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePre); err != nil {
return false, err
}
var (
backupErrs []error
pod *corev1api.Pod
pvbVolumes []string
)
if groupResource == kuberesource.Pods {
log.Debug("Executing pre hooks")
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePre); err != nil {
return false, itemFiles, err
}
if !finalize && groupResource == kuberesource.Pods {
// pod needs to be initialized for the unstructured converter
pod = new(corev1api.Pod)
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pod); err != nil {
@ -166,7 +202,6 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
}).Info("Pod volume uses a persistent volume claim which has already been backed up from another pod, skipping.")
continue
}
pvbVolumes = append(pvbVolumes, volume)
}
}
@ -174,11 +209,9 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
// capture the version of the object before invoking plugin actions as the plugin may update
// the group version of the object.
// group version of this object
// Used on filepath to back up all groups and versions
version := resourceVersion(obj)
versionPath := resourceVersion(obj)
updatedObj, err := ib.executeActions(log, obj, groupResource, name, namespace, metadata)
updatedObj, err := ib.executeActions(log, obj, groupResource, name, namespace, metadata, finalize)
if err != nil {
backupErrs = append(backupErrs, err)
@ -187,24 +220,23 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
if err := ib.itemHookHandler.HandleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hook.PhasePost); err != nil {
backupErrs = append(backupErrs, err)
}
return false, kubeerrs.NewAggregate(backupErrs)
return false, itemFiles, kubeerrs.NewAggregate(backupErrs)
}
obj = updatedObj
if metadata, err = meta.Accessor(obj); err != nil {
return false, errors.WithStack(err)
return false, itemFiles, errors.WithStack(err)
}
// update name and namespace in case they were modified in an action
name = metadata.GetName()
namespace = metadata.GetNamespace()
if groupResource == kuberesource.PersistentVolumes {
if !finalize && groupResource == kuberesource.PersistentVolumes {
if err := ib.takePVSnapshot(obj, log); err != nil {
backupErrs = append(backupErrs, err)
}
}
if groupResource == kuberesource.Pods && pod != nil {
if !finalize && groupResource == kuberesource.Pods && pod != nil {
// this function will return partial results, so process podVolumeBackups
// even if there are errors.
podVolumeBackups, errs := ib.backupPodVolumes(log, pod, pvbVolumes)
@ -224,33 +256,27 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
}
if len(backupErrs) != 0 {
return false, kubeerrs.NewAggregate(backupErrs)
}
// Getting the preferred group version of this resource
preferredVersion := preferredGVR.Version
var filePath string
// API Group version is now part of path of backup as a subdirectory
// it will add a prefix to subdirectory name for the preferred version
versionPath := version
if version == preferredVersion {
versionPath = version + velerov1api.PreferredVersionDir
}
if namespace != "" {
filePath = filepath.Join(velerov1api.ResourcesDir, groupResource.String(), versionPath, velerov1api.NamespaceScopedDir, namespace, name+".json")
} else {
filePath = filepath.Join(velerov1api.ResourcesDir, groupResource.String(), versionPath, velerov1api.ClusterScopedDir, name+".json")
return false, itemFiles, kubeerrs.NewAggregate(backupErrs)
}
itemBytes, err := json.Marshal(obj.UnstructuredContent())
if err != nil {
return false, errors.WithStack(err)
return false, itemFiles, errors.WithStack(err)
}
if versionPath == preferredGVR.Version {
// backing up the preferred version without the API Group version in the path - for backward compatibility
log.Debugf("Resource %s/%s, version= %s, preferredVersion=%s", groupResource.String(), name, versionPath, preferredGVR.Version)
itemFiles = append(itemFiles, getFileForArchive(namespace, name, groupResource.String(), "", itemBytes))
versionPath = versionPath + velerov1api.PreferredVersionDir
}
itemFiles = append(itemFiles, getFileForArchive(namespace, name, groupResource.String(), versionPath, itemBytes))
return true, itemFiles, nil
}
func getFileForArchive(namespace, name, groupResource, versionPath string, itemBytes []byte) FileForArchive {
filePath := archive.GetVersionedItemFilePath("", groupResource, namespace, name, versionPath)
hdr := &tar.Header{
Name: filePath,
Size: int64(len(itemBytes)),
@ -258,43 +284,7 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
Mode: 0755,
ModTime: time.Now(),
}
if err := ib.tarWriter.WriteHeader(hdr); err != nil {
return false, errors.WithStack(err)
}
if _, err := ib.tarWriter.Write(itemBytes); err != nil {
return false, errors.WithStack(err)
}
// backing up the preferred version backup without API Group version on path - this is for backward compatibility
log.Debugf("Resource %s/%s, version= %s, preferredVersion=%s", groupResource.String(), name, version, preferredVersion)
if version == preferredVersion {
if namespace != "" {
filePath = filepath.Join(velerov1api.ResourcesDir, groupResource.String(), velerov1api.NamespaceScopedDir, namespace, name+".json")
} else {
filePath = filepath.Join(velerov1api.ResourcesDir, groupResource.String(), velerov1api.ClusterScopedDir, name+".json")
}
hdr = &tar.Header{
Name: filePath,
Size: int64(len(itemBytes)),
Typeflag: tar.TypeReg,
Mode: 0755,
ModTime: time.Now(),
}
if err := ib.tarWriter.WriteHeader(hdr); err != nil {
return false, errors.WithStack(err)
}
if _, err := ib.tarWriter.Write(itemBytes); err != nil {
return false, errors.WithStack(err)
}
}
return true, nil
return FileForArchive{FilePath: filePath, Header: hdr, FileBytes: itemBytes}
}
// backupPodVolumes triggers pod volume backups of the specified pod volumes, and returns a list of PodVolumeBackups
@ -318,6 +308,7 @@ func (ib *itemBackupper) executeActions(
groupResource schema.GroupResource,
name, namespace string,
metadata metav1.Object,
finalize bool,
) (runtime.Unstructured, error) {
for _, action := range ib.backupRequest.ResolvedActions {
if !action.ShouldUse(groupResource, namespace, metadata, log) {
@ -325,14 +316,49 @@ func (ib *itemBackupper) executeActions(
}
log.Info("Executing custom action")
// Note: we're ignoring the operationID returned from Execute for now, it will be used
// with the async plugin action implementation
updatedItem, additionalItemIdentifiers, _, err := action.Execute(obj, ib.backupRequest.Backup)
updatedItem, additionalItemIdentifiers, operationID, itemsToUpdate, err := action.Execute(obj, ib.backupRequest.Backup)
if err != nil {
return nil, errors.Wrapf(err, "error executing custom action (groupResource=%s, namespace=%s, name=%s)", groupResource.String(), namespace, name)
}
u := &unstructured.Unstructured{Object: updatedItem.UnstructuredContent()}
mustInclude := u.GetAnnotations()[mustIncludeAdditionalItemAnnotation] == "true"
// remove the annotation as it's for communication between BIA and velero server,
// we don't want the resource to be restored with this annotation.
if _, ok := u.GetAnnotations()[mustIncludeAdditionalItemAnnotation]; ok {
delete(u.GetAnnotations(), mustIncludeAdditionalItemAnnotation)
}
obj = u
if finalize {
continue
}
// If async plugin started async operation, add it to the ItemOperations list
// ignore during finalize phase
if operationID != "" {
resourceIdentifier := velero.ResourceIdentifier{
GroupResource: groupResource,
Namespace: namespace,
Name: name,
}
now := metav1.Now()
newOperation := itemoperation.BackupOperation{
Spec: itemoperation.BackupOperationSpec{
BackupName: ib.backupRequest.Backup.Name,
BackupUID: string(ib.backupRequest.Backup.UID),
BackupItemAction: action.Name(),
ResourceIdentifier: resourceIdentifier,
OperationID: operationID,
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &now,
},
}
newOperation.Spec.ItemsToUpdate = itemsToUpdate
itemOperList := ib.backupRequest.GetItemOperationsList()
*itemOperList = append(*itemOperList, &newOperation)
}
for _, additionalItem := range additionalItemIdentifiers {
gvr, resource, err := ib.discoveryHelper.ResourceFor(additionalItem.GroupResource.WithVersion(""))
@@ -363,12 +389,6 @@ func (ib *itemBackupper) executeActions(
return nil, err
}
}
-// remove the annotation as it's for communication between BIA and velero server,
-// we don't want the resource be restored with this annotation.
-if _, ok := u.GetAnnotations()[mustIncludeAdditionalItemAnnotation]; ok {
-	delete(u.GetAnnotations(), mustIncludeAdditionalItemAnnotation)
-}
-obj = u
}
return obj, nil
}
@@ -37,6 +37,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/util/collections"
)
@@ -58,11 +59,27 @@ type kubernetesResource struct {
namespace, name, path string
}
// getItemsFromResourceIdentifiers converts ResourceIdentifiers to
// kubernetesResources
func (r *itemCollector) getItemsFromResourceIdentifiers(resourceIDs []velero.ResourceIdentifier) []*kubernetesResource {
grResourceIDsMap := make(map[schema.GroupResource][]velero.ResourceIdentifier)
for _, resourceID := range resourceIDs {
grResourceIDsMap[resourceID.GroupResource] = append(grResourceIDsMap[resourceID.GroupResource], resourceID)
}
return r.getItems(grResourceIDsMap)
}
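The helper above reduces to one core step: bucketing the supplied ResourceIdentifiers by their GroupResource so each API resource is visited in a single pass. A minimal, self-contained sketch of that grouping, using simplified stand-ins for `schema.GroupResource` and `velero.ResourceIdentifier` (all names here are illustrative, not Velero's actual types):

```go
package main

import "fmt"

// GroupResource and ResourceIdentifier are simplified stand-ins for the
// schema.GroupResource and velero.ResourceIdentifier types used above.
type GroupResource struct{ Group, Resource string }

type ResourceIdentifier struct {
	GroupResource
	Namespace, Name string
}

// groupByGR buckets identifiers by their GroupResource, mirroring the map
// that getItemsFromResourceIdentifiers builds before calling getItems.
func groupByGR(ids []ResourceIdentifier) map[GroupResource][]ResourceIdentifier {
	m := make(map[GroupResource][]ResourceIdentifier)
	for _, id := range ids {
		m[id.GroupResource] = append(m[id.GroupResource], id)
	}
	return m
}

func main() {
	m := groupByGR([]ResourceIdentifier{
		{GroupResource{"", "pods"}, "ns1", "pod1"},
		{GroupResource{"", "pods"}, "ns2", "pod2"},
		{GroupResource{"apps", "deployments"}, "ns1", "dep1"},
	})
	// two buckets: pods (2 identifiers) and deployments (1 identifier)
	fmt.Println(len(m), len(m[GroupResource{"", "pods"}]))
}
```

Grouping first means the collector only lists each API resource once, however many identifiers it was handed.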
// getAllItems gets all relevant items from all API groups.
func (r *itemCollector) getAllItems() []*kubernetesResource {
return r.getItems(nil)
}
// getItems gets all relevant items from the API groups, filtered to the
// supplied ResourceIdentifiers when resourceIDsMap is non-nil.
func (r *itemCollector) getItems(resourceIDsMap map[schema.GroupResource][]velero.ResourceIdentifier) []*kubernetesResource {
var resources []*kubernetesResource
for _, group := range r.discoveryHelper.Resources() {
-groupItems, err := r.getGroupItems(r.log, group)
+groupItems, err := r.getGroupItems(r.log, group, resourceIDsMap)
if err != nil {
r.log.WithError(err).WithField("apiGroup", group.String()).Error("Error collecting resources from API group")
continue
@@ -75,7 +92,7 @@ func (r *itemCollector) getAllItems() []*kubernetesResource {
}
// getGroupItems collects all relevant items from a single API group.
-func (r *itemCollector) getGroupItems(log logrus.FieldLogger, group *metav1.APIResourceList) ([]*kubernetesResource, error) {
+func (r *itemCollector) getGroupItems(log logrus.FieldLogger, group *metav1.APIResourceList, resourceIDsMap map[schema.GroupResource][]velero.ResourceIdentifier) ([]*kubernetesResource, error) {
log = log.WithField("group", group.GroupVersion)
log.Infof("Getting items for group")
@@ -93,7 +110,7 @@ func (r *itemCollector) getGroupItems(log logrus.FieldLogger, group *metav1.APIR
var items []*kubernetesResource
for _, resource := range group.APIResources {
-resourceItems, err := r.getResourceItems(log, gv, resource)
+resourceItems, err := r.getResourceItems(log, gv, resource, resourceIDsMap)
if err != nil {
log.WithError(err).WithField("resource", resource.String()).Error("Error getting items for resource")
continue
@@ -164,7 +181,7 @@ func getOrderedResourcesForType(orderedResources map[string]string, resourceType
}
// getResourceItems collects all relevant items for a given group-version-resource.
-func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.GroupVersion, resource metav1.APIResource) ([]*kubernetesResource, error) {
+func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.GroupVersion, resource metav1.APIResource, resourceIDsMap map[schema.GroupResource][]velero.ResourceIdentifier) ([]*kubernetesResource, error) {
log = log.WithField("resource", resource.Name)
log.Info("Getting items for resource")
@@ -182,6 +199,45 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
return nil, errors.WithStack(err)
}
// If we have a resourceIDs map, then only return items listed in it
if resourceIDsMap != nil {
resourceIDs, ok := resourceIDsMap[gr]
if !ok {
log.Info("Skipping resource because no items found in supplied ResourceIdentifier list")
return nil, nil
}
var items []*kubernetesResource
for _, resourceID := range resourceIDs {
log.WithFields(
logrus.Fields{
"namespace": resourceID.Namespace,
"name": resourceID.Name,
},
).Infof("Getting item")
resourceClient, err := r.dynamicFactory.ClientForGroupVersionResource(gv, resource, resourceID.Namespace)
if err != nil {
	log.WithError(errors.WithStack(err)).Error("Error getting client for resource")
	continue
}
unstructured, err := resourceClient.Get(resourceID.Name, metav1.GetOptions{})
if err != nil {
log.WithError(errors.WithStack(err)).Error("Error getting item")
continue
}
path, err := r.writeToFile(unstructured)
if err != nil {
log.WithError(err).Error("Error writing item to file")
continue
}
items = append(items, &kubernetesResource{
groupResource: gr,
preferredGVR: preferredGVR,
namespace: resourceID.Namespace,
name: resourceID.Name,
path: path,
})
}
return items, nil
}
// If the resource we are backing up is NOT namespaces, and it is cluster-scoped, check to see if
// we should include it based on the IncludeClusterResources setting.
if gr != kuberesource.Namespaces && clusterScoped {
@@ -24,6 +24,7 @@ import (
"github.com/vmware-tanzu/velero/internal/hook"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/util/collections"
"github.com/vmware-tanzu/velero/pkg/volume"
@@ -51,6 +52,16 @@ type Request struct {
PodVolumeBackups []*velerov1api.PodVolumeBackup
BackedUpItems map[itemKey]struct{}
CSISnapshots []snapshotv1api.VolumeSnapshot
itemOperationsList *[]*itemoperation.BackupOperation
}
// GetItemOperationsList returns ItemOperationsList, initializing it if necessary
func (r *Request) GetItemOperationsList() *[]*itemoperation.BackupOperation {
if r.itemOperationsList == nil {
list := []*itemoperation.BackupOperation{}
r.itemOperationsList = &list
}
return r.itemOperationsList
}
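GetItemOperationsList returns a pointer to a lazily initialized slice, so every caller can append through the pointer without nil checks and all callers share one backing list. The same pattern in isolation (the `holder` type is hypothetical, for illustration only):

```go
package main

import "fmt"

// holder mimics Request's lazily initialized pointer-to-slice field.
type holder struct {
	list *[]string
}

// getList initializes the pointer on first use, as GetItemOperationsList does,
// so every caller appends to the same backing slice.
func (h *holder) getList() *[]string {
	if h.list == nil {
		l := []string{}
		h.list = &l
	}
	return h.list
}

func main() {
	h := &holder{}
	*h.getList() = append(*h.getList(), "operation-1")
	// a later caller observes the earlier append
	fmt.Println(len(*h.getList())) // 1
}
```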
// BackupResourceList returns the list of backed up resources grouped by the API
@@ -245,3 +245,9 @@ func (b *BackupBuilder) CSISnapshotTimeout(timeout time.Duration) *BackupBuilder
b.object.Spec.CSISnapshotTimeout.Duration = timeout
return b
}
// ItemOperationTimeout sets the Backup's ItemOperationTimeout
func (b *BackupBuilder) ItemOperationTimeout(timeout time.Duration) *BackupBuilder {
b.object.Spec.ItemOperationTimeout.Duration = timeout
return b
}
@@ -99,6 +99,7 @@ type CreateOptions struct {
FromSchedule string
OrderedResources string
CSISnapshotTimeout time.Duration
ItemOperationTimeout time.Duration
client veleroclient.Interface
}
@@ -124,6 +125,7 @@ func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
flags.VarP(&o.Selector, "selector", "l", "Only back up resources matching this label selector.")
flags.StringVar(&o.OrderedResources, "ordered-resources", "", "Mapping Kinds to an ordered list of specific resources of that Kind. Resource names are separated by commas and their names are in format 'namespace/resourcename'. For cluster scope resource, simply use resource name. Key-value pairs in the mapping are separated by semi-colon. Example: 'pods=ns1/pod1,ns1/pod2;persistentvolumeclaims=ns1/pvc4,ns1/pvc8'. Optional.")
flags.DurationVar(&o.CSISnapshotTimeout, "csi-snapshot-timeout", o.CSISnapshotTimeout, "How long to wait for CSI snapshot creation before timeout.")
flags.DurationVar(&o.ItemOperationTimeout, "item-operation-timeout", o.ItemOperationTimeout, "How long to wait for async plugin operations before timeout.")
f := flags.VarPF(&o.SnapshotVolumes, "snapshot-volumes", "", "Take snapshots of PersistentVolumes as part of the backup. If the parameter is not set, it is treated as setting to 'true'.")
// this allows the user to just specify "--snapshot-volumes" as shorthand for "--snapshot-volumes=true"
// like a normal bool flag
@@ -335,7 +337,8 @@ func (o *CreateOptions) BuildBackup(namespace string) (*velerov1api.Backup, erro
TTL(o.TTL).
StorageLocation(o.StorageLocation).
VolumeSnapshotLocations(o.SnapshotLocations...).
-CSISnapshotTimeout(o.CSISnapshotTimeout)
+CSISnapshotTimeout(o.CSISnapshotTimeout).
+ItemOperationTimeout(o.ItemOperationTimeout)
if len(o.OrderedResources) > 0 {
orders, err := ParseOrderedResources(o.OrderedResources)
if err != nil {
@@ -37,6 +37,7 @@ func TestCreateOptions_BuildBackup(t *testing.T) {
o.OrderedResources = "pods=p1,p2;persistentvolumeclaims=pvc1,pvc2"
orders, err := ParseOrderedResources(o.OrderedResources)
o.CSISnapshotTimeout = 20 * time.Minute
o.ItemOperationTimeout = 20 * time.Minute
assert.NoError(t, err)
backup, err := o.BuildBackup(testNamespace)
@@ -49,6 +50,7 @@ func TestCreateOptions_BuildBackup(t *testing.T) {
IncludeClusterResources: o.IncludeClusterResources.Value,
OrderedResources: orders,
CSISnapshotTimeout: metav1.Duration{Duration: o.CSISnapshotTimeout},
ItemOperationTimeout: metav1.Duration{Duration: o.ItemOperationTimeout},
}, backup.Spec)
assert.Equal(t, map[string]string{
@@ -63,8 +63,8 @@ func NewLogsCommand(f client.Factory) *cobra.Command {
}
switch backup.Status.Phase {
-case velerov1api.BackupPhaseCompleted, velerov1api.BackupPhasePartiallyFailed, velerov1api.BackupPhaseFailed:
-	// terminal phases, do nothing.
+case velerov1api.BackupPhaseCompleted, velerov1api.BackupPhasePartiallyFailed, velerov1api.BackupPhaseFailed, velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed:
+	// terminal and waiting for plugin operations phases, do nothing.
default:
cmd.Exit("Logs for backup %q are not available until it's finished processing. Please wait "+
"until the backup has a phase of Completed or Failed and try again.", backupName)
@@ -146,6 +146,7 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
DefaultVolumesToFsBackup: o.BackupOptions.DefaultVolumesToFsBackup.Value,
OrderedResources: orders,
CSISnapshotTimeout: metav1.Duration{Duration: o.BackupOptions.CSISnapshotTimeout},
ItemOperationTimeout: metav1.Duration{Duration: o.BackupOptions.ItemOperationTimeout},
},
Schedule: o.Schedule,
UseOwnerReferencesInBackup: &o.UseOwnerReferencesInBackup,
@@ -102,7 +102,8 @@ const (
// the default TTL for a backup
defaultBackupTTL = 30 * 24 * time.Hour
-defaultCSISnapshotTimeout = 10 * time.Minute
+defaultCSISnapshotTimeout = 10 * time.Minute
+defaultItemOperationTimeout = 60 * time.Minute
// defaultCredentialsDirectory is the path on disk where credential
// files will be written to
@@ -114,6 +115,7 @@ type serverConfig struct {
pluginDir, metricsAddress, defaultBackupLocation string
backupSyncPeriod, podVolumeOperationTimeout, resourceTerminatingTimeout time.Duration
defaultBackupTTL, storeValidationFrequency, defaultCSISnapshotTimeout time.Duration
defaultItemOperationTimeout time.Duration
restoreResourcePriorities restore.Priorities
defaultVolumeSnapshotLocations map[string]string
restoreOnly bool
@@ -125,6 +127,7 @@ type serverConfig struct {
formatFlag *logging.FormatFlag
repoMaintenanceFrequency time.Duration
garbageCollectionFrequency time.Duration
itemOperationSyncFrequency time.Duration
defaultVolumesToFsBackup bool
uploaderType string
}
@@ -146,6 +149,7 @@ func NewCommand(f client.Factory) *cobra.Command {
backupSyncPeriod: defaultBackupSyncPeriod,
defaultBackupTTL: defaultBackupTTL,
defaultCSISnapshotTimeout: defaultCSISnapshotTimeout,
defaultItemOperationTimeout: defaultItemOperationTimeout,
storeValidationFrequency: defaultStoreValidationFrequency,
podVolumeOperationTimeout: defaultPodVolumeOperationTimeout,
restoreResourcePriorities: defaultRestorePriorities,
@@ -221,8 +225,10 @@ func NewCommand(f client.Factory) *cobra.Command {
command.Flags().DurationVar(&config.defaultBackupTTL, "default-backup-ttl", config.defaultBackupTTL, "How long to wait by default before backups can be garbage collected.")
command.Flags().DurationVar(&config.repoMaintenanceFrequency, "default-repo-maintain-frequency", config.repoMaintenanceFrequency, "How often 'maintain' is run for backup repositories by default.")
command.Flags().DurationVar(&config.garbageCollectionFrequency, "garbage-collection-frequency", config.garbageCollectionFrequency, "How often garbage collection is run for expired backups.")
command.Flags().DurationVar(&config.itemOperationSyncFrequency, "item-operation-sync-frequency", config.itemOperationSyncFrequency, "How often to check status on async backup/restore operations after backup processing.")
command.Flags().BoolVar(&config.defaultVolumesToFsBackup, "default-volumes-to-fs-backup", config.defaultVolumesToFsBackup, "Backup all volumes with pod volume file system backup by default.")
command.Flags().StringVar(&config.uploaderType, "uploader-type", config.uploaderType, "Type of uploader to handle the transfer of data of pod volumes")
command.Flags().DurationVar(&config.defaultItemOperationTimeout, "default-item-operation-timeout", config.defaultItemOperationTimeout, "How long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out.")
return command
}
@@ -649,6 +655,7 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
s.config.defaultVolumesToFsBackup,
s.config.defaultBackupTTL,
s.config.defaultCSISnapshotTimeout,
s.config.defaultItemOperationTimeout,
s.sharedInformerFactory.Velero().V1().VolumeSnapshotLocations().Lister(),
defaultVolumeSnapshotLocations,
s.metrics,
@@ -674,14 +681,16 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
}
// Note: all runtime type controllers that can be disabled are grouped separately, below:
enabledRuntimeControllers := map[string]struct{}{
-controller.ServerStatusRequest: {},
-controller.DownloadRequest: {},
-controller.Schedule: {},
-controller.BackupRepo: {},
-controller.BackupDeletion: {},
-controller.GarbageCollection: {},
-controller.BackupSync: {},
-controller.Restore: {},
+controller.ServerStatusRequest: {},
+controller.DownloadRequest: {},
+controller.Schedule: {},
+controller.BackupRepo: {},
+controller.BackupDeletion: {},
+controller.BackupFinalizer: {},
+controller.GarbageCollection: {},
+controller.BackupSync: {},
+controller.AsyncBackupOperations: {},
+controller.Restore: {},
}
if s.config.restoreOnly {
@@ -691,6 +700,8 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
controller.Schedule,
controller.GarbageCollection,
controller.BackupDeletion,
controller.BackupFinalizer,
controller.AsyncBackupOperations,
)
}
@@ -785,6 +796,51 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
}
}
var backupOpsMap *controller.BackupItemOperationsMap
if _, ok := enabledRuntimeControllers[controller.AsyncBackupOperations]; ok {
r, m := controller.NewAsyncBackupOperationsReconciler(
s.logger,
s.mgr.GetClient(),
s.config.itemOperationSyncFrequency,
newPluginManager,
backupStoreGetter,
s.metrics,
)
if err := r.SetupWithManager(s.mgr); err != nil {
s.logger.Fatal(err, "unable to create controller", "controller", controller.AsyncBackupOperations)
}
backupOpsMap = m
}
if _, ok := enabledRuntimeControllers[controller.BackupFinalizer]; ok {
backupper, err := backup.NewKubernetesBackupper(
s.veleroClient.VeleroV1(),
s.discoveryHelper,
client.NewDynamicFactory(s.dynamicClient),
podexec.NewPodCommandExecutor(s.kubeClientConfig, s.kubeClient.CoreV1().RESTClient()),
podvolume.NewBackupperFactory(s.repoLocker, s.repoEnsurer, s.veleroClient, s.kubeClient.CoreV1(),
s.kubeClient.CoreV1(), s.kubeClient.CoreV1(),
s.sharedInformerFactory.Velero().V1().BackupRepositories().Informer().HasSynced, s.logger),
s.config.podVolumeOperationTimeout,
s.config.defaultVolumesToFsBackup,
s.config.clientPageSize,
s.config.uploaderType,
)
cmd.CheckError(err)
r := controller.NewBackupFinalizerReconciler(
s.mgr.GetClient(),
clock.RealClock{},
backupper,
newPluginManager,
backupStoreGetter,
s.logger,
s.metrics,
)
if err := r.SetupWithManager(s.mgr); err != nil {
s.logger.Fatal(err, "unable to create controller", "controller", controller.BackupFinalizer)
}
}
if _, ok := enabledRuntimeControllers[controller.DownloadRequest]; ok {
r := controller.NewDownloadRequestReconciler(
s.mgr.GetClient(),
@@ -792,6 +848,7 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
newPluginManager,
backupStoreGetter,
s.logger,
backupOpsMap,
)
if err := r.SetupWithManager(s.mgr); err != nil {
s.logger.Fatal(err, "unable to create controller", "controller", controller.DownloadRequest)
@@ -89,6 +89,7 @@ func TestRemoveControllers(t *testing.T) {
{
name: "Remove all disable controllers",
disabledControllers: []string{
controller.AsyncBackupOperations,
controller.Backup,
controller.BackupDeletion,
controller.BackupSync,
@@ -127,11 +128,12 @@
}
enabledRuntimeControllers := map[string]struct{}{
-controller.ServerStatusRequest: {},
-controller.Schedule: {},
-controller.BackupDeletion: {},
-controller.BackupRepo: {},
-controller.DownloadRequest: {},
+controller.ServerStatusRequest: {},
+controller.Schedule: {},
+controller.BackupDeletion: {},
+controller.BackupRepo: {},
+controller.DownloadRequest: {},
+controller.AsyncBackupOperations: {},
}
totalNumOriginalControllers := len(enabledControllers) + len(enabledRuntimeControllers)
@@ -35,6 +35,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/cmd/util/downloadrequest"
"github.com/vmware-tanzu/velero/pkg/features"
clientset "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/volume"
)
@@ -66,6 +67,8 @@ func DescribeBackup(
case velerov1api.BackupPhaseCompleted:
phaseString = color.GreenString(phaseString)
case velerov1api.BackupPhaseDeleting:
case velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed:
case velerov1api.BackupPhaseFinalizingAfterPluginOperations, velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed:
case velerov1api.BackupPhaseInProgress:
case velerov1api.BackupPhaseNew:
}
@@ -166,6 +169,7 @@ func DescribeBackupSpec(d *Describer, spec velerov1api.BackupSpec) {
d.Println()
d.Printf("CSISnapshotTimeout:\t%s\n", spec.CSISnapshotTimeout.Duration)
d.Printf("ItemOperationTimeout:\t%s\n", spec.ItemOperationTimeout.Duration)
d.Println()
if len(spec.Hooks.Resources) == 0 {
@@ -284,6 +288,8 @@ func DescribeBackupStatus(ctx context.Context, kbClient kbclient.Client, d *Desc
d.Println()
}
describeAsyncBackupItemOperations(ctx, kbClient, d, backup, details, insecureSkipTLSVerify, caCertPath)
if details {
describeBackupResourceList(ctx, kbClient, d, backup, insecureSkipTLSVerify, caCertPath)
d.Println()
@@ -317,6 +323,33 @@ func DescribeBackupStatus(ctx context.Context, kbClient kbclient.Client, d *Desc
d.Printf("Velero-Native Snapshots: <none included>\n")
}
func describeAsyncBackupItemOperations(ctx context.Context, kbClient kbclient.Client, d *Describer, backup *velerov1api.Backup, details bool, insecureSkipTLSVerify bool, caCertPath string) {
status := backup.Status
if status.AsyncBackupItemOperationsAttempted > 0 {
if !details {
d.Printf("Async Backup Item Operations:\t%d of %d completed successfully, %d failed (specify --details for more information)\n", status.AsyncBackupItemOperationsCompleted, status.AsyncBackupItemOperationsAttempted, status.AsyncBackupItemOperationsFailed)
return
}
buf := new(bytes.Buffer)
if err := downloadrequest.Stream(ctx, kbClient, backup.Namespace, backup.Name, velerov1api.DownloadTargetKindBackupItemOperations, buf, downloadRequestTimeout, insecureSkipTLSVerify, caCertPath); err != nil {
d.Printf("Async Backup Item Operations:\t<error getting operation info: %v>\n", err)
return
}
var operations []*itemoperation.BackupOperation
if err := json.NewDecoder(buf).Decode(&operations); err != nil {
d.Printf("Async Backup Item Operations:\t<error reading operation info: %v>\n", err)
return
}
d.Printf("Async Backup Item Operations:\n")
for _, operation := range operations {
describeAsyncBackupItemOperation(d, operation)
}
}
}
func describeBackupResourceList(ctx context.Context, kbClient kbclient.Client, d *Describer, backup *velerov1api.Backup, insecureSkipTLSVerify bool, caCertPath string) {
buf := new(bytes.Buffer)
if err := downloadrequest.Stream(ctx, kbClient, backup.Namespace, backup.Name, velerov1api.DownloadTargetKindBackupResourceList, buf, downloadRequestTimeout, insecureSkipTLSVerify, caCertPath); err != nil {
@@ -365,6 +398,40 @@ func describeSnapshot(d *Describer, pvName, snapshotID, volumeType, volumeAZ str
d.Printf("\t\tIOPS:\t%s\n", iopsString)
}
func describeAsyncBackupItemOperation(d *Describer, operation *itemoperation.BackupOperation) {
d.Printf("\tOperation for %s %s/%s:\n", operation.Spec.ResourceIdentifier, operation.Spec.ResourceIdentifier.Namespace, operation.Spec.ResourceIdentifier.Name)
d.Printf("\t\tBackup Item Action Plugin:\t%s\n", operation.Spec.BackupItemAction)
d.Printf("\t\tOperation ID:\t%s\n", operation.Spec.OperationID)
if len(operation.Spec.ItemsToUpdate) > 0 {
d.Printf("\t\tItems to Update:\n")
}
for _, item := range operation.Spec.ItemsToUpdate {
d.Printf("\t\t\t%s %s/%s\n", item, item.Namespace, item.Name)
}
d.Printf("\t\tPhase:\t%s\n", operation.Status.Phase)
if operation.Status.Error != "" {
d.Printf("\t\tOperation Error:\t%s\n", operation.Status.Error)
}
if operation.Status.NTotal > 0 || operation.Status.NCompleted > 0 {
d.Printf("\t\tProgress:\t%v of %v complete (%s)\n",
operation.Status.NCompleted,
operation.Status.NTotal,
operation.Status.OperationUnits)
}
if operation.Status.Description != "" {
d.Printf("\t\tProgress description:\t%s\n", operation.Status.Description)
}
if operation.Status.Created != nil {
d.Printf("\t\tCreated:\t%s\n", operation.Status.Created.String())
}
if operation.Status.Started != nil {
d.Printf("\t\tStarted:\t%s\n", operation.Status.Started.String())
}
if operation.Status.Updated != nil {
d.Printf("\t\tUpdated:\t%s\n", operation.Status.Updated.String())
}
}
// DescribeDeleteBackupRequests describes delete backup requests in human-readable format.
func DescribeDeleteBackupRequests(d *Describer, requests []velerov1api.DeleteBackupRequest) {
d.Printf("Deletion Attempts")
@@ -0,0 +1,483 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"bytes"
"context"
"sync"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clocks "k8s.io/utils/clock"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/metrics"
"github.com/vmware-tanzu/velero/pkg/persistence"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
"github.com/vmware-tanzu/velero/pkg/util/encode"
"github.com/vmware-tanzu/velero/pkg/util/kube"
)
const (
defaultAsyncBackupOperationsFrequency = 2 * time.Minute
)
type operationsForBackup struct {
operations []*itemoperation.BackupOperation
changesSinceUpdate bool
errsSinceUpdate []string
}
// FIXME: remove if handled by backup finalizer controller
func (o *operationsForBackup) anyItemsToUpdate() bool {
for _, op := range o.operations {
if len(op.Spec.ItemsToUpdate) > 0 {
return true
}
}
return false
}
func (in *operationsForBackup) DeepCopy() *operationsForBackup {
if in == nil {
return nil
}
out := new(operationsForBackup)
in.DeepCopyInto(out)
return out
}
func (in *operationsForBackup) DeepCopyInto(out *operationsForBackup) {
*out = *in
if in.operations != nil {
in, out := &in.operations, &out.operations
*out = make([]*itemoperation.BackupOperation, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = new(itemoperation.BackupOperation)
(*in).DeepCopyInto(*out)
}
}
}
if in.errsSinceUpdate != nil {
in, out := &in.errsSinceUpdate, &out.errsSinceUpdate
*out = make([]string, len(*in))
copy(*out, *in)
}
}
func (o *operationsForBackup) uploadProgress(backupStore persistence.BackupStore, backupName string) error {
if len(o.operations) > 0 {
var backupItemOperations *bytes.Buffer
backupItemOperations, errs := encodeToJSONGzip(o.operations, "backup item operations list")
if errs != nil {
return errors.Wrap(errs[0], "error encoding item operations json")
}
err := backupStore.PutBackupItemOperations(backupName, backupItemOperations)
if err != nil {
return errors.Wrap(err, "error uploading item operations json")
}
}
o.changesSinceUpdate = false
o.errsSinceUpdate = nil
return nil
}
type BackupItemOperationsMap struct {
operations map[string]*operationsForBackup
opsLock sync.Mutex
}
// UpdateForBackup uploads any backup item operation changes for this backup
// that have not yet been written to the backup store.
func (m *BackupItemOperationsMap) UpdateForBackup(backupStore persistence.BackupStore, backupName string) error {
// lock operations map
m.opsLock.Lock()
defer m.opsLock.Unlock()
operations, ok := m.operations[backupName]
// if operations for this backup aren't found, or if there are no changes
// or errors since last update, do nothing
if !ok || (!operations.changesSinceUpdate && len(operations.errsSinceUpdate) == 0) {
return nil
}
if err := operations.uploadProgress(backupStore, backupName); err != nil {
return err
}
return nil
}
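UpdateForBackup's contract — take the lock, skip if nothing changed, upload, then clear the pending-changes state — can be sketched generically. The `opsMap` type and `upload` callback below are illustrative stand-ins, not Velero's API:

```go
package main

import (
	"fmt"
	"sync"
)

// opsMap mirrors BackupItemOperationsMap's locking discipline: every
// read-modify-write of the per-backup state happens under one mutex.
type opsMap struct {
	mu    sync.Mutex
	dirty map[string]bool
}

func (m *opsMap) markChanged(backup string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.dirty[backup] = true
}

// flushIfDirty uploads only when there are pending changes, then clears the
// flag, just as UpdateForBackup does with changesSinceUpdate/errsSinceUpdate.
func (m *opsMap) flushIfDirty(backup string, upload func() error) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if !m.dirty[backup] {
		return nil // nothing pending; do nothing
	}
	if err := upload(); err != nil {
		return err
	}
	m.dirty[backup] = false
	return nil
}

func main() {
	m := &opsMap{dirty: make(map[string]bool)}
	m.markChanged("backup-1")
	uploads := 0
	_ = m.flushIfDirty("backup-1", func() error { uploads++; return nil })
	_ = m.flushIfDirty("backup-1", func() error { uploads++; return nil })
	// the second flush is a no-op because the flag was cleared
	fmt.Println(uploads)
}
```

Holding one mutex across the check-and-upload keeps concurrent reconcilers from double-uploading the same progress.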
type asyncBackupOperationsReconciler struct {
client.Client
logger logrus.FieldLogger
clock clocks.WithTickerAndDelayedExecution
frequency time.Duration
itemOperationsMap *BackupItemOperationsMap
newPluginManager func(logger logrus.FieldLogger) clientmgmt.Manager
backupStoreGetter persistence.ObjectBackupStoreGetter
metrics *metrics.ServerMetrics
}
func NewAsyncBackupOperationsReconciler(
logger logrus.FieldLogger,
client client.Client,
frequency time.Duration,
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager,
backupStoreGetter persistence.ObjectBackupStoreGetter,
metrics *metrics.ServerMetrics,
) (*asyncBackupOperationsReconciler, *BackupItemOperationsMap) {
abor := &asyncBackupOperationsReconciler{
Client: client,
logger: logger,
clock: clocks.RealClock{},
frequency: frequency,
itemOperationsMap: &BackupItemOperationsMap{operations: make(map[string]*operationsForBackup)},
newPluginManager: newPluginManager,
backupStoreGetter: backupStoreGetter,
metrics: metrics,
}
if abor.frequency <= 0 {
abor.frequency = defaultAsyncBackupOperationsFrequency
}
return abor, abor.itemOperationsMap
}
func (c *asyncBackupOperationsReconciler) SetupWithManager(mgr ctrl.Manager) error {
s := kube.NewPeriodicalEnqueueSource(c.logger, mgr.GetClient(), &velerov1api.BackupList{}, c.frequency, kube.PeriodicalEnqueueSourceOption{})
return ctrl.NewControllerManagedBy(mgr).
For(&velerov1api.Backup{}, builder.WithPredicates(kube.FalsePredicate{})).
Watches(s, nil).
Complete(c)
}
// +kubebuilder:rbac:groups=velero.io,resources=backups,verbs=get;list;watch;update
// +kubebuilder:rbac:groups=velero.io,resources=backups/status,verbs=get
// +kubebuilder:rbac:groups=velero.io,resources=backupstoragelocations,verbs=get
func (c *asyncBackupOperationsReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := c.logger.WithField("async backup operations for backup", req.String())
// FIXME: make this log.Debug
log.Info("asyncBackupOperationsReconciler getting backup")
original := &velerov1api.Backup{}
if err := c.Get(ctx, req.NamespacedName, original); err != nil {
if apierrors.IsNotFound(err) {
log.WithError(err).Error("backup not found")
return ctrl.Result{}, nil
}
return ctrl.Result{}, errors.Wrapf(err, "error getting backup %s", req.String())
}
backup := original.DeepCopy()
log.Debugf("backup: %s", backup.Name)
log = c.logger.WithFields(
logrus.Fields{
"backup": req.String(),
},
)
switch backup.Status.Phase {
case velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed:
// only process backups waiting for plugin operations to complete
default:
log.Debug("Backup has no ongoing async plugin operations, skipping")
return ctrl.Result{}, nil
}
loc := &velerov1api.BackupStorageLocation{}
if err := c.Get(ctx, client.ObjectKey{
Namespace: req.Namespace,
Name: backup.Spec.StorageLocation,
}, loc); err != nil {
if apierrors.IsNotFound(err) {
log.Warnf("Cannot check progress on async Backup operations because backup storage location %s does not exist; marking backup PartiallyFailed", backup.Spec.StorageLocation)
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
} else {
log.Warnf("Cannot check progress on async Backup operations because backup storage location %s could not be retrieved: %s; marking backup PartiallyFailed", backup.Spec.StorageLocation, err.Error())
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
}
err2 := c.updateBackupAndOperationsJSON(ctx, original, backup, nil, &operationsForBackup{errsSinceUpdate: []string{err.Error()}}, false, false)
if err2 != nil {
log.WithError(err2).Error("error updating Backup")
}
return ctrl.Result{}, errors.Wrap(err, "error getting backup storage location")
}
if loc.Spec.AccessMode == velerov1api.BackupStorageLocationAccessModeReadOnly {
log.Infof("Cannot check progress on async Backup operations because backup storage location %s is currently in read-only mode; marking backup PartiallyFailed", loc.Name)
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
err := c.updateBackupAndOperationsJSON(ctx, original, backup, nil, &operationsForBackup{errsSinceUpdate: []string{"BSL is read-only"}}, false, false)
if err != nil {
log.WithError(err).Error("error updating Backup")
}
return ctrl.Result{}, nil
}
pluginManager := c.newPluginManager(c.logger)
defer pluginManager.CleanupClients()
backupStore, err := c.backupStoreGetter.Get(loc, pluginManager, c.logger)
if err != nil {
return ctrl.Result{}, errors.Wrap(err, "error getting backup store")
}
operations, err := c.getOperationsForBackup(backupStore, backup.Name)
if err != nil {
err2 := c.updateBackupAndOperationsJSON(ctx, original, backup, backupStore, &operationsForBackup{errsSinceUpdate: []string{err.Error()}}, false, false)
if err2 != nil {
return ctrl.Result{}, errors.Wrap(err2, "error updating Backup")
}
return ctrl.Result{}, errors.Wrap(err, "error getting backup operations")
}
stillInProgress, changes, opsCompleted, opsFailed, errs := getBackupItemOperationProgress(backup, pluginManager, operations.operations)
// if len(errs)>0, need to update backup errors and error log
operations.errsSinceUpdate = append(operations.errsSinceUpdate, errs...)
backup.Status.Errors += len(operations.errsSinceUpdate)
asyncCompletionChanges := false
if backup.Status.AsyncBackupItemOperationsCompleted != opsCompleted || backup.Status.AsyncBackupItemOperationsFailed != opsFailed {
asyncCompletionChanges = true
backup.Status.AsyncBackupItemOperationsCompleted = opsCompleted
backup.Status.AsyncBackupItemOperationsFailed = opsFailed
}
if changes {
operations.changesSinceUpdate = true
}
// If stillInProgress is false, the backup moves to the finalize phase and needs an update.
// If operations.errsSinceUpdate is non-empty, the backup phase changes to
// BackupPhaseWaitingForPluginOperationsPartiallyFailed and needs an update.
// If the only changes are incremental progress, no write is necessary; progress can remain in memory.
if !stillInProgress {
if len(operations.errsSinceUpdate) > 0 {
backup.Status.Phase = velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed
}
if backup.Status.Phase == velerov1api.BackupPhaseWaitingForPluginOperations {
log.Infof("Marking backup %s FinalizingAfterPluginOperations", backup.Name)
backup.Status.Phase = velerov1api.BackupPhaseFinalizingAfterPluginOperations
} else {
log.Infof("Marking backup %s FinalizingAfterPluginOperationsPartiallyFailed", backup.Name)
backup.Status.Phase = velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed
}
}
err = c.updateBackupAndOperationsJSON(ctx, original, backup, backupStore, operations, asyncCompletionChanges, changes)
if err != nil {
return ctrl.Result{}, errors.Wrap(err, "error updating Backup")
}
return ctrl.Result{}, nil
}
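The phase-transition rules above can be sketched in isolation. This is a simplified, hypothetical `nextPhase` helper using plain strings in place of the `velerov1api` phase constants; it mirrors the decision order in `Reconcile` (errors only downgrade the phase once operations stop being in progress):

```go
package main

import "fmt"

// nextPhase sketches the reconciler's decision: while any operation is still
// in progress the backup keeps its current waiting phase; once all operations
// finish, new errors downgrade it to the PartiallyFailed variant, and the
// backup moves to the matching finalizing phase.
func nextPhase(current string, stillInProgress bool, newErrs bool) string {
	if stillInProgress {
		return current
	}
	if newErrs {
		current = "WaitingForPluginOperationsPartiallyFailed"
	}
	if current == "WaitingForPluginOperations" {
		return "FinalizingAfterPluginOperations"
	}
	return "FinalizingAfterPluginOperationsPartiallyFailed"
}

func main() {
	// All operations done, no errors: move to the clean finalizing phase.
	fmt.Println(nextPhase("WaitingForPluginOperations", false, false))
	// All operations done, but errors arrived: partially-failed finalizing.
	fmt.Println(nextPhase("WaitingForPluginOperations", false, true))
	// Still in progress: phase is unchanged.
	fmt.Println(nextPhase("WaitingForPluginOperationsPartiallyFailed", true, false))
}
```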
func (c *asyncBackupOperationsReconciler) updateBackupAndOperationsJSON(
ctx context.Context,
original, backup *velerov1api.Backup,
backupStore persistence.BackupStore,
operations *operationsForBackup,
asyncCompletionChanges bool,
changes bool) error {
backupScheduleName := backup.GetLabels()[velerov1api.ScheduleNameLabel]
if len(operations.errsSinceUpdate) > 0 {
c.metrics.RegisterBackupItemsErrorsGauge(backupScheduleName, backup.Status.Errors)
// FIXME: download/upload results once https://github.com/vmware-tanzu/velero/pull/5576 is merged
}
removeIfComplete := true
defer func() {
// remove local operations list if complete
c.itemOperationsMap.opsLock.Lock()
if removeIfComplete && (backup.Status.Phase == velerov1api.BackupPhaseCompleted ||
backup.Status.Phase == velerov1api.BackupPhasePartiallyFailed ||
backup.Status.Phase == velerov1api.BackupPhaseFinalizingAfterPluginOperations ||
backup.Status.Phase == velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed) {
c.deleteOperationsForBackup(backup.Name)
} else if changes {
c.putOperationsForBackup(operations, backup.Name)
}
c.itemOperationsMap.opsLock.Unlock()
}()
// update backup and upload progress if errs or complete
if len(operations.errsSinceUpdate) > 0 ||
backup.Status.Phase == velerov1api.BackupPhaseCompleted ||
backup.Status.Phase == velerov1api.BackupPhasePartiallyFailed ||
backup.Status.Phase == velerov1api.BackupPhaseFinalizingAfterPluginOperations ||
backup.Status.Phase == velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed {
// update file store
if backupStore != nil {
backupJSON := new(bytes.Buffer)
if err := encode.EncodeTo(backup, "json", backupJSON); err != nil {
removeIfComplete = false
return errors.Wrap(err, "error encoding backup json")
}
err := backupStore.PutBackupMetadata(backup.Name, backupJSON)
if err != nil {
removeIfComplete = false
return errors.Wrap(err, "error uploading backup json")
}
if err := operations.uploadProgress(backupStore, backup.Name); err != nil {
removeIfComplete = false
return err
}
}
// update backup
err := c.Client.Patch(ctx, backup, client.MergeFrom(original))
if err != nil {
removeIfComplete = false
return errors.Wrapf(err, "error updating Backup %s", backup.Name)
}
} else if asyncCompletionChanges {
// If backup is still incomplete and no new errors are found but there are some new operations
// completed, patch backup to reflect new completion numbers, but don't upload detailed json file
err := c.Client.Patch(ctx, backup, client.MergeFrom(original))
if err != nil {
return errors.Wrapf(err, "error updating Backup %s", backup.Name)
}
}
return nil
}
// returns a deep copy so we can minimize the time the map is locked
func (c *asyncBackupOperationsReconciler) getOperationsForBackup(
backupStore persistence.BackupStore,
backupName string) (*operationsForBackup, error) {
var err error
// lock operations map
c.itemOperationsMap.opsLock.Lock()
defer c.itemOperationsMap.opsLock.Unlock()
operations, ok := c.itemOperationsMap.operations[backupName]
if !ok || len(operations.operations) == 0 {
operations = &operationsForBackup{}
operations.operations, err = backupStore.GetBackupItemOperations(backupName)
if err == nil {
c.itemOperationsMap.operations[backupName] = operations
}
}
return operations.DeepCopy(), err
}
func (c *asyncBackupOperationsReconciler) putOperationsForBackup(
operations *operationsForBackup,
backupName string) {
if operations != nil {
c.itemOperationsMap.operations[backupName] = operations
}
}
func (c *asyncBackupOperationsReconciler) deleteOperationsForBackup(backupName string) {
delete(c.itemOperationsMap.operations, backupName)
}
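`getOperationsForBackup` returns a deep copy so callers can work on the result without holding the lock. The same copy-under-lock pattern in miniature, using hypothetical simplified types rather than the real `operationsForBackup`:

```go
package main

import (
	"fmt"
	"sync"
)

// opsMap guards a per-backup list of operation IDs with a mutex.
// Readers take a deep copy under the lock and release it immediately,
// so slow follow-up work (e.g. querying plugins) never blocks writers.
type opsMap struct {
	mu  sync.Mutex
	ops map[string][]string
}

// snapshot copies the slice for one backup while holding the lock.
func (m *opsMap) snapshot(backup string) []string {
	m.mu.Lock()
	defer m.mu.Unlock()
	src := m.ops[backup]
	cp := make([]string, len(src))
	copy(cp, src)
	return cp
}

func main() {
	m := &opsMap{ops: map[string][]string{"backup-1": {"op-1", "op-2"}}}
	snap := m.snapshot("backup-1")
	snap[0] = "mutated" // mutating the copy does not affect the shared map
	fmt.Println(m.ops["backup-1"][0], snap[0])
}
```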
func getBackupItemOperationProgress(
backup *velerov1api.Backup,
pluginManager clientmgmt.Manager,
operationsList []*itemoperation.BackupOperation) (bool, bool, int, int, []string) {
inProgressOperations := false
changes := false
var errs []string
var completedCount, failedCount int
for _, operation := range operationsList {
if operation.Status.Phase == itemoperation.OperationPhaseInProgress {
bia, err := pluginManager.GetBackupItemActionV2(operation.Spec.BackupItemAction)
if err != nil {
operation.Status.Phase = itemoperation.OperationPhaseFailed
operation.Status.Error = err.Error()
errs = append(errs, err.Error())
changes = true
failedCount++
continue
}
operationProgress, err := bia.Progress(operation.Spec.OperationID, backup)
if err != nil {
operation.Status.Phase = itemoperation.OperationPhaseFailed
operation.Status.Error = err.Error()
errs = append(errs, err.Error())
changes = true
failedCount++
continue
}
if operation.Status.NCompleted != operationProgress.NCompleted {
operation.Status.NCompleted = operationProgress.NCompleted
changes = true
}
if operation.Status.NTotal != operationProgress.NTotal {
operation.Status.NTotal = operationProgress.NTotal
changes = true
}
if operation.Status.OperationUnits != operationProgress.OperationUnits {
operation.Status.OperationUnits = operationProgress.OperationUnits
changes = true
}
if operation.Status.Description != operationProgress.Description {
operation.Status.Description = operationProgress.Description
changes = true
}
started := metav1.NewTime(operationProgress.Started)
if operation.Status.Started == nil || *(operation.Status.Started) != started {
operation.Status.Started = &started
changes = true
}
updated := metav1.NewTime(operationProgress.Updated)
if operation.Status.Updated == nil || *(operation.Status.Updated) != updated {
operation.Status.Updated = &updated
changes = true
}
if operationProgress.Completed {
if operationProgress.Err != "" {
operation.Status.Phase = itemoperation.OperationPhaseFailed
operation.Status.Error = operationProgress.Err
errs = append(errs, operationProgress.Err)
changes = true
failedCount++
continue
}
operation.Status.Phase = itemoperation.OperationPhaseCompleted
changes = true
completedCount++
continue
}
// cancel operation if past timeout period
if operation.Status.Created.Time.Add(backup.Spec.ItemOperationTimeout.Duration).Before(time.Now()) {
_ = bia.Cancel(operation.Spec.OperationID, backup)
operation.Status.Phase = itemoperation.OperationPhaseFailed
operation.Status.Error = "Asynchronous action timed out"
errs = append(errs, operation.Status.Error)
changes = true
failedCount++
continue
}
// if we reach this point, the operation is still running
inProgressOperations = true
} else if operation.Status.Phase == itemoperation.OperationPhaseCompleted {
completedCount++
} else if operation.Status.Phase == itemoperation.OperationPhaseFailed {
failedCount++
}
}
return inProgressOperations, changes, completedCount, failedCount, errs
}


@ -0,0 +1,308 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"context"
"testing"
"time"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
testclocks "k8s.io/utils/clock/testing"
ctrl "sigs.k8s.io/controller-runtime"
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/metrics"
persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
pluginmocks "github.com/vmware-tanzu/velero/pkg/plugin/mocks"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
biav2mocks "github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks/backupitemaction/v2"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
var (
pluginManager = &pluginmocks.Manager{}
backupStore = &persistencemocks.BackupStore{}
bia = &biav2mocks.BackupItemAction{}
)
func mockAsyncBackupOperationsReconciler(fakeClient kbclient.Client, fakeClock *testclocks.FakeClock, freq time.Duration) (*asyncBackupOperationsReconciler, *BackupItemOperationsMap) {
abor, biaMap := NewAsyncBackupOperationsReconciler(
logrus.StandardLogger(),
fakeClient,
freq,
func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager },
NewFakeSingleObjectBackupStoreGetter(backupStore),
metrics.NewServerMetrics(),
)
abor.clock = fakeClock
return abor, biaMap
}
func TestAsyncBackupOperationsReconcile(t *testing.T) {
fakeClock := testclocks.NewFakeClock(time.Now())
metav1Now := metav1.NewTime(fakeClock.Now())
defaultBackupLocation := builder.ForBackupStorageLocation(velerov1api.DefaultNamespace, "default").Result()
tests := []struct {
name string
backup *velerov1api.Backup
backupOperations []*itemoperation.BackupOperation
backupLocation *velerov1api.BackupStorageLocation
operationComplete bool
operationErr string
expectError bool
expectPhase velerov1api.BackupPhase
}{
{
name: "WaitingForPluginOperations backup with completed operations is FinalizingAfterPluginOperations",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-1").
StorageLocation("default").
ItemOperationTimeout(60 * time.Minute).
ObjectMeta(builder.WithUID("foo")).
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).Result(),
backupLocation: defaultBackupLocation,
operationComplete: true,
expectPhase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-1",
BackupUID: "foo",
BackupItemAction: "foo",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
OperationID: "operation-1",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &metav1Now,
},
},
},
},
{
name: "WaitingForPluginOperations backup with incomplete operations is still incomplete",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-2").
StorageLocation("default").
ItemOperationTimeout(60 * time.Minute).
ObjectMeta(builder.WithUID("foo")).
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).Result(),
backupLocation: defaultBackupLocation,
operationComplete: false,
expectPhase: velerov1api.BackupPhaseWaitingForPluginOperations,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-2",
BackupUID: "foo-2",
BackupItemAction: "foo-2",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
OperationID: "operation-2",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &metav1Now,
},
},
},
},
{
name: "WaitingForPluginOperations backup with completed failed operations is FinalizingAfterPluginOperationsPartiallyFailed",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-3").
StorageLocation("default").
ItemOperationTimeout(60 * time.Minute).
ObjectMeta(builder.WithUID("foo")).
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).Result(),
backupLocation: defaultBackupLocation,
operationComplete: true,
operationErr: "failed",
expectPhase: velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-3",
BackupUID: "foo-3",
BackupItemAction: "foo-3",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
OperationID: "operation-3",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &metav1Now,
},
},
},
},
{
name: "WaitingForPluginOperationsPartiallyFailed backup with completed operations is FinalizingAfterPluginOperationsPartiallyFailed",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-1").
StorageLocation("default").
ItemOperationTimeout(60 * time.Minute).
ObjectMeta(builder.WithUID("foo")).
Phase(velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed).Result(),
backupLocation: defaultBackupLocation,
operationComplete: true,
expectPhase: velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-4",
BackupUID: "foo-4",
BackupItemAction: "foo-4",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
OperationID: "operation-4",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &metav1Now,
},
},
},
},
{
name: "WaitingForPluginOperationsPartiallyFailed backup with incomplete operations is still incomplete",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-2").
StorageLocation("default").
ItemOperationTimeout(60 * time.Minute).
ObjectMeta(builder.WithUID("foo")).
Phase(velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed).Result(),
backupLocation: defaultBackupLocation,
operationComplete: false,
expectPhase: velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-5",
BackupUID: "foo-5",
BackupItemAction: "foo-5",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
OperationID: "operation-5",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &metav1Now,
},
},
},
},
{
name: "WaitingForPluginOperationsPartiallyFailed backup with completed failed operations is FinalizingAfterPluginOperationsPartiallyFailed",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-3").
StorageLocation("default").
ItemOperationTimeout(60 * time.Minute).
ObjectMeta(builder.WithUID("foo")).
Phase(velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed).Result(),
backupLocation: defaultBackupLocation,
operationComplete: true,
operationErr: "failed",
expectPhase: velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-6",
BackupUID: "foo-6",
BackupItemAction: "foo-6",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
OperationID: "operation-6",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseInProgress,
Created: &metav1Now,
},
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
if test.backup == nil {
return
}
initObjs := []runtime.Object{}
initObjs = append(initObjs, test.backup)
if test.backupLocation != nil {
initObjs = append(initObjs, test.backupLocation)
}
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, initObjs...)
reconciler, _ := mockAsyncBackupOperationsReconciler(fakeClient, fakeClock, defaultAsyncBackupOperationsFrequency)
pluginManager.On("CleanupClients").Return(nil)
backupStore.On("GetBackupItemOperations", test.backup.Name).Return(test.backupOperations, nil)
backupStore.On("PutBackupItemOperations", mock.Anything, mock.Anything).Return(nil)
backupStore.On("PutBackupMetadata", mock.Anything, mock.Anything).Return(nil)
for _, operation := range test.backupOperations {
bia.On("Progress", operation.Spec.OperationID, mock.Anything).
Return(velero.OperationProgress{
Completed: test.operationComplete,
Err: test.operationErr,
}, nil)
pluginManager.On("GetBackupItemActionV2", operation.Spec.BackupItemAction).Return(bia, nil)
}
_, err := reconciler.Reconcile(context.TODO(), ctrl.Request{NamespacedName: types.NamespacedName{Namespace: test.backup.Namespace, Name: test.backup.Name}})
gotErr := err != nil
assert.Equal(t, test.expectError, gotErr)
backupAfter := velerov1api.Backup{}
err = fakeClient.Get(context.TODO(), types.NamespacedName{
Namespace: test.backup.Namespace,
Name: test.backup.Name,
}, &backupAfter)
require.NoError(t, err)
assert.Equal(t, test.expectPhase, backupAfter.Status.Phase)
})
}
}


@ -74,27 +74,28 @@ import (
type backupController struct {
*genericController
discoveryHelper discovery.Helper
backupper pkgbackup.Backupper
lister velerov1listers.BackupLister
client velerov1client.BackupsGetter
kbClient kbclient.Client
clock clocks.WithTickerAndDelayedExecution
backupLogLevel logrus.Level
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager
backupTracker BackupTracker
defaultBackupLocation string
defaultVolumesToFsBackup bool
defaultBackupTTL time.Duration
defaultCSISnapshotTimeout time.Duration
snapshotLocationLister velerov1listers.VolumeSnapshotLocationLister
defaultSnapshotLocations map[string]string
metrics *metrics.ServerMetrics
backupStoreGetter persistence.ObjectBackupStoreGetter
formatFlag logging.Format
volumeSnapshotLister snapshotv1listers.VolumeSnapshotLister
volumeSnapshotClient snapshotterClientSet.Interface
credentialFileStore credentials.FileStore
discoveryHelper discovery.Helper
backupper pkgbackup.Backupper
lister velerov1listers.BackupLister
client velerov1client.BackupsGetter
kbClient kbclient.Client
clock clocks.WithTickerAndDelayedExecution
backupLogLevel logrus.Level
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager
backupTracker BackupTracker
defaultBackupLocation string
defaultVolumesToFsBackup bool
defaultBackupTTL time.Duration
defaultCSISnapshotTimeout time.Duration
defaultItemOperationTimeout time.Duration
snapshotLocationLister velerov1listers.VolumeSnapshotLocationLister
defaultSnapshotLocations map[string]string
metrics *metrics.ServerMetrics
backupStoreGetter persistence.ObjectBackupStoreGetter
formatFlag logging.Format
volumeSnapshotLister snapshotv1listers.VolumeSnapshotLister
volumeSnapshotClient snapshotterClientSet.Interface
credentialFileStore credentials.FileStore
}
func NewBackupController(
@ -111,6 +112,7 @@ func NewBackupController(
defaultVolumesToFsBackup bool,
defaultBackupTTL time.Duration,
defaultCSISnapshotTimeout time.Duration,
defaultItemOperationTimeout time.Duration,
volumeSnapshotLocationLister velerov1listers.VolumeSnapshotLocationLister,
defaultSnapshotLocations map[string]string,
metrics *metrics.ServerMetrics,
@ -121,28 +123,29 @@ func NewBackupController(
credentialStore credentials.FileStore,
) Interface {
c := &backupController{
genericController: newGenericController(Backup, logger),
discoveryHelper: discoveryHelper,
backupper: backupper,
lister: backupInformer.Lister(),
client: client,
clock: &clocks.RealClock{},
backupLogLevel: backupLogLevel,
newPluginManager: newPluginManager,
backupTracker: backupTracker,
kbClient: kbClient,
defaultBackupLocation: defaultBackupLocation,
defaultVolumesToFsBackup: defaultVolumesToFsBackup,
defaultBackupTTL: defaultBackupTTL,
defaultCSISnapshotTimeout: defaultCSISnapshotTimeout,
snapshotLocationLister: volumeSnapshotLocationLister,
defaultSnapshotLocations: defaultSnapshotLocations,
metrics: metrics,
backupStoreGetter: backupStoreGetter,
formatFlag: formatFlag,
volumeSnapshotLister: volumeSnapshotLister,
volumeSnapshotClient: volumeSnapshotClient,
credentialFileStore: credentialStore,
genericController: newGenericController(Backup, logger),
discoveryHelper: discoveryHelper,
backupper: backupper,
lister: backupInformer.Lister(),
client: client,
clock: &clocks.RealClock{},
backupLogLevel: backupLogLevel,
newPluginManager: newPluginManager,
backupTracker: backupTracker,
kbClient: kbClient,
defaultBackupLocation: defaultBackupLocation,
defaultVolumesToFsBackup: defaultVolumesToFsBackup,
defaultBackupTTL: defaultBackupTTL,
defaultCSISnapshotTimeout: defaultCSISnapshotTimeout,
defaultItemOperationTimeout: defaultItemOperationTimeout,
snapshotLocationLister: volumeSnapshotLocationLister,
defaultSnapshotLocations: defaultSnapshotLocations,
metrics: metrics,
backupStoreGetter: backupStoreGetter,
formatFlag: formatFlag,
volumeSnapshotLister: volumeSnapshotLister,
volumeSnapshotClient: volumeSnapshotClient,
credentialFileStore: credentialStore,
}
c.syncHandler = c.processBackup
@ -366,6 +369,11 @@ func (c *backupController) prepareBackupRequest(backup *velerov1api.Backup, logg
request.Spec.CSISnapshotTimeout.Duration = c.defaultCSISnapshotTimeout
}
if request.Spec.ItemOperationTimeout.Duration == 0 {
// set default item operation timeout
request.Spec.ItemOperationTimeout.Duration = c.defaultItemOperationTimeout
}
// calculate expiration
request.Status.Expiration = &metav1.Time{Time: c.clock.Now().Add(request.Spec.TTL.Duration)}
@ -705,10 +713,6 @@ func (c *backupController) runBackup(backup *pkgbackup.Request) error {
}
}
// Mark completion timestamp before serializing and uploading.
// Otherwise, the JSON file in object storage has a CompletionTimestamp of 'null'.
backup.Status.CompletionTimestamp = &metav1.Time{Time: c.clock.Now()}
backup.Status.VolumeSnapshotsAttempted = len(backup.VolumeSnapshots)
for _, snap := range backup.VolumeSnapshots {
if snap.Status.Phase == volume.SnapshotPhaseCompleted {
@ -723,11 +727,24 @@ func (c *backupController) runBackup(backup *pkgbackup.Request) error {
}
}
// Iterate over backup item operations and update progress.
// Any errors on operations at this point should be added to backup errors.
// If any operations are still not complete, the backup will not be marked
// Completed yet.
inProgressOperations, _, opsCompleted, opsFailed, errs := getBackupItemOperationProgress(backup.Backup, pluginManager, *backup.GetItemOperationsList())
if len(errs) > 0 {
for _, err := range errs {
backupLog.Error(err)
}
}
backup.Status.AsyncBackupItemOperationsAttempted = len(*backup.GetItemOperationsList())
backup.Status.AsyncBackupItemOperationsCompleted = opsCompleted
backup.Status.AsyncBackupItemOperationsFailed = opsFailed
backup.Status.Warnings = logCounter.GetCount(logrus.WarnLevel)
backup.Status.Errors = logCounter.GetCount(logrus.ErrorLevel)
recordBackupMetrics(backupLog, backup.Backup, backupFile, c.metrics)
backupWarnings := logCounter.GetEntries(logrus.WarnLevel)
backupErrors := logCounter.GetEntries(logrus.ErrorLevel)
results := map[string]results.Result{
@ -747,10 +764,26 @@ func (c *backupController) runBackup(backup *pkgbackup.Request) error {
case len(fatalErrs) > 0:
backup.Status.Phase = velerov1api.BackupPhaseFailed
case logCounter.GetCount(logrus.ErrorLevel) > 0:
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
if inProgressOperations {
backup.Status.Phase = velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed
} else {
backup.Status.Phase = velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed
}
default:
backup.Status.Phase = velerov1api.BackupPhaseCompleted
if inProgressOperations {
backup.Status.Phase = velerov1api.BackupPhaseWaitingForPluginOperations
} else {
backup.Status.Phase = velerov1api.BackupPhaseFinalizingAfterPluginOperations
}
}
// Mark completion timestamp before serializing and uploading.
// Otherwise, the JSON file in object storage has a CompletionTimestamp of 'null'.
if backup.Status.Phase == velerov1api.BackupPhaseFailed ||
backup.Status.Phase == velerov1api.BackupPhasePartiallyFailed ||
backup.Status.Phase == velerov1api.BackupPhaseCompleted {
backup.Status.CompletionTimestamp = &metav1.Time{Time: c.clock.Now()}
}
recordBackupMetrics(backupLog, backup.Backup, backupFile, c.metrics, false)
// re-instantiate the backup store because credentials could have changed since the original
// instantiation, if this was a long-running backup
@ -771,37 +804,43 @@ func (c *backupController) runBackup(backup *pkgbackup.Request) error {
return kerrors.NewAggregate(fatalErrs)
}
func recordBackupMetrics(log logrus.FieldLogger, backup *velerov1api.Backup, backupFile *os.File, serverMetrics *metrics.ServerMetrics) {
func recordBackupMetrics(log logrus.FieldLogger, backup *velerov1api.Backup, backupFile *os.File, serverMetrics *metrics.ServerMetrics, finalize bool) {
backupScheduleName := backup.GetLabels()[velerov1api.ScheduleNameLabel]
var backupSizeBytes int64
if backupFileStat, err := backupFile.Stat(); err != nil {
log.WithError(errors.WithStack(err)).Error("Error getting backup file info")
} else {
backupSizeBytes = backupFileStat.Size()
}
serverMetrics.SetBackupTarballSizeBytesGauge(backupScheduleName, backupSizeBytes)
backupDuration := backup.Status.CompletionTimestamp.Time.Sub(backup.Status.StartTimestamp.Time)
backupDurationSeconds := float64(backupDuration / time.Second)
serverMetrics.RegisterBackupDuration(backupScheduleName, backupDurationSeconds)
serverMetrics.RegisterVolumeSnapshotAttempts(backupScheduleName, backup.Status.VolumeSnapshotsAttempted)
serverMetrics.RegisterVolumeSnapshotSuccesses(backupScheduleName, backup.Status.VolumeSnapshotsCompleted)
serverMetrics.RegisterVolumeSnapshotFailures(backupScheduleName, backup.Status.VolumeSnapshotsAttempted-backup.Status.VolumeSnapshotsCompleted)
if features.IsEnabled(velerov1api.CSIFeatureFlag) {
serverMetrics.RegisterCSISnapshotAttempts(backupScheduleName, backup.Name, backup.Status.CSIVolumeSnapshotsAttempted)
serverMetrics.RegisterCSISnapshotSuccesses(backupScheduleName, backup.Name, backup.Status.CSIVolumeSnapshotsCompleted)
serverMetrics.RegisterCSISnapshotFailures(backupScheduleName, backup.Name, backup.Status.CSIVolumeSnapshotsAttempted-backup.Status.CSIVolumeSnapshotsCompleted)
if backupFile != nil {
var backupSizeBytes int64
if backupFileStat, err := backupFile.Stat(); err != nil {
log.WithError(errors.WithStack(err)).Error("Error getting backup file info")
} else {
backupSizeBytes = backupFileStat.Size()
}
serverMetrics.SetBackupTarballSizeBytesGauge(backupScheduleName, backupSizeBytes)
}
if backup.Status.Progress != nil {
serverMetrics.RegisterBackupItemsTotalGauge(backupScheduleName, backup.Status.Progress.TotalItems)
if backup.Status.CompletionTimestamp != nil {
backupDuration := backup.Status.CompletionTimestamp.Time.Sub(backup.Status.StartTimestamp.Time)
backupDurationSeconds := float64(backupDuration / time.Second)
serverMetrics.RegisterBackupDuration(backupScheduleName, backupDurationSeconds)
}
serverMetrics.RegisterBackupItemsErrorsGauge(backupScheduleName, backup.Status.Errors)
if !finalize {
serverMetrics.RegisterVolumeSnapshotAttempts(backupScheduleName, backup.Status.VolumeSnapshotsAttempted)
serverMetrics.RegisterVolumeSnapshotSuccesses(backupScheduleName, backup.Status.VolumeSnapshotsCompleted)
serverMetrics.RegisterVolumeSnapshotFailures(backupScheduleName, backup.Status.VolumeSnapshotsAttempted-backup.Status.VolumeSnapshotsCompleted)
if backup.Status.Warnings > 0 {
serverMetrics.RegisterBackupWarning(backupScheduleName)
if features.IsEnabled(velerov1api.CSIFeatureFlag) {
serverMetrics.RegisterCSISnapshotAttempts(backupScheduleName, backup.Name, backup.Status.CSIVolumeSnapshotsAttempted)
serverMetrics.RegisterCSISnapshotSuccesses(backupScheduleName, backup.Name, backup.Status.CSIVolumeSnapshotsCompleted)
serverMetrics.RegisterCSISnapshotFailures(backupScheduleName, backup.Name, backup.Status.CSIVolumeSnapshotsAttempted-backup.Status.CSIVolumeSnapshotsCompleted)
}
if backup.Status.Progress != nil {
serverMetrics.RegisterBackupItemsTotalGauge(backupScheduleName, backup.Status.Progress.TotalItems)
}
serverMetrics.RegisterBackupItemsErrorsGauge(backupScheduleName, backup.Status.Errors)
if backup.Status.Warnings > 0 {
serverMetrics.RegisterBackupWarning(backupScheduleName)
}
}
}
@ -826,6 +865,12 @@ func persistBackup(backup *pkgbackup.Request,
persistErrs = append(persistErrs, errs...)
}
var backupItemOperations *bytes.Buffer
backupItemOperations, errs = encodeToJSONGzip(backup.GetItemOperationsList(), "backup item operations list")
if errs != nil {
persistErrs = append(persistErrs, errs...)
}
podVolumeBackups, errs := encodeToJSONGzip(backup.PodVolumeBackups, "pod volume backups list")
if errs != nil {
persistErrs = append(persistErrs, errs...)
@ -860,6 +905,7 @@ func persistBackup(backup *pkgbackup.Request,
backupJSON = nil
backupContents = nil
nativeVolumeSnapshots = nil
backupItemOperations = nil
backupResourceList = nil
csiSnapshotJSON = nil
csiSnapshotContentsJSON = nil
@ -875,6 +921,7 @@ func persistBackup(backup *pkgbackup.Request,
BackupResults: backupResult,
PodVolumeBackups: podVolumeBackups,
VolumeSnapshots: nativeVolumeSnapshots,
BackupItemOperations: backupItemOperations,
BackupResourceList: backupResourceList,
CSIVolumeSnapshots: csiSnapshotJSON,
CSIVolumeSnapshotContents: csiSnapshotContentsJSON,


@ -47,6 +47,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/fake"
informers "github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/metrics"
"github.com/vmware-tanzu/velero/pkg/persistence"
persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
@ -75,6 +76,13 @@ func (b *fakeBackupper) BackupWithResolvers(logger logrus.FieldLogger, backup *p
return args.Error(0)
}
func (b *fakeBackupper) FinalizeBackup(logger logrus.FieldLogger, backup *pkgbackup.Request, inBackupFile io.Reader, outBackupFile io.Writer,
backupItemActionResolver framework.BackupItemActionResolverV2,
asyncBIAOperations []*itemoperation.BackupOperation) error {
args := b.Called(logger, backup, inBackupFile, outBackupFile, backupItemActionResolver, asyncBIAOperations)
return args.Error(0)
}
func defaultBackup() *builder.BackupBuilder {
return builder.ForBackup(velerov1api.DefaultNamespace, "backup-1")
}
@ -597,7 +605,7 @@ func TestProcessBackupCompletions(t *testing.T) {
backupExists bool
existenceCheckError error
}{
// Completed
// FinalizingAfterPluginOperations
{
name: "backup with no backup location gets the default",
backup: defaultBackup().Result(),
@ -625,12 +633,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -661,12 +668,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -700,12 +706,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -737,12 +742,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
Expiration: &metav1.Time{now.Add(10 * time.Minute)},
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
Expiration: &metav1.Time{now.Add(10 * time.Minute)},
StartTimestamp: &timestamp,
},
},
},
@@ -774,12 +778,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -812,12 +815,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -850,12 +852,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -888,12 +889,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.True(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -926,12 +926,11 @@ func TestProcessBackupCompletions(t *testing.T) {
DefaultVolumesToFsBackup: boolptr.False(),
},
Status: velerov1api.BackupStatus{
Phase: velerov1api.BackupPhaseCompleted,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
CompletionTimestamp: &timestamp,
Expiration: &timestamp,
Phase: velerov1api.BackupPhaseFinalizingAfterPluginOperations,
Version: 1,
FormatVersion: "1.1.0",
StartTimestamp: &timestamp,
Expiration: &timestamp,
},
},
},
@@ -1079,13 +1078,16 @@ func TestProcessBackupCompletions(t *testing.T) {
// Ensure we have a CompletionTimestamp when uploading and that the backup name matches the backup in the object store.
// Failures will display the bytes in buf.
hasNameAndCompletionTimestamp := func(info persistence.BackupInfo) bool {
hasNameAndCompletionTimestampIfCompleted := func(info persistence.BackupInfo) bool {
buf := new(bytes.Buffer)
buf.ReadFrom(info.Metadata)
return info.Name == test.backup.Name &&
strings.Contains(buf.String(), `"completionTimestamp": "2006-01-02T22:04:05Z"`)
(!(strings.Contains(buf.String(), `"phase": "Completed"`) ||
strings.Contains(buf.String(), `"phase": "Failed"`) ||
strings.Contains(buf.String(), `"phase": "PartiallyFailed"`)) ||
strings.Contains(buf.String(), `"completionTimestamp": "2006-01-02T22:04:05Z"`))
}
backupStore.On("PutBackup", mock.MatchedBy(hasNameAndCompletionTimestamp)).Return(nil)
backupStore.On("PutBackup", mock.MatchedBy(hasNameAndCompletionTimestampIfCompleted)).Return(nil)
// add the test's backup to the informer/lister store
require.NotNil(t, test.backup)
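The replacement matcher above is a dense boolean: it only demands a `completionTimestamp` in the uploaded metadata once the serialized backup has reached a terminal phase (Completed, Failed, or PartiallyFailed); backups still finalizing are allowed through without one. A minimal standalone sketch of that predicate (helper names are hypothetical, not part of the PR):

```go
package main

import (
	"fmt"
	"strings"
)

// terminalPhase reports whether the serialized backup JSON claims a phase
// that should already carry a completionTimestamp.
func terminalPhase(metadata string) bool {
	return strings.Contains(metadata, `"phase": "Completed"`) ||
		strings.Contains(metadata, `"phase": "Failed"`) ||
		strings.Contains(metadata, `"phase": "PartiallyFailed"`)
}

// completionTimestampOK mirrors the test matcher: non-terminal phases pass
// unconditionally; terminal phases must include the fixed test timestamp.
func completionTimestampOK(metadata string) bool {
	return !terminalPhase(metadata) ||
		strings.Contains(metadata, `"completionTimestamp": "2006-01-02T22:04:05Z"`)
}

func main() {
	fmt.Println(completionTimestampOK(`{"phase": "FinalizingAfterPluginOperations"}`)) // true
	fmt.Println(completionTimestampOK(`{"phase": "Completed"}`))                       // false
}
```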
@@ -0,0 +1,204 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"bytes"
"context"
"io/ioutil"
"os"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clocks "k8s.io/utils/clock"
ctrl "sigs.k8s.io/controller-runtime"
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
pkgbackup "github.com/vmware-tanzu/velero/pkg/backup"
"github.com/vmware-tanzu/velero/pkg/metrics"
"github.com/vmware-tanzu/velero/pkg/persistence"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/util/encode"
)
// backupFinalizerReconciler reconciles a Backup object
type backupFinalizerReconciler struct {
client kbclient.Client
clock clocks.WithTickerAndDelayedExecution
backupper pkgbackup.Backupper
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager
metrics *metrics.ServerMetrics
backupStoreGetter persistence.ObjectBackupStoreGetter
log logrus.FieldLogger
}
// NewBackupFinalizerReconciler initializes and returns backupFinalizerReconciler struct.
func NewBackupFinalizerReconciler(
client kbclient.Client,
clock clocks.WithTickerAndDelayedExecution,
backupper pkgbackup.Backupper,
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager,
backupStoreGetter persistence.ObjectBackupStoreGetter,
log logrus.FieldLogger,
metrics *metrics.ServerMetrics,
) *backupFinalizerReconciler {
return &backupFinalizerReconciler{
client: client,
clock: clock,
backupper: backupper,
newPluginManager: newPluginManager,
backupStoreGetter: backupStoreGetter,
log: log,
metrics: metrics,
}
}
// +kubebuilder:rbac:groups=velero.io,resources=backups,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=velero.io,resources=backups/status,verbs=get;update;patch
func (r *backupFinalizerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.log.WithFields(logrus.Fields{
"controller": "backup-finalizer",
"backup": req.NamespacedName,
})
// Fetch the Backup instance.
log.Debug("Getting Backup")
backup := &velerov1api.Backup{}
if err := r.client.Get(ctx, req.NamespacedName, backup); err != nil {
if apierrors.IsNotFound(err) {
log.Debug("Unable to find Backup")
return ctrl.Result{}, nil
}
log.WithError(err).Error("Error getting Backup")
return ctrl.Result{}, errors.WithStack(err)
}
switch backup.Status.Phase {
case velerov1api.BackupPhaseFinalizingAfterPluginOperations, velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed:
// only process backups finalizing after plugin operations are complete
default:
log.Debug("Backup is not awaiting finalizing, skipping")
return ctrl.Result{}, nil
}
original := backup.DeepCopy()
defer func() {
// Always attempt to Patch the backup object and status after each reconciliation.
if err := r.client.Patch(ctx, backup, kbclient.MergeFrom(original)); err != nil {
log.WithError(err).Error("Error updating backup")
return
}
}()
location := &velerov1api.BackupStorageLocation{}
if err := r.client.Get(ctx, kbclient.ObjectKey{
Namespace: backup.Namespace,
Name: backup.Spec.StorageLocation,
}, location); err != nil {
return ctrl.Result{}, errors.WithStack(err)
}
pluginManager := r.newPluginManager(log)
defer pluginManager.CleanupClients()
backupStore, err := r.backupStoreGetter.Get(location, pluginManager, log)
if err != nil {
log.WithError(err).Error("Error getting a backup store")
return ctrl.Result{}, errors.WithStack(err)
}
// Download item operations list and backup contents
operations, err := backupStore.GetBackupItemOperations(backup.Name)
if err != nil {
log.WithError(err).Error("Error getting backup item operations")
return ctrl.Result{}, errors.WithStack(err)
}
backupRequest := &pkgbackup.Request{
Backup: backup,
StorageLocation: location,
}
var outBackupFile *os.File
if len(operations) > 0 {
// Call itemBackupper.BackupItem for the list of items updated by async operations
log.Info("Setting up finalized backup temp file")
inBackupFile, err := downloadToTempFile(backup.Name, backupStore, log)
if err != nil {
return ctrl.Result{}, errors.Wrap(err, "error downloading backup")
}
defer closeAndRemoveFile(inBackupFile, log)
outBackupFile, err = ioutil.TempFile("", "")
if err != nil {
log.WithError(err).Error("error creating temp file for backup")
return ctrl.Result{}, errors.WithStack(err)
}
defer closeAndRemoveFile(outBackupFile, log)
log.Info("Getting backup item actions")
actions, err := pluginManager.GetBackupItemActionsV2()
if err != nil {
log.WithError(err).Error("error getting Backup Item Actions")
return ctrl.Result{}, errors.WithStack(err)
}
backupItemActionsResolver := framework.NewBackupItemActionResolverV2(actions)
err = r.backupper.FinalizeBackup(log, backupRequest, inBackupFile, outBackupFile, backupItemActionsResolver, operations)
if err != nil {
log.WithError(err).Error("error finalizing Backup")
return ctrl.Result{}, errors.WithStack(err)
}
}
backupScheduleName := backupRequest.GetLabels()[velerov1api.ScheduleNameLabel]
switch backup.Status.Phase {
case velerov1api.BackupPhaseFinalizingAfterPluginOperations:
backup.Status.Phase = velerov1api.BackupPhaseCompleted
r.metrics.RegisterBackupSuccess(backupScheduleName)
r.metrics.RegisterBackupLastStatus(backupScheduleName, metrics.BackupLastStatusSucc)
case velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed:
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
r.metrics.RegisterBackupPartialFailure(backupScheduleName)
r.metrics.RegisterBackupLastStatus(backupScheduleName, metrics.BackupLastStatusFailure)
}
backup.Status.CompletionTimestamp = &metav1.Time{Time: r.clock.Now()}
recordBackupMetrics(log, backup, outBackupFile, r.metrics, true)
// update backup metadata in object store
backupJSON := new(bytes.Buffer)
if err := encode.EncodeTo(backup, "json", backupJSON); err != nil {
return ctrl.Result{}, errors.Wrap(err, "error encoding backup json")
}
err = backupStore.PutBackupMetadata(backup.Name, backupJSON)
if err != nil {
return ctrl.Result{}, errors.Wrap(err, "error uploading backup json")
}
if len(operations) > 0 {
err = backupStore.PutBackupContents(backup.Name, outBackupFile)
if err != nil {
return ctrl.Result{}, errors.Wrap(err, "error uploading backup final contents")
}
}
return ctrl.Result{}, nil
}
func (r *backupFinalizerReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&velerov1api.Backup{}).
Complete(r)
}
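The finalizer's phase handling above boils down to a small state transition: each finalizing phase maps to exactly one terminal phase, and any other phase is skipped by the reconciler. A reduced sketch of that mapping (standalone illustration, using the phase names from the diff as plain strings rather than the velerov1api constants):

```go
package main

import "fmt"

// finalPhase maps a finalizing backup phase to the terminal phase the
// backupFinalizerReconciler assigns. Any other phase is returned unchanged,
// mirroring the reconciler's early return for non-finalizing backups.
func finalPhase(phase string) string {
	switch phase {
	case "FinalizingAfterPluginOperations":
		return "Completed"
	case "FinalizingAfterPluginOperationsPartiallyFailed":
		return "PartiallyFailed"
	}
	return phase
}

func main() {
	fmt.Println(finalPhase("FinalizingAfterPluginOperations")) // Completed
	fmt.Println(finalPhase("InProgress"))                      // InProgress
}
```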
@@ -0,0 +1,184 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"bytes"
"context"
"io/ioutil"
"testing"
"time"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
testclocks "k8s.io/utils/clock/testing"
ctrl "sigs.k8s.io/controller-runtime"
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/metrics"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
func mockBackupFinalizerReconciler(fakeClient kbclient.Client, fakeClock *testclocks.FakeClock) (*backupFinalizerReconciler, *fakeBackupper) {
backupper := new(fakeBackupper)
return NewBackupFinalizerReconciler(
fakeClient,
fakeClock,
backupper,
func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager },
NewFakeSingleObjectBackupStoreGetter(backupStore),
logrus.StandardLogger(),
metrics.NewServerMetrics(),
), backupper
}
func TestBackupFinalizerReconcile(t *testing.T) {
fakeClock := testclocks.NewFakeClock(time.Now())
metav1Now := metav1.NewTime(fakeClock.Now())
defaultBackupLocation := builder.ForBackupStorageLocation(velerov1api.DefaultNamespace, "default").Result()
tests := []struct {
name string
backup *velerov1api.Backup
backupOperations []*itemoperation.BackupOperation
backupLocation *velerov1api.BackupStorageLocation
expectError bool
expectPhase velerov1api.BackupPhase
}{
{
name: "FinalizingAfterPluginOperations backup is completed",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-1").
StorageLocation("default").
ObjectMeta(builder.WithUID("foo")).
StartTimestamp(fakeClock.Now()).
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperations).Result(),
backupLocation: defaultBackupLocation,
expectPhase: velerov1api.BackupPhaseCompleted,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-1",
BackupUID: "foo",
BackupItemAction: "foo",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-1",
Name: "pod-1",
},
ItemsToUpdate: []velero.ResourceIdentifier{
{
GroupResource: kuberesource.Secrets,
Namespace: "ns-1",
Name: "secret-1",
},
},
OperationID: "operation-1",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseCompleted,
Created: &metav1Now,
},
},
},
},
{
name: "FinalizingAfterPluginOperationsPartiallyFailed backup is partially failed",
backup: builder.ForBackup(velerov1api.DefaultNamespace, "backup-2").
StorageLocation("default").
ObjectMeta(builder.WithUID("foo")).
StartTimestamp(fakeClock.Now()).
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed).Result(),
backupLocation: defaultBackupLocation,
expectPhase: velerov1api.BackupPhasePartiallyFailed,
backupOperations: []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "backup-2",
BackupUID: "foo",
BackupItemAction: "foo",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns-2",
Name: "pod-2",
},
ItemsToUpdate: []velero.ResourceIdentifier{
{
GroupResource: kuberesource.Secrets,
Namespace: "ns-2",
Name: "secret-2",
},
},
OperationID: "operation-2",
},
Status: itemoperation.OperationStatus{
Phase: itemoperation.OperationPhaseCompleted,
Created: &metav1Now,
},
},
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
if test.backup == nil {
return
}
initObjs := []runtime.Object{}
initObjs = append(initObjs, test.backup)
if test.backupLocation != nil {
initObjs = append(initObjs, test.backupLocation)
}
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, initObjs...)
reconciler, backupper := mockBackupFinalizerReconciler(fakeClient, fakeClock)
pluginManager.On("CleanupClients").Return(nil)
backupStore.On("GetBackupItemOperations", test.backup.Name).Return(test.backupOperations, nil)
backupStore.On("GetBackupContents", mock.Anything).Return(ioutil.NopCloser(bytes.NewReader([]byte("hello world"))), nil)
backupStore.On("PutBackupContents", mock.Anything, mock.Anything).Return(nil)
backupStore.On("PutBackupMetadata", mock.Anything, mock.Anything).Return(nil)
pluginManager.On("GetBackupItemActionsV2").Return(nil, nil)
backupper.On("FinalizeBackup", mock.Anything, mock.Anything, mock.Anything, mock.Anything, framework.BackupItemActionResolverV2{}, mock.Anything).Return(nil)
_, err := reconciler.Reconcile(context.TODO(), ctrl.Request{NamespacedName: types.NamespacedName{Namespace: test.backup.Namespace, Name: test.backup.Name}})
gotErr := err != nil
assert.Equal(t, test.expectError, gotErr)
backupAfter := velerov1api.Backup{}
err = fakeClient.Get(context.TODO(), types.NamespacedName{
Namespace: test.backup.Namespace,
Name: test.backup.Name,
}, &backupAfter)
require.NoError(t, err)
assert.Equal(t, test.expectPhase, backupAfter.Status.Phase)
})
}
}
@@ -148,6 +148,26 @@ func (b *backupSyncReconciler) Reconcile(ctx context.Context, req ctrl.Request)
continue
}
if backup.Status.Phase == velerov1api.BackupPhaseWaitingForPluginOperations ||
backup.Status.Phase == velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed {
if backup.Status.Expiration == nil || backup.Status.Expiration.After(time.Now()) {
log.Debugf("Skipping non-expired WaitingForPluginOperations backup %v", backup.Name)
continue
}
log.Debug("WaitingForPluginOperations Backup is past expiration, syncing for garbage collection")
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
}
if backup.Status.Phase == velerov1api.BackupPhaseFinalizingAfterPluginOperations ||
backup.Status.Phase == velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed {
if backup.Status.Expiration == nil || backup.Status.Expiration.After(time.Now()) {
log.Debugf("Skipping non-expired FinalizingAfterPluginOperations backup %v", backup.Name)
continue
}
log.Debug("FinalizingAfterPluginOperations Backup is past expiration, syncing for garbage collection")
backup.Status.Phase = velerov1api.BackupPhasePartiallyFailed
}
backup.Namespace = b.namespace
backup.ResourceVersion = ""

@@ -25,6 +25,7 @@ import (
. "github.com/onsi/gomega"
"github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -32,6 +33,7 @@ import (
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/validation"
core "k8s.io/client-go/testing"
testclocks "k8s.io/utils/clock/testing"
ctrl "sigs.k8s.io/controller-runtime"
ctrlClient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -155,9 +157,11 @@ func numBackups(c ctrlClient.WithWatch, ns string) (int, error) {
var _ = Describe("Backup Sync Reconciler", func() {
It("Test Backup Sync Reconciler basic function", func() {
fakeClock := testclocks.NewFakeClock(time.Now())
type cloudBackupData struct {
backup *velerov1api.Backup
podVolumeBackups []*velerov1api.PodVolumeBackup
backup *velerov1api.Backup
podVolumeBackups []*velerov1api.PodVolumeBackup
backupShouldSkipSync bool // backups waiting for plugin operations should not sync
}
tests := []struct {
@@ -187,6 +191,98 @@ var _ = Describe("Backup Sync Reconciler", func() {
},
},
},
{
name: "backups waiting for plugin operations aren't synced",
namespace: "ns-1",
location: defaultLocation("ns-1"),
cloudBackups: []*cloudBackupData{
{
backup: builder.ForBackup("ns-1", "backup-1").
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-2").
Phase(velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-3").
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).Result(),
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("ns-1", "pvb-1").Result(),
},
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-4").
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperations).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-5").
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-6").
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperations).Result(),
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("ns-1", "pvb-2").Result(),
},
backupShouldSkipSync: true,
},
},
},
{
name: "expired backups waiting for plugin operations are synced",
namespace: "ns-1",
location: defaultLocation("ns-1"),
cloudBackups: []*cloudBackupData{
{
backup: builder.ForBackup("ns-1", "backup-1").
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).
Expiration(fakeClock.Now().Add(-time.Hour)).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-2").
Phase(velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed).
Expiration(fakeClock.Now().Add(-time.Hour)).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-3").
Phase(velerov1api.BackupPhaseWaitingForPluginOperations).
Expiration(fakeClock.Now().Add(-time.Hour)).Result(),
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("ns-1", "pvb-1").Result(),
},
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-4").
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperations).
Expiration(fakeClock.Now().Add(-time.Hour)).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-5").
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperationsPartiallyFailed).
Expiration(fakeClock.Now().Add(-time.Hour)).Result(),
backupShouldSkipSync: true,
},
{
backup: builder.ForBackup("ns-1", "backup-6").
Phase(velerov1api.BackupPhaseFinalizingAfterPluginOperations).
Expiration(fakeClock.Now().Add(-time.Hour)).Result(),
podVolumeBackups: []*velerov1api.PodVolumeBackup{
builder.ForPodVolumeBackup("ns-1", "pvb-2").Result(),
},
backupShouldSkipSync: true,
},
},
},
{
name: "all synced backups get created in Velero server's namespace",
namespace: "velero",
@@ -364,36 +460,42 @@ var _ = Describe("Backup Sync Reconciler", func() {
Namespace: cloudBackupData.backup.Namespace,
Name: cloudBackupData.backup.Name},
obj)
Expect(err).To(BeNil())
// did this cloud backup already exist in the cluster?
var existing *velerov1api.Backup
for _, obj := range test.existingBackups {
if obj.Name == cloudBackupData.backup.Name {
existing = obj
break
}
}
if existing != nil {
// if this cloud backup already exists in the cluster, make sure that what we get from the
// client is the existing backup, not the cloud one.
// verify that the in-cluster backup has its storage location populated, if it's not already.
expected := existing.DeepCopy()
expected.Spec.StorageLocation = test.location.Name
Expect(expected).To(BeEquivalentTo(obj))
if cloudBackupData.backupShouldSkipSync &&
(cloudBackupData.backup.Status.Expiration == nil ||
cloudBackupData.backup.Status.Expiration.After(fakeClock.Now())) {
Expect(apierrors.IsNotFound(err)).To(BeTrue())
} else {
// verify that the storage location field and label are set properly
Expect(test.location.Name).To(BeEquivalentTo(obj.Spec.StorageLocation))
Expect(err).To(BeNil())
locationName := test.location.Name
if test.longLocationNameEnabled {
locationName = label.GetValidName(locationName)
// did this cloud backup already exist in the cluster?
var existing *velerov1api.Backup
for _, obj := range test.existingBackups {
if obj.Name == cloudBackupData.backup.Name {
existing = obj
break
}
}
if existing != nil {
// if this cloud backup already exists in the cluster, make sure that what we get from the
// client is the existing backup, not the cloud one.
// verify that the in-cluster backup has its storage location populated, if it's not already.
expected := existing.DeepCopy()
expected.Spec.StorageLocation = test.location.Name
Expect(expected).To(BeEquivalentTo(obj))
} else {
// verify that the storage location field and label are set properly
Expect(test.location.Name).To(BeEquivalentTo(obj.Spec.StorageLocation))
locationName := test.location.Name
if test.longLocationNameEnabled {
locationName = label.GetValidName(locationName)
}
Expect(locationName).To(BeEquivalentTo(obj.Labels[velerov1api.StorageLocationLabel]))
Expect(len(obj.Labels[velerov1api.StorageLocationLabel]) <= validation.DNS1035LabelMaxLength).To(BeTrue())
}
Expect(locationName).To(BeEquivalentTo(obj.Labels[velerov1api.StorageLocationLabel]))
Expect(len(obj.Labels[velerov1api.StorageLocationLabel]) <= validation.DNS1035LabelMaxLength).To(BeTrue())
}
// process the cloud pod volume backups for this backup, if any
@@ -406,22 +508,28 @@ var _ = Describe("Backup Sync Reconciler", func() {
Name: podVolumeBackup.Name,
},
objPodVolumeBackup)
Expect(err).ShouldNot(HaveOccurred())
if cloudBackupData.backupShouldSkipSync &&
(cloudBackupData.backup.Status.Expiration == nil ||
cloudBackupData.backup.Status.Expiration.After(fakeClock.Now())) {
Expect(apierrors.IsNotFound(err)).To(BeTrue())
} else {
Expect(err).ShouldNot(HaveOccurred())
// did this cloud pod volume backup already exist in the cluster?
var existingPodVolumeBackup *velerov1api.PodVolumeBackup
for _, objPodVolumeBackup := range test.existingPodVolumeBackups {
if objPodVolumeBackup.Name == podVolumeBackup.Name {
existingPodVolumeBackup = objPodVolumeBackup
break
// did this cloud pod volume backup already exist in the cluster?
var existingPodVolumeBackup *velerov1api.PodVolumeBackup
for _, objPodVolumeBackup := range test.existingPodVolumeBackups {
if objPodVolumeBackup.Name == podVolumeBackup.Name {
existingPodVolumeBackup = objPodVolumeBackup
break
}
}
}
if existingPodVolumeBackup != nil {
// if this cloud pod volume backup already exists in the cluster, make sure that what we get from the
// client is the existing backup, not the cloud one.
expected := existingPodVolumeBackup.DeepCopy()
Expect(expected).To(BeEquivalentTo(objPodVolumeBackup))
if existingPodVolumeBackup != nil {
// if this cloud pod volume backup already exists in the cluster, make sure that what we get from the
// client is the existing backup, not the cloud one.
expected := existingPodVolumeBackup.DeepCopy()
Expect(expected).To(BeEquivalentTo(objPodVolumeBackup))
}
}
}
}
@@ -17,8 +17,10 @@ limitations under the License.
package controller
const (
AsyncBackupOperations = "async-backup-operations"
Backup = "backup"
BackupDeletion = "backup-deletion"
BackupFinalizer = "backup-finalizer"
BackupStorageLocation = "backup-storage-location"
BackupSync = "backup-sync"
DownloadRequest = "download-request"
@@ -33,8 +35,10 @@ const (
// DisableableControllers is a list of controllers that can be disabled
var DisableableControllers = []string{
AsyncBackupOperations,
Backup,
BackupDeletion,
BackupFinalizer,
BackupSync,
DownloadRequest,
GarbageCollection,
@@ -41,6 +41,9 @@ type downloadRequestReconciler struct {
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager
backupStoreGetter persistence.ObjectBackupStoreGetter
// used to force update of async backup item operations before processing download request
backupItemOperationsMap *BackupItemOperationsMap
log logrus.FieldLogger
}
@@ -51,13 +54,15 @@ func NewDownloadRequestReconciler(
newPluginManager func(logrus.FieldLogger) clientmgmt.Manager,
backupStoreGetter persistence.ObjectBackupStoreGetter,
log logrus.FieldLogger,
backupItemOperationsMap *BackupItemOperationsMap,
) *downloadRequestReconciler {
return &downloadRequestReconciler{
client: client,
clock: clock,
newPluginManager: newPluginManager,
backupStoreGetter: backupStoreGetter,
log: log,
client: client,
clock: clock,
newPluginManager: newPluginManager,
backupStoreGetter: backupStoreGetter,
backupItemOperationsMap: backupItemOperationsMap,
log: log,
}
}
@@ -158,6 +163,13 @@ func (r *downloadRequestReconciler) Reconcile(ctx context.Context, req ctrl.Requ
return ctrl.Result{}, errors.WithStack(err)
}
// If this is a request for backup item operations, force update of in-memory operations that
// are not yet uploaded
if downloadRequest.Spec.Target.Kind == velerov1api.DownloadTargetKindBackupItemOperations &&
r.backupItemOperationsMap != nil {
// ignore errors here. If we can't upload anything here, process the download as usual
_ = r.backupItemOperationsMap.UpdateForBackup(backupStore, backupName)
}
if downloadRequest.Status.DownloadURL, err = backupStore.GetDownloadURL(downloadRequest.Spec.Target); err != nil {
return ctrl.Result{Requeue: true}, errors.WithStack(err)
}
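The download-request change above is a best-effort flush: when the requested target is the backup item operations list, the reconciler pushes any in-memory operations to object storage first, and deliberately ignores flush errors so the download URL is generated either way. A reduced sketch of that pattern (the interface and names here are hypothetical stand-ins for the PR's `BackupItemOperationsMap`):

```go
package main

import "fmt"

// operationsFlusher is a stand-in for BackupItemOperationsMap, reduced to
// the single call the reconciler makes before serving the download.
type operationsFlusher interface {
	UpdateForBackup(backupName string) error
}

type failingFlusher struct{}

func (failingFlusher) UpdateForBackup(string) error { return fmt.Errorf("upload failed") }

// prepareDownload mirrors the reconciler's logic: flush only for the
// BackupItemOperations target, swallow any flush error, and always return
// a URL (a placeholder here for backupStore.GetDownloadURL).
func prepareDownload(targetKind, backupName string, flusher operationsFlusher) string {
	if targetKind == "BackupItemOperations" && flusher != nil {
		_ = flusher.UpdateForBackup(backupName) // ignore errors; serve what is already uploaded
	}
	return "https://example.invalid/" + backupName
}

func main() {
	// Even with a failing flusher, the download still proceeds.
	fmt.Println(prepareDownload("BackupItemOperations", "backup-1", failingFlusher{}))
}
```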
@@ -112,6 +112,7 @@ var _ = Describe("Download Request Reconciler", func() {
func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager },
NewFakeObjectBackupStoreGetter(backupStores),
velerotest.NewLogger(),
nil,
)
if test.backupLocation != nil && test.expectGetsURL {
@@ -16,6 +16,10 @@ limitations under the License.
package itemoperation
import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// BackupOperation stores information about an async item operation
// started by a BackupItemAction plugin (v2 or later)
type BackupOperation struct {
@@ -24,6 +28,21 @@ type BackupOperation struct {
Status OperationStatus `json:"status"`
}
func (in *BackupOperation) DeepCopy() *BackupOperation {
if in == nil {
return nil
}
out := new(BackupOperation)
in.DeepCopyInto(out)
return out
}
func (in *BackupOperation) DeepCopyInto(out *BackupOperation) {
*out = *in
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
type BackupOperationSpec struct {
// BackupName is the name of the Velero backup this item operation
// is associated with.
@@ -37,8 +56,32 @@ type BackupOperationSpec struct {
BackupItemAction string `json:"backupItemAction"`
// Kubernetes resource identifier for the item
ResourceIdentifier string "json:resourceIdentifier"
ResourceIdentifier velero.ResourceIdentifier "json:resourceIdentifier"
// OperationID returned by the BIA plugin
OperationID string "json:operationID"
// Items needing update after all async operations have completed
ItemsToUpdate []velero.ResourceIdentifier "json:itemsToUpdate"
}
func (in *BackupOperationSpec) DeepCopy() *BackupOperationSpec {
if in == nil {
return nil
}
out := new(BackupOperationSpec)
in.DeepCopyInto(out)
return out
}
func (in *BackupOperationSpec) DeepCopyInto(out *BackupOperationSpec) {
*out = *in
in.ResourceIdentifier.DeepCopyInto(&out.ResourceIdentifier)
if in.ItemsToUpdate != nil {
in, out := &in.ItemsToUpdate, &out.ItemsToUpdate
*out = make([]velero.ResourceIdentifier, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
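The hand-written DeepCopy/DeepCopyInto pair above follows the usual Kubernetes deepcopy convention: `DeepCopyInto` copies value fields with `*out = *in`, then re-allocates every slice and pointer field so the copy shares no memory with the original. A minimal stdlib-only sketch of the same pattern (illustrative stand-in types, not the Velero ones):

```go
package main

import "fmt"

// Illustrative stand-ins for the Velero types; names are assumptions.
type ResourceID struct{ Namespace, Name string }

type OpSpec struct {
	BackupName    string
	ItemsToUpdate []ResourceID
}

// DeepCopyInto re-allocates the slice so the copy owns its own backing array.
func (in *OpSpec) DeepCopyInto(out *OpSpec) {
	*out = *in // copies value fields, but aliases the slice header
	if in.ItemsToUpdate != nil {
		out.ItemsToUpdate = make([]ResourceID, len(in.ItemsToUpdate))
		copy(out.ItemsToUpdate, in.ItemsToUpdate)
	}
}

func (in *OpSpec) DeepCopy() *OpSpec {
	if in == nil {
		return nil
	}
	out := new(OpSpec)
	in.DeepCopyInto(out)
	return out
}

func main() {
	orig := &OpSpec{BackupName: "b1", ItemsToUpdate: []ResourceID{{"ns", "pod-1"}}}
	cp := orig.DeepCopy()
	cp.ItemsToUpdate[0].Name = "mutated"
	fmt.Println(orig.ItemsToUpdate[0].Name) // still "pod-1": the copy is independent
}
```

Without the `make`/`copy` step, `*out = *in` alone would leave both specs sharing one backing array, and mutating the copy would corrupt the original.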


@ -16,6 +16,10 @@ limitations under the License.
package itemoperation
import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// RestoreOperation stores information about an async item operation
// started by a RestoreItemAction plugin (v2 or later)
type RestoreOperation struct {
@ -37,7 +41,7 @@ type RestoreOperationSpec struct {
RestoreItemAction string `json:"restoreItemAction"`
// Kubernetes resource identifier for the item
ResourceIdentifier string `json:"resourceIdentifier"`
ResourceIdentifier velero.ResourceIdentifier `json:"resourceIdentifier"`
// OperationID returned by the RIA plugin
OperationID string `json:"operationID"`


@ -40,12 +40,19 @@ type OperationStatus struct {
// Units that NCompleted,NTotal are measured in
// i.e. "bytes"
OperationUnits int64 `json:"nTotal,omitempty"`
OperationUnits string `json:"operationUnits,omitempty"`
// Description of progress made
// i.e. "processing", "Current phase: Running", etc.
Description string `json:"description,omitempty"`
// Created records the time the item operation was created
Created *metav1.Time `json:"created,omitempty"`
// Started records the time the item operation was started, if known
// +optional
// +nullable
Started *metav1.Time `json:"start,omitempty"`
Started *metav1.Time `json:"started,omitempty"`
// Updated records the time the item operation was updated, if known.
// +optional
@ -53,10 +60,35 @@ type OperationStatus struct {
Updated *metav1.Time `json:"updated,omitempty"`
}
func (in *OperationStatus) DeepCopy() *OperationStatus {
if in == nil {
return nil
}
out := new(OperationStatus)
in.DeepCopyInto(out)
return out
}
func (in *OperationStatus) DeepCopyInto(out *OperationStatus) {
*out = *in
if in.Created != nil {
in, out := &in.Created, &out.Created
*out = (*in).DeepCopy()
}
if in.Started != nil {
in, out := &in.Started, &out.Started
*out = (*in).DeepCopy()
}
if in.Updated != nil {
in, out := &in.Updated, &out.Updated
*out = (*in).DeepCopy()
}
}
const (
// OperationPhaseInProgress means the item operation has been created and started
// by the plugin
OperationPhaseInProgress OperationPhase = "New"
OperationPhaseInProgress OperationPhase = "InProgress"
// OperationPhaseCompleted means the item operation was successfully completed
// and can be used for restore.


@ -1,5 +1,5 @@
/*
Copyright 2020 the Velero contributors.
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@ -13,6 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by mockery v2.16.0. DO NOT EDIT.
package mocks
@ -20,13 +21,15 @@ import (
io "io"
mock "github.com/stretchr/testify/mock"
itemoperation "github.com/vmware-tanzu/velero/pkg/itemoperation"
snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
persistence "github.com/vmware-tanzu/velero/pkg/persistence"
v1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
persistence "github.com/vmware-tanzu/velero/pkg/persistence"
volume "github.com/vmware-tanzu/velero/pkg/volume"
volumesnapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
)
// BackupStore is an autogenerated mock type for the BackupStore type
@ -106,6 +109,29 @@ func (_m *BackupStore) GetBackupContents(name string) (io.ReadCloser, error) {
return r0, r1
}
// GetBackupItemOperations provides a mock function with given fields: name
func (_m *BackupStore) GetBackupItemOperations(name string) ([]*itemoperation.BackupOperation, error) {
ret := _m.Called(name)
var r0 []*itemoperation.BackupOperation
if rf, ok := ret.Get(0).(func(string) []*itemoperation.BackupOperation); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*itemoperation.BackupOperation)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(name)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
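Every generated mock method above uses the same testify dispatch: slot 0 of the recorded return values may hold either a canned value or a function with the method's signature, to be evaluated against the call's inputs. A stripped-down, stdlib-only sketch of that dispatch (hypothetical names; the real code goes through testify's `mock.Mock`):

```go
package main

import "fmt"

// ret mimics testify's mock.Arguments: each slot is either a literal
// return value or a function evaluated with the call's inputs.
type ret []interface{}

// get resolves slot i for input name, mirroring the generated
// `if rf, ok := ret.Get(0).(func(string) ...); ok` pattern above.
func (r ret) get(i int, name string) interface{} {
	if rf, ok := r[i].(func(string) interface{}); ok {
		return rf(name) // computed return, decided per call
	}
	return r[i] // canned return, fixed at setup time
}

func main() {
	canned := ret{[]string{"op-1"}, nil}
	fmt.Println(canned.get(0, "backup-1")) // [op-1]

	computed := ret{func(name string) interface{} { return []string{name + "/op"} }, nil}
	fmt.Println(computed.get(0, "backup-1")) // [backup-1/op]
}
```

This is why mock setups can pass either `.Return(operations, nil)` or `.Return(func(name string) []*itemoperation.BackupOperation { ... }, nil)` and the generated method handles both.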
// GetBackupMetadata provides a mock function with given fields: name
func (_m *BackupStore) GetBackupMetadata(name string) (*v1.Backup, error) {
ret := _m.Called(name)
@ -152,6 +178,75 @@ func (_m *BackupStore) GetBackupVolumeSnapshots(name string) ([]*volume.Snapshot
return r0, r1
}
// GetCSIVolumeSnapshotClasses provides a mock function with given fields: name
func (_m *BackupStore) GetCSIVolumeSnapshotClasses(name string) ([]*volumesnapshotv1.VolumeSnapshotClass, error) {
ret := _m.Called(name)
var r0 []*volumesnapshotv1.VolumeSnapshotClass
if rf, ok := ret.Get(0).(func(string) []*volumesnapshotv1.VolumeSnapshotClass); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*volumesnapshotv1.VolumeSnapshotClass)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(name)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetCSIVolumeSnapshotContents provides a mock function with given fields: name
func (_m *BackupStore) GetCSIVolumeSnapshotContents(name string) ([]*volumesnapshotv1.VolumeSnapshotContent, error) {
ret := _m.Called(name)
var r0 []*volumesnapshotv1.VolumeSnapshotContent
if rf, ok := ret.Get(0).(func(string) []*volumesnapshotv1.VolumeSnapshotContent); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*volumesnapshotv1.VolumeSnapshotContent)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(name)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetCSIVolumeSnapshots provides a mock function with given fields: name
func (_m *BackupStore) GetCSIVolumeSnapshots(name string) ([]*volumesnapshotv1.VolumeSnapshot, error) {
ret := _m.Called(name)
var r0 []*volumesnapshotv1.VolumeSnapshot
if rf, ok := ret.Get(0).(func(string) []*volumesnapshotv1.VolumeSnapshot); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*volumesnapshotv1.VolumeSnapshot)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(name)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetDownloadURL provides a mock function with given fields: target
func (_m *BackupStore) GetDownloadURL(target v1.DownloadTarget) (string, error) {
ret := _m.Called(target)
@ -196,6 +291,29 @@ func (_m *BackupStore) GetPodVolumeBackups(name string) ([]*v1.PodVolumeBackup,
return r0, r1
}
// GetRestoreItemOperations provides a mock function with given fields: name
func (_m *BackupStore) GetRestoreItemOperations(name string) ([]*itemoperation.RestoreOperation, error) {
ret := _m.Called(name)
var r0 []*itemoperation.RestoreOperation
if rf, ok := ret.Get(0).(func(string) []*itemoperation.RestoreOperation); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*itemoperation.RestoreOperation)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(name)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// IsValid provides a mock function with given fields:
func (_m *BackupStore) IsValid() error {
ret := _m.Called()
@ -247,6 +365,62 @@ func (_m *BackupStore) PutBackup(info persistence.BackupInfo) error {
return r0
}
// PutBackupContents provides a mock function with given fields: backup, backupContents
func (_m *BackupStore) PutBackupContents(backup string, backupContents io.Reader) error {
ret := _m.Called(backup, backupContents)
var r0 error
if rf, ok := ret.Get(0).(func(string, io.Reader) error); ok {
r0 = rf(backup, backupContents)
} else {
r0 = ret.Error(0)
}
return r0
}
// PutBackupItemOperations provides a mock function with given fields: backup, backupItemOperations
func (_m *BackupStore) PutBackupItemOperations(backup string, backupItemOperations io.Reader) error {
ret := _m.Called(backup, backupItemOperations)
var r0 error
if rf, ok := ret.Get(0).(func(string, io.Reader) error); ok {
r0 = rf(backup, backupItemOperations)
} else {
r0 = ret.Error(0)
}
return r0
}
// PutBackupMetadata provides a mock function with given fields: backup, backupMetadata
func (_m *BackupStore) PutBackupMetadata(backup string, backupMetadata io.Reader) error {
ret := _m.Called(backup, backupMetadata)
var r0 error
if rf, ok := ret.Get(0).(func(string, io.Reader) error); ok {
r0 = rf(backup, backupMetadata)
} else {
r0 = ret.Error(0)
}
return r0
}
// PutRestoreItemOperations provides a mock function with given fields: backup, restore, restoreItemOperations
func (_m *BackupStore) PutRestoreItemOperations(backup string, restore string, restoreItemOperations io.Reader) error {
ret := _m.Called(backup, restore, restoreItemOperations)
var r0 error
if rf, ok := ret.Get(0).(func(string, string, io.Reader) error); ok {
r0 = rf(backup, restore, restoreItemOperations)
} else {
r0 = ret.Error(0)
}
return r0
}
// PutRestoreLog provides a mock function with given fields: backup, restore, log
func (_m *BackupStore) PutRestoreLog(backup string, restore string, log io.Reader) error {
ret := _m.Called(backup, restore, log)
@ -275,57 +449,19 @@ func (_m *BackupStore) PutRestoreResults(backup string, restore string, results
return r0
}
// PutRestoreItemOperations provides a mock function with given fields: backup, restore, restoreItemOperations
func (_m *BackupStore) PutRestoreItemOperations(backup string, restore string, restoreItemOperations io.Reader) error {
ret := _m.Called(backup, restore, restoreItemOperations)
var r0 error
if rf, ok := ret.Get(0).(func(string, string, io.Reader) error); ok {
r0 = rf(backup, restore, restoreItemOperations)
} else {
r0 = ret.Error(0)
}
return r0
type mockConstructorTestingTNewBackupStore interface {
mock.TestingT
Cleanup(func())
}
// PutBackupItemOperations provides a mock function with given fields: backup, backupItemOperations
func (_m *BackupStore) PutBackupItemOperations(backup string, backupItemOperations io.Reader) error {
ret := _m.Called(backup, backupItemOperations)
// NewBackupStore creates a new instance of BackupStore. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewBackupStore(t mockConstructorTestingTNewBackupStore) *BackupStore {
mock := &BackupStore{}
mock.Mock.Test(t)
var r0 error
if rf, ok := ret.Get(0).(func(string, io.Reader) error); ok {
r0 = rf(backup, backupItemOperations)
} else {
r0 = ret.Error(0)
}
t.Cleanup(func() { mock.AssertExpectations(t) })
return r0
}
func (_m *BackupStore) GetCSIVolumeSnapshots(backup string) ([]*snapshotv1api.VolumeSnapshot, error) {
panic("Not implemented")
return nil, nil
}
func (_m *BackupStore) GetCSIVolumeSnapshotContents(backup string) ([]*snapshotv1api.VolumeSnapshotContent, error) {
panic("Not implemented")
return nil, nil
}
func (_m *BackupStore) GetCSIVolumeSnapshotClasses(backup string) ([]*snapshotv1api.VolumeSnapshotClass, error) {
panic("Not implemented")
return nil, nil
}
func (_m *BackupStore) GetBackupItemOperations(name string) ([]*itemoperation.BackupOperation, error) {
panic("implement me")
return nil, nil
}
func (_m *BackupStore) GetRestoreItemOperations(name string) ([]*itemoperation.RestoreOperation, error) {
panic("implement me")
return nil, nil
return mock
}
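The generated `NewBackupStore` constructor above wires the mock to the test lifecycle: it registers a `Cleanup` hook so expectations are asserted automatically when the test finishes. A minimal sketch of that wiring with a fake `testing.T` (invented names, stdlib only):

```go
package main

import "fmt"

// fakeT stands in for *testing.T; only the Cleanup hook matters here.
type fakeT struct{ cleanups []func() }

func (t *fakeT) Cleanup(f func()) { t.cleanups = append(t.cleanups, f) }

type Store struct{ asserted bool }

func (s *Store) AssertExpectations() { s.asserted = true }

// NewStore mirrors the generated NewBackupStore: build the mock, then
// register an expectations check to run at test teardown.
func NewStore(t interface{ Cleanup(func()) }) *Store {
	s := &Store{}
	t.Cleanup(func() { s.AssertExpectations() })
	return s
}

func main() {
	t := &fakeT{}
	s := NewStore(t)
	for _, f := range t.cleanups { // the test framework runs these at teardown
		f()
	}
	fmt.Println(s.asserted) // true
}
```

With this constructor, tests no longer need a manual `defer backupStore.AssertExpectations(t)`.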
func (_m *BackupStore) PutRestoredResourceList(restore string, results io.Reader) error {


@ -61,7 +61,9 @@ type BackupStore interface {
ListBackups() ([]string, error)
PutBackup(info BackupInfo) error
PutBackupMetadata(backup string, backupMetadata io.Reader) error
PutBackupItemOperations(backup string, backupItemOperations io.Reader) error
PutBackupContents(backup string, backupContents io.Reader) error
GetBackupMetadata(name string) (*velerov1api.Backup, error)
GetBackupItemOperations(name string) ([]*itemoperation.BackupOperation, error)
GetBackupVolumeSnapshots(name string) ([]*volume.Snapshot, error)
@ -315,6 +317,10 @@ func (s *objectBackupStore) GetBackupMetadata(name string) (*velerov1api.Backup,
return backupObj, nil
}
func (s *objectBackupStore) PutBackupMetadata(backup string, backupMetadata io.Reader) error {
return seekAndPutObject(s.objectStore, s.bucket, s.layout.getBackupMetadataKey(backup), backupMetadata)
}
func (s *objectBackupStore) GetBackupVolumeSnapshots(name string) ([]*volume.Snapshot, error) {
// if the volumesnapshots file doesn't exist, we don't want to return an error, since
// a legacy backup or a backup with no snapshots would not have this file, so check for
@ -550,6 +556,10 @@ func (s *objectBackupStore) PutBackupItemOperations(backup string, backupItemOpe
return seekAndPutObject(s.objectStore, s.bucket, s.layout.getBackupItemOperationsKey(backup), backupItemOperations)
}
func (s *objectBackupStore) PutBackupContents(backup string, backupContents io.Reader) error {
return seekAndPutObject(s.objectStore, s.bucket, s.layout.getBackupContentsKey(backup), backupContents)
}
func (s *objectBackupStore) GetDownloadURL(target velerov1api.DownloadTarget) (string, error) {
switch target.Kind {
case velerov1api.DownloadTargetKindBackupContents:


@ -37,6 +37,7 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
providermocks "github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
@ -461,14 +462,22 @@ func TestGetBackupItemOperations(t *testing.T) {
operations := []*itemoperation.BackupOperation{
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "test-backup",
ResourceIdentifier: "item-1",
BackupName: "test-backup",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns",
Name: "item-1",
},
},
},
{
Spec: itemoperation.BackupOperationSpec{
BackupName: "test-backup",
ResourceIdentifier: "item-2",
BackupName: "test-backup",
ResourceIdentifier: velero.ResourceIdentifier{
GroupResource: kuberesource.Pods,
Namespace: "ns",
Name: "item-2",
},
},
},
}


@ -96,6 +96,11 @@ func (r *RestartableBackupItemAction) getDelegate() (biav2.BackupItemAction, err
return r.getBackupItemAction()
}
// Name returns the plugin's name.
func (r *RestartableBackupItemAction) Name() string {
return r.Key.Name
}
// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *RestartableBackupItemAction) AppliesTo() (velero.ResourceSelector, error) {
delegate, err := r.getDelegate()
@ -107,10 +112,10 @@ func (r *RestartableBackupItemAction) AppliesTo() (velero.ResourceSelector, erro
}
// Execute restarts the plugin's process if needed, then delegates the call.
func (r *RestartableBackupItemAction) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (r *RestartableBackupItemAction) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, nil, "", err
return nil, nil, "", nil, err
}
return delegate.Execute(item, backup)
@ -148,15 +153,20 @@ func NewAdaptedV1RestartableBackupItemAction(v1Restartable *biav1cli.Restartable
return r
}
// Name returns the plugin's name.
func (r *AdaptedV1RestartableBackupItemAction) Name() string {
return r.V1Restartable.Key.Name
}
// AppliesTo delegates to the v1 AppliesTo call.
func (r *AdaptedV1RestartableBackupItemAction) AppliesTo() (velero.ResourceSelector, error) {
return r.V1Restartable.AppliesTo()
}
// Execute delegates to the v1 Execute call, returning an empty operationID.
func (r *AdaptedV1RestartableBackupItemAction) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (r *AdaptedV1RestartableBackupItemAction) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
updatedItem, additionalItems, err := r.V1Restartable.Execute(item, backup)
return updatedItem, additionalItems, "", err
return updatedItem, additionalItems, "", nil, err
}
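`AdaptedV1RestartableBackupItemAction` is a plain interface adapter: it satisfies the wider v2 `Execute` signature by delegating to the v1 call and filling the two extra returns (operationID, itemsToUpdate) with zero values, since v1 plugins never start async operations. A minimal sketch of that shape with invented interface names:

```go
package main

import "fmt"

// V1 and V2 are invented stand-ins for the two BackupItemAction versions.
type V1 interface {
	Execute(item string) (updated string, additional []string, err error)
}

type V2 interface {
	Execute(item string) (updated string, additional []string, operationID string, itemsToUpdate []string, err error)
}

type v1Action struct{}

func (v1Action) Execute(item string) (string, []string, error) {
	return item + "-backed-up", nil, nil
}

// adaptedV1 wraps a v1 action so it can be used wherever a v2 action is
// expected; the async-only returns are always empty.
type adaptedV1 struct{ d V1 }

func (a adaptedV1) Execute(item string) (string, []string, string, []string, error) {
	updated, additional, err := a.d.Execute(item)
	return updated, additional, "", nil, err
}

func main() {
	var action V2 = adaptedV1{d: v1Action{}}
	updated, _, opID, _, _ := action.Execute("pod")
	fmt.Println(updated, opID == "") // pod-backed-up true
}
```

The empty operationID is the signal downstream that no async operation was started, so the backup controller never has to distinguish v1 from v2 plugins itself.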
// Progress returns with an error since v1 plugins will never return an operationID, which means that


@ -145,8 +145,8 @@ func TestRestartableBackupItemActionDelegatedFunctions(t *testing.T) {
restartabletest.RestartableDelegateTest{
Function: "Execute",
Inputs: []interface{}{pv, b},
ExpectedErrorOutputs: []interface{}{nil, ([]velero.ResourceIdentifier)(nil), "", errors.Errorf("reset error")},
ExpectedDelegateOutputs: []interface{}{pvToReturn, additionalItems, "", errors.Errorf("delegate error")},
ExpectedErrorOutputs: []interface{}{nil, ([]velero.ResourceIdentifier)(nil), "", ([]velero.ResourceIdentifier)(nil), errors.Errorf("reset error")},
ExpectedDelegateOutputs: []interface{}{pvToReturn, additionalItems, "", ([]velero.ResourceIdentifier)(nil), errors.Errorf("delegate error")},
},
restartabletest.RestartableDelegateTest{
Function: "Progress",


@ -76,15 +76,15 @@ func (c *BackupItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, error
}, nil
}
func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
itemJSON, err := json.Marshal(item.UnstructuredContent())
if err != nil {
return nil, nil, "", errors.WithStack(err)
return nil, nil, "", nil, errors.WithStack(err)
}
backupJSON, err := json.Marshal(backup)
if err != nil {
return nil, nil, "", errors.WithStack(err)
return nil, nil, "", nil, errors.WithStack(err)
}
req := &protobiav2.ExecuteRequest{
@ -95,12 +95,12 @@ func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *
res, err := c.grpcClient.Execute(context.Background(), req)
if err != nil {
return nil, nil, "", common.FromGRPCError(err)
return nil, nil, "", nil, common.FromGRPCError(err)
}
var updatedItem unstructured.Unstructured
if err := json.Unmarshal(res.Item, &updatedItem); err != nil {
return nil, nil, "", errors.WithStack(err)
return nil, nil, "", nil, errors.WithStack(err)
}
var additionalItems []velero.ResourceIdentifier
@ -118,7 +118,22 @@ func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *
additionalItems = append(additionalItems, newItem)
}
return &updatedItem, additionalItems, res.OperationID, nil
var itemsToUpdate []velero.ResourceIdentifier
for _, itm := range res.ItemsToUpdate {
newItem := velero.ResourceIdentifier{
GroupResource: schema.GroupResource{
Group: itm.Group,
Resource: itm.Resource,
},
Namespace: itm.Namespace,
Name: itm.Name,
}
itemsToUpdate = append(itemsToUpdate, newItem)
}
return &updatedItem, additionalItems, res.OperationID, itemsToUpdate, nil
}
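The new `itemsToUpdate` loop above repeats the pattern already used for `additionalItems`: unpack each flat proto message into the nested domain `velero.ResourceIdentifier`. A self-contained sketch of that per-field conversion (stand-in types; the real code uses the generated proto structs and `schema.GroupResource`):

```go
package main

import "fmt"

// protoResourceIdentifier mimics the flat wire message (assumed fields).
type protoResourceIdentifier struct {
	Group, Resource, Namespace, Name string
}

// Domain-side types, mirroring velero.ResourceIdentifier's nested GroupResource.
type groupResource struct{ Group, Resource string }
type resourceIdentifier struct {
	GroupResource groupResource
	Namespace     string
	Name          string
}

// fromProto runs the same loop the gRPC client uses for res.ItemsToUpdate:
// copy the flat proto fields into the nested domain struct.
func fromProto(in []*protoResourceIdentifier) []resourceIdentifier {
	var out []resourceIdentifier
	for _, itm := range in {
		out = append(out, resourceIdentifier{
			GroupResource: groupResource{Group: itm.Group, Resource: itm.Resource},
			Namespace:     itm.Namespace,
			Name:          itm.Name,
		})
	}
	return out
}

func main() {
	items := fromProto([]*protoResourceIdentifier{
		{Group: "", Resource: "pods", Namespace: "ns", Name: "item-1"},
	})
	fmt.Println(items[0].GroupResource.Resource, items[0].Name) // pods item-1
}
```

Note the `var out ...; append` style also preserves nil-ness: an empty proto list converts to a nil slice, matching the `(nil)` expectations in the restartable-action tests.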
func (c *BackupItemActionGRPCClient) Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error) {
@ -167,3 +182,9 @@ func (c *BackupItemActionGRPCClient) Cancel(operationID string, backup *api.Back
return nil
}
// This shouldn't be called on the GRPC client since the RestartableBackupItemAction won't delegate
// this method
func (c *BackupItemActionGRPCClient) Name() string {
return ""
}


@ -107,7 +107,7 @@ func (s *BackupItemActionGRPCServer) Execute(
return nil, common.NewGRPCError(errors.WithStack(err))
}
updatedItem, additionalItems, operationID, err := impl.Execute(&item, &backup)
updatedItem, additionalItems, operationID, itemsToUpdate, err := impl.Execute(&item, &backup)
if err != nil {
return nil, common.NewGRPCError(err)
}
@ -132,6 +132,9 @@ func (s *BackupItemActionGRPCServer) Execute(
for _, item := range additionalItems {
res.AdditionalItems = append(res.AdditionalItems, backupResourceIdentifierToProto(item))
}
for _, item := range itemsToUpdate {
res.ItemsToUpdate = append(res.ItemsToUpdate, backupResourceIdentifierToProto(item))
}
return res, nil
}
@ -210,3 +213,9 @@ func backupResourceIdentifierToProto(id velero.ResourceIdentifier) *proto.Resour
Name: id.Name,
}
}
// This shouldn't be called on the GRPC server since the server won't ever receive this request, as
// the RestartableBackupItemAction in Velero won't delegate this to the server
func (c *BackupItemActionGRPCServer) Name() string {
return ""
}


@ -97,6 +97,7 @@ func TestBackupItemActionGRPCServerExecute(t *testing.T) {
implUpdatedItem runtime.Unstructured
implAdditionalItems []velero.ResourceIdentifier
implOperationID string
implItemsToUpdate []velero.ResourceIdentifier
implError error
expectError bool
skipMock bool
@ -153,7 +154,7 @@ func TestBackupItemActionGRPCServerExecute(t *testing.T) {
defer itemAction.AssertExpectations(t)
if !test.skipMock {
itemAction.On("Execute", &validItemObject, &validBackupObject).Return(test.implUpdatedItem, test.implAdditionalItems, test.implOperationID, test.implError)
itemAction.On("Execute", &validItemObject, &validBackupObject).Return(test.implUpdatedItem, test.implAdditionalItems, test.implOperationID, test.implItemsToUpdate, test.implError)
}
s := &BackupItemActionGRPCServer{mux: &common.ServerMux{


@ -102,6 +102,7 @@ type ExecuteResponse struct {
Item []byte `protobuf:"bytes,1,opt,name=item,proto3" json:"item,omitempty"`
AdditionalItems []*generated.ResourceIdentifier `protobuf:"bytes,2,rep,name=additionalItems,proto3" json:"additionalItems,omitempty"`
OperationID string `protobuf:"bytes,3,opt,name=operationID,proto3" json:"operationID,omitempty"`
ItemsToUpdate []*generated.ResourceIdentifier `protobuf:"bytes,4,rep,name=itemsToUpdate,proto3" json:"itemsToUpdate,omitempty"`
}
func (x *ExecuteResponse) Reset() {
@ -157,6 +158,13 @@ func (x *ExecuteResponse) GetOperationID() string {
return ""
}
func (x *ExecuteResponse) GetItemsToUpdate() []*generated.ResourceIdentifier {
if x != nil {
return x.ItemsToUpdate
}
return nil
}
type BackupItemActionAppliesToRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@ -438,7 +446,7 @@ var file_backupitemaction_v2_BackupItemAction_proto_rawDesc = []byte{
0x6c, 0x75, 0x67, 0x69, 0x6e, 0x12, 0x12, 0x0a, 0x04, 0x69, 0x74, 0x65, 0x6d, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x04, 0x69, 0x74, 0x65, 0x6d, 0x12, 0x16, 0x0a, 0x06, 0x62, 0x61, 0x63,
0x6b, 0x75, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x62, 0x61, 0x63, 0x6b, 0x75,
0x70, 0x22, 0x90, 0x01, 0x0a, 0x0f, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65, 0x52, 0x65, 0x73,
0x70, 0x22, 0xd5, 0x01, 0x0a, 0x0f, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x69, 0x74, 0x65, 0x6d, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x04, 0x69, 0x74, 0x65, 0x6d, 0x12, 0x47, 0x0a, 0x0f, 0x61, 0x64, 0x64,
0x69, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x18, 0x02, 0x20, 0x03,
@ -447,63 +455,67 @@ var file_backupitemaction_v2_BackupItemAction_proto_rawDesc = []byte{
0x72, 0x52, 0x0f, 0x61, 0x64, 0x64, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x49, 0x74, 0x65,
0x6d, 0x73, 0x12, 0x20, 0x0a, 0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49,
0x44, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x49, 0x44, 0x22, 0x3a, 0x0a, 0x20, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74,
0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73, 0x54,
0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x6c, 0x75, 0x67,
0x69, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e,
0x22, 0x6c, 0x0a, 0x21, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x47, 0x0a, 0x10, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63,
0x65, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x1b, 0x2e, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x2e, 0x52, 0x65, 0x73, 0x6f,
0x75, 0x72, 0x63, 0x65, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x10, 0x52, 0x65,
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x22, 0x73,
0x0a, 0x1f, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28,
0x09, 0x52, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x12, 0x20, 0x0a, 0x0b, 0x6f, 0x70, 0x65,
0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b,
0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x16, 0x0a, 0x06, 0x62,
0x61, 0x63, 0x6b, 0x75, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x62, 0x61, 0x63,
0x6b, 0x75, 0x70, 0x22, 0x5c, 0x0a, 0x20, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65,
0x6f, 0x6e, 0x49, 0x44, 0x12, 0x43, 0x0a, 0x0d, 0x69, 0x74, 0x65, 0x6d, 0x73, 0x54, 0x6f, 0x55,
0x70, 0x64, 0x61, 0x74, 0x65, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x67, 0x65,
0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x52, 0x0d, 0x69, 0x74, 0x65, 0x6d,
0x73, 0x54, 0x6f, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x22, 0x3a, 0x0a, 0x20, 0x42, 0x61, 0x63,
0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x70, 0x70,
0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a,
0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70,
0x6c, 0x75, 0x67, 0x69, 0x6e, 0x22, 0x6c, 0x0a, 0x21, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49,
0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73,
0x54, 0x6f, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x47, 0x0a, 0x10, 0x52, 0x65,
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x18, 0x01,
0x20, 0x01, 0x28, 0x0b, 0x32, 0x1b, 0x2e, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64,
0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f,
0x72, 0x52, 0x10, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x65, 0x6c, 0x65, 0x63,
0x74, 0x6f, 0x72, 0x22, 0x73, 0x0a, 0x1f, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65,
0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x38, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x67, 0x72,
0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x67, 0x65, 0x6e, 0x65,
0x72, 0x61, 0x74, 0x65, 0x64, 0x2e, 0x4f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50,
0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73,
0x73, 0x22, 0x71, 0x0a, 0x1d, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41,
0x63, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x12, 0x20, 0x0a, 0x0b, 0x6f, 0x70,
0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x16, 0x0a, 0x06,
0x62, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x62, 0x61,
0x63, 0x6b, 0x75, 0x70, 0x32, 0xbc, 0x02, 0x0a, 0x10, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49,
0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x58, 0x0a, 0x09, 0x41, 0x70, 0x70,
0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f, 0x12, 0x24, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x61, 0x63, 0x6b,
0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x70, 0x70, 0x6c,
0x69, 0x65, 0x73, 0x54, 0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x76,
0x32, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f, 0x52, 0x65, 0x73, 0x70, 0x6f,
0x6e, 0x73, 0x65, 0x12, 0x32, 0x0a, 0x07, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65, 0x12, 0x12,
0x2e, 0x76, 0x32, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x1a, 0x13, 0x2e, 0x76, 0x32, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x55, 0x0a, 0x08, 0x50, 0x72, 0x6f, 0x67, 0x72,
0x65, 0x73, 0x73, 0x12, 0x23, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49,
0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73,
0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x24, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x61,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x12, 0x20,
0x0a, 0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x02, 0x20,
0x01, 0x28, 0x09, 0x52, 0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44,
0x12, 0x16, 0x0a, 0x06, 0x62, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c,
0x52, 0x06, 0x62, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x22, 0x5c, 0x0a, 0x20, 0x42, 0x61, 0x63, 0x6b,
0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x67,
0x72, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x38, 0x0a, 0x08,
0x70, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c,
0x2e, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x2e, 0x4f, 0x70, 0x65, 0x72, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52, 0x08, 0x70, 0x72,
0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x22, 0x71, 0x0a, 0x1d, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70,
0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69,
0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x12,
0x20, 0x0a, 0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x02,
0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49,
0x44, 0x12, 0x16, 0x0a, 0x06, 0x62, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x06, 0x62, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x32, 0xbc, 0x02, 0x0a, 0x10, 0x42, 0x61,
0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x58,
0x0a, 0x09, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f, 0x12, 0x24, 0x2e, 0x76, 0x32,
0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f,
0x6e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x1a, 0x25, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65,
0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x65, 0x73, 0x54, 0x6f,
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x32, 0x0a, 0x07, 0x45, 0x78, 0x65, 0x63,
0x75, 0x74, 0x65, 0x12, 0x12, 0x2e, 0x76, 0x32, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x13, 0x2e, 0x76, 0x32, 0x2e, 0x45, 0x78, 0x65,
0x63, 0x75, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x55, 0x0a, 0x08,
0x50, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x12, 0x23, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x61,
0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72,
0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x43,
0x0a, 0x06, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c, 0x12, 0x21, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x61,
0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x61,
0x6e, 0x63, 0x65, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x16, 0x2e, 0x67, 0x6f,
0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d,
0x70, 0x74, 0x79, 0x42, 0x49, 0x5a, 0x47, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f,
0x6d, 0x2f, 0x76, 0x6d, 0x77, 0x61, 0x72, 0x65, 0x2d, 0x74, 0x61, 0x6e, 0x7a, 0x75, 0x2f, 0x76,
0x65, 0x6c, 0x65, 0x72, 0x6f, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e,
0x2f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x2f, 0x62, 0x61, 0x63, 0x6b, 0x75,
0x70, 0x69, 0x74, 0x65, 0x6d, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x76, 0x32, 0x62, 0x06,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x24, 0x2e,
0x76, 0x32, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74,
0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x67, 0x72, 0x65, 0x73, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f,
0x6e, 0x73, 0x65, 0x12, 0x43, 0x0a, 0x06, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c, 0x12, 0x21, 0x2e,
0x76, 0x32, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x74, 0x65, 0x6d, 0x41, 0x63, 0x74,
0x69, 0x6f, 0x6e, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x42, 0x49, 0x5a, 0x47, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x76, 0x6d, 0x77, 0x61, 0x72, 0x65, 0x2d, 0x74, 0x61,
0x6e, 0x7a, 0x75, 0x2f, 0x76, 0x65, 0x6c, 0x65, 0x72, 0x6f, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x70,
0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x2f,
0x62, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x69, 0x74, 0x65, 0x6d, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e,
0x2f, 0x76, 0x32, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@ -534,21 +546,22 @@ var file_backupitemaction_v2_BackupItemAction_proto_goTypes = []interface{}{
}
var file_backupitemaction_v2_BackupItemAction_proto_depIdxs = []int32{
7, // 0: v2.ExecuteResponse.additionalItems:type_name -> generated.ResourceIdentifier
8, // 1: v2.BackupItemActionAppliesToResponse.ResourceSelector:type_name -> generated.ResourceSelector
9, // 2: v2.BackupItemActionProgressResponse.progress:type_name -> generated.OperationProgress
2, // 3: v2.BackupItemAction.AppliesTo:input_type -> v2.BackupItemActionAppliesToRequest
0, // 4: v2.BackupItemAction.Execute:input_type -> v2.ExecuteRequest
4, // 5: v2.BackupItemAction.Progress:input_type -> v2.BackupItemActionProgressRequest
6, // 6: v2.BackupItemAction.Cancel:input_type -> v2.BackupItemActionCancelRequest
3, // 7: v2.BackupItemAction.AppliesTo:output_type -> v2.BackupItemActionAppliesToResponse
1, // 8: v2.BackupItemAction.Execute:output_type -> v2.ExecuteResponse
5, // 9: v2.BackupItemAction.Progress:output_type -> v2.BackupItemActionProgressResponse
10, // 10: v2.BackupItemAction.Cancel:output_type -> google.protobuf.Empty
7, // [7:11] is the sub-list for method output_type
3, // [3:7] is the sub-list for method input_type
3, // [3:3] is the sub-list for extension type_name
3, // [3:3] is the sub-list for extension extendee
0, // [0:3] is the sub-list for field type_name
7, // 1: v2.ExecuteResponse.itemsToUpdate:type_name -> generated.ResourceIdentifier
8, // 2: v2.BackupItemActionAppliesToResponse.ResourceSelector:type_name -> generated.ResourceSelector
9, // 3: v2.BackupItemActionProgressResponse.progress:type_name -> generated.OperationProgress
2, // 4: v2.BackupItemAction.AppliesTo:input_type -> v2.BackupItemActionAppliesToRequest
0, // 5: v2.BackupItemAction.Execute:input_type -> v2.ExecuteRequest
4, // 6: v2.BackupItemAction.Progress:input_type -> v2.BackupItemActionProgressRequest
6, // 7: v2.BackupItemAction.Cancel:input_type -> v2.BackupItemActionCancelRequest
3, // 8: v2.BackupItemAction.AppliesTo:output_type -> v2.BackupItemActionAppliesToResponse
1, // 9: v2.BackupItemAction.Execute:output_type -> v2.ExecuteResponse
5, // 10: v2.BackupItemAction.Progress:output_type -> v2.BackupItemActionProgressResponse
10, // 11: v2.BackupItemAction.Cancel:output_type -> google.protobuf.Empty
8, // [8:12] is the sub-list for method output_type
4, // [4:8] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
func init() { file_backupitemaction_v2_BackupItemAction_proto_init() }

View File

@ -16,6 +16,7 @@ message ExecuteResponse {
bytes item = 1;
repeated generated.ResourceIdentifier additionalItems = 2;
string operationID = 3;
repeated generated.ResourceIdentifier itemsToUpdate = 4;
}
service BackupItemAction {

View File

@ -29,6 +29,12 @@ import (
// BackupItemAction is an actor that performs an operation on an individual item being backed up.
type BackupItemAction interface {
// Name returns the name of this BIA. Plugins which implement this interface must define Name,
// but its content is unimportant, as it won't actually be called via RPC. Velero's plugin infrastructure
// will implement this directly rather than delegating to the RPC plugin in order to return the name
// that the plugin was registered under. The plugins must implement the method to complete the interface.
Name() string
// AppliesTo returns information about which resources this action should be invoked for.
// A BackupItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
@ -37,8 +43,13 @@ type BackupItemAction interface {
// Execute allows the BackupItemAction to perform arbitrary logic with the item being backed up,
// including mutating the item itself prior to backup. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying
// additional related items that should be backed up.
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error)
// additional related items that should be backed up now, an optional operationID for actions which
// initiate asynchronous operations, and a second slice of ResourceIdentifiers specifying related items
// which should be backed up after all asynchronous operations have completed. This last field is
// ignored if operationID is empty, and should not be filled in unless the resource must be updated in the
// backup after async operations complete (i.e. some of the item's Kubernetes metadata will be updated
// during the async operation and will be required during restore).
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
// Progress allows the BackupItemAction to report on progress of an asynchronous action.
// For the passed-in operation, the plugin will return an OperationProgress struct, indicating

View File

@ -13,7 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by mockery v1.0.0. DO NOT EDIT.
// Code generated by mockery v2.16.0. DO NOT EDIT.
package v2
@ -67,7 +67,7 @@ func (_m *BackupItemAction) Cancel(operationID string, backup *v1.Backup) error
}
// Execute provides a mock function with given fields: item, backup
func (_m *BackupItemAction) Execute(item runtime.Unstructured, backup *v1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, error) {
func (_m *BackupItemAction) Execute(item runtime.Unstructured, backup *v1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
ret := _m.Called(item, backup)
var r0 runtime.Unstructured
@ -95,14 +95,37 @@ func (_m *BackupItemAction) Execute(item runtime.Unstructured, backup *v1.Backup
r2 = ret.Get(2).(string)
}
var r3 error
if rf, ok := ret.Get(3).(func(runtime.Unstructured, *v1.Backup) error); ok {
var r3 []velero.ResourceIdentifier
if rf, ok := ret.Get(3).(func(runtime.Unstructured, *v1.Backup) []velero.ResourceIdentifier); ok {
r3 = rf(item, backup)
} else {
r3 = ret.Error(3)
if ret.Get(3) != nil {
r3 = ret.Get(3).([]velero.ResourceIdentifier)
}
}
return r0, r1, r2, r3
var r4 error
if rf, ok := ret.Get(4).(func(runtime.Unstructured, *v1.Backup) error); ok {
r4 = rf(item, backup)
} else {
r4 = ret.Error(4)
}
return r0, r1, r2, r3, r4
}
// Name provides a mock function with given fields:
func (_m *BackupItemAction) Name() string {
ret := _m.Called()
var r0 string
if rf, ok := ret.Get(0).(func() string); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(string)
}
return r0
}
// Progress provides a mock function with given fields: operationID, backup
@ -125,3 +148,18 @@ func (_m *BackupItemAction) Progress(operationID string, backup *v1.Backup) (vel
return r0, r1
}
type mockConstructorTestingTNewBackupItemAction interface {
mock.TestingT
Cleanup(func())
}
// NewBackupItemAction creates a new instance of BackupItemAction. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewBackupItemAction(t mockConstructorTestingTNewBackupItemAction) *BackupItemAction {
mock := &BackupItemAction{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@ -68,6 +68,20 @@ type ResourceIdentifier struct {
Name string
}
func (in *ResourceIdentifier) DeepCopy() *ResourceIdentifier {
if in == nil {
return nil
}
out := new(ResourceIdentifier)
in.DeepCopyInto(out)
return out
}
func (in *ResourceIdentifier) DeepCopyInto(out *ResourceIdentifier) {
*out = *in
out.GroupResource = in.GroupResource
}
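Because ResourceIdentifier holds only value fields, the struct assignment in DeepCopyInto is already a full deep copy. A small runnable sketch using local mirrors of these types (GroupResource is assumed to be a plain two-string value struct, as schema.GroupResource is):

```go
package main

import "fmt"

// Local mirror of schema.GroupResource (assumed plain value struct).
type GroupResource struct{ Group, Resource string }

// Local mirror of the velero.ResourceIdentifier shown above.
type ResourceIdentifier struct {
	GroupResource
	Namespace string
	Name      string
}

func (in *ResourceIdentifier) DeepCopyInto(out *ResourceIdentifier) {
	*out = *in
	out.GroupResource = in.GroupResource
}

func (in *ResourceIdentifier) DeepCopy() *ResourceIdentifier {
	if in == nil {
		return nil
	}
	out := new(ResourceIdentifier)
	in.DeepCopyInto(out)
	return out
}

func main() {
	orig := &ResourceIdentifier{GroupResource: GroupResource{Resource: "persistentvolumes"}, Name: "pv-1"}
	cp := orig.DeepCopy()
	cp.Name = "pv-2" // mutating the copy leaves the original untouched
	fmt.Println(orig.Name, cp.Name)
}
```

The nil check in DeepCopy matters because Go allows calling a pointer-receiver method on a nil receiver, so `var r *ResourceIdentifier; r.DeepCopy()` safely returns nil instead of panicking.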
// OperationProgress describes progress of an asynchronous plugin operation.
type OperationProgress struct {
// True when the operation has completed, either successfully or with a failure

View File

@ -33,6 +33,10 @@ spec:
# CSI VolumeSnapshot status turns to ReadyToUse during creation, before
# returning error as timeout. The default value is 10 minutes.
csiSnapshotTimeout: 10m
# ItemOperationTimeout specifies the time to wait for
# asynchronous BackupItemAction operations.
# The default value is 1 hour.
itemOperationTimeout: 1h
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
@ -146,7 +150,8 @@ status:
expiration: null
# The current phase.
# Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
# WaitingForPluginOperationsPartiallyFailed, Completed, PartiallyFailed, Failed.
# WaitingForPluginOperationsPartiallyFailed, FinalizingAfterPluginOperations,
# FinalizingAfterPluginOperationsPartiallyFailed, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
@ -158,6 +163,12 @@ status:
volumeSnapshotsAttempted: 2
# Number of volume snapshots that Velero successfully created for this backup.
volumeSnapshotsCompleted: 1
# Number of attempted async BackupItemAction operations for this backup.
asyncBackupItemOperationsAttempted: 2
# Number of async BackupItemAction operations that Velero successfully completed for this backup.
asyncBackupItemOperationsCompleted: 1
# Number of async BackupItemAction operations that ended in failure for this backup.
asyncBackupItemOperationsFailed: 0
# Number of warnings that were logged by the backup.
warnings: 2
# Number of errors that were logged by the backup.