Merge pull request #7794 from blackpiglet/modify_volume_helper

Modify the volume helper logic.
Xun Jiang/Bruce Jiang 2024-05-23 11:18:38 +08:00 committed by GitHub
commit 0e7fb402cd
23 changed files with 925 additions and 834 deletions


@ -0,0 +1 @@
Modify the volume helper logic.


@ -86,205 +86,241 @@ volumePolicies:
- If it is not `snapshot`, then we skip the CSI BIA execute action and avoid taking a snapshot of the PVC by not invoking the CSI plugin action for that PVC
**Note:**
- When we are using the `VolumePolicy` approach for backing up the volumes, the volume policy criteria and action need to be specific and explicit; there is no default behavior. If a volume matches the `fs-backup` action, then the `fs-backup` method will be used for that volume; similarly, if the volume matches the criteria for the `snapshot` action, then the snapshot workflow will be used for the volume backup.
- Another thing to note is that the workflow proposed in this design uses the legacy `opt-in/opt-out` approach as a fallback option. For instance, if the user specifies a VolumePolicy but, for a particular volume included in the backup, no actions (`fs-backup`/`snapshot`) in the volume policy match that volume, then the legacy approach will be used for backing up that particular volume.
- The relation between the `VolumePolicy` and the backup's legacy parameter `SnapshotVolumes`:
- The `VolumePolicy`'s `snapshot` action matching for a volume has higher priority. When a `snapshot` action matches the selected volume, it will be backed up via snapshot, regardless of the `backup.Spec.SnapshotVolumes` setting.
- If no `snapshot` action in the `VolumePolicy` matches the selected volume, then the volume will be backed up via snapshot, provided `backup.Spec.SnapshotVolumes` is not set to false.
- The relation between the `VolumePolicy` and the backup's legacy filesystem `opt-in/opt-out` approach:
- The `VolumePolicy`'s `fs-backup` action matching for a volume has higher priority. When an `fs-backup` action matches the selected volume, it will be backed up via fs-backup, regardless of the `backup.Spec.DefaultVolumesToFsBackup` setting and the pod's `opt-in/opt-out` annotation setting.
- If no `fs-backup` action in the `VolumePolicy` matches the selected volume, then the volume will be backed up according to the legacy `opt-in/opt-out` way.
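For illustration, a minimal volume policy of the kind referenced above might look like the following sketch (the condition keys and action types follow the existing resource policies format; the storage class names are just examples). Volumes whose PV has storage class `gp2-csi` would go through `fs-backup`, and `gp3-csi` volumes through `snapshot`, regardless of the legacy opt-in/opt-out settings:
```yaml
version: v1
volumePolicies:
  # each volume is checked against the conditions in order; the first match wins
  - conditions:
      storageClass:
        - gp2-csi
    action:
      type: fs-backup
  - conditions:
      storageClass:
        - gp3-csi
    action:
      type: snapshot
```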
## Implementation
- The implementation should be included in Velero 1.14.
- We will introduce a `VolumeHelper` interface. It will consist of two methods:
  - `ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource)`
  - `ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod)`
```go
type VolumeHelper interface {
	ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error)
	ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error)
}
```
- The `volumeHelperImpl` struct will implement the `VolumeHelper` interface and will hold the state we use throughout the backup workflow to accommodate volume policies for PVs and PVCs.
```go
type volumeHelperImpl struct {
	volumePolicy             *resourcepolicies.Policies
	snapshotVolumes          *bool
	logger                   logrus.FieldLogger
	client                   crclient.Client
	defaultVolumesToFSBackup bool
	backupExcludePVC         bool
}
```
- We will create an instance of the structure `volumeHelperImpl` in `item_backupper.go`:
```go
itemBackupper := &itemBackupper{
	...
	volumeHelperImpl: volumehelper.NewVolumeHelperImpl(
		resourcePolicy,
		backupRequest.Spec.SnapshotVolumes,
		log,
		kb.kbClient,
		boolptr.IsSetToTrue(backupRequest.Spec.DefaultVolumesToFsBackup),
		!backupRequest.ResourceIncludesExcludes.ShouldInclude(kuberesource.PersistentVolumeClaims.String()),
	),
}
```
#### FS-Backup
- Regarding the `fs-backup` action, to decide whether to use the legacy annotation-based approach or the volume-policy-based approach:
  - We will use the `vh.ShouldPerformFSBackup()` function from the `volumehelper` package.
- The function involved in processing the `fs-backup` volume policy action will look roughly like:
```go
func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error) {
	if !v.shouldIncludeVolumeInBackup(volume) {
		v.logger.Debugf("skip fs-backup action for pod %s's volume %s, due to not pass volume check.", pod.Namespace+"/"+pod.Name, volume.Name)
		return false, nil
	}

	if v.volumePolicy != nil {
		pvc, err := kubeutil.GetPVCForPodVolume(&volume, &pod, v.client)
		if err != nil {
			v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
			return false, err
		}

		pv, err := kubeutil.GetPVForPVC(pvc, v.client)
		if err != nil {
			v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
			return false, err
		}

		action, err := v.volumePolicy.GetMatchAction(pv)
		if err != nil {
			v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
			return false, err
		}

		if action != nil {
			if action.Type == resourcepolicies.FSBackup {
				v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to volume policy match",
					volume.Name, pod.Namespace+"/"+pod.Name)
				return true, nil
			} else {
				v.logger.Infof("Skip fs-backup action for volume %s for pod %s because the action type is %s",
					volume.Name, pod.Namespace+"/"+pod.Name, action.Type)
				return false, nil
			}
		}
	}

	if v.shouldPerformFSBackupLegacy(volume, pod) {
		v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to opt-in/out way",
			volume.Name, pod.Namespace+"/"+pod.Name)
		return true, nil
	} else {
		v.logger.Infof("Skip fs-backup action for volume %s of pod %s due to opt-in/out way",
			volume.Name, pod.Namespace+"/"+pod.Name)
		return false, nil
	}
}
```
- The main function from the above, `ShouldPerformFSBackup`, will be called when we encounter Pods during the backup workflow:
```go
for _, volume := range pod.Spec.Volumes {
shouldDoFSBackup, err := ib.volumeHelperImpl.ShouldPerformFSBackup(volume, *pod)
if err != nil {
backupErrs = append(backupErrs, errors.WithStack(err))
}
...
}
```
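When no volume policy action matches and the legacy fallback is used, the decision comes from the standard pod annotations read by `shouldPerformFSBackupLegacy` (via the `podvolume` util helpers). As a sketch, a pod that opts a volume into fs-backup, or opts it out when `defaultVolumesToFsBackup` is true, is annotated like this (the pod, volume, PVC, and image names are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  annotations:
    # opt-in: back up this volume via fs-backup when defaultVolumesToFsBackup is false
    backup.velero.io/backup-volumes: sample-volume
    # opt-out: skip this volume when defaultVolumesToFsBackup is true
    # backup.velero.io/backup-volumes-excludes: sample-volume
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: sample-volume
          mountPath: /data
  volumes:
    - name: sample-volume
      persistentVolumeClaim:
        claimName: sample-pvc
```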
#### Snapshot (PV)
- Making sure that the `snapshot` action is skipped for PVs that do not fit the volume policy criteria; for this we will use `ShouldPerformSnapshot` from the `volumeHelperImpl` receiver.
```go
func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error) {
	// Check if a volume policy exists and whether the object (PV/PVC) fits a volume policy's criteria
	// with an associated snapshot action; if the matched action is not snapshot, skip the snapshot code path.
	pvc := new(corev1api.PersistentVolumeClaim)
	pv := new(corev1api.PersistentVolume)
	var err error

	if groupResource == kuberesource.PersistentVolumeClaims {
		if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
			return false, err
		}

		pv, err = kubeutil.GetPVForPVC(pvc, v.client)
		if err != nil {
			return false, err
		}
	}

	if groupResource == kuberesource.PersistentVolumes {
		if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pv); err != nil {
			return false, err
		}
	}

	if v.volumePolicy != nil {
		action, err := v.volumePolicy.GetMatchAction(pv)
		if err != nil {
			return false, err
		}

		// If there is a match action and the action type is snapshot, return true;
		// if the action type is not snapshot, return false.
		// If there is no match action, go on to the next check.
		if action != nil {
			if action.Type == resourcepolicies.Snapshot {
				v.logger.Infof("performing snapshot action for pv %s", pv.Name)
				return true, nil
			}
			v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
			return false, nil
		}
	}

	// If this PV is claimed, see if we've already taken a (pod volume backup)
	// snapshot of the contents of this PV. If so, don't take a snapshot.
	if pv.Spec.ClaimRef != nil {
		pods, err := podvolumeutil.GetPodsUsingPVC(
			pv.Spec.ClaimRef.Namespace,
			pv.Spec.ClaimRef.Name,
			v.client,
		)
		if err != nil {
			v.logger.WithError(err).Errorf("fail to get pod for PV %s", pv.Name)
			return false, err
		}

		for _, pod := range pods {
			for _, vol := range pod.Spec.Volumes {
				if vol.PersistentVolumeClaim != nil &&
					vol.PersistentVolumeClaim.ClaimName == pv.Spec.ClaimRef.Name &&
					v.shouldPerformFSBackupLegacy(vol, pod) {
					v.logger.Infof("Skipping snapshot of pv %s because it is backed up with PodVolumeBackup.", pv.Name)
					return false, nil
				}
			}
		}
	}

	if !boolptr.IsSetToFalse(v.snapshotVolumes) {
		// If backup.Spec.SnapshotVolumes is not set, or is set to true, take the snapshot.
		v.logger.Infof("performing snapshot action for pv %s as the snapshotVolumes is not set to false", pv.Name)
		return true, nil
	}

	v.logger.Infof("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name)
	return false, nil
}
```
- The function `ShouldPerformSnapshot` will be used as follows in the `takePVSnapshot` function of the backup workflow:
```go
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumes)
if err != nil {
return err
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping volume snapshot for PV %s as it does not fit the volume policy criteria for snapshot action", pv.Name))
log.Info(fmt.Sprintf("skipping volume snapshot for PV %s as it does not fit the volume policy criteria specified by the user for snapshot action", pv.Name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
return nil
}
```
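To make the precedence concrete: in a backup like the sketch below (the configmap name is illustrative; `resourcePolicy` is the existing field for referencing a resource policies configmap), a PV matching the policy's `snapshot` action is still snapshotted even though `snapshotVolumes` is false, because the volume policy check runs before the legacy flag is consulted:
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-with-volume-policy
  namespace: velero
spec:
  snapshotVolumes: false      # overridden for volumes that match a snapshot action
  resourcePolicy:
    kind: configmap
    name: my-volume-policy    # illustrative name
```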
#### Snapshot (PVC)
- Making sure that the `snapshot` action is skipped for PVCs that do not fit the volume policy criteria; for this we will again use `ShouldPerformSnapshot`, which is available to `executeActions` through the `itemBackupper`'s `volumeHelperImpl` instance.
- The above function will be used as follows in the `executeActions` function of the backup workflow.
- Considering the vSphere plugin doesn't support the VolumePolicy yet, don't use the VolumePolicy for the vSphere plugin for now.
```go
if groupResource == kuberesource.PersistentVolumeClaims && actionName == csiBIAPluginName {
snapshotVolume, err := vh.ShouldPerformSnapshot(obj)
if err != nil {
return nil, itemFiles, errors.WithStack(err)
}
if groupResource == kuberesource.PersistentVolumeClaims {
if actionName == csiBIAPluginName {
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumeClaims)
if err != nil {
return nil, itemFiles, errors.WithStack(err)
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping csi volume snapshot for PVC %s as it does not fit the volume policy criteria for snapshot action", namespace+" /"+name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
continue
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping csi volume snapshot for PVC %s as it does not fit the volume policy criteria specified by the user for snapshot action", namespace+"/"+name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
continue
}
}
}
```


@ -4,34 +4,56 @@ import (
"fmt"
"strings"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/sirupsen/logrus"
corev1api "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
crclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
pdvolumeutil "github.com/vmware-tanzu/velero/pkg/util/podvolume"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
podvolumeutil "github.com/vmware-tanzu/velero/pkg/util/podvolume"
)
type VolumeHelper interface {
ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error)
ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error)
}
type volumeHelperImpl struct {
volumePolicy *resourcepolicies.Policies
snapshotVolumes *bool
logger logrus.FieldLogger
client crclient.Client
defaultVolumesToFSBackup bool
// This parameter is used to align the fs-backup with snapshot action,
// because PVC is already filtered by the resource filter before getting
// to the volume policy check, but fs-backup is based on the pod resource,
// the resource filter on PVC and PV doesn't work on this scenario.
backupExcludePVC bool
}
func NewVolumeHelperImpl(
volumePolicy *resourcepolicies.Policies,
snapshotVolumes *bool,
logger logrus.FieldLogger,
client crclient.Client,
defaultVolumesToFSBackup bool,
backupExcludePVC bool,
) VolumeHelper {
return &volumeHelperImpl{
volumePolicy: volumePolicy,
snapshotVolumes: snapshotVolumes,
logger: logger,
client: client,
defaultVolumesToFSBackup: defaultVolumesToFSBackup,
backupExcludePVC: backupExcludePVC,
}
}
func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, groupResource schema.GroupResource) (bool, error) {
// check if volume policy exists and also check if the object(pv/pvc) fits a volume policy criteria and see if the associated action is snapshot
// if it is not snapshot then skip the code path for snapshotting the PV/PVC
pvc := new(corev1api.PersistentVolumeClaim)
@ -40,130 +62,156 @@ func (v *VolumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
if groupResource == kuberesource.PersistentVolumeClaims {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
v.logger.WithError(err).Error("fail to convert unstructured into PVC")
return false, err
}
pv, err = kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
}
}
if groupResource == kuberesource.PersistentVolumes {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pv); err != nil {
v.logger.WithError(err).Error("fail to convert unstructured into PV")
return false, err
}
}
if v.volumePolicy != nil {
action, err := v.volumePolicy.GetMatchAction(pv)
if err != nil {
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
return false, err
}
// If there is a match action, and the action type is snapshot, return true,
// or the action type is not snapshot, then return false.
// If there is no match action, go on to the next check.
if action != nil {
if action.Type == resourcepolicies.Snapshot {
v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
return true, nil
} else {
v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
return false, nil
}
}
v.Logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to not satisfying the volume policy criteria or snapshotVolumes is not true", pv.Name))
return false, nil
}
// If this PV is claimed, see if we've already taken a (pod volume backup)
// snapshot of the contents of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
pods, err := podvolumeutil.GetPodsUsingPVC(
pv.Spec.ClaimRef.Namespace,
pv.Spec.ClaimRef.Name,
v.client,
)
if err != nil {
v.logger.WithError(err).Errorf("fail to get pod for PV %s", pv.Name)
return false, err
}
for _, pod := range pods {
for _, vol := range pod.Spec.Volumes {
if vol.PersistentVolumeClaim != nil &&
vol.PersistentVolumeClaim.ClaimName == pv.Spec.ClaimRef.Name &&
v.shouldPerformFSBackupLegacy(vol, pod) {
v.logger.Infof("Skipping snapshot of pv %s because it is backed up with PodVolumeBackup.", pv.Name)
return false, nil
}
}
}
}
if !boolptr.IsSetToFalse(v.snapshotVolumes) {
// If the backup.Spec.SnapshotVolumes is not set, or set to true, then should take the snapshot.
v.logger.Infof("performing snapshot action for pv %s as the snapshotVolumes is not set to false", pv.Name)
return true, nil
}
v.logger.Infof("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name)
return false, nil
}
func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod corev1api.Pod) (bool, error) {
if !v.shouldIncludeVolumeInBackup(volume) {
v.logger.Debugf("skip fs-backup action for pod %s's volume %s, due to not pass volume check.", pod.Namespace+"/"+pod.Name, volume.Name)
return false, nil
}
if v.volumePolicy != nil {
pvc, err := kubeutil.GetPVCForPodVolume(&volume, &pod, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
return false, err
}
pv, err := kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
}
action, err := v.volumePolicy.GetMatchAction(pv)
if err != nil {
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
return false, err
}
if action != nil {
if action.Type == resourcepolicies.FSBackup {
v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to volume policy match",
volume.Name, pod.Namespace+"/"+pod.Name)
return true, nil
} else {
v.logger.Infof("Skip fs-backup action for volume %s for pod %s because the action type is %s",
volume.Name, pod.Namespace+"/"+pod.Name, action.Type)
return false, nil
}
}
}
if v.shouldPerformFSBackupLegacy(volume, pod) {
v.logger.Infof("Perform fs-backup action for volume %s of pod %s due to opt-in/out way",
volume.Name, pod.Namespace+"/"+pod.Name)
return true, nil
} else {
v.logger.Infof("Skip fs-backup action for volume %s of pod %s due to opt-in/out way",
volume.Name, pod.Namespace+"/"+pod.Name)
return false, nil
}
}
func (v volumeHelperImpl) shouldPerformFSBackupLegacy(
volume corev1api.Volume,
pod corev1api.Pod,
) bool {
// Check volume in opt-in way
if !v.defaultVolumesToFSBackup {
optInVolumeNames := podvolumeutil.GetVolumesToBackup(&pod)
for _, volumeName := range optInVolumeNames {
if volume.Name == volumeName {
return true
}
}
return false
} else {
// Check volume in opt-out way
optOutVolumeNames := podvolumeutil.GetVolumesToExclude(&pod)
for _, volumeName := range optOutVolumeNames {
if volume.Name == volumeName {
return false
}
}
return true
}
}
func (v *volumeHelperImpl) shouldIncludeVolumeInBackup(vol corev1api.Volume) bool {
includeVolumeInBackup := true
// cannot backup hostpath volumes as they are not mounted into /var/lib/kubelet/pods
// and therefore not accessible to the node agent daemon set.
@ -186,7 +234,7 @@ func (v *VolumeHelperImpl) ShouldIncludeVolumeInBackup(vol corev1api.Volume, bac
if vol.DownwardAPI != nil {
includeVolumeInBackup = false
}
if vol.PersistentVolumeClaim != nil && backupExcludePVC {
if vol.PersistentVolumeClaim != nil && v.backupExcludePVC {
includeVolumeInBackup = false
}
// don't include volumes that mount the default service account token.


@ -1,81 +1,59 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package volumehelper
import (
"context"
"testing"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/utils/pointer"
"k8s.io/utils/ptr"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
)
func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
PVObjectGP2 := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": map[string]interface{}{
"name": "example-pv",
},
"spec": map[string]interface{}{
"capacity": map[string]interface{}{
"storage": "1Gi",
},
"volumeMode": "Filesystem",
"accessModes": []interface{}{"ReadWriteOnce"},
"persistentVolumeReclaimPolicy": "Retain",
"storageClassName": "gp2-csi",
"hostPath": map[string]interface{}{
"path": "/mnt/data",
},
},
},
}
PVObjectGP3 := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": map[string]interface{}{
"name": "example-pv",
},
"spec": map[string]interface{}{
"capacity": map[string]interface{}{
"storage": "1Gi",
},
"volumeMode": "Filesystem",
"accessModes": []interface{}{"ReadWriteOnce"},
"persistentVolumeReclaimPolicy": "Retain",
"storageClassName": "gp3-csi",
"hostPath": map[string]interface{}{
"path": "/mnt/data",
},
},
},
}
testCases := []struct {
name string
inputObj runtime.Object
groupResource schema.GroupResource
pod *corev1.Pod
resourcePolicies *resourcepolicies.ResourcePolicies
snapshotVolumesFlag *bool
defaultVolumesToFSBackup bool
shouldSnapshot bool
expectedErr bool
}{
{
name: "Given PV object matches volume policy snapshot action snapshotVolumes flags is true returns true and no error",
obj: PVObjectGP2,
name: "VolumePolicy match, returns true and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
@ -88,15 +66,15 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
},
},
},
snapshotVolumesFlag: ptr.To(true),
shouldSnapshot: true,
expectedErr: false,
},
{
name: "Given PV object matches volume policy snapshot action snapshotVolumes flags is false returns false and no error",
obj: PVObjectGP2,
name: "VolumePolicy match, snapshotVolumes is false, return true and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
@ -109,15 +87,37 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
},
},
},
snapshotVolumesFlag: ptr.To(false),
shouldSnapshot: true,
expectedErr: false,
},
{
name: "VolumePolicy match but action is unexpected, return false and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp2-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.FSBackup,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
shouldSnapshot: false,
expectedErr: false,
},
{
name: "Given PV object matches volume policy snapshot action snapshotVolumes flags is true returns false and no error",
obj: PVObjectGP3,
name: "VolumePolicy not match, not selected by fs-backup as opt-out way, snapshotVolumes is true, returns true and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
@ -130,33 +130,59 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
},
},
},
snapshotVolumesFlag: ptr.To(true),
shouldSnapshot: true,
expectedErr: false,
},
{
name: "Given PVC object matches volume policy snapshot action snapshotVolumes flags is true return false and error case PVC not found",
obj: &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "PersistentVolumeClaim",
"metadata": map[string]interface{}{
"name": "example-pvc",
"namespace": "default",
name: "VolumePolicy not match, selected by fs-backup as opt-out way, snapshotVolumes is true, returns false and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").Volumes(
&corev1.Volume{
Name: "volume",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
"spec": map[string]interface{}{
"accessModes": []string{"ReadWriteOnce"},
"resources": map[string]interface{}{
"requests": map[string]interface{}{
"storage": "1Gi",
},
).Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
defaultVolumesToFSBackup: true,
shouldSnapshot: false,
expectedErr: false,
},
{
name: "VolumePolicy not match, selected by fs-backup as opt-out way, snapshotVolumes is true, returns false and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").
ObjectMeta(builder.WithAnnotations(velerov1api.VolumesToExcludeAnnotation, "volume")).
Volumes(
&corev1.Volume{
Name: "volume",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
"storageClassName": "gp2-csi",
},
},
},
groupResource: kuberesource.PersistentVolumeClaims,
resourcePolicies: resourcepolicies.ResourcePolicies{
).Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
@ -169,52 +195,152 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
},
},
},
snapshotVolumesFlag: ptr.To(true),
defaultVolumesToFSBackup: true,
shouldSnapshot: true,
expectedErr: false,
},
{
name: "VolumePolicy not match, not selected by fs-backup as opt-in way, snapshotVolumes is true, returns false and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").
ObjectMeta(builder.WithAnnotations(velerov1api.VolumesToBackupAnnotation, "volume")).
Volumes(
&corev1.Volume{
Name: "volume",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
},
).Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
defaultVolumesToFSBackup: false,
shouldSnapshot: false,
expectedErr: false,
},
{
name: "VolumePolicy not match, not selected by fs-backup as opt-in way, snapshotVolumes is true, returns true and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1.Volume{
Name: "volume",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
},
).Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
snapshotVolumesFlag: ptr.To(true),
defaultVolumesToFSBackup: false,
shouldSnapshot: true,
expectedErr: false,
},
{
name: "No VolumePolicy, not selected by fs-backup, snapshotVolumes is true, returns true and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: nil,
snapshotVolumesFlag: ptr.To(true),
shouldSnapshot: true,
expectedErr: false,
},
{
name: "No VolumePolicy, not selected by fs-backup, snapshotVolumes is false, returns false and no error",
inputObj: builder.ForPersistentVolume("example-pv").StorageClass("gp3-csi").ClaimRef("ns", "pvc-1").Result(),
groupResource: kuberesource.PersistentVolumes,
resourcePolicies: nil,
snapshotVolumesFlag: ptr.To(false),
shouldSnapshot: false,
expectedErr: false,
},
{
name: "PVC not having PV, return false and error case PV not found",
inputObj: builder.ForPersistentVolumeClaim("default", "example-pvc").StorageClass("gp2-csi").Result(),
groupResource: kuberesource.PersistentVolumeClaims,
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
},
snapshotVolumesFlag: ptr.To(true),
shouldSnapshot: false,
expectedErr: true,
},
}
mockedPV := &corev1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: "example-pv",
},
Spec: corev1.PersistentVolumeSpec{
Capacity: corev1.ResourceList{
corev1.ResourceStorage: resource.MustParse("1Gi"),
},
AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
StorageClassName: "gp2-csi",
ClaimRef: &corev1.ObjectReference{
Name: "example-pvc",
Namespace: "default",
objs := []runtime.Object{
&corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Namespace: "ns",
Name: "pvc-1",
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
if tc.pod != nil {
fakeClient.Create(context.Background(), tc.pod)
}
vh := &VolumeHelperImpl{
VolumePolicy: p,
SnapshotVolumes: tc.snapshotVolumesFlag,
Logger: velerotest.NewLogger(),
var p *resourcepolicies.Policies = nil
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
if err != nil {
t.Fatalf("failed to build policy with error %v", err)
}
}
vh := NewVolumeHelperImpl(
p,
tc.snapshotVolumesFlag,
logrus.StandardLogger(),
fakeClient,
tc.defaultVolumesToFSBackup,
false,
)
obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.inputObj)
require.NoError(t, err)
actualShouldSnapshot, actualError := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, tc.groupResource)
if tc.expectedErr {
require.NotNil(t, actualError, "Want error; Got nil error")
return
}
require.Equalf(t, tc.shouldSnapshot, actualShouldSnapshot, "Want shouldSnapshot as %t; Got shouldSnapshot as %t", tc.shouldSnapshot, actualShouldSnapshot)
})
}
}
@ -354,348 +480,203 @@ func TestVolumeHelperImpl_ShouldIncludeVolumeInBackup(t *testing.T) {
if err != nil {
t.Fatalf("failed to build policy with error %v", err)
}
vh := &VolumeHelperImpl{
VolumePolicy: p,
SnapshotVolumes: pointer.Bool(true),
Logger: velerotest.NewLogger(),
vh := &volumeHelperImpl{
volumePolicy: p,
snapshotVolumes: ptr.To(true),
logger: velerotest.NewLogger(),
backupExcludePVC: tc.backupExcludePVC,
}
actualShouldInclude := vh.shouldIncludeVolumeInBackup(tc.vol)
assert.Equalf(t, actualShouldInclude, tc.shouldInclude, "Want shouldInclude as %v; Got actualShouldInclude as %v", tc.shouldInclude, actualShouldInclude)
})
}
}
func TestVolumeHelperImpl_ShouldPerformFSBackup(t *testing.T) {
testCases := []struct {
name string
pod *corev1.Pod
resources []runtime.Object
resourcePolicies *resourcepolicies.ResourcePolicies
snapshotVolumesFlag *bool
defaultVolumesToFSBackup bool
shouldFSBackup bool
expectedErr bool
}{
{
name: "For a given pod with 2 volumes and volume policy we get one fs-backup action matching volume, one fs-back action non-matching volume but matches snapshot action so no volumes for legacy fallback process, defaultvolumestofsbackup is false but no effect",
backupExcludePVC: false,
defaultVolumesToFsBackup: false,
resourcepoliciesApplied: resourcePolicies1,
includedVolumes: []string{"sample-volume-1"},
optedOutVolumes: []string{"sample-volume-2"},
name: "HostPath volume should be skipped.",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1.Volume{
Name: "",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/mnt/test",
},
},
}).Result(),
shouldFSBackup: false,
expectedErr: false,
},
{
name: "For a given pod with 2 volumes and volume policy we get one fs-backup action matching volume, one fs-back action non-matching volume but matches snapshot action so no volumes for legacy fallback process, defaultvolumestofsbackup is true but no effect",
backupExcludePVC: false,
defaultVolumesToFsBackup: true,
resourcepoliciesApplied: resourcePolicies1,
includedVolumes: []string{"sample-volume-1"},
optedOutVolumes: []string{"sample-volume-2"},
name: "VolumePolicy match, return true and no error",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1.Volume{
Name: "",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.FSBackup,
},
},
},
},
shouldFSBackup: true,
expectedErr: false,
},
{
name: "For a given pod with 2 volumes and volume policy we get no volume matching fs-backup action defaultvolumesToFSBackup is false, no annotations, using legacy as fallback for non-action matching volumes",
backupExcludePVC: false,
defaultVolumesToFsBackup: false,
resourcepoliciesApplied: resourcePolicies3,
includedVolumes: []string{},
optedOutVolumes: []string{},
name: "VolumePolicy match, action type is not fs-backup, return false and no error",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1.Volume{
Name: "",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Snapshot,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
{
name: "For a given pod with 2 volumes and volume policy we get no volume matching fs-backup action defaultvolumesToFSBackup is true, no annotations, using legacy as fallback for non-action matching volumes",
backupExcludePVC: false,
defaultVolumesToFsBackup: true,
resourcepoliciesApplied: resourcePolicies3,
includedVolumes: []string{"sample-volume-1", "sample-volume-2"},
optedOutVolumes: []string{},
name: "VolumePolicy not match, selected by opt-in way, return true and no error",
pod: builder.ForPod("ns", "pod-1").
ObjectMeta(builder.WithAnnotations(velerov1api.VolumesToBackupAnnotation, "pvc-1")).
Volumes(
&corev1.Volume{
Name: "pvc-1",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]interface{}{
"storageClass": []string{"gp3-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.FSBackup,
},
},
},
},
shouldFSBackup: true,
expectedErr: false,
},
{
name: "No VolumePolicy, not selected by opt-out way, return false and no error",
pod: builder.ForPod("ns", "pod-1").
ObjectMeta(builder.WithAnnotations(velerov1api.VolumesToExcludeAnnotation, "pvc-1")).
Volumes(
&corev1.Volume{
Name: "pvc-1",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-1",
},
},
}).Result(),
resources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns", "pvc-1").
VolumeName("pv-1").
StorageClass("gp2-csi").Phase(corev1.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2-csi").Result(),
},
defaultVolumesToFSBackup: true,
shouldFSBackup: false,
expectedErr: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
policies := tc.resourcepoliciesApplied
p := &resourcepolicies.Policies{}
err := p.BuildPolicy(&policies)
if err != nil {
t.Fatalf("failed to build policy with error %v", err)
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, tc.resources...)
if tc.pod != nil {
fakeClient.Create(context.Background(), tc.pod)
}
vh := &VolumeHelperImpl{
VolumePolicy: p,
SnapshotVolumes: pointer.Bool(true),
Logger: velerotest.NewLogger(),
var p *resourcepolicies.Policies = nil
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
if err != nil {
t.Fatalf("failed to build policy with error %v", err)
}
}
objs := []runtime.Object{samplePod, samplePVC1, samplePVC2, samplePV1, samplePV2}
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
gotIncludedVolumes, gotOptedOutVolumes, actualError := vh.GetVolumesForFSBackup(samplePod, tc.defaultVolumesToFsBackup, tc.backupExcludePVC, fakeClient)
vh := NewVolumeHelperImpl(
p,
tc.snapshotVolumesFlag,
logrus.StandardLogger(),
fakeClient,
tc.defaultVolumesToFSBackup,
false,
)
actualShouldFSBackup, actualError := vh.ShouldPerformFSBackup(tc.pod.Spec.Volumes[0], *tc.pod)
if tc.expectedErr {
assert.NotNil(t, actualError, "Want error; Got nil error")
require.NotNil(t, actualError, "Want error; Got nil error")
return
}
assert.Nilf(t, actualError, "Want: nil error; Got: %v", actualError)
assert.Equalf(t, tc.includedVolumes, gotIncludedVolumes, "Want includedVolumes as %v; Got gotIncludedVolumes as %v", tc.includedVolumes, gotIncludedVolumes)
assert.Equalf(t, tc.optedOutVolumes, gotOptedOutVolumes, "Want optedOutVolumes as %v; Got gotOptedOutVolumes as %v", tc.optedOutVolumes, gotOptedOutVolumes)
require.Equalf(t, tc.shouldFSBackup, actualShouldFSBackup, "Want shouldFSBackup as %t; Got shouldFSBackup as %t", tc.shouldFSBackup, actualShouldFSBackup)
})
}
}
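All of the cases above exercise one decision order: volumes the FS uploader cannot handle are skipped outright, a matching `VolumePolicy` action is authoritative, and the legacy opt-in/opt-out annotations are only consulted as a fallback. Below is a minimal runnable sketch of that ordering, using simplified stand-in types rather than Velero's actual API:

```go
package main

import "fmt"

// Simplified stand-ins; illustrative only, not Velero's types.
type policyAction string

const (
	actionNone     policyAction = ""
	actionFSBackup policyAction = "fs-backup"
)

type volume struct {
	name     string
	hostPath bool // true for HostPath volumes, which the FS uploader cannot back up
}

// shouldPerformFSBackup sketches the decision order the test cases exercise:
// 1. volumes unsupported by the FS uploader (e.g. HostPath) are skipped;
// 2. a matching VolumePolicy action is authoritative, regardless of the legacy settings;
// 3. otherwise fall back to the legacy opt-in/opt-out annotations and the
//    defaultVolumesToFsBackup flag.
func shouldPerformFSBackup(v volume, matched policyAction, optedIn, optedOut, defaultToFSBackup bool) bool {
	if v.hostPath {
		return false
	}
	if matched != actionNone {
		return matched == actionFSBackup
	}
	if optedOut {
		return false
	}
	return optedIn || defaultToFSBackup
}

func main() {
	// Policy says snapshot, so fs-backup is false even with an opt-in annotation.
	fmt.Println(shouldPerformFSBackup(volume{name: "pvc-1"}, "snapshot", true, false, true)) // false
	// No policy match and the volume is opted out, so the default flag has no effect.
	fmt.Println(shouldPerformFSBackup(volume{name: "pvc-1"}, actionNone, false, true, true)) // false
}
```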

View File

@ -46,7 +46,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/csi"
kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"
"github.com/vmware-tanzu/velero/pkg/util/podvolume"
)
// pvcBackupItemAction is a backup item action plugin for Velero.
@ -64,14 +63,6 @@ func (p *pvcBackupItemAction) AppliesTo() (velero.ResourceSelector, error) {
}
func (p *pvcBackupItemAction) validateBackup(backup velerov1api.Backup) (valid bool) {
// Do nothing if volume snapshots have not been requested in this backup
if boolptr.IsSetToFalse(backup.Spec.SnapshotVolumes) {
p.log.Infof(
"Volume snapshotting not requested for backup %s/%s",
backup.Namespace, backup.Name)
return false
}
if backup.Status.Phase == velerov1api.BackupPhaseFinalizing ||
backup.Status.Phase == velerov1api.BackupPhaseFinalizingPartiallyFailed {
p.log.WithFields(
@ -88,7 +79,6 @@ func (p *pvcBackupItemAction) validateBackup(backup velerov1api.Backup) (valid b
func (p *pvcBackupItemAction) validatePVCandPV(
pvc corev1api.PersistentVolumeClaim,
defaultVolumesToFsBackup *bool,
item runtime.Unstructured,
) (
valid bool,
@ -132,23 +122,6 @@ func (p *pvcBackupItemAction) validatePVCandPV(
return false, updateItem, err
}
// Do nothing if FS uploader is used to backup this PV
isFSUploaderUsed, err := podvolume.IsPVCDefaultToFSBackup(
pvc.Namespace,
pvc.Name,
p.crClient,
boolptr.IsSetToTrue(defaultVolumesToFsBackup),
)
if err != nil {
return false, updateItem, errors.WithStack(err)
}
if isFSUploaderUsed {
p.log.Infof(
"Skipping PVC %s/%s, PV %s will be backed up using FS uploader",
pvc.Namespace, pvc.Name, pv.Name)
return false, updateItem, nil
}
return true, updateItem, nil
}
@ -248,7 +221,6 @@ func (p *pvcBackupItemAction) Execute(
if valid, item, err := p.validatePVCandPV(
pvc,
backup.Spec.DefaultVolumesToFsBackup,
item,
); !valid {
if err != nil {

View File

@ -62,11 +62,6 @@ func TestExecute(t *testing.T) {
expectedDataUpload *velerov2alpha1.DataUpload
expectedPVC *corev1.PersistentVolumeClaim
}{
{
name: "Skip PVC handling if SnapshotVolume set to false",
backup: builder.ForBackup("velero", "test").SnapshotVolumes(false).Result(),
expectedErr: nil,
},
{
name: "Skip PVC BIA when backup is in finalizing phase",
backup: builder.ForBackup("velero", "test").Phase(velerov1api.BackupPhaseFinalizing).Result(),

View File

@ -39,7 +39,9 @@ import (
kbclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/vmware-tanzu/velero/internal/hook"
"github.com/vmware-tanzu/velero/internal/resourcepolicies"
"github.com/vmware-tanzu/velero/internal/volume"
"github.com/vmware-tanzu/velero/internal/volumehelper"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov2alpha1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
"github.com/vmware-tanzu/velero/pkg/client"
@ -321,6 +323,11 @@ func (kb *kubernetesBackupper) BackupWithResolvers(
}
backupRequest.Status.Progress = &velerov1api.BackupProgress{TotalItems: len(items)}
var resourcePolicy *resourcepolicies.Policies = nil
if backupRequest.ResPolicies != nil {
resourcePolicy = backupRequest.ResPolicies
}
itemBackupper := &itemBackupper{
backupRequest: backupRequest,
tarWriter: tw,
@ -328,12 +335,20 @@ func (kb *kubernetesBackupper) BackupWithResolvers(
kbClient: kb.kbClient,
discoveryHelper: kb.discoveryHelper,
podVolumeBackupper: podVolumeBackupper,
podVolumeSnapshotTracker: newPVCSnapshotTracker(),
podVolumeSnapshotTracker: podvolume.NewTracker(),
volumeSnapshotterGetter: volumeSnapshotterGetter,
itemHookHandler: &hook.DefaultItemHookHandler{
PodCommandExecutor: kb.podCommandExecutor,
},
hookTracker: hook.NewHookTracker(),
volumeHelperImpl: volumehelper.NewVolumeHelperImpl(
resourcePolicy,
backupRequest.Spec.SnapshotVolumes,
log,
kb.kbClient,
boolptr.IsSetToTrue(backupRequest.Spec.DefaultVolumesToFsBackup),
!backupRequest.ResourceIncludesExcludes.ShouldInclude(kuberesource.PersistentVolumeClaims.String()),
),
}
// helper struct to send current progress between the main
@ -652,7 +667,7 @@ func (kb *kubernetesBackupper) FinalizeBackup(
kbClient: kb.kbClient,
discoveryHelper: kb.discoveryHelper,
itemHookHandler: &hook.NoOpItemHookHandler{},
podVolumeSnapshotTracker: newPVCSnapshotTracker(),
podVolumeSnapshotTracker: podvolume.NewTracker(),
hookTracker: hook.NewHookTracker(),
}
updateFiles := make(map[string]FileForArchive)

View File

@ -1335,10 +1335,12 @@ func (a *recordResourcesAction) WithSkippedCSISnapshotFlag(flag bool) *recordRes
// verifies that the data in SkippedPVTracker is updated as expected.
func TestBackupItemActionsForSkippedPV(t *testing.T) {
tests := []struct {
name string
backupReq *Request
apiResources []*test.APIResource
actions []*recordResourcesAction
name string
backupReq *Request
apiResources []*test.APIResource
runtimeResources []runtime.Object
actions []*recordResourcesAction
resPolicies *resourcepolicies.ResourcePolicies
// {pvName:{approach: reason}}
expectSkippedPVs map[string]map[string]string
expectNotSkippedPVs []string
@ -1346,13 +1348,31 @@ func TestBackupItemActionsForSkippedPV(t *testing.T) {
{
name: "backup item action returns the 'not a CSI volume' error and the PV should be tracked as skippedPV",
backupReq: &Request{
Backup: defaultBackup().Result(),
Backup: defaultBackup().SnapshotVolumes(false).Result(),
SkippedPVTracker: NewSkipPVTracker(),
},
resPolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Action: resourcepolicies.Action{Type: "snapshot"},
Conditions: map[string]interface{}{
"storageClass": []string{"gp2"},
},
},
},
},
apiResources: []*test.APIResource{
test.PVCs(
builder.ForPersistentVolumeClaim("ns-1", "pvc-1").VolumeName("pv-1").Result(),
test.PVs(
builder.ForPersistentVolume("pv-1").StorageClass("gp2").Result(),
),
test.PVCs(
builder.ForPersistentVolumeClaim("ns-1", "pvc-1").VolumeName("pv-1").StorageClass("gp2").Phase(corev1.ClaimBound).Result(),
),
},
runtimeResources: []runtime.Object{
builder.ForPersistentVolume("pv-1").StorageClass("gp2").Result(),
builder.ForPersistentVolumeClaim("ns-1", "pvc-1").VolumeName("pv-1").StorageClass("gp2").Phase(corev1.ClaimBound).Result(),
},
actions: []*recordResourcesAction{
new(recordResourcesAction).WithName(csiBIAPluginName).ForNamespace("ns-1").ForResource("persistentvolumeclaims").WithSkippedCSISnapshotFlag(true),
@ -1379,9 +1399,13 @@ func TestBackupItemActionsForSkippedPV(t *testing.T) {
},
apiResources: []*test.APIResource{
test.PVCs(
builder.ForPersistentVolumeClaim("ns-1", "pvc-1").VolumeName("pv-1").Result(),
builder.ForPersistentVolumeClaim("ns-1", "pvc-1").VolumeName("pv-1").Phase(corev1.ClaimBound).Result(),
),
},
runtimeResources: []runtime.Object{
builder.ForPersistentVolumeClaim("ns-1", "pvc-1").VolumeName("pv-1").Phase(corev1.ClaimBound).Result(),
builder.ForPersistentVolume("pv-1").StorageClass("gp2").Result(),
},
actions: []*recordResourcesAction{
new(recordResourcesAction).ForNamespace("ns-1").ForResource("persistentvolumeclaims").WithName(csiBIAPluginName),
},
@ -1399,7 +1423,9 @@ func TestBackupItemActionsForSkippedPV(t *testing.T) {
var (
h = newHarness(t)
backupFile = bytes.NewBuffer([]byte{})
fakeClient = test.NewFakeControllerRuntimeClient(t, tc.runtimeResources...)
)
h.backupper.kbClient = fakeClient
for _, resource := range tc.apiResources {
h.addItems(t, resource)
@ -1410,6 +1436,11 @@ func TestBackupItemActionsForSkippedPV(t *testing.T) {
actions = append(actions, action)
}
if tc.resPolicies != nil {
tc.backupReq.ResPolicies = new(resourcepolicies.Policies)
require.NoError(t, tc.backupReq.ResPolicies.BuildPolicy(tc.resPolicies))
}
err := h.backupper.Backup(h.log, tc.backupReq, backupFile, actions, nil)
assert.NoError(t, err)
@ -3098,6 +3129,7 @@ func TestBackupWithPodVolume(t *testing.T) {
name string
backup *velerov1.Backup
apiResources []*test.APIResource
pod *corev1.Pod
vsl *velerov1.VolumeSnapshotLocation
snapshotterGetter volumeSnapshotterGetter
want []*velerov1.PodVolumeBackup
@ -3107,8 +3139,19 @@ func TestBackupWithPodVolume(t *testing.T) {
backup: defaultBackup().Result(),
apiResources: []*test.APIResource{
test.Pods(
builder.ForPod("ns-1", "pod-1").ObjectMeta(builder.WithAnnotations("backup.velero.io/backup-volumes", "foo")).Result(),
builder.ForPod("ns-1", "pod-1").
ObjectMeta(builder.WithAnnotations("backup.velero.io/backup-volumes", "foo")).
Volumes(&corev1.Volume{
Name: "foo",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: "foo",
},
},
}).
Result(),
),
test.PVCs(),
},
want: []*velerov1.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-ns-1-pod-1-foo").Volume("foo").Result(),
@ -3130,6 +3173,7 @@ func TestBackupWithPodVolume(t *testing.T) {
Volumes(builder.ForVolume("bar").PersistentVolumeClaimSource("pvc-1").Result()).
Result(),
),
test.PVCs(),
},
want: []*velerov1.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-ns-1-pod-1-foo").Volume("foo").Result(),
@ -3151,6 +3195,7 @@ func TestBackupWithPodVolume(t *testing.T) {
Volumes(builder.ForVolume("bar").PersistentVolumeClaimSource("pvc-1").Result()).
Result(),
),
test.PVCs(),
},
want: []*velerov1.PodVolumeBackup{
builder.ForPodVolumeBackup("velero", "pvb-ns-1-pod-2-bar").Volume("bar").Result(),
@ -3176,11 +3221,19 @@ func TestBackupWithPodVolume(t *testing.T) {
builder.ForPersistentVolumeClaim("ns-1", "pvc-2").VolumeName("pv-2").Result(),
),
test.PVs(
builder.ForPersistentVolume("pv-1").ClaimRef("ns-1", "pvc-1").Result(),
builder.ForPersistentVolume("pv-2").ClaimRef("ns-1", "pvc-2").Result(),
),
},
pod: builder.ForPod("ns-1", "pod-1").
Volumes(
builder.ForVolume("vol-1").PersistentVolumeClaimSource("pvc-1").Result(),
builder.ForVolume("vol-2").PersistentVolumeClaimSource("pvc-2").Result(),
).
ObjectMeta(
builder.WithAnnotations("backup.velero.io/backup-volumes", "vol-1,vol-2"),
).
Result(),
vsl: newSnapshotLocation("velero", "default", "default"),
snapshotterGetter: map[string]vsv1.VolumeSnapshotter{
"default": new(fakeVolumeSnapshotter).
@ -3206,6 +3259,10 @@ func TestBackupWithPodVolume(t *testing.T) {
backupFile = bytes.NewBuffer([]byte{})
)
if tc.pod != nil {
require.NoError(t, h.backupper.kbClient.Create(context.Background(), tc.pod))
}
h.backupper.podVolumeBackupperFactory = new(fakePodVolumeBackupperFactory)
for _, resource := range tc.apiResources {

View File

@ -69,12 +69,13 @@ type itemBackupper struct {
kbClient kbClient.Client
discoveryHelper discovery.Helper
podVolumeBackupper podvolume.Backupper
podVolumeSnapshotTracker *pvcSnapshotTracker
podVolumeSnapshotTracker *podvolume.Tracker
volumeSnapshotterGetter VolumeSnapshotterGetter
itemHookHandler hook.ItemHookHandler
snapshotLocationVolumeSnapshotters map[string]vsv1.VolumeSnapshotter
hookTracker *hook.HookTracker
volumeHelperImpl volumehelper.VolumeHelper
}
type FileForArchive struct {
@ -188,16 +189,6 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
ib.trackSkippedPV(obj, groupResource, podVolumeApproach, fmt.Sprintf("opted out due to annotation in pod %s", podName), log)
}
// Instantiate volumepolicyhelper struct here
vh := &volumehelper.VolumeHelperImpl{
SnapshotVolumes: ib.backupRequest.Spec.SnapshotVolumes,
Logger: logger,
}
if ib.backupRequest.ResPolicies != nil {
vh.VolumePolicy = ib.backupRequest.ResPolicies
}
if groupResource == kuberesource.Pods {
// pod needs to be initialized for the unstructured converter
pod = new(corev1api.Pod)
@ -206,31 +197,33 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
// nil it on error since it's not valid
pod = nil
} else {
// Get the list of volumes to back up using pod volume backup from the pod's annotations or volume policy approach. Remove from this list
// any volumes that use a PVC that we've already backed up (this would be in a read-write-many scenario,
// Get the list of volumes to back up using pod volume backup from the pod's annotations
// or volume policy approach. Remove from this list any volumes that use a PVC that we've
// already backed up (this would be in a read-write-many scenario,
// where it's been backed up from another pod), since we don't need >1 backup per PVC.
includedVolumes, optedOutVolumes, err := vh.GetVolumesForFSBackup(pod, boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToFsBackup), !ib.backupRequest.ResourceIncludesExcludes.ShouldInclude(kuberesource.PersistentVolumeClaims.String()), ib.kbClient)
if err != nil {
backupErrs = append(backupErrs, errors.WithStack(err))
}
for _, volume := range includedVolumes {
// track the volumes that are PVCs using the PVC snapshot tracker, so that when we backup PVCs/PVs
// via an item action in the next step, we don't snapshot PVs that will have their data backed up
// with pod volume backup.
ib.podVolumeSnapshotTracker.Track(pod, volume)
if found, pvcName := ib.podVolumeSnapshotTracker.TakenForPodVolume(pod, volume); found {
log.WithFields(map[string]interface{}{
"podVolume": volume,
"pvcName": pvcName,
}).Info("Pod volume uses a persistent volume claim which has already been backed up from another pod, skipping.")
continue
for _, volume := range pod.Spec.Volumes {
shouldDoFSBackup, err := ib.volumeHelperImpl.ShouldPerformFSBackup(volume, *pod)
if err != nil {
backupErrs = append(backupErrs, errors.WithStack(err))
}
if shouldDoFSBackup {
// track the volumes backed up by PVB, so that when we backup PVCs/PVs
// via an item action in the next step, we don't snapshot PVs that will have their data backed up
// with pod volume backup.
ib.podVolumeSnapshotTracker.Track(pod, volume.Name)
if found, pvcName := ib.podVolumeSnapshotTracker.TakenForPodVolume(pod, volume.Name); found {
log.WithFields(map[string]interface{}{
"podVolume": volume,
"pvcName": pvcName,
}).Info("Pod volume uses a persistent volume claim which has already been backed up from another pod, skipping.")
continue
}
pvbVolumes = append(pvbVolumes, volume.Name)
} else {
ib.podVolumeSnapshotTracker.Optout(pod, volume.Name)
}
pvbVolumes = append(pvbVolumes, volume)
}
for _, optedOutVol := range optedOutVolumes {
ib.podVolumeSnapshotTracker.Optout(pod, optedOutVol)
}
}
}
@ -239,7 +232,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
// the group version of the object.
versionPath := resourceVersion(obj)
updatedObj, additionalItemFiles, err := ib.executeActions(log, obj, groupResource, name, namespace, metadata, finalize, vh)
updatedObj, additionalItemFiles, err := ib.executeActions(log, obj, groupResource, name, namespace, metadata, finalize)
if err != nil {
backupErrs = append(backupErrs, err)
@ -265,7 +258,7 @@ func (ib *itemBackupper) backupItemInternal(logger logrus.FieldLogger, obj runti
backupErrs = append(backupErrs, err)
}
if err := ib.takePVSnapshot(obj, log, vh); err != nil {
if err := ib.takePVSnapshot(obj, log); err != nil {
backupErrs = append(backupErrs, err)
}
}
@ -361,7 +354,6 @@ func (ib *itemBackupper) executeActions(
name, namespace string,
metadata metav1.Object,
finalize bool,
vh *volumehelper.VolumeHelperImpl,
) (runtime.Unstructured, []FileForArchive, error) {
var itemFiles []FileForArchive
for _, action := range ib.backupRequest.ResolvedActions {
@ -385,15 +377,21 @@ func (ib *itemBackupper) executeActions(
continue
}
if groupResource == kuberesource.PersistentVolumeClaims && actionName == csiBIAPluginName && vh.VolumePolicy != nil {
snapshotVolume, err := vh.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumeClaims, ib.kbClient)
if groupResource == kuberesource.PersistentVolumeClaims &&
actionName == csiBIAPluginName {
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumeClaims)
if err != nil {
return nil, itemFiles, errors.WithStack(err)
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping csi volume snapshot for PVC %s as it does not fit the volume policy criteria specified by the user for snapshot action", namespace+"/"+name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
ib.trackSkippedPV(
obj,
kuberesource.PersistentVolumeClaims,
volumeSnapshotApproach,
"not satisfy the criteria for VolumePolicy or the legacy snapshot way",
log,
)
continue
}
}
@ -402,15 +400,21 @@ func (ib *itemBackupper) executeActions(
if err != nil {
return nil, itemFiles, errors.Wrapf(err, "error executing custom action (groupResource=%s, namespace=%s, name=%s)", groupResource.String(), namespace, name)
}
u := &unstructured.Unstructured{Object: updatedItem.UnstructuredContent()}
if actionName == csiBIAPluginName && additionalItemIdentifiers == nil && u.GetAnnotations()[velerov1api.SkippedNoCSIPVAnnotation] == "true" {
// snapshot was skipped by CSI plugin
ib.trackSkippedPV(obj, groupResource, csiSnapshotApproach, "skipped b/c it's not a CSI volume", log)
delete(u.GetAnnotations(), velerov1api.SkippedNoCSIPVAnnotation)
} else if (actionName == csiBIAPluginName || actionName == vsphereBIAPluginName) && !boolptr.IsSetToFalse(ib.backupRequest.Backup.Spec.SnapshotVolumes) {
// the snapshot has been taken by the BIA plugin
ib.unTrackSkippedPV(obj, groupResource, log)
if actionName == csiBIAPluginName {
if additionalItemIdentifiers == nil && u.GetAnnotations()[velerov1api.SkippedNoCSIPVAnnotation] == "true" {
// snapshot was skipped by CSI plugin
log.Infof("skip CSI snapshot for PVC %s as it's not a CSI compatible volume", namespace+"/"+name)
ib.trackSkippedPV(obj, groupResource, csiSnapshotApproach, "skipped b/c it's not a CSI volume", log)
delete(u.GetAnnotations(), velerov1api.SkippedNoCSIPVAnnotation)
} else {
// the snapshot has been taken by the BIA plugin
log.Infof("Untrack the PVC %s, because it's backed up by CSI BIA.", namespace+"/"+name)
ib.unTrackSkippedPV(obj, kuberesource.PersistentVolumeClaims, log)
}
}
mustInclude := u.GetAnnotations()[velerov1api.MustIncludeAdditionalItemAnnotation] == "true" || finalize
// remove the annotation as it's for communication between BIA and velero server,
// we don't want the resource be restored with this annotation.
@ -528,7 +532,7 @@ const (
// takePVSnapshot triggers a snapshot for the volume/disk underlying a PersistentVolume if the provided
// backup has volume snapshots enabled and the PV is of a compatible type. Also records cloud
// disk type and IOPS (if applicable) to be able to restore to current state later.
func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.FieldLogger, vh *volumehelper.VolumeHelperImpl) error {
func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.FieldLogger) error {
log.Info("Executing takePVSnapshot")
pv := new(corev1api.PersistentVolume)
@ -538,35 +542,22 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
log = log.WithField("persistentVolume", pv.Name)
if vh.VolumePolicy != nil {
snapshotVolume, err := vh.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumes, ib.kbClient)
if err != nil {
return err
}
if !snapshotVolume {
log.Info(fmt.Sprintf("skipping volume snapshot for PV %s as it does not fit the volume policy criteria specified by the user for snapshot action", pv.Name))
ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, "does not satisfy the criteria for volume policy based snapshot action", log)
return nil
}
snapshotVolume, err := ib.volumeHelperImpl.ShouldPerformSnapshot(obj, kuberesource.PersistentVolumes)
if err != nil {
return err
}
if boolptr.IsSetToFalse(ib.backupRequest.Spec.SnapshotVolumes) {
log.Info("Backup has volume snapshots disabled; skipping volume snapshot action.")
ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, "backup has volume snapshots disabled", log)
if !snapshotVolume {
ib.trackSkippedPV(
obj,
kuberesource.PersistentVolumes,
volumeSnapshotApproach,
"not satisfy the criteria for VolumePolicy or the legacy snapshot way",
log,
)
return nil
}
// If this PV is claimed, see if we've already taken a (pod volume backup) snapshot of the contents
// of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
if ib.podVolumeSnapshotTracker.Has(pv.Spec.ClaimRef.Namespace, pv.Spec.ClaimRef.Name) {
log.Info("Skipping snapshot of persistent volume because volume is being backed up with pod volume backup.")
return nil
}
}
// #4758 Do not take snapshot for CSI PV to avoid duplicated snapshotting, when CSI feature is enabled.
if features.IsEnabled(velerov1api.CSIFeatureFlag) && pv.Spec.CSI != nil {
log.Infof("Skipping snapshot of persistent volume %s, because it's handled by CSI plugin.", pv.Name)
@ -671,6 +662,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
snapshot := volumeSnapshot(ib.backupRequest.Backup, pv.Name, volumeID, volumeType, pvFailureDomainZone, location, iops)
var errs []error
log.Info("Untrack the PV %s from the skipped volumes, because it's backed by Velero native snapshot.", pv.Name)
ib.backupRequest.SkippedPVTracker.Untrack(pv.Name)
snapshotID, err := volumeSnapshotter.CreateSnapshot(snapshot.Spec.ProviderVolumeID, snapshot.Spec.VolumeAZ, tags)
if err != nil {
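Note that the `VolumePolicy` check, the `SnapshotVolumes` check, and the pod-volume-backup tracker check are all removed from `takePVSnapshot` in the hunks above; the single `ShouldPerformSnapshot` call is assumed to cover them (only the CSI-handled-PV skip stays inline). A minimal sketch of the consolidated decision under that assumption, with stand-in inputs rather than Velero's actual signature:

```go
package main

import "fmt"

// shouldPerformSnapshot is a stand-in for the consolidated decision the helper
// is assumed to make for a PV; illustrative only, not Velero's implementation.
func shouldPerformSnapshot(policyMatch string, snapshotVolumes *bool, claimedByPodVolumeBackup bool) bool {
	// A matching VolumePolicy snapshot action is authoritative, even when
	// backup.Spec.SnapshotVolumes is false.
	if policyMatch == "snapshot" {
		return true
	}
	// A match on any other action (skip, fs-backup) rules the snapshot out.
	if policyMatch != "" {
		return false
	}
	// The PV's data is already captured by pod volume backup; don't back it up twice.
	if claimedByPodVolumeBackup {
		return false
	}
	// Legacy behavior: snapshot unless explicitly disabled.
	return snapshotVolumes == nil || *snapshotVolumes
}

func main() {
	disabled := false
	fmt.Println(shouldPerformSnapshot("snapshot", &disabled, false)) // true: policy wins over the flag
	fmt.Println(shouldPerformSnapshot("", &disabled, false))         // false: legacy flag disables it
	fmt.Println(shouldPerformSnapshot("", nil, true))                // false: already handled by fs-backup
}
```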

View File

@ -50,10 +50,11 @@ type skipPVTracker struct {
}
const (
podVolumeApproach = "podvolume"
csiSnapshotApproach = "csiSnapshot"
volumeSnapshotApproach = "volumeSnapshot"
anyApproach = "any"
podVolumeApproach = "podvolume"
csiSnapshotApproach = "csiSnapshot"
volumeSnapshotApproach = "volumeSnapshot"
vsphereSnapshotApproach = "vsphereSnapshot"
anyApproach = "any"
)
func NewSkipPVTracker() *skipPVTracker {

View File

@ -68,7 +68,7 @@ func TestNewDescribeCommand(t *testing.T) {
stdout, _, err := veleroexec.RunCommand(cmd)
if err == nil {
assert.Contains(t, stdout, "Backup Volumes: <none included>")
assert.Contains(t, stdout, "Backup Volumes:")
assert.Contains(t, stdout, "Or label selector: <none>")
assert.Contains(t, stdout, fmt.Sprintf("Name: %s", backupName))
return

View File

@ -40,7 +40,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/internal/volume"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/collections"
"github.com/vmware-tanzu/velero/pkg/util/results"
)
@ -430,13 +429,16 @@ func describeBackupResourceList(ctx context.Context, kbClient kbclient.Client, d
}
}
func describeBackupVolumes(ctx context.Context, kbClient kbclient.Client, d *Describer, backup *velerov1api.Backup, details bool,
insecureSkipTLSVerify bool, caCertPath string, podVolumeBackupCRs []velerov1api.PodVolumeBackup) {
if boolptr.IsSetToFalse(backup.Spec.SnapshotVolumes) {
d.Println("Backup Volumes: <none included>")
return
}
func describeBackupVolumes(
ctx context.Context,
kbClient kbclient.Client,
d *Describer,
backup *velerov1api.Backup,
details bool,
insecureSkipTLSVerify bool,
caCertPath string,
podVolumeBackupCRs []velerov1api.PodVolumeBackup,
) {
d.Println("Backup Volumes:")
nativeSnapshots := []*volume.BackupVolumeInfo{}

View File

@ -31,7 +31,6 @@ import (
"github.com/vmware-tanzu/velero/internal/volume"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/cmd/util/downloadrequest"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/results"
)
@ -301,11 +300,6 @@ func describeBackupResourceListInSF(ctx context.Context, kbClient kbclient.Clien
func describeBackupVolumesInSF(ctx context.Context, kbClient kbclient.Client, backup *velerov1api.Backup, details bool,
insecureSkipTLSVerify bool, caCertPath string, podVolumeBackupCRs []velerov1api.PodVolumeBackup, backupStatusInfo map[string]interface{}) {
if boolptr.IsSetToFalse(backup.Spec.SnapshotVolumes) {
backupStatusInfo["backupVolumes"] = "<none included>"
return
}
backupVolumes := make(map[string]interface{})
nativeSnapshots := []*volume.BackupVolumeInfo{}

View File

@ -504,11 +504,6 @@ func (b *backupReconciler) validateAndGetSnapshotLocations(backup *velerov1api.B
errors := []string{}
providerLocations := make(map[string]*velerov1api.VolumeSnapshotLocation)
// if snapshotVolume is set to false then we don't need to validate volumesnapshotlocation
if boolptr.IsSetToFalse(backup.Spec.SnapshotVolumes) {
return nil, nil
}
for _, locationName := range backup.Spec.VolumeSnapshotLocations {
// validate each locationName exists as a VolumeSnapshotLocation
location := &velerov1api.VolumeSnapshotLocation{}

View File

@ -1489,7 +1489,7 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
expectedSuccess: true,
},
{
name: "location name does not correspond to any existing location and snapshotvolume disabled; should return empty VSL and no error",
name: "location name does not correspond to any existing location and snapshotvolume disabled; should return error",
backup: defaultBackup().Phase(velerov1api.BackupPhaseNew).VolumeSnapshotLocations("random-name").SnapshotVolumes(false).Result(),
locations: []*velerov1api.VolumeSnapshotLocation{
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-east-1").Provider("aws").Result(),
@ -1497,36 +1497,36 @@ func TestValidateAndGetSnapshotLocations(t *testing.T) {
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "some-name").Provider("fake-provider").Result(),
},
expectedVolumeSnapshotLocationNames: nil,
expectedSuccess: true,
expectedErrors: "a VolumeSnapshotLocation CRD for the location random-name with the name specified in the backup spec needs to be created before this snapshot can be executed. Error: volumesnapshotlocations.velero.io \"random-name\" not found", expectedSuccess: false,
},
{
name: "duplicate locationName per provider and snapshotvolume disabled; should return empty VSL and no error",
name: "duplicate locationName per provider and snapshotvolume disabled; should return only one BSL",
backup: defaultBackup().Phase(velerov1api.BackupPhaseNew).VolumeSnapshotLocations("aws-us-west-1", "aws-us-west-1").SnapshotVolumes(false).Result(),
locations: []*velerov1api.VolumeSnapshotLocation{
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-east-1").Provider("aws").Result(),
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-west-1").Provider("aws").Result(),
},
expectedVolumeSnapshotLocationNames: nil,
expectedVolumeSnapshotLocationNames: []string{"aws-us-west-1"},
expectedSuccess: true,
},
{
name: "no location name for the provider exists, only one VSL created and snapshotvolume disabled; should return empty VSL and no error",
name: "no location name for the provider exists, only one VSL created and snapshotvolume disabled; should return the VSL",
backup: defaultBackup().Phase(velerov1api.BackupPhaseNew).SnapshotVolumes(false).Result(),
locations: []*velerov1api.VolumeSnapshotLocation{
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-east-1").Provider("aws").Result(),
},
expectedVolumeSnapshotLocationNames: nil,
expectedVolumeSnapshotLocationNames: []string{"aws-us-east-1"},
expectedSuccess: true,
},
{
name: "multiple location names for a provider, no default location and backup has no location defined, but snapshotvolume disabled, should return empty VSL and no error",
name: "multiple location names for a provider, no default location and backup has no location defined, but snapshotvolume disabled, should return error",
backup: defaultBackup().Phase(velerov1api.BackupPhaseNew).SnapshotVolumes(false).Result(),
locations: []*velerov1api.VolumeSnapshotLocation{
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-west-1").Provider("aws").Result(),
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "aws-us-east-1").Provider("aws").Result(),
},
expectedVolumeSnapshotLocationNames: nil,
expectedSuccess: true,
expectedErrors: "provider aws has more than one possible volume snapshot location, and none were specified explicitly or as a default",
},
}

View File

@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
package podvolume
import (
"testing"
@ -29,7 +29,7 @@ func TestOptoutVolume(t *testing.T) {
builder.ForVolume("pod-vol-1").PersistentVolumeClaimSource("pvc-1").Result(),
builder.ForVolume("pod-vol-2").PersistentVolumeClaimSource("pvc-2").Result(),
).Result()
tracker := newPVCSnapshotTracker()
tracker := NewTracker()
tracker.Optout(pod, "pod-vol-1")
ok, pn := tracker.OptedoutByPod("ns-1", "pvc-1")
assert.True(t, ok)
@ -48,7 +48,7 @@ func TestOptoutVolume(t *testing.T) {
}
func TestABC(t *testing.T) {
tracker := newPVCSnapshotTracker()
tracker := NewTracker()
v1, v2 := tracker.OptedoutByPod("a", "b")
t.Logf("v1: %v, v2: %v", v1, v2)
}

View File

@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
package podvolume
import (
"fmt"
@ -22,9 +22,9 @@ import (
corev1api "k8s.io/api/core/v1"
)
// pvcSnapshotTracker keeps track of persistent volume claims that have been handled
// Tracker keeps track of persistent volume claims that have been handled
// via pod volume backup.
type pvcSnapshotTracker struct {
type Tracker struct {
pvcs map[string]pvcSnapshotStatus
pvcPod map[string]string
}
@ -38,8 +38,8 @@ const (
pvcSnapshotStatusOptedout
)
func newPVCSnapshotTracker() *pvcSnapshotTracker {
return &pvcSnapshotTracker{
func NewTracker() *Tracker {
return &Tracker{
pvcs: make(map[string]pvcSnapshotStatus),
// key: pvc ns/name, value: pod name
pvcPod: make(map[string]string),
@ -47,23 +47,23 @@ func newPVCSnapshotTracker() *pvcSnapshotTracker {
}
// Track indicates a volume from a pod should be snapshotted by pod volume backup.
func (t *pvcSnapshotTracker) Track(pod *corev1api.Pod, volumeName string) {
func (t *Tracker) Track(pod *corev1api.Pod, volumeName string) {
t.recordStatus(pod, volumeName, pvcSnapshotStatusTracked, pvcSnapshotStatusNotTracked)
}
// Take indicates a volume from a pod has been taken by pod volume backup.
func (t *pvcSnapshotTracker) Take(pod *corev1api.Pod, volumeName string) {
func (t *Tracker) Take(pod *corev1api.Pod, volumeName string) {
t.recordStatus(pod, volumeName, pvcSnapshotStatusTaken, pvcSnapshotStatusTracked)
}
// Optout indicates a volume from a pod has been opted out by pod's annotation
func (t *pvcSnapshotTracker) Optout(pod *corev1api.Pod, volumeName string) {
func (t *Tracker) Optout(pod *corev1api.Pod, volumeName string) {
t.recordStatus(pod, volumeName, pvcSnapshotStatusOptedout, pvcSnapshotStatusNotTracked)
}
// OptedoutByPod returns true if the PVC with the specified namespace and name has been opted out by the pod. The
// second return value is the name of the pod which has the annotation that opted out the volume/pvc
func (t *pvcSnapshotTracker) OptedoutByPod(namespace, name string) (bool, string) {
func (t *Tracker) OptedoutByPod(namespace, name string) (bool, string) {
status, found := t.pvcs[key(namespace, name)]
if !found || status != pvcSnapshotStatusOptedout {
@ -73,7 +73,7 @@ func (t *pvcSnapshotTracker) OptedoutByPod(namespace, name string) (bool, string
}
// if the volume is a PVC, record the status and the related pod
func (t *pvcSnapshotTracker) recordStatus(pod *corev1api.Pod, volumeName string, status pvcSnapshotStatus, preReqStatus pvcSnapshotStatus) {
func (t *Tracker) recordStatus(pod *corev1api.Pod, volumeName string, status pvcSnapshotStatus, preReqStatus pvcSnapshotStatus) {
for _, volume := range pod.Spec.Volumes {
if volume.Name == volumeName {
if volume.PersistentVolumeClaim != nil {
@ -92,14 +92,14 @@ func (t *pvcSnapshotTracker) recordStatus(pod *corev1api.Pod, volumeName string,
}
// Has returns true if the PVC with the specified namespace and name has been tracked.
func (t *pvcSnapshotTracker) Has(namespace, name string) bool {
func (t *Tracker) Has(namespace, name string) bool {
status, found := t.pvcs[key(namespace, name)]
return found && (status == pvcSnapshotStatusTracked || status == pvcSnapshotStatusTaken)
}
// TakenForPodVolume returns true and the PVC's name if the pod volume with the specified name uses a
// PVC and that PVC has been taken by pod volume backup.
func (t *pvcSnapshotTracker) TakenForPodVolume(pod *corev1api.Pod, volume string) (bool, string) {
func (t *Tracker) TakenForPodVolume(pod *corev1api.Pod, volume string) (bool, string) {
for _, podVolume := range pod.Spec.Volumes {
if podVolume.Name != volume {
continue
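The exported `Tracker` is a small per-PVC state machine: `not tracked -> tracked -> taken`, with `opted out` as a side exit, and `recordStatus` only advances a PVC that currently holds the expected prerequisite state. A compressed runnable sketch of those semantics, keyed directly by PVC for brevity (the real Tracker resolves pod volumes to PVCs first):

```go
package main

import "fmt"

// Stand-in types; illustrative only.
type status int

const (
	notTracked status = iota
	tracked           // should be backed up by pod volume backup
	taken             // has been backed up by pod volume backup
	optedOut          // excluded via the pod's opt-out annotation
)

type tracker struct{ pvcs map[string]status }

func newTracker() *tracker { return &tracker{pvcs: map[string]status{}} }

// record advances a PVC to next only when it currently holds preReq,
// mirroring recordStatus above (a missing entry counts as notTracked).
func (t *tracker) record(pvc string, next, preReq status) {
	if t.pvcs[pvc] == preReq {
		t.pvcs[pvc] = next
	}
}

func (t *tracker) Track(pvc string)  { t.record(pvc, tracked, notTracked) }
func (t *tracker) Take(pvc string)   { t.record(pvc, taken, tracked) }
func (t *tracker) Optout(pvc string) { t.record(pvc, optedOut, notTracked) }

// Has reports whether the PVC is claimed by pod volume backup, so the PV
// snapshot path can skip it.
func (t *tracker) Has(pvc string) bool {
	s := t.pvcs[pvc]
	return s == tracked || s == taken
}

func main() {
	tr := newTracker()
	tr.Track("ns-1/pvc-1")
	tr.Take("ns-1/pvc-1")
	tr.Optout("ns-1/pvc-1")           // no-op: pvc-1 is already taken
	fmt.Println(tr.Has("ns-1/pvc-1")) // true
}
```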

View File

@ -38,7 +38,6 @@ type PVRestorer interface {
type pvRestorer struct {
logger logrus.FieldLogger
backup *api.Backup
snapshotVolumes *bool
restorePVs *bool
volumeSnapshots []*volume.Snapshot
volumeSnapshotterGetter VolumeSnapshotterGetter
@ -53,11 +52,6 @@ func (r *pvRestorer) executePVAction(obj *unstructured.Unstructured) (*unstructu
return nil, errors.New("PersistentVolume is missing its name")
}
if boolptr.IsSetToFalse(r.snapshotVolumes) {
// The backup had snapshots disabled, so we can return early
return obj, nil
}
if boolptr.IsSetToFalse(r.restorePVs) {
// The restore has pv restores disabled, so we can return early
return obj, nil

View File

@ -125,7 +125,6 @@ func TestExecutePVAction_NoSnapshotRestores(t *testing.T) {
}
if tc.backup != nil {
r.backup = tc.backup
r.snapshotVolumes = tc.backup.Spec.SnapshotVolumes
}
for _, loc := range tc.locations {

View File

@ -270,7 +270,6 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
pvRestorer := &pvRestorer{
logger: req.Log,
backup: req.Backup,
snapshotVolumes: req.Backup.Spec.SnapshotVolumes,
restorePVs: req.Restore.Spec.RestorePVs,
volumeSnapshots: req.VolumeSnapshots,
volumeSnapshotterGetter: volumeSnapshotterGetter,

View File

@ -110,7 +110,7 @@ func GetVolumesToExclude(obj metav1.Object) []string {
}
func IsPVCDefaultToFSBackup(pvcNamespace, pvcName string, crClient crclient.Client, defaultVolumesToFsBackup bool) (bool, error) {
pods, err := getPodsUsingPVC(pvcNamespace, pvcName, crClient)
pods, err := GetPodsUsingPVC(pvcNamespace, pvcName, crClient)
if err != nil {
return false, errors.WithStack(err)
}
@ -140,7 +140,7 @@ func getPodVolumeNameForPVC(pod corev1api.Pod, pvcName string) (string, error) {
return "", errors.Errorf("Pod %s/%s does not use PVC %s/%s", pod.Namespace, pod.Name, pod.Namespace, pvcName)
}
func getPodsUsingPVC(
func GetPodsUsingPVC(
pvcNamespace, pvcName string,
crClient crclient.Client,
) ([]corev1api.Pod, error) {

View File

@ -786,7 +786,7 @@ func TestGetPodsUsingPVC(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
actualPods, err := getPodsUsingPVC(tc.pvcNamespace, tc.pvcName, fakeClient)
actualPods, err := GetPodsUsingPVC(tc.pvcNamespace, tc.pvcName, fakeClient)
assert.Nilf(t, err, "Want error=nil; Got error=%v", err)
assert.Lenf(t, actualPods, tc.expectedPodCount, "unexpected number of pods in result; Want: %d; Got: %d", tc.expectedPodCount, len(actualPods))
})

View File

@ -225,7 +225,7 @@ Kubernetes namespace resources to exclude from the backup, formatted as resource
## Resource policies
Velero provides resource policies to filter resources during backup or restore. Currently, resource policies only support skipping the backup of volumes.
**Creating resource policies**
### Creating resource policies
Below are the two steps for using resource policies to skip the backup of volumes:
1. Creating resource policies configmap
@ -242,7 +242,7 @@ Below is the two-step of using resource policies to skip backup of volume:
```
This flag could also be combined with the other include and exclude filters above
**YAML template**
### YAML template
Velero only supports volume resource policies currently; other kinds of resource policies may be added in the future. The policies YAML config file looks like this:
- YAML template:
@ -299,7 +299,7 @@ Velero only support volume resource policies currently, other kinds of resource
type: skip
```
**Supported conditions**
### Supported conditions
Currently, Velero supports the volume attributes listed below:
- capacity: matching volumes have a capacity that falls within this `capacity` range. The capacity value should include the lower value and the upper value concatenated by a comma, and the unit of each value can be `Ti`, `Gi`, `Mi`, `Ki`, etc., which are standard storage units in Kubernetes. It supports several combinations (see the evaluation sketch below):
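For example, a condition like `capacity: "10Gi,100Gi"` can be evaluated with the standard Kubernetes quantity type; below is a minimal sketch of such a range check (illustrative, not Velero's actual parser):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// inRange sketches how a "lower,upper" capacity condition can be checked with
// standard Kubernetes quantities; an empty bound means unbounded on that side.
func inRange(capacity, lower, upper string) bool {
	c := resource.MustParse(capacity)
	if lower != "" && c.Cmp(resource.MustParse(lower)) < 0 {
		return false
	}
	if upper != "" && c.Cmp(resource.MustParse(upper)) > 0 {
		return false
	}
	return true
}

func main() {
	fmt.Println(inRange("50Gi", "10Gi", "100Gi")) // true
	fmt.Println(inRange("5Gi", "10Gi", ""))       // false: below the lower bound
}
```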
@ -357,12 +357,23 @@ Velero supported conditions and format listed below:
```
Volume types can be found in [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes) and pod [Volume](https://kubernetes.io/docs/concepts/storage/volumes).
**Resource policies rules**
### Resource policies rules
- Velero already has lots of include and exclude filters. The resource policies are the final filters, applied after the other include and exclude filters in one backup processing workflow. So if you use a similar filter, such as the opt-in approach, to back up one pod volume, but skip backup of the same pod volume in the resource policies, the volume will not be backed up, because the resource policies are the final filters that are applied.
- If multiple volume resource policies are defined and they conflict, the first matching policy is respected.
#### VolumePolicy priority with existing filters
* [Includes filters](#includes) and [Excludes filters](#excludes) have the highest priority. Resources filtered out by them never reach the VolumePolicy.
* The VolumePolicy has the second priority. It supersedes all the filters below it.
* The filesystem volume backup opt-in/opt-out way has the third priority.
* The `backup.Spec.SnapshotVolumes` setting has the fourth priority. A compact sketch of this ordering follows this list.
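A compact runnable sketch of this ordering for a single volume, with simplified stand-in inputs (illustrative only, not Velero's implementation):

```go
package main

import "fmt"

// backupMethod sketches the priority ordering above for one volume.
// filteredOut models the include/exclude filters; policyAction is the matched
// VolumePolicy action ("" for no match).
func backupMethod(filteredOut bool, policyAction string, optedIn, optedOut bool, snapshotVolumes *bool) string {
	if filteredOut { // 1. include/exclude filters win outright
		return "excluded"
	}
	if policyAction != "" { // 2. a VolumePolicy match supersedes the rest
		return policyAction
	}
	if optedIn && !optedOut { // 3. legacy fs-backup opt-in/opt-out
		return "fs-backup"
	}
	if snapshotVolumes == nil || *snapshotVolumes { // 4. legacy snapshot flag
		return "snapshot"
	}
	return "skip"
}

func main() {
	off := false
	fmt.Println(backupMethod(false, "snapshot", false, false, &off)) // "snapshot": policy beats the flag
	fmt.Println(backupMethod(false, "", true, false, &off))          // "fs-backup": legacy opt-in
}
```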
**Support for `fs-backup` and `snapshot` actions via volume policy feature**
### Supported VolumePolicy actions
Currently, there are three supported action types (see the sketch after this list):
* skip: don't back up the matching volume's data.
* snapshot: back up the matching volume's data by the snapshot way.
* fs-backup: back up the matching volume's data by the fs-backup way.
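These actions can also be exercised programmatically the same way the tests in this commit do: build a `ResourcePolicies` value and compile it with `BuildPolicy`. A sketch that is only buildable inside the Velero module (the package is internal); the `storageClass` conditions are illustrative, and the `skip` type is written as a string literal as in the tests:

```go
package main

import (
	"log"

	"github.com/vmware-tanzu/velero/internal/resourcepolicies"
)

func main() {
	// One policy per action type; the first matching policy wins.
	rp := &resourcepolicies.ResourcePolicies{
		Version: "v1",
		VolumePolicies: []resourcepolicies.VolumePolicy{
			{
				Conditions: map[string]interface{}{"storageClass": []string{"gp2-csi"}},
				Action:     resourcepolicies.Action{Type: resourcepolicies.FSBackup},
			},
			{
				Conditions: map[string]interface{}{"storageClass": []string{"gp3-csi"}},
				Action:     resourcepolicies.Action{Type: resourcepolicies.Snapshot},
			},
			{
				Conditions: map[string]interface{}{"storageClass": []string{"standard"}},
				Action:     resourcepolicies.Action{Type: "skip"},
			},
		},
	}

	p := &resourcepolicies.Policies{}
	if err := p.BuildPolicy(rp); err != nil {
		log.Fatalf("failed to build policy: %v", err)
	}
}
```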
#### Support for `fs-backup` and `snapshot` actions via volume policy feature
- Starting from Velero 1.14, the resource policy/volume policy feature has been extended to support more actions like `fs-backup` and `snapshot`.
- This feature only extends the action aspect of the volume policy, not the criteria aspect; the criteria components described above remain the same.
- When using the volume policy approach for backing up volumes, the volume policy criteria and action need to be specific and explicit,
@ -460,4 +471,3 @@ volumePolicies:
3. The outcome would be that Velero performs the `fs-backup` operation on both volumes:
- `fs-backup` on `Volume 1` because `Volume 1` satisfies the criteria for the `fs-backup` action.
- `fs-backup` on `Volume 2` as well: no matching action was found, so the legacy approach is used as a fallback for this volume (the `fs-backup` operation is performed because `defaultVolumesToFSBackup: true` is specified by the user).