Update CSI document. Remove the CSI plugin verifier.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
pull/7656/head
Xun Jiang 2024-04-11 14:21:23 +08:00
parent 3c377bc3ec
commit 59eeec268b
22 changed files with 79 additions and 412 deletions

View File

@ -38,7 +38,6 @@ jobs:
- name: 'use gcloud CLI'
run: |
gcloud auth login
gcloud info
- name: Set up QEMU

View File

@ -84,14 +84,14 @@ Below are actions from Velero and DMP:
**BIA Execute**
This is the existing logic in Velero. For a source PVC/PV, Velero delivers it to the corresponding BackupItemAction plugin, the plugin then takes the related actions to back it up.
For example, the existing CSI plugin takes a CSI snapshot to the volume represented by the PVC and then returns additional items (i.e., VolumeSnapshot, VolumeSnapshotContent and VolumeSnapshotClass) for Velero to further backup.
To support data movement, we will use BIA V2 which supports asynchronized operation management. Here is the Execute method from BIA V2:
To support data movement, we will use BIA V2 which supports asynchronous operation management. Here is the Execute method from BIA V2:
```
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error)
```
Besides ```additionalItem``` (as the 2nd return value), Execute method will return one more resource list called ```itemToUpdate```, which means the items to be updated and persisted when the async operation completes. For details, visit [general progress monitoring design][2].
Specifically, this mechanism will be used to persist the DUCR into the persisted backup data; in other words, the DUCR will be returned as ```itemToUpdate``` from the Execute method. The DUCR contains all the information the restore requires, so during restore, the DUCR will be extracted from the backup data.
Additionally, in the same way, a DMP could add any other items into the persisted backup data.
Execute method also returns the ```operationID``` which uniquely identifies the asynchronized operation. This ```operationID``` is generated by plugins. The [general progress monitoring design][2] doesn't restrict the format of the ```operationID```, for Velero CSI plugin, the ```operationID``` is a combination of the backup CR UID and the source PVC (represented by the ```item``` parameter) UID.
Execute method also returns the ```operationID``` which uniquely identifies the asynchronous operation. This ```operationID``` is generated by plugins. The [general progress monitoring design][2] doesn't restrict the format of the ```operationID```, for Velero CSI plugin, the ```operationID``` is a combination of the backup CR UID and the source PVC (represented by the ```item``` parameter) UID.
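To make the return values concrete, below is a minimal, hypothetical sketch of a DMP's BIA V2 Execute method. It is not the actual CSI plugin code: the type name `dmpBackupItemAction`, the DUCR name, and the use of a `datauploads` resource for the ```itemToUpdate``` entry are illustrative assumptions, and a real plugin also implements the rest of the BIA V2 interface (Name, AppliesTo, Progress, Cancel).
```go
package example

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// dmpBackupItemAction is a hypothetical DMP sketch.
type dmpBackupItemAction struct{}

func (a *dmpBackupItemAction) Execute(
	item runtime.Unstructured,
	backup *velerov1api.Backup,
) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
	// Take a snapshot of the volume behind the source PVC and hand it over to the
	// DM through a DataUpload CR (DUCR); the snapshotting itself is omitted here.
	pvcUID, _, _ := unstructured.NestedString(item.UnstructuredContent(), "metadata", "uid")

	// The operationID format is plugin-defined; the CSI plugin combines the
	// backup CR UID with the source PVC UID.
	operationID := fmt.Sprintf("%s.%s", backup.UID, pvcUID)

	// Items to back up immediately (e.g. VolumeSnapshot, VolumeSnapshotContent).
	additionalItems := []velero.ResourceIdentifier{}

	// Items re-persisted into the backup data when the async operation completes;
	// the DUCR goes here so that restore can later extract it from the backup.
	itemsToUpdate := []velero.ResourceIdentifier{
		{
			GroupResource: schema.GroupResource{Group: "velero.io", Resource: "datauploads"},
			Namespace:     backup.Namespace,
			Name:          "hypothetical-ducr-name",
		},
	}

	return item, additionalItems, operationID, itemsToUpdate, nil
}
```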
**Create Snapshot**
The DMP creates a snapshot of the requested volume and delivers it to the DM through the DUCR. After that, the DMP leaves the snapshot and its related objects (e.g., VolumeSnapshot and VolumeSnapshotContent for a CSI snapshot) to the DM; the DM then has full control of the snapshot and its related objects, i.e., it decides whether and when to delete the snapshot or its related objects.
@ -930,12 +930,20 @@ For 3, Velero leverage on DMs to decide how to save the log, but they will not g
## Installation
DMs need to be configured during installation so that they can be installed. Plugin DMs may have their own configuration; for VGDM, the only requirement is to install the Velero node-agent.
Moreover, the DMP is also required during the installation.
From release-1.14, the `github.com/vmware-tanzu/velero-plugin-for-csi` repository, which is the Velero CSI plugin, is merged into the `github.com/vmware-tanzu/velero` repository.
The reasons to merge the CSI plugin are:
* The VolumeSnapshot data mover depends on the CSI plugin, so it's reasonable to integrate them.
* This change reduces the complexity of deploying Velero.
* This makes performance tuning easier in the future.
As a result, there is no need to install the Velero CSI plugin anymore.
For example, to move CSI snapshots through VBDM, below is the installation script:
```
velero install \
--provider \
--image \
--plugins velero/velero-plugin-for-csi:xxx \
--features=EnableCSI \
--use-node-agent \
```

View File

@ -224,8 +224,8 @@ func (p *volumeSnapshotBackupItemAction) Execute(
return nil, nil, "", nil, errors.WithStack(err)
}
p.log.Infof("Returning from VolumeSnapshotBackupItemAction",
"with %d additionalItems to backup", len(additionalItems))
p.log.Infof("Returning from VolumeSnapshotBackupItemAction with %d additionalItems to backup",
len(additionalItems))
for _, ai := range additionalItems {
p.log.Debugf("%s: %s", ai.GroupResource.String(), ai.Name)
}
@ -294,22 +294,20 @@ func (p *volumeSnapshotBackupItemAction) Progress(
}
if vs.Status == nil {
p.log.Debugf("VolumeSnapshot %s/%s has an empty status.",
"Skip progress update.", vs.Namespace, vs.Name)
p.log.Debugf("VolumeSnapshot %s/%s has an empty status. Skip progress update.", vs.Namespace, vs.Name)
return progress, nil
}
if boolptr.IsSetToTrue(vs.Status.ReadyToUse) {
p.log.Debugf("VolumeSnapshot %s/%s is ReadyToUse.",
"Continue on querying corresponding VolumeSnapshotContent.",
p.log.Debugf("VolumeSnapshot %s/%s is ReadyToUse. Continue on querying corresponding VolumeSnapshotContent.",
vs.Namespace, vs.Name)
} else if vs.Status.Error != nil {
errorMessage := ""
if vs.Status.Error.Message != nil {
errorMessage = *vs.Status.Error.Message
}
p.log.Warnf("VolumeSnapshot has a temporary error %s.",
"Snapshot controller will retry later.", errorMessage)
p.log.Warnf("VolumeSnapshot has a temporary error %s. Snapshot controller will retry later.",
errorMessage)
return progress, nil
}
@ -328,8 +326,8 @@ func (p *volumeSnapshotBackupItemAction) Progress(
}
if vsc.Status == nil {
p.log.Debugf("VolumeSnapshotContent %s has an empty Status.",
"Skip progress update.", vsc.Name)
p.log.Debugf("VolumeSnapshotContent %s has an empty Status. Skip progress update.",
vsc.Name)
return progress, nil
}
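For context on the log-call changes in this file: logrus' `Infof` treats only its first argument as the format string, so a format string split across two string arguments is never interpolated and the extra arguments are dumped as `%!(EXTRA ...)`. Below is a minimal, hypothetical snippet (not repository code) illustrating the before/after pattern:
```go
package example

import "github.com/sirupsen/logrus"

// logAdditionalItems is a hypothetical helper showing the bug pattern fixed above.
func logAdditionalItems(log logrus.FieldLogger, additionalItems []string) {
	// Buggy: "with %d additionalItems to backup" and the count are extra arguments,
	// so the output ends with %!(EXTRA string=..., int=...) instead of the count.
	log.Infof("Returning from VolumeSnapshotBackupItemAction",
		"with %d additionalItems to backup", len(additionalItems))

	// Fixed: a single format string with matching arguments.
	log.Infof("Returning from VolumeSnapshotBackupItemAction with %d additionalItems to backup",
		len(additionalItems))
}
```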

View File

@ -284,7 +284,6 @@ type server struct {
mgr manager.Manager
credentialFileStore credentials.FileStore
credentialSecretStore credentials.SecretStore
featureVerifier features.Verifier
}
func newServer(f client.Factory, config serverConfig, logger *logrus.Logger) (*server, error) {
@ -326,12 +325,6 @@ func newServer(f client.Factory, config serverConfig, logger *logrus.Logger) (*s
return nil, err
}
featureVerifier := features.NewVerifier(pluginRegistry)
if _, err := featureVerifier.Verify(velerov1api.CSIFeatureFlag); err != nil {
logger.WithError(err).Warn("CSI feature verification failed, the feature may not be ready.")
}
// cancelFunc is not deferred here because if it was, then ctx would immediately
// be canceled once this function exited, making it useless to any informers using later.
// That, in turn, causes the velero server to halt when the first informer tries to use it.
@ -425,7 +418,6 @@ func newServer(f client.Factory, config serverConfig, logger *logrus.Logger) (*s
mgr: mgr,
credentialFileStore: credentialFileStore,
credentialSecretStore: credentialSecretStore,
featureVerifier: featureVerifier,
}
return s, nil
@ -973,7 +965,6 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
s.kubeClient.CoreV1().RESTClient(),
s.credentialFileStore,
s.mgr.GetClient(),
s.featureVerifier,
)
cmd.CheckError(err)

View File

@ -1,43 +0,0 @@
// Code generated by mockery v2.22.1. DO NOT EDIT.
package mocks
import (
common "github.com/vmware-tanzu/velero/pkg/plugin/framework/common"
mock "github.com/stretchr/testify/mock"
)
// PluginFinder is an autogenerated mock type for the PluginFinder type
type PluginFinder struct {
mock.Mock
}
// Find provides a mock function with given fields: kind, name
func (_m *PluginFinder) Find(kind common.PluginKind, name string) bool {
ret := _m.Called(kind, name)
var r0 bool
if rf, ok := ret.Get(0).(func(common.PluginKind, string) bool); ok {
r0 = rf(kind, name)
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
type mockConstructorTestingTNewPluginFinder interface {
mock.TestingT
Cleanup(func())
}
// NewPluginFinder creates a new instance of PluginFinder. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewPluginFinder(t mockConstructorTestingTNewPluginFinder) *PluginFinder {
mock := &PluginFinder{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@ -1,49 +0,0 @@
// Code generated by mockery v2.22.1. DO NOT EDIT.
package mocks
import mock "github.com/stretchr/testify/mock"
// Verifier is an autogenerated mock type for the Verifier type
type Verifier struct {
mock.Mock
}
// Verify provides a mock function with given fields: name
func (_m *Verifier) Verify(name string) (bool, error) {
ret := _m.Called(name)
var r0 bool
var r1 error
if rf, ok := ret.Get(0).(func(string) (bool, error)); ok {
return rf(name)
}
if rf, ok := ret.Get(0).(func(string) bool); ok {
r0 = rf(name)
} else {
r0 = ret.Get(0).(bool)
}
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(name)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewVerifier interface {
mock.TestingT
Cleanup(func())
}
// NewVerifier creates a new instance of Verifier. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewVerifier(t mockConstructorTestingTNewVerifier) *Verifier {
mock := &Verifier{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@ -1,71 +0,0 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package features
import (
"errors"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/plugin/framework/common"
)
type PluginFinder interface {
Find(kind common.PluginKind, name string) bool
}
type Verifier interface {
Verify(name string) (bool, error)
}
type verifier struct {
finder PluginFinder
}
func NewVerifier(finder PluginFinder) Verifier {
return &verifier{
finder: finder,
}
}
func (v *verifier) Verify(name string) (bool, error) {
enabled := IsEnabled(name)
switch name {
case velerov1api.CSIFeatureFlag:
return verifyCSIFeature(v.finder, enabled)
default:
return false, nil
}
}
func verifyCSIFeature(finder PluginFinder, enabled bool) (bool, error) {
installed := false
installed = finder.Find(common.PluginKindBackupItemActionV2, "velero.io/csi-pvc-backupper")
if installed {
installed = finder.Find(common.PluginKindRestoreItemActionV2, "velero.io/csi-pvc-restorer")
}
if !enabled && installed {
return false, errors.New("CSI plugins are registered, but the EnableCSI feature is not enabled")
} else if enabled && !installed {
return false, errors.New("CSI feature is enabled, but CSI plugins are not registered")
} else if !enabled && !installed {
return false, nil
} else {
return true, nil
}
}

View File

@ -1,61 +0,0 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package features
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
findermocks "github.com/vmware-tanzu/velero/pkg/features/mocks"
)
func TestVerify(t *testing.T) {
NewFeatureFlagSet()
verifier := verifier{}
finder := new(findermocks.PluginFinder)
finder.On("Find", mock.Anything, mock.Anything).Return(false)
verifier.finder = finder
ready, err := verifier.Verify("EnableCSI")
assert.Equal(t, false, ready)
assert.Nil(t, err)
finder = new(findermocks.PluginFinder)
finder.On("Find", mock.Anything, mock.Anything).Return(true)
verifier.finder = finder
ready, err = verifier.Verify("EnableCSI")
assert.Equal(t, false, ready)
assert.EqualError(t, err, "CSI plugins are registered, but the EnableCSI feature is not enabled")
Enable("EnableCSI")
finder = new(findermocks.PluginFinder)
finder.On("Find", mock.Anything, mock.Anything).Return(false)
verifier.finder = finder
ready, err = verifier.Verify("EnableCSI")
assert.Equal(t, false, ready)
assert.EqualError(t, err, "CSI feature is enabled, but CSI plugins are not registered")
Enable("EnableCSI")
finder = new(findermocks.PluginFinder)
finder.On("Find", mock.Anything, mock.Anything).Return(true)
verifier.finder = finder
ready, err = verifier.Verify("EnableCSI")
assert.Equal(t, true, ready)
assert.Nil(t, err)
}

View File

@ -61,10 +61,6 @@ func (r *mockRegistry) Get(kind common.PluginKind, name string) (framework.Plugi
return id, args.Error(1)
}
func (r *mockRegistry) Find(kind common.PluginKind, name string) bool {
return false
}
func TestNewManager(t *testing.T) {
logger := test.NewLogger()
logLevel := logrus.InfoLevel

View File

@ -37,9 +37,6 @@ type Registry interface {
List(kind common.PluginKind) []framework.PluginIdentifier
// Get returns the PluginIdentifier for kind and name.
Get(kind common.PluginKind, name string) (framework.PluginIdentifier, error)
// Find checks if the specified plugin exists in the registry
Find(kind common.PluginKind, name string) bool
}
// KindAndName is a convenience struct that combines a PluginKind and a name.
@ -128,12 +125,6 @@ func (r *registry) Get(kind common.PluginKind, name string) (framework.PluginIde
return p, nil
}
// Contain if the specified plugin exists in the registry
func (r *registry) Find(kind common.PluginKind, name string) bool {
_, found := r.pluginsByID[KindAndName{Kind: kind, Name: name}]
return found
}
// readPluginsDir recursively reads dir looking for plugins.
func (r *registry) readPluginsDir(dir string) ([]string, error) {
if _, err := r.fs.Stat(dir); err != nil {

View File

@ -115,9 +115,7 @@ func (p *volumeSnapshotRestoreItemAction) Execute(
vs.Namespace, vs.Name, velerov1api.DriverNameAnnotation)
}
p.log.Debugf("Set VolumeSnapshotContent %s/%s DeletionPolicy",
"to Retain to make sure VS deletion in namespace will not",
"delete Snapshot on cloud provider.",
p.log.Debugf("Set VolumeSnapshotContent %s/%s DeletionPolicy to Retain to make sure VS deletion in namespace will not delete Snapshot on cloud provider.",
newNamespace, vs.Name)
vsc := snapshotv1api.VolumeSnapshotContent{
@ -154,8 +152,8 @@ func (p *volumeSnapshotRestoreItemAction) Execute(
"failed to create volumesnapshotcontents %s",
vsc.GenerateName)
}
p.log.Infof("Created VolumesnapshotContents %s with static",
"binding to volumesnapshot %s/%s", vsc, newNamespace, vs.Name)
p.log.Infof("Created VolumesnapshotContents %s with static binding to volumesnapshot %s/%s",
vsc, newNamespace, vs.Name)
// Reset Spec to convert the VolumeSnapshot from using
// the dynamic VolumeSnapshotContent to the static one.

View File

@ -75,8 +75,8 @@ func (p *volumeSnapshotContentRestoreItemAction) Execute(
)
}
p.log.Infof("Returning from VolumeSnapshotContentRestoreItemAction",
"with %d additionalItems", len(additionalItems))
p.log.Infof("Returning from VolumeSnapshotContentRestoreItemAction with %d additionalItems",
len(additionalItems))
return &velero.RestoreItemActionExecuteOutput{
UpdatedItem: input.Item,
AdditionalItems: additionalItems,

View File

@ -114,7 +114,6 @@ type kubernetesRestorer struct {
podGetter cache.Getter
credentialFileStore credentials.FileStore
kbClient crclient.Client
featureVerifier features.Verifier
}
// NewKubernetesRestorer creates a new kubernetesRestorer.
@ -132,7 +131,6 @@ func NewKubernetesRestorer(
podGetter cache.Getter,
credentialStore credentials.FileStore,
kbClient crclient.Client,
featureVerifier features.Verifier,
) (Restorer, error) {
return &kubernetesRestorer{
discoveryHelper: discoveryHelper,
@ -157,7 +155,6 @@ func NewKubernetesRestorer(
podGetter: podGetter,
credentialFileStore: credentialStore,
kbClient: kbClient,
featureVerifier: featureVerifier,
}, nil
}
@ -322,7 +319,6 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
itemOperationsList: req.GetItemOperationsList(),
resourceModifiers: req.ResourceModifiers,
disableInformerCache: req.DisableInformerCache,
featureVerifier: kr.featureVerifier,
hookTracker: hook.NewHookTracker(),
backupVolumeInfoMap: req.BackupVolumeInfoMap,
restoreVolumeInfoTracker: req.RestoreVolumeInfoTracker,
@ -376,7 +372,6 @@ type restoreContext struct {
itemOperationsList *[]*itemoperation.RestoreOperation
resourceModifiers *resourcemodifiers.ResourceModifiers
disableInformerCache bool
featureVerifier features.Verifier
hookTracker *hook.HookTracker
backupVolumeInfoMap map[string]volume.BackupVolumeInfo
restoreVolumeInfoTracker *volume.RestoreVolumeInfoTracker
@ -1252,11 +1247,6 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
restoreLogger.Infof("Dynamically re-provisioning persistent volume because it has a CSI VolumeSnapshot or a related snapshot DataUpload.")
ctx.pvsToProvision.Insert(name)
if ready, err := ctx.featureVerifier.Verify(velerov1api.CSIFeatureFlag); !ready {
ctx.log.Errorf("Failed to verify CSI modules, ready %v, err %v", ready, err)
errs.Add(namespace, fmt.Errorf("CSI modules are not ready for restore. Check CSI feature is enabled and CSI plugin is installed"))
}
// Return early because we don't want to restore the PV itself, we
// want to dynamically re-provision it.
return warnings, errs, itemExists
@ -1308,11 +1298,6 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
restoreLogger.Infof("Dynamically re-provisioning persistent volume because it has a CSI VolumeSnapshot or a related snapshot DataUpload.")
ctx.pvsToProvision.Insert(name)
if ready, err := ctx.featureVerifier.Verify(velerov1api.CSIFeatureFlag); !ready {
ctx.log.Errorf("Failed to verify CSI modules, ready %v, err %v", ready, err)
errs.Add(namespace, fmt.Errorf("CSI modules are not ready for restore. Check CSI feature is enabled and CSI plugin is installed"))
}
// Return early because we don't want to restore the PV itself, we
// want to dynamically re-provision it.
return warnings, errs, itemExists

View File

@ -49,7 +49,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/features"
verifiermocks "github.com/vmware-tanzu/velero/pkg/features/mocks"
"github.com/vmware-tanzu/velero/pkg/itemoperation"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
@ -220,9 +219,6 @@ func TestRestorePVWithVolumeInfo(t *testing.T) {
}
features.Enable("EnableCSI")
finder := new(verifiermocks.PluginFinder)
finder.On("Find", mock.Anything, mock.Anything).Return(true)
verifier := features.NewVerifier(finder)
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
@ -232,7 +228,6 @@ func TestRestorePVWithVolumeInfo(t *testing.T) {
h.DiscoveryClient.WithAPIResource(r)
}
require.NoError(t, h.restorer.discoveryHelper.Refresh())
h.restorer.featureVerifier = verifier
data := &Request{
Log: h.log,
@ -3160,47 +3155,6 @@ func TestRestorePersistentVolumes(t *testing.T) {
},
want: []*test.APIResource{},
},
{
name: "when a PV has a CSI VolumeSnapshot, but CSI modules are not ready, the PV is not restored",
restore: defaultRestore().Result(),
backup: defaultBackup().Result(),
tarball: test.NewTarWriter(t).
AddItems("persistentvolumes",
builder.ForPersistentVolume("pv-1").
ReclaimPolicy(corev1api.PersistentVolumeReclaimRetain).
ClaimRef("velero", testPVCName).
Result(),
).
Done(),
apiResources: []*test.APIResource{
test.PVs(),
test.PVCs(),
},
csiVolumeSnapshots: []*snapshotv1api.VolumeSnapshot{
{
ObjectMeta: metav1.ObjectMeta{
Namespace: "velero",
Name: "test",
},
Spec: snapshotv1api.VolumeSnapshotSpec{
Source: snapshotv1api.VolumeSnapshotSource{
PersistentVolumeClaimName: &testPVCName,
},
},
},
},
volumeSnapshotLocations: []*velerov1api.VolumeSnapshotLocation{
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "default").Provider("provider-1").Result(),
},
volumeSnapshotterGetter: map[string]vsv1.VolumeSnapshotter{
"provider-1": &volumeSnapshotter{
snapshotVolumes: map[string]string{"snapshot-1": "new-volume"},
},
},
want: []*test.APIResource{},
csiFeatureVerifierErr: "fake-feature-check-error",
wantError: true,
},
{
name: "when a PV with a reclaim policy of retain has a DataUpload result CM and does not exist in-cluster, the PV is not restored",
restore: defaultRestore().ObjectMeta(builder.WithUID("fakeUID")).Result(),
@ -3233,40 +3187,6 @@ func TestRestorePersistentVolumes(t *testing.T) {
})).Result(),
want: []*test.APIResource{},
},
{
name: "when a PV has a DataUpload result CM, but CSI modules are not ready, the PV is not restored",
restore: defaultRestore().ObjectMeta(builder.WithUID("fakeUID")).Result(),
backup: defaultBackup().Result(),
tarball: test.NewTarWriter(t).
AddItems("persistentvolumes",
builder.ForPersistentVolume("pv-1").
ReclaimPolicy(corev1api.PersistentVolumeReclaimRetain).
ClaimRef("velero", testPVCName).
Result(),
).
Done(),
apiResources: []*test.APIResource{
test.PVs(),
test.PVCs(),
test.ConfigMaps(),
},
volumeSnapshotLocations: []*velerov1api.VolumeSnapshotLocation{
builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "default").Provider("provider-1").Result(),
},
volumeSnapshotterGetter: map[string]vsv1.VolumeSnapshotter{
"provider-1": &volumeSnapshotter{
snapshotVolumes: map[string]string{"snapshot-1": "new-volume"},
},
},
dataUploadResult: builder.ForConfigMap("velero", "test").ObjectMeta(builder.WithLabelsMap(map[string]string{
velerov1api.RestoreUIDLabel: "fakeUID",
velerov1api.PVCNamespaceNameLabel: "velero.testPVC",
velerov1api.ResourceUsageLabel: string(velerov1api.VeleroResourceUsageDataUploadResult),
})).Result(),
want: []*test.APIResource{},
csiFeatureVerifierErr: "fake-feature-check-error",
wantError: true,
},
}
for _, tc := range tests {
@ -3278,14 +3198,6 @@ func TestRestorePersistentVolumes(t *testing.T) {
return renamed, nil
}
verifierMock := new(verifiermocks.Verifier)
if tc.csiFeatureVerifierErr != "" {
verifierMock.On("Verify", mock.Anything, mock.Anything).Return(false, errors.New(tc.csiFeatureVerifierErr))
} else {
verifierMock.On("Verify", mock.Anything, mock.Anything).Return(true, nil)
}
h.restorer.featureVerifier = verifierMock
// set up the VolumeSnapshotLocation client and add test data to it
for _, vsl := range tc.volumeSnapshotLocations {
require.NoError(t, h.restorer.kbClient.Create(context.Background(), vsl))

View File

@ -152,14 +152,22 @@ Backup repository is created during the first execution of backup targeting to i
## Install Velero with CSI support on source cluster
On source cluster, Velero needs to manipulate CSI snapshots through the CSI volume snapshot APIs, so you must enable the `EnableCSI` feature flag and install the Velero [CSI plugin][2] on the Velero server.
On source cluster, Velero needs to manipulate CSI snapshots through the CSI volume snapshot APIs, so you must enable the `EnableCSI` feature flag on the Velero server.
Both of these can be added with the `velero install` command.
To integrate Velero with the CSI volume snapshot APIs, you must enable the `EnableCSI` feature flag.
From release-1.14, the `github.com/vmware-tanzu/velero-plugin-for-csi` repository, which is the Velero CSI plugin, is merged into the `github.com/vmware-tanzu/velero` repository.
The reasons to merge the CSI plugin are:
* The VolumeSnapshot data mover depends on the CSI plugin, so it's reasonable to integrate them.
* This change reduces the complexity of deploying Velero.
* This makes performance tuning easier in the future.
As a result, there is no need to install the Velero CSI plugin anymore.
```bash
velero install \
--features=EnableCSI \
--plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.6.0 \
--plugins=<object storage plugin> \
...
```

View File

@ -5,7 +5,16 @@ layout: docs
Integrating Container Storage Interface (CSI) snapshot support into Velero enables Velero to backup and restore CSI-backed volumes using the [Kubernetes CSI Snapshot APIs](https://kubernetes.io/docs/concepts/storage/volume-snapshots/).
By supporting CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a Velero-specific plugin to be available. This page gives an overview of how to add support for CSI snapshots to Velero through CSI plugins. For more information about specific components, see the [plugin repo](https://github.com/vmware-tanzu/velero-plugin-for-csi/).
By supporting CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a Velero-specific plugin to be available. This page gives an overview of how to add support for CSI snapshots to Velero.
## Notice
From release-1.14, the `github.com/vmware-tanzu/velero-plugin-for-csi` repository, which is the Velero CSI plugin, is merged into the `github.com/vmware-tanzu/velero` repository.
The reasons to merge the CSI plugin are:
* The VolumeSnapshot data mover depends on the CSI plugin, so it's reasonable to integrate them.
* This change reduces the complexity of deploying Velero.
* This makes performance tuning easier in the future.
As a result, there is no need to install the Velero CSI plugin anymore.
## Prerequisites
@ -17,27 +26,25 @@ By supporting CSI snapshot APIs, Velero can support any volume provider that has
## Installing Velero with CSI support
To integrate Velero with the CSI volume snapshot APIs, you must enable the `EnableCSI` feature flag and install the Velero [CSI plugins][2] on the Velero server.
Both of these can be added with the `velero install` command.
To integrate Velero with the CSI volume snapshot APIs, you must enable the `EnableCSI` feature flag.
```bash
velero install \
--features=EnableCSI \
--plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.7.0 \
--plugins=<object storage plugin> \
...
```
To include the status of CSI objects associated with a Velero backup in `velero backup describe` output, run `velero client config set features=EnableCSI`.
See [Enabling Features][1] for more information about managing client-side feature flags. You can also view the image on [Docker Hub][3].
See [Enabling Features][1] for more information about managing client-side feature flags.
## Implementation Choices
This section documents some of the choices made during implementation of the Velero [CSI plugins][2]:
This section documents some of the choices made during the implementation of CSI snapshot support.
1. VolumeSnapshots created by the Velero CSI plugins are retained only for the lifetime of the backup, even if the `DeletionPolicy` on the VolumeSnapshotClass is set to `Retain`. To accomplish this, during backup deletion and prior to deleting the VolumeSnapshot, the VolumeSnapshotContent object is patched to set its `DeletionPolicy` to `Delete` (see the sketch after this list). Deleting the VolumeSnapshot object then results in a cascade delete of the VolumeSnapshotContent and the snapshot in the storage provider.
1. VolumeSnapshotContent objects created during a `velero backup` that are dangling, unbound to a VolumeSnapshot object, will be discovered, using labels, and deleted on backup deletion.
1. The Velero CSI plugins, to backup CSI backed PVCs, will choose the VolumeSnapshotClass in the cluster based on the following logic:
2. VolumeSnapshotContent objects created during a `velero backup` that are dangling, unbound to a VolumeSnapshot object, will be discovered, using labels, and deleted on backup deletion.
3. The Velero CSI plugins, to backup CSI backed PVCs, will choose the VolumeSnapshotClass in the cluster based on the following logic:
1. **Default Behavior:**
You can simply create a VolumeSnapshotClass for a particular driver and put a label on it to indicate that it is the default VolumeSnapshotClass for that driver. For example, if you want to create a VolumeSnapshotClass for the CSI driver `disk.csi.cloud.com` for taking snapshots of disks created with `disk.csi.cloud.com` based storage classes, you can create a VolumeSnapshotClass like this:
```yaml
@ -83,7 +90,7 @@ This section documents some of the choices made during implementation of the Vel
storage: 1Gi
storageClassName: disk.csi.cloud.com
```
1. The VolumeSnapshot objects will be removed from the cluster after the backup is uploaded to the object storage, so that the namespace that is backed up can be deleted without removing the snapshot in the storage provider if the `DeletionPolicy` is `Delete`.
4. The VolumeSnapshot objects will be removed from the cluster after the backup is uploaded to the object storage, so that the namespace that is backed up can be deleted without removing the snapshot in the storage provider if the `DeletionPolicy` is `Delete`.
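As referenced in item 1 above, below is a minimal, hypothetical sketch of patching a VolumeSnapshotContent's `deletionPolicy` to `Delete` using the Kubernetes dynamic client. This is illustrative only and is not Velero's actual implementation; the function name `setDeletionPolicyToDelete` is an assumption.
```go
package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// vscGVR identifies the cluster-scoped VolumeSnapshotContent resource.
var vscGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshotcontents",
}

// setDeletionPolicyToDelete patches spec.deletionPolicy on a VolumeSnapshotContent
// so that deleting the bound VolumeSnapshot cascades to the storage-provider snapshot.
func setDeletionPolicyToDelete(ctx context.Context, c dynamic.Interface, name string) error {
	patch, err := json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{
			"deletionPolicy": "Delete",
		},
	})
	if err != nil {
		return err
	}
	_, err = c.Resource(vscGVR).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```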
## How it Works - Overview
@ -110,10 +117,7 @@ The `DeletionPolicy` on the VolumeSnapshotContent will be the same as the `Delet
When the Velero backup expires, the VolumeSnapshot objects will be deleted and the VolumeSnapshotContent objects will be updated to have a `DeletionPolicy` of `Delete`, to free space on the storage system.
For more details on how each plugin works, see the [CSI plugin repo][2]'s documentation.
**Note:** The AWS, Microsoft Azure, and Google Cloud Platform (GCP) Velero plugins version 1.4 and later are able to snapshot and restore persistent volumes provisioned by a CSI driver via the APIs of the cloud provider, without having to install Velero CSI plugins. See the [AWS](https://github.com/vmware-tanzu/velero-plugin-for-aws), [Microsoft Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure), and [Google Cloud Platform (GCP)](https://github.com/vmware-tanzu/velero-plugin-for-gcp) Velero plugin repo for more information on supported CSI drivers.
From v1.14, there is no need to install the CSI plugin, because it is integrated into the Velero code base.
[1]: customize-installation.md#enable-server-side-features
[2]: https://github.com/vmware-tanzu/velero-plugin-for-csi/
[3]: https://hub.docker.com/repository/docker/velero/velero-plugin-for-csi

View File

@ -15,7 +15,6 @@ Velero supports a variety of storage providers for different backup and snapshot
| [Google Cloud Platform (GCP)](https://cloud.google.com) | [Google Cloud Storage](https://github.com/vmware-tanzu/velero-plugin-for-gcp/blob/main/backupstoragelocation.md) | [Google Compute Engine Disks](https://github.com/vmware-tanzu/velero-plugin-for-gcp/blob/main/volumesnapshotlocation.md) | [Velero plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp) | [GCP Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) | [BackupStorageLocation](https://github.com/vmware-tanzu/velero-plugin-for-gcp/blob/main/backupstoragelocation.md) <br/> [VolumeSnapshotLocation](https://github.com/vmware-tanzu/velero-plugin-for-gcp/blob/main/volumesnapshotlocation.md) |
| [Microsoft Azure](https://azure.com) | Azure Blob Storage | Azure Managed Disks | [Velero plugin for Microsoft Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure) | [Azure Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure#setup) | [BackupStorageLocation](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/main/backupstoragelocation.md) <br/> [VolumeSnapshotLocation](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/main/volumesnapshotlocation.md) |
| [VMware vSphere](https://www.vmware.com/ca/products/vsphere.html) | 🚫 | vSphere Volumes | [VMware vSphere](https://github.com/vmware-tanzu/velero-plugin-for-vsphere) | [vSphere Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-vsphere#velero-plugin-for-vsphere-installation-and-configuration-details) | 🚫 |
| [Container Storage Interface (CSI)](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)| 🚫 | CSI Volumes | [Velero plugin for CSI](https://github.com/vmware-tanzu/velero-plugin-for-csi/) | [CSI Plugin Setup](https://github.com/vmware-tanzu/velero-plugin-for-csi#kinds-of-plugins-included) | 🚫 |
{{< /table >}}
Contact: [#Velero Slack](https://kubernetes.slack.com/messages/velero), [GitHub Issues](https://github.com/vmware-tanzu/velero/issues)

View File

@ -55,7 +55,6 @@ Here is an example:
"aws": "../velero-plugin-for-aws",
"gcp": "../velero-plugin-for-gcp",
"azure": "../velero-plugin-for-microsoft-azure",
"csi": "../velero-plugin-for-csi"
},
"allowed_contexts": [
"development"

View File

@ -1,11 +1,11 @@
---
title: "Upgrading to Velero 1.13"
title: "Upgrading to Velero 1.14"
layout: docs
---
## Prerequisites
- Velero [v1.12.x][5] installed.
- Velero [v1.13.x][6] installed.
If you're not yet running at least Velero v1.8, see the following:
@ -14,6 +14,7 @@ If you're not yet running at least Velero v1.8, see the following:
- [Upgrading to v1.10][3]
- [Upgrading to v1.11][4]
- [Upgrading to v1.12][5]
- [Upgrading to v1.13][6]
Before upgrading, check the [Velero compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix) to make sure your version of Kubernetes is supported by the new version of Velero.
@ -34,7 +35,7 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.13.0
Version: v1.14.0
Git commit: <git SHA>
```
@ -46,21 +47,29 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
**NOTE:** Since velero v1.10.0 only v1 CRD will be supported during installation, therefore, the v1.10.0 will only work on Kubernetes version >= v1.16
3. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
3. Delete the CSI plugin. Because the Velero CSI plugin is now merged into Velero, you need to remove the existing CSI plugin InitContainer. Otherwise, the Velero server plugin would fail to start because the same plugin is registered twice.
Please find more information about the CSI plugin merge on this page: [csi].
If the `velero plugin remove` command fails with `not found`, the Velero CSI plugin was not installed before the upgrade. It's safe to ignore the error.
``` bash
velero plugin remove velero-velero-plugin-for-csi; echo 0
```
4. Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set:
```bash
# set the container and image of the init container for plugin accordingly,
# if you are using other plugin
kubectl set image deployment/velero \
velero=velero/velero:v1.13.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.9.0 \
velero=velero/velero:v1.14.0 \
velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.10.0 \
--namespace velero
# optional, if using the node agent daemonset
kubectl set image daemonset/node-agent \
node-agent=velero/velero:v1.13.0 \
node-agent=velero/velero:v1.14.0 \
--namespace velero
```
4. Confirm that the deployment is up and running with the correct version by running:
5. Confirm that the deployment is up and running with the correct version by running:
```bash
velero version
@ -70,23 +79,23 @@ Before upgrading, check the [Velero compatibility matrix](https://github.com/vmw
```bash
Client:
Version: v1.13.0
Version: v1.14.0
Git commit: <git SHA>
Server:
Version: v1.13.0
Version: v1.14.0
```
### Upgrade from version lower than v1.10.0
The procedure for upgrading from a version lower than v1.10.0 is identical to the procedure above, except for step 3 as shown below.
The procedure for upgrading from a version lower than v1.10.0 is identical to the procedure above, except for step 4 as shown below.
3. Update the container image and objects fields used by the Velero deployment and, optionally, the restic daemon set:
1. Update the container image and objects fields used by the Velero deployment and, optionally, the restic daemon set:
```bash
# uploader_type value could be restic or kopia
kubectl get deploy -n velero -ojson \
| sed "s#\"image\"\: \"velero\/velero\:v[0-9]*.[0-9]*.[0-9]\"#\"image\"\: \"velero\/velero\:v1.13.0\"#g" \
| sed "s#\"image\"\: \"velero\/velero\:v[0-9]*.[0-9]*.[0-9]\"#\"image\"\: \"velero\/velero\:v1.14.0\"#g" \
| sed "s#\"server\",#\"server\",\"--uploader-type=$uploader_type\",#g" \
| sed "s#default-volumes-to-restic#default-volumes-to-fs-backup#g" \
| sed "s#default-restic-prune-frequency#default-repo-maintain-frequency#g" \
@ -95,7 +104,7 @@ The procedure for upgrading from a version lower than v1.10.0 is identical to th
# optional, if using the restic daemon set
echo $(kubectl get ds -n velero restic -ojson) \
| sed "s#\"image\"\: \"velero\/velero\:v[0-9]*.[0-9]*.[0-9]\"#\"image\"\: \"velero\/velero\:v1.13.0\"#g" \
| sed "s#\"image\"\: \"velero\/velero\:v[0-9]*.[0-9]*.[0-9]\"#\"image\"\: \"velero\/velero\:v1.14.0\"#g" \
| sed "s#\"name\"\: \"restic\"#\"name\"\: \"node-agent\"#g" \
| sed "s#\[ \"restic\",#\[ \"node-agent\",#g" \
| kubectl apply -f -
@ -115,3 +124,4 @@ If upgrading from Velero v1.9.x or lower, there will likely remain some unused r
[3]: https://velero.io/docs/v1.10/upgrade-to-1.10
[4]: https://velero.io/docs/v1.11/upgrade-to-1.11
[5]: https://velero.io/docs/v1.12/upgrade-to-1.12
[6]: https://velero.io/docs/v1.13/upgrade-to-1.13

View File

@ -272,7 +272,7 @@ As we provided several examples for E2E test execution, what if no filter is inv
### Suggested pipelines for full test
The following pipelines should cover all E2E tests along with proper filters:
1. **CSI pipeline:** As we can see lots of labels in E2E test code, there're many snapshot-labeled test scripts. To cover CSI scenario, a pipeline with CSI enabled should be a good choice, otherwise, we will double all the snapshot cases for CSI scenario, it's very time-wasting. By providing `FEATURES=EnableCSI` and `PLUGINS=<provider-plugin-images>,velero/velero-plugin-for-csi:<target-version>`, a CSI pipeline is ready for testing.
1. **CSI pipeline:** As the many labels in the E2E test code show, there are many snapshot-labeled test scripts. To cover the CSI scenario, a pipeline with CSI enabled is a good choice; otherwise, we would have to duplicate all the snapshot cases for the CSI scenario, which is very time-consuming. By providing `FEATURES=EnableCSI` and `PLUGINS=<provider-plugin-images>`, a CSI pipeline is ready for testing.
1. **Data mover pipeline:** The data mover scenario is the same as the migration test, except for the restriction on migration between different providers, so it is better to separate it out from the other pipelines. Please refer to the previous example.
1. **Restic/Kopia backup path pipelines:**
1. **Restic pipeline:** For the same reason of saving time, set `UPLOADER_TYPE` to `restic` for all file system backup test cases;

View File

@ -84,12 +84,6 @@ spec:
volumeMounts:
- mountPath: /target
name: plugins
- image: velero/velero-plugin-for-csi
imagePullPolicy: Always
name: velero-plugin-for-csi
volumeMounts:
- mountPath: /target
name: plugins
restartPolicy: Always
serviceAccountName: velero
volumes:

View File

@ -9,8 +9,7 @@
"providers": {
"aws": "../velero-plugin-for-aws",
"gcp": "../velero-plugin-for-gcp",
"azure": "../velero-plugin-for-microsoft-azure",
"csi": "../velero-plugin-for-csi"
"azure": "../velero-plugin-for-microsoft-azure"
},
"allowed_contexts": [
"development"