Merge branch 'vmware-tanzu:main' into main
commit
970f05260d
@ -15,6 +15,7 @@ reviewers:
- ywk253100
- blackpiglet
- qiuming-best
- shubham-pampattiwar

tech-writer:
- a-mccarthy
@ -12,6 +12,7 @@
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Ming Qiu | [qiuming-best](https://github.com/qiuming-best) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |

## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
README.md
@ -38,13 +38,20 @@ See [the list of releases][6] to find out about feature changes.

The following is a list of the supported Kubernetes versions for each Velero version.

| Velero version | Kubernetes versions|
|----------------|--------------------|
| 1.8 | 1.16-latest |
| 1.6.3-1.7.1 | 1.12-latest |
| 1.60-1.6.2 | 1.12-1.21 |
| 1.5 | 1.12-1.21 |
| 1.4 | 1.10-1.21 |
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|------------------------------|
| 1.9 | 1.16-latest | 1.20.5, 1.21.2, 1.22.5, 1.23, and 1.24 |
| 1.8 | 1.16-latest | |
| 1.6.3-1.7.1 | 1.12-latest | |
| 1.6.0-1.6.2 | 1.12-1.21 | |
| 1.5 | 1.12-1.21 | |
| 1.4 | 1.10-1.21 | |

Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.

The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version. If you have a question about test coverage before v1.9, please reach out in the [#velero-users](https://kubernetes.slack.com/archives/C6VCGP4MT) Slack channel.

If you are interested in using a different version of Kubernetes with a given Velero version, we recommend that you perform testing before installing or upgrading your environment. For full information around capabilities within a release, also see the Velero [release notes](https://github.com/vmware-tanzu/velero/releases) or Kubernetes [release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG). See the Velero [support page](https://velero.io/docs/latest/support-process/) for information about supported versions of Velero.

[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
@ -0,0 +1 @@
Add ability to restore status on selected resources
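
As a rough usage sketch of this feature (the backup name and the `workflows.argoproj.io` resource are hypothetical examples; `--status-include-resources` is the flag introduced by this change):

```
velero restore create --from-backup my-backup \
  --status-include-resources workflows.argoproj.io
```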
@ -0,0 +1 @@
Make the `velero completion zsh` command output usable by the `source` command.
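
A brief usage sketch of what this change enables (assuming the `velero` binary is on your PATH):

```
source <(velero completion zsh)
```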
@ -1773,6 +1773,26 @@ spec:
              PVs from snapshot (via the cloudprovider).
            nullable: true
            type: boolean
          restoreStatus:
            description: RestoreStatus specifies which resources we should restore
              the status field. If nil, no objects are included. Optional.
            nullable: true
            properties:
              excludedResources:
                description: ExcludedResources specifies the resources for which
                  the status will not be restored.
                items:
                  type: string
                nullable: true
                type: array
              includedResources:
                description: IncludedResources specifies the resources for which
                  the status will be restored. If empty, it applies to all resources.
                items:
                  type: string
                nullable: true
                type: array
            type: object
          scheduleName:
            description: ScheduleName is the unique name of the Velero schedule
              to restore from. If specified, and BackupName is empty, Velero will
File diff suppressed because one or more lines are too long
@ -86,6 +86,12 @@ type RestoreSpec struct {
    // +nullable
    RestorePVs *bool `json:"restorePVs,omitempty"`

    // RestoreStatus specifies which resources we should restore the status
    // field. If nil, no objects are included. Optional.
    // +optional
    // +nullable
    RestoreStatus *RestoreStatusSpec `json:"restoreStatus,omitempty"`

    // PreserveNodePorts specifies whether to restore old nodePorts from backup.
    // +optional
    // +nullable

@ -113,6 +119,19 @@ type RestoreHooks struct {
    Resources []RestoreResourceHookSpec `json:"resources,omitempty"`
}

type RestoreStatusSpec struct {
    // IncludedResources specifies the resources for which the status will be restored.
    // If empty, it applies to all resources.
    // +optional
    // +nullable
    IncludedResources []string `json:"includedResources,omitempty"`

    // ExcludedResources specifies the resources for which the status will not be restored.
    // +optional
    // +nullable
    ExcludedResources []string `json:"excludedResources,omitempty"`
}

// RestoreResourceHookSpec defines one or more RestoreResourceHooks that should be executed based on
// the rules defined for namespaces, resources, and label selector.
type RestoreResourceHookSpec struct {
@ -1278,6 +1278,11 @@ func (in *RestoreSpec) DeepCopyInto(out *RestoreSpec) {
        *out = new(bool)
        **out = **in
    }
    if in.RestoreStatus != nil {
        in, out := &in.RestoreStatus, &out.RestoreStatus
        *out = new(RestoreStatusSpec)
        (*in).DeepCopyInto(*out)
    }
    if in.PreserveNodePorts != nil {
        in, out := &in.PreserveNodePorts, &out.PreserveNodePorts
        *out = new(bool)

@ -1334,6 +1339,31 @@ func (in *RestoreStatus) DeepCopy() *RestoreStatus {
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RestoreStatusSpec) DeepCopyInto(out *RestoreStatusSpec) {
    *out = *in
    if in.IncludedResources != nil {
        in, out := &in.IncludedResources, &out.IncludedResources
        *out = make([]string, len(*in))
        copy(*out, *in)
    }
    if in.ExcludedResources != nil {
        in, out := &in.ExcludedResources, &out.ExcludedResources
        *out = make([]string, len(*in))
        copy(*out, *in)
    }
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestoreStatusSpec.
func (in *RestoreStatusSpec) DeepCopy() *RestoreStatusSpec {
    if in == nil {
        return nil
    }
    out := new(RestoreStatusSpec)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Schedule) DeepCopyInto(out *Schedule) {
    *out = *in
@ -89,6 +89,11 @@ type Deletor interface {
    Delete(name string, opts metav1.DeleteOptions) error
}

// StatusUpdater updates the status field of an object.
type StatusUpdater interface {
    UpdateStatus(obj *unstructured.Unstructured, opts metav1.UpdateOptions) (*unstructured.Unstructured, error)
}

// Dynamic contains client methods that Velero needs for backing up and restoring resources.
type Dynamic interface {
    Creator

@ -97,6 +102,7 @@ type Dynamic interface {
    Getter
    Patcher
    Deletor
    StatusUpdater
}

// dynamicResourceClient implements Dynamic.

@ -129,3 +135,7 @@ func (d *dynamicResourceClient) Patch(name string, data []byte) (*unstructured.U
func (d *dynamicResourceClient) Delete(name string, opts metav1.DeleteOptions) error {
    return d.resourceClient.Delete(context.TODO(), name, opts)
}

func (d *dynamicResourceClient) UpdateStatus(obj *unstructured.Unstructured, opts metav1.UpdateOptions) (*unstructured.Unstructured, error) {
    return d.resourceClient.UpdateStatus(context.TODO(), obj, opts)
}
@ -66,7 +66,15 @@ $ velero completion fish > ~/.config/fish/completions/velero.fish
    case "bash":
        cmd.Root().GenBashCompletion(os.Stdout)
    case "zsh":
        cmd.Root().GenZshCompletion(os.Stdout)
        // fix #4912
        // cobra does not support the zsh completion output used by the source command,
        // according to https://github.com/spf13/cobra/issues/1529.
        // We need to append the compdef header manually to support it.
        zshHead := "#compdef velero\ncompdef _velero velero\n"
        out := os.Stdout
        out.Write([]byte(zshHead))

        cmd.Root().GenZshCompletion(out)
    case "fish":
        cmd.Root().GenFishCompletion(os.Stdout, true)
    default:
@ -85,6 +85,8 @@ type CreateOptions struct {
    ExistingResourcePolicy string
    IncludeResources flag.StringArray
    ExcludeResources flag.StringArray
    StatusIncludeResources flag.StringArray
    StatusExcludeResources flag.StringArray
    NamespaceMappings flag.Map
    Selector flag.LabelSelector
    IncludeClusterResources flag.OptionalBool

@ -115,6 +117,8 @@ func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
    flags.Var(&o.IncludeResources, "include-resources", "Resources to include in the restore, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources).")
    flags.Var(&o.ExcludeResources, "exclude-resources", "Resources to exclude from the restore, formatted as resource.group, such as storageclasses.storage.k8s.io.")
    flags.StringVar(&o.ExistingResourcePolicy, "existing-resource-policy", "", "Restore Policy to be used during the restore workflow, can be - none or update")
    flags.Var(&o.StatusIncludeResources, "status-include-resources", "Resources to include in the restore status, formatted as resource.group, such as storageclasses.storage.k8s.io.")
    flags.Var(&o.StatusExcludeResources, "status-exclude-resources", "Resources to exclude from the restore status, formatted as resource.group, such as storageclasses.storage.k8s.io.")
    flags.VarP(&o.Selector, "selector", "l", "Only restore resources matching this label selector.")
    f := flags.VarPF(&o.RestoreVolumes, "restore-volumes", "", "Whether to restore volumes from snapshots.")
    // this allows the user to just specify "--restore-volumes" as shorthand for "--restore-volumes=true"

@ -279,6 +283,13 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
        },
    }

    if len([]string(o.StatusIncludeResources)) > 0 {
        restore.Spec.RestoreStatus = &api.RestoreStatusSpec{
            IncludedResources: o.StatusIncludeResources,
            ExcludedResources: o.StatusExcludeResources,
        }
    }

    if printed, err := output.PrintWithFormat(c, restore); printed || err != nil {
        return err
    }
@ -206,6 +206,20 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
        req.Restore.Spec.ExcludedResources,
    )

    // Get resource status includes-excludes. Defaults to excluding all resources.
    restoreStatusIncludesExcludes := collections.GetResourceIncludesExcludes(
        kr.discoveryHelper,
        []string{},
        []string{"*"},
    )
    if req.Restore.Spec.RestoreStatus != nil {
        restoreStatusIncludesExcludes = collections.GetResourceIncludesExcludes(
            kr.discoveryHelper,
            req.Restore.Spec.RestoreStatus.IncludedResources,
            req.Restore.Spec.RestoreStatus.ExcludedResources,
        )
    }

    // Get namespace includes-excludes.
    namespaceIncludesExcludes := collections.NewIncludesExcludes().
        Includes(req.Restore.Spec.IncludedNamespaces...).
@ -268,83 +282,85 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
    }

    restoreCtx := &restoreContext{
        backup: req.Backup,
        backupReader: req.BackupReader,
        restore: req.Restore,
        resourceIncludesExcludes: resourceIncludesExcludes,
        namespaceIncludesExcludes: namespaceIncludesExcludes,
        chosenGrpVersToRestore: make(map[string]ChosenGroupVersion),
        selector: selector,
        OrSelectors: OrSelectors,
        log: req.Log,
        dynamicFactory: kr.dynamicFactory,
        fileSystem: kr.fileSystem,
        namespaceClient: kr.namespaceClient,
        restoreItemActions: resolvedActions,
        itemSnapshotterActions: resolvedItemSnapshotterActions,
        volumeSnapshotterGetter: volumeSnapshotterGetter,
        resticRestorer: resticRestorer,
        resticErrs: make(chan error),
        pvsToProvision: sets.NewString(),
        pvRestorer: pvRestorer,
        volumeSnapshots: req.VolumeSnapshots,
        podVolumeBackups: req.PodVolumeBackups,
        resourceTerminatingTimeout: kr.resourceTerminatingTimeout,
        resourceClients: make(map[resourceClientKey]client.Dynamic),
        restoredItems: make(map[velero.ResourceIdentifier]struct{}),
        renamedPVs: make(map[string]string),
        pvRenamer: kr.pvRenamer,
        discoveryHelper: kr.discoveryHelper,
        resourcePriorities: kr.resourcePriorities,
        resourceRestoreHooks: resourceRestoreHooks,
        hooksErrs: make(chan error),
        waitExecHookHandler: waitExecHookHandler,
        hooksContext: hooksCtx,
        hooksCancelFunc: hooksCancelFunc,
        restoreClient: kr.restoreClient,
        backup: req.Backup,
        backupReader: req.BackupReader,
        restore: req.Restore,
        resourceIncludesExcludes: resourceIncludesExcludes,
        resourceStatusIncludesExcludes: restoreStatusIncludesExcludes,
        namespaceIncludesExcludes: namespaceIncludesExcludes,
        chosenGrpVersToRestore: make(map[string]ChosenGroupVersion),
        selector: selector,
        OrSelectors: OrSelectors,
        log: req.Log,
        dynamicFactory: kr.dynamicFactory,
        fileSystem: kr.fileSystem,
        namespaceClient: kr.namespaceClient,
        restoreItemActions: resolvedActions,
        itemSnapshotterActions: resolvedItemSnapshotterActions,
        volumeSnapshotterGetter: volumeSnapshotterGetter,
        resticRestorer: resticRestorer,
        resticErrs: make(chan error),
        pvsToProvision: sets.NewString(),
        pvRestorer: pvRestorer,
        volumeSnapshots: req.VolumeSnapshots,
        podVolumeBackups: req.PodVolumeBackups,
        resourceTerminatingTimeout: kr.resourceTerminatingTimeout,
        resourceClients: make(map[resourceClientKey]client.Dynamic),
        restoredItems: make(map[velero.ResourceIdentifier]struct{}),
        renamedPVs: make(map[string]string),
        pvRenamer: kr.pvRenamer,
        discoveryHelper: kr.discoveryHelper,
        resourcePriorities: kr.resourcePriorities,
        resourceRestoreHooks: resourceRestoreHooks,
        hooksErrs: make(chan error),
        waitExecHookHandler: waitExecHookHandler,
        hooksContext: hooksCtx,
        hooksCancelFunc: hooksCancelFunc,
        restoreClient: kr.restoreClient,
    }

    return restoreCtx.execute()
}

type restoreContext struct {
    backup *velerov1api.Backup
    backupReader io.Reader
    restore *velerov1api.Restore
    restoreDir string
    restoreClient velerov1client.RestoresGetter
    resourceIncludesExcludes *collections.IncludesExcludes
    namespaceIncludesExcludes *collections.IncludesExcludes
    chosenGrpVersToRestore map[string]ChosenGroupVersion
    selector labels.Selector
    OrSelectors []labels.Selector
    log logrus.FieldLogger
    dynamicFactory client.DynamicFactory
    fileSystem filesystem.Interface
    namespaceClient corev1.NamespaceInterface
    restoreItemActions []framework.RestoreItemResolvedAction
    itemSnapshotterActions []framework.ItemSnapshotterResolvedAction
    volumeSnapshotterGetter VolumeSnapshotterGetter
    resticRestorer restic.Restorer
    resticWaitGroup sync.WaitGroup
    resticErrs chan error
    pvsToProvision sets.String
    pvRestorer PVRestorer
    volumeSnapshots []*volume.Snapshot
    podVolumeBackups []*velerov1api.PodVolumeBackup
    resourceTerminatingTimeout time.Duration
    resourceClients map[resourceClientKey]client.Dynamic
    restoredItems map[velero.ResourceIdentifier]struct{}
    renamedPVs map[string]string
    pvRenamer func(string) (string, error)
    discoveryHelper discovery.Helper
    resourcePriorities []string
    hooksWaitGroup sync.WaitGroup
    hooksErrs chan error
    resourceRestoreHooks []hook.ResourceRestoreHook
    waitExecHookHandler hook.WaitExecHookHandler
    hooksContext go_context.Context
    hooksCancelFunc go_context.CancelFunc
    backup *velerov1api.Backup
    backupReader io.Reader
    restore *velerov1api.Restore
    restoreDir string
    restoreClient velerov1client.RestoresGetter
    resourceIncludesExcludes *collections.IncludesExcludes
    resourceStatusIncludesExcludes *collections.IncludesExcludes
    namespaceIncludesExcludes *collections.IncludesExcludes
    chosenGrpVersToRestore map[string]ChosenGroupVersion
    selector labels.Selector
    OrSelectors []labels.Selector
    log logrus.FieldLogger
    dynamicFactory client.DynamicFactory
    fileSystem filesystem.Interface
    namespaceClient corev1.NamespaceInterface
    restoreItemActions []framework.RestoreItemResolvedAction
    itemSnapshotterActions []framework.ItemSnapshotterResolvedAction
    volumeSnapshotterGetter VolumeSnapshotterGetter
    resticRestorer restic.Restorer
    resticWaitGroup sync.WaitGroup
    resticErrs chan error
    pvsToProvision sets.String
    pvRestorer PVRestorer
    volumeSnapshots []*volume.Snapshot
    podVolumeBackups []*velerov1api.PodVolumeBackup
    resourceTerminatingTimeout time.Duration
    resourceClients map[resourceClientKey]client.Dynamic
    restoredItems map[velero.ResourceIdentifier]struct{}
    renamedPVs map[string]string
    pvRenamer func(string) (string, error)
    discoveryHelper discovery.Helper
    resourcePriorities []string
    hooksWaitGroup sync.WaitGroup
    hooksErrs chan error
    resourceRestoreHooks []hook.ResourceRestoreHook
    waitExecHookHandler hook.WaitExecHookHandler
    hooksContext go_context.Context
    hooksCancelFunc go_context.CancelFunc
}

type resourceClientKey struct {
@ -1111,19 +1127,21 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
        }
    }

    objStatus, statusFieldExists, statusFieldErr := unstructured.NestedFieldCopy(obj.Object, "status")
    // Clear out non-core metadata fields and status.
    if obj, err = resetMetadataAndStatus(obj); err != nil {
        errs.Add(namespace, err)
        return warnings, errs
    }

    ctx.log.Infof("restore status includes excludes: %+v", ctx.resourceStatusIncludesExcludes)

    for _, action := range ctx.getApplicableActions(groupResource, namespace) {
        if !action.Selector.Matches(labels.Set(obj.GetLabels())) {
            continue
        }

        ctx.log.Infof("Executing item action for %v", &groupResource)

        executeOutput, err := action.RestoreItemAction.Execute(&velero.RestoreItemActionExecuteInput{
            Item: obj,
            ItemFromBackup: itemFromBackup,

@ -1344,6 +1362,29 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
        return warnings, errs
    }

    shouldRestoreStatus := ctx.resourceStatusIncludesExcludes.ShouldInclude(groupResource.String())
    if shouldRestoreStatus && statusFieldErr != nil {
        err := fmt.Errorf("could not get status to be restored %s: %v", kube.NamespaceAndName(obj), statusFieldErr)
        ctx.log.Errorf(err.Error())
        errs.Add(namespace, err)
        return warnings, errs
    }
    // If the status should be restored, run an UpdateStatus.
    if statusFieldExists && shouldRestoreStatus {
        if err := unstructured.SetNestedField(obj.Object, objStatus, "status"); err != nil {
            ctx.log.Errorf("could not set status field %s: %v", kube.NamespaceAndName(obj), err)
            errs.Add(namespace, err)
            return warnings, errs
        }
        obj.SetResourceVersion(createdObj.GetResourceVersion())
        updated, err := resourceClient.UpdateStatus(obj, metav1.UpdateOptions{})
        if err != nil {
            warnings.Add(namespace, err)
        } else {
            createdObj = updated
        }
    }

    if groupResource == kuberesource.Pods {
        pod := new(v1.Pod)
        if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pod); err != nil {

@ -1631,7 +1672,7 @@ func resetVolumeBindingInfo(obj *unstructured.Unstructured) *unstructured.Unstru
    return obj
}

func resetMetadataAndStatus(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
func resetMetadata(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
    res, ok := obj.Object["metadata"]
    if !ok {
        return nil, errors.New("metadata not found")

@ -1649,9 +1690,19 @@
        }
    }

    // Never restore status
    delete(obj.UnstructuredContent(), "status")
    return obj, nil
}

func resetStatus(obj *unstructured.Unstructured) {
    unstructured.RemoveNestedField(obj.UnstructuredContent(), "status")
}

func resetMetadataAndStatus(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
    _, err := resetMetadata(obj)
    if err != nil {
        return nil, err
    }
    resetStatus(obj)
    return obj, nil
}
@ -1867,6 +1867,7 @@ func assertRestoredItems(t *testing.T, h *harness, want []*test.APIResource) {
        // empty in the structured objects. Remove them to make comparison easier.
        unstructured.RemoveNestedField(want.Object, "metadata", "creationTimestamp")
        unstructured.RemoveNestedField(want.Object, "status")
        unstructured.RemoveNestedField(res.Object, "status")

        assert.Equal(t, want, res)
    }

@ -2805,7 +2806,7 @@ func TestRestoreWithRestic(t *testing.T) {
        }
    }

func TestResetMetadataAndStatus(t *testing.T) {
func TestResetMetadata(t *testing.T) {
    tests := []struct {
        name string
        obj *unstructured.Unstructured

@ -2824,20 +2825,46 @@ func TestResetMetadataAndStatus(t *testing.T) {
            expectedRes: NewTestUnstructured().WithMetadata("name", "namespace", "labels", "annotations").Unstructured,
        },
        {
            name: "don't keep status",
            name: "keep status",
            obj: NewTestUnstructured().WithMetadata().WithStatus().Unstructured,
            expectedErr: false,
            expectedRes: NewTestUnstructured().WithMetadata().WithStatus().Unstructured,
        },
    }

    for _, test := range tests {
        t.Run(test.name, func(t *testing.T) {
            res, err := resetMetadata(test.obj)

            if assert.Equal(t, test.expectedErr, err != nil) {
                assert.Equal(t, test.expectedRes, res)
            }
        })
    }
}

func TestResetStatus(t *testing.T) {
    tests := []struct {
        name string
        obj *unstructured.Unstructured
        expectedRes *unstructured.Unstructured
    }{
        {
            name: "no status don't cause error",
            obj: &unstructured.Unstructured{},
            expectedRes: &unstructured.Unstructured{},
        },
        {
            name: "remove status",
            obj: NewTestUnstructured().WithMetadata().WithStatus().Unstructured,
            expectedRes: NewTestUnstructured().WithMetadata().Unstructured,
        },
    }

    for _, test := range tests {
        t.Run(test.name, func(t *testing.T) {
            res, err := resetMetadataAndStatus(test.obj)

            if assert.Equal(t, test.expectedErr, err != nil) {
                assert.Equal(t, test.expectedRes, res)
            }
            resetStatus(test.obj)
            assert.Equal(t, test.expectedRes, test.obj)
        })
    }
}
@ -72,3 +72,8 @@ func (c *FakeDynamicClient) Delete(name string, opts metav1.DeleteOptions) error
    args := c.Called(name, opts)
    return args.Error(1)
}

func (c *FakeDynamicClient) UpdateStatus(obj *unstructured.Unstructured, opts metav1.UpdateOptions) (*unstructured.Unstructured, error) {
    args := c.Called(obj, opts)
    return args.Get(0).(*unstructured.Unstructured), args.Error(1)
}
@ -46,6 +46,21 @@ spec:
  # or fully-qualified. Optional.
  excludedResources:
  - storageclasses.storage.k8s.io

  # restoreStatus selects resources to restore not only the specification, but
  # the status of the manifest. This is especially useful for CRDs that maintain
  # external references. By default, it excludes all resources.
  restoreStatus:
    # Array of resources to include in the restore status. Just like above,
    # resources may be shortcuts (for example 'po' for 'pods') or fully-qualified.
    # If unspecified, no resources are included. Optional.
    includedResources:
    - workflows
    # Array of resources to exclude from the restore status. Resources may be
    # shortcuts (for example 'po' for 'pods') or fully-qualified.
    # If unspecified, all resources are excluded. Optional.
    excludedResources: []

  # Whether or not to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
@ -25,11 +25,9 @@ If you've already run `velero install` without the `--use-restic` flag, you can

## Default Pod Volume backup to restic

By default, `velero install` does not enable use of restic to take backups of all pod volumes. An annotation has to be applied on every pod which contains volumes to be backed up by restic.
By default, `velero install` does not enable the use of restic to take backups of all pod volumes. You must apply an [annotation](restic.md/#using-opt-in-pod-volume-backup) to every pod which contains volumes for Velero to use restic for the backup.

To backup all pod volumes using restic without having to apply annotation on the pod, run the `velero install` command with the `--default-volumes-to-restic` flag.

Using this flag requires restic integration to be enabled with the `--use-restic` flag. Please refer to the [restic integration][3] page for more information.
If you are planning to only use restic for volume backups, you can run the `velero install` command with the `--default-volumes-to-restic` flag. This will default all pod volume backups to use restic without having to apply annotations to pods. Note that when this flag is set during install, Velero will always try to use restic to perform the backup, even if you want an individual backup to use volume snapshots by setting the `--snapshot-volumes` flag in the `backup create` command. Alternatively, you can set `--default-volumes-to-restic` on an individual backup to make sure Velero uses restic for each volume being backed up.
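
A minimal sketch of the commands described above (the backup name and the bracketed placeholder are illustrative, not literal flags):

```
# Enable restic and default all pod volume backups to restic at install time
velero install --use-restic --default-volumes-to-restic [other install flags]

# Or default a single backup to restic without changing the install-time behavior
velero backup create nginx-backup --default-volumes-to-restic
```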

## Enable features
@ -173,6 +171,26 @@ However, `velero install` only supports configuring at most one backup storage l

To configure additional locations after running `velero install`, use the `velero backup-location create` and/or `velero snapshot-location create` commands along with provider-specific configuration. Use the `--help` flag on each of these commands for more details.

### Set default backup storage location or volume snapshot locations

When performing backups, Velero needs to know where to back up your data. This means that if you configure multiple locations, you must specify the location Velero should use each time you run `velero backup create`, or you can set a default backup storage location or default volume snapshot locations. If you only have one backup storage location or volume snapshot location set for a provider, Velero will automatically use that location as the default.

Set a default backup storage location by passing the `--default` flag when running `velero backup-location create`.

```
velero backup-location create backups-primary \
  --provider aws \
  --bucket velero-backups \
  --config region=us-east-1 \
  --default
```

You can set a default volume snapshot location for each of your volume snapshot providers using the `--default-volume-snapshot-locations` flag on the `velero server` command.

```
velero server --default-volume-snapshot-locations="<PROVIDER-NAME>:<LOCATION-NAME>,<PROVIDER2-NAME>:<LOCATION2-NAME>"
```

## Do not configure a backup storage location during install

If you need to install Velero without a default backup storage location (without specifying `--bucket` or `--provider`), the `--no-default-backup-location` flag is required for confirmation.
@ -41,6 +41,8 @@ This configuration design enables a number of different use cases, including:

- Velero's compression for object metadata is limited, using Golang's tar implementation. In most instances, Kubernetes objects are limited to 1.5MB in size, but many don't approach that, meaning that compression may not be necessary. Note that restic has not yet implemented compression, but does have de-duplication capabilities.

- If you have [multiple](customize-installation.md/#configure-more-than-one-storage-location-for-backups-or-volume-snapshots) `VolumeSnapshotLocations` configured for a provider, you must always specify a valid `VolumeSnapshotLocation` when creating a backup, even if you are using [Restic](restic.md) for volume backups. You can optionally decide to set the [`--default-volume-snapshot-locations`](customize-locations.md#set-default-backup-storage-location-or-volume-snapshot-locations) flag on the `velero server` command, which lists the default `VolumeSnapshotLocation` Velero should use if a `VolumeSnapshotLocation` is not specified when creating a backup. If you only have one `VolumeSnapshotLocation` for a provider, Velero will automatically use that location as the default.
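
For example, when a provider has more than one `VolumeSnapshotLocation` configured and no default is set, the location has to be named explicitly at backup time (a sketch; `backup-east` and `aws-east-2` are hypothetical names):

```
velero backup create backup-east --volume-snapshot-locations aws-east-2
```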

## Examples

Let's look at some examples of how you can use this configuration mechanism to address some common use cases:
@ -3,52 +3,91 @@ title: "Cluster migration"
layout: docs
---

Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location.
Velero's backup and restore capabilities make it a valuable tool for migrating your data between clusters. Cluster migration with Velero is based on Velero's [object storage sync](how-velero-works.md#object-storage-sync) functionality, which is responsible for syncing Velero resources from your designated object storage to your cluster. This means that to perform cluster migration with Velero you must point each Velero instance running on clusters involved with the migration to the same cloud object storage location.

## Migration Limitations
This page outlines a cluster migration scenario and some common configurations you will need to start using Velero to begin migrating data.

## Before migrating your cluster

Before migrating you should consider the following:

* Velero does not natively support the migration of persistent volumes snapshots across cloud providers. If you would like to migrate volume data between cloud platforms, enable [restic][2], which will backup volume contents at the filesystem level.
* Velero does not natively support the migration of persistent volume snapshots across cloud providers. If you would like to migrate volume data between cloud platforms, enable [restic](restic.md), which will back up volume contents at the filesystem level.
* Velero doesn't support restoring into a cluster with a lower Kubernetes version than where the backup was taken.
* Migrating workloads across clusters that are not running the same version of Kubernetes might be possible, but some factors need to be considered before migration, including the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core/native API groups, migrating with Velero will not be possible without first updating the impacted custom resources. For more information about API group versions, please see [EnableAPIGroupVersions](enable-api-group-versions-feature.md).
* The Velero plugin for AWS and Azure does not support migrating data between regions. If you need to do this, you must use [restic][2].
* The Velero plugins for AWS and Azure do not support migrating data between regions. If you need to do this, you must use [restic](restic.md).


## Migration Scenario

This scenario assumes that your clusters are hosted by the same cloud provider
This scenario steps through the migration of resources from Cluster 1 to Cluster 2. In this scenario, both clusters are using the same cloud provider, AWS, and Velero's [AWS plugin](https://github.com/vmware-tanzu/velero-plugin-for-aws).

1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
1. On Cluster 1, make sure Velero is installed and points to an object storage location using the `--bucket` flag.

```
velero install --provider aws --image velero/velero:v1.8.0 --plugins velero/velero-plugin-for-aws:v1.4.0 --bucket velero-migration-demo --secret-file xxxx/aws-credentials-cluster1 --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2
```

During installation, Velero creates a Backup Storage Location called `default` inside the `--bucket` you provided in the install command, in this case `velero-migration-demo`. This is the location that Velero will use to store backups. Running `velero backup-location get` will show the backup location of Cluster 1.

```
velero backup-location get
NAME      PROVIDER   BUCKET/PREFIX           PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        velero-migration-demo   Available   2022-05-13 13:41:30 +0800 CST   ReadWrite     true
```

1. Still on Cluster 1, make sure you have a backup of your cluster. Replace `<BACKUP-NAME>` with a name for your backup.

```
velero backup create <BACKUP-NAME>
```

The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See [how velero works][1] for more information about backup expiry.
Alternatively, you can create a [scheduled backup](https://velero.io/docs/main/backup-reference/#schedule-a-backup) of your data with the Velero `schedule` operation. This is the recommended way to make sure your data is automatically backed up according to the schedule you define.

1. *(Cluster 2)* Configure `BackupStorageLocations` and `VolumeSnapshotLocations`, pointing to the locations used by *Cluster 1*, using `velero backup-location create` and `velero snapshot-location create`. Make sure to configure the `BackupStorageLocations` as read-only
by using the `--access-mode=ReadOnly` flag for `velero backup-location create`.
The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See [how velero works](how-velero-works.md#set-a-backup-to-expire) for more information about backup expiry.
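
For example, a rough sketch of shortening the retention period to one week for a single backup (the duration value is illustrative):

```
velero backup create <BACKUP-NAME> --ttl 168h0m0s
```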

1. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.
1. On Cluster 2, make sure that Velero is installed. Note that the install command below has the same `region` and `--bucket` location as the install command for Cluster 1. The Velero plugin for AWS does not support migrating data between regions.

```
velero install --provider aws --image velero/velero:v1.8.0 --plugins velero/velero-plugin-for-aws:v1.4.0 --bucket velero-migration-demo --secret-file xxxx/aws-credentials-cluster2 --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2
```

Alternatively, you could configure `BackupStorageLocations` and `VolumeSnapshotLocations` after installing Velero on Cluster 2, pointing to the `--bucket` location and `region` used by Cluster 1. To do this you can use the `velero backup-location create` and `velero snapshot-location create` commands.

```
velero backup-location create bsl --provider aws --bucket velero-migration-demo --config region=us-east-2 --access-mode=ReadOnly
```

It's recommended that you configure the `BackupStorageLocations` as read-only
by using the `--access-mode=ReadOnly` flag for `velero backup-location create`. This will make sure that the backup is not deleted from the object store by mistake during the restore. See `velero backup-location --help` for more information about the available flags for this command.

```
velero snapshot-location create vsl --provider aws --config region=us-east-2
```
See `velero snapshot-location --help` for more information about the available flags for this command.

1. Continuing on Cluster 2, make sure that the Velero Backup object created on Cluster 1 is available. `<BACKUP-NAME>` should be the same name used to create your backup of Cluster 1.

```
velero backup describe <BACKUP-NAME>
```

**Note:** The default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
Velero resources are [synchronized](how-velero-works.md#object-storage-sync) with the backup files in object storage. This means that the Velero resources created by Cluster 1's backup will be synced to Cluster 2 through the shared Backup Storage Location. Once the sync occurs, you will be able to access the backup from Cluster 1 on Cluster 2 using Velero commands. The default sync interval is 1 minute, so you may need to wait before checking for the backup's availability on Cluster 2. You can configure this interval with the `--backup-sync-period` flag to the Velero server on Cluster 2.

1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
1. On Cluster 2, once you have confirmed that the right backup is available, you can restore everything to Cluster 2.

```
velero restore create --from-backup <BACKUP-NAME>
```

Make sure `<BACKUP-NAME>` is the same backup name from Cluster 1.

## Verify Both Clusters

Check that the second cluster is behaving as expected:
Check that Cluster 2 is behaving as expected:

1. *(Cluster 2)* Run:
1. On Cluster 2, run:

```
velero restore get

@ -60,9 +99,6 @@ Check that the second cluster is behaving as expected:
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
```

Your data that was backed up from Cluster 1 should now be available on Cluster 2.

If you encounter issues, make sure that Velero is running in the same namespace in both clusters.

[1]: how-velero-works.md#set-a-backup-to-expire
[2]: restic.md
@ -251,7 +251,7 @@ Instructions to back up using this approach are as follows:

### Using opt-in pod volume backup

Velero, by default, uses this approach to discover pod volumes that need to be backed up using Restic, where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.
Velero, by default, uses this approach to discover pod volumes that need to be backed up using Restic. Every pod containing a volume to be backed up using Restic must be annotated with the volume's name using the `backup.velero.io/backup-volumes` annotation.
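
For illustration, such an annotation could be applied like this (the namespace, pod, and volume names are hypothetical):

```
kubectl -n sample-ns annotate pod sample-pod backup.velero.io/backup-volumes=sample-volume
```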

Instructions to back up using this approach are as follows:
@ -527,6 +527,8 @@ within each restored volume, under `.velero`, whose name is the UID of the Veler
1. Once all such files are found, the init container's process terminates successfully and the pod moves
on to running other init containers/the main containers.

Velero won't restore a resource if that resource is scaled to 0 and already exists in the cluster. If Velero restored the requested pods in this scenario, the Kubernetes reconciliation loops that manage the resource would delete the running pods because it is scaled to 0. Velero will be able to restore once the resource is scaled up and the pods are created and remain running.

## 3rd party controllers

### Monitor backup annotation
@ -46,3 +46,20 @@ Error 116 represents certificate required as seen here in [error codes](https://
Velero as a client does not include its certificate while performing the SSL handshake with the server.
From the [TLS 1.3 spec](https://tools.ietf.org/html/rfc8446), verifying the client certificate is optional on the server.
You will need to change this setting on the server to make it work.


## Skipping TLS verification

**Note:** The `--insecure-skip-tls-verify` flag is insecure, susceptible to man-in-the-middle attacks, and meant to help with testing and development scenarios in an on-premises environment. Using this flag in production is not recommended.

Velero provides a way for you to skip TLS verification on the object store when using the [AWS provider plugin](https://github.com/vmware-tanzu/velero-plugin-for-aws) or [Restic](restic.md) by passing the `--insecure-skip-tls-verify` flag with the following Velero commands:

* velero backup describe
* velero backup download
* velero backup logs
* velero restore describe
* velero restore log

If true, the object store's TLS certificate will not be checked for validity before Velero connects to the object store or Restic repo. You can permanently skip TLS verification for an object store by setting `Spec.Config.InsecureSkipTLSVerify` to true in the [BackupStorageLocation](api-types/backupstoragelocation.md) CRD.
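
For example, a sketch of setting this on a location named `default` in the `velero` namespace (the names are assumptions; adjust them to your installation):

```
kubectl -n velero patch backupstoragelocation default --type merge \
  -p '{"spec":{"config":{"insecureSkipTLSVerify":"true"}}}'
```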

Note that Velero's Restic integration uses Restic commands to do data transfer between the object store and Kubernetes cluster disks. This means that when you specify `--insecure-skip-tls-verify` in Velero operations that involve interacting with Restic, Velero will add the Restic global command parameter `--insecure-tls` to the Restic commands.