title | layout |
---|---|
Customize Velero Install | docs |
Plugins
During install, Velero requires that at least one plugin is added (with the --plugins flag). Please see the documentation under Plugins.
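For example, a minimal install using the AWS object store plugin might look like the following (the plugin image tag, bucket name, and credentials file are illustrative; substitute the plugin and settings for your provider):
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.9.0 \
    --bucket velero-backups \
    --secret-file ./credentials-velero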
Install in any namespace
Velero is installed in the velero namespace by default. However, you can install Velero in any namespace. See run in custom namespace for details.
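As a sketch (the namespace name below is hypothetical), pass the target namespace to velero install and to subsequent CLI commands with the global --namespace/-n flag, as described in the linked page:
velero install --namespace backup-system [other install flags]
velero backup get --namespace backup-system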
Use non-file-based identity mechanisms
By default, velero install expects a credentials file for your velero IAM account to be provided via the --secret-file flag.
If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the --no-secret flag instead of --secret-file.
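For example, on a GKE cluster using Workload Identity (the bucket name and plugin image tag are placeholders, and the Workload Identity bindings must already be in place), the install command can omit credentials entirely:
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.9.0 \
    --bucket velero-backups \
    --no-secret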
Enable file system backup
By default, velero install does not install Velero's File System Backup. To enable it, specify the --use-node-agent flag.
If you've already run velero install without the --use-node-agent flag, you can run the same command again, including the --use-node-agent flag, to add the file system backup to your existing install.
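A sketch, assuming an AWS-based install whose other flags are placeholders; running this against an existing install adds the node-agent to it:
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.9.0 \
    --bucket velero-backups \
    --secret-file ./credentials-velero \
    --use-node-agent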
CSI Snapshot Data Movement
Velero node-agent is required by CSI snapshot data movement when the Velero built-in data mover is used. By default, velero install does not install Velero's node-agent. To enable it, specify the --use-node-agent flag.
For some use cases, the Velero node-agent needs to run in privileged mode. For example, when backing up block volumes, the node-agent must be allowed to access the block device. To enable this, set the --privileged-node-agent flag on velero install.
If you've already run velero install without the --use-node-agent or --privileged-node-agent flag, you can run the same command again, including the --use-node-agent or --privileged-node-agent flag, to add CSI snapshot data movement to your existing install.
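For example, re-running the install with the node-agent enabled and running in privileged mode (the other flags are placeholders carried over from the original install):
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.9.0 \
    --bucket velero-backups \
    --secret-file ./credentials-velero \
    --use-node-agent \
    --privileged-node-agent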
Default Pod Volume backup to file system backup
By default, velero install does not enable the use of File System Backup (FSB) to take backups of all pod volumes. You must apply an annotation to every pod which contains volumes for Velero to use FSB for the backup.
If you are planning to only use FSB for volume backups, you can run the velero install command with the --default-volumes-to-fs-backup flag. This will default all pod volume backups to use FSB without having to apply annotations to pods. Note that when this flag is set during install, Velero will always try to use FSB to perform the backup, even if you request volume snapshots for an individual backup by setting the --snapshot-volumes flag in the backup create command. Alternatively, you can set --default-volumes-to-fs-backup on an individual backup to make sure Velero uses FSB for each volume being backed up.
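For example (the backup and namespace names are hypothetical), you can opt in to FSB for a single backup instead of setting the default at install time:
velero backup create nginx-backup \
    --include-namespaces nginx-example \
    --default-volumes-to-fs-backup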
Enable features
New features in Velero will be released as beta features behind feature flags which are not enabled by default. A full listing of Velero feature flags can be found here.
Enable server side features
Features on the Velero server can be enabled using the --features flag on the velero install command. This flag takes as its value a comma-separated list of feature flags to enable. As an example, CSI snapshotting of PVCs can be enabled using the EnableCSI feature flag in the velero install command as shown below:
velero install --features=EnableCSI
Another example is enabling the support of multiple API group versions, as documented at --features=EnableAPIGroupVersions.
Feature flags passed to velero install will be passed to the Velero deployment and also to the node-agent daemon set, if the --use-node-agent flag is used.
Similarly, features may be disabled by removing the corresponding feature flags from the --features flag.
Enabling and disabling feature flags will require modifying the Velero deployment and also the node-agent daemonset. This may be done from the CLI by uninstalling and re-installing Velero, or by editing the deploy/velero and daemonset/node-agent resources in-cluster.
$ kubectl -n velero edit deploy/velero
$ kubectl -n velero edit daemonset/node-agent
Enable client side features
For some features it may be necessary to use the --features flag with the Velero client. This may be done by passing --features on every command run using the Velero CLI, or by setting the features in the Velero client config file using the velero client config set command as shown below:
velero client config set features=feature1,feature2...
This stores the config in a file at $HOME/.config/velero/config.json.
All client-side feature flags may be disabled using the command below:
velero client config set features=
Colored CLI output
The Velero CLI uses colored output for some commands, such as velero describe. If the environment in which Velero is run doesn't support colored output, colored output will be automatically disabled. However, you can manually disable colors with the config file:
velero client config set colorized=false
Note that if you specify --colorized=true as a CLI option, it will override the config file setting.
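For example, with colors disabled in the config file, a single command can still force colored output (the backup name is hypothetical):
velero client config set colorized=false
velero backup describe nginx-backup --colorized=true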
Customize resource requests and limits
At installation, Velero sets default resource requests and limits for the Velero pod and the node-agent pod, if you are using the File System Backup.
{{< table caption="Velero Customize resource requests and limits defaults" >}}
Setting | Velero pod defaults | node-agent pod defaults |
---|---|---|
CPU request | 500m | 500m |
Memory requests | 128Mi | 512Mi |
CPU limit | 1000m (1 CPU) | 1000m (1 CPU) |
Memory limit | 512Mi | 1024Mi |
{{< /table >}}
Depending on the cluster resources, you may need to increase these defaults. Through testing, the Velero maintainers have found these defaults work well when backing up and restoring 1,000 or fewer resources with a total file size of 100GB or below. If the resources you are planning to back up or restore exceed this, you will need to increase the CPU or memory resources available to Velero. In general, the maintainers' testing found that backup operations needed more CPU and memory resources but were less time-consuming than restore operations, when comparing backing up and restoring the same amount of data. The exact CPU and memory limits you will need depend on the scale of the files and directories of your resources and your hardware. It's recommended that you perform your own testing to find the best resource limits for your clusters and resources.
You may need to increase the resource limits if you are using File System Backup; see the details in File System Backup.
Install with custom resource requests and limits
You can customize these resource requests and limits when you first install using the velero install CLI command.
velero install \
--velero-pod-cpu-request <CPU_REQUEST> \
--velero-pod-mem-request <MEMORY_REQUEST> \
--velero-pod-cpu-limit <CPU_LIMIT> \
--velero-pod-mem-limit <MEMORY_LIMIT> \
[--use-node-agent] \
[--default-volumes-to-fs-backup] \
[--node-agent-pod-cpu-request <CPU_REQUEST>] \
[--node-agent-pod-mem-request <MEMORY_REQUEST>] \
[--node-agent-pod-cpu-limit <CPU_LIMIT>] \
[--node-agent-pod-mem-limit <MEMORY_LIMIT>]
Update resource requests and limits after install
After installation you can adjust the resource requests and limits in the Velero Deployment spec or node-agent DaemonSet spec, if you are using the File System Backup.
Velero pod
Update the spec.template.spec.containers.resources.limits and spec.template.spec.containers.resources.requests values in the Velero deployment.
kubectl patch deployment velero -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'
node-agent pod
Update the spec.template.spec.containers.resources.limits and spec.template.spec.containers.resources.requests values in the node-agent DaemonSet spec.
kubectl patch daemonset node-agent -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "node-agent", "resources": {"limits":{"cpu": "1", "memory": "1024Mi"}, "requests": {"cpu": "1", "memory": "512Mi"}}}]}}}}'
Additionally, you may want to update the default File System Backup operation timeout (default 240 minutes) to allow larger backups more time to complete. You can adjust this timeout by adding the - --fs-backup-timeout argument to the Velero Deployment spec.
NOTE: Changes made to this timeout value will revert to the default value if you re-run the Velero install command.
- Open the Velero Deployment spec.
  kubectl edit deploy velero -n velero
- Add - --fs-backup-timeout to spec.template.spec.containers:
  spec:
    template:
      spec:
        containers:
        - args:
          - --fs-backup-timeout=240m
Configure more than one storage location for backups or volume snapshots
Velero supports any number of backup storage locations and volume snapshot locations. For more details, see about locations.
However, velero install only supports configuring at most one backup storage location and one volume snapshot location.
To configure additional locations after running velero install, use the velero backup-location create and/or velero snapshot-location create commands along with provider-specific configuration. Use the --help flag on each of these commands for more details.
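As an illustrative sketch (the location name, bucket, and region are placeholders), an additional backup storage location in a second region could be added like this:
velero backup-location create backups-secondary \
    --provider aws \
    --bucket velero-backups-secondary \
    --config region=us-west-2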
Set default backup storage location or volume snapshot locations
When performing backups, Velero needs to know where to back up your data. This means that if you configure multiple locations, you must specify the location Velero should use each time you run velero backup create, or you can set a default backup storage location or default volume snapshot locations. If you only have one backup storage location or volume snapshot location set for a provider, Velero will automatically use that location as the default.
Set default backup storage location
Currently, Velero can set the default backup storage location in the following ways:
- First way: Set a default backup storage location by passing a --default flag when running velero backup-location create.
  velero backup-location create backups-primary \
      --provider aws \
      --bucket velero-backups \
      --config region=us-east-1 \
      --default
- Second way: Set a default backup storage location by passing a --default flag when running velero backup-location set.
  velero backup-location set backups-primary --default
  You can also remove the default backup storage location with this command; for example:
  velero backup-location set backups-primary --default=false
- Third way: Set a default backup storage location by passing the --default-backup-storage-location flag on the velero server command.
  velero server --default-backup-storage-location backups-primary
Note: Only one default backup storage location can exist at a time; setting two default backup storage locations at the same time is not allowed. The priorities among these three ways are as follows:
- If the velero server has specified a default backup storage location (call it A):
  - If backup storage location A exists, setting a new default backup storage location is not allowed.
  - If A does not exist, and you use the velero backup-location set or velero backup-location create --default command:
    - It succeeds if no default backup storage location exists.
    - It fails if a default backup storage location already exists. (Remove the other default backup storage location first.)
- If the velero server has not specified a default backup storage location, and you use the velero backup-location set or velero backup-location create --default command:
  - It succeeds if no default backup storage location exists.
  - It fails if a default backup storage location already exists. (Remove the other default backup storage location first.)
Set default volume snapshot location
You can set a default volume snapshot location for each of your volume snapshot providers using the --default-volume-snapshot-locations flag on the velero server command.
velero server --default-volume-snapshot-locations="<PROVIDER-NAME>:<LOCATION-NAME>,<PROVIDER2-NAME>:<LOCATION2-NAME>"
Do not configure a backup storage location during install
If you need to install Velero without a default backup storage location (without specifying --bucket or --provider), the --no-default-backup-location flag is required for confirmation.
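A minimal sketch of such an install (the plugin image tag is illustrative), deferring the backup storage location, credentials, and volume snapshot configuration until later:
velero install \
    --plugins velero/velero-plugin-for-aws:v1.9.0 \
    --no-default-backup-location \
    --no-secret \
    --use-volume-snapshots=false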
Install an additional volume snapshot provider
Velero supports using different providers for volume snapshots than for object storage -- for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots.
However, velero install only supports configuring a single matching provider for both object storage and volume snapshots.
To use a different volume snapshot provider:
- Install the Velero server components by following the instructions for your object storage provider.
- Add your volume snapshot provider's plugin to Velero (look in your provider's documentation for the image name):
  velero plugin add <registry/image:version>
- Add a volume snapshot location for your provider, following your provider's documentation for configuration:
  velero snapshot-location create <NAME> \
      --provider <PROVIDER-NAME> \
      [--config <PROVIDER-CONFIG>]
Generate YAML only
By default, velero install generates and applies a customized set of Kubernetes configuration (YAML) to your cluster.
To generate the YAML without applying it to your cluster, use the --dry-run -o yaml flags.
This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc.
If you are installing Velero in Kubernetes 1.14.x or earlier, you need to use kubectl apply's --validate=false option when applying the generated configuration to your cluster. See issue 2077 and issue 2311 for more context.
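For example, to render the manifests to a file for review or a GitOps repository and apply them separately (the flags and file name are illustrative; add --validate=false to kubectl apply only on Kubernetes 1.14.x or earlier):
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.9.0 \
    --bucket velero-backups \
    --secret-file ./credentials-velero \
    --dry-run -o yaml > velero-install.yaml
kubectl apply -f velero-install.yaml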
Use a storage provider secured by a self-signed certificate
If you intend to use Velero with a storage provider that is secured by a self-signed certificate, you may need to instruct Velero to trust that certificate. See use Velero with a storage provider secured by a self-signed certificate for details.
Additional options
Run velero install --help or see the Helm chart documentation for the full set of installation options.
Optional Velero CLI configurations
Enabling shell autocompletion
The Velero CLI provides autocompletion support for Bash and Zsh, which can save you a lot of typing.
Below are the procedures to set up autocompletion for Bash (including the difference between Linux and macOS) and Zsh.
Bash on Linux
The Velero CLI completion script for Bash can be generated with the command velero completion bash. Sourcing the completion script in your shell enables velero autocompletion.
However, the completion script depends on bash-completion, which means that you have to install this software first (you can test if you have bash-completion already installed by running type _init_completion).
Install bash-completion
bash-completion is provided by many package managers (see here). You can install it with apt-get install bash-completion or yum install bash-completion, etc.
The above commands create /usr/share/bash-completion/bash_completion, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your ~/.bashrc file.
To find out, reload your shell and run type _init_completion. If the command succeeds, you're already set; otherwise, add the following to your ~/.bashrc file:
source /usr/share/bash-completion/bash_completion
Reload your shell and verify that bash-completion is correctly installed by typing type _init_completion.
Enable Velero CLI autocompletion for Bash on Linux
You now need to ensure that the Velero CLI completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
- Source the completion script in your ~/.bashrc file:
  echo 'source <(velero completion bash)' >>~/.bashrc
- Add the completion script to the /etc/bash_completion.d directory:
  velero completion bash >/etc/bash_completion.d/velero
- If you have an alias for velero, you can extend shell completion to work with that alias:
  echo 'alias v=velero' >>~/.bashrc
  echo 'complete -F __start_velero v' >>~/.bashrc
bash-completion sources all completion scripts in /etc/bash_completion.d.
Both approaches are equivalent. After reloading your shell, velero autocompletion should be working.
Bash on macOS
The Velero CLI completion script for Bash can be generated with velero completion bash. Sourcing this script in your shell enables velero completion.
However, the velero completion script depends on bash-completion, which you therefore have to install first.
There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The velero completion script doesn't work correctly with bash-completion v1 and Bash 3.2. It requires bash-completion v2 and Bash 4.1+. Thus, to be able to correctly use velero completion on macOS, you have to install and use Bash 4.1+ (instructions). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
Install bash-completion
As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case velero completion won't work).
You can test if you have bash-completion v2 already installed with type _init_completion. If not, you can install it with Homebrew:
brew install bash-completion@2
As stated in the output of this command, add the following to your ~/.bashrc file:
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
Reload your shell and verify that bash-completion v2 is correctly installed with type _init_completion.
Enable Velero CLI autocompletion for Bash on macOS
You now have to ensure that the velero completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
- Source the completion script in your ~/.bashrc file:
  echo 'source <(velero completion bash)' >>~/.bashrc
- Add the completion script to the /usr/local/etc/bash_completion.d directory:
  velero completion bash >/usr/local/etc/bash_completion.d/velero
- If you have an alias for velero, you can extend shell completion to work with that alias:
  echo 'alias v=velero' >>~/.bashrc
  echo 'complete -F __start_velero v' >>~/.bashrc
- If you installed velero with Homebrew (as explained above), then the velero completion script should already be in /usr/local/etc/bash_completion.d/velero. In that case, you don't need to do anything.
The Homebrew installation of bash-completion v2 sources all the files in the BASH_COMPLETION_COMPAT_DIR directory, which is why the latter two methods work.
In any case, after reloading your shell, velero completion should be working.
Autocompletion on Zsh
The velero completion script for Zsh can be generated with the command velero completion zsh. Sourcing the completion script in your shell enables velero autocompletion.
To do so in all your shell sessions, add the following to your ~/.zshrc file:
source <(velero completion zsh)
If you have an alias for velero, you can extend shell completion to work with that alias:
echo 'alias v=velero' >>~/.zshrc
echo 'complete -F __start_velero v' >>~/.zshrc
After reloading your shell, velero autocompletion should be working.
If you get an error like complete:13: command not found: compdef, then add the following to the beginning of your ~/.zshrc file:
autoload -Uz compinit
compinit