Fix instances of 'an Velero' typo

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
pull/1465/head
Nolan Brubaker 2019-05-09 16:34:25 -04:00
parent d4f9c62449
commit 363748667b
7 changed files with 7 additions and 7 deletions


@@ -95,7 +95,7 @@ az storage container create -n $BLOB_CONTAINER --public-access off --account-nam
 ## Create service principal
-To integrate Velero with Azure, you must create an Velero-specific [service principal][17].
+To integrate Velero with Azure, you must create a Velero-specific [service principal][17].
 1. Obtain your Azure Account Subscription ID and Tenant ID:


@@ -35,7 +35,7 @@ gsutil mb gs://$BUCKET/
 ## Create service account
-To integrate Velero with GCP, create an Velero-specific [Service Account][15]:
+To integrate Velero with GCP, create a Velero-specific [Service Account][15]:
 1. View your current config settings:


@@ -41,7 +41,7 @@ Several comments:
 3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
-4. Create an Velero-specific credentials file (`credentials-velero`) in your local directory:
+4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
 ```
 [default]


@@ -20,7 +20,7 @@ This configuration design enables a number of different use cases, including:
 - Velero only supports a single set of credentials *per provider*. It's not yet possible to use different credentials for different locations, if they're for the same provider.
-- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take an Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
+- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
 - Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important.


@@ -90,7 +90,7 @@ You're now ready to use Velero with restic.
 This annotation can also be provided in a pod template spec if you use a controller to manage your pods.
-1. Take an Velero backup:
+1. Take a Velero backup:
 ```bash
 velero backup create NAME OPTIONS...


@@ -41,7 +41,7 @@ kubectl edit deployment/velero -n velero
 ## Known issue with restoring LoadBalancer Service
-Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform an Velero restore.
+Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.
 Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See [the Kubernetes documentation about Services of Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).


@@ -8,7 +8,7 @@ While GitHub issues, milestones, and labels generally work pretty well, the Vele
 In our effort to minimize tooling while enabling product management insights, we have decided to use [ZenHub Open-Source](https://www.zenhub.com/blog/open-source/) to overlay product and project tracking on top of GitHub.
 ZenHub is a GitHub application that provides Kanban visualization, Epic tracking, fine-grained prioritization, and more. It's primary backing storage system is existing GitHub issues along with additional metadata stored in ZenHub's database.
-If you are an Velero user or Velero Developer, you do not _need_ to use ZenHub for your regular workflow (e.g to see open bug reports or feature requests, work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
+If you are a Velero user or Velero Developer, you do not _need_ to use ZenHub for your regular workflow (e.g to see open bug reports or feature requests, work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
 ## Using ZenHub