Fix various typos found by codespell (#3057)

By running the following command:

codespell -S .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico -L \
iam,aks,ist,bridget,ue
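
Here `-S` tells codespell which paths and globs to skip (VCS metadata and binary assets) and `-L` lists words that would otherwise be flagged but are intentional in this repository (for example `iam`, `aks`, and `ue`). A minimal sanity check along these lines, assuming the fixes are already applied, re-runs the same invocation and relies on codespell's non-zero exit status when misspellings remain; the quoting of the skip list and the trailing echo are illustrative additions, not part of the original command:

```bash
# Sketch: re-run the same check to confirm a clean tree after the fixes.
# The skip list is quoted here so the shell does not expand the globs itself.
codespell -S '.git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico' \
  -L iam,aks,ist,bridget,ue \
  && echo "no spelling issues found"
```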

Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
pull/3064/head
Mateusz Gozdek 2020-11-10 17:48:35 +01:00 committed by GitHub
parent dc6762a895
commit dbc83af77b
82 changed files with 117 additions and 116 deletions


@ -8,7 +8,7 @@
### Bug fixes
* If a Service is headless, retain ClusterIP = None when backing up and restoring.
* Use the specifed --label-selector when listing backups, schedules, and restores.
* Use the specified --label-selector when listing backups, schedules, and restores.
* Restore namespace mapping functionality that was accidentally broken in 0.5.0.
* Always include namespaces in the backup, regardless of the --include-cluster-resources setting.


@ -104,7 +104,7 @@
### Download
- https://github.com/heptio/ark/releases/tag/v0.9.3
### Bug Fixes
* Initalize Prometheus metrics when creating a new schedule (#689, @lemaral)
* Initialize Prometheus metrics when creating a new schedule (#689, @lemaral)
## v0.9.2


@ -75,7 +75,7 @@ Finally, thanks to testing by [Dylan Murray](https://github.com/dymurray) and [S
* Adds configurable CPU/memory requests and limits to the Velero Deployment generated by velero install. (#1678, @prydonius)
* Store restic PodVolumeBackups in obj storage & use that as source of truth like regular backups. (#1577, @carlisia)
* Update Velero Deployment to use apps/v1 API group. `velero install` and `velero plugin add/remove` commands will now require Kubernetes 1.9+ (#1673, @nrb)
* Respect the --kubecontext and --kubeconfig arugments for `velero install`. (#1656, @nrb)
* Respect the --kubecontext and --kubeconfig arguments for `velero install`. (#1656, @nrb)
* add plugin for updating PV & PVC storage classes on restore based on a config map (#1621, @skriss)
* Add restic support for CSI volumes (#1615, @nrb)
* bug fix: Fixed namespace usage with cli command 'version' (#1630, @jwmatthews)


@ -109,7 +109,7 @@ We fixed a large number of bugs and made some smaller usability improvements in
* bug fix: only prioritize restoring `replicasets.apps`, not `replicasets.extensions` (#2157, @skriss)
* bug fix: restore both `replicasets.apps` *and* `replicasets.extensions` before `deployments` (#2120, @skriss)
* bug fix: don't restore cluster-scoped resources when restoring specific namespaces and IncludeClusterResources is nil (#2118, @skriss)
* Enableing Velero to switch credentials (`AWS_PROFILE`) if multiple s3-compatible backupLocations are present (#2096, @dinesh)
* Enabling Velero to switch credentials (`AWS_PROFILE`) if multiple s3-compatible backupLocations are present (#2096, @dinesh)
* bug fix: deep-copy backup's labels when constructing snapshot tags, so the PV name isn't added as a label to the backup (#2075, @skriss)
* remove the `fsfreeze-pause` image being published from this repo; replace it with `ubuntu:bionic` in the nginx example app (#2068, @skriss)
* add support for a private registry with a custom port in a restic-helper image (#1999, @cognoz)


@ -0,0 +1 @@
Fixed various typos across codebase


@ -335,7 +335,7 @@ spec:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the Kind name and value is a list
of resource names separeted by commas. Each resource name has format
of resource names separated by commas. Each resource name has format
"namespace/resourcename". For cluster resources, simply use "resourcename".
nullable: true
type: object


@ -344,7 +344,7 @@ spec:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the Kind name and value is a
list of resource names separeted by commas. Each resource name
list of resource names separated by commas. Each resource name
has format "namespace/resourcename". For cluster resources, simply
use "resourcename".
nullable: true


@ -61,7 +61,7 @@ spec:
a Velero VolumeSnapshotLocation.
properties:
phase:
description: VolumeSnapshotLocationPhase is the lifecyle phase of a
description: VolumeSnapshotLocationPhase is the lifecycle phase of a
Velero VolumeSnapshotLocation.
enum:
- Available


@ -52,7 +52,7 @@ spec:
of a Velero VolumeSnapshotLocation.
properties:
phase:
description: VolumeSnapshotLocationPhase is the lifecyle phase of
description: VolumeSnapshotLocationPhase is the lifecycle phase of
a Velero VolumeSnapshotLocation.
enum:
- Available


@ -84,7 +84,7 @@ If the metadata file does not exist, this is an older backup and we cannot displ
### Fetch backup contents archive and walkthrough to list contents
Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walkthrough it to list the contents everytime `velero backup describe <name> --details` is run.
Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walkthrough it to list the contents every time `velero backup describe <name> --details` is run.
The advantage of this approach is that we don't need to change any backup procedures as we already have this content, and we will also be able to list resources for older backups.
Additionally, if we wanted to expose more information about the backed up resources, we can do so without having to update what we store in the metadata.


@ -176,7 +176,7 @@ This will allow the development to continue on the feature while it's in pre-pro
[`BackupStore.PutBackup`][9] will receive an additional argument, `volumeSnapshots io.Reader`, that contains the JSON representation of `VolumeSnapshots`.
This will be written to a file named `csi-snapshots.json.gz`.
[`defaultRestorePriorities`][11] should be rewritten to the following to accomodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources.
[`defaultRestorePriorities`][11] should be rewritten to the following to accommodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources.
GitHub issue [1565][17] represents this work.
```go
@ -248,7 +248,7 @@ Volumes with any other `PersistentVolumeSource` set will use Velero's current Vo
### VolumeSnapshotLocations and VolumeSnapshotClasses
Velero uses its own `VolumeSnapshotLocation` CRDs to specify configuration options for a given storage system.
In Velero, this often includes topology information such as regions or availibility zones, as well as credential information.
In Velero, this often includes topology information such as regions or availability zones, as well as credential information.
CSI volume snapshotting has a `VolumeSnapshotClass` CRD which also contains configuration options for a given storage system, but these options are not the same as those that Velero would use.
Since CSI volume snapshotting is operating within the same storage system that manages the volumes already, it does not need the same topology or credential information that Velero does.
@ -269,7 +269,7 @@ Additionally, the VolumeSnapshotter plugins and CSI volume snapshot drivers over
Thus, there's not a logical place to fit the creation of VolumeSnapshot creation in the VolumeSnapshotter interface.
* Implement CSI logic directly in Velero core code.
The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accomodate CSI snapshot lookup.
The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accommodate CSI snapshot lookup.
* Implementing the CSI logic entirely in external plugins.
As mentioned above, the necessary plugins for `PersistentVolumeClaim`, `VolumeSnapshot`, and `VolumeSnapshotContent` could be hosted out-out-of-tree from Velero.


@ -19,7 +19,7 @@ This design seeks to provide the missing extension point.
## Non Goals
- Specific implementations of hte DeleteItemAction API beyond test cases.
- Specific implementations of the DeleteItemAction API beyond test cases.
- Rollback of DeleteItemAction execution.
## High-Level Design


@ -45,7 +45,7 @@ Currently, the Velero repository sits under the Heptio GitHub organization. With
### Notes/How-Tos
#### Transfering the GH repository
#### Transferring the GH repository
All action items needed for the repo transfer are listed in the Todo list above. For details about what gets moved and other info, this is the GH documentation: https://help.github.com/en/articles/transferring-a-repository
@ -57,7 +57,7 @@ Someone with owner permission on the new repository needs to go to their Travis
After this, webhook notifications can be added following these instructions: https://docs.travis-ci.com/user/notifications/#configuring-webhook-notifications.
#### Transfering ZenHub
#### Transferring ZenHub
Pre-requisite: A new Zenhub account must exist for a vmware or vmware-tanzu organization.


@ -413,7 +413,7 @@ However, it can provide preference over latest supported API.
If new fields are added without changing API version, it won't cause any problem as these resources are intended to provide information, and, there is no reconciliation on these resources.
### Compatibility of latest plugin with older version of Velero
Plugin that supports this CR should handle the situation gracefully when CRDs are not installed. It can handle the errors occured during creation/updation of the CRs.
Plugin that supports this CR should handle the situation gracefully when CRDs are not installed. It can handle the errors occurred during creation/updation of the CRs.
## Limitations:
@ -432,7 +432,7 @@ But, this involves good amount of changes and needs a way for backward compatibi
As volume plugins are mostly K8s native, its fine to go ahead with current limiation.
### Update Backup CR
Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having seperate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.
Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having separate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.
### Restricting on name rather than using labels
Instead of using labels to identify the CR related to particular backup on a volume, restrictions can be placed on the name of VolumePluginBackup CR to be same as the value returned from CreateSnapshot.


@ -63,7 +63,7 @@ With the `--json` flag, `restic backup` outputs single lines of JSON reporting t
The [command factory for backup](https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/restic/command_factory.go#L37) will be updated to include the `--json` flag.
The code to run the `restic backup` command (https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/controller/pod_volume_backup_controller.go#L241) will be changed to include a Goroutine that reads from the command's stdout stream.
The implementation of this will largely follow [@jmontleon's PoC](https://github.com/fusor/velero/pull/4/files) of this.
The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be convered to JSON.
The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be converted to JSON.
If `bytes_done` is empty, restic has not finished scanning the volume and hasn't calculated the `total_bytes`.
In this case, we will not update the PodVolumeBackup and instead will wait for the next iteration.
Once we get a non-zero value for `bytes_done`, the `bytes_done` and `total_bytes` properties will be read and the PodVolumeBackup will be patched to update `status.Progress.BytesDone` and `status.Progress.TotalBytes` respectively.


@ -132,7 +132,7 @@ func (i *InitContainerRestoreHookHandler) HandleRestoreHooks(
hooksFromAnnotations := getInitRestoreHookFromAnnotation(kube.NamespaceAndName(pod), metadata.GetAnnotations(), log)
if hooksFromAnnotations != nil {
log.Infof("Handling InitRestoreHooks from pod annotaions")
log.Infof("Handling InitRestoreHooks from pod annotations")
initContainers = append(initContainers, hooksFromAnnotations.InitContainers...)
} else {
log.Infof("Handling InitRestoreHooks from RestoreSpec")
@ -351,7 +351,7 @@ func getInitRestoreHookFromAnnotation(podName string, annotations map[string]str
return nil
}
if command == "" {
log.Infof("RestoreHook init contianer for pod %s is using container's default entrypoint", podName, containerImage)
log.Infof("RestoreHook init container for pod %s is using container's default entrypoint", podName, containerImage)
}
if containerName == "" {
uid, err := uuid.NewV4()


@ -1655,7 +1655,7 @@ func TestHandleRestoreHooks(t *testing.T) {
},
},
{
name: "shoud not apply any restore hook init containers when resource hook selector mismatch",
name: "should not apply any restore hook init containers when resource hook selector mismatch",
podInput: corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
@ -1807,7 +1807,7 @@ func TestHandleRestoreHooks(t *testing.T) {
},
},
{
name: "shoud not apply any restore hook init containers when resource hook is nil",
name: "should not apply any restore hook init containers when resource hook is nil",
podInput: corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
@ -1824,7 +1824,7 @@ func TestHandleRestoreHooks(t *testing.T) {
restoreHooks: nil,
},
{
name: "shoud not apply any restore hook init containers when resource hook is empty",
name: "should not apply any restore hook init containers when resource hook is empty",
podInput: corev1api.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",


@ -545,7 +545,7 @@ func TestWaitExecHandleHooks(t *testing.T) {
sharedHooksContextTimeout: time.Millisecond,
},
{
name: "shoudl return no error with 2 spec hooks in 2 different containers, 1st container starts running after 10ms, 2nd container after 20ms, both succeed",
name: "should return no error with 2 spec hooks in 2 different containers, 1st container starts running after 10ms, 2nd container after 20ms, both succeed",
groupResource: "pods",
initialPod: builder.ForPod("default", "my-pod").
Containers(&v1.Container{


@ -39,7 +39,7 @@ type DefaultBackupLocationInfo struct {
// IsReadyToValidate calculates if a given backup storage location is ready to be validated.
//
// Rules:
// Users can choose a validation frequency per location. This will overrite the server's default value
// Users can choose a validation frequency per location. This will override the server's default value
// To skip/stop validation, set the frequency to zero
// This will always return "true" for the first attempt at validating a location, regardless of its validation frequency setting
// Otherwise, it returns "ready" only when NOW is equal to or after the next validation time


@ -90,7 +90,7 @@ type BackupSpec struct {
DefaultVolumesToRestic *bool `json:"defaultVolumesToRestic,omitempty"`
// OrderedResources specifies the backup order of resources of specific Kind.
// The map key is the Kind name and value is a list of resource names separeted by commas.
// The map key is the Kind name and value is a list of resource names separated by commas.
// Each resource name has format "namespace/resourcename". For cluster resources, simply use "resourcename".
// +optional
// +nullable


@ -59,7 +59,7 @@ type VolumeSnapshotLocationSpec struct {
Config map[string]string `json:"config,omitempty"`
}
// VolumeSnapshotLocationPhase is the lifecyle phase of a Velero VolumeSnapshotLocation.
// VolumeSnapshotLocationPhase is the lifecycle phase of a Velero VolumeSnapshotLocation.
// +kubebuilder:validation:Enum=Available;Unavailable
type VolumeSnapshotLocationPhase string


@ -251,7 +251,7 @@ func (ib *itemBackupper) backupItem(logger logrus.FieldLogger, obj runtime.Unstr
return false, errors.WithStack(err)
}
// backing up the preferred version backup without API Group version on path - this is for backward compability
// backing up the preferred version backup without API Group version on path - this is for backward compatibility
log.Debugf("Resource %s/%s, version= %s, preferredVersion=%s", groupResource.String(), name, version, preferredVersion)
if version == preferredVersion {


@ -48,7 +48,7 @@ func TestFactory(t *testing.T) {
assert.Equal(t, s, f.Namespace())
// An arugment overrides the env variable if both are set.
// An argument overrides the env variable if both are set.
os.Setenv("VELERO_NAMESPACE", "env-velero")
f = NewFactory("velero", make(map[string]interface{}))
flags = new(pflag.FlagSet)


@ -182,7 +182,7 @@ func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Facto
}
func (o *CreateOptions) Complete(args []string, f client.Factory) error {
// If an explict name is specified, use that name
// If an explicit name is specified, use that name
if len(args) > 0 {
o.Name = args[0]
}


@ -62,7 +62,7 @@ func TestCreateOptions_BuildBackupFromSchedule(t *testing.T) {
o.FromSchedule = "test"
o.client = fake.NewSimpleClientset()
t.Run("inexistant schedule", func(t *testing.T) {
t.Run("inexistent schedule", func(t *testing.T) {
_, err := o.BuildBackup(testNamespace)
assert.Error(t, err)
})


@ -55,7 +55,7 @@ type BackupStorageLocationReconciler struct {
func (r *BackupStorageLocationReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithField("controller", BackupStorageLocation)
log.Debug("Validating availablity of backup storage locations.")
log.Debug("Validating availability of backup storage locations.")
locationList, err := storage.ListBackupStorageLocations(r.Ctx, r.Client, req.Namespace)
if err != nil {


@ -38,7 +38,7 @@ import (
)
// kindToResource translates a Kind (mixed case, singular) to a Resource (lowercase, plural) string.
// This is to accomodate the dynamic client's need for an APIResource, as the Unstructured objects do not have easy helpers for this information.
// This is to accommodate the dynamic client's need for an APIResource, as the Unstructured objects do not have easy helpers for this information.
var kindToResource = map[string]string{
"CustomResourceDefinition": "customresourcedefinitions",
"Namespace": "namespaces",
@ -51,7 +51,7 @@ var kindToResource = map[string]string{
"VolumeSnapshotLocation": "volumesnapshotlocations",
}
// ResourceGroup represents a collection of kubernetes objects with a common ready conditon
// ResourceGroup represents a collection of kubernetes objects with a common ready condition
type ResourceGroup struct {
CRDResources []*unstructured.Unstructured
OtherResources []*unstructured.Unstructured


@ -106,7 +106,7 @@ func (p *restartableProcess) reset() error {
// Callers of resetLH *must* acquire the lock before calling it.
func (p *restartableProcess) resetLH() error {
if p.resetFailures > 10 {
return errors.Errorf("unable to restart plugin process: execeeded maximum number of reset failures")
return errors.Errorf("unable to restart plugin process: exceeded maximum number of reset failures")
}
process, err := newProcess(p.command, p.logger, p.logLevel)


@ -164,7 +164,7 @@ func TestChangePVCNodeSelectorActionExecute(t *testing.T) {
assert.NoError(t, err)
wantUnstructured, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.want)
fmt.Printf("exptected +%v\n", wantUnstructured)
fmt.Printf("expected +%v\n", wantUnstructured)
require.NoError(t, err)
assert.Equal(t, &unstructured.Unstructured{Object: wantUnstructured}, res.UpdatedItem)


@ -387,7 +387,7 @@ func (ctx *restoreContext) execute() (Result, Result) {
// Iterate through an ordered list of resources to restore, checking each one to see if it should be restored.
// Note that resources *may* be in this list twice, i.e. once due to being a prioritized resource, and once due
// to being in the backup tarball. We can't de-dupe this upfront, because it's possible that items in the prioritized
// resources list may not be fully resolved group-resource strings (e.g. may be specfied as "po" instead of "pods"),
// resources list may not be fully resolved group-resource strings (e.g. may be specified as "po" instead of "pods"),
// and we don't want to fully resolve them via discovery until we reach them in the loop, because it is possible
// that the resource/API itself is being restored via a custom resource definition, meaning it's not available via
// discovery prior to beginning the restore.
@ -1248,7 +1248,7 @@ func remapClaimRefNS(ctx *restoreContext, obj *unstructured.Unstructured) (bool,
}
if pv.Spec.ClaimRef == nil {
ctx.log.Debugf("Persistent volume does not need to have the claimRef.namepace remapped because it's not claimed")
ctx.log.Debugf("Persistent volume does not need to have the claimRef.namespace remapped because it's not claimed")
return false, nil
}


@ -64,7 +64,7 @@ type SnapshotStatus struct {
Phase SnapshotPhase `json:"phase,omitempty"`
}
// SnapshotPhase is the lifecyle phase of a Velero volume snapshot.
// SnapshotPhase is the lifecycle phase of a Velero volume snapshot.
type SnapshotPhase string
const (


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -261,7 +261,7 @@ nginx 1/1 Running 0 13s 10.200.0.4 worker0
A list of Velero-specific terms and words to be used consistently across the site.
{{< table caption="Velero.io word list" >}}
|Trem|Useage|
|Trem|Usage|
|--- |--- |
|Kubernetes|Kubernetes should always be capitalized.|
|Docker|Docker should always be capitalized.|


@ -100,7 +100,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -14,7 +14,7 @@ If you do not have the `aws` CLI locally installed, follow the [user guide][5] t
## Create S3 bucket
Heptio Ark requires an object storage bucket to store backups in, preferrably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
Heptio Ark requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
aws s3api create-bucket \


@ -112,7 +112,7 @@ To provision a cluster on AWS using Amazons official CloudFormation templates
Running the Ark server locally can speed up iterative development. This eliminates the need to rebuild the Ark server
image and redeploy it to the cluster with each change.
#### 1. Set enviroment variables
#### 1. Set environment variables
Set the appropriate environment variable for your cloud provider:


@ -31,7 +31,7 @@ branch of the Velero repository are under active development and are not guarant
## Create S3 bucket
Velero requires an object storage bucket to store backups in, preferrably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
aws s3api create-bucket \


@ -123,7 +123,7 @@ To provision a cluster on AWS using Amazons official CloudFormation templates
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
image and redeploy it to the cluster with each change.
#### 1. Set enviroment variables
#### 1. Set environment variables
Set the appropriate environment variable for your cloud provider:


@ -72,7 +72,7 @@ The configurable parameters are as follows:
| `availabilityZone` | string | Required Field | *Example*: "us-east-1a"<br><br>See [AWS documentation][4] for details. |
| `disableSSL` | bool | `false` | Set this to `true` if you are using Minio (or another local, S3-compatible storage service) and your deployment is not secured. |
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
| `s3Url` | string | Required field for non-AWS-hosted storage| *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Ark can already generate it from `region`, `availabilityZone`, and `bucket`. This field is primarily for local sotrage services like Minio.|
| `s3Url` | string | Required field for non-AWS-hosted storage| *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Ark can already generate it from `region`, `availabilityZone`, and `bucket`. This field is primarily for local storage services like Minio.|
### GCP
| Key | Type | Default | Meaning |


@ -79,7 +79,7 @@ kubectl get deployments -l component=ark --namespace=heptio-ark
kubectl get deployments --namespace=nginx-example
```
Finally, install the Ark client somehwere in your `$PATH`:
Finally, install the Ark client somewhere in your `$PATH`:
* [Download a pre-built release][26], or
* [Build it from scratch][7]


@ -79,7 +79,7 @@ kubectl get deployments -l component=ark --namespace=heptio-ark
kubectl get deployments --namespace=nginx-example
```
Finally, install the Ark client somehwere in your `$PATH`:
Finally, install the Ark client somewhere in your `$PATH`:
* [Download a pre-built release][26], or
* [Build it from scratch][7]


@ -41,6 +41,6 @@ new backups are created, and no existing backups are deleted or overwritten.
## I receive 'custom resource not found' errors when starting up the Ark server
Ark's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Apply
the `examples/common/00-prereqs.yaml` file to create these defintions, then restart Ark.
the `examples/common/00-prereqs.yaml` file to create these definitions, then restart Ark.
[1]: config-definition.md#main-config-parameters


@ -45,7 +45,7 @@ kubectl edit deployment/ark -n heptio-ark
### Ark reports `custom resource not found` errors when starting up.
Ark's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Apply
the `examples/common/00-prereqs.yaml` file to create these defintions, then restart Ark.
the `examples/common/00-prereqs.yaml` file to create these definitions, then restart Ark.
### `ark backup logs` returns a `SignatureDoesNotMatch` error


@ -32,7 +32,7 @@ of the Velero repository is under active development and is not guaranteed to be
## Create S3 bucket
Velero requires an object storage bucket to store backups in, preferrably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
BUCKET=<YOUR_BUCKET>


@ -141,7 +141,7 @@ To provision a cluster on AWS using Amazons official CloudFormation templates
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
image and redeploy it to the cluster with each change.
#### 1. Set enviroment variables
#### 1. Set environment variables
Set the appropriate environment variable for your cloud provider:


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -32,7 +32,7 @@ of the Velero repository is under active development and is not guaranteed to be
## Create S3 bucket
Velero requires an object storage bucket to store backups in, preferrably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
BUCKET=<YOUR_BUCKET>


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -112,7 +112,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -112,7 +112,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -112,7 +112,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -112,7 +112,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -34,7 +34,7 @@ First, download the Velero image, tag it for the your private registry, then upl
```bash
PRIVATE_REG=<your private registry>
VELERO_VERSION=<version of Velero you're targetting, e.g. v1.4.0>
VELERO_VERSION=<version of Velero you're targeting, e.g. v1.4.0>
docker pull velero/velero:$VELERO_VERSION
docker tag velero/velero:$VELERO_VERSION $PRIVATE_REG/velero:$VELERO_VERSION
@ -47,7 +47,7 @@ Next, repeat these steps for any plugins you may need. This example will use the
```bash
PRIVATE_REG=<your private registry>
PLUGIN_VERSION=<version of plugin you're targetting, e.g. v1.0.2>
PLUGIN_VERSION=<version of plugin you're targeting, e.g. v1.0.2>
docker pull velero/velero-plugin-for-aws:$PLUGIN_VERSION
docker tag velero/velero-plugin-for-aws:$PLUGIN_VERSION $PRIVATE_REG/velero-plugin-for-aws:$PLUGIN_VERSION
@ -60,7 +60,7 @@ If you are using restic, you will also need to upload the restic helper image.
```bash
PRIVATE_REG=<your private registry>
VELERO_VERSION=<version of Velero you're targetting, e.g. v1.4.0>
VELERO_VERSION=<version of Velero you're targeting, e.g. v1.4.0>
docker pull velero/velero-restic-restore-helper:$VELERO_VERSION
docker tag velero/velero-restic-restore-helper:$VELERO_VERSION $PRIVATE_REG/velero-restic-restore-helper:$VELERO_VERSION


@ -100,7 +100,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -27,7 +27,7 @@ layout: docs
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicty: `mv velero-v1.0.0-linux-amd64 velero`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`


@ -261,7 +261,7 @@ nginx 1/1 Running 0 13s 10.200.0.4 worker0
A list of Velero-specific terms and words to be used consistently across the site.
{{< table caption="Velero.io word list" >}}
|Trem|Useage|
|Trem|Usage|
|--- |--- |
|Kubernetes|Kubernetes should always be capitalized.|
|Docker|Docker should always be capitalized.|


@ -100,7 +100,7 @@ Forwarding from [::1]:8085 -> 8085
Now, visiting http://localhost:8085/metrics on a browser should show the metrics that are being scraped from Velero.
- Confirm that the Velero server pod has the nessary [annotations][8] for prometheus to scrape metrics.
- Confirm that the Velero server pod has the necessary [annotations][8] for prometheus to scrape metrics.
- Confirm, from the Prometheus UI, that the Velero pod is one of the targets being scraped from Prometheus.
[1]: debugging-restores.md


@ -5,7 +5,7 @@ layout: docs
This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster.
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server compoents to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
_NOTE_: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to [Building Server Component Container Images][1].
## Usage


@ -29,7 +29,7 @@ Velero version 1.1 provides support to backup applications orchestrated on upstr
## Download and Deploy Cassandra
For instructions on how to download and deploy a simple Cassandra StatefulSet, please refer to [this blog post](https://cormachogan.com/2019/06/12/kubernetes-storage-on-vsphere-101-statefulset/). This will show you how to deploy a Cassandra StatefulSet which we can use to do our Stateful aplication backup and restore. The manifests [available here](https://github.com/cormachogan/vsphere-storage-101/tree/master/StatefulSets) use an earlier version of Cassandra (v11) that includes the `cqlsh` tool which we will use now to create a database and populate a table with some sample data.
For instructions on how to download and deploy a simple Cassandra StatefulSet, please refer to [this blog post](https://cormachogan.com/2019/06/12/kubernetes-storage-on-vsphere-101-statefulset/). This will show you how to deploy a Cassandra StatefulSet which we can use to do our Stateful application backup and restore. The manifests [available here](https://github.com/cormachogan/vsphere-storage-101/tree/master/StatefulSets) use an earlier version of Cassandra (v11) that includes the `cqlsh` tool which we will use now to create a database and populate a table with some sample data.
If you follow the instructions above on how to deploy Cassandra on Kubernetes, you should see a similar response if you run the following command against your deployment:


@ -133,7 +133,7 @@ spec:
```
For demonstration purposes, instead of relying on the application writing data to the mounted CSI volume, exec into the pod running the stateful application to write data into `/mnt/azuredisk`, where the CSI volume is mounted.
This is to let us get a consistent checksum value of the data and verify that the data on restore is exacly same as that in the backup.
This is to let us get a consistent checksum value of the data and verify that the data on restore is exactly same as that in the backup.
```bash
$ kubectl -n csi-app exec -ti csi-nginx bash


@ -4,7 +4,7 @@
The Vision
---
To create a modern, geometric typeface. Open sourced, and openly available. Influenced by other popular geometric, minimalist sans-serif typefaces of the new millenium. Designed for optimal readability at small point sizes while beautiful at large point sizes.
To create a modern, geometric typeface. Open sourced, and openly available. Influenced by other popular geometric, minimalist sans-serif typefaces of the new millennium. Designed for optimal readability at small point sizes while beautiful at large point sizes.
December 2017 update
---