Fix the grammar by using the verb form 'set up' where appropriate instead of the noun 'setup'

pull/35790/head
William Steinford 2022-07-21 13:41:01 -04:00
parent 17c3350879
commit d6a1ba2a6d
16 changed files with 18 additions and 18 deletions

View File

@@ -56,7 +56,7 @@ There are several different proxies you may encounter when using Kubernetes:
- implementation varies by cloud provider.
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are setup correctly.
will typically ensure that the latter types are set up correctly.
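In the full list on this page, the first of those types is the local `kubectl proxy` that a user runs themselves; a minimal sketch of starting one (the port is illustrative):

```shell
# Run a local proxy to the API server, then query it over localhost.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/
```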
## Requesting redirects

View File

@@ -162,7 +162,7 @@ Credentials can be provided in several ways:
- requires node configuration by cluster administrator
- Pre-pulled Images
- all pods can use any images cached on a node
- requires root access to all nodes to setup
- requires root access to all nodes to set up
- Specifying ImagePullSecrets on a Pod
- only pods which provide own keys can access the private registry
- Vendor-specific or local extensions
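For the `ImagePullSecrets` approach above, a sketch of creating such a secret; the registry host and every credential shown are placeholders:

```shell
# Create a docker-registry secret that Pods can reference in
# spec.imagePullSecrets; all values below are placeholders.
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=devuser \
  --docker-password='S3cr3t!' \
  --docker-email=devuser@example.com
```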

View File

@@ -305,7 +305,7 @@ Noteworthy points about the nginx-secure-app manifest:
serves HTTP traffic on port 80 and HTTPS traffic on 443, and nginx Service
exposes both ports.
- Each container has access to the keys through a volume mounted at `/etc/nginx/ssl`.
This is setup *before* the nginx server is started.
This is set up *before* the nginx server is started.
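The keys under `/etc/nginx/ssl` are mounted from a Secret; one way to create it, assuming an existing key pair (the file and Secret names are illustrative):

```shell
# Package an existing certificate and key as a TLS Secret for the Pod to mount.
kubectl create secret tls nginxsecret --key tls.key --cert tls.crt
```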
```shell
kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml

View File

@@ -301,7 +301,7 @@ search ns1.svc.cluster-domain.example my.dns.search.suffix
options ndots:2 edns0
```
For IPv6 setup, search path and name server should be setup like this:
For IPv6 setup, search path and name server should be set up like this:
```shell
kubectl exec -it dns-example -- cat /etc/resolv.conf
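# If the searches and options above are missing, one hedged follow-up is
# to inspect the Pod's dnsConfig directly (jsonpath shown is illustrative):
kubectl get pod dns-example -o jsonpath='{.spec.dnsConfig}'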

View File

@@ -620,7 +620,7 @@ deployment.apps/nginx-deployment scaled
```
Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled
in your cluster, you can setup an autoscaler for your Deployment and choose the minimum and maximum number of
in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of
Pods you want to run based on the CPU utilization of your existing Pods.
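A sketch of creating such an autoscaler; the bounds and target below are illustrative, not this page's own values:

```shell
# Keep between 2 and 10 replicas, targeting 80% average CPU utilization.
kubectl autoscale deployment/nginx-deployment --min=2 --max=10 --cpu-percent=80
```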
```shell

View File

@@ -457,7 +457,7 @@ access the certificate signing API.
This is implemented by creating a ClusterRoleBinding named `kubeadm:kubelet-bootstrap` between the
group above and the default RBAC role `system:node-bootstrapper`.
#### Setup auto approval for new bootstrap tokens
#### Set up auto approval for new bootstrap tokens
Kubeadm ensures that the Bootstrap Token will get its CSR request automatically approved by the
csrapprover controller.
@@ -470,7 +470,7 @@ The role `system:certificates.k8s.io:certificatesigningrequests:nodeclient` should be defined as
well, granting POST permission to
`/apis/certificates.k8s.io/certificatesigningrequests/nodeclient`.
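Expressed as a kubectl command, the binding described here looks roughly like the following; the binding and group names are the usual kubeadm defaults, stated here as an assumption:

```shell
# Let bootstrap tokens have their node client CSRs auto-approved.
kubectl create clusterrolebinding kubeadm:node-autoapprove-bootstrap \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers:kubeadm:default-node-token
```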
#### Setup nodes certificate rotation with auto approval
#### Set up nodes certificate rotation with auto approval
Kubeadm ensures that certificate rotation is enabled for nodes, and that new certificate request
for nodes will get its CSR request automatically approved by the csrapprover controller.

View File

@@ -269,7 +269,7 @@ in the kubeadm config file.
1. Follow these [instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.
1. Setup SSH as described [here](#manual-certs).
1. Set up SSH as described [here](#manual-certs).
1. Copy the following files from any etcd node in the cluster to the first control plane node:
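A hedged sketch of that copy step; the host is a placeholder and the paths are the usual kubeadm locations, stated as an assumption:

```shell
# Copy the etcd CA and the API server's etcd client key pair to the
# first control plane node; user and address are placeholders.
export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
```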

View File

@@ -18,7 +18,7 @@ Kubernetes CLI, `kubectl`.
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set-up when you work through
a [Getting started guide](/docs/setup/),
or someone else setup the cluster and provided you with credentials and a location.
or someone else set up the cluster and provided you with credentials and a location.
Check the location and credentials that kubectl knows about with this command:
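The command in question is `kubectl config view`, which prints the merged kubeconfig settings:

```shell
kubectl config view
```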
@@ -265,5 +265,5 @@ There are several different proxies you may encounter when using Kubernetes:
- implementation varies by cloud provider.
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are setup correctly.
will typically ensure that the latter types are set up correctly.

View File

@@ -22,7 +22,7 @@ Kubernetes command-line tool, `kubectl`.
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set-up when you work through
a [Getting started guide](/docs/setup/),
or someone else setup the cluster and provided you with credentials and a location.
or someone else set up the cluster and provided you with credentials and a location.
Check the location and credentials that kubectl knows about with this command:

View File

@@ -22,7 +22,7 @@ The [Container runtimes](/docs/setup/production-environment/container-runtimes) page
explains that the `systemd` driver is recommended for kubeadm based setups instead
of the `cgroupfs` driver, because kubeadm manages the kubelet as a systemd service.
The page also provides details on how to setup a number of different container runtimes with the
The page also provides details on how to set up a number of different container runtimes with the
`systemd` driver by default.
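A minimal sketch of selecting the `systemd` driver through a kubeadm configuration file; the Kubernetes version shown is a placeholder:

```shell
# Write a kubeadm config that sets the kubelet's cgroup driver,
# then hand it to kubeadm init.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
kubeadm init --config kubeadm-config.yaml
```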
## Configuring the kubelet cgroup driver

View File

@@ -276,6 +276,6 @@ spec:
## {{% heading "whatsnext" %}}
* [Setup an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
* [Set up an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
* For a high level overview, see [Extending the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
* Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).

View File

@@ -25,7 +25,7 @@ Setting up an extension API server to work with the aggregation layer allows the
<!-- steps -->
## Setup an extension api-server to work with the aggregation layer
## Set up an extension api-server to work with the aggregation layer
The following steps describe how to set up an extension-apiserver *at a high level*. These steps apply regardless if you're using YAML configs or using APIs. An attempt is made to specifically identify any differences between the two. For a concrete example of how they can be implemented using YAML configs, you can look at the [sample-apiserver](https://github.com/kubernetes/sample-apiserver/blob/master/README.md) in the Kubernetes repo.
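The step that wires an extension server into the aggregation layer is registering an `APIService`; a sketch with hypothetical group, version, and service names (a real deployment would supply a `caBundle` rather than skipping TLS verification):

```shell
# Register a hypothetical extension API with the aggregation layer.
kubectl apply -f - <<EOF
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  insecureSkipTLSVerify: true   # demo only; use caBundle in practice
  service:
    name: api
    namespace: wardle
  groupPriorityMinimum: 1000
  versionPriority: 15
EOF
```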

View File

@@ -104,7 +104,7 @@ Address: 10.0.147.152
# Your address will vary.
```
If Kube-DNS is not setup correctly, the previous step may not work for you.
If Kube-DNS is not set up correctly, the previous step may not work for you.
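One quick health check for the DNS add-on; the `k8s-app=kube-dns` label is the conventional one and is assumed here:

```shell
# Verify the cluster DNS Pods are up.
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```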
You can also find the service IP in an env var:
```

View File

@@ -357,7 +357,7 @@ reference page.
## Configuring your cluster to provide signing
This page assumes that a signer is setup to serve the certificates API. The
This page assumes that a signer is set up to serve the certificates API. The
Kubernetes controller manager provides a default implementation of a signer. To
enable it, pass the `--cluster-signing-cert-file` and
`--cluster-signing-key-file` parameters to the controller manager with paths to
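A hedged sketch of those two flags on the controller manager command line; the paths are placeholders, and the many other required flags are omitted:

```shell
# Enable the built-in signer with the cluster CA key pair.
kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
```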

View File

@@ -330,7 +330,7 @@ Note the pod status is Pending, with a helpful error message: `Pod Cannot enforc
### Setting up nodes with profiles
Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto
nodes. There are lots of ways to setup the profiles though, such as:
nodes. There are lots of ways to set up the profiles though, such as:
* Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
ensure the correct profiles are loaded. An example implementation can be found
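Whichever delivery mechanism is used, loading a profile on a node ultimately reduces to something like this; the profile path is illustrative:

```shell
# Load (or replace) an AppArmor profile in the node's kernel.
sudo apparmor_parser --replace /etc/apparmor.d/k8s-deny-write
```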

View File

@@ -1,4 +1,4 @@
# This is an example of how to setup cloud-controller-manager as a Daemonset in your cluster.
# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.
# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master
# Note that this Daemonset will not work straight out of the box for your cloud, this is
# meant to be a guideline.