Merge pull request #2027 from luojie233/groupbranch

fix some typos
devin-donnelly 2016-12-22 18:29:59 -08:00 committed by GitHub
commit dd5e71e131
22 changed files with 25 additions and 25 deletions

View File

@@ -24,7 +24,7 @@ following diagram:
 In a typical Kubernetes cluster, the API served on port 443. A TLS connection is
 established. The API server presents a certificate. This certificate is
 often self-signed, so `$USER/.kube/config` on the user's machine typically
-contains the root certficate for the API server's certificate, which when specified
+contains the root certificate for the API server's certificate, which when specified
 is used in place of the system default root certificates. This certificate is typically
 automatically written into your `$USER/.kube/config` when you create a cluster yourself
 using `kube-up.sh`. If the cluster has multiple users, then the creator needs to share
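For orientation, the relevant portion of a `$USER/.kube/config` written by `kube-up.sh` looks roughly like the following sketch (cluster name, server address, and the base64 payload are illustrative, not taken from the diff):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                     # illustrative name
  cluster:
    server: https://203.0.113.10:443   # illustrative address
    certificate-authority-data: LS0tLS1CRUdJTi...   # base64-encoded root certificate (truncated)
```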

View File

@@ -330,7 +330,7 @@ roleRef:
 Finally a `ClusterRoleBinding` may be used to grant permissions in all
 namespaces. The following `ClusterRoleBinding` allows any user in the group
-"manager" to read secrets in any namepsace.
+"manager" to read secrets in any namespace.
 ```yaml
 # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
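# (sketch, not part of the original diff: the manifest is cut off by the
# three-line diff context; a complete binding of this shape, with illustrative
# names, would look like the following; the apiVersion current at the time may differ)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global   # illustrative
subjects:
- kind: Group
  name: manager
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader         # illustrative; must grant get/list on secrets
  apiGroup: rbac.authorization.k8s.io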

View File

@@ -92,7 +92,7 @@ an extended period of time (10min but it may change in the future).
 Cluster autoscaler is configured per instance group (GCE) or node pool (GKE).
 If you are using GCE then you can either enable it while creating a cluster with kube-up.sh script.
-To configure cluser autoscaler you have to set 3 environment variables:
+To configure cluster autoscaler you have to set 3 environment variables:
 * `KUBE_ENABLE_CLUSTER_AUTOSCALER` - it enables cluster autoscaler if set to true.
 * `KUBE_AUTOSCALER_MIN_NODES` - minimum number of nodes in the cluster.
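As a sketch, enabling autoscaling at cluster creation time could look like this (node counts are illustrative; the third variable is cut off by the diff context, so its name here is an assumption):

```shell
# KUBE_AUTOSCALER_MAX_NODES is assumed; the diff truncates before the third variable.
KUBE_ENABLE_CLUSTER_AUTOSCALER=true \
KUBE_AUTOSCALER_MIN_NODES=1 \
KUBE_AUTOSCALER_MAX_NODES=5 \
./cluster/kube-up.sh
```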

View File

@@ -77,7 +77,7 @@ For example, a pod with ip `1.2.3.4` in the namespace `default` with a DNS name
 Currently when a pod is created, its hostname is the Pod's `metadata.name` value.
 With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be.
-The Pod annotation, if specified, takes precendence over the Pod's name, to be the hostname of the pod.
+The Pod annotation, if specified, takes precedence over the Pod's name, to be the hostname of the pod.
 For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name".
 With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the
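A minimal sketch of the v1.2 annotation form described above (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                                   # illustrative
  annotations:
    pod.beta.kubernetes.io/hostname: my-pod-name   # takes precedence over metadata.name
spec:
  containers:
  - name: app
    image: nginx                                   # illustrative
```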

View File

@@ -242,7 +242,7 @@ Once the cluster is up, you can grab the admin credentials from the master node
 ## Environment variables
 There are some environment variables that modify the way that `kubeadm` works. Most users will have no need to set these.
-These enviroment variables are a short-term solution, eventually they will be integrated in the kubeadm configuration file.
+These environment variables are a short-term solution, eventually they will be integrated in the kubeadm configuration file.
 | Variable | Default | Description |
 | --- | --- | --- |
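As a usage sketch only (the table itself is truncated here, and the variable name below is an assumption rather than something visible in the context), such variables are set inline for a single run:

```shell
# KUBE_ETCD_IMAGE is assumed for illustration; consult the kubeadm reference for the real names.
KUBE_ETCD_IMAGE=my-registry/etcd:3.0.4 kubeadm init
```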

View File

@@ -9,7 +9,7 @@ title: TLS bootstrapping
 ## Overview
-This document describes how to set up TLS client certificate boostrapping for kubelets.
+This document describes how to set up TLS client certificate bootstrapping for kubelets.
 Kubernetes 1.4 introduces an experimental API for requesting certificates from a cluster-level
 Certificate Authority (CA). The first supported use of this API is the provisioning of TLS client
 certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439)
@@ -17,7 +17,7 @@ and progress on the feature is being tracked as [feature #43](https://github.com
 ## apiserver configuration
-You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet boostrap-specific group.
+You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet bootstrap-specific group.
 This group will later be used in the controller-manager configuration to scope approvals in the default approval
 controller. As this feature matures, you should ensure tokens are bound to an RBAC policy which limits requests
 using the bootstrap token to only be able to make requests related to certificate provisioning. When RBAC policy
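As a sketch, the token file is a CSV of `token,user,uid,"groups"` handed to the API server via its `--token-auth-file` flag; the token value and group name below are illustrative:

```
02b50b05283e98dd,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```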

View File

@@ -78,9 +78,9 @@ kubelet
 --experimental-allowed-unsafe-sysctls stringSlice Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk.
 --experimental-bootstrap-kubeconfig string <Warning: Experimental feature> Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by --kubeconfig. The certificate and key file will be stored in the directory pointed by --cert-dir.
 --experimental-cgroups-per-qos Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created.
---experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required componenets (binaries, etc.) before performing the mount
+--experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount
 --experimental-cri [Experimental] Enable the Container Runtime Interface (CRI) integration. If --container-runtime is set to "remote", Kubelet will communicate with the runtime/image CRI server listening on the endpoint specified by --remote-runtime-endpoint/--remote-image-endpoint. If --container-runtime is set to "docker", Kubelet will launch a in-process CRI server on behalf of docker, and communicate over a default endpoint.
---experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary opton to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6.
+--experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary option to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6.
 --experimental-kernel-memcg-notification If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling.
 --experimental-mounter-path string [Experimental] Path of mounter binary. Leave empty to use the default mount.
 --experimental-nvidia-gpus int32 Number of NVIDIA GPU devices on this node. Only 0 (default) and 1 are currently supported.
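As a usage sketch, these are ordinary kubelet command-line flags; for example, enabling the pre-mount capability check whose help text is corrected above:

```shell
kubelet --experimental-check-node-capabilities-before-mount=true
```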

View File

@@ -184,7 +184,7 @@ Note that this pod specifies explicit resource *limits* and *requests* so it did
 default values.
 Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node
-that runs the container unless the administrator deploys the kubelet with the folllowing flag:
+that runs the container unless the administrator deploys the kubelet with the following flag:
 ```shell
 $ kubelet --help
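# (sketch, not part of the original diff: the context ends before the flag is
# shown; the flag likely meant here is --cpu-cfs-quota, and running
# `kubelet --cpu-cfs-quota=false` disables enforcement of CPU limits)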

View File

@@ -330,7 +330,7 @@ for eviction. Instead `DaemonSet` should ideally launch `Guaranteed` pods.
 `kubelet` has been freeing up disk space on demand to keep the node stable.
 As disk based eviction matures, the following `kubelet` flags will be marked for deprecation
-in favor of the simpler configuation supported around eviction.
+in favor of the simpler configuration supported around eviction.
 | Existing Flag | New Flag |
 | ------------- | -------- |
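For context, the simpler eviction configuration referred to above is driven by flags such as `--eviction-hard`; a sketch with illustrative thresholds:

```shell
kubelet --eviction-hard=memory.available<100Mi,nodefs.available<10%
```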

View File

@@ -50,7 +50,7 @@ It's enabled by default. It can be disabled:
 ### Marking add-on as critical
-To be critical an add-on has to run in `kube-system` namespace (cofigurable via flag)
+To be critical an add-on has to run in `kube-system` namespace (configurable via flag)
 and have the following annotations specified:
 * `scheduler.alpha.kubernetes.io/critical-pod` set to empty string
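The annotation list is truncated by the diff context, so the sketch below shows only the annotation visible above, on an illustrative add-on Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-addon            # illustrative
  namespace: kube-system    # required, unless overridden via the flag mentioned above
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
  containers:
  - name: addon
    image: gcr.io/google-containers/pause   # illustrative image
```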

View File

@@ -207,7 +207,7 @@ Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VM
 ## Cluster Features and Architecture
-We configue the Kubernetes cluster with the following features:
+We configure the Kubernetes cluster with the following features:
 * KubeDNS: DNS resolution and service discovery
 * Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling.

View File

@@ -31,4 +31,4 @@ There are two main components to be aware of:
 - One `calico-node` Pod runs on each node in your cluster, and enforces network policy on the traffic to/from Pods on that machine by configuring iptables.
 - The `calico-policy-controller` Pod reads policy and label information from the Kubernetes API and configures Calico appropriately.
-Once your cluster is running, you can follow the [NetworkPolicy gettting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
+Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
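As a taste of what that guide covers, a minimal NetworkPolicy of this era looks roughly like the following sketch (names and labels illustrative):

```yaml
apiVersion: extensions/v1beta1   # the API group used before networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx             # illustrative
spec:
  podSelector:
    matchLabels:
      run: nginx                 # illustrative; selects the protected Pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"         # only Pods carrying this label may connect
```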

View File

@@ -8,4 +8,4 @@ The [Weave Net Addon](https://www.weave.works/docs/net/latest/kube-addon/) for K
 This component automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces, and configures `iptables` rules to allow or block traffic as directed by the policies.
-Once you have installed the Weave Net Addon you can follow the [NetworkPolicy gettting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
+Once you have installed the Weave Net Addon you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.

View File

@@ -93,7 +93,7 @@ Note that each controller can host multiple Kubernetes clusters in a given cloud
 ## Launch a Kubernetes cluster
-The following command will deploy the intial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but
+The following command will deploy the initial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but
 ```shell
 juju deploy canonical-kubernetes
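# (sketch, not part of the original diff) watch progress until all units report active:
juju status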

View File

@@ -122,7 +122,7 @@ through `FLANNEL_BACKEND` and `FLANNEL_OTHER_NET_CONFIG`, as explained in `clust
 The default setting for `ADMISSION_CONTROL` is right for the latest
 release of Kubernetes, but if you choose an earlier release then you
 might want a different setting. See
-[the admisson control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use)
+[the admission control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use)
 for the recommended settings for various releases.
 **Note:** When deploying, master needs to be connected to the Internet to download the necessary files.
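As a sketch, the setting is an environment variable consumed by `kube-up.sh`; the plug-in list below is illustrative rather than the recommended set for any particular release:

```shell
ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota \
  ./cluster/kube-up.sh
```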

View File

@@ -52,7 +52,7 @@ load-balanced access to an application running in a cluster.
     NAME                     DESIRED   CURRENT   AGE
     hello-world-2189936611   2         2         12m
-1. Create a Serivice object that exposes the replica set:
+1. Create a Service object that exposes the replica set:
     kubectl expose rs <your-replica-set-name> --type="LoadBalancer" --name="example-service"
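A natural follow-up (these `kubectl` subcommands are real; the service name matches the command above) is to inspect the Service and wait for its external IP:

```shell
kubectl get service example-service
kubectl describe service example-service   # look for "LoadBalancer Ingress"
```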

View File

@@ -4,7 +4,7 @@ title: Exposing an External IP Address to Access an Application in a Cluster
 {% capture overview %}
-This page shows how to create a Kubernetes Service object that exposees an
+This page shows how to create a Kubernetes Service object that exposes an
 external IP address.
 {% endcapture %}

View File

@@ -295,7 +295,7 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el
 Kubernetes also supports Federated Services, which can span multiple
 clusters and cloud providers, to provide increased availability,
-bettern fault tolerance and greater scalability for your services. See
+better fault tolerance and greater scalability for your services. See
 the [Federated Services User Guide](/docs/user-guide/federation/federated-services/)
 for further information.

View File

@@ -413,7 +413,7 @@ $ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=
 deployment "nginx-deployment" autoscaled
 ```
-RollingUpdate Deployments support running multitple versions of an application at the same time. When you
+RollingUpdate Deployments support running multiple versions of an application at the same time. When you
 or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
 or paused), then the Deployment controller will balance the additional replicas in the existing active
 ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.
@@ -568,7 +568,7 @@ Your Deployment may get stuck trying to deploy its newest ReplicaSet without eve
 * Limit ranges
 * Application runtime misconfiguration
-One way you can detect this condition is to specify specify a deadline parameter in your Deployment spec: ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled.
+One way you can detect this condition is to specify a deadline parameter in your Deployment spec: ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled.
 The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report lack of progress for a Deployment after 10 minutes:
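The command itself is cut off by the diff context; a sketch of such a command (600 seconds = 10 minutes) using the real `kubectl patch` subcommand:

```shell
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
```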

View File

@@ -166,7 +166,7 @@ parallelism, for a variety or reasons:
 - If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.),
   then there may be fewer pods than requested.
 - The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
-- When a pod is gracefully shutdown, it make take time to stop.
+- When a pod is gracefully shutdown, it takes time to stop.
 ## Handling Pod and Container Failures
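For reference on the caveat list above, parallelism is requested on the Job spec; a minimal sketch (names and image illustrative), with actual running Pods possibly lagging the requested count for the reasons listed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job        # illustrative
spec:
  parallelism: 3           # desired number of Pods running at any time
  completions: 6           # total successful completions required
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: worker
        image: busybox     # illustrative
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
```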

View File

@@ -11,7 +11,7 @@ Drain node in preparation for maintenance
 Drain node in preparation for maintenance.
-The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviciton (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force.
+The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviction (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force.
 'drain' waits for graceful termination. You should not operate on the machine until the command completes.
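A typical invocation (node name illustrative; both subcommands are real):

```shell
kubectl drain my-node --ignore-daemonsets
# ...perform maintenance, then make the node schedulable again:
kubectl uncordon my-node
```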

View File

@@ -497,7 +497,7 @@ parameters:
 ```
 * `quobyteAPIServer`: API Server of Quobyte in the format `http(s)://api-server:7860`
-* `registry`: Quobyte registry to use to mount the volume. You can specifiy the registry as ``<host>:<port>`` pair or if you want to specify multiple registries you just have to put a comma between them e.q. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``. The host can be an IP address or if you have a working DNS you can also provide the DNS names.
+* `registry`: Quobyte registry to use to mount the volume. You can specify the registry as ``<host>:<port>`` pair or if you want to specify multiple registries you just have to put a comma between them e.q. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``. The host can be an IP address or if you have a working DNS you can also provide the DNS names.
 * `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
 * `adminSecretName`: secret that holds information about the Quobyte user and the password to authenticate agains the API server. The provided secret must have type "kubernetes.io/quobyte", e.g. created in this way:
 ```
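# (sketch, not part of the original diff: the diff truncates here; user and
# password values are illustrative, while the --type flag on
# `kubectl create secret generic` is real and, per the text above, must be
# "kubernetes.io/quobyte")
kubectl create secret generic quobyte-admin-secret \
  --type="kubernetes.io/quobyte" --from-literal=user='admin' \
  --from-literal=password='quobyte' --namespace=kube-system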