Fixed some e.g. problems and some spelling errors.

pull/2036/head
dongziming 2016-12-23 13:44:14 +08:00
parent 67be6311ee
commit 2374e3d1bb
4 changed files with 5 additions and 5 deletions

@@ -89,7 +89,7 @@ Mitigations:
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes
-- Action use IaaS providers reliable storage (e.g GCE PD or AWS EBS volume) for VMs with apiserver+etcd
+- Action use IaaS providers reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
- Mitigates: Apiserver backing storage lost
- Action: Use (experimental) [high-availability](/docs/admin/high-availability) configuration

@@ -129,7 +129,7 @@ Support for updating DaemonSets and controlled updating of nodes is planned.
### Init Scripts
-It is certainly possible to run daemon processes by directly starting them on a node (e.g using
+It is certainly possible to run daemon processes by directly starting them on a node (e.g. using
`init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to
running such processes via a DaemonSet:

@@ -52,7 +52,7 @@ Second, decide how many clusters should be able to be unavailable at the same time. Call
the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.
If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
-you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g you want to ensure low latency for all
+you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g. you want to ensure low latency for all
users in the event of a cluster failure), then you need to have `R * (U + 1)` clusters
(`U + 1` in each of `R` regions). In any case, try to put each cluster in a different zone.
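
The cluster-count rule changed in the hunk above can be sketched as a small helper. This is an illustrative sketch only, not part of the commit; the function name `clusters_needed` and its parameters are hypothetical, with `R` and `U` as defined in the quoted docs.

```python
def clusters_needed(R: int, U: int, any_region_ok: bool) -> int:
    """Minimum clusters for R regions with U allowed unavailable.

    Hypothetical helper illustrating the rule quoted above:
    if load balancing may direct traffic to any region on failure,
    the larger of R or U + 1 suffices; otherwise each of the R
    regions needs U + 1 clusters of its own.
    """
    if any_region_ok:
        return max(R, U + 1)
    return R * (U + 1)

# Example: 3 regions, tolerating 1 unavailable cluster.
print(clusters_needed(3, 1, any_region_ok=True))   # 3
print(clusters_needed(3, 1, any_region_ok=False))  # 6
```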

@@ -49,7 +49,7 @@ either `kubectl` or addon pod.
### Kubectl
-This is the recommanded way to start node problem detector outside of GCE. It
+This is the recommended way to start node problem detector outside of GCE. It
provides more flexible management, such as overwriting the default
configuration to fit it into your environment or detect
customized node problems.
@@ -238,7 +238,7 @@ implement a new translator for a new log format.
## Caveats
-It is recommanded to run the node problem detector in your cluster to monitor
+It is recommended to run the node problem detector in your cluster to monitor
the node health. However, you should be aware that this will introduce extra
resource overhead on each node. Usually this is fine, because: