Capitalizes the first letter

Signed-off-by: YuPengZTE <yu.peng36@zte.com.cn>
pull/1283/head
YuPengZTE 2016-09-22 17:07:06 +08:00
parent f2465eec29
commit b0d3fd5f6b
6 changed files with 16 additions and 16 deletions


@@ -18,7 +18,7 @@ You need 2 or more machines with Fedora installed.
**Perform the following commands on the Kubernetes master**
-* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
+* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
```json
{


@@ -64,10 +64,10 @@ If you run into trouble, please see the section on [troubleshooting](/docs/getti
The next few steps will show you:
-1. how to set up the command line client on your workstation to manage the cluster
-1. examples of how to use the cluster
-1. how to delete the cluster
-1. how to start clusters with non-default options (like larger clusters)
+1. How to set up the command line client on your workstation to manage the cluster
+1. Examples of how to use the cluster
+1. How to delete the cluster
+1. How to start clusters with non-default options (like larger clusters)
### Installing the Kubernetes command line tools on your workstation


@@ -514,9 +514,9 @@ availability.
To run an etcd instance:
-1. copy `cluster/saltbase/salt/etcd/etcd.manifest`
-1. make any modifications needed
-1. start the pod by putting it into the kubelet manifest directory
+1. Copy `cluster/saltbase/salt/etcd/etcd.manifest`
+1. Make any modifications needed
+1. Start the pod by putting it into the kubelet manifest directory
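The three steps above amount to dropping a static-pod manifest where the kubelet watches for it. A sandboxed sketch of the sequence; in a real cluster the source is `cluster/saltbase/salt/etcd/etcd.manifest` and the destination is the kubelet's manifest directory (typically `/etc/kubernetes/manifests`), but both are stubbed with temp dirs here so the sketch is self-contained:

```shell
# Stand-ins for the real paths (assumptions, not the actual cluster layout):
src_dir=$(mktemp -d)        # stands in for cluster/saltbase/salt/etcd
manifest_dir=$(mktemp -d)   # stands in for the kubelet manifest directory
printf 'apiVersion: v1\nkind: Pod\n' > "$src_dir/etcd.manifest"

cp "$src_dir/etcd.manifest" ./etcd.manifest       # 1. copy the manifest
# 2. make any modifications needed by editing ./etcd.manifest here
mv ./etcd.manifest "$manifest_dir/etcd.manifest"  # 3. kubelet picks it up and starts the pod
```

Moving (rather than editing in place inside the manifest directory) avoids the kubelet reading a half-written file.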
### Apiserver, Controller Manager, and Scheduler


@@ -40,9 +40,9 @@ API for traditional Kubernetes Services.
Once created, the Federated Service automatically:
-1. creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
-2. monitors the health of those service "shards" (and the clusters in which they reside), and
-3. manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
+1. Creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
+2. Monitors the health of those service "shards" (and the clusters in which they reside), and
+3. Manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster,
availability zone or regional outages.


@@ -118,12 +118,12 @@ in the `$HOME` of user `root` on a kubelet, then docker will use it.
Here are the recommended steps to configure your nodes to use a private registry. In this
example, run these on your desktop/laptop:
-1. run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
-1. view `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
-1. get a list of your nodes, for example:
+1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
+1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
+1. Get a list of your nodes, for example:
- if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
- if you want to get the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
-1. copy your local `.docker/config.json` to the home directory of root on each node.
+1. Copy your local `.docker/config.json` to the home directory of root on each node.
- for example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`
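The jsonpath selectors in the node-listing step can be hard to read; a small Python sketch of the same two extractions over a made-up node document (the node names and addresses below are invented, not from a real cluster):

```python
import json

# Made-up node list mimicking the shape of `kubectl get nodes -o json`.
nodes_json = json.loads("""
{"items": [
  {"metadata": {"name": "node-a"},
   "status": {"addresses": [{"type": "InternalIP", "address": "10.0.0.1"},
                            {"type": "ExternalIP", "address": "203.0.113.5"}]}},
  {"metadata": {"name": "node-b"},
   "status": {"addresses": [{"type": "ExternalIP", "address": "203.0.113.6"}]}}
]}
""")

# Equivalent of {range .items[*].metadata}{.name} {end}
names = [item["metadata"]["name"] for item in nodes_json["items"]]

# Equivalent of the [?(@.type=="ExternalIP")] filter over .status.addresses
ips = [addr["address"]
       for item in nodes_json["items"]
       for addr in item["status"]["addresses"]
       if addr["type"] == "ExternalIP"]

print(names)  # ['node-a', 'node-b']
print(ips)    # ['203.0.113.5', '203.0.113.6']
```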
Verify by creating a pod that uses a private image, e.g.:


@@ -32,7 +32,7 @@ We also put the same label on the pod template so that we can check on all Pods
with a single command.
After the job is created, the system will add more labels that distinguish one Job's pods
from another Job's pods.
-Note that the label key `jobgroup` is not special to Kubernetes. you can pick your own label scheme.
+Note that the label key `jobgroup` is not special to Kubernetes. You can pick your own label scheme.
Next, expand the template into multiple files, one for each item to be processed.
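The expansion step above can be sketched as a short script that renders one Job manifest file per work item from a single template. The template text and item names here are made up for illustration; only the `jobgroup` label comes from the doc:

```python
import pathlib

items = ["apple", "banana", "cherry"]  # hypothetical work items

template = """\
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-{item}
  labels:
    jobgroup: jobexample
spec:
  template:
    metadata:
      labels:
        jobgroup: jobexample
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item {item}"]
      restartPolicy: Never
"""

# Write one manifest per item: job-apple.yaml, job-banana.yaml, ...
for item in items:
    pathlib.Path(f"job-{item}.yaml").write_text(template.format(item=item))
```

The resulting files can then be created together with a single `kubectl create -f` over the directory, and queried together via the shared `jobgroup` label.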