* 'master' of https://github.com/kubernetes/kubernetes.github.io:
  bump openstack cli tools versions to working values
  fix header size for sub-sections under Services
  Finish #4036
  Be more specific about cpu-shares passed to Docker (#4016)
  Clarify the required uniqueness of Label Key
  Add --show-all to documentation
  docs(service): remove invalid `,` from example
  Add "remove" example for kubectl patch (#4042)
  Fix wrong link to create a ConfigMap
  Fix a typo in the High Availability docs (#4048)
  Remove ports from ExternalName Service example
  remove extra that
  update kops installation instructions
pull/4156/head
Andrew Chen 2017-06-21 14:43:13 -07:00
commit c8f3f8f19f
11 changed files with 27 additions and 24 deletions

View File

@@ -17,7 +17,7 @@ be working to add this continuous testing, but for now the single-node master in
 ## Overview
-Setting up a truly reliable, highly available distributed system requires a number of steps, it is akin to
+Setting up a truly reliable, highly available distributed system requires a number of steps. It is akin to
 wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
 of these steps in detail, but a summary is given here to help guide and orient the user.

View File

@@ -133,8 +133,8 @@ to the container runtime.
 When using Docker:
 - The `spec.containers[].resources.requests.cpu` is converted to its core value,
-which is potentially fractional, and multiplied by 1024. This number is used
-as the value of the
+which is potentially fractional, and multiplied by 1024. The greater of this number
+or 2 is used as the value of the
 [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#/cpu-share-constraint)
 flag in the `docker run` command.
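The conversion described in this hunk can be sketched in shell. This is a hedged illustration, not kubelet's actual code; `cpu_request_millicores` stands in for a hypothetical `250m` CPU request:

```shell
# A 250m (0.25 core) CPU request, expressed in millicores.
cpu_request_millicores=250

# Core value multiplied by 1024: 0.25 * 1024 = 256 shares.
# Integer math (millicores * 1024 / 1000) avoids fractions.
shares=$(( cpu_request_millicores * 1024 / 1000 ))

# Docker treats 2 as the minimum meaningful value, hence the
# "greater of this number or 2" wording in the change above.
if [ "$shares" -lt 2 ]; then
  shares=2
fi

echo "--cpu-shares=$shares"
```

A request of `1m` would compute to 1 share and be raised to the floor of 2 by the same rule.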

View File

@@ -19,7 +19,7 @@ This page explains how Kubernetes objects are represented in the Kubernetes API,
 * The resources available to those applications
 * The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
-A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's **desired state**.
+A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's **desired state**.
 To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you; you can also use the Kubernetes API directly in your own programs. Kubernetes currently provides a `golang` [client library](https://github.com/kubernetes/client-go) for this purpose, and other language libraries (such as [Python](https://github.com/kubernetes-incubator/client-python)) are being developed.

View File

@@ -38,7 +38,7 @@ Example labels:
 * `"partition" : "customerA"`, `"partition" : "customerB"`
 * `"track" : "daily"`, `"track" : "weekly"`
-These are just examples; you are free to develop your own conventions.
+These are just examples of commonly used labels; you are free to develop your own conventions. Keep in mind that a label key must be unique for a given object.
 ## Syntax and character set
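The uniqueness rule this hunk documents behaves like map semantics: setting the same key twice leaves one entry. A small bash sketch (the key and values are hypothetical):

```shell
# Labels behave like a map keyed by label key: assigning the same
# key again replaces the old value instead of adding a second entry.
declare -A labels
labels[environment]=qa
labels[environment]=production   # overwrites the earlier value

echo "${labels[environment]}"    # the surviving value
echo "${#labels[@]}"             # still a single entry for this key
```

This mirrors why `kubectl label` refuses to reassign an existing key without `--overwrite`.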

View File

@@ -48,7 +48,7 @@ Services, this resolves to the set of IPs of the pods selected by the Service.
 Clients are expected to consume the set or else use standard round-robin
 selection from the set.
-### SRV records
+#### SRV records
 SRV Records are created for named ports that are part of normal or [Headless
 Services](https://kubernetes.io/docs/user-guide/services/#headless-services).
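The round-robin selection mentioned in the context lines above can be sketched in shell; the IPs are hypothetical stand-ins for the pod addresses a client resolved:

```shell
# Resolved set of pod IPs (hypothetical values).
ips=(10.0.0.1 10.0.0.2 10.0.0.3)

# Pick a target for each of four successive requests by cycling
# through the set with a modulo index.
picks=()
for i in 0 1 2 3; do
  picks+=("${ips[$(( i % ${#ips[@]} ))]}")
done

echo "${picks[@]}"   # wraps back to the first IP on the fourth request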
@@ -60,7 +60,7 @@ For a headless service, this resolves to multiple answers, one for each pod
 that is backing the service, and contains the port number and a CNAME of the pod
 of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`.
-### Backwards compatibility
+#### Backwards compatibility
 Previous versions of kube-dns made names of the form
 `my-svc.my-namespace.cluster.local` (the 'svc' level was added later). This
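For reference, the SRV name for a named port follows the `_port._protocol.service.namespace.svc.cluster.local` shape; a quick sketch assembling it from the example names used in the surrounding text:

```shell
# Named port and protocol from the (hypothetical) Service definition.
port_name=http
protocol=tcp

# Service and namespace names as used in the examples above.
service=my-svc
namespace=my-namespace

srv_name="_${port_name}._${protocol}.${service}.${namespace}.svc.cluster.local"
echo "$srv_name"   # _http._tcp.my-svc.my-namespace.svc.cluster.local
```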

View File

@@ -140,8 +140,6 @@ metadata:
 spec:
   type: ExternalName
   externalName: my.database.example.com
-  ports:
-  - port: 12345
 ```
 When looking up the host `my-service.prod.svc.CLUSTER`, the cluster DNS service
@@ -489,17 +487,17 @@ In the ServiceSpec, `externalIPs` can be specified along with any of the `Servic
 In the example below, my-service can be accessed by clients on 80.11.12.10:80 (externalIP:port)
 ```yaml
-kind: Service,
-apiVersion: v1,
+kind: Service
+apiVersion: v1
 metadata:
   name: my-service
 spec:
   selector:
     app: MyApp
   ports:
-  - name: http,
-    protocol: TCP,
-    port: 80,
+  - name: http
+    protocol: TCP
+    port: 80
     targetPort: 9376
   externalIPs:
   - 80.11.12.10

View File

@@ -34,15 +34,17 @@ Download kops from the [releases page](https://github.com/kubernetes/kops/releases)
 On MacOS:
 ```
-wget https://github.com/kubernetes/kops/releases/download/v1.4.1/kops-darwin-amd64
+wget https://github.com/kubernetes/kops/releases/download/1.6.1/kops-darwin-amd64
 chmod +x kops-darwin-amd64
 mv kops-darwin-amd64 /usr/local/bin/kops
+# you can also install using Homebrew
+brew update && brew install kops
 ```
 On Linux:
 ```
-wget https://github.com/kubernetes/kops/releases/download/v1.4.1/kops-linux-amd64
+wget https://github.com/kubernetes/kops/releases/download/1.6.1/kops-linux-amd64
 chmod +x kops-linux-amd64
 mv kops-linux-amd64 /usr/local/bin/kops
 ```

View File

@@ -31,11 +31,11 @@ If you already have the required versions of the OpenStack CLI tools installed a
 #### Install OpenStack CLI tools
 ```sh
-sudo pip install -U --force 'python-openstackclient==2.4.0'
-sudo pip install -U --force 'python-heatclient==1.1.0'
-sudo pip install -U --force 'python-swiftclient==3.0.0'
-sudo pip install -U --force 'python-glanceclient==2.0.0'
-sudo pip install -U --force 'python-novaclient==3.4.0'
+sudo pip install -U --force 'python-openstackclient==3.11.0'
+sudo pip install -U --force 'python-heatclient==1.10.0'
+sudo pip install -U --force 'python-swiftclient==3.3.0'
+sudo pip install -U --force 'python-glanceclient==2.7.0'
+sudo pip install -U --force 'python-novaclient==9.0.1'
 ```
 #### Configure Openstack CLI tools
@@ -196,7 +196,7 @@ See the [OpenStack CLI Reference](http://docs.openstack.org/cli-reference/) for
 ### Salt
-The OpenStack-Heat provider uses a [standalone Salt configuration](/docs/admin/salt/#standalone-salt-configuration-on-gce-and-others).
+The OpenStack-Heat provider uses a [standalone Salt configuration](/docs/admin/salt/#standalone-salt-configuration-on-gce-and-others).
 It only uses Salt for bootstraping the machines and creates no salt-master and does not auto-start the salt-minion service on the nodes.
 ## SSHing to your nodes

View File

@@ -8,7 +8,7 @@ This page provides a series of usage examples demonstrating how to configure Pod
 {% capture prerequisites %}
 * {% include task-tutorial-prereqs.md %}
-* [Create a ConfigMap](/docs/tasks/configure-pod-container/configmap.html)
+* [Create a ConfigMap](/docs/tasks/configure-pod-container/configmap/)
 {% endcapture %}
 {% capture steps %}

View File

@@ -99,7 +99,7 @@ There is not a single command to check on the output of all jobs at once,
 but looping over all the pods is pretty easy:
 ```shell
-$ for p in $(kubectl get pods -l jobgroup=jobexample -o name)
+$ for p in $(kubectl get pods -l jobgroup=jobexample --show-all -o name)
 do
   kubectl logs $p
 done

View File

@@ -168,6 +168,9 @@ $ kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-ser
 # Update a container's image using a json patch with positional arrays
 $ kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
+# Disable a deployment livenessProbe using a json patch with positional arrays
+$ kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
 ```
 ## Editing Resources
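The JSON-patch `remove` op added in the last hunk deletes the value at the given path outright, rather than setting it to null. A minimal sketch of that semantics, using a hypothetical container spec and `python3` for the JSON handling:

```shell
# Hypothetical container spec with a livenessProbe set.
spec='{"name":"app","image":"nginx","livenessProbe":{"httpGet":{"path":"/healthz"}}}'

# A JSON-patch "remove" deletes the key entirely (it does not null it out).
patched=$(echo "$spec" | python3 -c '
import json, sys
obj = json.load(sys.stdin)
del obj["livenessProbe"]  # what {"op": "remove", "path": "/livenessProbe"} does
print(json.dumps(obj, sort_keys=True, separators=(",", ":")))
')

echo "$patched"
```

Removing a path that does not exist is an error under JSON Patch, which is why the doc example targets a probe known to be present.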