addresses issue #81

Signed-off-by: mikebrow <brownwm@us.ibm.com>
pull/349/head
mikebrow 2016-04-08 13:49:34 -05:00
parent 736890d924
commit 62fdb3f5fa
7 changed files with 20 additions and 20 deletions

@@ -68,7 +68,7 @@ Check the tasks and templates in `roles/k8s` if you want to modify anything.
Once the playbook has finished, it will print out the IP of the Kubernetes master:
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
TASK: [k8s | debug msg='k8s master IP is {% raw %}{{ k8s_master.default_ip }}{% endraw %}'] ********
SSH to it as the _core_ user, using the key that was created, and you can list the machines in your cluster:
@@ -77,4 +77,4 @@ SSH to it using the key that was created and using the _core_ user and you can l
MACHINE IP METADATA
a017c422... <node #1 IP> role=node
ad13bf84... <master IP> role=master
e9af8293... <node #2 IP> role=node
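For example, a minimal sketch, assuming the playbook's generated key is at `~/.ssh/id_rsa_k8s` (a hypothetical path) and the printed master IP is `1.2.3.4`:

```shell
# log in to the master as the core user with the generated key
$ ssh -i ~/.ssh/id_rsa_k8s core@1.2.3.4
# on the master, list the machines registered with fleet
$ fleetctl list-machines
```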

@@ -229,23 +229,23 @@ Note that we have passed these two values already as parameter to the apiserver
A template for a replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:
- replace `{{ pillar['dns_replicas'] }}` with `1`
- replace `{{ pillar['dns_domain'] }}` with `cluster.local.`
- replace `{% raw %}{{ pillar['dns_replicas'] }}{% endraw %}` with `1`
- replace `{% raw %}{{ pillar['dns_domain'] }}{% endraw %}` with `cluster.local.`
- add the `--kube_master_url=${KUBERNETES_MASTER}` parameter to the kube2sky container command.
In addition, the service template at [cluster/addons/dns/skydns-svc.yaml.in][12] needs the following replacement:
- `{{ pillar['dns_server'] }}` with `10.10.10.10`.
- `{% raw %}{{ pillar['dns_server'] }}{% endraw %}` with `10.10.10.10`.
To do this automatically:
```shell
```shell{% raw %}
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g;"\
"s,\(command = \"/kube2sky\"\),\\1\\"$'\n'" - --kube_master_url=${KUBERNETES_MASTER},;"\
"s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
cluster/addons/dns/skydns-rc.yaml.in > skydns-rc.yaml
sed -e "s/{{ pillar\['dns_server'\] }}/10.10.10.10/g" \
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml{% endraw %}
```
Now the kube-dns pod and service are ready to be launched:
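A sketch of one way to launch them, assuming the `skydns-rc.yaml` and `skydns-svc.yaml` files generated above are in the current directory:

```shell
$ kubectl create -f ./skydns-rc.yaml
$ kubectl create -f ./skydns-svc.yaml
```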
@@ -326,4 +326,4 @@ Future work will add instructions to this guide to enable support for Kubernetes
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
[11]: https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-rc.yaml.in
[12]: https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-svc.yaml.in
[13]: https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/README.md

@@ -222,10 +222,10 @@ The `Restart Count: 5` indicates that the `simmemleak` container in this pod wa
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
```shell
```shell{% raw %}
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
```
We can see that this container was terminated with `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
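The same last-state details are also visible in human-readable form via `kubectl describe`; a sketch, reusing the pod name from above:

```shell
# shows Restart Count and the OOM-killed Last State without a template
$ kubectl describe pod simmemleak-60xbc
```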
@@ -246,4 +246,4 @@ Currently, one unit of CPU means different things on different cloud providers,
machine types within the same cloud providers. For example, on AWS, the capacity of a node
is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more consistency
across providers and platforms.
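To inspect what a particular node actually reports, the same go-template approach works against the node object; a sketch, assuming a node named `node-1` (hypothetical):

```shell{% raw %}
$ kubectl get node node-1 -o go-template='{{.status.capacity.cpu}}'{% endraw %}
```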

@@ -22,9 +22,9 @@ redis-master 2/2 Running 0 41s
The Redis master is listening on port 6379. To verify this:
```shell
```shell{% raw %}
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
6379{% endraw %}
```
Then we forward port 6379 on the local workstation to port 6379 of the `redis-master` pod.
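A sketch of the forwarding command (exact syntax varies across kubectl versions):

```shell
$ kubectl port-forward redis-master 6379:6379
```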
@@ -43,4 +43,4 @@ $ redis-cli
PONG
```
Now one can debug the database from the local workstation.
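For example, in a second terminal ordinary Redis commands now run against the pod through the forwarded port; a sketch:

```shell
$ redis-cli -p 6379
127.0.0.1:6379> SET greeting hello
OK
127.0.0.1:6379> GET greeting
"hello"
```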

@@ -84,9 +84,9 @@ my-nginx 2 2 2 2 2m my-nginx
More importantly, the pod template's labels are used to create a [`selector`](/docs/user-guide/labels/#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](/docs/user-guide/kubectl/kubectl_get):
```shell
```shell{% raw %}
$ kubectl get deployment/my-nginx -o template --template="{{.spec.selector}}"
map[matchLabels:map[run:my-nginx]]
map[matchLabels:map[run:my-nginx]]{% endraw %}
```
You could also specify the `selector` explicitly, for example if you wanted to include labels in the pod template that you didn't want to select on. In that case, ensure that the selector matches the labels of the pods created from the pod template, and that it won't match pods created by other Deployments. The most straightforward way to ensure the latter is to create a unique label value for the Deployment and to specify it both in the pod template's labels and in the selector's `matchLabels`.
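To confirm which pods the selector matches, the label can be queried directly; a sketch using the `run=my-nginx` label from above:

```shell
$ kubectl get pods -l run=my-nginx
```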

@@ -67,7 +67,7 @@ On most providers, the pod IPs are not externally accessible. The easiest way to
Provided the pod IP is accessible, you should be able to access its http endpoint with curl on port 80:
```shell
$ curl http://$(kubectl get pod nginx -o go-template={{.status.podIP}})
$ curl http://$(kubectl get pod nginx -o go-template={% raw %}{{.status.podIP}}{% endraw %})
```
Delete the pod by name:
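A minimal sketch, assuming the pod is the `nginx` pod created earlier:

```shell
$ kubectl delete pod nginx
```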
@@ -162,4 +162,4 @@ Finally, we have also introduced an environment variable to the `git-monitor` co
## What's Next?
Continue on to [Kubernetes 201](/docs/user-guide/walkthrough/k8s201) or
for a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)

@@ -106,8 +106,8 @@ On most providers, the service IPs are not externally accessible. The easiest wa
Provided the service IP is accessible, you should be able to access its http endpoint with curl on port 80:
```shell
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template={{.spec.clusterIP}})
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template'={{(index .spec.ports 0).port}}')
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template={% raw %}{{.spec.clusterIP}}{% endraw %})
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template={% raw %}{{(index .spec.ports 0).port}}{% endraw %})
$ curl http://${SERVICE_IP}:${SERVICE_PORT}
```