Replace the tab with 4 spaces in md files (#2797)

* Replace the tab with 4 spaces in md files

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of accessing-the-api.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of manage-compute-resources-container.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of run-to-completion-finite-workloads.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of connect-applications-service.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of init-containers.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of calico.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of ingress.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of index.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of index.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of apparmor.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of run-stateful-application.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of index.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of index.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>

* Fix typos of replicasets.md

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
reviewable/pr2970/r1
Andy Yu 2017-03-23 15:10:00 +08:00 committed by Andrew Chen
parent 505387b463
commit 9ce54855d5
38 changed files with 418 additions and 422 deletions

View File

@ -63,7 +63,7 @@ users in its object store.
Once the request is authenticated as coming from a specific user,
it moves to a generic authorization step. This is shown as step **2** in the
diagram.
The inputs to the Authorization step are attributes of the REST request, including:
- the username determined by the Authentication step.
@ -80,7 +80,7 @@ then the request can proceed. If all deny the request, then the request is deni
code 403).
The [Authorization Modules](/docs/admin/authorization) page describes what authorization modules
are available and how to configure them.
For version 1.2, clusters created by `kube-up.sh` are configured so that no authorization is
required for any request.
@ -108,7 +108,7 @@ They act on objects being created, deleted, updated or connected (proxy), but no
Multiple admission controllers can be configured. Each is called in order.
This is shown as step **3** in the diagram.
Unlike Authentication and Authorization Modules, if any admission controller module
rejects, then the request is immediately rejected.
@ -122,7 +122,7 @@ Once a request passes all admission controllers, it is validated using the valid
for the corresponding API object, and then written to the object store (shown as step **4**).
## API Server Ports and IPs
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
@ -132,7 +132,7 @@ By default the Kubernetes API server serves HTTP on 2 ports:
1. `Localhost Port`:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080, change with `--insecure-port` flag.
- defaults IP is localhost, change with `--insecure-bind-address` flag.
@ -141,7 +141,7 @@ By default the Kubernetes API server serves HTTP on 2 ports:
- protected by need to have host access
2. `Secure Port`:
- use whenever possible
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- default is port 6443, change with `--secure-port` flag.
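The flags listed above can be seen together in a single, purely illustrative invocation; the certificate paths are placeholders, and other required flags (such as the etcd endpoints) are omitted:

```shell
# Sketch only: serve the secure port with TLS and keep the localhost port for
# bootstrap, using the flags described in the list above.
kube-apiserver \
  --secure-port=6443 \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080
```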

View File

@ -229,7 +229,7 @@ cross-zone attachments are not generally permitted by cloud providers:
```shell
> kubectl describe pod mypod | grep Node
Node: kubernetes-minion-9vlv/10.240.0.5
> kubectl get node kubernetes-minion-9vlv --show-labels
NAME STATUS AGE LABELS
kubernetes-minion-9vlv Ready 22m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
@ -268,9 +268,9 @@ The pods should be spread across all 3 zones:
```shell
> kubectl describe pod -l app=guestbook | grep Node
Node: kubernetes-minion-9vlv/10.240.0.5
Node: kubernetes-minion-281d/10.240.0.8
Node: kubernetes-minion-olsh/10.240.0.11
> kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
NAME STATUS AGE LABELS

View File

@ -64,16 +64,16 @@ Or you can get detailed information with:
```shell
$ kubectl describe namespaces <name>
Name: default
Labels: <none>
Status: Active
No resource quota.
Resource Limits
Type Resource Min Max Default
---- -------- --- --- ---
Container cpu - - 100m
```
Note that these details show both resource quota (if present) and resource limit ranges.

View File

@ -55,11 +55,11 @@ For jQuery to work in Node, a window with a document is required. Since no such
```js
require("jsdom").env("", function(err, window) {
if (err) {
console.error(err);
return;
}
var $ = require("jquery")(window);
});
```

View File

@ -149,7 +149,7 @@ in 13 underlying clusters:
for CLUSTER in asia-east1-c asia-east1-a asia-east1-b \
europe-west1-d europe-west1-c europe-west1-b \
us-central1-f us-central1-a us-central1-b us-central1-c \
us-east1-d us-east1-c us-east1-b
do
kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
done
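A follow-up check, sketched under the assumption that `kubectl run` created a Deployment named `nginx` in each cluster:

```shell
# Verify the nginx Deployment exists in every underlying cluster context.
for CLUSTER in asia-east1-c asia-east1-a asia-east1-b \
               europe-west1-d europe-west1-c europe-west1-b \
               us-central1-f us-central1-a us-central1-b us-central1-c \
               us-east1-d us-east1-c us-east1-b
do
  kubectl --context=$CLUSTER get deployment nginx
done
```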

View File

@ -202,20 +202,20 @@ You can check node capacities and amounts allocated with the
```shell
$ kubectl.sh describe nodes e2e-test-minion-group-4lw4
Name: e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 7679792Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 1800m
memory: 7474992Ki
pods: 110
[ ... lines removed for clarity ...]
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)
@ -225,9 +225,9 @@ Non-terminated Pods: (5 in total)
kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
```
In the preceding output, you can see that if a Pod requests more than 1120m

View File

@ -91,15 +91,15 @@ Currently, the logs for a hook handler are not exposed in the pod events. If you
```
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
```

View File

@ -177,7 +177,7 @@ If it failed, then you will see:
```shell
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
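If the pull failed because of missing registry credentials, one common remedy (values below are placeholders, not taken from this page) is to register the credentials as an image pull Secret that the pod can reference via `imagePullSecrets`:

```shell
# Create a docker-registry Secret holding the private registry credentials.
kubectl create secret docker-registry myregistrykey \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
```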

View File

@ -45,20 +45,20 @@ Check on the status of the job using this command:
```shell
$ kubectl describe jobs/pi
Name: pi
Namespace: default
Image(s): perl
Selector: controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
Parallelism: 1
Completions: 1
Start Time: Tue, 07 Jun 2016 10:56:16 +0200
Labels: controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495,job-name=pi
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q
```
To view completed pods of a job, use `kubectl get pods --show-all`; without `--show-all`, completed pods are omitted from the listing.
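For example (the `job-name=pi` label comes from the `kubectl describe jobs/pi` output above):

```shell
# List the job's pods, including ones that have already completed.
kubectl get pods --show-all --selector=job-name=pi
```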

View File

@ -56,7 +56,7 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni
You can create a Service for your 2 nginx replicas with `kubectl expose`:
```shell
$ kubectl expose deployment/my-nginx
service "my-nginx" exposed
```
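For reference, the `expose` command is roughly equivalent to creating the Service from a manifest by hand; the sketch below infers the selector and port from the `kubectl describe svc my-nginx` output shown further down:

```shell
# Roughly equivalent Service definition, created from stdin.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  selector:
    run: my-nginx
  ports:
  - protocol: TCP
    port: 80
EOF
```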
@ -77,15 +77,15 @@ As mentioned previously, a Service is backed by a group of pods. These pods are
```shell
$ kubectl describe svc my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: ClusterIP
IP: 10.0.162.149
Port: <unset> 80/TCP
Endpoints: 10.244.2.5:80,10.244.3.4:80
Session Affinity: None
No events.
$ kubectl get ep my-nginx
@ -213,7 +213,7 @@ Let's test this from a pod (the same secret is being reused for simplicity, the
```shell
$ kubectl create -f ./curlpod.yaml
$ kubectl get pods -l app=curlpod
NAME READY STATUS RESTARTS AGE
curl-deployment-1515033274-1410r 1/1 Running 0 1m
$ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt

View File

@ -141,36 +141,36 @@ NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
$ kubectl describe -f myapp.yaml
i11:32 $ kubectl describe -f examples/init-container.yaml
Name: myapp-pod
Namespace: default
[...]
Labels: app=myapp
Status: Pending
[...]
Init Containers:
init-myservice:
[...]
State: Running
[...]
init-mydb:
[...]
State: Running
[...]
Containers:
myapp-container:
[...]
State: Waiting
Reason: PodInitializing
Ready: False
[...]
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201
16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image "busybox"
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image "busybox"
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container with docker id 5ced34a04634; Security:[seccomp=unconfined]
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634
$ kubectl logs myapp-pod -c init-myservice # Inspect the first init container
$ kubectl logs myapp-pod -c init-mydb # Inspect the second init container
```

View File

@ -92,9 +92,9 @@ web server:
<i>NOTE: If you do not want Jekyll to interfere with your other globally installed gems, you can use `bundler`:</i>
gem install bundler
bundle install
bundler exec jekyll serve
<i> Regardless of whether you use `bundler` or not, your copy of the site will then be viewable at: [http://localhost:4000](http://localhost:4000)</i>

View File

@ -52,8 +52,8 @@ yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion-1
192.168.121.66 centos-minion-2
192.168.121.67 centos-minion-3" >> /etc/hosts
```
@ -147,9 +147,9 @@ FLANNEL_ETCD_PREFIX="/kube-centos/network"
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```

View File

@ -77,10 +77,10 @@ SSH to it using the key that was created and using the _core_ user and you can l
$ ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
$ fleetctl list-machines
MACHINE IP METADATA
a017c422... <node #1 IP> role=node
ad13bf84... <master IP> role=master
e9af8293... <node #2 IP> role=node
## Support Level

View File

@ -48,8 +48,8 @@ dnf -y install etcd
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
* Edit /etc/kubernetes/config (which should be the same on all hosts) to set
@ -95,9 +95,9 @@ ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```

View File

@ -227,13 +227,13 @@ The ```kubectl describe``` command gives you more details about the logs
```
# kubectl describe -n kube-system po kube-dns-2924299975-1l2t7
2m 2m 1 {kubelet nac} spec.containers{flannel} Warning Failed Failed to start container with docker id 927e7ccdc32b with error: Error response from daemon: {"message":"chown /etc/resolv.conf: operation not permitted"}
```
Or
```
6m 1m 191 {kubelet nac} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2924299975-1l2t7_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2924299975-1l2t7_kube-system(dee8ef21-fbcb-11e6-ba19-38d547e0006a)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
```
You can then search Google for the error messages, which may help you find a solution.

View File

@ -15,7 +15,7 @@ This guide will set up a simple Kubernetes cluster with a single Kubernetes mast
- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively as do a number of other Linux distributions.
- All machines should have Docker >= 1.7.0 installed.
- To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
- All machines should have connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).

View File

@ -70,7 +70,7 @@ At this stage, you can query the controller model as well:
```
juju status
Model Controller Cloud/Region Version
controller k8s lxd/localhost 2.0.2
App Version Status Scale Charm Store Rev OS Notes
@ -142,16 +142,16 @@ If the deployment is larger the following commands will run on all units success
```
juju show-status kubernetes-master --format json | \
jq --raw-output '.applications."kubernetes-master".units | keys[]' | \
xargs -I UNIT juju ssh UNIT "sudo sed -i 's/KUBE_API_ARGS=\"/KUBE_API_ARGS=\"--allow-privileged\ /' /etc/default/kube-apiserver && sudo systemctl restart kube-apiserver.service"
```
2. Update all workers
```
juju show-status kubernetes-worker --format json | \
jq --raw-output '.applications."kubernetes-worker".units | keys[]' | \
xargs -I UNIT juju ssh UNIT "sudo sed -i 's/KUBELET_ARGS=\"/KUBELET_ARGS=\"--allow-privileged\ /' /etc/default/kubelet && sudo systemctl restart kubelet.service"
```

View File

@ -71,7 +71,7 @@ Sample Config:
working-dir = <Folder in which VMs are provisioned, can be null>
vm-uuid = <VM Instance UUID of virtual machine which can be retrieved from instanceUuid property in VmConfigInfo, or also set as vc.uuid in VMX file. If empty, will be retrieved from sysfs (requires root)>
[Disk]
scsicontrollertype = pvscsi
```
* Set the cloud provider via ```--cloud-provider=vsphere``` flag for each instance of kubelet, apiserver and controller manager.

View File

@ -16,7 +16,7 @@ In the reference section, you can find reference documentation for Kubernetes AP
## CLI References
* [kubectl](/docs/user-guide/kubectl-overview/) - Runs commands against Kubernetes clusters.
* [JSONPath](/docs/user-guide/jsonpath/) - Syntax guide for using [JSONPath expressions](http://goessner.net/articles/JsonPath/) with kubectl.
* [kube-apiserver](/docs/admin/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers.
* [kube-proxy](/docs/admin/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of backends.
* [kube-scheduler](/docs/admin/kube-scheduler/) - A policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity.

View File

@ -55,11 +55,11 @@ For jQuery to work in Node, a window with a document is required. Since no such
```js
require("jsdom").env("", function(err, window) {
if (err) {
console.error(err);
return;
}
var $ = require("jquery")(window);
});
```

View File

@ -345,7 +345,7 @@ Check that:
delete any ingresses created before the cluster joined the
federation (and had its GLBC reconfigured), and recreate them if
necessary.
#### This troubleshooting guide did not help me solve my problem
Please use one of our [support channels](http://kubernetes.io/docs/troubleshooting/) to seek assistance.

View File

@ -69,8 +69,8 @@ Let's describe the quota to see what is currently being consumed in this namespa
$ kubectl describe quota object-counts --namespace=quota-example
Name: object-counts
Namespace: quota-example
Resource Used Hard
-------- ---- ----
persistentvolumeclaims 0 2
services.loadbalancers 0 2
services.nodeports 0 0
@ -162,8 +162,8 @@ Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 7s 11 {replicaset-controller } Warning FailedCreate Error creating: pods "nginx-3137573019-" is forbidden: Failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
```
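One way past that error, sketched with illustrative resource values (the `quota-example` namespace comes from the commands above):

```shell
# Re-run the workload with explicit requests and limits so the
# compute-resources quota admits the pods.
kubectl run nginx \
  --image=nginx \
  --replicas=1 \
  --requests=cpu=100m,memory=256Mi \
  --limits=cpu=200m,memory=512Mi \
  --namespace=quota-example
```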
@ -257,7 +257,7 @@ Name: best-effort
Namespace: quota-scopes
Scopes: BestEffort
* Matches all pods that have best effort quality of service.
Resource Used Hard
-------- ---- ----
pods 0 10

View File

@ -77,10 +77,10 @@ Next, we will check that we can discover the rabbitmq service:
# Note the rabbitmq-service has a DNS name, provided by Kubernetes:
root@temp-loe07:/# nslookup rabbitmq-service
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: rabbitmq-service.default.svc.cluster.local
Address: 10.0.147.152
# Your address will vary.
@ -222,27 +222,27 @@ kubectl create -f ./job.yaml
Now wait a bit, then check on the job.
```shell
$ kubectl describe jobs/job-wq-1
Name: job-wq-1
Namespace: default
Image(s): gcr.io/causal-jigsaw-637/job-wq-1
Selector: app in (job-wq-1)
Parallelism: 2
Completions: 8
Labels: app=job-wq-1
Pods Statuses: 0 Running / 8 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-hcobb
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-weytj
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-qaam5
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-b67sr
26s 26s 1 {job } SuccessfulCreate Created pod: job-wq-1-xe5hj
15s 15s 1 {job } SuccessfulCreate Created pod: job-wq-1-w2zqe
14s 14s 1 {job } SuccessfulCreate Created pod: job-wq-1-d6ppa
14s 14s 1 {job } SuccessfulCreate Created pod: job-wq-1-p17e0
```
All our pods succeeded. Yay.

View File

@ -175,21 +175,21 @@ kubectl create -f ./job.yaml
Now wait a bit, then check on the job.
```shell
$ kubectl describe jobs/job-wq-2
Name: job-wq-2
Namespace: default
Image(s): gcr.io/exampleproject/job-wq-2
Selector: app in (job-wq-2)
Parallelism: 2
Completions: Unset
Start Time: Mon, 11 Jan 2016 17:07:59 -0800
Labels: app=job-wq-2
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8
$ kubectl logs pods/job-wq-2-7r7b2

View File

@ -222,39 +222,39 @@ To wrap up, let's look at what happens if we try to specify a profile that hasn'
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
pod "hello-apparmor-2" created
$ kubectl describe pod hello-apparmor-2
Name: hello-apparmor-2
Namespace: default
Node: gke-test-default-pool-239f5d02-x1kf/
Start Time: Tue, 30 Aug 2016 17:58:56 -0700
Labels: <none>
Status: Failed
Reason: AppArmor
Message: Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
IP:
Controllers: <none>
Containers:
hello:
Image: busybox
Port:
Command:
sh
-c
echo 'Hello AppArmor!' && sleep 1h
Requests:
cpu: 100m
Environment Variables: <none>
Volumes:
default-token-dnz7v:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dnz7v
QoS Tier: Burstable
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
23s 23s 1 {default-scheduler } Normal Scheduled Successfully assigned hello-apparmor-2 to e2e-test-stclair-minion-group-t1f5
23s 23s 1 {kubelet e2e-test-stclair-minion-group-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
Note the pod status is Failed, with a helpful error message: `Pod Cannot enforce AppArmor: profile
"k8s-apparmor-example-allow-write" is not loaded`. An event was also recorded with the same message.

View File

@ -58,19 +58,19 @@ kubectl describe deployment hello
The output is similar to this:
```
Name: hello
Namespace: default
CreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700
Labels: app=hello
tier=backend
track=stable
Selector: app=hello,tier=backend,track=stable
Replicas: 7 updated | 7 total | 7 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: <none>
NewReplicaSet: hello-3621623197 (7/7 replicas created)
Events:
...
```

View File

@ -91,20 +91,20 @@ for a secure solution.
kubectl describe deployment mysql
Name: mysql
Namespace: default
CreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700
Labels: app=mysql
Selector: app=mysql
Replicas: 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
OldReplicaSets: <none>
NewReplicaSet: mysql-63082529 (1/1 replicas created)
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33s 33s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-63082529 to 1
1. List the pods created by the Deployment:
@ -117,33 +117,33 @@ for a secure solution.
kubectl describe pv mysql-pv
Name: mysql-pv
Labels: <none>
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 20Gi
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: mysql-disk
FSType: ext4
Partition: 0
ReadOnly: false
No events.
1. Inspect the PersistentVolumeClaim:
kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
Status: Bound
Volume: mysql-pv
Labels: <none>
Capacity: 20Gi
Access Modes: RWO
No events.
## Accessing the MySQL instance

View File

@ -877,9 +877,9 @@ word to test the server's health.
ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181}
OK=$(echo ruok | nc 127.0.0.1 $ZK_CLIENT_PORT)
if [ "$OK" == "imok" ]; then
exit 0
else
exit 1
fi
```
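The same four-letter-word check can be run by hand against a running pod; the pod name `zk-0` below is an assumption for illustration, not a value from this change:

```shell
# Ask a ZooKeeper server whether it is healthy; a healthy server replies "imok".
kubectl exec zk-0 -- sh -c 'echo ruok | nc 127.0.0.1 2181'
```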

View File

@ -57,49 +57,49 @@ We can retrieve a lot more information about each of these pods using `kubectl d
```shell
$ kubectl describe pod nginx-deployment-1006230814-6winp
Name: nginx-deployment-1006230814-6winp
Namespace: default
Node: kubernetes-node-wul5/10.240.0.9
Start Time: Thu, 24 Mar 2016 01:39:49 +0000
Labels: app=nginx,pod-template-hash=1006230814
Status: Running
IP: 10.244.0.6
Controllers: ReplicaSet/nginx-deployment-1006230814
Containers:
nginx:
Container ID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
Image: nginx
Image ID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
Port: 80/TCP
QoS Tier:
cpu: Guaranteed
memory: Guaranteed
Limits:
cpu: 500m
memory: 128Mi
Requests:
memory: 128Mi
cpu: 500m
State: Running
Started: Thu, 24 Mar 2016 01:39:51 +0000
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
54s 54s 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-deployment-1006230814-6winp to kubernetes-node-wul5
54s 54s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulling pulling image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulled Successfully pulled image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Created Created container with docker id 90315cc9f513
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Started Started container with docker id 90315cc9f513
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
@ -132,35 +132,35 @@ To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can
```shell
$ kubectl describe pod nginx-deployment-1370807587-fz9sd
Name: nginx-deployment-1370807587-fz9sd
Namespace: default
Node: /
Labels: app=nginx,pod-template-hash=1370807587
Status: Pending
IP:
Controllers: ReplicaSet/nginx-deployment-1370807587
Containers:
nginx:
Image: nginx
Port: 80/TCP
QoS Tier:
memory: Guaranteed
cpu: Guaranteed
Limits:
cpu: 1
memory: 128Mi
Requests:
cpu: 1
memory: 128Mi
Environment Variables:
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 48s 7 {default-scheduler } Warning FailedScheduling pod (nginx-deployment-1370807587-fz9sd) failed to fit in any node
fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000
fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000
```
@ -270,34 +270,34 @@ kubernetes-node-st6x Ready 1h
kubernetes-node-unaj Ready 1h
$ kubectl describe node kubernetes-node-861h
Name: kubernetes-node-861h
Labels: kubernetes.io/hostname=kubernetes-node-861h
CreationTimestamp: Fri, 10 Jul 2015 14:32:29 -0700
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
Ready Unknown Fri, 10 Jul 2015 14:34:32 -0700 Fri, 10 Jul 2015 14:35:15 -0700 Kubelet stopped posting node status.
Addresses: 10.240.115.55,104.197.0.26
Capacity:
cpu: 1
memory: 3800808Ki
pods: 100
Version:
Kernel Version: 3.16.0-0.bpo.4-amd64
OS Image: Debian GNU/Linux 7 (wheezy)
Container Runtime Version: docker://Unknown
Kubelet Version: v0.21.1-185-gffc5a86098dc01
Kube-Proxy Version: v0.21.1-185-gffc5a86098dc01
PodCIDR: 10.244.0.0/24
ExternalID: 15233045891481496305
Pods: (0 in total)
Namespace Name
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Fri, 10 Jul 2015 14:32:28 -0700 Fri, 10 Jul 2015 14:32:28 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
Fri, 10 Jul 2015 14:32:30 -0700 Fri, 10 Jul 2015 14:32:30 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
Fri, 10 Jul 2015 14:33:00 -0700 Fri, 10 Jul 2015 14:33:00 -0700 1 {kubelet kubernetes-node-861h} starting Starting kubelet.
Fri, 10 Jul 2015 14:33:02 -0700 Fri, 10 Jul 2015 14:33:02 -0700 1 {kubelet kubernetes-node-861h} NodeReady Node kubernetes-node-861h status is now: NodeReady
Fri, 10 Jul 2015 14:35:15 -0700 Fri, 10 Jul 2015 14:35:15 -0700 1 {controllermanager } NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
$ kubectl get node kubernetes-node-861h -o yaml

View File

@ -44,34 +44,34 @@ If you need help, just run `kubectl help` from the terminal window.
The following table includes short descriptions and the general syntax for all of the `kubectl` operations:
Operation | Syntax | Description
-------------------- | -------------------- | --------------------
`annotate` | `kubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Add or update the annotations of one or more resources.
`api-versions` | `kubectl api-versions [flags]` | List the API versions that are available.
`apply` | `kubectl apply -f FILENAME [flags]`| Apply a configuration change to a resource from a file or stdin.
`attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | Attach to a running container either to view the output stream or interact with the container (stdin).
`autoscale` | `kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]` | Automatically scale the set of pods that are managed by a replication controller.
`cluster-info` | `kubectl cluster-info [flags]` | Display endpoint information about the master and services in the cluster.
`config` | `kubectl config SUBCOMMAND [flags]` | Modifies kubeconfig files. See the individual subcommands for details.
`create` | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin.
`delete` | `kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags]` | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
`describe` | `kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags]` | Display the detailed state of one or more resources.
`edit` | `kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags]` | Edit and update the definition of one or more resources on the server by using the default editor.
`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod.
`explain` | `kubectl explain [--include-extended-apis=true] [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc.
`expose` | `kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [----external-ip=external-ip-of-service] [--type=type] [flags]` | Expose a replication controller, service, or pod as a new Kubernetes service.
`get` | `kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags]` | List one or more resources.
`label` | `kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Add or update the labels of one or more resources.
`logs` | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | Print the logs for a container in a pod.
`patch` | `kubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags]` | Update one or more fields of a resource by using the strategic merge patch process.
`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | Forward one or more local ports to a pod.
`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server.
`replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin.
`rolling-update` | `kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags]` | Perform a rolling update by gradually replacing the specified replication controller and its pods.
`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
`scale` | `kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]` | Update the size of the specified replication controller.
`stop` | `kubectl stop` | Deprecated: Instead, see `kubectl delete`.
`version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.
Remember: For more about command operations, see the [kubectl](/docs/user-guide/kubectl) reference documentation.
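A short, illustrative sequence combining several of the operations above (resource and file names are placeholders):

```shell
kubectl create -f ./my-manifest.yaml     # create resources from a file
kubectl get pods -o wide                 # list pods with extra detail
kubectl describe pod my-pod              # show detailed state of one pod
kubectl logs my-pod -c my-container      # print a container's logs
```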
@ -79,7 +79,7 @@ Remember: For more about command operations, see the [kubectl](/docs/user-guide/
The following table includes a list of all the supported resource types and their abbreviated aliases:
Resource type | Abbreviated alias
-------------------- | --------------------
`clusters` |
`clusterrolebindings` |

View File

@ -47,43 +47,43 @@ kubectl
### SEE ALSO
* [kubectl annotate](kubectl_annotate.md) - Update the annotations on a resource
* [kubectl api-versions](kubectl_api-versions.md) - Print the supported API versions on the server, in the form of "group/version"
* [kubectl apply](kubectl_apply.md) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](kubectl_attach.md) - Attach to a running container
* [kubectl autoscale](kubectl_autoscale.md) - Auto-scale a Deployment, ReplicaSet, or ReplicationController
* [kubectl certificate](kubectl_certificate.md) - Modify certificate resources.
* [kubectl cluster-info](kubectl_cluster-info.md) - Display cluster info
* [kubectl completion](kubectl_completion.md) - Output shell completion code for the given shell (bash or zsh)
* [kubectl config](kubectl_config.md) - Modify kubeconfig files
* [kubectl convert](kubectl_convert.md) - Convert config files between different API versions
* [kubectl cordon](kubectl_cordon.md) - Mark node as unschedulable
* [kubectl cp](kubectl_cp.md) - Copy files and directories to and from containers.
* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin
* [kubectl delete](kubectl_delete.md) - Delete resources by filenames, stdin, resources and names, or by resources and label selector
* [kubectl describe](kubectl_describe.md) - Show details of a specific resource or group of resources
* [kubectl drain](kubectl_drain.md) - Drain node in preparation for maintenance
* [kubectl edit](kubectl_edit.md) - Edit a resource on the server
* [kubectl exec](kubectl_exec.md) - Execute a command in a container
* [kubectl explain](kubectl_explain.md) - Documentation of resources
* [kubectl expose](kubectl_expose.md) - Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
* [kubectl get](kubectl_get.md) - Display one or many resources
* [kubectl label](kubectl_label.md) - Update the labels on a resource
* [kubectl logs](kubectl_logs.md) - Print the logs for a container in a pod
* [kubectl options](kubectl_options.md) -
* [kubectl patch](kubectl_patch.md) - Update field(s) of a resource using strategic merge patch
* [kubectl port-forward](kubectl_port-forward.md) - Forward one or more local ports to a pod
* [kubectl proxy](kubectl_proxy.md) - Run a proxy to the Kubernetes API server
* [kubectl replace](kubectl_replace.md) - Replace a resource by filename or stdin
* [kubectl rolling-update](kubectl_rolling-update.md) - Perform a rolling update of the given ReplicationController
* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout
* [kubectl run](kubectl_run.md) - Run a particular image on the cluster
* [kubectl scale](kubectl_scale.md) - Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
* [kubectl set](kubectl_set.md) - Set specific features on objects
* [kubectl taint](kubectl_taint.md) - Update the taints on one or more nodes
* [kubectl top](kubectl_top.md) - Display Resource (CPU/Memory/Storage) usage
* [kubectl uncordon](kubectl_uncordon.md) - Mark node as schedulable
* [kubectl version](kubectl_version.md) - Print the client and server version information
* [kubectl annotate](kubectl_annotate.md) - Update the annotations on a resource
* [kubectl api-versions](kubectl_api-versions.md) - Print the supported API versions on the server, in the form of "group/version"
* [kubectl apply](kubectl_apply.md) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](kubectl_attach.md) - Attach to a running container
* [kubectl autoscale](kubectl_autoscale.md) - Auto-scale a Deployment, ReplicaSet, or ReplicationController
* [kubectl certificate](kubectl_certificate.md) - Modify certificate resources.
* [kubectl cluster-info](kubectl_cluster-info.md) - Display cluster info
* [kubectl completion](kubectl_completion.md) - Output shell completion code for the given shell (bash or zsh)
* [kubectl config](kubectl_config.md) - Modify kubeconfig files
* [kubectl convert](kubectl_convert.md) - Convert config files between different API versions
* [kubectl cordon](kubectl_cordon.md) - Mark node as unschedulable
* [kubectl cp](kubectl_cp.md) - Copy files and directories to and from containers.
* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin
* [kubectl delete](kubectl_delete.md) - Delete resources by filenames, stdin, resources and names, or by resources and label selector
* [kubectl describe](kubectl_describe.md) - Show details of a specific resource or group of resources
* [kubectl drain](kubectl_drain.md) - Drain node in preparation for maintenance
* [kubectl edit](kubectl_edit.md) - Edit a resource on the server
* [kubectl exec](kubectl_exec.md) - Execute a command in a container
* [kubectl explain](kubectl_explain.md) - Documentation of resources
* [kubectl expose](kubectl_expose.md) - Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
* [kubectl get](kubectl_get.md) - Display one or many resources
* [kubectl label](kubectl_label.md) - Update the labels on a resource
* [kubectl logs](kubectl_logs.md) - Print the logs for a container in a pod
* [kubectl options](kubectl_options.md) - Print the list of flags inherited by all commands
* [kubectl patch](kubectl_patch.md) - Update field(s) of a resource using strategic merge patch
* [kubectl port-forward](kubectl_port-forward.md) - Forward one or more local ports to a pod
* [kubectl proxy](kubectl_proxy.md) - Run a proxy to the Kubernetes API server
* [kubectl replace](kubectl_replace.md) - Replace a resource by filename or stdin
* [kubectl rolling-update](kubectl_rolling-update.md) - Perform a rolling update of the given ReplicationController
* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout
* [kubectl run](kubectl_run.md) - Run a particular image on the cluster
* [kubectl scale](kubectl_scale.md) - Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
* [kubectl set](kubectl_set.md) - Set specific features on objects
* [kubectl taint](kubectl_taint.md) - Update the taints on one or more nodes
* [kubectl top](kubectl_top.md) - Display Resource (CPU/Memory/Storage) usage
* [kubectl uncordon](kubectl_uncordon.md) - Mark node as schedulable
* [kubectl version](kubectl_version.md) - Print the client and server version information
###### Auto generated by spf13/cobra on 13-Dec-2016
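To make the reference above more concrete, here is a minimal workflow sketch that chains a few of the commands listed; the `my-nginx` deployment name is a placeholder, not something defined in this document.

```shell
# Enable completion for the current bash session (see kubectl completion above).
source <(kubectl completion bash)

# Inspect workloads; "my-nginx" is a hypothetical deployment name.
kubectl get pods
kubectl describe deployment my-nginx

# Scale the hypothetical deployment and watch the rollout progress.
kubectl scale deployment my-nginx --replicas=3
kubectl rollout status deployment/my-nginx
```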


@@ -55,11 +55,11 @@ For jQuery to work in Node, a window with a document is required. Since no such
```js
require("jsdom").env("", function(err, window) {
    if (err) {
        console.error(err);
        return;
    }
    var $ = require("jquery")(window);
});
```


@@ -246,11 +246,11 @@ web-0 # apt-get update && apt-get install -y dnsutils
...
web-0 # nslookup -type=srv nginx.default
Server:         10.0.0.10
Address:        10.0.0.10#53

nginx.default.svc.cluster.local service = 10 50 0 web-1.ub.default.svc.cluster.local.
nginx.default.svc.cluster.local service = 10 50 0 web-0.ub.default.svc.cluster.local.
```
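Each SRV target in the output above is itself a resolvable per-pod DNS name. As a small follow-up sketch, run from the same `web-0` shell and using a hostname copied verbatim from that output (the resolved address is omitted here rather than invented):

```shell
web-0 # nslookup web-0.ub.default.svc.cluster.local
# The answer should contain that pod's cluster IP address.
```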
## Updating a PetSet


@@ -55,20 +55,20 @@ create the defined ReplicaSet and the pods that it manages.
$ kubectl create -f frontend.yaml
replicaset "frontend" created
$ kubectl describe rs/frontend
Name:           frontend
Namespace:      default
Image(s):       gcr.io/google_samples/gb-frontend:v3
Selector:       tier=frontend,tier in (frontend)
Labels:         app=guestbook,tier=frontend
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen    LastSeen    Count    From                        SubobjectPath    Type      Reason              Message
  ---------    --------    -----    ----                        -------------    --------  ------              -------
  1m           1m          1        {replicaset-controller }                     Normal    SuccessfulCreate    Created pod: frontend-qhloh
  1m           1m          1        {replicaset-controller }                     Normal    SuccessfulCreate    Created pod: frontend-dnjpy
  1m           1m          1        {replicaset-controller }                     Normal    SuccessfulCreate    Created pod: frontend-9si5l
$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
frontend-9si5l   1/1       Running   0          1m
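# Illustrative aside, not part of the captured session above: the same pods can be
# listed by label; the "tier=frontend" selector is taken from the describe output.
$ kubectl get pods -l tier=frontend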


@@ -46,29 +46,25 @@ Check on the status of the ReplicationController using this command:
```shell
$ kubectl describe replicationcontrollers/nginx
Name:           nginx
Namespace:      default
Image(s):       nginx
Selector:       app=nginx
Labels:         app=nginx
Replicas:       3 current / 3 desired
Pods Status:    0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen                         LastSeen                          Count    From                        SubobjectPath    Reason              Message
  Thu, 24 Sep 2015 10:38:20 -0700   Thu, 24 Sep 2015 10:38:20 -0700   1        {replication-controller }                    SuccessfulCreate    Created pod: nginx-qrm3m
  Thu, 24 Sep 2015 10:38:20 -0700   Thu, 24 Sep 2015 10:38:20 -0700   1        {replication-controller }                    SuccessfulCreate    Created pod: nginx-3ntk0
  Thu, 24 Sep 2015 10:38:20 -0700   Thu, 24 Sep 2015 10:38:20 -0700   1        {replication-controller }                    SuccessfulCreate    Created pod: nginx-4ok8v
```
Here, 3 pods have been made, but none are running yet, perhaps because the image is being pulled.
A little later, the same command may show:
```shell
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
```
To list all the pods that belong to the rc in a machine readable form, you can use a command like this:
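The hunk ends before the command itself is shown, so purely as a sketch (the `app=nginx` selector comes from the describe output above; the command on the actual page may differ):

```shell
$ pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
$ echo $pods
```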


@@ -71,17 +71,17 @@ NAME           TYPE      DATA      AGE
db-user-pass   Opaque    2         51s
$ kubectl describe secrets/db-user-pass
Name:           db-user-pass
Namespace:      default
Labels:         <none>
Annotations:    <none>

Type:           Opaque

Data
====
password.txt:   13 bytes
username.txt:   6 bytes
```
Note that neither `get` nor `describe` shows the contents of the file by default.
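Because the stored values are only base64 encoded, they can still be recovered explicitly when needed; a minimal sketch using the secret and key names from the example above:

```shell
# Show the Secret with its data fields still base64 encoded.
$ kubectl get secret db-user-pass -o yaml

# Decode a single key locally; the dot in "username.txt" must be escaped in jsonpath.
$ kubectl get secret db-user-pass -o jsonpath='{.data.username\.txt}' | base64 --decode
```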


@@ -9,15 +9,15 @@ title: Contributing to the Kubernetes Documentation
var forwarding=window.location.hash.replace("#","");
$( document ).ready(function() {
if(forwarding) {
$("#generalInstructions").hide();
$("#continueEdit").show();
$("#continueEditButton").text("Edit " + forwarding);
$("#continueEditButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/edit/master/" + forwarding)
$("#viewOnGithubButton").text("View " + forwarding + " on GitHub");
$("#viewOnGithubButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/tree/master/" + forwarding)
$("#generalInstructions").hide();
$("#continueEdit").show();
$("#continueEditButton").text("Edit " + forwarding);
$("#continueEditButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/edit/master/" + forwarding)
$("#viewOnGithubButton").text("View " + forwarding + " on GitHub");
$("#viewOnGithubButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/tree/master/" + forwarding)
} else {
$("#generalInstructions").show();
$("#continueEdit").hide();
$("#continueEdit").hide();
}
});
</script>