Code snippets shouldn't include the command prompt (#12779)

pull/12771/head^2
Neha Yadav 2019-03-07 15:01:05 +05:30 committed by Kubernetes Prow Robot
parent 99e4d3bac6
commit d3cca48e3f
40 changed files with 1052 additions and 441 deletions

View File

@ -231,8 +231,11 @@ refresh the local list for valid certificates.
On each client, perform the following operations:
```bash
$ sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
$ sudo update-ca-certificates
sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
sudo update-ca-certificates
```
```
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....

View File

@ -35,14 +35,14 @@ a container that writes some text to standard output once per second.
To run this pod, use the following command:
```shell
$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
pod/counter created
```
To fetch the logs, use the `kubectl logs` command, as follows:
```shell
$ kubectl logs counter
kubectl logs counter
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
@ -178,7 +178,9 @@ Now when you run this pod, you can access each log stream separately by
running the following commands:
```shell
$ kubectl logs counter count-log-1
kubectl logs counter count-log-1
```
```
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
@ -186,7 +188,9 @@ $ kubectl logs counter count-log-1
```
```shell
$ kubectl logs counter count-log-2
kubectl logs counter count-log-2
```
```
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2

View File

@ -26,7 +26,10 @@ Many applications require multiple resources to be created, such as a Deployment
Multiple resources can be created the same way as a single resource:
```shell
$ kubectl create -f https://k8s.io/examples/application/nginx-app.yaml
kubectl create -f https://k8s.io/examples/application/nginx-app.yaml
```
```shell
service/my-nginx-svc created
deployment.apps/my-nginx created
```
@ -36,13 +39,13 @@ The resources will be created in the order they appear in the file. Therefore, i
`kubectl create` also accepts multiple `-f` arguments:
```shell
$ kubectl create -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
kubectl create -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```
And a directory can be specified rather than or in addition to individual files:
```shell
$ kubectl create -f https://k8s.io/examples/application/nginx/
kubectl create -f https://k8s.io/examples/application/nginx/
```
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
@ -52,7 +55,10 @@ It is a recommended practice to put resources related to the same microservice o
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
```shell
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
```
```shell
deployment.apps/my-nginx created
```
@ -61,7 +67,10 @@ deployment.apps/my-nginx created
Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
```shell
$ kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml
kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml
```
```shell
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
@ -69,13 +78,16 @@ service "my-nginx-svc" deleted
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
```shell
$ kubectl delete deployments/my-nginx services/my-nginx-svc
kubectl delete deployments/my-nginx services/my-nginx-svc
```
For larger numbers of resources, you'll find it easier to use the selector (label query), specified with `-l` or `--selector`, to filter resources by their labels:
```shell
$ kubectl delete deployment,services -l app=nginx
kubectl delete deployment,services -l app=nginx
```
```shell
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
@ -83,7 +95,10 @@ service "my-nginx-svc" deleted
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
```shell
$ kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
```
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s
```
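For the `xargs` form mentioned above, a rough sketch of the same chaining (hypothetical, reusing the example directory) might look like:

```shell
# Equivalent chaining with xargs instead of $()
kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs kubectl get
```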
@ -108,14 +123,20 @@ project/k8s/development
By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we had tried to create the resources in this directory using the following command, we would have encountered an error:
```shell
$ kubectl create -f project/k8s/development
kubectl create -f project/k8s/development
```
```shell
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
```
Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as follows:
```shell
$ kubectl create -f project/k8s/development --recursive
kubectl create -f project/k8s/development --recursive
```
```shell
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
@ -126,7 +147,10 @@ The `--recursive` flag works with any operation that accepts the `--filename,-f`
The `--recursive` flag also works when multiple `-f` arguments are provided:
```shell
$ kubectl create -f project/k8s/namespaces -f project/k8s/development --recursive
kubectl create -f project/k8s/namespaces -f project/k8s/development --recursive
```
```shell
namespace/development created
namespace/staging created
configmap/my-config created
@ -169,8 +193,11 @@ and
The labels allow us to slice and dice our resources along any dimension specified by a label:
```shell
$ kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
$ kubectl get pods -Lapp -Ltier -Lrole
kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
kubectl get pods -Lapp -Ltier -Lrole
```
```shell
NAME READY STATUS RESTARTS AGE APP TIER ROLE
guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>
@ -180,7 +207,12 @@ guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestboo
guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave
my-nginx-divi2 1/1 Running 0 29m nginx <none> <none>
my-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>
$ kubectl get pods -lapp=guestbook,role=slave
```
```shell
kubectl get pods -lapp=guestbook,role=slave
```
```shell
NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
guestbook-redis-slave-qgazl 1/1 Running 0 3m
@ -240,7 +272,10 @@ Sometimes existing pods and other resources need to be relabeled before creating
For example, if you want to label all your nginx pods as frontend tier, simply run:
```shell
$ kubectl label pods -l app=nginx tier=fe
kubectl label pods -l app=nginx tier=fe
```
```shell
pod/my-nginx-2035384211-j5fhi labeled
pod/my-nginx-2035384211-u2c7e labeled
pod/my-nginx-2035384211-u3t6x labeled
@ -250,7 +285,9 @@ This first filters all pods with the label "app=nginx", and then labels them wit
To see the pods you just labeled, run:
```shell
$ kubectl get pods -l app=nginx -L tier
kubectl get pods -l app=nginx -L tier
```
```shell
NAME READY STATUS RESTARTS AGE TIER
my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe
my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe
@ -266,8 +303,10 @@ For more information, please see [labels](/docs/concepts/overview/working-with-o
Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example:
```shell
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl get pods my-nginx-v4-9gw19 -o yaml
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
kubectl get pods my-nginx-v4-9gw19 -o yaml
```
```shell
apiversion: v1
kind: pod
metadata:
@ -283,14 +322,18 @@ For more information, please see [annotations](/docs/concepts/overview/working-w
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do:
```shell
$ kubectl scale deployment/my-nginx --replicas=1
kubectl scale deployment/my-nginx --replicas=1
```
```shell
deployment.extensions/my-nginx scaled
```
Now you only have one pod managed by the deployment.
```shell
$ kubectl get pods -l app=nginx
kubectl get pods -l app=nginx
```
```shell
NAME READY STATUS RESTARTS AGE
my-nginx-2035384211-j5fhi 1/1 Running 0 30m
```
@ -298,7 +341,9 @@ my-nginx-2035384211-j5fhi 1/1 Running 0 30m
To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:
```shell
$ kubectl autoscale deployment/my-nginx --min=1 --max=3
kubectl autoscale deployment/my-nginx --min=1 --max=3
```
```shell
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```
@ -320,7 +365,9 @@ Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-co
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
```shell
$ kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```
```shell
deployment.apps/my-nginx configured
```
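To preview what `apply` would change before pushing it, one option (a sketch, assuming your `kubectl` is recent enough to ship the `kubectl diff` subcommand) is:

```shell
# Show the server-side diff without applying anything
kubectl diff -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```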
@ -339,18 +386,20 @@ To use apply, always create resource initially with either `kubectl apply` or `k
Alternatively, you may also update resources with `kubectl edit`:
```shell
$ kubectl edit deployment/my-nginx
kubectl edit deployment/my-nginx
```
This is equivalent to first using `get` to fetch the resource, editing it in a text editor, and then using `apply` to submit the updated version:
```shell
$ kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml
$ vi /tmp/nginx.yaml
kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml
vi /tmp/nginx.yaml
# do some edit, and then save the file
$ kubectl apply -f /tmp/nginx.yaml
kubectl apply -f /tmp/nginx.yaml
deployment.apps/my-nginx configured
$ rm /tmp/nginx.yaml
rm /tmp/nginx.yaml
```
This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
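For example, a minimal sketch that forces a specific editor (here `nano`, assuming it is installed) for a single invocation:

```shell
# Override the default editor for this one command only
KUBE_EDITOR="nano" kubectl edit deployment/my-nginx
```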
@ -370,7 +419,9 @@ and
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
```shell
$ kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```
```shell
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
```
@ -385,14 +436,16 @@ you should read [how to use `kubectl rolling-update`](/docs/tasks/run-applicatio
Let's say you were running version 1.7.9 of nginx:
```shell
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
```
```shell
deployment.apps/my-nginx created
```
To update to version 1.9.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`, with the kubectl commands we learned above.
```shell
$ kubectl edit deployment/my-nginx
kubectl edit deployment/my-nginx
```
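Alternatively, a one-line sketch using `kubectl set image` (assuming the container created by `kubectl run` above is named `my-nginx`):

```shell
# Swap the container image in place; triggers the same rolling update
kubectl set image deployment/my-nginx my-nginx=nginx:1.9.1
```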
That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more, visit the [Deployment page](/docs/concepts/workloads/controllers/deployment/).
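To watch that progressive update as it happens, a sketch using the same Deployment name:

```shell
kubectl rollout status deployment/my-nginx
```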

View File

@ -189,7 +189,9 @@ unscheduled until a place can be found. An event is produced each time the
scheduler fails to find a place for the Pod, like this:
```shell
$ kubectl describe pod frontend | grep -A 3 Events
kubectl describe pod frontend | grep -A 3 Events
```
```
Events:
FirstSeen LastSeen Count From Subobject PathReason Message
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
@ -210,7 +212,9 @@ You can check node capacities and amounts allocated with the
`kubectl describe nodes` command. For example:
```shell
$ kubectl describe nodes e2e-test-minion-group-4lw4
kubectl describe nodes e2e-test-minion-group-4lw4
```
```
Name: e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
@ -260,7 +264,9 @@ whether a Container is being killed because it is hitting a resource limit, call
`kubectl describe pod` on the Pod of interest:
```shell
[12:54:41] $ kubectl describe pod simmemleak-hra99
kubectl describe pod simmemleak-hra99
```
```
Name: simmemleak-hra99
Namespace: default
Image(s): saadali/simmemleak
@ -304,7 +310,9 @@ You can call `kubectl get pod` with the `-o go-template=...` option to fetch the
of previously terminated Containers:
```shell
[13:59:01] $ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
```
```
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
```

View File

@ -61,8 +61,8 @@ username and password that the pods should use is in the files
```shell
# Create files needed for rest of example.
$ echo -n 'admin' > ./username.txt
$ echo -n '1f2d1e2e67df' > ./password.txt
echo -n 'admin' > ./username.txt
echo -n '1f2d1e2e67df' > ./password.txt
```
The `kubectl create secret` command
@ -70,18 +70,25 @@ packages these files into a Secret and creates
the object on the Apiserver.
```shell
$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
```
```
secret "db-user-pass" created
```
You can check that the secret was created like this:
```shell
$ kubectl get secrets
kubectl get secrets
```
```
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
$ kubectl describe secrets/db-user-pass
```
```shell
kubectl describe secrets/db-user-pass
```
```
Name: db-user-pass
Namespace: default
Labels: <none>
@ -139,7 +146,9 @@ data:
Now create the Secret using [`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create):
```shell
$ kubectl create -f ./secret.yaml
kubectl create -f ./secret.yaml
```
```
secret "mysecret" created
```
@ -250,7 +259,9 @@ the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if
Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section:
```shell
$ kubectl get secret mysecret -o yaml
kubectl get secret mysecret -o yaml
```
```
apiVersion: v1
data:
username: YWRtaW4=
@ -269,7 +280,9 @@ type: Opaque
Decode the password field:
```shell
$ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
```
```
1f2d1e2e67df
```
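A sketch combining retrieval and decoding in one pipeline (assuming the same `mysecret` and its `password` key):

```shell
# Extract just the password field and decode it in one step
kubectl get secret mysecret -o jsonpath='{.data.password}' | base64 --decode
```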
@ -429,12 +442,19 @@ This is the result of commands
executed inside the container from the example above:
```shell
$ ls /etc/foo/
ls /etc/foo/
username
password
$ cat /etc/foo/username
```
```shell
cat /etc/foo/username
```
admin
$ cat /etc/foo/password
```
```
cat /etc/foo/password
```
```
1f2d1e2e67df
```
@ -502,9 +522,14 @@ normal environment variables containing the base-64 decoded values of the secret
This is the result of commands executed inside the container from the example above:
```shell
$ echo $SECRET_USERNAME
echo $SECRET_USERNAME
```
```
admin
$ echo $SECRET_PASSWORD
```
```
echo $SECRET_PASSWORD
```
1f2d1e2e67df
```
@ -569,7 +594,9 @@ invalid keys that were skipped. The example shows a pod which refers to the
default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad.
```shell
$ kubectl get events
kubectl get events
```
```
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON
0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
```
@ -592,7 +619,10 @@ start until all the pod's volumes are mounted.
Create a secret containing some ssh keys:
```shell
$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa
```
```
--from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```
{{< caution >}}
@ -642,9 +672,18 @@ credentials.
Make the secrets:
```shell
$ kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
kubectl create secret generic prod-db-secret --from-literal=username=produser
--from-literal=password=Y4nys7f11
```
```
secret "prod-db-secret" created
$ kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
```
```shell
kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
```
```
secret "test-db-secret" created
```
{{< note >}}

View File

@ -12,15 +12,15 @@ _Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/
This `kubectl` command selects all Pods for which the value of the [`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) field is `Running`:
```shell
$ kubectl get pods --field-selector status.phase=Running
kubectl get pods --field-selector status.phase=Running
```
{{< note >}}
Field selectors are essentially resource *filters*. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the following `kubectl` queries equivalent:
```shell
$ kubectl get pods
$ kubectl get pods --field-selector ""
kubectl get pods
kubectl get pods --field-selector ""
```
{{< /note >}}
@ -29,7 +29,9 @@ $ kubectl get pods --field-selector ""
Supported field selectors vary by Kubernetes resource type. All resource types support the `metadata.name` and `metadata.namespace` fields. Using unsupported field selectors produces an error. For example:
```shell
$ kubectl get ingress --field-selector foo.bar=baz
kubectl get ingress --field-selector foo.bar=baz
```
```
Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.bar" is not a known field selector: only "metadata.name", "metadata.namespace"
```
@ -38,7 +40,7 @@ Error from server (BadRequest): Unable to find "ingresses" that match label sele
You can use the `=`, `==`, and `!=` operators with field selectors (`=` and `==` mean the same thing). This `kubectl` command, for example, selects all Kubernetes Services that aren't in the `default` namespace:
```shell
$ kubectl get services --field-selector metadata.namespace!=default
kubectl get services --field-selector metadata.namespace!=default
```
## Chained selectors
@ -46,7 +48,7 @@ $ kubectl get services --field-selector metadata.namespace!=default
As with [label](/docs/concepts/overview/working-with-objects/labels) and other selectors, field selectors can be chained together as a comma-separated list. This `kubectl` command selects all Pods for which the `status.phase` does not equal `Running` and the `spec.restartPolicy` field equals `Always`:
```shell
$ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
```
## Multiple resource types
@ -54,5 +56,5 @@ $ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Alw
You use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace:
```shell
$ kubectl get statefulsets,services --field-selector metadata.namespace!=default
kubectl get statefulsets,services --field-selector metadata.namespace!=default
```

View File

@ -46,7 +46,7 @@ One way to create a Deployment using a `.yaml` file like the one above is to use
in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example:
```shell
$ kubectl create -f https://k8s.io/examples/application/deployment.yaml --record
kubectl create -f https://k8s.io/examples/application/deployment.yaml --record
```
The output is similar to this:

View File

@ -139,25 +139,25 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
Both label selector styles can be used to list or watch resources via a REST client. For example, targeting `apiserver` with `kubectl` and using _equality-based_ requirements, one may write:
```shell
$ kubectl get pods -l environment=production,tier=frontend
kubectl get pods -l environment=production,tier=frontend
```
or using _set-based_ requirements:
```shell
$ kubectl get pods -l 'environment in (production),tier in (frontend)'
kubectl get pods -l 'environment in (production),tier in (frontend)'
```
As already mentioned, _set-based_ requirements are more expressive. For instance, they can implement the _OR_ operator on values:
```shell
$ kubectl get pods -l 'environment in (production, qa)'
kubectl get pods -l 'environment in (production, qa)'
```
or restricting negative matching via the _exists_ operator:
```shell
$ kubectl get pods -l 'environment,environment notin (frontend)'
kubectl get pods -l 'environment,environment notin (frontend)'
```
### Set references in API objects

View File

@ -46,7 +46,9 @@ for namespaces](/docs/admin/namespaces).
You can list the current namespaces in a cluster using:
```shell
$ kubectl get namespaces
kubectl get namespaces
```
```
NAME STATUS AGE
default Active 1d
kube-system Active 1d
@ -66,8 +68,8 @@ To temporarily set the namespace for a request, use the `--namespace` flag.
For example:
```shell
$ kubectl --namespace=<insert-namespace-name-here> run nginx --image=nginx
$ kubectl --namespace=<insert-namespace-name-here> get pods
kubectl --namespace=<insert-namespace-name-here> run nginx --image=nginx
kubectl --namespace=<insert-namespace-name-here> get pods
```
### Setting the namespace preference
@ -76,9 +78,9 @@ You can permanently save the namespace for all subsequent kubectl commands in th
context.
```shell
$ kubectl config set-context $(kubectl config current-context) --namespace=<insert-namespace-name-here>
kubectl config set-context $(kubectl config current-context) --namespace=<insert-namespace-name-here>
# Validate it
$ kubectl config view | grep namespace:
kubectl config view | grep namespace:
```
## Namespaces and DNS
@ -101,10 +103,10 @@ To see which Kubernetes resources are and aren't in a namespace:
```shell
# In a namespace
$ kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=true
# Not in a namespace
$ kubectl api-resources --namespaced=false
kubectl api-resources --namespaced=false
```
{{% /capture %}}

View File

@ -35,8 +35,10 @@ Create an nginx Pod, and note that it has a container port specification:
This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:
```shell
$ kubectl create -f ./run-my-nginx.yaml
$ kubectl get pods -l run=my-nginx -o wide
kubectl create -f ./run-my-nginx.yaml
kubectl get pods -l run=my-nginx -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m
my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd
@ -45,7 +47,7 @@ my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5
Check your pods' IPs:
```shell
$ kubectl get pods -l run=my-nginx -o yaml | grep podIP
kubectl get pods -l run=my-nginx -o yaml | grep podIP
podIP: 10.244.3.4
podIP: 10.244.2.5
```
@ -63,7 +65,9 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni
You can create a Service for your 2 nginx replicas with `kubectl expose`:
```shell
$ kubectl expose deployment/my-nginx
kubectl expose deployment/my-nginx
```
```
service/my-nginx exposed
```
@ -81,7 +85,9 @@ API object to see the list of supported fields in service definition.
Check your Service:
```shell
$ kubectl get svc my-nginx
kubectl get svc my-nginx
```
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx ClusterIP 10.0.162.149 <none> 80/TCP 21s
```
@ -95,7 +101,9 @@ Check the endpoints, and note that the IPs are the same as the Pods created in
the first step:
```shell
$ kubectl describe svc my-nginx
kubectl describe svc my-nginx
```
```
Name: my-nginx
Namespace: default
Labels: run=my-nginx
@ -107,8 +115,11 @@ Port: <unset> 80/TCP
Endpoints: 10.244.2.5:80,10.244.3.4:80
Session Affinity: None
Events: <none>
$ kubectl get ep my-nginx
```
```shell
kubectl get ep my-nginx
```
```
NAME ENDPOINTS AGE
my-nginx 10.244.2.5:80,10.244.3.4:80 1m
```
@ -131,7 +142,9 @@ each active Service. This introduces an ordering problem. To see why, inspect
the environment of your running nginx Pods (your Pod name will be different):
```shell
$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
```
```
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
@ -147,9 +160,11 @@ replicas. This will give you scheduler-level Service spreading of your Pods
variables:
```shell
$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
$ kubectl get pods -l run=my-nginx -o wide
kubectl get pods -l run=my-nginx -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd
my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m
@ -158,7 +173,9 @@ my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8
You may notice that the pods have different names, since they are killed and recreated.
```shell
$ kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
```
```
KUBERNETES_SERVICE_PORT=443
MY_NGINX_SERVICE_HOST=10.0.162.149
KUBERNETES_SERVICE_HOST=10.0.0.1
@ -171,7 +188,9 @@ KUBERNETES_SERVICE_PORT_HTTPS=443
Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:
```shell
$ kubectl get services kube-dns --namespace=kube-system
kubectl get services kube-dns --namespace=kube-system
```
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8m
```
@ -183,7 +202,9 @@ cluster addon), so you can talk to the Service from any pod in your cluster usin
standard methods (e.g. gethostbyname). Let's run another curl application to test this:
```shell
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
kubectl run curl --image=radial/busyboxplus:curl -i --tty
```
```
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
Hit enter for command prompt
```
@ -210,10 +231,16 @@ Till now we have only accessed the nginx server from within the cluster. Before
You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
kubectl create -f /tmp/secret.json
```
```
secret/nginxsecret created
$ kubectl get secrets
```
```shell
kubectl get secrets
```
```
NAME TYPE DATA AGE
default-token-il9rc kubernetes.io/service-account-token 1 1d
nginxsecret Opaque 2 1m
@ -242,8 +269,10 @@ data:
Now create the secrets using the file:
```shell
$ kubectl create -f nginxsecrets.yaml
$ kubectl get secrets
kubectl create -f nginxsecrets.yaml
kubectl get secrets
```
```
NAME TYPE DATA AGE
default-token-il9rc kubernetes.io/service-account-token 1 1d
nginxsecret Opaque 2 1m
@ -263,13 +292,13 @@ Noteworthy points about the nginx-secure-app manifest:
This is set up *before* the nginx server is started.
```shell
$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
```
At this point you can reach the nginx server from any node.
```shell
$ kubectl get pods -o yaml | grep -i podip
kubectl get pods -o yaml | grep -i podip
podIP: 10.244.3.5
node $ curl -k https://10.244.3.5
...
@ -283,11 +312,15 @@ Let's test this from a pod (the same secret is being reused for simplicity, the
{{< codenew file="service/networking/curlpod.yaml" >}}
```shell
$ kubectl create -f ./curlpod.yaml
$ kubectl get pods -l app=curlpod
kubectl create -f ./curlpod.yaml
kubectl get pods -l app=curlpod
```
```
NAME READY STATUS RESTARTS AGE
curl-deployment-1515033274-1410r 1/1 Running 0 1m
$ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt
```
```shell
kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt
...
<title>Welcome to nginx!</title>
...
@ -302,7 +335,7 @@ so your nginx HTTPS replica is ready to serve traffic on the internet if your
node has a public IP.
```shell
$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5
kubectl get svc my-nginx -o yaml | grep nodePort -C 5
uid: 07191fb3-f61a-11e5-8ae5-42010af00002
spec:
clusterIP: 10.0.162.149
@ -319,8 +352,9 @@ spec:
targetPort: 443
selector:
run: my-nginx
$ kubectl get nodes -o yaml | grep ExternalIP -C 1
```
```shell
kubectl get nodes -o yaml | grep ExternalIP -C 1
- address: 104.197.41.11
type: ExternalIP
allocatable:
@ -338,12 +372,15 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
```shell
$ kubectl edit svc my-nginx
$ kubectl get svc my-nginx
kubectl edit svc my-nginx
kubectl get svc my-nginx
```
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx ClusterIP 10.0.162.149 162.222.184.144 80/TCP,81/TCP,82/TCP 21s
$ curl https://<EXTERNAL-IP> -k
```
```
curl https://<EXTERNAL-IP> -k
...
<title>Welcome to nginx!</title>
```
@ -357,7 +394,7 @@ output, in fact, so you'll need to do `kubectl describe service my-nginx` to
see it. You'll see something like this:
```shell
$ kubectl describe service my-nginx
kubectl describe service my-nginx
...
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
...

View File

@ -172,21 +172,28 @@ Suppose that you now want to update the nginx Pods to use the `nginx:1.9.1` imag
instead of the `nginx:1.7.9` image.
```shell
$ kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment
```
```
nginx=nginx:1.9.1
image updated
```
Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
```shell
$ kubectl edit deployment.v1.apps/nginx-deployment
kubectl edit deployment.v1.apps/nginx-deployment
```
```
deployment.apps/nginx-deployment edited
```
To see the rollout status, run:
```shell
$ kubectl rollout status deployment.v1.apps/nginx-deployment
kubectl rollout status deployment.v1.apps/nginx-deployment
```
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment.apps/nginx-deployment successfully rolled out
```
@ -194,7 +201,9 @@ deployment.apps/nginx-deployment successfully rolled out
After the rollout succeeds, you may want to `get` the Deployment:
```shell
$ kubectl get deployments
kubectl get deployments
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 36s
```
@ -207,7 +216,9 @@ You can run `kubectl get rs` to see that the Deployment updated the Pods by crea
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
```shell
$ kubectl get rs
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1564180365 3 3 3 6s
nginx-deployment-2035384211 0 0 0 36s
@ -216,7 +227,9 @@ nginx-deployment-2035384211 0 0 0 36s
Running `get pods` should now show only the new Pods:
```shell
$ kubectl get pods
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
nginx-deployment-1564180365-khku8 1/1 Running 0 14s
nginx-deployment-1564180365-nacti 1/1 Running 0 14s
@ -237,7 +250,9 @@ new Pods have come up, and does not create new Pods until a sufficient number of
It makes sure that number of available Pods is at least 2 and the number of total Pods is at most 4.
```shell
$ kubectl describe deployments
kubectl describe deployments
```
```
Name: nginx-deployment
Namespace: default
CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000
@ -338,14 +353,18 @@ rolled back.
Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
```shell
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
```
```
deployment.apps/nginx-deployment image updated
```
The rollout will be stuck.
```shell
$ kubectl rollout status deployment.v1.apps/nginx-deployment
kubectl rollout status deployment.v1.apps/nginx-deployment
```
```
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
```
@ -355,7 +374,9 @@ Press Ctrl-C to stop the above rollout status watch. For more information on stu
You will see that the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) is 2, and the number of new replicas (nginx-deployment-3066724191) is 1.
```shell
$ kubectl get rs
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1564180365 3 3 3 25s
nginx-deployment-2035384211 0 0 0 36s
@ -365,7 +386,9 @@ nginx-deployment-3066724191 1 1 0 6s
Looking at the Pods created, you will see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.
```shell
$ kubectl get pods
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
nginx-deployment-1564180365-70iae 1/1 Running 0 25s
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
@ -380,7 +403,9 @@ Kubernetes by default sets the value to 25%.
{{< /note >}}
```shell
$ kubectl describe deployment
kubectl describe deployment
```
```
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
@ -427,7 +452,9 @@ To fix this, you need to rollback to a previous revision of Deployment that is s
First, check the revisions of this deployment:
```shell
$ kubectl rollout history deployment.v1.apps/nginx-deployment
kubectl rollout history deployment.v1.apps/nginx-deployment
```
```
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
@ -443,7 +470,9 @@ REVISION CHANGE-CAUSE
To further see the details of each revision, run:
```shell
$ kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
```
```
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
@ -464,14 +493,18 @@ deployments "nginx-deployment" revision 2
Now you've decided to undo the current rollout and rollback to the previous revision:
```shell
$ kubectl rollout undo deployment.v1.apps/nginx-deployment
kubectl rollout undo deployment.v1.apps/nginx-deployment
```
```
deployment.apps/nginx-deployment
```
Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`:
```shell
$ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
```
```
deployment.apps/nginx-deployment
```
@ -481,11 +514,17 @@ The Deployment is now rolled back to a previous stable revision. As you can see,
for rolling back to revision 2 is generated from the Deployment controller.
```shell
$ kubectl get deployment nginx-deployment
kubectl get deployment nginx-deployment
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 30m
```
$ kubectl describe deployment nginx-deployment
```shell
kubectl describe deployment nginx-deployment
```
```
Name: nginx-deployment
Namespace: default
CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500
@ -534,7 +573,9 @@ Events:
You can scale a Deployment by using the following command:
```shell
$ kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
```
```
deployment.apps/nginx-deployment scaled
```
@ -543,7 +584,9 @@ in your cluster, you can setup an autoscaler for your Deployment and choose the
Pods you want to run based on the CPU utilization of your existing Pods.
```shell
$ kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
```
```
deployment.apps/nginx-deployment scaled
```
@ -557,7 +600,9 @@ ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *p
For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2.
```shell
$ kubectl get deploy
kubectl get deploy
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 10 10 10 10 50s
```
@ -565,7 +610,9 @@ nginx-deployment 10 10 10 10 50s
You update to a new image which happens to be unresolvable from inside the cluster.
```shell
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
```
```
deployment.apps/nginx-deployment image updated
```
@ -573,7 +620,9 @@ The image update starts a new rollout with ReplicaSet nginx-deployment-198919819
`maxUnavailable` requirement that you mentioned above.
```shell
$ kubectl get rs
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1989198191 5 5 0 9s
nginx-deployment-618515232 8 8 8 1m
@ -591,10 +640,17 @@ new ReplicaSet. The rollout process should eventually move all replicas to the n
the new replicas become healthy.
```shell
$ kubectl get deploy
kubectl get deploy
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 15 18 7 8 7m
$ kubectl get rs
```
```shell
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1989198191 7 7 0 7m
nginx-deployment-618515232 11 11 11 7m
@ -608,10 +664,16 @@ apply multiple fixes in between pausing and resuming without triggering unnecess
For example, with a Deployment that was just created:
```shell
$ kubectl get deploy
kubectl get deploy
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 3 3 3 3 1m
$ kubectl get rs
```
```shell
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 3 3 3 1m
```
@ -619,26 +681,36 @@ nginx-2142116321 3 3 3 1m
Pause by running the following command:
```shell
$ kubectl rollout pause deployment.v1.apps/nginx-deployment
kubectl rollout pause deployment.v1.apps/nginx-deployment
```
```
deployment.apps/nginx-deployment paused
```
Then update the image of the Deployment:
```shell
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
```
```
deployment.apps/nginx-deployment image updated
```
Notice that no new rollout started:
```shell
$ kubectl rollout history deployment.v1.apps/nginx-deployment
kubectl rollout history deployment.v1.apps/nginx-deployment
```
```
deployments "nginx"
REVISION CHANGE-CAUSE
1 <none>
```
$ kubectl get rs
```shell
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 3 3 3 2m
```
@ -646,7 +718,9 @@ nginx-2142116321 3 3 3 2m
You can make as many updates as you wish, for example, update the resources that will be used:
```shell
$ kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
```
```
deployment.apps/nginx-deployment resource requirements updated
```
@ -656,9 +730,14 @@ the Deployment will not have any effect as long as the Deployment is paused.
Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
```shell
$ kubectl rollout resume deployment.v1.apps/nginx-deployment
kubectl rollout resume deployment.v1.apps/nginx-deployment
```
deployment.apps/nginx-deployment resumed
$ kubectl get rs -w
```
```shell
kubectl get rs -w
```
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 2 2 2 2m
nginx-3926361531 2 2 0 6s
@ -674,8 +753,12 @@ nginx-2142116321 0 1 1 2m
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 20s
^C
$ kubectl get rs
```
```shell
kubectl get rs
```
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 28s
@ -714,7 +797,9 @@ You can check if a Deployment has completed by using `kubectl rollout status`. I
successfully, `kubectl rollout status` returns a zero exit code.
```shell
$ kubectl rollout status deployment.v1.apps/nginx-deployment
kubectl rollout status deployment.v1.apps/nginx-deployment
```
```
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment.apps/nginx-deployment successfully rolled out
$ echo $?
@ -742,7 +827,9 @@ The following `kubectl` command sets the spec with `progressDeadlineSeconds` to
lack of progress for a Deployment after 10 minutes:
```shell
$ kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
```
```
deployment.apps/nginx-deployment patched
```
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
@ -771,7 +858,9 @@ due to any other kind of error that can be treated as transient. For example, le
insufficient quota. If you describe the Deployment you will notice the following section:
```shell
$ kubectl describe deployment nginx-deployment
kubectl describe deployment nginx-deployment
```
```
<...>
Conditions:
Type Status Reason
@ -847,7 +936,9 @@ You can check if a Deployment has failed to progress by using `kubectl rollout s
returns a non-zero exit code if the Deployment has exceeded the progression deadline.
```shell
$ kubectl rollout status deployment.v1.apps/nginx-deployment
kubectl rollout status deployment.v1.apps/nginx-deployment
```
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
$ echo $?

View File

@ -39,14 +39,18 @@ It takes around 10s to complete.
You can run the example with this command:
```shell
$ kubectl create -f https://k8s.io/examples/controllers/job.yaml
kubectl create -f https://k8s.io/examples/controllers/job.yaml
```
```
job "pi" created
```
Check on the status of the Job with `kubectl`:
```shell
$ kubectl describe jobs/pi
kubectl describe jobs/pi
```
```
Name: pi
Namespace: default
Selector: controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
@ -83,8 +87,10 @@ To view completed Pods of a Job, use `kubectl get pods`.
To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:
```shell
$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
$ echo $pods
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```
```
pi-aiw0a
```

View File

@ -55,14 +55,18 @@ This example ReplicationController config runs three copies of the nginx web ser
Run the example by downloading the example file and then running this command:
```shell
$ kubectl create -f https://k8s.io/examples/controllers/replication.yaml
kubectl create -f https://k8s.io/examples/controllers/replication.yaml
```
```
replicationcontroller/nginx created
```
Check on the status of the ReplicationController using this command:
```shell
$ kubectl describe replicationcontrollers/nginx
kubectl describe replicationcontrollers/nginx
```
```
Name: nginx
Namespace: default
Selector: app=nginx
@ -97,8 +101,10 @@ Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:
```shell
$ pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
```
```
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```

View File

@ -180,12 +180,24 @@ spec:
This Pod can be started and debugged with the following commands:
```shell
$ kubectl create -f myapp.yaml
kubectl create -f myapp.yaml
```
```
pod/myapp-pod created
$ kubectl get -f myapp.yaml
```
```shell
kubectl get -f myapp.yaml
```
```
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
$ kubectl describe -f myapp.yaml
```
```shell
kubectl describe -f myapp.yaml
```
```
Name: myapp-pod
Namespace: default
[...]
@ -218,18 +230,25 @@ Events:
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image "busybox"
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container with docker id 5ced34a04634; Security:[seccomp=unconfined]
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634
$ kubectl logs myapp-pod -c init-myservice # Inspect the first init container
$ kubectl logs myapp-pod -c init-mydb # Inspect the second init container
```
```shell
kubectl logs myapp-pod -c init-myservice # Inspect the first init container
kubectl logs myapp-pod -c init-mydb # Inspect the second init container
```
Once we start the `mydb` and `myservice` services, we can see the Init Containers
complete and the `myapp-pod` is created:
```shell
$ kubectl create -f services.yaml
kubectl create -f services.yaml
```
```
service/myservice created
service/mydb created
$ kubectl get -f myapp.yaml
```
```shell
kubectl get -f myapp.yaml
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 9m
```

View File

@ -48,7 +48,7 @@ empty string or undefined the code will attempt to find the default network
adapter similar to the following command:
```bash
$ route | grep default | head -n 1 | awk {'print $8'}
route | grep default | head -n 1 | awk {'print $8'}
```
**cidr** The network range to configure the flannel or canal SDN to declare when

View File

@ -90,9 +90,16 @@ a given action, and works regardless of the authorization mode used.
```bash
$ kubectl auth can-i create deployments --namespace dev
kubectl auth can-i create deployments --namespace dev
```
```
yes
$ kubectl auth can-i create deployments --namespace prod
```
```shell
kubectl auth can-i create deployments --namespace prod
```
```
no
```
@ -100,7 +107,9 @@ Administrators can combine this with [user impersonation](/docs/reference/access
to determine what action other users can perform.
```bash
$ kubectl auth can-i list secrets --namespace dev --as dave
kubectl auth can-i list secrets --namespace dev --as dave
```
```
no
```
@ -116,7 +125,9 @@ These APIs can be queried by creating normal Kubernetes resources, where the res
field of the returned object is the result of the query.
```bash
$ kubectl create -f - -o yaml << EOF
kubectl create -f - -o yaml << EOF
```
```
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:

View File

@ -19,10 +19,15 @@ To run an nginx Deployment and expose the Deployment, see [kubectl run](/docs/re
docker:
```shell
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
```
55c103fa129692154a7652490236fee9be47d70a8dd562281ae7d2f9a339a6db
```
$ docker ps
```shell
docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55c103fa1296 nginx "nginx -g 'daemon of…" 9 seconds ago Up 9 seconds 0.0.0.0:80->80/tcp nginx-app
```
@ -31,7 +36,9 @@ kubectl:
```shell
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
```
```
deployment "nginx-app" created
```
@ -41,7 +48,9 @@ deployment "nginx-app" created
```shell
# expose a port through a service
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
kubectl expose deployment nginx-app --port=80 --name=nginx-http
```
```
service "nginx-http" exposed
```
@ -66,7 +75,9 @@ To list what is currently running, see [kubectl get](/docs/reference/generated/k
docker:
```shell
$ docker ps -a
docker ps -a
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
14636241935f ubuntu:16.04 "echo test" 5 seconds ago Exited (0) 5 seconds ago cocky_fermi
55c103fa1296 nginx "nginx -g 'daemon of…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp nginx-app
@ -75,7 +86,9 @@ CONTAINER ID IMAGE COMMAND CREATED
kubectl:
```shell
$ kubectl get po
kubectl get po
```
```
NAME READY STATUS RESTARTS AGE
nginx-app-8df569cb7-4gd89 1/1 Running 0 3m
ubuntu 0/1 Completed 0 20s
@ -88,22 +101,30 @@ To attach a process that is already running in a container, see [kubectl attach]
docker:
```shell
$ docker ps
docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55c103fa1296 nginx "nginx -g 'daemon of…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp nginx-app
```
$ docker attach 55c103fa1296
```shell
docker attach 55c103fa1296
...
```
kubectl:
```shell
$ kubectl get pods
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
```
$ kubectl attach -it nginx-app-5jyvm
```shell
kubectl attach -it nginx-app-5jyvm
...
```
@ -116,22 +137,33 @@ To execute a command in a container, see [kubectl exec](/docs/reference/generate
docker:
```shell
$ docker ps
docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55c103fa1296 nginx "nginx -g 'daemon of…" 6 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp nginx-app
$ docker exec 55c103fa1296 cat /etc/hostname
```
```shell
docker exec 55c103fa1296 cat /etc/hostname
```
```
55c103fa1296
```
kubectl:
```shell
$ kubectl get po
kubectl get po
```
```
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
```
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
```shell
kubectl exec nginx-app-5jyvm -- cat /etc/hostname
```
```
nginx-app-5jyvm
```
@ -141,14 +173,14 @@ To use interactive commands.
docker:
```shell
$ docker exec -ti 55c103fa1296 /bin/sh
docker exec -ti 55c103fa1296 /bin/sh
# exit
```
kubectl:
```shell
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
@ -162,7 +194,9 @@ To follow stdout/stderr of a process that is running, see [kubectl logs](/docs/r
docker:
```shell
$ docker logs -f a9e
docker logs -f a9e
```
```
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
@ -170,7 +204,9 @@ $ docker logs -f a9e
kubectl:
```shell
$ kubectl logs -f nginx-app-zibvs
kubectl logs -f nginx-app-zibvs
```
```
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
@ -178,7 +214,9 @@ $ kubectl logs -f nginx-app-zibvs
There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead, the pods restart the process. This is similar to the `docker run` option `--restart=always`, with one major difference. In Docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```shell
$ kubectl logs --previous nginx-app-zibvs
kubectl logs --previous nginx-app-zibvs
```
```
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
@ -192,32 +230,53 @@ To stop and delete a running process, see [kubectl delete](/docs/reference/gener
docker:
```shell
$ docker ps
docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of" 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
$ docker stop a9ec34d98787
```shell
docker stop a9ec34d98787
```
```
a9ec34d98787
```
$ docker rm a9ec34d98787
```shell
docker rm a9ec34d98787
```
```
a9ec34d98787
```
kubectl:
```shell
$ kubectl get deployment nginx-app
kubectl get deployment nginx-app
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-app 1 1 1 1 2m
```
$ kubectl get po -l run=nginx-app
```shell
kubectl get po -l run=nginx-app
```
```
NAME READY STATUS RESTARTS AGE
nginx-app-2883164633-aklf7 1/1 Running 0 2m
$ kubectl delete deployment nginx-app
```
```shell
kubectl delete deployment nginx-app
```
```
deployment "nginx-app" deleted
```
$ kubectl get po -l run=nginx-app
```shell
kubectl get po -l run=nginx-app
# Return nothing
```
@ -236,7 +295,9 @@ To get the version of client and server, see [kubectl version](/docs/reference/g
docker:
```shell
$ docker version
docker version
```
```
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
@ -252,7 +313,9 @@ OS/Arch (server): linux/amd64
kubectl:
```shell
$ kubectl version
kubectl version
```
```
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
@ -264,7 +327,9 @@ To get miscellaneous information about the environment and configuration, see [k
docker:
```shell
$ docker info
docker info
```
```
Containers: 40
Images: 168
Storage Driver: aufs
@ -286,7 +351,9 @@ WARNING: No swap limit support
kubectl:
```shell
$ kubectl cluster-info
kubectl cluster-info
```
```
Kubernetes master is running at https://108.59.85.141
KubeDNS is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

View File

@ -81,11 +81,11 @@ range, end | iterate list | {range .items[*]}[{.metadata.nam
Examples using `kubectl` and JSONPath expressions:
```shell
kubectl get pods -o json
kubectl get pods -o=jsonpath='{@}'
kubectl get pods -o=jsonpath='{.items[0]}'
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
```
On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example:
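A sketch of how the earlier `startTime` query might look in a Windows command shell, with the template double-quoted and the embedded literals single-quoted:

```shell
kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}"
```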

View File

@ -33,27 +33,27 @@ where `command`, `TYPE`, `NAME`, and `flags` are:
* `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:
```shell
kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1
```
* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `kubectl get pods`.
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
* To specify resources by type and name:
* To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`.<br/>
Example: `kubectl get pod example-pod1 example-pod2`
* To specify multiple resource types individually: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`.<br/>
Example: `kubectl get pod/example-pod1 replicationcontroller/example-rc1`
* To specify resources with one or more files: `-f file1 -f file2 -f file<#>`
* [Use YAML rather than JSON](/docs/concepts/configuration/overview/#general-configuration-tips) since YAML tends to be more user-friendly, especially for configuration files.<br/>
Example: `kubectl get pod -f ./pod.yaml`
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.<br/>
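For instance, a one-off invocation against a specific API server might look like the following (the address and port here are placeholders):

```shell
kubectl get pods --server=https://203.0.113.10:6443
```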
@ -176,7 +176,7 @@ Output format | Description
In this example, the following command outputs the details for a single pod as a YAML formatted object:
```shell
kubectl get pod web-pod-13je7 -o=yaml
```
Remember: See the [kubectl](/docs/user-guide/kubectl/) reference documentation for details about which output format is supported by each command.
@ -190,13 +190,13 @@ To define custom columns and output only the details that you want into a table,
Inline:
```shell
kubectl get pods <pod-name> -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
```
Template file:
```shell
kubectl get pods <pod-name> -o=custom-columns-file=template.txt
```
where the `template.txt` file contains:
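As a minimal sketch, mirroring the inline `custom-columns` spec above, the file could contain:

```
NAME          RSRC
metadata.name metadata.resourceVersion
```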
@ -251,7 +251,7 @@ kubectl [command] [TYPE] [NAME] --sort-by=<jsonpath_exp>
To print a list of pods sorted by name, you run:
```shell
kubectl get pods --sort-by=.metadata.name
```
## Examples: Common operations
@ -262,52 +262,52 @@ Use the following set of examples to help you familiarize yourself with running
```shell
# Create a service using the definition in example-service.yaml.
kubectl create -f example-service.yaml
# Create a replication controller using the definition in example-controller.yaml.
kubectl create -f example-controller.yaml
# Create the objects that are defined in any .yaml, .yml, or .json file within the <directory> directory.
kubectl create -f <directory>
```
`kubectl get` - List one or more resources.
```shell
# List all pods in plain-text output format.
kubectl get pods
# List all pods in plain-text output format and include additional information (such as node name).
kubectl get pods -o wide
# List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.
kubectl get replicationcontroller <rc-name>
# List all replication controllers and services together in plain-text output format.
kubectl get rc,services
# List all daemon sets, including uninitialized ones, in plain-text output format.
kubectl get ds --include-uninitialized
# List all pods running on node server01
kubectl get pods --field-selector=spec.nodeName=server01
```
`kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default.
```shell
# Display the details of the node with name <node-name>.
kubectl describe nodes <node-name>
# Display the details of the pod with name <pod-name>.
kubectl describe pods/<pod-name>
# Display the details of all the pods that are managed by the replication controller named <rc-name>.
# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.
kubectl describe pods <rc-name>
# Describe all pods, not including uninitialized ones
kubectl describe pods --include-uninitialized=false
```
{{< note >}}
@ -326,39 +326,39 @@ the pods running on it, the events generated for the node etc.
```shell
# Delete a pod using the type and name specified in the pod.yaml file.
kubectl delete -f pod.yaml
# Delete all the pods and services that have the label name=<label-name>.
kubectl delete pods,services -l name=<label-name>
# Delete all the pods and services that have the label name=<label-name>, including uninitialized ones.
kubectl delete pods,services -l name=<label-name> --include-uninitialized
# Delete all pods, including uninitialized ones.
kubectl delete pods --all
```
`kubectl exec` - Execute a command against a container in a pod.
```shell
# Get output from running 'date' from pod <pod-name>. By default, output is from the first container.
kubectl exec <pod-name> date
# Get output from running 'date' in container <container-name> of pod <pod-name>.
kubectl exec <pod-name> -c <container-name> date
# Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
kubectl exec -ti <pod-name> /bin/bash
```
`kubectl logs` - Print the logs for a container in a pod.
```shell
# Return a snapshot of the logs from pod <pod-name>.
kubectl logs <pod-name>
# Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
kubectl logs -f <pod-name>
```
## Examples: Creating and using plugins
@ -368,43 +368,52 @@ Use the following set of examples to help you familiarize yourself with writing
```shell
# create a simple plugin in any language and name the resulting executable file
# so that it begins with the prefix "kubectl-"
cat ./kubectl-hello
#!/bin/bash
# this plugin prints the words "hello world"
echo "hello world"
# with our plugin written, let's make it executable
sudo chmod +x ./kubectl-hello
# and move it to a location in our PATH
sudo mv ./kubectl-hello /usr/local/bin
# we have now created and "installed" a kubectl plugin.
# we can begin using our plugin by invoking it from kubectl as if it were a regular command
kubectl hello
```
```
hello world
```
```
# we can "uninstall" a plugin, by simply removing it from our PATH
sudo rm /usr/local/bin/kubectl-hello
```
In order to view all of the plugins that are available to `kubectl`, we can use
the `kubectl plugin list` subcommand:
```shell
kubectl plugin list
```
```
The following kubectl-compatible plugins are available:
/usr/local/bin/kubectl-hello
/usr/local/bin/kubectl-foo
/usr/local/bin/kubectl-bar
```
```
# this command can also warn us about plugins that are
# not executable, or that are overshadowed by other
# plugins, for example
sudo chmod -x /usr/local/bin/kubectl-foo
kubectl plugin list
```
```
The following kubectl-compatible plugins are available:
/usr/local/bin/kubectl-hello
@ -419,7 +428,7 @@ We can think of plugins as a means to build more complex functionality on top
of the existing kubectl commands:
```shell
cat ./kubectl-whoami
#!/bin/bash
# this plugin makes use of the `kubectl config` command in order to output
@ -432,12 +441,12 @@ context in our KUBECONFIG file:
```shell
# make the file executable
sudo chmod +x ./kubectl-whoami
# and move it into our PATH
sudo mv ./kubectl-whoami /usr/local/bin
kubectl whoami
```
```
Current user: plugins-user
```

View File

@ -47,30 +47,49 @@ Note that the IP below is dynamic and can change. It can be retrieved with `mini
* none (Runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker ([docker install](https://docs.docker.com/install/linux/docker-ce/ubuntu/)) and a Linux environment)
```shell
minikube start
```
```
Starting local Kubernetes cluster...
Running pre-create checks...
Creating machine...
Starting local Kubernetes cluster...
```
```shell
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
```
```
deployment.apps/hello-minikube created
```
```shell
kubectl expose deployment hello-minikube --type=NodePort
```
```
service/hello-minikube exposed
```
```
# We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it
# via the exposed service.
# To check whether the pod is up and running we can use the following:
kubectl get pod
```
```
NAME READY STATUS RESTARTS AGE
hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s
```
```
# We can see that the pod is still being created from the ContainerCreating status
kubectl get pod
```
```
NAME READY STATUS RESTARTS AGE
hello-minikube-3383150820-vctvh 1/1 Running 0 13s
```
```
# We can see that the pod is now Running and we will now be able to curl it:
curl $(minikube service hello-minikube --url)
```
```
Hostname: hello-minikube-7c77b68cff-8wdzq
@ -96,13 +115,26 @@ Request Headers:
Request Body:
-no body in request-
```
```shell
kubectl delete services hello-minikube
```
```
service "hello-minikube" deleted
```
```shell
kubectl delete deployment hello-minikube
```
```
deployment.extensions "hello-minikube" deleted
```
```shell
minikube stop
```
```
Stopping local Kubernetes cluster...
Stopping "minikube"...
```
@ -114,7 +146,7 @@ Stopping "minikube"...
To use [containerd](https://github.com/containerd/containerd) as the container runtime, run:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--container-runtime=containerd \
@ -124,7 +156,7 @@ $ minikube start \
Or you can use the extended version:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--extra-config=kubelet.container-runtime=remote \
@ -138,7 +170,7 @@ $ minikube start \
To use [CRI-O](https://github.com/kubernetes-incubator/cri-o) as the container runtime, run:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--container-runtime=cri-o \
@ -148,7 +180,7 @@ $ minikube start \
Or you can use the extended version:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--extra-config=kubelet.container-runtime=remote \
@ -162,7 +194,7 @@ $ minikube start \
To use [rkt](https://github.com/rkt/rkt) as the container runtime run:
```shell
minikube start \
--network-plugin=cni \
--enable-default-cni \
--container-runtime=rkt
@ -379,7 +411,7 @@ To do this, pass the required environment variables as flags during `minikube st
For example:
```shell
minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
--docker-env https_proxy=https://$YOURPROXY:PORT
```
@ -387,7 +419,7 @@ If your Virtual Machine address is 192.168.99.100, then chances are your proxy s
To by-pass proxy configuration for this IP address, you should modify your no_proxy settings. You can do so with:
```shell
export no_proxy=$no_proxy,$(minikube ip)
```
## Known Issues

View File

@ -96,7 +96,9 @@ kubectl create -f my-scheduler.yaml
Verify that the scheduler pod is running:
```shell
kubectl get pods --namespace=kube-system
```
```
NAME READY STATUS RESTARTS AGE
....
my-scheduler-lnf4s-4744f 1/1 Running 0 2m
@ -116,7 +118,9 @@ First, update the following fields in your YAML file:
If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. Add your scheduler name to the resourceNames of the rule applied for endpoints resources, as in the following example:
```
kubectl edit clusterrole system:kube-scheduler
```
```yaml
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:

View File

@ -42,7 +42,7 @@ Set the following flag:
The following sample command sets up an HA-compatible cluster in the GCE zone europe-west1-b:
```shell
MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh
```
Note that the commands above create a cluster with one master;
@ -65,7 +65,7 @@ as those are inherited from when you started your HA-compatible cluster.
The following sample command replicates the master on an existing HA-compatible cluster:
```shell
KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
```
## Removing a master replica
@ -82,7 +82,7 @@ If empty: any replica from the given zone will be removed.
The following sample command removes a master replica from an existing HA cluster:
```shell
KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh
```
## Handling master replica failures
@ -94,13 +94,13 @@ The following sample commands demonstrate this process:
1. Remove the broken replica:
```shell
KUBE_DELETE_NODES=false KUBE_GCE_ZONE=replica_zone KUBE_REPLICA_NAME=replica_name ./cluster/kube-down.sh
```
<ol start="2"><li>Add a new replica in place of the old one:</li></ol>
```shell
KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
```
## Best practices for replicating masters for HA clusters

View File

@ -71,10 +71,14 @@ Otherwise, you must manually approve certificates with the [`kubectl certificate
The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur:
```shell
sudo kubeadm alpha certs renew apiserver --use-api &
```
```
[1] 2890
[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created
```
```shell
kubectl certificate approve kubeadm-cert-kube-apiserver-ld526
```
```
certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved
[1]+ Done sudo kubeadm alpha certs renew apiserver --use-api
```

View File

@ -45,7 +45,9 @@ Services, and Deployments used by the cluster.
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
```shell
kubectl get namespaces
```
```
NAME STATUS AGE
default Active 13m
```
@ -74,7 +76,7 @@ Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which de
Create the `development` namespace using kubectl.
```shell
kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```
Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace:
@ -84,13 +86,15 @@ Save the following contents into file [`namespace-prod.json`](/examples/admin/na
And then let's create the `production` namespace using kubectl.
```shell
kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```shell
kubectl get namespaces --show-labels
```
```
NAME STATUS AGE LABELS
default Active 32m <none>
development Active 29s name=development
@ -108,7 +112,9 @@ To demonstrate this, let's spin up a simple Deployment and Pods in the `developm
We first check what is the current context:
```shell
kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
@ -133,18 +139,22 @@ users:
user:
password: h5M0FtUUIflBSdI7
username: admin
```
```shell
kubectl config current-context
```
```
lithe-cocoa-92103_kubernetes
```
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
```shell
kubectl config set-context dev --namespace=development \
--cluster=lithe-cocoa-92103_kubernetes \
--user=lithe-cocoa-92103_kubernetes
kubectl config set-context prod --namespace=production \
--cluster=lithe-cocoa-92103_kubernetes \
--user=lithe-cocoa-92103_kubernetes
```
@ -156,7 +166,9 @@ new request contexts depending on which namespace you wish to work against.
To view the new contexts:
```shell
kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
@ -196,13 +208,15 @@ users:
Let's switch to operate in the `development` namespace.
```shell
kubectl config use-context dev
```
You can verify your current context by doing the following:
```shell
kubectl config current-context
```
```
dev
```
@ -211,18 +225,24 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some contents.
```shell
kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a deployment with a replica count of 2 that runs a pod called `snowflake`, with a basic container that simply serves the hostname.
Note that `kubectl run` creates deployments only on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details.
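For instance, a hypothetical invocation that forces the old replication controller behavior would look like:

```shell
kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 --generator=run/v1
```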
```shell
kubectl get deployment
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
snowflake 2 2 2 2 2m
```
```shell
kubectl get pods -l run=snowflake
```
```
NAME READY STATUS RESTARTS AGE
snowflake-3968820950-9dgr8 1/1 Running 0 2m
snowflake-3968820950-vgc4n 1/1 Running 0 2m
@ -233,22 +253,24 @@ And this is great, developers are able to do what they want, and they do not hav
Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other.
```shell
kubectl config use-context prod
```
The `production` namespace should be empty, and the following commands should return nothing.
```shell
kubectl get deployment
kubectl get pods
```
Production likes to run cattle, so let's create some cattle pods.
```shell
kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
kubectl get deployment
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
cattle 5 5 5 5 10s

View File

@ -22,7 +22,9 @@ This page shows how to view, work in, and delete {{< glossary_tooltip text="name
1. List the current namespaces in a cluster using:
```shell
kubectl get namespaces
```
```
NAME STATUS AGE
default Active 11d
kube-system Active 11d
@ -38,13 +40,15 @@ Kubernetes starts with three initial namespaces:
You can also get the summary of a specific namespace using:
```shell
kubectl get namespaces <name>
```
Or you can get detailed information with:
```shell
kubectl describe namespaces <name>
```
```
Name: default
Labels: <none>
Annotations: <none>
@ -89,7 +93,7 @@ metadata:
Then run:
```shell
kubectl create -f ./my-namespace.yaml
```
Note that the name of your namespace must be a DNS compatible label.
@ -103,7 +107,7 @@ More information on `finalizers` can be found in the namespace [design doc](http
1. Delete a namespace with
```shell
kubectl delete namespaces <insert-some-namespace-name>
```
{{< warning >}}
@ -122,7 +126,9 @@ Services, and Deployments used by the cluster.
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
```shell
kubectl get namespaces
```
```
NAME STATUS AGE
default Active 13m
```
@ -151,19 +157,21 @@ Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which de
Create the `development` namespace using kubectl.
```shell
kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```
And then let's create the `production` namespace using kubectl.
```shell
kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
```
To be sure things are right, list all of the namespaces in our cluster.
```shell
kubectl get namespaces --show-labels
```
```
NAME STATUS AGE LABELS
default Active 32m <none>
development Active 29s name=development
@ -181,7 +189,9 @@ To demonstrate this, let's spin up a simple Deployment and Pods in the `developm
We first check what is the current context:
```shell
kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
@ -206,16 +216,20 @@ users:
user:
password: h5M0FtUUIflBSdI7
username: admin
```
```shell
kubectl config current-context
```
```
lithe-cocoa-92103_kubernetes
```
The next step is to define a context for the kubectl client to work in each namespace. The values of "cluster" and "user" fields are copied from the current context.
```shell
kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands provided two request contexts you can alternate against depending on what namespace you
@ -224,13 +238,13 @@ wish to work against.
Let's switch to operate in the `development` namespace.
```shell
kubectl config use-context dev
```
You can verify your current context by doing the following:
```shell
kubectl config current-context
```
```
dev
```
@ -239,18 +253,23 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some contents.
```shell
kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a deployment with a replica count of 2 that runs a pod called `snowflake`, with a basic container that simply serves the hostname.
Note that `kubectl run` creates deployments only on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details.
```shell
kubectl get deployment
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
snowflake 2 2 2 2 2m
```
```shell
kubectl get pods -l run=snowflake
```
```
NAME READY STATUS RESTARTS AGE
snowflake-3968820950-9dgr8 1/1 Running 0 2m
snowflake-3968820950-vgc4n 1/1 Running 0 2m
@ -261,26 +280,32 @@ And this is great, developers are able to do what they want, and they do not hav
Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other.
```shell
kubectl config use-context prod
```
The `production` namespace should be empty, and the following commands should return nothing.
```shell
kubectl get deployment
kubectl get pods
```
Production likes to run cattle, so let's create some cattle pods.
```shell
kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
kubectl get deployment
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
cattle 5 5 5 5 10s
```
```shell
kubectl get pods -l run=cattle
```
```
NAME READY STATUS RESTARTS AGE
cattle-2263376956-41xy6 1/1 Running 0 34s
cattle-2263376956-kw466 1/1 Running 0 34s

View File

@ -30,10 +30,14 @@ To start minikube, minimal version required is >= v0.33.1, run the with the
following arguments:
```shell
minikube version
```
```
minikube version: v0.33.1
```
```shell
minikube start --network-plugin=cni --memory=4096
```
For minikube you can deploy this simple "all-in-one" YAML file that includes
@ -41,7 +45,9 @@ DaemonSet configurations for Cilium, and the necessary configurations to connect
to the etcd instance deployed in minikube as well as appropriate RBAC settings:
```shell
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml
```
```
configmap/cilium-config created
daemonset.apps/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium created

View File

@ -117,7 +117,7 @@ itself. To attempt an eviction (perhaps more REST-precisely, to attempt to
You can attempt an eviction using `curl`:
```bash
curl -v -H 'Content-type: application/json' http://127.0.0.1:8080/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json
```
The API can respond in one of three ways:

View File

@ -36,7 +36,7 @@ process file system. The parameters cover various subsystems such as:
To get a list of all parameters, you can run
```shell
sudo sysctl -a
```
## Enabling Unsafe Sysctls
@ -76,14 +76,14 @@ application tuning. _Unsafe_ sysctls are enabled on a node-by-node basis with a
flag of the kubelet, e.g.:
```shell
kubelet --allowed-unsafe-sysctls \
'kernel.msg*,net.ipv4.route.min_pmtu' ...
```
For minikube, this can be done via the `extra-config` flag:
```shell
minikube start --extra-config="kubelet.AllowedUnsafeSysctls=kernel.msg*,net.ipv4.route.min_pmtu"...
```
Only _namespaced_ sysctls can be enabled this way.
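Once allowed by the kubelet, a namespaced sysctl can be requested from a pod's security context. A minimal sketch (the sysctl name and value here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    # kernel.msgmax is namespaced, so it can be set per pod
    # once the kubelet allows it via --allowed-unsafe-sysctls
    - name: kernel.msgmax
      value: "65536"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```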

View File

@ -31,7 +31,7 @@ your Service?
The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
```shell
kubectl describe pods ${POD_NAME}
```
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
@ -68,19 +68,19 @@ First, take a look at the logs of
the current container:
```shell
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
```
If your container has previously crashed, you can access the previous container's crash log with:
```shell
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```
Alternately, you can run commands inside that container with `exec`:
```shell
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```
{{< note >}}
@ -90,7 +90,7 @@ $ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${
As an example, to look at the logs from a running Cassandra pod, you might run
```shell
kubectl exec cassandra -- cat /var/log/cassandra/system.log
```
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
@ -145,7 +145,7 @@ First, verify that there are endpoints for the service. For every Service object
You can view this resource with:
```shell
kubectl get endpoints ${SERVICE_NAME}
```
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
@ -168,7 +168,7 @@ spec:
You can use:
```shell
kubectl get pods --selector=name=nginx,type=frontend
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.

View File

@ -39,7 +39,9 @@ Now, when you create a cluster, a message will indicate that the Fluentd log
collection daemons that run on each node will target Elasticsearch:
```shell
cluster/kube-up.sh
```
```
...
Project: kubernetes-satnam
Zone: us-central1-b
@ -63,7 +65,9 @@ all be running in the kube-system namespace soon after the cluster comes to
life.
```shell
kubectl get pods --namespace=kube-system
```
```
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h

View File

@ -145,7 +145,9 @@ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
You can observe the running pod:
```shell
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 5m
```
@ -155,7 +157,9 @@ has to download the container image first. When the pod status changes to `Runni
you can use the `kubectl logs` command to view the output of this counter pod.
```shell
kubectl logs counter
```
```
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
@ -169,21 +173,27 @@ if the pod is evicted from the node, log files are lost. Let's demonstrate this
by deleting the currently running counter container:
```shell
kubectl delete pod counter
```
```
pod "counter" deleted
```
and then recreating it:
```shell
kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
```
```
pod/counter created
```
After some time, you can access logs from the counter pod again:
```shell
kubectl logs counter
```
```
0: Mon Jan 1 00:01:00 UTC 2001
1: Mon Jan 1 00:01:01 UTC 2001
2: Mon Jan 1 00:01:02 UTC 2001
@ -226,7 +236,9 @@ It uses Stackdriver Logging [filtering syntax](https://cloud.google.com/logging/
to query specific logs. For example, you can run the following command:
```none
gcloud beta logging read 'logName="projects/$YOUR_PROJECT_ID/logs/count"' --format json | jq '.[].textPayload'
```
```
...
"2: Mon Jan 1 00:01:02 UTC 2001\n"
"1: Mon Jan 1 00:01:01 UTC 2001\n"

View File

@ -96,25 +96,35 @@ sudo mv ./kubectl-foo /usr/local/bin
You may now invoke your plugin as a `kubectl` command:
```
kubectl foo
```
```
I am a plugin named kubectl-foo
```
All args and flags are passed as-is to the executable:
```
kubectl foo version
```
```
1.0.0
```
All environment variables are also passed as-is to the executable:
```bash
export KUBECONFIG=~/.kube/config
kubectl foo config
```
```
/home/<user>/.kube/config
```
```shell
KUBECONFIG=/etc/kube/config kubectl foo config
```
```
/etc/kube/config
```
@ -142,22 +152,27 @@ Example:
```bash
# create a plugin
echo '#!/bin/bash\n\necho "My first command-line argument was $1"' > kubectl-foo-bar-baz
sudo chmod +x ./kubectl-foo-bar-baz
# "install" our plugin by placing it on our PATH
sudo mv ./kubectl-foo-bar-baz /usr/local/bin
# ensure our plugin is recognized by kubectl
kubectl plugin list
```
```
The following kubectl-compatible plugins are available:
/usr/local/bin/kubectl-foo-bar-baz
```
```
# test that calling our plugin via a "kubectl" command works
# even when additional arguments and flags are passed to our
# plugin executable by the user.
kubectl foo bar baz arg1 --meaningless-flag=true
```
```
My first command-line argument was arg1
```
@ -172,14 +187,16 @@ Example:
```bash
# create a plugin containing an underscore in its filename
echo '#!/bin/bash\n\necho "I am a plugin with a dash in my name"' > ./kubectl-foo_bar
sudo chmod +x ./kubectl-foo_bar
# move the plugin into your PATH
sudo mv ./kubectl-foo_bar /usr/local/bin
# our plugin can now be invoked from `kubectl` like so:
kubectl foo-bar
```
```
I am a plugin with a dash in my name
```
@ -188,11 +205,17 @@ The command from the above example, can be invoked using either a dash (`-`) or
```bash
# our plugin can be invoked with a dash
kubectl foo-bar
```
```
I am a plugin with a dash in my name
```
```bash
# it can also be invoked using an underscore
kubectl foo_bar
```
```
I am a plugin with a dash in my name
```
@ -203,7 +226,9 @@ For example, given a PATH with the following value: `PATH=/usr/local/bin/plugins
such that the output of the `kubectl plugin list` command is:
```bash
PATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins kubectl plugin list
```
```bash
The following kubectl-compatible plugins are available:
/usr/local/bin/plugins/kubectl-foo
@ -223,23 +248,39 @@ There is another kind of overshadowing that can occur with plugin filenames. Giv
```bash
# for a given kubectl command, the plugin with the longest possible filename will always be preferred
kubectl foo bar baz
```
```
Plugin kubectl-foo-bar-baz is executed
```
```bash
kubectl foo bar
```
```
Plugin kubectl-foo-bar is executed
```
```bash
kubectl foo bar baz buz
```
```
Plugin kubectl-foo-bar-baz is executed, with "buz" as its first argument
```
```bash
kubectl foo bar buz
```
```
Plugin kubectl-foo-bar is executed, with "buz" as its first argument
```
This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a "parent" plugin command:
```bash
ls ./plugin_command_tree
```
```
kubectl-parent
kubectl-parent-subcommand
kubectl-parent-subcommand-subsubcommand
@ -250,7 +291,9 @@ kubectl-parent-subcommand-subsubcommand
You can use the aforementioned `kubectl plugin list` command to ensure that your plugin is visible by `kubectl`, and verify that there are no warnings preventing it from being called as a `kubectl` command.
```bash
kubectl plugin list
```
```
The following kubectl-compatible plugins are available:
test/fixtures/pkg/kubectl/plugins/kubectl-foo

View File

@ -118,8 +118,9 @@ The status of your Federated Service will automatically reflect the
real-time status of the underlying Kubernetes services, for example:
``` shell
kubectl --context=federation-cluster describe services nginx
```
```
Name: nginx
Namespace: default
Labels: run=nginx
@ -187,7 +188,9 @@ this. For example, if your Federation is configured to use Google
Cloud DNS, and a managed DNS domain 'example.com':
``` shell
gcloud dns managed-zones describe example-dot-com
```
```
creationTime: '2016-06-26T18:18:39.229Z'
description: Example domain for Kubernetes Cluster Federation
dnsName: example.com.
@ -202,7 +205,9 @@ nameServers:
```
```shell
gcloud dns record-sets list --zone example-dot-com
```
```
NAME TYPE TTL DATA
example.com. NS 21600 ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
example.com. OA 21600 ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
@ -225,12 +230,12 @@ nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180
If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example:
``` shell
aws route53 list-hosted-zones
```
and
``` shell
aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
```
{{< /note >}}

View File

@ -42,7 +42,9 @@ kubectl create -f https://k8s.io/examples/podpreset/preset.yaml
Examine the created PodPreset:
```shell
kubectl get podpreset
```
```
NAME AGE
allow-database 1m
```
@ -54,13 +56,15 @@ The new PodPreset will act upon any pod that has label `role: frontend`.
Create a pod:
```shell
kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
```
List the running Pods:
```shell
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
website 1/1 Running 0 4m
```
@ -72,7 +76,7 @@ website 1/1 Running 0 4m
To see above output, run the following command:
```shell
kubectl get pod website -o yaml
```
## Pod Spec with ConfigMap Example
@ -157,7 +161,9 @@ when there is a conflict.
**If we run `kubectl describe...` we can see the event:**
```shell
kubectl describe ...
```
```
....
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
@ -169,7 +175,9 @@ Events:
Once you don't need a pod preset anymore, you can delete it with `kubectl`:
```shell
kubectl delete podpreset allow-database
```
```
podpreset "allow-database" deleted
```

View File

@ -50,21 +50,27 @@ This example cron job config `.spec` file prints the current time and a hello me
Run the example cron job by downloading the example file and then running this command:
```shell
kubectl create -f ./cronjob.yaml
```
```
cronjob "hello" created
```
Alternatively, you can use `kubectl run` to create a cron job without writing a full config:
```shell
kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
```
```
cronjob "hello" created
```
After creating the cron job, get its status using this command:
```shell
kubectl get cronjob hello
```
```
NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE
hello */1 * * * * False 0 <none>
```
@ -73,7 +79,9 @@ As you can see from the results of the command, the cron job has not scheduled o
Watch for the job to be created in around one minute:
```shell
kubectl get jobs --watch
```
```
NAME DESIRED SUCCESSFUL AGE
hello-4111706356 1 1 2s
```
@ -82,7 +90,9 @@ Now you've seen one running job scheduled by the "hello" cron job.
You can stop watching the job and view the cron job again to see that it scheduled the job:
```shell
kubectl get cronjob hello
```
```
NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE
hello */1 * * * * False 0 Mon, 29 Aug 2016 14:34:00 -0700
```
@ -95,12 +105,18 @@ Note that the job name and pod name are different.
```shell
# Replace "hello-4111706356" with the job name in your system
pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items..metadata.name})
echo $pods
```
```
hello-4111706356-o9qcm
```
```shell
kubectl logs $pods
```
```
Mon Aug 29 21:34:09 UTC 2016
Hello from the Kubernetes cluster
```
@ -110,7 +126,9 @@ Hello from the Kubernetes cluster
When you don't need a cron job any more, delete it with `kubectl delete cronjob`:
```shell
kubectl delete cronjob hello
```
```
cronjob "hello" deleted
```

View File

@ -46,9 +46,16 @@ cluster and reuse it for many jobs, as well as for long-running services.
Start RabbitMQ as follows:
```shell
kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml
```
```
service "rabbitmq-service" created
```
```shell
kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml
```
```
replicationcontroller "rabbitmq-controller" created
```
@ -64,7 +71,9 @@ First create a temporary interactive Pod.
```shell
# Create a temporary interactive container
kubectl run -i --tty temp --image ubuntu:18.04
```
```
Waiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false
... [ previous line repeats several times .. hit return when it stops ] ...
```
@ -161,9 +170,11 @@ For our example, we will create the queue and fill it using the amqp command lin
In practice, you might write a program to fill the queue using an amqp client library.
```shell
/usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d
```
```
job1
```
```shell
for f in apple banana cherry date fig grape lemon melon
do
/usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f
done
@ -184,7 +195,7 @@ example program:
Give the script execution permission:
```shell
chmod +x worker.py
```
Now, build an image. If you are working in the source
@ -195,7 +206,7 @@ and [worker.py](/examples/application/job/rabbitmq/worker.py). In either case,
build the image with this command:
```shell
docker build -t job-wq-1 .
```
For the [Docker Hub](https://hub.docker.com/), tag your app image with
@ -240,7 +251,9 @@ kubectl create -f ./job.yaml
Now wait a bit, then check on the job.
```shell
kubectl describe jobs/job-wq-1
```
```
Name: job-wq-1
Namespace: default
Selector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f

View File

@ -179,7 +179,9 @@ Assuming you don't actually have pods matching `app: zookeeper` in your namespac
then you'll see something like this:
```shell
kubectl get poddisruptionbudgets
```
```
NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
zk-pdb 2 0 7s
```
@ -187,7 +189,9 @@ zk-pdb 2 0 7s
If there are matching pods (say, 3), then you would see something like this:
```shell
kubectl get poddisruptionbudgets
```
```
NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
zk-pdb 2 1 7s
```
@ -198,7 +202,9 @@ counted the matching pods, and updated the status of the PDB.
You can get more information about the status of a PDB with this command:
```shell
kubectl get poddisruptionbudgets zk-pdb -o yaml
```
```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:

View File

@ -65,7 +65,9 @@ It defines an index.php page which performs some CPU intensive computations:
First, we will start a deployment running the image and expose it as a service:
```shell
kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
```
```
service/php-apache created
deployment.apps/php-apache created
```
@ -82,14 +84,18 @@ Roughly speaking, HPA will increase and decrease the number of replicas
See [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
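In rough terms, the autoscaler scales toward the ratio between the observed and target metric values, along the lines of:

```
desiredReplicas = ceil[ currentReplicas * ( currentMetricValue / desiredMetricValue ) ]
```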
```shell
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```
```
horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
We may check the current status of autoscaler by running:
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
@ -104,17 +110,19 @@ Now, we will see how the autoscaler reacts to increased load.
We will start a container, and send an infinite loop of queries to the php-apache service (please run it in a different terminal):
```shell
kubectl run -i --tty load-generator --image=busybox /bin/sh
Hit enter for command prompt
while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
```
Within a minute or so, we should see the higher CPU load by executing:
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 305% 1 10 1 3m
@ -124,7 +132,9 @@ Here, CPU consumption has increased to 305% of the request.
As a result, the deployment was resized to 7 replicas:
```shell
kubectl get deployment php-apache
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 7 7 7 7 19m
```
@ -145,11 +155,17 @@ the load generation by typing `<Ctrl> + C`.
Then we will verify the result state (after a minute or so):
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m
```
```shell
kubectl get deployment php-apache
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 1 1 1 1 27m
```
@ -172,7 +188,7 @@ by making use of the `autoscaling/v2beta2` API version.
First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form:
```shell
kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
```
Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML which looks like this:
@ -401,7 +417,9 @@ The conditions appear in the `status.conditions` field. To see the conditions a
we can use `kubectl describe hpa`:
```shell
kubectl describe hpa cm-test
```
```shell
Name: cm-test
Namespace: prom
Labels: <none>
@ -454,7 +472,9 @@ can use the following file to create it declaratively:
We will create the autoscaler by executing the following command:
```shell
kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
```
```
horizontalpodautoscaler.autoscaling/php-apache created
```

View File

@ -37,7 +37,7 @@ A rolling update works by:
Rolling updates are initiated with the `kubectl rolling-update` command:
kubectl rolling-update NAME \
([NEW_NAME] --image=IMAGE | -f FILE)
{{% /capture %}}
@ -50,7 +50,7 @@ Rolling updates are initiated with the `kubectl rolling-update` command:
To initiate a rolling update using a configuration file, pass the new file to
`kubectl rolling-update`:
kubectl rolling-update NAME -f FILE
The configuration file must:
@ -66,17 +66,17 @@ Replication controller configuration files are described in
### Examples
// Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
kubectl rolling-update frontend-v1 -f frontend-v2.json
// Update pods of frontend-v1 using JSON data passed into stdin.
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
## Updating the container image
To update only the container image, pass a new image name and tag with the
`--image` flag and (optionally) a new controller name:
kubectl rolling-update NAME [NEW_NAME] --image=IMAGE:TAG
The `--image` flag is only supported for single-container pods. Specifying
`--image` with multi-container pods returns an error.
@ -95,10 +95,10 @@ Moreover, the use of `:latest` is not recommended, see
### Examples
// Update the pods of frontend-v1 to frontend-v2
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
// Update the pods of frontend, keeping the replication controller name
kubectl rolling-update frontend --image=image:v2
## Required and optional fields
@ -165,14 +165,18 @@ spec:
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://git.k8s.io/community/contributors/design-proposals/cli/simple-rolling-update.md) to specify the new image:
```shell
kubectl rolling-update my-nginx --image=nginx:1.9.1
```
```
Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
```
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
```shell
kubectl get pods -l app=nginx -L deployment
```
```
NAME READY STATUS RESTARTS AGE DEPLOYMENT
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z 1/1 Running 0 1m ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0 35s ccba8fbd8cc8160970f63f9a2696fc46
@ -199,7 +203,9 @@ replicationcontroller "my-nginx" rolling updated
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:
```shell
kubectl rolling-update my-nginx --rollback
```
```
Setting "my-nginx" replicas to 1
Continuing update with existing controller my-nginx.
Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
@ -239,7 +245,9 @@ spec:
and roll it out:
```shell
kubectl rolling-update my-nginx -f ./nginx-rc.yaml
```
```
Created my-nginx-v4
Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
Scaling my-nginx-v4 up to 1

View File

@ -46,7 +46,9 @@ Make sure:
receiving the expected protections, it is important to verify the Kubelet version of your nodes:
```shell
kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
```
```
gke-test-default-pool-239f5d02-gyn2: v1.4.0
gke-test-default-pool-239f5d02-x1kf: v1.4.0
gke-test-default-pool-239f5d02-xwux: v1.4.0
@ -58,7 +60,7 @@ Make sure:
module is enabled, check the `/sys/module/apparmor/parameters/enabled` file:
```shell
cat /sys/module/apparmor/parameters/enabled
```
```
Y
```
@ -76,7 +78,9 @@ Make sure:
expanded. You can verify that your nodes are running docker with:
```shell
kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.containerRuntimeVersion}\n{end}'
```
```
gke-test-default-pool-239f5d02-gyn2: docker://1.11.2
gke-test-default-pool-239f5d02-x1kf: docker://1.11.2
gke-test-default-pool-239f5d02-xwux: docker://1.11.2
@ -91,7 +95,9 @@ Make sure:
node by checking the `/sys/kernel/security/apparmor/profiles` file. For example:
```shell
ssh gke-test-default-pool-239f5d02-gyn2 "sudo cat /sys/kernel/security/apparmor/profiles | sort"
```
```
apparmor-test-deny-write (enforce)
apparmor-test-audit-write (enforce)
docker-default (enforce)
@ -107,7 +113,9 @@ on nodes by checking the node ready condition message (though this is likely to
later release):
```shell
kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}\n{end}'
```
```
gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
@@ -148,14 +156,18 @@ prerequisites have not been met, the Pod will be rejected, and will not run.
To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event:
```shell
$ kubectl get events | grep Created
kubectl get events | grep Created
```
```
22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-minion-group-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
```
You can also verify directly that the container's root process is running with the correct profile by checking its proc attr:
```shell
$ kubectl exec <pod_name> cat /proc/1/attr/current
kubectl exec <pod_name> cat /proc/1/attr/current
```
```
k8s-apparmor-example-deny-write (enforce)
```
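If the pod runs more than one container, you can choose which one to inspect with the `-c` flag. A sketch, with placeholder names:

```shell
kubectl exec <pod_name> -c <container_name> cat /proc/1/attr/current
```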
@@ -173,12 +185,12 @@ nodes. For this example we'll just use SSH to install the profiles, but other ap
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
```shell
$ NODES=(
NODES=(
# The SSH-accessible domain names of your nodes
gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s)
$ for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>
profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
@@ -198,14 +210,16 @@ Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
{{< codenew file="pods/security/hello-apparmor.yaml" >}}
```shell
$ kubectl create -f ./hello-apparmor.yaml
kubectl create -f ./hello-apparmor.yaml
```
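Before digging into the events, it is worth a quick sanity check that the pod reached the Running state (the pod name comes from the manifest above):

```shell
kubectl get pod hello-apparmor
```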
If we look at the pod events, we can see that the Pod container was created with the AppArmor
profile "k8s-apparmor-example-deny-write":
```shell
$ kubectl get events | grep hello-apparmor
kubectl get events | grep hello-apparmor
```
```
14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2
14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox"
@@ -216,14 +230,18 @@ $ kubectl get events | grep hello-apparmor
We can verify that the container is actually running with that profile by checking its proc attr:
```shell
$ kubectl exec hello-apparmor cat /proc/1/attr/current
kubectl exec hello-apparmor cat /proc/1/attr/current
```
```
k8s-apparmor-example-deny-write (enforce)
```
Finally, we can see what happens if we try to violate the profile by writing to a file:
```shell
$ kubectl exec hello-apparmor touch /tmp/test
kubectl exec hello-apparmor touch /tmp/test
```
```
touch: /tmp/test: Permission denied
error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
```
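The same denial shows up on the node itself, since AppArmor reports violations through the kernel log. As a sketch, assuming you can SSH to the node that is running the pod:

```shell
# Node name taken from the earlier examples; substitute your own.
ssh gke-test-default-pool-239f5d02-gyn2 "sudo dmesg | grep DENIED"
```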
@@ -231,7 +249,9 @@ error: error executing remote command: command terminated with non-zero exit cod
To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
```shell
$ kubectl create -f /dev/stdin <<EOF
kubectl create -f /dev/stdin <<EOF
```
```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -245,8 +265,12 @@ spec:
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
pod/hello-apparmor-2 created
```
$ kubectl describe pod hello-apparmor-2
```shell
kubectl describe pod hello-apparmor-2
```
```
Name: hello-apparmor-2
Namespace: default
Node: gke-test-default-pool-239f5d02-x1kf/