Merge branch 'master' of https://github.com/kubernetes/kubernetes.github.io into release-1.6

* 'master' of https://github.com/kubernetes/kubernetes.github.io: (22 commits)
  Revert "Document new optional support for ConfigMap and Secret"
  Revert "mend"
  modify one word
  fix typo
  Fix typos in running zookeeper article
  Add note about moved content. (#2563)
  Move Guide topic to Tasks: Downward API (#2439)
  modify typora
  fix etcd disaster-recovery hyperlink
  replace kubernetes.d with kubelet.d
  remove its name from file content
  Let's put kubectl in ~/bin.
  Move Guide toic to Tasks: kubectl exec.
  Fix travis and add comments
  Move Pod Lifecycle to Concepts. (#2420)
  Update deployment completeness documentation
  rollback PR #2522
  kubectl_apply.md-change it for label key
  Update the links of Deployment User Guide #2534
  mark openstack-heat as standalone-salt-conf
  ...

# Conflicts:
#	docs/user-guide/configmap/index.md

commit 7673131c02

.travis.yml (18 changed lines)

@@ -7,9 +7,17 @@ install:
- export PATH=$GOPATH/bin:$PATH
- mkdir -p $HOME/gopath/src/k8s.io
- mv $TRAVIS_BUILD_DIR $HOME/gopath/src/k8s.io/kubernetes.github.io

# (1) Fetch dependencies for us to run the tests in test/examples_test.go
- go get -t -v k8s.io/kubernetes.github.io/test
- git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
- go get -t -v k8s.io/md-check

# The dependencies are complicated for test/examples_test.go
# k8s.io/kubernetes/pkg is a dependency, which in turn depends on apimachinery
# but we also have apimachinery directly as one of our dependencies, which causes a conflict.
# Additionally, we get symlinks when we clone the directory. The below steps do the following:

# (a) Replace the symlink with the actual dependencies from kubernetes/staging/src/
# (b) copy all the vendored files to $GOPATH/src
- rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery
- rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apiserver
- rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/client-go

@@ -18,6 +26,12 @@ install:
- cp -r $GOPATH/src/k8s.io/kubernetes/vendor/* $GOPATH/src/
- rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/*
- cp -r $GOPATH/src/k8s.io/kubernetes/staging/src/* $GOPATH/src/

# (2) Fetch md-check along with all its dependencies.
- git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
- go get -t -v k8s.io/md-check

# (3) Fetch mungedocs
- go get -v k8s.io/kubernetes/cmd/mungedocs

script:

@@ -25,6 +25,12 @@ toc:
  section:
  - docs/concepts/object-metadata/annotations.md

+- title: Workloads
+  section:
+  - title: Pods
+    section:
+    - docs/concepts/workloads/pods/pod-lifecycle.md

- title: Configuration
  section:
  - docs/concepts/configuration/container-command-args.md

@@ -6,6 +6,7 @@ toc:
- title: Using the Kubectl Command-Line
  section:
  - docs/tasks/kubectl/list-all-running-container-images.md
+  - docs/tasks/kubectl/get-shell-running-container.md

- title: Configuring Pods and Containers
  section:

@@ -15,6 +16,7 @@ toc:
  - docs/tasks/configure-pod-container/configure-volume-storage.md
  - docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
  - docs/tasks/configure-pod-container/environment-variable-expose-pod-information.md
+  - docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information.md
  - docs/tasks/configure-pod-container/distribute-credentials-secure.md
  - docs/tasks/configure-pod-container/pull-image-private-registry.md
  - docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md

@@ -20,7 +20,7 @@ Data Reliability: for reasonable safety, either etcd needs to be run as a
etcd) or etcd's data directory should be located on durable storage (e.g., GCE's
persistent disk). In either case, if high availability is required--as it might
be in a production cluster--the data directory ought to be [backed up
-periodically](https://coreos.com/etcd/docs/2.2.1/admin_guide.html#disaster-recovery),
+periodically](https://coreos.com/etcd/docs/latest/op-guide/recovery.html),
to reduce downtime in case of corruption.

## Default configuration

@@ -10,11 +10,11 @@ The Salt scripts are shared across multiple hosting providers and depending on w

## Salt cluster setup

-The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
+The **salt-master** service runs on the kubernetes-master [(except on the default GCE and OpenStack-Heat setup)](#standalone-salt-configuration-on-gce-and-others).

The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.

-Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
+Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE and OpenStack-Heat)](#standalone-salt-configuration-on-gce-and-others).

```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf

@@ -25,15 +25,15 @@ The salt-master is contacted by each salt-minion and depending upon the machine

If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.

-## Standalone Salt Configuration on GCE
+## Standalone Salt Configuration on GCE and others

-On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
+On GCE and OpenStack, using the OpenStack-Heat provider, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.

-All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
+All remaining sections that refer to master/minion setups should be ignored for GCE and OpenStack. One fallout of this setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.

## Salt security

-*(Not applicable on default GCE setup.)*
+*(Not applicable on default GCE and OpenStack-Heat setup.)*

Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)

@@ -71,8 +71,9 @@ account. To create additional API tokens for a service account, create a secret
of type `ServiceAccountToken` with an annotation referencing the service
account, and the controller will update it with a generated token:

-```json
+secret.json:
+
+```json
{
    "kind": "Secret",
    "apiVersion": "v1",

@@ -100,4 +101,4 @@ kubectl delete secret mysecretname
### Service Account Controller

Service Account Controller manages ServiceAccount inside namespaces, and ensures
-a ServiceAccount named "default" exists in every active namespace.
+a ServiceAccount named "default" exists in every active namespace.

@@ -26,11 +26,11 @@ For example, this is how to start a simple web server as a static pod:
[joe@host ~] $ ssh my-node1
```

-2. Choose a directory, say `/etc/kubelet.d` and place a web server pod definition there, e.g. `/etc/kubernetes.d/static-web.yaml`:
+2. Choose a directory, say `/etc/kubelet.d` and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:

```shell
-[root@my-node1 ~] $ mkdir /etc/kubernetes.d/
-[root@my-node1 ~] $ cat <<EOF >/etc/kubernetes.d/static-web.yaml
+[root@my-node1 ~] $ mkdir /etc/kubelet.d/
+[root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:

@@ -72,8 +72,8 @@ When kubelet starts, it automatically starts all pods defined in directory speci

```shell
[joe@my-node1 ~] $ docker ps
-CONTAINER ID  IMAGE         COMMAND  CREATED        STATUS        NAMES
-f6d05272b57e  nginx:latest  "nginx"  8 minutes ago  Up 8 minutes  k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
+CONTAINER ID  IMAGE         COMMAND  CREATED        STATUS        PORTS  NAMES
+f6d05272b57e  nginx:latest  "nginx"  8 minutes ago  Up 8 minutes         k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```

If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:

@@ -81,9 +81,9 @@ If we look at our Kubernetes API server (running on host `my-master`), we see th
```shell
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
-POD                  IP          CONTAINER(S)  IMAGE(S)  HOST                     LABELS       STATUS   CREATED     MESSAGE
-static-web-my-node1  172.17.0.3                          my-node1/192.168.100.71  role=myrole  Running  11 minutes
-                                 web           nginx                                           Running  11 minutes
+NAME                  READY     STATUS    RESTARTS   AGE
+static-web-my-node1   1/1       Running   0          2m

```

Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.

@@ -94,8 +94,9 @@ Notice we cannot delete the pod with the API server (e.g. via [`kubectl`](/docs/
[joe@my-master ~] $ kubectl delete pod static-web-my-node1
pods/static-web-my-node1
[joe@my-master ~] $ kubectl get pods
-POD                  IP          CONTAINER(S)  IMAGE(S)  HOST ...
-static-web-my-node1  172.17.0.3                          my-node1/192.168.100.71 ...
+NAME                  READY     STATUS    RESTARTS   AGE
+static-web-my-node1   1/1       Running   0          12s

```

Back to our `my-node1` host, we can try to stop the container manually and see that kubelet automatically restarts it in a while:

@@ -114,11 +115,11 @@ CONTAINER ID IMAGE COMMAND CREATED ...
Running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.

```shell
-[joe@my-node1 ~] $ mv /etc/kubernetes.d/static-web.yaml /tmp
+[joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
// no nginx container is running
-[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubernetes.d/
+[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID        IMAGE         COMMAND                CREATED ...

@@ -0,0 +1,282 @@
---
title: Pod Lifecycle
---

{% capture overview %}

{% comment %}Updated: 4/14/2015{% endcomment %}
{% comment %}Edited and moved to Concepts section: 2/2/17{% endcomment %}

This page describes the lifecycle of a Pod.

{% endcapture %}


{% capture body %}

## Pod phase

A Pod's `status` field is a
[PodStatus](/docs/resources-reference/v1.5/#podstatus-v1)
object, which has a `phase` field.

The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle. The phase is not intended to be a comprehensive rollup of observations
of Container or Pod state, nor is it intended to be a comprehensive state machine.

The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that
have a given `phase` value.

Here are the possible values for `phase`:

* Pending: The Pod has been accepted by the Kubernetes system, but one or more of
  the Container images has not been created. This includes time before being
  scheduled as well as time spent downloading images over the network,
  which could take a while.

* Running: The Pod has been bound to a node, and all of the Containers have been
  created. At least one Container is still running, or is in the process of
  starting or restarting.

* Succeeded: All Containers in the Pod have terminated in success, and will not
  be restarted.

* Failed: All Containers in the Pod have terminated, and at least one Container
  has terminated in failure. That is, the Container either exited with non-zero
  status or was terminated by the system.

* Unknown: For some reason the state of the Pod could not be obtained, typically
  due to an error in communicating with the host of the Pod.
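
For example, you can read a Pod's `phase` directly with `kubectl` (a sketch;
`my-pod` is a hypothetical Pod name):

```shell
# print only the status.phase field of the Pod
kubectl get pod my-pod -o jsonpath='{.status.phase}'
```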

## Pod conditions

A Pod has a PodStatus, which has an array of
[PodConditions](/docs/resources-reference/v1.5/#podcondition). Each element
of the PodCondition array has a `type` field and a `status` field. The `type`
field is a string, with possible values PodScheduled, Ready, Initialized, and
Unschedulable. The `status` field is a string, with possible values True, False,
and Unknown.
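
As an illustration, the conditions array in a Pod's `status` might look like
this (a hypothetical sketch, not output captured from a real cluster):

```yaml
status:
  phase: Running
  conditions:
  - type: Initialized
    status: "True"
  - type: Ready
    status: "True"
  - type: PodScheduled
    status: "True"
```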

## Container probes

A [Probe](/docs/resources-reference/v1.5/#probe-v1) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler) implemented by
the Container. There are three types of handlers:

* [ExecAction](/docs/resources-reference/v1.5/#execaction-v1):
  Executes a specified command inside the Container. The diagnostic
  is considered successful if the command exits with a status code of 0.

* [TCPSocketAction](/docs/resources-reference/v1.5/#tcpsocketaction-v1):
  Performs a TCP check against the Container's IP address on
  a specified port. The diagnostic is considered successful if the port is open.

* [HTTPGetAction](/docs/resources-reference/v1.5/#httpgetaction-v1):
  Performs an HTTP Get request against the Container's IP
  address on a specified port and path. The diagnostic is considered successful
  if the response has a status code greater than or equal to 200 and less than 400.
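
For example, here is a probe that uses an ExecAction handler, wired to a
liveness probe (described below). This is a sketch; the command, file path,
and timings are illustrative:

```yaml
livenessProbe:
  exec:
    command:        # ExecAction: healthy while /tmp/healthy exists
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 10
```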

Each probe has one of three results:

* Success: The Container passed the diagnostic.
* Failure: The Container failed the diagnostic.
* Unknown: The diagnostic failed, so no action should be taken.

The kubelet can optionally perform and react to two kinds of probes on running
Containers:

* `livenessProbe`: Indicates whether the Container is running. If
  the liveness probe fails, the kubelet kills the Container, and the Container
  is subjected to its [restart policy](#restart-policy). If a Container does not
  provide a liveness probe, the default state is `Success`.

* `readinessProbe`: Indicates whether the Container is ready to service requests.
  If the readiness probe fails, the endpoints controller removes the Pod's IP
  address from the endpoints of all Services that match the Pod. The default
  state of readiness before the initial delay is `Failure`. If a Container does
  not provide a readiness probe, the default state is `Success`.
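
A readiness probe is declared the same way as a liveness probe. For example
(a sketch, with a hypothetical readiness-only endpoint and port):

```yaml
readinessProbe:
  httpGet:
    path: /ready    # hypothetical endpoint checked only for readiness
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 1
```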

### When should you use liveness or readiness probes?

If the process in your Container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's `restartPolicy`.

If you'd like your Container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.

If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.

If you want your Container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.

Note that if you just want to be able to drain requests when the Pod is deleted,
you do not necessarily need a readiness probe; on deletion, the Pod automatically
puts itself into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the Containers in the Pod
to stop.

## Pod and Container status

For detailed information about Pod and Container status, see
[PodStatus](/docs/resources-reference/v1.5/#podstatus-v1)
and
[ContainerStatus](/docs/resources-reference/v1.5/#containerstatus-v1).
Note that the information reported as Pod status depends on the current
[ContainerState](/docs/resources-reference/v1.5/#containerstate-v1).

## Restart policy

A PodSpec has a `restartPolicy` field with possible values Always, OnFailure,
and Never. The default value is Always.
`restartPolicy` applies to all Containers in the Pod. `restartPolicy` only
refers to restarts of the Containers by the kubelet on the same node. Failed
Containers that are restarted by the kubelet are restarted with an exponential
back-off delay (10s, 20s, 40s ...) capped at five minutes, and the delay is reset
after ten minutes of successful execution. As discussed in the
[Pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof),
once bound to a node, a Pod will never be rebound to another node.
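
As a sketch (the Pod name, image, and command are illustrative), `restartPolicy`
is set once at the Pod level and applies to every Container in the Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo          # hypothetical name
spec:
  restartPolicy: OnFailure    # applies to all Containers below
  containers:
  - name: worker
    image: gcr.io/google_containers/busybox
    # exits non-zero, so the kubelet restarts it with exponential back-off
    command: ["sh", "-c", "exit 1"]
```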

## Pod lifetime

In general, Pods do not disappear until someone destroys them. This might be a
human or a controller. The only exception to
this rule is that Pods with a `phase` of Succeeded or Failed for more than some
duration (determined by the master) will expire and be automatically destroyed.

Three types of controllers are available:

- Use a [Job](/docs/user-guide/jobs/) for Pods that are expected to terminate,
  for example, batch computations. Jobs are appropriate only for Pods with
  `restartPolicy` equal to OnFailure or Never.

- Use a [ReplicationController](/docs/user-guide/replication-controller/),
  [ReplicaSet](/docs/user-guide/replicasets/), or
  [Deployment](/docs/user-guide/deployments/)
  for Pods that are not expected to terminate, for example, web servers.
  ReplicationControllers are appropriate only for Pods with a `restartPolicy` of
  Always.

- Use a [DaemonSet](/docs/admin/daemons/) for Pods that need to run one per
  machine, because they provide a machine-specific system service.

All three types of controllers contain a PodTemplate. It
is recommended to create the appropriate controller and let
it create Pods, rather than directly create Pods yourself. That is because Pods
alone are not resilient to machine failures, but controllers are.

If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.

## Examples

### Advanced liveness probe example

Liveness probes are executed by the kubelet, so all requests are made in the
kubelet network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: gcr.io/google_containers/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
```

### Example states

* Pod is running and has one Container. Container exits with success.
  * Log completion event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Pod `phase` becomes Succeeded.
    * Never: Pod `phase` becomes Succeeded.

* Pod is running and has one Container. Container exits with failure.
  * Log failure event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Pod `phase` becomes Failed.

* Pod is running and has two Containers. Container 1 exits with failure.
  * Log failure event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Do not restart Container; Pod `phase` stays Running.
  * If Container 1 is not running, and Container 2 exits:
    * Log failure event.
    * If `restartPolicy` is:
      * Always: Restart Container; Pod `phase` stays Running.
      * OnFailure: Restart Container; Pod `phase` stays Running.
      * Never: Pod `phase` becomes Failed.

* Pod is running and has one Container. Container runs out of memory.
  * Container terminates in failure.
  * Log OOM event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Log failure event; Pod `phase` becomes Failed.

* Pod is running, and a disk dies.
  * Kill all Containers.
  * Log appropriate event.
  * Pod `phase` becomes Failed.
  * If running under a controller, Pod is recreated elsewhere.

* Pod is running, and its node is segmented out.
  * Node controller waits for timeout.
  * Node controller sets Pod `phase` to Failed.
  * If running under a controller, Pod is recreated elsewhere.

{% endcapture %}


{% capture whatsnext %}

* Get hands-on experience
  [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

* Get hands-on experience
  [configuring liveness and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).

* [Container Lifecycle Hooks](/docs/user-guide/container-environment/)

{% endcapture %}

{% include templates/concept.md %}

@@ -23,7 +23,7 @@ This guide assumes you have access to a working OpenStack cluster with the follo
- Heat
- DNS resolution of instance names

-By default this provider provisions 4 m1.medium instances. If you do not have resources available, please see the [Set additional configuration values](#set-additional-configuration-values) section for information on reducing the footprint of your cluster.
+By default this provider provisions 4 `m1.medium` instances. If you do not have resources available, please see the [Set additional configuration values](#set-additional-configuration-values) section for information on reducing the footprint of your cluster.

## Pre-Requisites
If you already have the required versions of the OpenStack CLI tools installed and configured, you can move on to the [Starting a cluster](#starting-a-cluster) section.

@@ -92,7 +92,7 @@ Please see the contents of these files for documentation regarding each variable

## Starting a cluster

-Once Kubernetes version 1.3 is released, and you've installed the OpenStack CLI tools and have set your OpenStack environment variables, issue this command:
+Once you've installed the OpenStack CLI tools and have set your OpenStack environment variables, issue this command:

```sh
export KUBERNETES_PROVIDER=openstack-heat; curl -sS https://get.k8s.io | bash

@@ -194,6 +194,11 @@ nova list --name=$STACK_NAME

See the [OpenStack CLI Reference](http://docs.openstack.org/cli-reference/) for more details.

+### Salt
+
+The OpenStack-Heat provider uses a [standalone Salt configuration](/docs/admin/salt/#standalone-salt-configuration-on-gce-and-others).
+It only uses Salt for bootstrapping the machines; it creates no salt-master and does not auto-start the salt-minion service on the nodes.

## SSHing to your nodes

Your public key was added during the cluster turn-up, so you can easily ssh to them for troubleshooting purposes.

@@ -159,15 +159,17 @@ juju scp kubernetes-master/0:config ~/.kube/config

Fetch a binary for the architecture you have deployed. If your client is a
different architecture you will need to get the appropriate `kubectl` binary
-through other means.
+through other means. In this example we copy kubectl to `~/bin` for convenience;
+by default, this should be in your `$PATH`.

```
-juju scp kubernetes-master/0:kubectl ./kubectl
+mkdir -p ~/bin
+juju scp kubernetes-master/0:kubectl ~/bin/kubectl
```

Query the cluster:

-./kubectl cluster-info
+kubectl cluster-info

Output:


@@ -156,7 +156,7 @@ The output is:

Verify that the replica count is zero:

-kubectl get deployment --namespace-kube-system
+kubectl get deployment --namespace=kube-system

The output displays 0 in the DESIRED and CURRENT columns:

@@ -0,0 +1,54 @@
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example-2
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          if [[ -e /etc/cpu_limit ]]; then
            echo -en '\n'; cat /etc/cpu_limit; fi;
          if [[ -e /etc/cpu_request ]]; then
            echo -en '\n'; cat /etc/cpu_request; fi;
          if [[ -e /etc/mem_limit ]]; then
            echo -en '\n'; cat /etc/mem_limit; fi;
          if [[ -e /etc/mem_request ]]; then
            echo -en '\n'; cat /etc/mem_request; fi;
          sleep 5;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
          - path: "cpu_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
          - path: "mem_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
          - path: "mem_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory

@@ -0,0 +1,39 @@
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c"]
      args:
      - while true; do
          if [[ -e /etc/labels ]]; then
            echo -en '\n\n'; cat /etc/labels; fi;
          if [[ -e /etc/annotations ]]; then
            echo -en '\n\n'; cat /etc/annotations; fi;
          sleep 5;
        done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

@@ -0,0 +1,242 @@
---
title: Exposing Pod Information to Containers Using a DownwardAPIVolumeFile
---

{% capture overview %}

This page shows how a Pod can use a DownwardAPIVolumeFile to expose information
about itself to Containers running in the Pod. A DownwardAPIVolumeFile can expose
Pod fields and Container fields.

{% endcapture %}


{% capture prerequisites %}

{% include task-tutorial-prereqs.md %}

{% endcapture %}

{% capture steps %}

## The Downward API

There are two ways to expose Pod and Container fields to a running Container:

* [Environment variables](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/)
* DownwardAPIVolumeFiles

Together, these two ways of exposing Pod and Container fields are called the
*Downward API*.

## Storing Pod fields

In this exercise, you create a Pod that has one Container.
Here is the configuration file for the Pod:

{% include code.html language="yaml" file="dapi-volume.yaml" ghlink="/docs/tasks/configure-pod-container/dapi-volume.yaml" %}

In the configuration file, you can see that the Pod has a `downwardAPI` Volume,
and the Container mounts the Volume at `/etc`.

Look at the `items` array under `downwardAPI`. Each element of the array is a
[DownwardAPIVolumeFile](/docs/resources-reference/v1.5/#downwardapivolumefile-v1).
The first element specifies that the value of the Pod's
`metadata.labels` field should be stored in a file named `labels`.
The second element specifies that the value of the Pod's `annotations`
field should be stored in a file named `annotations`.

**Note**: The fields in this example are Pod fields. They are not
fields of the Container in the Pod.

Create the Pod:

```shell
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/dapi-volume.yaml
```

Verify that the Container in the Pod is running:

```shell
kubectl get pods
```

View the Container's logs:

```shell
kubectl logs kubernetes-downwardapi-volume-example
```

The output shows the contents of the `labels` file and the `annotations` file:

```shell
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"

build="two"
builder="john-doe"
```

Get a shell into the Container that is running in your Pod:

```
kubectl exec -it kubernetes-downwardapi-volume-example -- sh
```

In your shell, view the `labels` file:

```shell
/# cat /etc/labels
```

The output shows that all of the Pod's labels have been written
to the `labels` file:

```shell
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"
```

Similarly, view the `annotations` file:

```shell
/# cat /etc/annotations
```

View the files in the `/etc` directory:

```shell
/# ls -laR /etc
```

In the output, you can see that the `labels` and `annotations` files
are in a temporary subdirectory: in this example,
`..2982_06_02_21_47_53.299460680`. In the `/etc` directory, `..data` is
a symbolic link to the temporary subdirectory. Also in the `/etc` directory,
`labels` and `annotations` are symbolic links.

```
drwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680
lrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680
lrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations
lrwxrwxrwx ... Feb 6 21:47 labels -> ..data/labels

/etc/..2982_06_02_21_47_53.299460680:
total 8
-rw-r--r-- ... Feb 6 21:47 annotations
-rw-r--r-- ... Feb 6 21:47 labels
```

Using symbolic links enables dynamic atomic refresh of the metadata; updates are
written to a new temporary directory, and the `..data` symlink is updated
atomically using
[rename(2)](http://man7.org/linux/man-pages/man2/rename.2.html).
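
You can follow the indirection yourself from inside the Container (a sketch;
the timestamped directory name will differ in a real Pod):

```shell
/# readlink /etc/labels
..data/labels
/# readlink /etc/..data
..2982_06_02_21_47_53.299460680
```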

Exit the shell:

```shell
/# exit
```

## Storing Container fields

In the preceding exercise, you stored Pod fields in a DownwardAPIVolumeFile.
In this next exercise, you store Container fields. Here is the configuration
file for a Pod that has one Container:

{% include code.html language="yaml" file="dapi-volume-resources.yaml" ghlink="/docs/tasks/configure-pod-container/dapi-volume-resources.yaml" %}

In the configuration file, you can see that the Pod has a `downwardAPI` Volume,
and the Container mounts the Volume at `/etc`.

Look at the `items` array under `downwardAPI`. Each element of the array is a
DownwardAPIVolumeFile.

The first element specifies that in the Container named `client-container`,
the value of the `limits.cpu` field should be stored in a file named `cpu_limit`.

Create the Pod:

```shell
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/dapi-volume-resources.yaml
```

Get a shell into the Container that is running in your Pod:

```
kubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh
```

In your shell, view the `cpu_limit` file:

```shell
/# cat /etc/cpu_limit
```

You can use similar commands to view the `cpu_request`, `mem_limit`, and
`mem_request` files.

{% endcapture %}

{% capture discussion %}

## Capabilities of the Downward API

The following information is available to Containers through environment
variables and DownwardAPIVolumeFiles:

* The node's name
* The Pod's name
* The Pod's namespace
* The Pod's IP address
* The Pod's service account name
* A Container's CPU limit
* A Container's CPU request
* A Container's memory limit
* A Container's memory request

In addition, the following information is available through
DownwardAPIVolumeFiles:

* The Pod's labels
* The Pod's annotations

**Note**: If CPU and memory limits are not specified for a Container, the
Downward API defaults to the node allocatable value for CPU and memory.

## Projecting keys to specific paths and file permissions

You can project keys to specific paths and specific permissions on a per-file
basis. For more information, see
[Secrets](/docs/user-guide/secrets/).
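
For instance, a downwardAPI volume item can carry a per-file `mode` (a sketch;
this assumes your cluster version supports per-file modes, and the path is
illustrative):

```yaml
downwardAPI:
  items:
  - path: "labels"
    fieldRef:
      fieldPath: metadata.labels
    mode: 0644    # octal file permissions for this projected file
```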

## Motivation for the Downward API

It is sometimes useful for a Container to have information about itself, without
being overly coupled to Kubernetes. The Downward API allows containers to consume
information about themselves or the cluster without using the Kubernetes client
or API server.

An example is an existing application that assumes a particular well-known
environment variable holds a unique identifier. One possibility is to wrap the
application, but that is tedious and error prone, and it violates the goal of low
coupling. A better option would be to use the Pod's name as an identifier, and
inject the Pod's name into the well-known environment variable.

{% endcapture %}


{% capture whatsnext %}

* [PodSpec](/docs/resources-reference/v1.5/#podspec-v1)
* [Volume](/docs/resources-reference/v1.5/#volume-v1)
* [DownwardAPIVolumeSource](/docs/resources-reference/v1.5/#downwardapivolumesource-v1)
* [DownwardAPIVolumeFile](/docs/resources-reference/v1.5/#downwardapivolumefile-v1)
* [ResourceFieldSelector](/docs/resources-reference/v1.5/#resourcefieldselector-v1)

{% endcapture %}

{% include templates/task.md %}

@@ -26,6 +26,17 @@ Together, these two ways of exposing Pod and Container fields are called the

{% capture steps %}

+## The Downward API
+
+There are two ways to expose Pod and Container fields to a running Container:
+
+* Environment variables
+* [DownwardAPIVolumeFiles](/docs/resources-reference/v1.5/#downwardapivolumefile-v1)
+
+Together, these two ways of exposing Pod and Container fields are called the
+*Downward API*.

## Using Pod fields as values for environment variables

In this exercise, you create a Pod that has one Container. Here is the

@@ -161,3 +172,4 @@ The output shows the values of selected environment variables:


{% include templates/task.md %}

@@ -12,6 +12,8 @@ single thing, typically by giving a short sequence of steps.
* [Defining a Command and Arguments for a Container](/docs/tasks/configure-pod-container/define-command-argument-container/)
* [Assigning CPU and RAM Resources to a Container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/)
* [Configuring a Pod to Use a Volume for Storage](/docs/tasks/configure-pod-container/configure-volume-storage/)
* [Exposing Pod Information to Containers Through Environment Variables](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/)
+* [Exposing Pod Information to Containers Using a DownwardAPIVolumeFile](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)
* [Distributing Credentials Securely](/docs/tasks/configure-pod-container/distribute-credentials-secure/)
* [Pulling an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry/)
* [Configuring Liveness and Readiness Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)

@@ -55,3 +57,4 @@ single thing, typically by giving a short sequence of steps.

If you would like to write a task page, see
[Creating a Documentation Pull Request](/docs/contribute/create-pull-request/).

@@ -0,0 +1,148 @@
---
assignees:
- caesarxuchao
- mikedanese
title: Getting a Shell to a Running Container
---

{% capture overview %}

This page shows how to use `kubectl exec` to get a shell to a
running Container.

{% endcapture %}


{% capture prerequisites %}

{% include task-tutorial-prereqs.md %}

{% endcapture %}


{% capture steps %}

## Getting a shell to a Container

In this exercise, you create a Pod that has one Container. The Container
runs the nginx image. Here is the configuration file for the Pod:

{% include code.html language="yaml" file="shell-demo.yaml" ghlink="/docs/tasks/kubectl/shell-demo.yaml" %}

Create the Pod:

```shell
kubectl create -f https://k8s.io/docs/tasks/kubectl/shell-demo.yaml
```

Verify that the Container is running:

```shell
kubectl get pod shell-demo
```

Get a shell to the running Container:

```shell
kubectl exec -it shell-demo -- /bin/bash
```

In your shell, list the running processes:

```shell
root@shell-demo:/# ps aux
```

In your shell, list the nginx processes:

```shell
root@shell-demo:/# ps aux | grep nginx
```

In your shell, experiment with other commands. Here are
some examples:

```shell
root@shell-demo:/# ls /
root@shell-demo:/# cat /proc/mounts
root@shell-demo:/# cat /proc/1/maps
root@shell-demo:/# apt-get update
root@shell-demo:/# apt-get install tcpdump
root@shell-demo:/# tcpdump
root@shell-demo:/# apt-get install lsof
root@shell-demo:/# lsof
```

## Writing the root page for nginx

Look again at the configuration file for your Pod. The Pod
has an `emptyDir` volume, and the Container mounts the volume
at `/usr/share/nginx/html`.

In your shell, create an `index.html` file in the `/usr/share/nginx/html`
directory:

```shell
root@shell-demo:/# echo Hello shell demo > /usr/share/nginx/html/index.html
```

In your shell, send a GET request to the nginx server:

```shell
root@shell-demo:/# apt-get update
root@shell-demo:/# apt-get install curl
root@shell-demo:/# curl localhost
```

The output shows the text that you wrote to the `index.html` file:

```shell
Hello shell demo
```

When you are finished with your shell, enter `exit`.

## Running individual commands in a Container

In an ordinary command window, not your shell, list the environment
variables in the running Container:

```shell
kubectl exec shell-demo env
```

Experiment with running other commands. Here are some examples:

```shell
kubectl exec shell-demo ps aux
kubectl exec shell-demo ls /
kubectl exec shell-demo cat /proc/1/mounts
```

{% endcapture %}

{% capture discussion %}

## Opening a shell when a Pod has more than one Container

If a Pod has more than one Container, use `--container` or `-c` to
specify a Container in the `kubectl exec` command. For example,
suppose you have a Pod named my-pod, and the Pod has two containers
named main-app and helper-app. The following command would open a
shell to the main-app Container.

```shell
kubectl exec -it my-pod --container main-app -- /bin/bash
```

{% endcapture %}


{% capture whatsnext %}

* [kubectl exec](/docs/user-guide/kubectl/v1.5/#exec)

{% endcapture %}


{% include templates/task.md %}

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html

@@ -580,7 +580,7 @@ env:
      key: purge.interval
```

-The entry point of the container invokes a bash script, `zkConfig.sh`, prior to
+The entry point of the container invokes a bash script, `zkGenConfig.sh`, prior to
launching the ZooKeeper server process. This bash script generates the
ZooKeeper configuration files from the supplied environment variables.

@@ -653,7 +653,7 @@ ZK_LOG_DIR=/var/log/zookeeper

### Configuring Logging

-One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging.
+One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default,
it uses a time and size based rolling file appender for its logging configuration.
Get the logging configuration from one of the Pods in the `zk` StatefulSet.

@@ -579,6 +579,7 @@ Kubernetes marks a Deployment as _complete_ when it has the following characteri
  equals or exceeds the number required by the Deployment strategy.
* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
  updates you've requested have been completed.
+* No old pods for the Deployment are running.

You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed successfully, `kubectl rollout status` returns a zero exit code.
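
For example (a sketch; `nginx-deployment` is a hypothetical Deployment name):

```shell
$ kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
$ echo $?
0
```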

@@ -616,7 +617,7 @@ the Deployment's `status.conditions`:
* Status=False
* Reason=ProgressDeadlineExceeded

-See the [Kubernetes API conventions](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/devel/api-conventions.md#typical-status-properties) for more information on status conditions.
+See the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#typical-status-properties) for more information on status conditions.

Note that in version 1.5, Kubernetes will take no action on a stalled Deployment other than to report a status condition with
`Reason=ProgressDeadlineExceeded`.

@@ -726,7 +727,7 @@ As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, a
`metadata` fields. For general information about working with config files,
see [deploying applications](/docs/user-guide/deploying-applications), [configuring containers](/docs/user-guide/configuring-containers), and [using kubectl to manage resources](/docs/user-guide/working-with-resources) documents.

-A Deployment also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
+A Deployment also needs a [`.spec` section](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status).

### Pod Template

@@ -5,137 +5,6 @@ assignees:
title: Using the Downward API to Convey Pod Properties
---

-It is sometimes useful for a container to have information about itself, but we
-want to be careful not to over-couple containers to Kubernetes. The downward
-API allows containers to consume information about themselves or the system and
-expose that information how they want it, without necessarily coupling to the
-Kubernetes client or REST API.
+{% include user-guide-content-moved.md %}

-An example of this is a "legacy" app that is already written assuming
-that a particular environment variable will hold a unique identifier. While it
-is often possible to "wrap" such applications, this is tedious and error prone,
-and violates the goal of low coupling. Instead, the user should be able to use
-the Pod's name, for example, and inject it into this well-known variable.

-## Capabilities

-The following information is available to a `Pod` through the downward API:

-* The node's name
-* The pod's name
-* The pod's namespace
-* The pod's IP
-* The pod's service account name
-* A container's cpu limit
-* A container's cpu request
-* A container's memory limit
-* A container's memory request

-More information will be exposed through this same API over time.

-## Exposing pod information into a container

-Containers consume information from the downward API using environment
-variables or using a volume plugin.

-## Environment variables

-Most environment variables in the Kubernetes API use the `value` field to carry
-simple values. However, the alternate `valueFrom` field allows you to specify
-a `fieldRef` to select fields from the pod's definition, and a `resourceFieldRef`
-to select fields from one of its container's definition.

-The `fieldRef` field is a structure that has an `apiVersion` field and a `fieldPath`
-field. The `fieldPath` field is an expression designating a field of the pod. The
-`apiVersion` field is the version of the API schema that the `fieldPath` is
-written in terms of. If the `apiVersion` field is not specified it is
-defaulted to the API version of the enclosing object.

-The `fieldRef` is evaluated and the resulting value is used as the value for
-the environment variable. This allows users to publish their pod's name in any
-environment variable they want.

-The `resourceFieldRef` is a structure that has a `containerName` field, a `resource`
-field, and a `divisor` field. The `containerName` is the name of a container,
-whose resource (cpu or memory) information is to be exposed. The `containerName` is
-optional for environment variables and defaults to the current container. The
-`resource` field is an expression designating a resource in a container, and the `divisor`
-field specifies an output format of the resource being exposed. If the `divisor`
-is not specified, it defaults to "1" for cpu and memory. The table shows possible
-values for cpu and memory resources for `resource` and `divisor` settings:

-| Setting  | Cpu                      | Memory |
-| -------- | ------------------------ | ------ |
-| resource | limits.cpu, requests.cpu | limits.memory, requests.memory |
-| divisor  | 1(cores), 1m(millicores) | 1(bytes), 1k(kilobytes), 1M(megabytes), 1G(gigabytes), 1T(terabytes), 1P(petabytes), 1E(exabytes), 1Ki(kibibyte), 1Mi(mebibyte), 1Gi(gibibyte), 1Ti(tebibyte), 1Pi(pebibyte), 1Ei(exbibyte) |

-### Example

-This is an example of a pod that consumes its name and namespace via the
-downward API:

-{% include code.html language="yaml" file="dapi-pod.yaml" ghlink="/docs/user-guide/downward-api/dapi-pod.yaml" %}

-This is an example of a pod that consumes its container's resources via the downward API:

-{% include code.html language="yaml" file="dapi-container-resources.yaml" ghlink="/docs/user-guide/downward-api/dapi-container-resources.yaml" %}

-## Downward API volume

-Using a similar syntax it's possible to expose pod information to containers using plain text files.
-Downward API are dumped to a mounted volume. This is achieved using a `downwardAPI`
-volume type and the different items represent the files to be created. `fieldPath` references the field to be exposed.
-For exposing a container's resources limits and requests, `containerName` must be specified with `resourceFieldRef`.

-Downward API volume permits to store more complex data like [`metadata.labels`](/docs/user-guide/labels) and [`metadata.annotations`](/docs/user-guide/annotations). Currently key/value pair set fields are saved using `key="value"` format:

-```conf
-key1="value1"
-key2="value2"
-```

-In future, it will be possible to specify an output format option.

-Downward API volumes can expose:

-* The node's name
-* The pod's name
-* The pod's namespace
-* The pod's labels
-* The pod's annotations
-* The pod's service account name
-* A container's cpu limit
-* A container's cpu request
-* A container's memory limit
-* A container's memory request

-The downward API volume refreshes its data in step with the kubelet refresh loop. When labels will be modifiable on the fly without respawning the pod containers will be able to detect changes through mechanisms such as [inotify](https://en.wikipedia.org/wiki/Inotify).

-In future, it will be possible to specify a specific annotation or label.

-#### Projecting keys to specific paths and file permissions

-You can project keys to specific paths and specific permissions on a per-file
-basis. The [Secrets](/docs/user-guide/secrets/) user guide explains the syntax.

-### Example

-This is an example of a pod that consumes its labels and annotations via the downward API volume, labels and annotations are dumped in `/etc/labels` and in `/etc/annotations`, respectively:

-{% include code.html language="yaml" file="volume/dapi-volume.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume.yaml" %}

-This is an example of a pod that consumes its container's resources via the downward API volume.

-{% include code.html language="yaml" file="volume/dapi-volume-resources.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml" %}

-For a more thorough example, see
-[environment variables](/docs/user-guide/environment-guide/).

-## Default values for container resource limits

-If cpu and memory limits are not specified for a container, the downward API will default to the node allocatable value for cpu and memory.
+[Exposing Pod Information to Containers Using a DownwardAPIVolumeFile](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)

@@ -2,118 +2,6 @@
title: Downward API Volumes
---

Following this example, you will create a pod with a downward API volume.
A downward API volume is a Kubernetes volume plugin with the ability to save some pod information in a plain text file. The pod information can be, for example, some [metadata](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#metadata) or a container's [resources](/docs/user-guide/compute-resources).
{% include user-guide-content-moved.md %}

Supported metadata fields:

1. `metadata.annotations`
2. `metadata.namespace`
3. `metadata.name`
4. `metadata.labels`

Supported container resources:

1. `limits.cpu`
2. `limits.memory`
3. `requests.cpu`
4. `requests.memory`

### Step Zero: Prerequisites

This example assumes you have a Kubernetes cluster installed and running, and that the `kubectl` command line tool is somewhere in your path. Please see the [getting started guides](/docs/getting-started-guides/) for installation instructions for your platform.

### Step One: Create the pod

Use the [dapi-volume.yaml](/docs/user-guide/downward-api/volume/dapi-volume.yaml) file to create a Pod with a downward API volume which stores pod labels and pod annotations to `/etc/labels` and `/etc/annotations` respectively.

```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
```

### Step Two: Examine pod/container output

The pod displays (every 5 seconds) the content of the dump files, which can be viewed via the usual `kubectl logs` command:

```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"
build="two"
builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
```

### Internals

In the pod's `/etc` directory one may find the file created by the plugin (system files elided):

```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
/etc:
total 4
drwxrwxrwt    3 0      0       120 Jun  1 19:55 .
drwxr-xr-x   17 0      0      4096 Jun  1 19:55 ..
drwxr-xr-x    2 0      0        80 Jun  1 19:55 ..6986_01_06_15_55_10.473583074
lrwxrwxrwx    1 0      0        31 Jun  1 19:55 ..data -> ..6986_01_06_15_55_10.473583074
lrwxrwxrwx    1 0      0        18 Jun  1 19:55 annotations -> ..data/annotations
lrwxrwxrwx    1 0      0        13 Jun  1 19:55 labels -> ..data/labels

/etc/..6986_01_06_15_55_10.473583074:
total 8
drwxr-xr-x    2 0      0        80 Jun  1 19:55 .
drwxrwxrwt    3 0      0       120 Jun  1 19:55 ..
-rw-r--r--    1 0      0       129 Jun  1 19:55 annotations
-rw-r--r--    1 0      0        59 Jun  1 19:55 labels
/ #
```

The file `labels` is stored in a temporary directory (`..6986_01_06_15_55_10.473583074` in the example above) which is symlinked to by `..data`. The symlinks for annotations and labels in `/etc` point to the files containing the actual metadata through the `..data` indirection. This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..data` symlink is updated atomically using `rename(2)`.
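A consumer that wants to react to refreshes can therefore watch the `..data` symlink rather than the individual files. A rough sketch, assuming the mount layout shown above:

```shell
# Hypothetical sketch: poll the ..data symlink target and reread labels on change.
previous=""
while true; do
  current="$(readlink /etc/..data)"
  if [ "$current" != "$previous" ]; then
    echo "metadata refreshed, rereading labels"
    cat /etc/labels
    previous="$current"
  fi
  sleep 5
done
```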

## Example of downward API volume with container resources

Use the `docs/user-guide/downward-api/volume/dapi-volume-resources.yaml` file to create a Pod with a downward API volume which stores its container's limits and requests in `/etc`.

```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume-resources.yaml
```

### Examine pod/container output

In the pod's `/etc` directory one may find the files created by the plugin:

```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -alR /etc
/etc:
total 4
drwxrwxrwt    3 0      0       160 Jun  1 19:47 .
drwxr-xr-x   17 0      0      4096 Jun  1 19:48 ..
drwxr-xr-x    2 0      0       120 Jun  1 19:47 ..6986_01_06_15_47_23.076909525
lrwxrwxrwx    1 0      0        31 Jun  1 19:47 ..data -> ..6986_01_06_15_47_23.076909525
lrwxrwxrwx    1 0      0        16 Jun  1 19:47 cpu_limit -> ..data/cpu_limit
lrwxrwxrwx    1 0      0        18 Jun  1 19:47 cpu_request -> ..data/cpu_request
lrwxrwxrwx    1 0      0        16 Jun  1 19:47 mem_limit -> ..data/mem_limit
lrwxrwxrwx    1 0      0        18 Jun  1 19:47 mem_request -> ..data/mem_request

/etc/..6986_01_06_15_47_23.076909525:
total 16
drwxr-xr-x    2 0      0       120 Jun  1 19:47 .
drwxrwxrwt    3 0      0       160 Jun  1 19:47 ..
-rw-r--r--    1 0      0         1 Jun  1 19:47 cpu_limit
-rw-r--r--    1 0      0         1 Jun  1 19:47 cpu_request
-rw-r--r--    1 0      0         8 Jun  1 19:47 mem_limit
-rw-r--r--    1 0      0         8 Jun  1 19:47 mem_request

/ # cat /etc/cpu_limit
1
/ # cat /etc/mem_limit
67108864
/ # cat /etc/cpu_request
1
/ # cat /etc/mem_request
33554432
```

[Exposing Pod Information to Containers Using a DownwardAPIVolumeFile](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)

@@ -5,70 +5,6 @@ assignees:
title: Running Commands in a Container with kubectl exec
---

Developers can use `kubectl exec` to run commands in a container. This guide demonstrates several use cases.
{% include user-guide-content-moved.md %}

## Using kubectl exec to check the environment variables of a container

Kubernetes exposes [services](/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.

We first create a pod and a service:

```shell
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```

Wait until the pod is Running and Ready:

```shell
$ kubectl get pod
NAME                 READY     REASON    RESTARTS   AGE
redis-master-ft9ex   1/1       Running   0          12s
```

Then we can check the environment variables of the pod:

```shell
$ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
```

We can use these environment variables in applications to find the service.
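For example, the injected variables are enough to reach the service from inside the container. A rough sketch, assuming the image ships `redis-cli` (the expected `PONG` reply is illustrative):

```shell
# Hypothetical check: use the injected service variables to ping the Redis service.
$ kubectl exec redis-master-ft9ex -- sh -c 'redis-cli -h $REDIS_MASTER_SERVICE_HOST -p $REDIS_MASTER_SERVICE_PORT ping'
PONG
```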

## Using kubectl exec to check the mounted volumes

It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at `/data/redis`:

```shell
$ kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```

Wait until the pod is Running and Ready:

```shell
$ kubectl get pods
NAME      READY     REASON    RESTARTS   AGE
storage   1/1       Running   0          1m
```

We then use `kubectl exec` to verify that the volume is mounted at `/data/redis`:

```shell
$ kubectl exec storage ls /data
redis
```

## Using kubectl exec to open a bash terminal in a pod

Finally, opening a terminal in a pod is the most direct way to introspect the pod. Assuming the `storage` pod is still running, run:

```shell
$ kubectl exec -ti storage -- bash
root@storage:/data#
```

This gets you a terminal.

[Getting a Shell to a Running Container](/docs/tasks/kubectl/get-shell-running-container/)

@@ -32,7 +32,7 @@ kubectl apply -f FILENAME
# Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx.
kubectl apply --prune -f manifest.yaml -l app=nginx

# Apply the configuration in manifest.yaml and delete all the other configmaps that are not in the file.
# Apply the configuration in manifest.yaml and delete all the other configmaps with the same label key that are not in the file.
kubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap
```

@@ -4,168 +4,6 @@ assignees:
title: The Lifecycle of a Pod
---

Updated: 4/14/2015

This document covers the lifecycle of a pod. It is not an exhaustive document, but an introduction to the topic.

## Pod Phase

Consistent with the overall [API convention](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#typical-status-properties), phase is a simple, high-level summary of where the pod is in its lifecycle. It is not intended to be a comprehensive rollup of observations of container-level or even pod-level conditions or other state, nor is it intended to be a comprehensive state machine.

The number and meanings of `PodPhase` values are tightly guarded. Other than what is documented here, nothing should be assumed about pods with a given `PodPhase`.

* Pending: The pod has been accepted by the system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
* Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
* Succeeded: All containers in the pod have terminated in success, and will not be restarted.
* Failed: All containers in the pod have terminated, and at least one container has terminated in failure (exited with non-zero exit status or was terminated by the system).
* Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.

## Pod Conditions

A pod containing containers that specify readiness probes will also report the Ready condition. Condition status values may be `True`, `False`, or `Unknown`.

## Container Probes

A [Probe](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Probe) is a diagnostic performed periodically by the kubelet on a container. Specifically, the diagnostic is one of three [Handlers](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler):

* `ExecAction`: executes a specified command inside the container, expecting on success that the command exits with status code 0.
* `TCPSocketAction`: performs a TCP check against the container's IP address on a specified port, expecting on success that the port is open.
* `HTTPGetAction`: performs an HTTP GET against the container's IP address on a specified port and path, expecting on success that the response has a status code greater than or equal to 200 and less than 400.

Each probe will have one of three results:

* `Success`: indicates that the container passed the diagnostic.
* `Failure`: indicates that the container failed the diagnostic.
* `Unknown`: indicates that the diagnostic itself failed, so no action should be taken.

The kubelet can optionally perform and react to two kinds of probes on running containers:

* `LivenessProbe`: indicates whether the container is *live*, i.e. running. If the LivenessProbe fails, the kubelet will kill the container and the container will be subjected to its [RestartPolicy](#restartpolicy). The default state of Liveness before the initial delay is `Success`. The state of Liveness for a container when no probe is provided is assumed to be `Success`.
* `ReadinessProbe`: indicates whether the container is *ready* to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod's IP address from the endpoints of all services that match the pod. The default state of Readiness before the initial delay is `Failure`. The state of Readiness for a container when no probe is provided is assumed to be `Success`.
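As a minimal sketch of the two probe kinds side by side (the pod name, endpoint path, and readiness marker file are assumptions, not taken from the examples below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probes-sketch              # hypothetical name, for illustration only
spec:
  containers:
    - name: app
      image: gcr.io/google_containers/liveness
      args: ["/server"]
      livenessProbe:               # failure causes the kubelet to kill the container
        httpGet:
          path: /healthz           # assumed health endpoint
          port: 8080
        initialDelaySeconds: 15
        timeoutSeconds: 1
      readinessProbe:              # failure only removes the pod from service endpoints
        exec:
          command: ["cat", "/tmp/ready"]   # assumed readiness marker file
        initialDelaySeconds: 5
```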

### When should I use liveness or readiness probes?

If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the RestartPolicy when the process crashes.

If you'd like your container to be killed and restarted when a probe fails, then specify a LivenessProbe and a RestartPolicy of `Always` or `OnFailure`.

If you'd like to start sending traffic to a pod only when a probe succeeds, specify a ReadinessProbe. In this case, the ReadinessProbe may be the same as the LivenessProbe, but the existence of the ReadinessProbe in the spec means that the pod will start without receiving any traffic and only start receiving traffic once the probe starts succeeding.

If a container wants the ability to take itself down for maintenance, you can specify a ReadinessProbe that checks an endpoint specific to readiness, which is different from the LivenessProbe.

Note that if you just want to be able to drain requests when the pod is deleted, you do not necessarily need a ReadinessProbe; on deletion, the pod automatically puts itself into an unready state regardless of whether the ReadinessProbe exists, while it waits for the containers in the pod to stop.

## Container Statuses

More detailed information about the current (and previous) container statuses can be found in [ContainerStatuses](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#PodStatus). The information reported depends on the current [ContainerState](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#ContainerState), which may be Waiting, Running, or Terminated.

## RestartPolicy

The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the kubelet on the same node. Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (in multiples of the sync frequency: 0, 1x, 2x, 4x, 8x, ...), capped at 5 minutes; the delay is reset after 10 minutes of successful execution. As discussed in the [pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.

Three types of controllers are currently available:

- Use a [`Job`](/docs/user-guide/jobs/) for pods which are expected to terminate (e.g. batch computations).
- Use a [`ReplicationController`](/docs/user-guide/replication-controller/) or [`Deployment`](/docs/user-guide/deployments/)
  for pods which are not expected to terminate (e.g. web servers).
- Use a [`DaemonSet`](/docs/admin/daemons/) for pods which need to run one per machine because they provide a
  machine-specific system service.

If you are unsure whether to use a ReplicationController or a DaemonSet, see [Daemon Set versus
Replication Controller](/docs/admin/daemons/#daemon-set-versus-replication-controller).

`ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`.
`Job` is *only* appropriate for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
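To illustrate the pairing, here is a minimal sketch of a Job whose pods may be retried on failure; the name and command are placeholders, not part of this guide's examples:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-sketch                # hypothetical name, for illustration only
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(1000)"]
      # OnFailure (or Never) is required for Jobs; Always would conflict
      # with the Job's run-to-completion semantics.
      restartPolicy: OnFailure
```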

All three types of controllers contain a PodTemplate, which has all the same fields as a Pod.
It is recommended to create the appropriate controller and let it create pods, rather than
directly creating pods yourself, because pods alone are not resilient to machine failures,
but controllers are.

## Pod lifetime

In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`, or another controller. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.

If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the NodeController for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`.

## Examples

### Advanced livenessProbe example

Liveness probes are executed by the `kubelet`, so all requests are made within the kubelet's network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: gcr.io/google_containers/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
```

### Example states

* Pod is `Running`, 1 container, container exits success
  * Log completion event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: pod becomes `Succeeded`
    * Never: pod becomes `Succeeded`

* Pod is `Running`, 1 container, container exits failure
  * Log failure event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: restart container, pod stays `Running`
    * Never: pod becomes `Failed`

* Pod is `Running`, 2 containers, container 1 exits failure
  * Log failure event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: restart container, pod stays `Running`
    * Never: pod stays `Running`
  * When container 2 exits...
    * Log failure event
    * If RestartPolicy is:
      * Always: restart container, pod stays `Running`
      * OnFailure: restart container, pod stays `Running`
      * Never: pod becomes `Failed`

* Pod is `Running`, container becomes OOM
  * Container terminates in failure
  * Log OOM event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: restart container, pod stays `Running`
    * Never: log failure event, pod becomes `Failed`

* Pod is `Running`, a disk dies
  * All containers are killed
  * Log appropriate event
  * Pod becomes `Failed`
  * If running under a controller, pod will be recreated elsewhere

* Pod is `Running`, its node is segmented out
  * NodeController waits for timeout
  * NodeController marks pod `Failed`
  * If running under a controller, pod will be recreated elsewhere

{% include user-guide-content-moved.md %}

[Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/)

@@ -375,41 +375,6 @@ However, it is using its local ttl-based cache for getting the current value of
As a result, the total delay from the moment when the secret is updated to the moment when new keys are
projected to the pod can be as long as the kubelet sync period plus the TTL of the secrets cache in the kubelet.

#### Optional Secrets as Files from a Pod

Volumes and files provided by a Secret can also be marked as optional.
The Secret or the key within a Secret does not have to exist. The mount path for
such items will always be created.

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "myns"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo"
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "defaultMode": 256,
        "optional": true
      }
    }]
  }
}
```

#### Using Secrets as Environment Variables

To use a secret in an environment variable in a pod:

@@ -456,30 +421,6 @@ $ echo $SECRET_PASSWORD
1f2d1e2e67df
```

#### Optional Secrets from Environment Variables

You may not want to require all your secrets to exist. They can be marked as
optional, as shown in the pod below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: OPTIONAL_SECRET
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
          optional: true
  restartPolicy: Never
```

#### Using imagePullSecrets

An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry

@@ -511,8 +452,7 @@ can be automatically attached to pods based on their service account.

Secret volume sources are validated to ensure that the specified object
reference actually points to an object of type `Secret`. Therefore, a secret
needs to be created before any pods that depend on it, unless it is marked as
optional.
needs to be created before any pods that depend on it.

Secret API objects reside in a namespace. They can only be referenced by pods
in that same namespace.
@@ -532,12 +472,12 @@ not common ways to create pods.)

When a pod is created via the API, there is no check whether a referenced
secret exists. Once a pod is scheduled, the kubelet will try to fetch the
secret value. If a required secret cannot be fetched because it does not
exist or because of a temporary lack of connection to the API server, the
kubelet will periodically retry. It will report an event about the pod
explaining the reason it is not started yet. Once the secret is fetched, the
kubelet will create and mount a volume containing it. None of the pod's
containers will start until all the pod's volumes are mounted.
secret value. If the secret cannot be fetched because it does not exist or
because of a temporary lack of connection to the API server, kubelet will
periodically retry. It will report an event about the pod explaining the
reason it is not started yet. Once the secret is fetched, the kubelet will
create and mount a volume containing it. None of the pod's containers will
start until all the pod's volumes are mounted.

## Use cases

@@ -9,7 +9,7 @@ title: Kubernetes 101

For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers.

In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
In order for the kubectl usage examples to work, make sure you have an example directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).

* TOC
{:toc}