Merge branch 'master' into patch-7
commit fd9ff4c4e0

.travis.yml (21 changed lines)

@@ -7,16 +7,33 @@ install:
   - export PATH=$GOPATH/bin:$PATH
   - mkdir -p $HOME/gopath/src/k8s.io
   - mv $TRAVIS_BUILD_DIR $HOME/gopath/src/k8s.io/kubernetes.github.io
 
+  # (1) Fetch dependencies for us to run the tests in test/examples_test.go
   - go get -t -v k8s.io/kubernetes.github.io/test
-  - git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
-  - go get -t -v k8s.io/md-check
+  # The dependencies are complicated for test/examples_test.go
+  # k8s.io/kubernetes/pkg is a dependency, which in turn depends on apimachinery
+  # but we also have apimachinery directly as one of our dependencies, which causes a conflict.
+  # Additionally, we get symlinks when we clone the directory. The below steps do the following:
+
+  # (a) Replace the symlink with the actual dependencies from kubernetes/staging/src/
+  # (b) copy all the vendored files to $GOPATH/src
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apiserver
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/client-go
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/sample-apiserver
+  - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator
   - cp -r $GOPATH/src/k8s.io/kubernetes/vendor/* $GOPATH/src/
   - rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/*
   - cp -r $GOPATH/src/k8s.io/kubernetes/staging/src/* $GOPATH/src/
+  - cp -r $GOPATH/src/k8s.io/apimachinery/vendor/* $GOPATH/src/
+  - rm -rf $GOPATH/src/k8s.io/apimachinery/vendor/*
+
+  # (2) Fetch md-check along with all its dependencies.
+  - git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
+  - go get -t -v k8s.io/md-check
+
+  # (3) Fetch mungedocs
   - go get -v k8s.io/kubernetes/cmd/mungedocs
 
 script:

@@ -20,14 +20,22 @@ toc:
 - title: Controllers
   section:
   - docs/concepts/abstractions/controllers/statefulsets.md
+  - docs/concepts/abstractions/controllers/garbage-collection.md
 
 - title: Object Metadata
   section:
   - docs/concepts/object-metadata/annotations.md
 
+- title: Workloads
+  section:
+  - title: Pods
+    section:
+    - docs/concepts/workloads/pods/pod-lifecycle.md
+
 - title: Configuration
   section:
   - docs/concepts/configuration/container-command-args.md
+  - docs/concepts/configuration/manage-compute-resources-container.md
 
 - title: Policies
   section:

@@ -228,5 +228,5 @@ toc:
 - title: Federation Components
   section:
   - docs/admin/federation-apiserver.md
-  - title : federation-controller-mananger
+  - title : federation-controller-manager
     path: /docs/admin/federation-controller-manager

@@ -3,9 +3,10 @@ abstract: "Step-by-step instructions for performing operations with Kubernetes."
 toc:
 - docs/tasks/index.md
 
-- title: Using the Kubectl Command-Line
+- title: Using the kubectl Command-Line
   section:
   - docs/tasks/kubectl/list-all-running-container-images.md
+  - docs/tasks/kubectl/get-shell-running-container.md
 
 - title: Configuring Pods and Containers
   section:

@@ -15,6 +16,7 @@ toc:
   - docs/tasks/configure-pod-container/configure-volume-storage.md
   - docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
   - docs/tasks/configure-pod-container/environment-variable-expose-pod-information.md
+  - docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information.md
   - docs/tasks/configure-pod-container/distribute-credentials-secure.md
   - docs/tasks/configure-pod-container/pull-image-private-registry.md
   - docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md

@@ -3453,7 +3453,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i
 </tr>
 <tr>
 <td class="tableblock halign-left valign-top"><p class="tableblock">nodeSelector</p></td>
-<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection/README">http://kubernetes.io/docs/user-guide/node-selection/README</a></p></td>
+<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
 <td class="tableblock halign-left valign-top"></td>

@@ -4172,7 +4172,7 @@ The resulting set of endpoints can be viewed as:<br>
 </tr>
 <tr>
 <td class="tableblock halign-left valign-top"><p class="tableblock">nodeSelector</p></td>
-<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection/README">http://kubernetes.io/docs/user-guide/node-selection/README</a></p></td>
+<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
 <td class="tableblock halign-left valign-top"></td>

@@ -85,9 +85,9 @@ See [APPENDIX](#appendix) for how to generate a client cert.
 The API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option on the command line. Currently, tokens last indefinitely, and the token list cannot be
 changed without restarting API server.
 
-The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...`
-and is a csv file with a minimum of 3 columns: token, user name, user uid, followed by
-optional group names. Note, if you have more than one group the column must be double quoted e.g.
+The token file is a csv file with a minimum of 3 columns: token, user name, user uid,
+followed by optional group names. Note, if you have more than one group the column must be
+double quoted e.g.
 
 ```conf
 token,user,uid,"group1,group2,group3"

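For context, a minimal sketch of the whole token-file flow described in this hunk; the file path and token value are hypothetical, and only `--token-auth-file` is taken from the text above:

```shell
# Hypothetical path and token value.
cat > /tmp/known_tokens.csv <<'EOF'
31ada4fd-adec-460c-809a-9e56ceb75269,alice,1001,"group1,group2"
EOF

# Other required API server flags omitted for brevity.
kube-apiserver --token-auth-file=/tmp/known_tokens.csv

# A client presents the token as a standard bearer token:
curl -k https://127.0.0.1:6443/api \
  --header "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269"
```
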
@@ -115,9 +115,9 @@ and the password cannot be changed without restarting API server. Note that basi
 authentication is currently supported for convenience while we finish making the
 more secure modes described above easier to use.
 
-The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...`
-and is a csv file with a minimum of 3 columns: password, user name, user id, followed by
-optional group names. Note, if you have more than one group the column must be double quoted e.g.
+The basic auth file is a csv file with a minimum of 3 columns: password,
+user name, user id, followed by optional group names. Note, if you have more than
+one group the column must be double quoted e.g.
 
 ```conf
 password,user,uid,"group1,group2,group3"

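A parallel sketch for basic auth, under the assumption that the API server of this era reads the file via a `--basic-auth-file` flag (the flag is not named in this hunk); path and credentials are placeholders:

```shell
# Hypothetical path and credentials.
cat > /tmp/basic_auth.csv <<'EOF'
mypassword,bob,1002,"group1"
EOF

# Assumption: --basic-auth-file is the flag that consumes this file.
# Other required API server flags omitted for brevity.
kube-apiserver --basic-auth-file=/tmp/basic_auth.csv

# Clients then authenticate with a standard HTTP Basic header:
curl -k -u bob:mypassword https://127.0.0.1:6443/api
```
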
@@ -20,7 +20,7 @@ Data Reliability: for reasonable safety, either etcd needs to be run as a
 etcd) or etcd's data directory should be located on durable storage (e.g., GCE's
 persistent disk). In either case, if high availability is required--as it might
 be in a production cluster--the data directory ought to be [backed up
-periodically](https://coreos.com/etcd/docs/2.2.1/admin_guide.html#disaster-recovery),
+periodically](https://coreos.com/etcd/docs/latest/op-guide/recovery.html),
 to reduce downtime in case of corruption.
 
 ## Default configuration

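One concrete way to take such a periodic backup, assuming the etcd v3 `etcdctl`; endpoints and paths are placeholders:

```shell
# Snapshot the keyspace to durable storage (etcd v3 API).
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-snapshot.db

# Restoring into a fresh data directory after corruption:
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored
```
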
@@ -1,5 +1,5 @@
 ---
-title: federation-controller-mananger
+title: federation-controller-manager
 notitle: true
 ---
 

@@ -24,9 +24,9 @@ threshold has been met.
 ### Container Collection
 
 The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers any single
-pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting 'MinAge' to zero and setting 'MaxPerPodContainer' and 'MaxContainers' respectively to less than zero.
+pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
 
-Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. 'MaxPerPodContainer' and 'MaxContainer' may potentially conflict with each other in situations where retaining the maximum number of containers per pod ('MaxPerPodContainer') would go outside the allowable range of global dead containers ('MaxContainers'). 'MaxPerPodContainer' would be adjusted in this situation: A worst case scenario would be to downgrade 'MaxPerPodContainer' to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
+Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). `MaxPerPodContainer` would be adjusted in this situation: A worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
 
 Containers that are not managed by kubelet are not subject to container garbage collection.
 

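To make that interplay concrete (hypothetical numbers, not from this commit): with `MaxPerPodContainer` = 2 and `MaxContainers` = 10, twelve pods each retaining 2 dead containers would total 24, exceeding the global cap; the kubelet would then shrink the effective per-pod allowance, in the worst case down to 1, and evict the oldest dead containers until the global limit of 10 is respected.
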
@@ -42,15 +42,34 @@ to free. Default is 80%.
 We also allow users to customize garbage collection policy through the following kubelet flags:
 
 1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
-garbage collected. Default is 1 minute.
+garbage collected. Default is 0 minute, which means any finished container will be garbage collected.
 2. `maximum-dead-containers-per-container`, maximum number of old instances to retain
-per container. Default is 2.
+per container. Default is 1.
 3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
-Default is 100.
+Default is -1, which means there is no global limit.
 
 Containers can potentially be garbage collected before their usefulness has expired. These containers
 can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
-`maximum-dead-containers-per-container` is highly recommended to allow at least 2 dead containers to be
+`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
 retained per expected container. A higher value for `maximum-dead-containers` is also recommended for a
 similar reason.
 See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
+
+
+### Deprecation
+
+Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.
+
+Including:
+
+| Existing Flag | New Flag | Rationale |
+| ------------- | -------- | --------- |
+| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
+| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
+| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
+| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
+| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
+| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
+| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
+
+See [kubelet eviction design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/kubelet-eviction.md) for more details.

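For orientation, a sketch of a kubelet invocation combining the three flags listed above; the values are hypothetical (they happen to mirror the old defaults):

```shell
# Other required kubelet flags omitted for brevity.
kubelet \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100
```
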
@@ -41,7 +41,6 @@ DynamicKubeletConfig=true|false (ALPHA - default=false)
 DynamicVolumeProvisioning=true|false (ALPHA - default=true)
 ExperimentalHostUserNamespaceDefaulting=true|false (ALPHA - default=false)
 StreamingProxyRedirects=true|false (ALPHA - default=false)
-      --google-json-key string                  The Google Cloud Platform Service Account JSON Key to use for authentication.
       --healthz-bind-address ip                 The IP address for the health check server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces) (default 127.0.0.1)
       --healthz-port int32                      The port to bind the health check server. Use 0 to disable. (default 10249)
       --hostname-override string                If non-empty, will use this string as identification instead of the actual hostname.

@@ -36,7 +36,6 @@ DynamicKubeletConfig=true|false (ALPHA - default=false)
 DynamicVolumeProvisioning=true|false (ALPHA - default=true)
 ExperimentalHostUserNamespaceDefaulting=true|false (ALPHA - default=false)
 StreamingProxyRedirects=true|false (ALPHA - default=false)
-      --google-json-key string                       The Google Cloud Platform Service Account JSON Key to use for authentication.
       --hard-pod-affinity-symmetric-weight int       RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of implicit PreferredDuringScheduling affinity rule. (default 1)
       --kube-api-burst int32                         Burst to use while talking with Kubernetes apiserver (default 100)
       --kube-api-content-type string                 Content type of requests sent to apiserver. (default "application/vnd.kubernetes.protobuf")

@@ -31,7 +31,7 @@ server, as well as an additional kubeconfig file for administration.
 controller manager and scheduler, and placing them in
 `/etc/kubernetes/manifests`. The kubelet watches this directory for static
 resources to create on startup. These are the core components of Kubernetes, and
-once they are up and running we can use `kubectl` to set up/manage any
+once they are up and running we can use `kubectl` to set up or manage any
 additional components.
 
 1. kubeadm installs any add-on components, such as DNS or discovery, via the API

@@ -180,49 +180,49 @@ available as configuration file options.
 
 ### Sample Master Configuration
 
 ```yaml
 apiVersion: kubeadm.k8s.io/v1alpha1
 kind: MasterConfiguration
 api:
   advertiseAddresses:
   - <address1|string>
   - <address2|string>
   bindPort: <int>
   externalDNSNames:
   - <dnsname1|string>
   - <dnsname2|string>
 authorizationMode: <string>
 cloudProvider: <string>
 discovery:
   bindPort: <int>
 etcd:
   endpoints:
   - <endpoint1|string>
   - <endpoint2|string>
   caFile: <path|string>
   certFile: <path|string>
   keyFile: <path|string>
 kubernetesVersion: <string>
 networking:
   dnsDomain: <string>
   serviceSubnet: <cidr>
   podSubnet: <cidr>
 secrets:
   givenToken: <token|string>
 ```
 
 ### Sample Node Configuration
 
 ```yaml
 apiVersion: kubeadm.k8s.io/v1alpha1
 kind: NodeConfiguration
 apiPort: <int>
 discoveryPort: <int>
 masterAddresses:
 - <master1>
 secrets:
   givenToken: <token|string>
 ```
 
 ## Automating kubeadm
 

@@ -258,6 +258,8 @@ These environment variables are a short-term solution, eventually they will be i
 | `KUBE_ETCD_IMAGE` | `gcr.io/google_containers/etcd-<arch>:2.2.5` | The etcd container image to use. |
 | `KUBE_REPO_PREFIX` | `gcr.io/google_containers` | The image prefix for all images that are used. |
 
+If you want to use kubeadm with an http proxy, you may need to configure it to support http_proxy, https_proxy, or no_proxy.
+
 ## Releases and release notes
 
 If you already have kubeadm installed and want to upgrade, run `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.

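A minimal sketch of such a proxy setup, using the environment variables named in the added line; the proxy endpoints and exempted ranges are placeholders:

```shell
# Proxy endpoints are placeholders; adjust no_proxy to your cluster's ranges
# so node-to-node and service traffic bypasses the proxy.
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=127.0.0.1,localhost,10.96.0.0/12
kubeadm init
```
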
@@ -24,7 +24,7 @@ each namespace.
 3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
 may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
 the cluster operator may want to set limits that a pod must consume at least 20% of the memory and CPU of their
-average node size in order to provide for more uniform scheduling and to limit waste.
+average node size in order to provide for more uniform scheduling and limit waste.
 
 This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control
 min/max resource limits per pod. In addition, this example demonstrates how you can

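A minimal LimitRange sketch for the scenario above; the object name, namespace, and quantities are hypothetical, not taken from the example this hunk belongs to:

```shell
# Creates per-pod min/max resource bounds in a namespace (placeholder values).
cat <<EOF | kubectl create -f - --namespace=limit-example
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Pod
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
EOF
```
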
@@ -95,7 +95,7 @@ Now that our second scheduler is running, let's create some pods, and direct the
 scheduler in that pod spec. Let's look at three examples.
 
 
-1. Pod spec without any scheduler name
+- Pod spec without any scheduler name
 
   {% include code.html language="yaml" file="multiple-schedulers/pod1.yaml" ghlink="/docs/admin/multiple-schedulers/pod1.yaml" %}
 

@@ -108,7 +108,7 @@ scheduler in that pod spec. Let's look at three examples.
   kubectl create -f pod1.yaml
   ```
 
-2. Pod spec with `default-scheduler`
+- Pod spec with `default-scheduler`
 
   {% include code.html language="yaml" file="multiple-schedulers/pod2.yaml" ghlink="/docs/admin/multiple-schedulers/pod2.yaml" %}
 

@@ -121,7 +121,7 @@ scheduler in that pod spec. Let's look at three examples.
   kubectl create -f pod2.yaml
   ```
 
-3. Pod spec with `my-scheduler`
+- Pod spec with `my-scheduler`
 
   {% include code.html language="yaml" file="multiple-schedulers/pod3.yaml" ghlink="/docs/admin/multiple-schedulers/pod3.yaml" %}
 

@@ -145,7 +145,7 @@ dev
 
 At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
 
-Let's create some content.
+Let's create some contents.
 
 ```shell
 $ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2

@@ -49,7 +49,7 @@ The plugin requires a few things:
 
 * The standard CNI `bridge`, `lo` and `host-local` plugins are required, at minimum version 0.2.0. Kubenet will first search for them in `/opt/cni/bin`. Specify `network-plugin-dir` to supply additional search path. The first found match will take effect.
 * Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
-* Kubelet should also be run with the `--non-masquerade-cidr=<clusterCidr>` argumment to ensure traffic to IPs outside this range will use IP masquerade.
+* Kubelet should also be run with the `--non-masquerade-cidr=<clusterCidr>` argument to ensure traffic to IPs outside this range will use IP masquerade.
 * The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
 
 ### Customizing the MTU (with kubenet)
 

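Putting the requirements from that list together, a sketch of a kubenet-enabled kubelet invocation; the CIDRs are placeholders:

```shell
# Other required kubelet flags omitted for brevity; CIDRs are placeholders.
kubelet \
  --network-plugin=kubenet \
  --non-masquerade-cidr=10.0.0.0/8 \
  --pod-cidr=10.180.1.0/24
```
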
@@ -176,6 +176,9 @@ For self-registration, the kubelet is started with the following options:
 
   - `--kubeconfig=` - Path to credentials to authenticate itself to the apiserver.
   - `--cloud-provider=` - How to talk to a cloud provider to read metadata about itself.
   - `--register-node` - Automatically register with the API server.
+  - `--node-ip` IP address of the node.
+  - `--node-labels` - Labels to add when registering the node in the cluster.
+  - `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
 
 Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies
 its own. (In the future, we plan to only allow a kubelet to modify its own node resource.)

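A sketch of a self-registration invocation combining the flags above; every value is a placeholder:

```shell
# Placeholder values; other required kubelet flags omitted for brevity.
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --register-node=true \
  --node-ip=192.168.100.71 \
  --node-labels=role=myrole \
  --node-status-update-frequency=10s
```
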
@@ -10,11 +10,11 @@ The Salt scripts are shared across multiple hosting providers and depending on w
 
 ## Salt cluster setup
 
-The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
+The **salt-master** service runs on the kubernetes-master [(except on the default GCE and OpenStack-Heat setup)](#standalone-salt-configuration-on-gce-and-others).
 
 The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
 
-Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
+Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE and OpenStack-Heat)](#standalone-salt-configuration-on-gce-and-others).
 
 ```shell
 [root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf

@@ -25,15 +25,15 @@ The salt-master is contacted by each salt-minion and depending upon the machine
 
 If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
 
-## Standalone Salt Configuration on GCE
+## Standalone Salt Configuration on GCE and others
 
-On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
+On GCE and OpenStack, using the Openstack-Heat provider, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
 
-All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
+All remaining sections that refer to master/minion setups should be ignored for GCE and OpenStack. One fallout of this setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
 
 ## Salt security
 
-*(Not applicable on default GCE setup.)*
+*(Not applicable on default GCE and OpenStack-Heat setup.)*
 
 Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
 

@@ -71,8 +71,9 @@ account. To create additional API tokens for a service account, create a secret
 of type `ServiceAccountToken` with an annotation referencing the service
 account, and the controller will update it with a generated token:
 
-```json
 secret.json:
 
+```json
 {
   "kind": "Secret",
   "apiVersion": "v1",

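The hunk cuts the JSON off after `"apiVersion": "v1"`; for orientation, a complete secret of this kind would look roughly as follows. The annotation key `kubernetes.io/service-account.name` and the `kubernetes.io/service-account-token` type are the standard identifiers for this mechanism, while the secret and service account names are placeholders:

```shell
cat > secret.json <<'EOF'
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "mysecretname",
    "annotations": {
      "kubernetes.io/service-account.name": "myserviceaccount"
    }
  },
  "type": "kubernetes.io/service-account-token"
}
EOF
kubectl create -f secret.json
```
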
@@ -100,4 +101,4 @@ kubectl delete secret mysecretname
 ### Service Account Controller
 
 Service Account Controller manages ServiceAccount inside namespaces, and ensures
 a ServiceAccount named "default" exists in every active namespace.

@@ -26,7 +26,7 @@ For example, this is how to start a simple web server as a static pod:
 [joe@host ~] $ ssh my-node1
 ```
 
-2. Choose a directory, say `/etc/kubelet.d` and place a web server pod definition there, e.g. `/etc/kubernetes.d/static-web.yaml`:
+2. Choose a directory, say `/etc/kubelet.d` and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
 
   ```
   [root@my-node1 ~] $ mkdir /etc/kubernetes.d/

@@ -51,7 +51,7 @@ For example, this is how to start a simple web server as a static pod:
 3. Configure your kubelet daemon on the node to use this directory by running it with `--pod-manifest-path=/etc/kubelet.d/` argument.
   On Fedora edit `/etc/kubernetes/kubelet` to include this line:
 
-  ```conf
+  ```
   KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
   ```
 

@@ -73,8 +73,8 @@ When kubelet starts, it automatically starts all pods defined in directory speci
 
 ```shell
 [joe@my-node1 ~] $ docker ps
-CONTAINER ID  IMAGE         COMMAND  CREATED        STATUS        NAMES
+CONTAINER ID  IMAGE         COMMAND  CREATED        STATUS        PORTS     NAMES
 f6d05272b57e  nginx:latest  "nginx"  8 minutes ago  Up 8 minutes  k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
 ```
 
 If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
 

@@ -82,9 +82,9 @@ If we look at our Kubernetes API server (running on host `my-master`), we see th
 ```shell
 [joe@host ~] $ ssh my-master
 [joe@my-master ~] $ kubectl get pods
-POD                  IP          CONTAINER(S)  IMAGE(S)  HOST                     LABELS       STATUS   CREATED     MESSAGE
-static-web-my-node1  172.17.0.3                          my-node1/192.168.100.71  role=myrole  Running  11 minutes
-web                                nginx                                                       Running  11 minutes
+NAME                  READY     STATUS    RESTARTS   AGE
+static-web-my-node1   1/1       Running   0          2m
 ```
 
 Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.
 

@@ -95,8 +95,9 @@ Notice we cannot delete the pod with the API server (e.g. via [`kubectl`](/docs/
 [joe@my-master ~] $ kubectl delete pod static-web-my-node1
 pods/static-web-my-node1
 [joe@my-master ~] $ kubectl get pods
-POD                  IP          CONTAINER(S)  IMAGE(S)  HOST                     ...
-static-web-my-node1  172.17.0.3                          my-node1/192.168.100.71  ...
+NAME                  READY     STATUS    RESTARTS   AGE
+static-web-my-node1   1/1       Running   0          12s
+
 ```
 
 Back to our `my-node1` host, we can try to stop the container manually and see, that kubelet automatically restarts it in a while:
 

@@ -115,11 +116,11 @@ CONTAINER ID IMAGE COMMAND CREATED ...
 Running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.
 
 ```shell
-[joe@my-node1 ~] $ mv /etc/kubernetes.d/static-web.yaml /tmp
+[joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
 [joe@my-node1 ~] $ sleep 20
 [joe@my-node1 ~] $ docker ps
 // no nginx container is running
-[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubernetes.d/
+[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
 [joe@my-node1 ~] $ sleep 20
 [joe@my-node1 ~] $ docker ps
 CONTAINER ID IMAGE COMMAND CREATED ...

@@ -3620,7 +3620,7 @@ The StatefulSet guarantees that a given network identity will always map to the
 </tr>
 <tr>
 <td class="tableblock halign-left valign-top"><p class="tableblock">nodeSelector</p></td>
-<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection/README">http://kubernetes.io/docs/user-guide/node-selection/README</a></p></td>
+<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
 <td class="tableblock halign-left valign-top"></td>

@@ -3609,7 +3609,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i
 </tr>
 <tr>
 <td class="tableblock halign-left valign-top"><p class="tableblock">nodeSelector</p></td>
-<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection/README">http://kubernetes.io/docs/user-guide/node-selection/README</a></p></td>
+<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
 <td class="tableblock halign-left valign-top"></td>

@@ -3457,7 +3457,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i
 </tr>
 <tr>
 <td class="tableblock halign-left valign-top"><p class="tableblock">nodeSelector</p></td>
-<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection/README">http://kubernetes.io/docs/user-guide/node-selection/README</a></p></td>
+<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
 <td class="tableblock halign-left valign-top"></td>

@@ -8010,7 +8010,7 @@ Appears In <a href="#pod-v1">Pod</a> <a href="#podtemplatespec-v1">PodTemplateSp
 </tr>
 <tr>
 <td>nodeSelector <br /> <em>object</em></td>
-<td>NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection/README">http://kubernetes.io/docs/user-guide/node-selection/README</a></td>
+<td>NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></td>
 </tr>
 <tr>
 <td>restartPolicy <br /> <em>string</em></td>

@@ -0,0 +1,110 @@
+---
+title: Garbage Collection
+---
+
+{% capture overview %}
+
+The role of the Kubernetes garbage collector is to delete certain objects
+that once had an owner, but no longer have an owner.
+
+**Note**: Garbage collection is a beta feature and is enabled by default in
+Kubernetes version 1.4 and later.
+
+{% endcapture %}
+
+
+{% capture body %}
+
+## Owners and dependents
+
+Some Kubernetes objects are owners of other objects. For example, a ReplicaSet
+is the owner of a set of Pods. The owned objects are called *dependents* of the
+owner object. Every dependent object has a `metadata.ownerReferences` field that
+points to the owning object.
+
+Sometimes, Kubernetes sets the value of `ownerReference` automatically. For
+example, when you create a ReplicaSet, Kubernetes automatically sets the
+`ownerReference` field of each Pod in the ReplicaSet. You can also specify
+relationships between owners and dependents by manually setting the
+`ownerReference` field.
+
+Here's a configuration file for a ReplicaSet that has three Pods:
+
+{% include code.html language="yaml" file="my-repset.yaml" ghlink="/docs/concepts/abstractions/controllers/my-repset.yaml" %}
+
+If you create the ReplicaSet and then view the Pod metadata, you can see
+the OwnerReferences field:
+
+```shell
+kubectl create -f http://k8s.io/docs/concepts/abstractions/controllers/my-repset.yaml
+kubectl get pods --output=yaml
+```
+
+The output shows that the Pod owner is a ReplicaSet named my-repset:
+
+```shell
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+  ownerReferences:
+  - apiVersion: extensions/v1beta1
+    controller: true
+    kind: ReplicaSet
+    name: my-repset
+    uid: d9607e19-f88f-11e6-a518-42010a800195
+  ...
+```
+
+## Controlling whether the garbage collector deletes dependents
+
+When you delete an object, you can specify whether the object's dependents
+are deleted automatically. Deleting dependents automatically is called
+*cascading deletion*. If you delete an object without deleting its
+dependents automatically, the dependents are said to be *orphaned*.
+
+To delete dependent objects automatically, set the `orphanDependents` query
+parameter to false in your request to delete the owner object.
+
+To orphan the dependents of an owner object, set the `orphanDependents` query
+parameter to true in your request to delete the owner object.
+
+The default value for `orphanDependents` is true. So unless you specify
+otherwise, dependent objects are orphaned.
+
+Here's an example that deletes dependents automatically:
+
+```shell
+kubectl proxy --port=8080
+curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset?orphanDependents=false
+```
+
+To delete dependents automatically using kubectl, set `--cascade` to true.
+To orphan dependents, set `--cascade` to false. The default value for
+`--cascade` is true.
+
+Here's an example that orphans the dependents of a ReplicaSet:
+
+```shell
+kubectl delete replicaset my-repset --cascade=false
+```
+
+## Ongoing development
+
+In Kubernetes version 1.5, synchronous garbage collection is under active
+development. See the tracking
+[issue](https://github.com/kubernetes/kubernetes/issues/29891) for more details.
+
+{% endcapture %}
+
+
+{% capture whatsnext %}
+
+[Design Doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/garbage-collection.md)
+
+[Known issues](https://github.com/kubernetes/kubernetes/issues/26120)
+
+{% endcapture %}
+
+
+{% include templates/concept.md %}

@@ -0,0 +1,17 @@
+apiVersion: extensions/v1beta1
+kind: ReplicaSet
+metadata:
+  name: my-repset
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      pod-is-for: garbage-collection-example
+  template:
+    metadata:
+      labels:
+        pod-is-for: garbage-collection-example
+    spec:
+      containers:
+      - name: nginx
+        image: nginx

@ -95,6 +95,98 @@ Here are some ideas for how to use Init Containers:
|
||||||
More detailed usage examples can be found in the [StatefulSets documentation](/docs/concepts/abstractions/controllers/statefulsets/)
|
More detailed usage examples can be found in the [StatefulSets documentation](/docs/concepts/abstractions/controllers/statefulsets/)
|
||||||
and the [Production Pods guide](/docs/user-guide/production-pods.md#handling-initialization).
|
and the [Production Pods guide](/docs/user-guide/production-pods.md#handling-initialization).
|
||||||
|
|
||||||
|
### Init Containers in use
|
||||||
|
|
||||||
|
The following yaml file outlines a simple Pod which has two Init Containers.
|
||||||
|
The first waits for `myservice` and the second waits for `mydb`. Once both
|
||||||
|
containers complete the Pod will begin.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Pod
|
||||||
|
metadata:
|
||||||
|
name: myapp-pod
|
||||||
|
labels:
|
||||||
|
app: myapp
|
||||||
|
annotations:
|
||||||
|
pod.beta.kubernetes.io/init-containers: '[
|
||||||
|
{
|
||||||
|
"name": "init-myservice",
|
||||||
|
"image": "busybox",
|
||||||
|
"command": ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "init-mydb",
|
||||||
|
"image": "busybox",
|
||||||
|
"command": ["sh", "-c", "until nslookup mydb; do echo waiting for mydb; sleep 2; done;"]
|
||||||
|
}
|
||||||
|
]'
|
||||||
|
spec:
|
||||||
|
containers:
|
||||||
|
- name: myapp-container
|
||||||
|
image: busybox
|
||||||
|
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
|
||||||
|
```
|
||||||
|
|
||||||
|
This Pod can be started and debugged with the following commands:

```shell
$ kubectl create -f myapp.yaml
pod "myapp-pod" created
$ kubectl get -f myapp.yaml
NAME        READY     STATUS     RESTARTS   AGE
myapp-pod   0/1       Init:0/2   0          6m
$ kubectl describe -f myapp.yaml
Name:          myapp-pod
Namespace:     default
[...]
Labels:        app=myapp
Status:        Pending
[...]
Init Containers:
  init-myservice:
[...]
    State:         Running
[...]
  init-mydb:
[...]
    State:         Running
[...]
Containers:
  myapp-container:
[...]
    State:         Waiting
      Reason:      PodInitializing
    Ready:         False
[...]
Events:
  FirstSeen    LastSeen    Count    From                      SubObjectPath                          Type      Reason     Message
  ---------    --------    -----    ----                      -------------                          --------  ------     -------
  16s          16s         1        {default-scheduler }                                             Normal    Scheduled  Successfully assigned myapp-pod to 172.17.4.201
  16s          16s         1        {kubelet 172.17.4.201}    spec.initContainers{init-myservice}    Normal    Pulling    pulling image "busybox"
  13s          13s         1        {kubelet 172.17.4.201}    spec.initContainers{init-myservice}    Normal    Pulled     Successfully pulled image "busybox"
  13s          13s         1        {kubelet 172.17.4.201}    spec.initContainers{init-myservice}    Normal    Created    Created container with docker id 5ced34a04634; Security:[seccomp=unconfined]
  13s          13s         1        {kubelet 172.17.4.201}    spec.initContainers{init-myservice}    Normal    Started    Started container with docker id 5ced34a04634
$ kubectl logs myapp-pod -c init-myservice # Inspect the first init container
$ kubectl logs myapp-pod -c init-mydb      # Inspect the second init container
```
Once we start the `mydb` and `myservice` Services, we can see the Init Containers
complete and the `myapp-pod` move into the Running state:

```shell
$ kubectl create -f services.yaml
service "myservice" created
service "mydb" created
$ kubectl get -f myapp.yaml
NAME        READY     STATUS    RESTARTS   AGE
myapp-pod   1/1       Running   0          9m
```
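For reference, a minimal `services.yaml` might look like the following sketch.
Any Services that make the names `myservice` and `mydb` resolvable in DNS will
do; the ports shown here are placeholders:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80          # placeholder port
    targetPort: 9376  # placeholder port
---
kind: Service
apiVersion: v1
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80          # placeholder port
    targetPort: 9377  # placeholder port
```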
This example is very simple but should provide some inspiration for you to
create your own Init Containers.
## Detailed behavior

During the startup of a Pod, the Init Containers are started in order, after the
@@ -181,4 +273,4 @@ Kubelet and Apiserver versions; see the [release notes](https://github.com/kuber

{% endcapture %}

{% include templates/concept.md %}
@@ -9,7 +9,7 @@ This page explains how Kubernetes objects are represented in the Kubernetes API,

{% capture body %}
## Understanding Kubernetes Objects

*Kubernetes Objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:

* What containerized applications are running (and on which nodes)
* The resources available to those applications
@@ -27,7 +27,7 @@ The [Kubernetes Blog](http://blog.kubernetes.io) has some additional information

* [The Distributed System Toolkit: Patterns for Composite Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html)
* [Container Design Patterns](http://blog.kubernetes.io/2016/06/container-design-patterns.html)

Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.

### How Pods Manage Multiple Containers
@@ -66,6 +66,7 @@ Here are some examples:

| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | &lt;not set&gt; | `[ep-1 foo bar]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | &lt;not set&gt; | `[ep-2]` |
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | `[zoo boo]` | `[ep-1 zoo boo]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |

{% endcapture %}
@@ -0,0 +1,430 @@
---
title: Managing Compute Resources for Containers
---

{% capture overview %}

When you specify a [Pod](/docs/user-guide/pods), you can optionally specify how
much CPU and memory (RAM) each Container needs. When Containers have resource
requests specified, the scheduler can make better decisions about which nodes to
place Pods on. And when Containers have their limits specified, contention for
resources on a node can be handled in a specified manner. For more details about
the difference between requests and limits, see
[Resource QoS](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md).

{% endcapture %}

{% capture body %}

## Resource types

*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
CPU is specified in units of cores, and memory is specified in units of bytes.

CPU and memory are collectively referred to as *compute resources*, or just
*resources*. Compute resources are measurable quantities that can be requested,
allocated, and consumed. They are distinct from
[API resources](/docs/api/). API resources, such as Pods and
[Services](/docs/user-guide/services), are objects that can be read and modified
through the Kubernetes API server.
## Resource requests and limits of Pod and Container

Each Container of a Pod can specify one or more of the following:

* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
* `spec.containers[].resources.requests.cpu`
* `spec.containers[].resources.requests.memory`

Although requests and limits can only be specified on individual Containers, it
is convenient to talk about Pod resource requests and limits. A
*Pod resource request/limit* for a particular resource type is the sum of the
resource requests/limits of that type for each Container in the Pod. For
example, a Pod whose two Containers each request 250m of CPU has a Pod CPU
request of 500m.
## Meaning of CPU

Limits and requests for CPU resources are measured in *cpu* units.
One cpu, in Kubernetes, is equivalent to:

- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading

Fractional requests are allowed. A Container with
`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much
CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the
expression `100m`, which can be read as "one hundred millicpu". Some people say
"one hundred millicores", and this is understood to mean the same thing. A
request with a decimal point, like `0.1`, is converted to `100m` by the API, and
precision finer than `1m` is not allowed. For this reason, the form `100m` might
be preferred.

CPU is always requested as an absolute quantity, never as a relative quantity;
0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
## Meaning of memory

Limits and requests for `memory` are measured in bytes. You can express memory as
a plain integer or as a fixed-point integer using one of these SI suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following represent roughly the same value
(123Mi is exactly 123 × 2<sup>20</sup> = 128974848 bytes):

```shell
128974848, 129e6, 129M, 123Mi
```

Here's an example.
The following Pod has two Containers. Each Container has a request of 0.25 cpu
and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5
cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128
MiB of memory, and a limit of 1 core and 256MiB of memory.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
## How Pods with resource requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for Pods. The scheduler ensures that,
for each resource type, the sum of the resource requests of the scheduled
Containers is less than the capacity of the node. Note that even if actual memory
or CPU usage on a node is very low, the scheduler still refuses to place
a Pod on the node if the capacity check fails. This protects against a resource
shortage on a node when resource usage later increases, for example, during a
daily peak in request rate.
## How Pods with resource limits are run

When the kubelet starts a Container of a Pod, it passes the CPU and memory limits
to the container runtime.

When using Docker (a worked example of these conversions follows the list):

- The `spec.containers[].resources.requests.cpu` is converted to its core value,
  which is potentially fractional, and multiplied by 1024. This number is used
  as the value of the
  [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#/cpu-share-constraint)
  flag in the `docker run` command.

- The `spec.containers[].resources.limits.cpu` is converted to its millicore value,
  multiplied by 100000, and then divided by 1000. This number is used as the value
  of the [`--cpu-quota`](https://docs.docker.com/engine/reference/run/#/cpu-quota-constraint)
  flag in the `docker run` command. The [`--cpu-period`] flag is set to 100000,
  which represents the default 100ms period for measuring quota usage. The
  kubelet enforces cpu limits if it is started with the
  [`--cpu-cfs-quota`] flag set to true. As of Kubernetes version 1.2, this flag
  defaults to true.

- The `spec.containers[].resources.limits.memory` is converted to an integer, and
  used as the value of the
  [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints)
  flag in the `docker run` command.
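As an illustration, here is what those rules work out to for the `db` Container
in the earlier `frontend` example (requests.cpu `250m`, limits.cpu `500m`,
limits.memory `128Mi`). The flag values below are computed from the conversions
above; the `docker run` invocation is a sketch, not output captured from a real
kubelet:

```shell
# requests.cpu 250m   -> 0.25 cores * 1024    -> --cpu-shares=256
# limits.cpu   500m   -> 500 * 100000 / 1000  -> --cpu-quota=50000
# limits.memory 128Mi -> 128 * 2^20 bytes     -> --memory=134217728
docker run --cpu-shares=256 --cpu-quota=50000 --cpu-period=100000 \
  --memory=134217728 mysql
```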
If a Container exceeds its memory limit, it might be terminated. If it is
restartable, the kubelet will restart it, as with any other type of runtime
failure.

If a Container exceeds its memory request, it is likely that its Pod will
be evicted whenever the node runs out of memory.

A Container might or might not be allowed to exceed its CPU limit for extended
periods of time. However, it will not be killed for excessive CPU usage.

To determine whether a Container cannot be scheduled or is being killed due to
resource limits, see the
[Troubleshooting](#troubleshooting) section.
## Monitoring compute resource usage

The resource usage of a Pod is reported as part of the Pod status.

If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md)
is configured for your cluster, then Pod resource usage can be retrieved from
the monitoring system.

## Troubleshooting
### My Pods are pending with event message failedScheduling

If the scheduler cannot find any node where a Pod can fit, the Pod remains
unscheduled until a place can be found. An event is produced each time the
scheduler fails to find a place for the Pod, like this:

```shell
$ kubectl describe pod frontend | grep -A 3 Events
Events:
  FirstSeen   LastSeen   Count   From          Subobject   PathReason         Message
  36s         5s         6       {scheduler }              FailedScheduling   Failed for reason PodExceedsFreeCPU and possibly others
```
In the preceding example, the Pod named "frontend" fails to be scheduled due to
insufficient CPU resource on the node. Similar error messages can also suggest
failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod
is pending with a message of this type, there are several things to try:

- Add more nodes to the cluster.
- Terminate unneeded Pods to make room for pending Pods.
- Check that the Pod is not larger than all the nodes. For example, if all the
  nodes have a capacity of `cpu: 1`, then a Pod with a limit of `cpu: 1.1` will
  never be scheduled.

You can check node capacities and amounts allocated with the
`kubectl describe nodes` command. For example:
```shell
$ kubectl.sh describe nodes e2e-test-minion-group-4lw4
Name:            e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                               2
 memory:                            7679792Ki
 pods:                              110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                               1800m
 memory:                            7474992Ki
 pods:                              110
[ ... lines removed for clarity ...]
Non-terminated Pods:        (5 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                  ------------  ----------  ---------------  -------------
  kube-system  fluentd-gcp-v1.38-28bv1               100m (5%)     0 (0%)      200Mi (2%)       200Mi (2%)
  kube-system  kube-dns-3297075139-61lj3             260m (13%)    0 (0%)      100Mi (1%)       170Mi (2%)
  kube-system  kube-proxy-e2e-test-...               100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  monitoring-influxdb-grafana-v4-z1m12  200m (10%)    200m (10%)  600Mi (8%)       600Mi (8%)
  kube-system  node-problem-detector-v0.1-fj7m3      20m (1%)      200m (10%)  20Mi (0%)        100Mi (1%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ------------    ----------    ---------------    -------------
  680m (34%)      400m (20%)    920Mi (12%)        1070Mi (14%)
```
In the preceding output, you can see that if a Pod requests more than 1120m
CPUs or 6.23Gi of memory, it will not fit on the node. (Those figures are the
allocatable 1800m of CPU and 7474992Ki of memory, minus the 680m of CPU and
920Mi of memory already requested.)

By looking at the `Pods` section, you can see which Pods are taking up space on
the node.

The amount of resources available to Pods is less than the node capacity, because
system daemons use a portion of the available resources. The `allocatable` field of
[NodeStatus](/docs/resources-reference/v1.5/#nodestatus-v1)
gives the amount of resources that are available to Pods. For more information, see
[Node Allocatable Resources](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md).

The [resource quota](/docs/admin/resourcequota/) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
### My Container is terminated

Your Container might get terminated because it is resource-starved. To check
whether a Container is being killed because it is hitting a resource limit, call
`kubectl describe pod` on the Pod of interest:
```shell
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
Name:                           simmemleak-hra99
Namespace:                      default
Image(s):                       saadali/simmemleak
Node:                           kubernetes-node-tf0f/10.240.216.66
Labels:                         name=simmemleak
Status:                         Running
Reason:
Message:
IP:                             10.244.2.75
Replication Controllers:        simmemleak (1/1 replicas created)
Containers:
  simmemleak:
    Image:  saadali/simmemleak
    Limits:
      cpu:                      100m
      memory:                   50Mi
    State:                      Running
      Started:                  Tue, 07 Jul 2015 12:54:41 -0700
    Last Termination State:     Terminated
      Exit Code:                1
      Started:                  Fri, 07 Jul 2015 12:54:30 -0700
      Finished:                 Fri, 07 Jul 2015 12:54:33 -0700
    Ready:                      False
    Restart Count:              5
Conditions:
  Type      Status
  Ready     False
Events:
  FirstSeen                         LastSeen                          Count  From                            SubobjectPath                        Reason     Message
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1      {scheduler }                                                         scheduled  Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1      {kubelet kubernetes-node-tf0f}  implicitly required container POD    pulled     Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1      {kubelet kubernetes-node-tf0f}  implicitly required container POD    created    Created with docker id 6a41280f516d
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1      {kubelet kubernetes-node-tf0f}  implicitly required container POD    started    Started with docker id 6a41280f516d
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1      {kubelet kubernetes-node-tf0f}  spec.containers{simmemleak}          created    Created with docker id 87348f12526a
```
In the preceding example, the `Restart Count:  5` indicates that the `simmemleak`
Container in the Pod was terminated and restarted five times.

You can call `get pod` with the `-o go-template=...` option to fetch the status
of previously terminated Containers:

```shell{% raw %}
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
```

You can see that the Container was terminated because of `reason:OOM Killed`,
where `OOM` stands for Out Of Memory.
## Opaque integer resources (Alpha feature)

Kubernetes version 1.5 introduces Opaque integer resources. Opaque
integer resources allow cluster operators to advertise new node-level
resources that would be otherwise unknown to the system.

Users can consume these resources in Pod specs just like CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.

**Note:** Opaque integer resources are Alpha in Kubernetes version 1.5.
Only resource accounting is implemented; node-level isolation is still
under active development.

Opaque integer resources are resources that begin with the prefix
`pod.alpha.kubernetes.io/opaque-int-resource-`. The API server
restricts quantities of these resources to whole numbers. Examples of
_valid_ quantities are `3`, `3000m` and `3Ki`. Examples of _invalid_
quantities are `0.5` and `1500m`.
There are two steps required to use opaque integer resources. First, the
cluster operator must advertise a per-node opaque resource on one or more
nodes. Second, users must request the opaque resource in Pods.

To advertise a new opaque integer resource, the cluster operator should
submit a `PATCH` HTTP request to the API server to specify the available
quantity in the `status.capacity` for a node in the cluster. After this
operation, the node's `status.capacity` will include a new resource. The
`status.allocatable` field is updated automatically with the new resource
asynchronously by the kubelet. Note that because the scheduler uses the
node's `status.allocatable` value when evaluating Pod fitness, there may
be a short delay between patching the node capacity with a new resource and the
first Pod that requests the resource being scheduled on that node.

**Example:**

Here is an HTTP request that advertises five "foo" resources on node `k8s-node-1`.
```http
PATCH /api/v1/nodes/k8s-node-1/status HTTP/1.1
Accept: application/json
Content-Type: application/json-patch+json
Host: k8s-master:8080

[
  {
    "op": "add",
    "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo",
    "value": "5"
  }
]
```
**Note**: In the preceding request, `~1` is the encoding for the character `/`
in the patch path. The operation path value in JSON-Patch is interpreted as a
JSON-Pointer. For more details, see
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
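To issue the same patch from the command line, something like the following
should work through a local API proxy (a sketch; the default `kubectl proxy`
port of 8001 is an assumption about your setup):

```shell
# Start a local proxy to the API server in the background.
kubectl proxy &
# Patch the node status with the new opaque resource.
curl --request PATCH \
  --header "Content-Type: application/json-patch+json" \
  --data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
  http://localhost:8001/api/v1/nodes/k8s-node-1/status
```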
To consume an opaque resource in a Pod, include the name of the opaque
resource as a key in the `spec.containers[].resources.requests` map.

The Pod is scheduled only if all of the resource requests are
satisfied, including cpu, memory, and any opaque resources. The Pod will
remain in the `PENDING` state as long as the resource request cannot be met by
any node.

**Example:**

The Pod below requests 2 cpus and 1 "foo" (an opaque resource).
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 2
        pod.alpha.kubernetes.io/opaque-int-resource-foo: 1
```
## Planned Improvements

Kubernetes version 1.5 only allows resource quantities to be specified on a
Container. It is planned to improve accounting for resources that are shared by
all Containers in a Pod, such as
[emptyDir volumes](/docs/user-guide/volumes/#emptydir).

Kubernetes version 1.5 only supports Container requests and limits for CPU and
memory. It is planned to add new resource types, including a node disk space
resource, and a framework for adding custom
[resource types](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/resources.md).

Kubernetes supports overcommitment of resources by supporting multiple levels of
[Quality of Service](http://issue.k8s.io/168).

In Kubernetes version 1.5, one unit of CPU means different things on different
cloud providers, and on different machine types within the same cloud provider.
For example, on AWS, the capacity of a node is reported in
[ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more
consistency across providers and platforms.
{% endcapture %}

{% capture whatsnext %}

* Get hands-on experience
  [assigning CPU and RAM resources to a container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/).

* [Container](/docs/api-reference/v1/definitions/#_v1_container)

* [ResourceRequirements](/docs/resources-reference/v1.5/#resourcerequirements-v1)

{% endcapture %}

{% include templates/concept.md %}
@@ -8,7 +8,7 @@ The Concepts section helps you learn about the parts of the Kubernetes system an

To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.

Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:

* The **Kubernetes Master** is a collection of four processes that run on a single node in your cluster, which is designated as the master node.
* Each individual non-master node in your cluster runs two processes:
@@ -17,7 +17,7 @@ Once you've set your desired state, the *Kubernetes Control Plane* works to make

## Kubernetes Objects

Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API; see the [Kubernetes Objects overview](/docs/concepts/abstractions/overview/) for more details.

The basic Kubernetes objects include:
@@ -0,0 +1,282 @@
---
title: Pod Lifecycle
---

{% capture overview %}

{% comment %}Updated: 4/14/2015{% endcomment %}
{% comment %}Edited and moved to Concepts section: 2/2/17{% endcomment %}

This page describes the lifecycle of a Pod.

{% endcapture %}

{% capture body %}
## Pod phase

A Pod's `status` field is a
[PodStatus](/docs/resources-reference/v1.5/#podstatus-v1)
object, which has a `phase` field.

The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle. The phase is not intended to be a comprehensive rollup of observations
of Container or Pod state, nor is it intended to be a comprehensive state machine.

The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that
have a given `phase` value.

Here are the possible values for `phase`:

* Pending: The Pod has been accepted by the Kubernetes system, but one or more of
  the Container images has not been created. This includes time before being
  scheduled as well as time spent downloading images over the network,
  which could take a while.

* Running: The Pod has been bound to a node, and all of the Containers have been
  created. At least one Container is still running, or is in the process of
  starting or restarting.

* Succeeded: All Containers in the Pod have terminated in success, and will not
  be restarted.

* Failed: All Containers in the Pod have terminated, and at least one Container
  has terminated in failure. That is, the Container either exited with non-zero
  status or was terminated by the system.

* Unknown: For some reason the state of the Pod could not be obtained, typically
  due to an error in communicating with the host of the Pod.
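A quick way to read just the phase, for example from a script, is a `jsonpath`
query (`my-pod` below is a placeholder; substitute any Pod in your cluster):

```shell
kubectl get pod my-pod -o jsonpath='{.status.phase}'
```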
## Pod conditions

A Pod has a PodStatus, which has an array of
[PodConditions](/docs/resources-reference/v1.5/#podcondition). Each element
of the PodCondition array has a `type` field and a `status` field. The `type`
field is a string, with possible values PodScheduled, Ready, Initialized, and
Unschedulable. The `status` field is a string, with possible values True, False,
and Unknown.
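The same `jsonpath` technique works for conditions (again, `my-pod` is a
placeholder):

```shell
kubectl get pod my-pod -o jsonpath='{.status.conditions}'
```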
## Container probes

A [Probe](/docs/resources-reference/v1.5/#probe-v1) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler) implemented by
the Container. There are three types of handlers:

* [ExecAction](/docs/resources-reference/v1.5/#execaction-v1):
  Executes a specified command inside the Container. The diagnostic
  is considered successful if the command exits with a status code of 0.

* [TCPSocketAction](/docs/resources-reference/v1.5/#tcpsocketaction-v1):
  Performs a TCP check against the Container's IP address on
  a specified port. The diagnostic is considered successful if the port is open.

* [HTTPGetAction](/docs/resources-reference/v1.5/#httpgetaction-v1):
  Performs an HTTP Get request against the Container's IP
  address on a specified port and path. The diagnostic is considered successful
  if the response has a status code greater than or equal to 200 and less than 400.
Each probe has one of three results:

* Success: The Container passed the diagnostic.
* Failure: The Container failed the diagnostic.
* Unknown: The diagnostic failed, so no action should be taken.

The kubelet can optionally perform and react to two kinds of probes on running
Containers (a minimal example follows this list):

* `livenessProbe`: Indicates whether the Container is running. If
  the liveness probe fails, the kubelet kills the Container, and the Container
  is subjected to its [restart policy](#restart-policy). If a Container does not
  provide a liveness probe, the default state is `Success`.

* `readinessProbe`: Indicates whether the Container is ready to service requests.
  If the readiness probe fails, the endpoints controller removes the Pod's IP
  address from the endpoints of all Services that match the Pod. The default
  state of readiness before the initial delay is `Failure`. If a Container does
  not provide a readiness probe, the default state is `Success`.
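For instance, a readiness probe with an `exec` handler might look like the
following sketch; the Pod name, the command, and the file it checks for are all
placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-exec   # placeholder name
spec:
  containers:
  - name: app
    image: busybox
    args: ['sh', '-c', 'touch /tmp/ready && sleep 3600']
    readinessProbe:
      exec:
        # Succeeds (exit code 0) only while /tmp/ready exists.
        command: ['cat', '/tmp/ready']
      initialDelaySeconds: 5
      periodSeconds: 5
```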
### When should you use liveness or readiness probes?

If the process in your Container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's `restartPolicy`.

If you'd like your Container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.

If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.

If you want your Container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.

Note that if you just want to be able to drain requests when the Pod is deleted,
you do not necessarily need a readiness probe; on deletion, the Pod automatically
puts itself into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the Containers in the Pod
to stop.
## Pod and Container status

For detailed information about Pod and Container status, see
[PodStatus](/docs/resources-reference/v1.5/#podstatus-v1)
and
[ContainerStatus](/docs/resources-reference/v1.5/#containerstatus-v1).
Note that the information reported as Pod status depends on the current
[ContainerState](/docs/resources-reference/v1.5/#containerstate-v1).
## Restart policy

A PodSpec has a `restartPolicy` field with possible values Always, OnFailure,
and Never. The default value is Always.
`restartPolicy` applies to all Containers in the Pod. `restartPolicy` only
refers to restarts of the Containers by the kubelet on the same node. Failed
Containers that are restarted by the kubelet are restarted with an exponential
back-off delay (10s, 20s, 40s ...) capped at five minutes, and the delay is reset
after ten minutes of successful execution. As discussed in the
[Pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof),
once bound to a node, a Pod will never be rebound to another node.
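In a manifest, `restartPolicy` sits at the top level of the Pod spec. A sketch
for a run-to-completion Pod (the name, image, and command are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot   # placeholder name
spec:
  restartPolicy: OnFailure   # retried by the kubelet until it exits 0
  containers:
  - name: task
    image: busybox
    command: ['sh', '-c', 'echo done']
```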
## Pod lifetime

In general, Pods do not disappear until someone destroys them. This might be a
human or a controller. The only exception to
this rule is that Pods with a `phase` of Succeeded or Failed for more than some
duration (determined by the master) will expire and be automatically destroyed.

Three types of controllers are available:

- Use a [Job](/docs/user-guide/jobs/) for Pods that are expected to terminate,
  for example, batch computations. Jobs are appropriate only for Pods with
  `restartPolicy` equal to OnFailure or Never.

- Use a [ReplicationController](/docs/user-guide/replication-controller/),
  [ReplicaSet](/docs/user-guide/replicasets/), or
  [Deployment](/docs/user-guide/deployments/)
  for Pods that are not expected to terminate, for example, web servers.
  ReplicationControllers are appropriate only for Pods with a `restartPolicy` of
  Always.

- Use a [DaemonSet](/docs/admin/daemons/) for Pods that need to run one per
  machine, because they provide a machine-specific system service.

All three types of controllers contain a PodTemplate. It
is recommended to create the appropriate controller and let
it create Pods, rather than directly create Pods yourself. That is because Pods
alone are not resilient to machine failures, but controllers are.

If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
## Examples

### Advanced liveness probe example

Liveness probes are executed by the kubelet, so all requests are made in the
kubelet network namespace.
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: gcr.io/google_containers/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
```
### Example states

* Pod is running and has one Container. Container exits with success.
  * Log completion event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Pod `phase` becomes Succeeded.
    * Never: Pod `phase` becomes Succeeded.

* Pod is running and has one Container. Container exits with failure.
  * Log failure event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Pod `phase` becomes Failed.

* Pod is running and has two Containers. Container 1 exits with failure.
  * Log failure event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Do not restart Container; Pod `phase` stays Running.
  * If Container 1 is not running, and Container 2 exits:
    * Log failure event.
    * If `restartPolicy` is:
      * Always: Restart Container; Pod `phase` stays Running.
      * OnFailure: Restart Container; Pod `phase` stays Running.
      * Never: Pod `phase` becomes Failed.

* Pod is running and has one Container. Container runs out of memory.
  * Container terminates in failure.
  * Log OOM event.
  * If `restartPolicy` is:
    * Always: Restart Container; Pod `phase` stays Running.
    * OnFailure: Restart Container; Pod `phase` stays Running.
    * Never: Log failure event; Pod `phase` becomes Failed.

* Pod is running, and a disk dies.
  * Kill all Containers.
  * Log appropriate event.
  * Pod `phase` becomes Failed.
  * If running under a controller, Pod is recreated elsewhere.

* Pod is running, and its node is segmented out.
  * Node controller waits for timeout.
  * Node controller sets Pod `phase` to Failed.
  * If running under a controller, Pod is recreated elsewhere.
{% endcapture %}

{% capture whatsnext %}

* Get hands-on experience
  [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

* Get hands-on experience
  [configuring liveness and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).

* [Container Lifecycle Hooks](/docs/user-guide/container-environment/)

{% endcapture %}

{% include templates/concept.md %}
@@ -12,6 +12,8 @@ This page explains how documentation issues are reviewed and prioritized for the

## Categorizing issues

Issues should be sorted into different buckets of work using the following labels and definitions. If an issue doesn't have enough information to identify a problem that can be researched, reviewed, or worked on (i.e. the issue doesn't fit into any of the categories below), you should close the issue with a comment explaining why it is being closed.

### Needs Clarification

* Issues that need more information from the original submitter to make them actionable. Issues with this label that aren't followed up on within a week may be closed.

### Actionable

* Issues that can be worked on with current information (or may need a comment to explain what needs to be done to make it more clear)
@@ -26,8 +28,9 @@ Issues should be sorted into different buckets of work using the following label

* Issues that are suggestions for better processes or site improvements that require community agreement to be implemented
* Topics can be brought to SIG meetings as agenda items

### Needs UX Review

* Issues that are suggestions for improving the user interface of the site.
* Fixing broken site elements.

## Prioritizing Issues
@@ -233,7 +233,7 @@ after their announced deprecation for no less than:**

* **Beta: 3 months or 1 release (whichever is longer)**
* **Alpha: 0 releases**

**Rule #6: Deprecated CLI elements must emit warnings (which may optionally be disabled)
when used.**

## Deprecating a feature or behavior
@@ -3073,7 +3073,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i

</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">nodeSelector</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
<td class="tableblock halign-left valign-top"></td>
@@ -61,9 +61,6 @@ echo "192.168.121.9 centos-master

* Edit /etc/kubernetes/config, which will be the same on all hosts, to contain:

```shell
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
@@ -111,6 +108,9 @@ KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
@@ -134,10 +134,10 @@ $ kubectl get --all-namespaces services

should show a set of [services](/docs/user-guide/services) that look something like this:

```shell
NAMESPACE     NAME          CLUSTER_IP   EXTERNAL_IP   PORT(S)         AGE
default       kubernetes    10.0.0.1     <none>        443/TCP         1d
kube-system   kube-dns      10.0.0.2     <none>        53/TCP,53/UDP   1d
kube-system   kube-ui       10.0.0.3     <none>        80/TCP          1d
...
```
@@ -47,6 +47,8 @@ clusters.

[KCluster.io](https://kcluster.io) provides highly available and scalable managed Kubernetes clusters for AWS.

[KUBE2GO.io](https://kube2go.io) gets you started with highly available Kubernetes clusters on multiple public clouds, along with useful tools for development, debugging, and monitoring.

[Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting.

[OpenShift Dedicated](https://www.openshift.com/dedicated/) offers managed Kubernetes clusters powered by OpenShift, and [OpenShift Online](https://www.openshift.com/features/) provides free hosted access for Kubernetes applications.
@@ -62,6 +64,7 @@ few commands, and have active community support.

- [CenturyLink Cloud](/docs/getting-started-guides/clc)
- [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer)
- [Stackpoint.io](/docs/getting-started-guides/stackpoint/)
- [KUBE2GO.io](https://kube2go.io/)

### Custom Solutions
@@ -131,6 +134,7 @@ GKE | | | GCE | [docs](https://clou

Stackpoint.io           |           | multi-support | multi-support | [docs](http://www.stackpointcloud.com) | Commercial
AppsCode.com            | Saltstack | Debian        | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
KCluster.io             |           | multi-support | multi-support | [docs](https://kcluster.io) | Commercial
KUBE2GO.io              |           | multi-support | multi-support | [docs](https://kube2go.io) | Commercial
Platform9               |           | multi-support | multi-support | [docs](https://platform9.com/products/kubernetes/) | Commercial
GCE                     | Saltstack | Debian        | GCE           | [docs](/docs/getting-started-guides/gce) | Project
Azure Container Service |           | Ubuntu        | Azure         | [docs](https://azure.microsoft.com/en-us/services/container-service/) | Commercial
@@ -69,6 +69,7 @@ For each host in turn:

* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu or HypriotOS, run:

      apt-get update && apt-get install -y apt-transport-https
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
      cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
      deb http://apt.kubernetes.io/ kubernetes-xenial main
@ -194,6 +195,46 @@ Once a pod network has been installed, you can confirm that it is working by che

And once the `kube-dns` pod is up and running, you can continue by joining your nodes.

You may have trouble with your configuration if you see Pod statuses like the following:

```
NAMESPACE     NAME                        READY     STATUS              RESTARTS   AGE
kube-system   canal-node-f0lqp            2/3       RunContainerError   2          48s
kube-system   canal-node-77d0h            2/3       CrashLoopBackOff    3          3m
kube-system   kube-dns-2924299975-7q1vq   0/4       ContainerCreating   0          15m
```

The statuses `RunContainerError`, `CrashLoopBackOff`, and `ContainerCreating` are very common.

To help diagnose what happened, you can use the following command to inspect the Pod's events:

```bash
kubectl describe -n kube-system po {YOUR_POD_NAME}
```

Do not use `kubectl logs`; you will get the following error:

```
# kubectl logs -n kube-system canal-node-f0lqp
Error from server (BadRequest): the server rejected our request for an unknown reason (get pods canal-node-f0lqp)
```

The `kubectl describe` command gives you more detail:

```
# kubectl describe -n kube-system po kube-dns-2924299975-1l2t7
2m  2m  1  {kubelet nac}  spec.containers{flannel}  Warning  Failed  Failed to start container with docker id 927e7ccdc32b with error: Error response from daemon: {"message":"chown /etc/resolv.conf: operation not permitted"}
```

Or:

```
6m  1m  191  {kubelet nac}  Warning  FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2924299975-1l2t7_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2924299975-1l2t7_kube-system(dee8ef21-fbcb-11e6-ba19-38d547e0006a)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
```

You can then search for the error messages, which may help you find a solution.
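If you prefer to scan recent events for the whole namespace rather than describing one Pod at a time, something like the following should also work (a sketch; `--sort-by` support can vary with your `kubectl` version):

```bash
# List kube-system events, oldest first, so the failures appear in order.
kubectl get events -n kube-system --sort-by='.metadata.creationTimestamp'
```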
### (4/4) Joining your nodes

The nodes are where your workloads (containers, pods, and so on) run.
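Joining a node comes down to one command run on the node itself; here is a sketch of the shape of that command (the token and master IP are placeholders, and the exact flags for your `kubeadm` version may differ):

```bash
# Run on each node; <token> is printed by `kubeadm init`, or can be fetched as shown below.
kubeadm join --token=<token> <master-ip>
```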
@ -352,7 +393,7 @@ Please note: `kubeadm` is a work in progress and these limitations will be addre

1. There is no built-in way of fetching the token easily once the cluster is up and running, but here is a `kubectl` command you can copy and paste that will print out the token for you:

```console
# kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 --decode | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g;" | xargs echo
```

1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).

@ -31,12 +31,13 @@ This will run two nginx Pods in the default Namespace, and expose them through a

```console
$ kubectl get svc,pod
NAME              CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes    10.100.0.1    <none>        443/TCP   46m
svc/nginx         10.100.0.16   <none>        80/TCP    33s

NAME                        READY     STATUS    RESTARTS   AGE
po/nginx-701339712-e0qfq    1/1       Running   0          35s
po/nginx-701339712-o00ef    1/1       Running   0          35s
```

We should be able to access our new nginx Service from other Pods. Let's try to access it from another Pod.
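One quick way to try that is a throwaway client Pod; this is a sketch rather than the walkthrough's own client (`busybox` as the image and the `nginx` Service name are the only assumptions):

```console
$ kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -qO- http://nginx
```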
@ -23,7 +23,7 @@ This guide assumes you have access to a working OpenStack cluster with the follo

- Heat
- DNS resolution of instance names

By default this provider provisions 4 `m1.medium` instances. If you do not have resources available, please see the [Set additional configuration values](#set-additional-configuration-values) section for information on reducing the footprint of your cluster.

## Pre-Requisites

If you already have the required versions of the OpenStack CLI tools installed and configured, you can move on to the [Starting a cluster](#starting-a-cluster) section.

@ -92,7 +92,7 @@ Please see the contents of these files for documentation regarding each variable

## Starting a cluster

Once you've installed the OpenStack CLI tools and have set your OpenStack environment variables, issue this command:

```sh
export KUBERNETES_PROVIDER=openstack-heat; curl -sS https://get.k8s.io | bash
```

@ -194,6 +194,11 @@ nova list --name=$STACK_NAME

See the [OpenStack CLI Reference](http://docs.openstack.org/cli-reference/) for more details.

### Salt

The OpenStack-Heat provider uses a [standalone Salt configuration](/docs/admin/salt/#standalone-salt-configuration-on-gce-and-others).
It uses Salt only for bootstrapping the machines; it creates no salt-master and does not auto-start the salt-minion service on the nodes.
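If you want to verify that on a node, a quick check is possible on systemd-based images (an assumption about the OS image your cluster uses):

```sh
# The salt-minion unit should be present but inactive.
systemctl status salt-minion
```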

## SSHing to your nodes

Your public key was added during the cluster turn-up, so you can easily SSH to the nodes for troubleshooting purposes.

@ -159,15 +159,17 @@ juju scp kubernetes-master/0:config ~/.kube/config

Fetch a binary for the architecture you have deployed. If your client is a
different architecture you will need to get the appropriate `kubectl` binary
through other means. In this example we copy `kubectl` to `~/bin` for convenience;
by default this should be in your `$PATH`.

```
mkdir -p ~/bin
juju scp kubernetes-master/0:kubectl ~/bin/kubectl
```
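If `~/bin` is not already on your `PATH`, you can add it for the current session (assuming a Bourne-style shell):

```
export PATH="$HOME/bin:$PATH"
```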

Query the cluster:

    kubectl cluster-info

Output:

@ -79,8 +79,7 @@ Sample Config:

#### Known issues

* [Unable to execute command on pod container using kubectl exec](https://github.com/kubernetes/kubernetes-anywhere/issues/337)

### Kube-up (Deprecated)

@ -216,7 +215,7 @@ going on (find yourself authorized with your SSH key, or use the password

IaaS Provider        | Config. Mgmt  | OS        | Networking | Docs                                          | Conforms | Support Level
-------------------- | ------------  | ------    | ---------- | --------------------------------------------- | ---------| ----------------------------
Vmware vSphere       | Kube-anywhere | Photon OS | Flannel    | [docs](/docs/getting-started-guides/vsphere)  |          | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel))

For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

@ -1062,7 +1062,7 @@ Appears In <a href="#pod-v1">Pod</a> <a href="#podtemplatespec-v1">PodTemplateSp

</tr>
<tr>
<td>nodeSelector <br /> <em>object</em></td>
<td>NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: <a href="http://kubernetes.io/docs/user-guide/node-selection">http://kubernetes.io/docs/user-guide/node-selection</a></td>
</tr>
<tr>
<td>restartPolicy <br /> <em>string</em></td>

@ -156,7 +156,7 @@ The output is:

Verify that the replica count is zero:

    kubectl get deployment --namespace=kube-system

The output displays 0 in the DESIRED and CURRENT columns:

@ -247,7 +247,7 @@ where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod's

If your pod relies on virtual hosts, which is probably the more common case,
you should not use `host`, but rather set the `Host` header in `httpHeaders`.

In addition to command probes and HTTP probes, Kubernetes supports
[TCP probes](/docs/api-reference/v1/definitions/#_v1_tcpsocketaction).

{% endcapture %}

@ -255,7 +255,7 @@ In addition to command probes and HTTP probes, Kubernetes supports

{% capture whatsnext %}

* Learn more about
[Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).

* Learn more about the
[Health Checking section](/docs/user-guide/walkthrough/k8s201/#health-checking).

@ -0,0 +1,54 @@

apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example-2
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          if [[ -e /etc/cpu_limit ]]; then
            echo -en '\n'; cat /etc/cpu_limit; fi;
          if [[ -e /etc/cpu_request ]]; then
            echo -en '\n'; cat /etc/cpu_request; fi;
          if [[ -e /etc/mem_limit ]]; then
            echo -en '\n'; cat /etc/mem_limit; fi;
          if [[ -e /etc/mem_request ]]; then
            echo -en '\n'; cat /etc/mem_request; fi;
          sleep 5;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
          - path: "cpu_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
          - path: "mem_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
          - path: "mem_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory

@ -0,0 +1,39 @@

apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c"]
      args:
      - while true; do
          if [[ -e /etc/labels ]]; then
            echo -en '\n\n'; cat /etc/labels; fi;
          if [[ -e /etc/annotations ]]; then
            echo -en '\n\n'; cat /etc/annotations; fi;
          sleep 5;
        done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

@ -0,0 +1,242 @@

---
title: Exposing Pod Information to Containers Using a DownwardAPIVolumeFile
---

{% capture overview %}

This page shows how a Pod can use a DownwardAPIVolumeFile to expose information
about itself to Containers running in the Pod. A DownwardAPIVolumeFile can expose
Pod fields and Container fields.

{% endcapture %}

{% capture prerequisites %}

{% include task-tutorial-prereqs.md %}

{% endcapture %}

{% capture steps %}

## The Downward API

There are two ways to expose Pod and Container fields to a running Container:

* [Environment variables](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/)
* DownwardAPIVolumeFiles

Together, these two ways of exposing Pod and Container fields are called the
*Downward API*.

## Storing Pod fields

In this exercise, you create a Pod that has one Container.
Here is the configuration file for the Pod:

{% include code.html language="yaml" file="dapi-volume.yaml" ghlink="/docs/tasks/configure-pod-container/dapi-volume.yaml" %}

In the configuration file, you can see that the Pod has a `downwardAPI` Volume,
and the Container mounts the Volume at `/etc`.

Look at the `items` array under `downwardAPI`. Each element of the array is a
[DownwardAPIVolumeFile](/docs/resources-reference/v1.5/#downwardapivolumefile-v1).
The first element specifies that the value of the Pod's
`metadata.labels` field should be stored in a file named `labels`.
The second element specifies that the value of the Pod's `annotations`
field should be stored in a file named `annotations`.

**Note**: The fields in this example are Pod fields. They are not
fields of the Container in the Pod.

Create the Pod:

```shell
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/dapi-volume.yaml
```

Verify that the Container in the Pod is running:

```shell
kubectl get pods
```

View the Container's logs:

```shell
kubectl logs kubernetes-downwardapi-volume-example
```

The output shows the contents of the `labels` file and the `annotations` file:

```shell
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"

build="two"
builder="john-doe"
```

Get a shell into the Container that is running in your Pod:

```
kubectl exec -it kubernetes-downwardapi-volume-example -- sh
```

In your shell, view the `labels` file:

```shell
/# cat /etc/labels
```

The output shows that all of the Pod's labels have been written
to the `labels` file:

```shell
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"
```

Similarly, view the `annotations` file:

```shell
/# cat /etc/annotations
```

View the files in the `/etc` directory:

```shell
/# ls -laR /etc
```

In the output, you can see that the `labels` and `annotations` files
are in a temporary subdirectory: in this example,
`..2982_06_02_21_47_53.299460680`. In the `/etc` directory, `..data` is
a symbolic link to the temporary subdirectory. Also in the `/etc` directory,
`labels` and `annotations` are symbolic links.

```
drwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680
lrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680
lrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations
lrwxrwxrwx ... Feb 6 21:47 labels -> ..data/labels

/etc/..2982_06_02_21_47_53.299460680:
total 8
-rw-r--r-- ... Feb 6 21:47 annotations
-rw-r--r-- ... Feb 6 21:47 labels
```

Using symbolic links enables dynamic atomic refresh of the metadata; updates are
written to a new temporary directory, and the `..data` symlink is updated
atomically using
[rename(2)](http://man7.org/linux/man-pages/man2/rename.2.html).
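You can watch this atomic refresh in action by changing a label from another terminal and re-reading the file; a sketch (the new label value is arbitrary, and the update appears after a short kubelet sync delay):

```shell
kubectl label pods kubernetes-downwardapi-volume-example rack=rack-23 --overwrite
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/labels
```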

Exit the shell:

```shell
/# exit
```

## Storing Container fields

In the preceding exercise, you stored Pod fields in a DownwardAPIVolumeFile.
In this next exercise, you store Container fields. Here is the configuration
file for a Pod that has one Container:

{% include code.html language="yaml" file="dapi-volume-resources.yaml" ghlink="/docs/tasks/configure-pod-container/dapi-volume-resources.yaml" %}

In the configuration file, you can see that the Pod has a `downwardAPI` Volume,
and the Container mounts the Volume at `/etc`.

Look at the `items` array under `downwardAPI`. Each element of the array is a
DownwardAPIVolumeFile.

The first element specifies that in the Container named `client-container`,
the value of the `limits.cpu` field should be stored in a file named `cpu_limit`.

Create the Pod:

```shell
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/dapi-volume-resources.yaml
```

Get a shell into the Container that is running in your Pod:

```
kubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh
```

In your shell, view the `cpu_limit` file:

```shell
/# cat /etc/cpu_limit
```

You can use similar commands to view the `cpu_request`, `mem_limit` and
`mem_request` files.
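For example, a small loop run in the same shell prints all four files at once (a sketch):

```shell
/# for f in cpu_limit cpu_request mem_limit mem_request; do echo "$f: $(cat /etc/$f)"; done
```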

{% endcapture %}

{% capture discussion %}

## Capabilities of the Downward API

The following information is available to Containers through environment
variables and DownwardAPIVolumeFiles:

* The node's name
* The Pod's name
* The Pod's namespace
* The Pod's IP address
* The Pod's service account name
* A Container's CPU limit
* A Container's CPU request
* A Container's memory limit
* A Container's memory request

In addition, the following information is available through
DownwardAPIVolumeFiles:

* The Pod's labels
* The Pod's annotations

**Note**: If CPU and memory limits are not specified for a Container, the
Downward API defaults to the node allocatable value for CPU and memory.

## Projecting keys to specific paths and file permissions

You can project keys to specific paths and specific permissions on a per-file
basis. For more information, see
[Secrets](/docs/user-guide/secrets/).

## Motivation for the Downward API

It is sometimes useful for a Container to have information about itself, without
being overly coupled to Kubernetes. The Downward API allows containers to consume
information about themselves or the cluster without using the Kubernetes client
or API server.

An example is an existing application that assumes a particular well-known
environment variable holds a unique identifier. One possibility is to wrap the
application, but that is tedious and error prone, and it violates the goal of low
coupling. A better option would be to use the Pod's name as an identifier, and
inject the Pod's name into the well-known environment variable.

{% endcapture %}

{% capture whatsnext %}

* [PodSpec](/docs/resources-reference/v1.5/#podspec-v1)
* [Volume](/docs/resources-reference/v1.5/#volume-v1)
* [DownwardAPIVolumeSource](/docs/resources-reference/v1.5/#downwardapivolumesource-v1)
* [DownwardAPIVolumeFile](/docs/resources-reference/v1.5/#downwardapivolumefile-v1)
* [ResourceFieldSelector](/docs/resources-reference/v1.5/#resourcefieldselector-v1)

{% endcapture %}

{% include templates/task.md %}

@ -26,6 +26,17 @@ Together, these two ways of exposing Pod and Container fields are called the

{% capture steps %}

## The Downward API

There are two ways to expose Pod and Container fields to a running Container:

* Environment variables
* [DownwardAPIVolumeFiles](/docs/resources-reference/v1.5/#downwardapivolumefile-v1)

Together, these two ways of exposing Pod and Container fields are called the
*Downward API*.

## Using Pod fields as values for environment variables

In this exercise, you create a Pod that has one Container. Here is the

@ -161,3 +172,4 @@ The output shows the values of selected environment variables:

{% include templates/task.md %}

@ -80,7 +80,7 @@ Copy the base64 representation of the secret data into a file named `secret64`.
|
||||||
|
|
||||||
**Important**: Make sure there are no line breaks in your `secret64` file.
|
**Important**: Make sure there are no line breaks in your `secret64` file.
|
||||||
|
|
||||||
To understand what is in the `dockercfg` field, convert the secret data to a
|
To understand what is in the `.dockercfg` field, convert the secret data to a
|
||||||
readable format:
|
readable format:
|
||||||
|
|
||||||
base64 -d secret64
|
base64 -d secret64
|
||||||
|
|
|
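If your copy does pick up line breaks, stripping them is a one-liner (a sketch; `secret64.clean` is just a scratch file name):

```shell
tr -d '\n' < secret64 > secret64.clean && mv secret64.clean secret64
```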
@ -6,12 +6,20 @@ This section of the Kubernetes documentation contains pages that

show how to do individual tasks. A task page shows how to do a
single thing, typically by giving a short sequence of steps.

#### Using the kubectl Command Line

* [Listing All Container Images Running in a Cluster](/docs/tasks/kubectl/list-all-running-container-images/)
* [Getting a Shell to a Running Container](/docs/tasks/kubectl/get-shell-running-container/)

#### Configuring Pods and Containers

* [Defining Environment Variables for a Container](/docs/tasks/configure-pod-container/define-environment-variable-container/)
* [Defining a Command and Arguments for a Container](/docs/tasks/configure-pod-container/define-command-argument-container/)
* [Assigning CPU and RAM Resources to a Container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/)
* [Configuring a Pod to Use a Volume for Storage](/docs/tasks/configure-pod-container/configure-volume-storage/)
* [Configuring a Pod to Use a PersistentVolume for Storage](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)
* [Exposing Pod Information to Containers Through Environment Variables](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/)
* [Exposing Pod Information to Containers Using a DownwardAPIVolumeFile](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)
* [Distributing Credentials Securely](/docs/tasks/configure-pod-container/distribute-credentials-secure/)
* [Pulling an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry/)
* [Configuring Liveness and Readiness Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)

@ -55,3 +63,4 @@ single thing, typically by giving a short sequence of steps.

If you would like to write a task page, see
[Creating a Documentation Pull Request](/docs/contribute/create-pull-request/).

@ -0,0 +1,148 @@

---
assignees:
- caesarxuchao
- mikedanese
title: Getting a Shell to a Running Container
---

{% capture overview %}

This page shows how to use `kubectl exec` to get a shell to a
running Container.

{% endcapture %}

{% capture prerequisites %}

{% include task-tutorial-prereqs.md %}

{% endcapture %}

{% capture steps %}

## Getting a shell to a Container

In this exercise, you create a Pod that has one Container. The Container
runs the nginx image. Here is the configuration file for the Pod:

{% include code.html language="yaml" file="shell-demo.yaml" ghlink="/docs/tasks/kubectl/shell-demo.yaml" %}

Create the Pod:

```shell
kubectl create -f https://k8s.io/docs/tasks/kubectl/shell-demo.yaml
```

Verify that the Container is running:

```shell
kubectl get pod shell-demo
```

Get a shell to the running Container:

```shell
kubectl exec -it shell-demo -- /bin/bash
```

In your shell, list the running processes:

```shell
root@shell-demo:/# ps aux
```

In your shell, list the nginx processes:

```shell
root@shell-demo:/# ps aux | grep nginx
```

In your shell, experiment with other commands. Here are
some examples:

```shell
root@shell-demo:/# ls /
root@shell-demo:/# cat /proc/mounts
root@shell-demo:/# cat /proc/1/maps
root@shell-demo:/# apt-get update
root@shell-demo:/# apt-get install tcpdump
root@shell-demo:/# tcpdump
root@shell-demo:/# apt-get install lsof
root@shell-demo:/# lsof
```

## Writing the root page for nginx

Look again at the configuration file for your Pod. The Pod
has an `emptyDir` volume, and the Container mounts the volume
at `/usr/share/nginx/html`.

In your shell, create an `index.html` file in the `/usr/share/nginx/html`
directory:

```shell
root@shell-demo:/# echo Hello shell demo > /usr/share/nginx/html/index.html
```

In your shell, send a GET request to the nginx server:

```shell
root@shell-demo:/# apt-get update
root@shell-demo:/# apt-get install curl
root@shell-demo:/# curl localhost
```

The output shows the text that you wrote to the `index.html` file:

```shell
Hello shell demo
```

When you are finished with your shell, enter `exit`.

## Running individual commands in a Container

In an ordinary command window, not your shell, list the environment
variables in the running Container:

```shell
kubectl exec shell-demo env
```

Experiment running other commands. Here are some examples:

```shell
kubectl exec shell-demo ps aux
kubectl exec shell-demo ls /
kubectl exec shell-demo cat /proc/1/mounts
```

{% endcapture %}

{% capture discussion %}

## Opening a shell when a Pod has more than one Container

If a Pod has more than one Container, use `--container` or `-c` to
specify a Container in the `kubectl exec` command. For example,
suppose you have a Pod named my-pod, and the Pod has two containers
named main-app and helper-app. The following command would open a
shell to the main-app Container.

```shell
kubectl exec -it my-pod --container main-app -- /bin/bash
```

{% endcapture %}

{% capture whatsnext %}

* [kubectl exec](/docs/user-guide/kubectl/v1.5/#exec)

{% endcapture %}

{% include templates/task.md %}

@ -1,5 +1,5 @@

---
title: Listing All Container Images Running in a Cluster
---

{% capture overview %}

@ -0,0 +1,14 @@

apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html

@ -7,14 +7,14 @@ A tutorial shows how to accomplish a goal that is larger than a single

[task](/docs/tasks/). Typically a tutorial has several sections,
each of which has a sequence of steps.

* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.

* [Online Training Course](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)

* [Hello Minikube](/docs/tutorials/stateless-application/hello-minikube/)

#### Stateless Applications

* [Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/)

* [Using a Service to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address-service/)

@ -132,7 +132,7 @@ client_address=10.240.0.5

client_address=10.240.0.3
```

Note that these are not the correct client IPs; they are cluster-internal IPs. This is what happens:

* Client sends packet to `node2:nodePort`
* `node2` replaces the source IP address (SNAT) in the packet with its own IP address

@ -580,7 +580,7 @@ env:

        key: purge.interval
```

The entry point of the container invokes a bash script, `zkGenConfig.sh`, prior to
launching the ZooKeeper server process. This bash script generates the
ZooKeeper configuration files from the supplied environment variables.
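To see the environment variables the script consumes, you can inspect one of the StatefulSet's Pods; a sketch, assuming the usual ordinal Pod naming (`zk-0`):

```shell
kubectl exec zk-0 -- env | grep ZK_
```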

@ -653,7 +653,7 @@ ZK_LOG_DIR=/var/log/zookeeper

### Configuring Logging

One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default,
it uses a time and size based rolling file appender for its logging configuration.
Get the logging configuration from one of the Pods in the `zk` StatefulSet.

@ -74,10 +74,9 @@ chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl
```

Determine whether you can access sites like [https://cloud.google.com/container-registry/](https://cloud.google.com/container-registry/) directly without a proxy, by opening a new terminal and using:

```shell
curl --proxy "" https://cloud.google.com/container-registry/
```

If NO proxy is required, start the Minikube cluster:

@ -5,368 +5,6 @@ assignees:

title: Managing Compute Resources
---

{% include user-guide-content-moved.md %}

[Managing Compute Resources for Containers](/docs/concepts/configuration/manage-compute-resources-container/)

When specifying a [pod](/docs/user-guide/pods), you can optionally specify how much CPU and memory (RAM) each
container needs. When containers have their resource requests specified, the scheduler is
able to make better decisions about which nodes to place pods on; and when containers have their
limits specified, contention for resources on a node can be handled in a specified manner. For
more details about the difference between requests and limits, please refer to
[Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resource-qos.md).

*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified
in units of cores. Memory is specified in units of bytes.

CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
resources are measurable quantities which can be requested, allocated, and consumed. They are
distinct from [API resources](/docs/user-guide/working-with-resources). API resources, such as pods and
[services](/docs/user-guide/services), are objects that can be written to and retrieved from the Kubernetes API
server.

## Resource Requests and Limits of Pod and Container

Each container of a pod can optionally specify one or more of the following:

* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
* `spec.containers[].resources.requests.cpu`
* `spec.containers[].resources.requests.memory`

Specifying resource requests and/or limits is optional. In some clusters, unset limits or requests
may be replaced with default values when a pod is created or updated. The default value depends on
how the cluster is configured. If the requests values are not specified, they are set to be equal
to the limits values by default. Please note that limits must always be greater than or equal to
requests.

Although requests/limits can only be specified on individual containers, it is convenient to talk
about pod resource requests/limits. A *pod resource request/limit* for a particular resource
type is the sum of the resource requests/limits of that type for each container in the pod, with
unset values treated as zero (or equal to default values in some cluster configurations).

### Meaning of CPU

Limits and requests for `cpu` are measured in cpus.
One cpu, in Kubernetes, is equivalent to:

- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading

Fractional requests are allowed. A container with `spec.containers[].resources.requests.cpu` of `0.5` will
be guaranteed half as much CPU as one that asks for `1`. The expression `0.1` is equivalent to the expression
`100m`, which can be read as "one hundred millicpu" (some may say "one hundred millicores", and this is understood
to mean the same thing when talking about Kubernetes). A request with a decimal point, like `0.1`, is converted to
`100m` by the API, and precision finer than `1m` is not allowed. For this reason, the form `100m` may be preferred.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of cpu on a single
core, dual core, or 48 core machine.

### Meaning of Memory

Limits and requests for `memory` are measured in bytes.
Memory can be expressed as a plain integer or as a fixed-point integer with one of these SI suffixes (E, P, T, G, M, K)
or their power-of-two equivalents (Ei, Pi, Ti, Gi, Mi, Ki). For example, the following represent roughly the same value:
`128974848`, `129e6`, `129M`, `123Mi`.

### Example

The following pod has two containers. Each has a request of 0.25 core of cpu and 64MiB
(2<sup>26</sup> bytes) of memory and a limit of 0.5 core of cpu and 128MiB of memory. The pod can
be said to have a request of 0.5 core and 128MiB of memory and a limit of 1 core and 256MiB of
memory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```

## How Pods with Resource Requests are Scheduled

When a pod is created, the Kubernetes scheduler selects a node for the pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for pods. The scheduler ensures that,
for each resource type (CPU and memory), the sum of the resource requests of the
containers scheduled to the node is less than the capacity of the node. Note
that although actual memory or CPU resource usage on nodes is very low, the
scheduler will still refuse to place pods onto nodes if the capacity check
fails. This protects against a resource shortage on a node when resource usage
later increases, such as due to a daily peak in request rate.

## How Pods with Resource Limits are Run

When kubelet starts a container of a pod, it passes the CPU and memory limits to the container
runner (Docker or rkt).

When using Docker:

- The `spec.containers[].resources.requests.cpu` is converted to its core value (potentially fractional),
  multiplied by 1024, and used as the value of the [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#/cpu-share-constraint)
  flag to the `docker run` command.
- The `spec.containers[].resources.limits.cpu` is converted to its millicore value,
  multiplied by 100000, and then divided by 1000, and used as the value of the
  [`--cpu-quota`](https://docs.docker.com/engine/reference/run/#/cpu-quota-constraint) flag to the `docker run`
  command. The `--cpu-period` flag is set to 100000, which represents the default 100ms period
  for measuring quota usage. The kubelet enforces cpu limits if it was started with the
  `--cpu-cfs-quota` flag set to true. As of version 1.2, this flag defaults to true.
- The `spec.containers[].resources.limits.memory` is converted to an integer, and used as the value
  of the [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints) flag
  to the `docker run` command.
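For illustration, under those conversion rules a container with `requests.cpu: 250m`, `limits.cpu: 500m`, and `limits.memory: 128Mi` would be started with flags along these lines (a sketch of the resulting flags, not the kubelet's literal invocation; `<image>` is a placeholder):

```shell
# 0.25 core x 1024 = 256 shares; 500 millicores x 100000 / 1000 = 50000 quota per 100000 period;
# 128Mi = 134217728 bytes.
docker run --cpu-shares=256 --cpu-quota=50000 --cpu-period=100000 --memory=134217728 <image>
```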
**TODO: document behavior for rkt**
|
|
||||||
|
|
||||||
If a container exceeds its memory limit, it may be terminated. If it is restartable, it will be
|
|
||||||
restarted by kubelet, as will any other type of runtime failure.
|
|
||||||
|
|
||||||
A container may or may not be allowed to exceed its CPU limit for extended periods of time.
|
|
||||||
However, it will not be killed for excessive CPU usage.
|
|
||||||
|
|
||||||
To determine if a container cannot be scheduled or is being killed due to resource limits, see the
|
|
||||||
"Troubleshooting" section below.
|
|
||||||
|
|
||||||
## Monitoring Compute Resource Usage
|
|
||||||
|
|
||||||
The resource usage of a pod is reported as part of the Pod status.
|
|
||||||
|
|
||||||
If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
|
|
||||||
then pod resource usage can be retrieved from the monitoring system.
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### My pods are pending with event message failedScheduling
|
|
||||||
|
|
||||||
If the scheduler cannot find any node where a pod can fit, then the pod will remain unscheduled
|
|
||||||
until a place can be found. An event will be produced each time the scheduler fails to find a
|
|
||||||
place for the pod, like this:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
$ kubectl describe pod frontend | grep -A 3 Events
|
|
||||||
Events:
|
|
||||||
FirstSeen LastSeen Count From Subobject PathReason Message
|
|
||||||
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
|
|
||||||
```
|
|
||||||
|
|
||||||
In the case shown above, the pod "frontend" fails to be scheduled due to insufficient
|
|
||||||
CPU resource on the node. Similar error messages can also suggest failure due to insufficient
|
|
||||||
memory (PodExceedsFreeMemory). In general, if a pod or pods are pending with this message and
|
|
||||||
alike, then there are several things to try:
|
|
||||||
|
|
||||||
- Add more nodes to the cluster.
|
|
||||||
- Terminate unneeded pods to make room for pending pods.
|
|
||||||
- Check that the pod is not larger than all the nodes. For example, if all the nodes
|
|
||||||
have a capacity of `cpu: 1`, then a pod with a limit of `cpu: 1.1` will never be scheduled.
|
|
||||||
|
|
||||||
You can check node capacities and amounts allocated with the `kubectl describe nodes` command.
|
|
||||||
For example:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
$ kubectl describe nodes gke-cluster-4-386701dd-node-ww4p
|
|
||||||
Name: gke-cluster-4-386701dd-node-ww4p
|
|
||||||
[ ... lines removed for clarity ...]
|
|
||||||
Capacity:
|
|
||||||
cpu: 1
|
|
||||||
memory: 464Mi
|
|
||||||
pods: 40
|
|
||||||
Allocated resources (total requests):
|
|
||||||
cpu: 910m
|
|
||||||
memory: 2370Mi
|
|
||||||
pods: 4
|
|
||||||
[ ... lines removed for clarity ...]
|
|
||||||
Pods: (4 in total)
|
|
||||||
Namespace Name CPU(milliCPU) Memory(bytes)
|
|
||||||
frontend webserver-ffj8j 500 (50% of total) 2097152000 (50% of total)
|
|
||||||
kube-system fluentd-cloud-logging-gke-cluster-4-386701dd-node-ww4p 100 (10% of total) 209715200 (5% of total)
|
|
||||||
kube-system kube-dns-v8-qopgw 310 (31% of total) 178257920 (4% of total)
|
|
||||||
TotalResourceLimits:
|
|
||||||
CPU(milliCPU): 910 (91% of total)
|
|
||||||
Memory(bytes): 2485125120 (59% of total)
|
|
||||||
[ ... lines removed for clarity ...]
|
|
||||||
```
|
|
||||||
|
|
||||||
Here you can see from the `Allocated resources` section that that a pod which ask for more than
|
|
||||||
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
|
|
||||||
|
|
||||||
Looking at the `Pods` section, you can see which pods are taking up space on the node.
|
|
||||||
|
|
||||||
The [resource quota](/docs/admin/resourcequota/) feature can be configured
|
|
||||||
to limit the total amount of resources that can be consumed. If used in conjunction
|
|
||||||
with namespaces, it can prevent one team from hogging all the resources.
|
|
||||||
|
|
||||||
### My container is terminated
|
|
||||||
|
|
||||||
Your container may be terminated because it's resource-starved. To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod`
|
|
||||||
on the pod you are interested in:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
|
|
||||||
Name: simmemleak-hra99
|
|
||||||
Namespace: default
|
|
||||||
Image(s): saadali/simmemleak
|
|
||||||
Node: kubernetes-node-tf0f/10.240.216.66
|
|
||||||
Labels: name=simmemleak
|
|
||||||
Status: Running
|
|
||||||
Reason:
|
|
||||||
Message:
|
|
||||||
IP: 10.244.2.75
|
|
||||||
Replication Controllers: simmemleak (1/1 replicas created)
|
|
||||||
Containers:
|
|
||||||
simmemleak:
|
|
||||||
Image: saadali/simmemleak
|
|
||||||
Limits:
|
|
||||||
cpu: 100m
|
|
||||||
memory: 50Mi
|
|
||||||
State: Running
|
|
||||||
Started: Tue, 07 Jul 2015 12:54:41 -0700
|
|
||||||
Last Termination State: Terminated
|
|
||||||
Exit Code: 1
|
|
||||||
Started: Fri, 07 Jul 2015 12:54:30 -0700
|
|
||||||
Finished: Fri, 07 Jul 2015 12:54:33 -0700
|
|
||||||
Ready: False
|
|
||||||
Restart Count: 5
|
|
||||||
Conditions:
|
|
||||||
Type Status
|
|
||||||
Ready False
|
|
||||||
Events:
|
|
||||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
|
||||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
|
|
||||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
|
|
||||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
|
|
||||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
|
|
||||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
|
|
||||||
```
|
|
||||||
|
|
||||||
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
|
|
||||||
|
|
||||||
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
|
|
||||||
|
|
||||||
```shell{% raw %}
|
|
||||||
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
|
|
||||||
Container Name: simmemleak
|
|
||||||
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
|
|
||||||
```
|
|
||||||
|
|
||||||
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
|
|
||||||
|
|
||||||
## Opaque Integer Resources (Alpha Feature)
|
|
||||||
|
|
||||||
Kubernetes version 1.5 introduces Opaque integer resources. Opaque
|
|
||||||
integer resources allow cluster operators to advertise new node-level
|
|
||||||
resources that would be otherwise unknown to the system.
|
|
||||||
|
|
||||||
Users can consume these resources in pod specs just like CPU and memory.
|
|
||||||
The scheduler takes care of the resource accounting so that no more than the
|
|
||||||
available amount is simultaneously allocated to pods.
|
|
||||||
|
|
||||||
**Note:** Opaque integer resources are Alpha in Kubernetes version 1.5.
|
|
||||||
Only resource accounting is implemented; node-level isolation is still
|
|
||||||
under active development.
|
|
||||||
|
|
||||||
Opaque integer resources are resources that begin with the prefix
|
|
||||||
`pod.alpha.kubernetes.io/opaque-int-resource-`. The API server
|
|
||||||
restricts quantities of these resources to whole numbers. Examples of
|
|
||||||
_valid_ quantities are `3`, `3000m` and `3Ki`. Examples of _invalid_
|
|
||||||
quantities are `0.5` and `1500m`.
|
|
||||||
|
|
||||||
There are two steps required to use opaque integer resources. First, the
cluster operator must advertise a per-node opaque resource on one or more
nodes. Second, users must request the opaque resource in pods.

To advertise a new opaque integer resource, the cluster operator should
submit a `PATCH` HTTP request to the API server to specify the available
quantity in the `status.capacity` for a node in the cluster. After this
operation, the node's `status.capacity` will include a new resource. The
`status.allocatable` field is updated asynchronously by the Kubelet to
include the new resource. Note that since the scheduler uses the
node's `status.allocatable` value when evaluating pod fitness, there may
be a short delay between patching the node capacity with a new resource and the
first pod that requests the resource being scheduled on that node.

**Example:**

The HTTP request below advertises 5 "foo" resources on node `k8s-node-1`.

_NOTE: `~1` is the encoding for the character `/` in the patch path.
The operation path value in JSON-Patch is interpreted as a JSON-Pointer.
For more details, please refer to
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3)._

```http
PATCH /api/v1/nodes/k8s-node-1/status HTTP/1.1
Accept: application/json
Content-Type: application/json-patch+json
Host: k8s-master:8080

[
  {
    "op": "add",
    "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo",
    "value": "5"
  }
]
```
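
For operators who prefer the command line, here is a minimal sketch of sending the same patch with `curl`. The `k8s-master:8080` endpoint matches the request above and is assumed to be an unauthenticated API server; substitute your own address and add authentication as required:

```shell
# Sends the JSON-Patch shown above to the node's status subresource.
curl --request PATCH \
  --header "Content-Type: application/json-patch+json" \
  --data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
  http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```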

To consume opaque resources in pods, include the name of the opaque
resource as a key in the `spec.containers[].resources.requests` map.

The pod will be scheduled only if all of the resource requests are
satisfied (including cpu, memory and any opaque resources). The pod will
remain in the `PENDING` state while the resource request cannot be met by any
node.

**Example:**

The pod below requests 2 cpus and 1 "foo" (an opaque resource).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 2
        pod.alpha.kubernetes.io/opaque-int-resource-foo: 1
```

## Planned Improvements

The current system only allows resource quantities to be specified on a container.
It is planned to improve accounting for resources that are shared by all containers in a pod,
such as [EmptyDir volumes](/docs/user-guide/volumes/#emptydir).

The current system only supports container requests and limits for CPU and memory.
It is planned to add new resource types, including a node disk space
resource, and a framework for adding custom [resource types](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/resources.md).

Kubernetes supports overcommitment of resources by supporting multiple levels of [Quality of Service](http://issue.k8s.io/168).

Currently, one unit of CPU means different things on different cloud providers, and on different
machine types within the same cloud provider. For example, on AWS, the capacity of a node
is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more consistency
across providers and platforms.

@@ -291,34 +291,6 @@ SPECIAL_LEVEL_KEY=very
SPECIAL_TYPE_KEY=charm
```

#### Optional ConfigMap in environment variables

There might be situations where environment variables are not
always required. These environment variables can be marked as optional in a
pod like so:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: a-config
          key: akey
          optional: true
  restartPolicy: Never
```

When this pod is run, the output will be empty.
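
A quick way to verify that behavior, assuming the manifest above is saved as `pod.yaml` and no ConfigMap named `a-config` exists in the cluster:

```shell
# The pod starts despite the missing ConfigMap; the optional
# variable is simply absent from the environment dump.
kubectl create -f pod.yaml
kubectl logs dapi-test-pod | grep SPECIAL_LEVEL_KEY || echo "SPECIAL_LEVEL_KEY not set"
```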

### Use-Case: Set command-line arguments with ConfigMap

ConfigMaps can also be used to set the value of the command or arguments in a container. This is

@@ -450,38 +422,6 @@ very

You can project keys to specific paths and specific permissions on a per-file
basis. The [Secrets](/docs/user-guide/secrets/) user guide explains the syntax.
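
As a rough sketch of that syntax (the ConfigMap name and key here are hypothetical), a volume can list `items` that map a key onto a chosen relative path under the mount point:

```shell
# Projects only the "app.properties" key, to a custom relative path.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projection-demo
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "ls -lR /etc/config" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
      items:
      - key: app.properties
        path: subdir/app.properties
  restartPolicy: Never
EOF
```

The per-file permission syntax is covered in the Secrets guide linked above.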

#### Optional ConfigMap via volume plugin

Volumes and files provided by a ConfigMap can also be marked as optional.
The ConfigMap or the key specified does not have to exist. The mount path for
such items will always be created.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: no-config
      optional: true
  restartPolicy: Never
```

When this pod is run, the output will be empty.

## Real World Example: Configuring Redis

Let's take a look at a real-world example: configuring redis using ConfigMap. Say we want to inject

@@ -577,10 +517,9 @@ $ kubectl exec -it redis redis-cli

## Restrictions

ConfigMaps must be created before they are consumed in pods unless they are
marked as optional. Controllers may be written to tolerate missing
configuration data; consult individual components configured via ConfigMap on
a case-by-case basis.

ConfigMaps reside in a namespace. They can only be referenced by pods in the same namespace.

@@ -590,3 +529,4 @@ Kubelet only supports use of ConfigMap for pods it gets from the API server. Th
created using kubectl, or indirectly via a replication controller. It does not include pods created
via the Kubelet's `--manifest-url` flag, its `--config` flag, or its REST API (these are not common
ways to create pods).

@@ -62,8 +62,8 @@ This indicates that the Deployment has created all three replicas, and all repli

```shell
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-2035384211   3         3         0       18s
```

You may notice that the name of the Replica Set is always `<the name of the Deployment>-<hash value of the pod template>`.

@@ -180,9 +180,9 @@ We can run `kubectl get rs` to see that the Deployment updated the Pods by creat

```shell
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   3         3         0       6s
nginx-deployment-2035384211   0         0         0       36s
```

Running `get pods` should now show only the new Pods:

@@ -287,10 +287,10 @@ You will also see that both the number of old replicas (nginx-deployment-1564180

```shell
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   2         2         0       25s
nginx-deployment-2035384211   0         0         0       36s
nginx-deployment-3066724191   2         2         2       6s
```

Looking at the Pods created, you will see that the 2 Pods created by the new Replica Set are stuck in an image pull loop.

@@ -514,10 +514,10 @@ The Deployment was still in progress when we paused it, so the actions of scalin

```shell
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   2         2         2       1h
nginx-deployment-2035384211   2         2         0       1h
nginx-deployment-3066724191   0         0         0       1h
```

In a separate terminal, watch for rollout status changes and you'll see the rollout won't continue:

@@ -546,10 +546,10 @@ deployment nginx-deployment successfully rolled out

```shell
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   3         3         3       1h
nginx-deployment-2035384211   0         0         0       1h
nginx-deployment-3066724191   0         0         0       1h
```

Note: You cannot roll back a paused Deployment until you resume it.

@@ -578,6 +578,7 @@ Kubernetes marks a Deployment as _complete_ when it has the following characteri
  equals or exceeds the number required by the Deployment strategy.
* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
  updates you've requested have been completed.
* No old pods for the Deployment are running.

You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed successfully, `kubectl rollout status` returns a zero exit code.
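
For example, a minimal sketch of gating a script or CI step on that exit code (the Deployment name here is assumed):

```shell
# Blocks until the rollout finishes, then branches on the result.
if kubectl rollout status deployment/nginx-deployment; then
  echo "rollout succeeded"
else
  echo "rollout failed or timed out" >&2
  exit 1
fi
```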

@@ -615,7 +616,7 @@ the Deployment's `status.conditions`:
* Status=False
* Reason=ProgressDeadlineExceeded

See the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#typical-status-properties) for more information on status conditions.

Note that in version 1.5, Kubernetes will take no action on a stalled Deployment other than to report a status condition with
`Reason=ProgressDeadlineExceeded`.

@@ -725,7 +726,7 @@ As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, a
`metadata` fields. For general information about working with config files,
see [deploying applications](/docs/user-guide/deploying-applications), [configuring containers](/docs/user-guide/configuring-containers), and [using kubectl to manage resources](/docs/user-guide/working-with-resources) documents.

A Deployment also needs a [`.spec` section](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status).

### Pod Template

@@ -5,137 +5,6 @@ assignees:
title: Using the Downward API to Convey Pod Properties
---

{% include user-guide-content-moved.md %}

[Exposing Pod Information to Containers Using a DownwardAPIVolumeFile](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)

It is sometimes useful for a container to have information about itself, but we
want to be careful not to over-couple containers to Kubernetes. The downward
API allows containers to consume information about themselves or the system and
expose that information how they want it, without necessarily coupling to the
Kubernetes client or REST API.

An example of this is a "legacy" app that is already written assuming
that a particular environment variable will hold a unique identifier. While it
is often possible to "wrap" such applications, this is tedious and error prone,
and violates the goal of low coupling. Instead, the user should be able to use
the Pod's name, for example, and inject it into this well-known variable.

## Capabilities

The following information is available to a `Pod` through the downward API:

* The node's name
* The pod's name
* The pod's namespace
* The pod's IP
* The pod's service account name
* A container's cpu limit
* A container's cpu request
* A container's memory limit
* A container's memory request

More information will be exposed through this same API over time.

## Exposing pod information into a container

Containers consume information from the downward API using environment
variables or using a volume plugin.

## Environment variables

Most environment variables in the Kubernetes API use the `value` field to carry
simple values. However, the alternate `valueFrom` field allows you to specify
a `fieldRef` to select fields from the pod's definition, and a `resourceFieldRef`
to select fields from the definition of one of its containers.

The `fieldRef` field is a structure that has an `apiVersion` field and a `fieldPath`
field. The `fieldPath` field is an expression designating a field of the pod. The
`apiVersion` field is the version of the API schema that the `fieldPath` is
written in terms of. If the `apiVersion` field is not specified, it
defaults to the API version of the enclosing object.

The `fieldRef` is evaluated and the resulting value is used as the value for
the environment variable. This allows users to publish their pod's name in any
environment variable they want.

The `resourceFieldRef` is a structure that has a `containerName` field, a `resource`
field, and a `divisor` field. The `containerName` is the name of the container
whose resource (cpu or memory) information is to be exposed. The `containerName` is
optional for environment variables and defaults to the current container. The
`resource` field is an expression designating a resource in a container, and the `divisor`
field specifies an output format of the resource being exposed. If the `divisor`
is not specified, it defaults to "1" for cpu and memory. The table shows possible
values for cpu and memory resources for `resource` and `divisor` settings:

| Setting  | Cpu | Memory |
| -------- | --- | ------ |
| resource | limits.cpu, requests.cpu | limits.memory, requests.memory |
| divisor  | 1 (cores), 1m (millicores) | 1 (bytes), 1k (kilobytes), 1M (megabytes), 1G (gigabytes), 1T (terabytes), 1P (petabytes), 1E (exabytes), 1Ki (kibibyte), 1Mi (mebibyte), 1Gi (gibibyte), 1Ti (tebibyte), 1Pi (pebibyte), 1Ei (exbibyte) |
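
To make the two selector types concrete before the full examples below, here is a minimal sketch (pod name, container name, and image are illustrative only) that publishes the pod's name via `fieldRef` and the container's own memory limit, in megabytes, via `resourceFieldRef`:

```shell
# containerName is omitted in resourceFieldRef, so it defaults to this container.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  containers:
  - name: main
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env | grep MY_" ]
    resources:
      limits:
        memory: 64Mi
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_MEM_LIMIT_MB
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
          divisor: 1M
  restartPolicy: Never
EOF
```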

### Example

This is an example of a pod that consumes its name and namespace via the
downward API:

{% include code.html language="yaml" file="dapi-pod.yaml" ghlink="/docs/user-guide/downward-api/dapi-pod.yaml" %}

This is an example of a pod that consumes its container's resources via the downward API:

{% include code.html language="yaml" file="dapi-container-resources.yaml" ghlink="/docs/user-guide/downward-api/dapi-container-resources.yaml" %}

## Downward API volume

Using a similar syntax, it's possible to expose pod information to containers using plain text files.
Downward API data is dumped to a mounted volume. This is achieved using a `downwardAPI`
volume type, and the different items represent the files to be created. `fieldPath` references the field to be exposed.
For exposing a container's resource limits and requests, `containerName` must be specified with `resourceFieldRef`.

The downward API volume permits storing more complex data like [`metadata.labels`](/docs/user-guide/labels) and [`metadata.annotations`](/docs/user-guide/annotations). Currently, key/value-pair fields are saved using the `key="value"` format:

```conf
key1="value1"
key2="value2"
```

In the future, it will be possible to specify an output format option.

Downward API volumes can expose:

* The node's name
* The pod's name
* The pod's namespace
* The pod's labels
* The pod's annotations
* The pod's service account name
* A container's cpu limit
* A container's cpu request
* A container's memory limit
* A container's memory request

The downward API volume refreshes its data in step with the kubelet refresh loop. When labels become modifiable on the fly without respawning the pod, containers will be able to detect changes through mechanisms such as [inotify](https://en.wikipedia.org/wiki/Inotify).

In the future, it will be possible to specify a specific annotation or label.

#### Projecting keys to specific paths and file permissions

You can project keys to specific paths and specific permissions on a per-file
basis. The [Secrets](/docs/user-guide/secrets/) user guide explains the syntax.

### Example

This is an example of a pod that consumes its labels and annotations via the downward API volume; labels and annotations are dumped into `/etc/labels` and `/etc/annotations`, respectively:

{% include code.html language="yaml" file="volume/dapi-volume.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume.yaml" %}

This is an example of a pod that consumes its container's resources via the downward API volume.

{% include code.html language="yaml" file="volume/dapi-volume-resources.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml" %}

For a more thorough example, see
[environment variables](/docs/user-guide/environment-guide/).

## Default values for container resource limits

If cpu and memory limits are not specified for a container, the downward API will default to the node allocatable value for cpu and memory.

@@ -2,118 +2,6 @@
title: Downward API Volumes
---

{% include user-guide-content-moved.md %}

[Exposing Pod Information to Containers Using a DownwardAPIVolumeFile](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)

Following this example, you will create a pod with a downward API volume.
A downward API volume is a k8s volume plugin with the ability to save some pod information in a plain text file. The pod information can be, for example, some [metadata](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#metadata) or a container's [resources](/docs/user-guide/compute-resources).

Supported metadata fields:

1. `metadata.annotations`
2. `metadata.namespace`
3. `metadata.name`
4. `metadata.labels`

Supported container resources:

1. `limits.cpu`
2. `limits.memory`
3. `requests.cpu`
4. `requests.memory`

### Step Zero: Prerequisites

This example assumes you have a Kubernetes cluster installed and running, and the `kubectl` command line tool somewhere in your path. Please see the [getting started guides](/docs/getting-started-guides/) for installation instructions for your platform.

### Step One: Create the pod

Use the [dapi-volume.yaml](/docs/user-guide/downward-api/volume/dapi-volume.yaml) file to create a Pod with a downward API volume which stores pod labels and pod annotations to `/etc/labels` and `/etc/annotations`, respectively.

```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
```

### Step Two: Examine pod/container output

The pod displays (every 5 seconds) the content of the dump files, which can be viewed with the usual `kubectl logs` command:

```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"
build="two"
builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
```

### Internals

In the pod's `/etc` directory one may find the files created by the plugin (system files elided):

```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
/etc:
total 4
drwxrwxrwt    3 0        0              120 Jun  1 19:55 .
drwxr-xr-x   17 0        0             4096 Jun  1 19:55 ..
drwxr-xr-x    2 0        0               80 Jun  1 19:55 ..6986_01_06_15_55_10.473583074
lrwxrwxrwx    1 0        0               31 Jun  1 19:55 ..data -> ..6986_01_06_15_55_10.473583074
lrwxrwxrwx    1 0        0               18 Jun  1 19:55 annotations -> ..data/annotations
lrwxrwxrwx    1 0        0               13 Jun  1 19:55 labels -> ..data/labels

/etc/..6986_01_06_15_55_10.473583074:
total 8
drwxr-xr-x    2 0        0               80 Jun  1 19:55 .
drwxrwxrwt    3 0        0              120 Jun  1 19:55 ..
-rw-r--r--    1 0        0              129 Jun  1 19:55 annotations
-rw-r--r--    1 0        0               59 Jun  1 19:55 labels
/ #
```

The file `labels` is stored in a temporary directory (`..6986_01_06_15_55_10.473583074` in the example above) which is symlinked to by `..data`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..data` indirection. This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..data` symlink is updated atomically using `rename(2)`.
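
The same technique can be sketched by hand in a scratch directory. Paths below are illustrative, and GNU `mv -T` is assumed so the final step is a single `rename(2)`:

```shell
# Write the new payload into a fresh timestamped directory.
mkdir ..2016_02_01_15_04_05.473583074
echo 'zone="us-est-coast"' > ..2016_02_01_15_04_05.473583074/labels

# Point a temporary symlink at it, then rename it over ..data.
# Readers following labels -> ..data/labels never see a partial update.
ln -s ..2016_02_01_15_04_05.473583074 ..data.tmp
mv -T ..data.tmp ..data
```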

## Example of downward API volume with container resources

Use the `docs/user-guide/downward-api/volume/dapi-volume-resources.yaml` file to create a Pod with a downward API volume which stores its container's limits and requests in `/etc`.

```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume-resources.yaml
```

### Examine pod/container output

In the pod's `/etc` directory one may find the files created by the plugin:

```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -alR /etc
/etc:
total 4
drwxrwxrwt    3 0        0              160 Jun  1 19:47 .
drwxr-xr-x   17 0        0             4096 Jun  1 19:48 ..
drwxr-xr-x    2 0        0              120 Jun  1 19:47 ..6986_01_06_15_47_23.076909525
lrwxrwxrwx    1 0        0               31 Jun  1 19:47 ..data -> ..6986_01_06_15_47_23.076909525
lrwxrwxrwx    1 0        0               16 Jun  1 19:47 cpu_limit -> ..data/cpu_limit
lrwxrwxrwx    1 0        0               18 Jun  1 19:47 cpu_request -> ..data/cpu_request
lrwxrwxrwx    1 0        0               16 Jun  1 19:47 mem_limit -> ..data/mem_limit
lrwxrwxrwx    1 0        0               18 Jun  1 19:47 mem_request -> ..data/mem_request

/etc/..6986_01_06_15_47_23.076909525:
total 16
drwxr-xr-x    2 0        0              120 Jun  1 19:47 .
drwxrwxrwt    3 0        0              160 Jun  1 19:47 ..
-rw-r--r--    1 0        0                1 Jun  1 19:47 cpu_limit
-rw-r--r--    1 0        0                1 Jun  1 19:47 cpu_request
-rw-r--r--    1 0        0                8 Jun  1 19:47 mem_limit
-rw-r--r--    1 0        0                8 Jun  1 19:47 mem_request

/ # cat /etc/cpu_limit
1
/ # cat /etc/mem_limit
67108864
/ # cat /etc/cpu_request
1
/ # cat /etc/mem_request
33554432
```

@@ -4,35 +4,6 @@ assignees:
title: Garbage Collection (Beta)
---

{% include user-guide-content-moved.md %}

[Garbage Collection](/docs/concepts/abstractions/controllers/garbage-collection/)

* TOC
{:toc}

## Garbage Collection

Note: Garbage Collection is a beta feature and is enabled by default in Kubernetes version 1.4.

### What does Garbage Collector do

When you delete, for example, a ReplicaSet, it is often desirable for the server to automatically garbage collect all the Pods that the ReplicaSet creates. The Garbage Collector (GC) implements this. In general, when you delete an owner object, GC deletes that owner's dependent objects.

### How to establish an owner-dependent relationship between objects

Kubernetes 1.3 added a `metadata.ownerReferences` field to every Kubernetes API object. If an API object is a dependent of another object, its `ownerReferences` should point to the owning API object.

When you create a ReplicationController or a ReplicaSet in Kubernetes 1.4, the Kubernetes control plane automatically sets the `ownerReferences` field in each created pod to point to the owning ReplicationController or ReplicaSet.

You can set up owner-dependent relationships among other objects by manually setting the `ownerReferences` field on dependent objects.
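
A rough sketch of doing that by hand (the ReplicaSet and ConfigMap names here are hypothetical): look up the owner's UID and embed it in the dependent's metadata.

```shell
# The owner's uid must be read from the API server, never guessed.
OWNER_UID=$(kubectl get rs my-replicaset -o jsonpath='{.metadata.uid}')

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dependent-config
  ownerReferences:
  - apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: my-replicaset
    uid: ${OWNER_UID}
data:
  note: "garbage collected when my-replicaset is deleted"
EOF
```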

### Controlling whether Garbage Collector deletes dependents

When deleting an object, you can request the GC to ***asynchronously*** delete its dependents by ***explicitly*** specifying `deleteOptions.orphanDependents=false` in the deletion request that you send to the API server. A 200 OK response from the API server indicates the owner is deleted.

In Kubernetes version 1.5, synchronous garbage collection is under active development. See the tracking [issue](https://github.com/kubernetes/kubernetes/issues/29891) for more details.

If you specify `deleteOptions.orphanDependents=true`, or leave it blank, then the GC will first reset the `ownerReferences` in the dependents, then delete the owner. Note that the deletion of the owner object is asynchronous, that is, a 200 OK response will be sent by the API server before the owner object gets deleted.
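
A minimal sketch of a deletion request that sets `deleteOptions.orphanDependents=false`, assuming `kubectl proxy` is serving the API on `localhost:8001` and a ReplicaSet named `my-repset` exists in the `default` namespace:

```shell
# Delete the owner and ask the GC to delete (not orphan) its dependents.
curl -X DELETE localhost:8001/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "orphanDependents": false}' \
  -H "Content-Type: application/json"
```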

### Other references

[Design Doc](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/garbage-collection.md)

[Known issues](https://github.com/kubernetes/kubernetes/issues/26120)

@@ -5,70 +5,6 @@ assignees:
title: Running Commands in a Container with kubectl exec
---

{% include user-guide-content-moved.md %}

[Getting a Shell to a Running Container](/docs/tasks/kubectl/get-shell-running-container/)

Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases.

## Using kubectl exec to check the environment variables of a container

Kubernetes exposes [services](/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.

We first create a pod and a service,

```shell
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```

wait until the pod is Running and Ready,

```shell
$ kubectl get pod
NAME                 READY     REASON    RESTARTS   AGE
redis-master-ft9ex   1/1       Running   0          12s
```

then we can check the environment variables of the pod,

```shell
$ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
```

We can use these environment variables in applications to find the service.

## Using kubectl exec to check the mounted volumes

It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis,

```shell
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```

wait until the pod is Running and Ready,

```shell
$ kubectl get pods
NAME      READY     REASON    RESTARTS   AGE
storage   1/1       Running   0          1m
```

we then use `kubectl exec` to verify that the volume is mounted at /data/redis,

```shell
$ kubectl exec storage ls /data
redis
```

## Using kubectl exec to open a bash terminal in a pod

Finally, opening a terminal in a pod is the most direct way to introspect it. Assuming the pod `storage` is still running, run

```shell
$ kubectl exec -ti storage -- bash
root@storage:/data#
```

This gets you a terminal.

@@ -23,7 +23,7 @@ heapster monitoring will be turned-on by default).
## Step One: Run & expose php-apache server

To demonstrate Horizontal Pod Autoscaler we will use a custom docker image based on the php-apache image.
The Dockerfile can be found [here](/docs/user-guide/horizontal-pod-autoscaling/image/Dockerfile).
It defines an [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU intensive computations.

First, we will start a deployment running the image and expose it as a service:

@@ -54,7 +54,7 @@ Before running examples in the user guides, please ensure you have completed the
: A service defines a set of pods and a means by which to access them, such as a single stable IP address and corresponding DNS name.

[**Volume**](/docs/user-guide/volumes/)
: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/), adding provisioning of the volume directory and/or device.

[**Secret**](/docs/user-guide/secrets/)
: A secret stores sensitive data, such as authentication tokens, which can be made available to containers upon request.

@@ -44,9 +44,9 @@ It can be configured to give services externally-reachable urls, load balance tr
Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress; simply creating the resource will have no effect.

GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://github.com/kubernetes/ingress/tree/master/controllers/nginx#running-multiple-ingress-controllers) and [here](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).

Make sure you review the [beta limitations](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://github.com/kubernetes/ingress/tree/master/controllers) as a pod.

## The Ingress Resource

@@ -71,7 +71,7 @@ spec:
__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/docs/user-guide/deploying-applications), [here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).

__Lines 5-7__: Ingress [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.

__Lines 8-9__: Each http rule contains the following information: a host (e.g.: foo.bar.com, defaults to * in this example), a list of paths (e.g.: /testpath), each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.

@@ -66,7 +66,7 @@ To view completed pods of a job, use `kubectl get pods --show-all`. The `--show
To list all the pods that belong to a job in a machine readable form, you can use a command like this:

```shell
$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
```

@@ -120,7 +120,7 @@ There are three main types of jobs:
  - the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`.
  - **not implemented yet:** each pod is passed a different index in the range 1 to `.spec.completions`.
1. Parallel Jobs with a *work queue* (see the sketch after this list):
  - do not specify `.spec.completions`; it defaults to `.spec.parallelism`
  - the pods must coordinate with themselves or an external service to determine what each should work on
  - each pod is independently capable of determining whether or not all its peers are done, and thus the entire Job is done.
  - when _any_ pod terminates with success, no new pods are created.
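
A minimal sketch of a work-queue style Job (the image and the queue logic are placeholders; a real worker would pull tasks from an external queue and exit 0 when the queue is empty):

```shell
cat <<EOF | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: work-queue-demo
spec:
  parallelism: 3          # .spec.completions is deliberately unset
  template:
    metadata:
      name: work-queue-demo
    spec:
      containers:
      - name: worker
        image: gcr.io/google_containers/busybox
        # Placeholder for "pull tasks until the queue is empty, then exit 0".
        command: [ "/bin/sh", "-c", "echo processing queue && exit 0" ]
      restartPolicy: OnFailure
EOF
```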

@@ -305,7 +305,7 @@ $ kubectl config use-context federal-context
### Final notes for tying it all together

So, tying this all together, a quick start to create your own kubeconfig file:

- Take a good look and understand how your api-server is being launched: You need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.

@@ -197,11 +197,12 @@ $ kubectl -n my-ns delete po,svc --all # Delete all pods and servic

```console
$ kubectl logs my-pod                              # dump pod logs (stdout)
$ kubectl logs my-pod -c my-container              # dump pod container logs (stdout, multi-container case)
$ kubectl logs -f my-pod                           # stream pod logs (stdout)
$ kubectl logs -f my-pod -c my-container           # stream pod container logs (stdout, multi-container case)
$ kubectl run -i --tty busybox --image=busybox -- sh  # Run pod as interactive shell
$ kubectl attach my-pod -i                         # Attach to Running Container
$ kubectl port-forward my-pod 5000:6000            # Forward port 6000 of Pod to port 5000 on your local machine
$ kubectl exec my-pod -- ls /                      # Run command in existing pod (1 container case)
$ kubectl exec my-pod -c my-container -- ls /      # Run command in existing pod (multi-container case)
$ kubectl top pod POD_NAME --containers            # Show metrics for a given pod and its containers
```

@@ -242,7 +243,7 @@ Resource type | Abbreviated alias
`namespaces` |`ns`
`networkpolicies` |
`nodes` |`no`
`statefulsets` |
`persistentvolumeclaims` |`pvc`
`persistentvolumes` |`pv`
`pods` |`po`

@@ -32,7 +32,7 @@ kubectl apply -f FILENAME
# Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx.
kubectl apply --prune -f manifest.yaml -l app=nginx

# Apply the configuration in manifest.yaml and delete all the other configmaps with the same label key that are not in the file.
kubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap
```

@@ -13,7 +13,7 @@ Perform a rolling update of the given ReplicationController.
Replaces the specified replication controller with a new replication controller by updating one pod at a time to use the new PodTemplate. The new-controller.json must specify the same namespace as the existing replication controller and overwrite at least one (common) label in its replicaSelector.

![kubectl_rollingupdate](http://kubernetes.io/images/docs/kubectl_rollingupdate.svg)

```
kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC)
```

@@ -172,23 +172,23 @@ In the CLI, the access modes are abbreviated to:

| Volume Plugin        | ReadWriteOnce | ReadOnlyMany | ReadWriteMany |
| :---                 |     :---:     |    :---:     |     :---:     |
| AWSElasticBlockStore |       ✓       |      -       |       -       |
| AzureFile            |       ✓       |      ✓       |       ✓       |
| AzureDisk            |       ✓       |      -       |       -       |
| CephFS               |       ✓       |      ✓       |       ✓       |
| Cinder               |       ✓       |      -       |       -       |
| FC                   |       ✓       |      ✓       |       -       |
| FlexVolume           |       ✓       |      ✓       |       -       |
| Flocker              |       ✓       |      -       |       -       |
| GCEPersistentDisk    |       ✓       |      ✓       |       -       |
| Glusterfs            |       ✓       |      ✓       |       ✓       |
| HostPath             |       ✓       |      -       |       -       |
| iSCSI                |       ✓       |      ✓       |       -       |
| PhotonPersistentDisk |       ✓       |      -       |       -       |
| Quobyte              |       ✓       |      ✓       |       ✓       |
| NFS                  |       ✓       |      ✓       |       ✓       |
| RBD                  |       ✓       |      ✓       |       -       |
| VsphereVolume        |       ✓       |      -       |       -       |

### Class

@@ -396,7 +396,7 @@ parameters:
  zone: us-central1-a
```

* `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`
* `zone`: GCE zone. If not specified, a random zone in the same region as controller-manager will be chosen.

#### Glusterfs

@@ -421,7 +421,7 @@ parameters:
* `restauthenabled` : Gluster REST service authentication boolean that enables authentication to the REST server. If this value is 'true', `restuser` and `restuserkey` or `secretNamespace` + `secretName` have to be filled. This option is deprecated; authentication is enabled when any of `restuser`, `restuserkey`, `secretName` or `secretNamespace` is specified.
* `restuser` : Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.
* `restuserkey` : Gluster REST service/Heketi user's password which will be used for authentication to the REST server. This parameter is deprecated in favor of `secretNamespace` + `secretName`.
* `secretNamespace` + `secretName` : Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password will be used when both `secretNamespace` and `secretName` are omitted. The provided secret must have type "kubernetes.io/glusterfs", e.g. created in this way:
```
$ kubectl create secret generic heketi-secret --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' --namespace=default
```

@@ -507,7 +507,7 @@ parameters:
* `quobyteAPIServer`: API Server of Quobyte in the format `http(s)://api-server:7860`
* `registry`: Quobyte registry to use to mount the volume. You can specify the registry as a ``<host>:<port>`` pair, or if you want to specify multiple registries you just have to put a comma between them, e.g. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``. The host can be an IP address, or if you have a working DNS you can also provide the DNS names.
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
* `adminSecretName`: secret that holds information about the Quobyte user and the password to authenticate against the API server. The provided secret must have type "kubernetes.io/quobyte", e.g. created in this way:
```
$ kubectl create secret generic quobyte-admin-secret --type="kubernetes.io/quobyte" --from-literal=key='opensesame' --namespace=kube-system
```

@@ -163,5 +163,5 @@ following

## Working With RBAC

In Kubernetes 1.5 and newer, you can use PodSecurityPolicy to control access to privileged containers based on user role and groups
(see [more details](https://github.com/kubernetes/kubernetes/blob/master/examples/podsecuritypolicy/rbac/README.md)).

@@ -4,168 +4,6 @@ assignees:
title: The Lifecycle of a Pod
---

{% include user-guide-content-moved.md %}

Updated: 4/14/2015

This document covers the lifecycle of a pod. It is not an exhaustive document, but an introduction to the topic.

## Pod Phase

Consistent with the overall [API convention](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#typical-status-properties), phase is a simple, high-level summary of where the pod is in its lifecycle. It is not intended to be a comprehensive rollup of observations of container-level or even pod-level conditions or other state, nor is it intended to be a comprehensive state machine.

The number and meanings of `PodPhase` values are tightly guarded. Other than what is documented here, nothing should be assumed about pods with a given `PodPhase`.

* Pending: The pod has been accepted by the system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
* Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
* Succeeded: All containers in the pod have terminated in success, and will not be restarted.
* Failed: All containers in the pod have terminated, and at least one container has terminated in failure (exited with a non-zero exit status or was terminated by the system).
* Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.

## Pod Conditions
|
|
||||||
|
|
||||||
A pod containing containers that specify readiness probes will also report the Ready condition. Condition status values may be `True`, `False`, or `Unknown`.
|
|
||||||
|
|
||||||
## Container Probes
|
|
||||||
|
|
||||||
A [Probe](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Probe) is a diagnostic performed periodically by the kubelet on a container. Specifically, the diagnostic is one of three [Handlers](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler):

* `ExecAction`: executes a specified command inside the container. The diagnostic succeeds if the command exits with status code 0.
* `TCPSocketAction`: performs a TCP check against the container's IP address on a specified port. The diagnostic succeeds if the port is open.
* `HTTPGetAction`: performs an HTTP GET request against the container's IP address on a specified port and path. The diagnostic succeeds if the response has a status code greater than or equal to 200 and less than 400.

Each probe has one of three results:

* `Success`: indicates that the container passed the diagnostic.
* `Failure`: indicates that the container failed the diagnostic.
* `Unknown`: indicates that the diagnostic itself failed, so no action should be taken.
The kubelet can optionally perform and react to two kinds of probes on running containers:

* `LivenessProbe`: indicates whether the container is *live*, i.e. running. If the LivenessProbe fails, the kubelet will kill the container and the container will be subjected to its [RestartPolicy](#restartpolicy). The default state of Liveness before the initial delay is `Success`. The state of Liveness for a container when no probe is provided is assumed to be `Success`.
* `ReadinessProbe`: indicates whether the container is *ready* to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod's IP address from the endpoints of all services that match the pod. The default state of Readiness before the initial delay is `Failure`. The state of Readiness for a container when no probe is provided is assumed to be `Success`.
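
To make the two kinds concrete, here is a hedged sketch pairing an `exec`-based LivenessProbe with a `tcpSocket` ReadinessProbe (the name, image, file path, and port are placeholders; an `httpGet` variant appears in the advanced example further down):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probes-demo          # hypothetical name
spec:
  containers:
  - name: server
    image: gcr.io/google_containers/liveness   # placeholder image
    livenessProbe:
      exec:
        command:             # live while this command exits 0
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
    readinessProbe:
      tcpSocket:
        port: 8080           # ready once the port accepts connections
      initialDelaySeconds: 1
```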
### When should I use liveness or readiness probes?

If the process in your container crashes on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the RestartPolicy when the process crashes.

If you'd like your container to be killed and restarted when a probe fails, specify a LivenessProbe and a RestartPolicy of `Always` or `OnFailure`.

If you'd like to start sending traffic to a pod only when a probe succeeds, specify a ReadinessProbe. In this case, the ReadinessProbe may be the same as the LivenessProbe, but the existence of the ReadinessProbe in the spec means that the pod will start without receiving any traffic and only start receiving traffic once the probe starts succeeding.

If a container wants the ability to take itself down for maintenance, you can specify a ReadinessProbe that checks a readiness-specific endpoint, different from the one used by the LivenessProbe.

Note that if you just want to drain requests when the pod is deleted, you do not necessarily need a ReadinessProbe; on deletion, the pod automatically puts itself into an unready state while it waits for its containers to stop, regardless of whether a ReadinessProbe exists.
## Container Statuses

More detailed information about the current (and previous) container statuses can be found in [ContainerStatuses](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#PodStatus). The information reported depends on the current [ContainerState](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#ContainerState), which may be Waiting, Running, or Terminated.

## RestartPolicy
The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the kubelet on the same node. Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (in multiples of the sync frequency: 0, 1x, 2x, 4x, 8x, ...), capped at 5 minutes and reset after 10 minutes of successful execution. As discussed in the [pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
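
As a small, hedged illustration of where the field sits, `restartPolicy` is set at the top level of the pod `spec` and applies to every container (names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo         # hypothetical name
spec:
  restartPolicy: OnFailure   # applies to all containers in the pod
  containers:
  - name: worker
    image: busybox           # placeholder image
    command: ["sh", "-c", "exit 1"]   # fails, so it is restarted with back-off
```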
Three types of controllers are currently available:

- Use a [`Job`](/docs/user-guide/jobs/) for pods which are expected to terminate (e.g. batch computations).
- Use a [`ReplicationController`](/docs/user-guide/replication-controller/) or [`Deployment`](/docs/user-guide/deployments/)
  for pods which are not expected to terminate (e.g. web servers).
- Use a [`DaemonSet`](/docs/admin/daemons/) for pods which need to run one per machine, because they provide a
  machine-specific system service.

If you are unsure whether to use a ReplicationController or a DaemonSet, see [Daemon Set versus
Replication Controller](/docs/admin/daemons/#daemon-set-versus-replication-controller).

`ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`.
`Job` is *only* appropriate for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
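
For example, a minimal `Job` wraps a pod template whose `restartPolicy` is `OnFailure` or `Never`; this hedged sketch assumes the `batch/v1` API and uses a pi computation as a placeholder workload:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                   # hypothetical name
spec:
  template:
    metadata:
      name: pi
    spec:
      restartPolicy: Never   # `Always` is not allowed for Jobs
      containers:
      - name: pi
        image: perl          # placeholder image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```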
All 3 types of controllers contain a PodTemplate, which has all the same fields as a Pod.
It is recommended to create the appropriate controller and let it create pods, rather than
directly creating pods yourself, because pods alone are not resilient to machine failures,
but controllers are.

## Pod lifetime

In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`, or another controller. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.

If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the NodeController for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`.

## Examples

### Advanced livenessProbe example

Liveness probes are executed by the `kubelet`, so all requests are made within the kubelet's network namespace.
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: gcr.io/google_containers/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
```
### Example states

* Pod is `Running`, 1 container, container exits success
  * Log completion event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: pod becomes `Succeeded`
    * Never: pod becomes `Succeeded`

* Pod is `Running`, 1 container, container exits failure
  * Log failure event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: restart container, pod stays `Running`
    * Never: pod becomes `Failed`

* Pod is `Running`, 2 containers, container 1 exits failure
  * Log failure event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: restart container, pod stays `Running`
    * Never: pod stays `Running`
  * When container 2 exits...
    * Log failure event
    * If RestartPolicy is:
      * Always: restart container, pod stays `Running`
      * OnFailure: restart container, pod stays `Running`
      * Never: pod becomes `Failed`

* Pod is `Running`, container runs out of memory (OOM)
  * Container terminates in failure
  * Log OOM event
  * If RestartPolicy is:
    * Always: restart container, pod stays `Running`
    * OnFailure: restart container, pod stays `Running`
    * Never: log failure event, pod becomes `Failed`

* Pod is `Running`, a disk dies
  * All containers are killed
  * Log appropriate event
  * Pod becomes `Failed`
  * If running under a controller, pod will be recreated elsewhere

* Pod is `Running`, its node is segmented out
  * NodeController waits for timeout
  * NodeController marks pod `Failed`
  * If running under a controller, pod will be recreated elsewhere

[Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/)
@ -4,172 +4,6 @@ assignees:

title: Creating Multi-Container Pods
---

* TOC
{:toc}

{% include user-guide-content-moved.md %}

[Communicating Between Containers Running in the Same Pod](/docs/tasks/configure-pod-container/communicate-containers-same-pod/)

A pod is a group of containers that are scheduled
onto the same host. Pods serve as units of scheduling, deployment, and
horizontal scaling/replication. Pods share fate, and share some resources, such
as storage volumes and IP addresses.

## Creating a pod
Multi-container pods must be created with the `create` command. Properties
are passed to the command as a YAML- or JSON-formatted configuration file.

The `create` command can be used to create a pod directly, or it can create
a pod or pods through a `Deployment`. It is highly recommended that
you use a
[Deployment](/docs/user-guide/deployments/)
to create your pods. It watches for failed pods and will start up
new pods as required to maintain the specified number.
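
A hedged sketch of such a Deployment follows; the name, labels, and image are placeholders, and the API group is the `extensions/v1beta1` one of this era:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment     # hypothetical name
spec:
  replicas: 3                # the Deployment keeps three pods running
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11    # placeholder image
        ports:
        - containerPort: 80
```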

If you don't want a Deployment to monitor your pod (e.g. your pod
is writing non-persistent data which won't survive a restart, or your pod is
intended to be very short-lived), you can create a pod directly with the
`create` command.

### Using `create`

Note: We recommend using a
[Deployment](/docs/user-guide/deployments/)
to create pods. You should use the instructions below only if you don't want
to create a Deployment.

If your pod will contain more than one container, or if you don't want to
create a Deployment to manage your pod, use the
`kubectl create` command and pass a pod specification as a JSON- or
YAML-formatted configuration file.
```shell
$ kubectl create -f FILE
```

Where:

* `-f FILE` or `--filename FILE` is the name of a
  [pod configuration file](#pod-configuration-file) in either JSON or YAML
  format.

A successful create request returns the pod name. Use the
[`kubectl get`](#viewing_a_pod) command to view status after creation.

### Pod configuration file

A pod configuration file specifies required information about the pod.
It can be formatted as YAML or as JSON, and supports the following fields:

{% capture tabspec %}configfiles
JSON,json,pod-config.json,/docs/user-guide/pods/pod-config.json
YAML,yaml,pod-config.yaml,/docs/user-guide/pods/pod-config.yaml{% endcapture %}
{% include tabs.html %}

Required fields are:
* `kind`: Always `Pod`.
* `apiVersion`: Currently `v1`.
* `metadata`: An object containing:
  * `name`: Required if `generateName` is not specified. The name of this pod.
    It must be an
    [RFC1035](https://www.ietf.org/rfc/rfc1035.txt) compatible value and be
    unique within the namespace.
  * `labels`: Optional. Labels are arbitrary key:value pairs that can be used
    by
    [Deployment](/docs/user-guide/deployments/)
    and [services](/docs/user-guide/services/) for grouping and targeting
    pods.
  * `generateName`: Required if `name` is not set. A prefix used to generate
    a unique name. Has the same validation rules as `name`.
  * `namespace`: Required. The namespace of the pod.
  * `annotations`: Optional. A map of string keys and values that can be used
    by external tooling to store and retrieve arbitrary metadata about
    objects.
* `spec`: The pod specification. See [The `spec` schema](#the_spec_schema) for
  details.
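
Putting the required fields together, a minimal YAML configuration file might look like this hedged sketch (all names and values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # or generateName: example- for a generated name
  namespace: default
  labels:
    app: example             # optional grouping label
  annotations:
    owner: team@example.com  # optional tooling metadata
spec:
  containers:
  - name: main
    image: nginx             # placeholder image
```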
### The `spec` schema

A full description of the `spec` schema is contained in the
[Kubernetes API reference](/docs/api-reference/v1/definitions/#_v1_podspec).

The following fields are required or commonly used in the `spec` schema:

{% capture tabspec %}specfiles
JSON,json,pod-spec-common.json,/docs/user-guide/pods/pod-spec-common.json
YAML,yaml,pod-spec-common.yaml,/docs/user-guide/pods/pod-spec-common.yaml{% endcapture %}
{% include tabs.html %}

#### `containers[]`

A list of containers belonging to the pod. Containers cannot be added or removed once the pod is created, and there must be at least one container in a pod.

The `containers` object **must contain**:

* `name`: Name of the container. It must be a DNS_LABEL and be unique within the pod. Cannot be updated.
* `image`: Docker image name.

The `containers` object **commonly contains** the following optional properties:
* `command[]`: The entrypoint array. Commands are not executed within a shell. The docker image's entrypoint is used if this is not provided. Cannot be updated.
* `args[]`: A command array containing arguments to the entrypoint. The docker image's `cmd` is used if this is not provided. Cannot be updated.
* `env[]`: A list of environment variables in key:value format to set in the container. Cannot be updated.
  * `name`: The name of the environment variable; must be a `C_IDENTIFIER`.
  * `value`: The value of the environment variable. Defaults to empty string.
* `imagePullPolicy`: The image pull policy. Accepted values are:
  * `Always`
  * `Never`
  * `IfNotPresent`

  Defaults to `Always` if the `:latest` tag is specified, or `IfNotPresent` otherwise. Cannot be updated.
* `ports[]`: A list of ports to expose from the container. Cannot be updated.
  * `containerPort`: The port number to expose on the pod's IP address.
  * `name`: The name for the port that can be referred to by services. Must be a `DNS_LABEL` and be unique within the pod.
  * `protocol`: Protocol for the port. Must be UDP or TCP. Default is TCP.
* `resources`: The compute resources required by this container. Contains:
  * `cpu`: CPUs to reserve for each container. Default is whole CPUs; scale suffixes (e.g. `100m` for one hundred milli-CPUs) are supported. If the host does not have enough available resources, your pod will not be scheduled.
  * `memory`: Memory to reserve for each container. Default is bytes; [binary scale suffixes](http://en.wikipedia.org/wiki/Binary_prefix) (e.g. `100Mi` for one hundred mebibytes) are supported. If the host does not have enough available resources, your pod will not be scheduled. Cannot be updated.
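
Pulling the common properties together, a single `containers[]` entry might look like the hedged sketch below (all values are illustrative). Note that in the `v1` API the `cpu` and `memory` quantities are nested under `resources.limits` or `resources.requests` rather than directly under `resources`:

```yaml
containers:
- name: web                    # DNS_LABEL, unique within the pod
  image: nginx:1.11            # placeholder image
  command: ["nginx"]           # overrides the image entrypoint
  args: ["-g", "daemon off;"]  # arguments to the entrypoint
  env:
  - name: ENVIRONMENT
    value: "production"
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 80
    name: http
    protocol: TCP
  resources:
    limits:
      cpu: 250m                # a quarter of a CPU
      memory: 64Mi
```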
#### `restartPolicy`

Restart policy for all containers within the pod. Options are:

* `Always`
* `OnFailure`
* `Never`

#### `volumes[]`

A list of volumes that can be mounted by containers belonging to the pod. You must specify a `name` and a source for each volume. The container must also include a `volumeMount` with matching `name`. Source is one of:
* `emptyDir`: A temporary directory that shares a pod's lifetime. Contains:
  * `medium`: The type of storage used to back the volume. Must be an empty string (default) or `Memory`.
* `hostPath`: A pre-existing host file or directory. This is generally used for privileged system daemons or other agents tied to the host. Contains:
  * `path`: The path of the directory on the host.
* `secret`: Secret to populate volume. Secrets are used to hold sensitive information, such as passwords, OAuth tokens, and SSH keys. Learn more from [the docs on secrets](/docs/user-guide/secrets/). Contains:
  * `secretName`: The name of a secret in the pod's namespace.

The `name` must be a DNS_LABEL and unique within the pod.
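
A hedged sketch combining the three volume sources with matching `volumeMounts` (names and paths are placeholders):

```yaml
spec:
  containers:
  - name: main
    image: nginx               # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: host-logs
      mountPath: /var/log/host
    - name: credentials
      mountPath: /etc/credentials
  volumes:
  - name: scratch
    emptyDir: {}               # shares the pod's lifetime
  - name: host-logs
    hostPath:
      path: /var/log           # pre-existing directory on the host
  - name: credentials
    secret:
      secretName: mysecret     # must exist in the pod's namespace
```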
### Sample file

For example, the following configuration file creates two containers: a
`redis` key-value store image, and a `django` frontend image.

{% capture tabspec %}samplefiles
JSON,json,pod-sample.json,/docs/user-guide/pods/pod-sample.json
YAML,yaml,pod-sample.yaml,/docs/user-guide/pods/pod-sample.yaml{% endcapture %}
{% include tabs.html %}
## Viewing a pod

{% include_relative _viewing-a-pod.md %}

## Deleting a pod

If you created your pod directly with `kubectl create`, use `kubectl delete`:

```shell
$ kubectl delete pod NAME
```

A successful delete request returns the name of the deleted pod.
@ -375,41 +375,6 @@ However, it is using its local ttl-based cache for getting the current value of

As a result, the total delay from the moment when the secret is updated to the moment when new keys are
projected to the pod can be as long as the kubelet sync period plus the TTL of the secrets cache in the kubelet.

#### Optional Secrets as Files from a Pod

Volumes and files provided by a Secret can also be marked as optional.
The Secret or the key within a Secret does not have to exist. The mount path for
such items will always be created.
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "myns"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo"
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "defaultMode": 256,
        "optional": true
      }
    }]
  }
}
```
#### Using Secrets as Environment Variables

To use a secret in an environment variable in a pod:
@ -456,30 +421,6 @@ $ echo $SECRET_PASSWORD
1f2d1e2e67df
```

#### Optional Secrets from Environment Variables

You may not want to require all of your secrets to exist. They can be marked as
optional, as shown in the pod below:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: OPTIONAL_SECRET
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
          optional: true
  restartPolicy: Never
```

#### Using imagePullSecrets

An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
@ -511,8 +452,7 @@ can be automatically attached to pods based on their service account.
Secret volume sources are validated to ensure that the specified object
reference actually points to an object of type `Secret`. Therefore, a secret
needs to be created before any pods that depend on it.

Secret API objects reside in a namespace. They can only be referenced by pods
in that same namespace.
@ -532,12 +472,12 @@ not common ways to create pods.)
When a pod is created via the API, there is no check whether a referenced
secret exists. Once a pod is scheduled, the kubelet will try to fetch the
secret value. If the secret cannot be fetched because it does not exist or
because of a temporary lack of connection to the API server, the kubelet will
periodically retry. It will report an event about the pod explaining the
reason it is not started yet. Once the secret is fetched, the kubelet will
create and mount a volume containing it. None of the pod's containers will
start until all the pod's volumes are mounted.
## Use cases
@ -594,8 +534,8 @@ consumes it in a volume:
When the container's command runs, the pieces of the key will be available in:

```shell
/etc/secret-volume/ssh-publickey
/etc/secret-volume/ssh-privatekey
```

The container is then free to use the secret data to establish an SSH connection.
@ -9,7 +9,7 @@ title: Kubernetes 101

For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers.

In order for the kubectl usage examples to work, make sure you have an example directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
* TOC
{:toc}
Binary file not shown. Before: 11 KiB | After: 8.4 KiB