1.2 additions for user-guide/
parent e899d54a1b
commit f16668b45f
@@ -20,4 +20,5 @@ defaults:
layout: docwithnav
showedit: true
githubbranch: "release-1.2"
docsbranch: "master"
permalink: pretty

@@ -147,6 +147,8 @@ toc:
path: /docs/user-guide/getting-into-containers/
- title: The Lifecycle of a Pod
path: /docs/user-guide/pod-states/
- title: Pod Templates
path: /docs/user-guide/pod-templates/
- title: Assigning Pods to Nodes
path: /docs/user-guide/node-selection/
- title: Creating Pods with the Downward API

@@ -167,13 +169,13 @@ toc:
- title: Using DNS Pods and Services
path: /docs/admin/dns/
- title: Setting Up and Configuring DNS
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/cluster-dns
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/cluster-dns
- title: Deploying DNS
path: /docs/getting-started-guides/docker-multinode/deployDNS/
- title: Connecting Applications
path: /docs/user-guide/connecting-applications/
- title: Creating Servers with External IPs
path: https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/simple-nginx.md
path: https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/simple-nginx.md
- title: Connect with Proxies
path: /docs/user-guide/connecting-to-applications-proxy/
- title: Connect with Port Forwarding

@@ -193,6 +195,8 @@ toc:
path: /docs/user-guide/config-best-practices/
- title: Configuring Containers
path: /docs/user-guide/configuring-containers/
- title: Using ConfigMap
path: /docs/user-guide/configmap/
- title: Sharing Cluster Access with kubeconfig
path: /docs/user-guide/sharing-clusters/
- title: Using Environment Variables

@@ -228,11 +232,11 @@ toc:
- title: Testing a Kubernetes Cluster
path: /docs/getting-started-guides/docker-multinode/testing/
- title: Simulating Large Test Loads
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/k8petstore
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/k8petstore
- title: Checking Pod Health
path: /docs/user-guide/liveness/
- title: Using Explorer to Examine the Runtime Environment
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/explorer
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/explorer
- title: Resource Usage Monitoring
path: /docs/user-guide/monitoring/
- title: Logging
@@ -40,6 +40,8 @@ toc:
path: /docs/user-guide/docker-cli-to-kubectl/
- title: JSONpath Support
path: /docs/user-guide/jsonpath/
- title: kubectl Cheat Sheet
path: /docs/user-guide/kubectl-cheatsheet/
- title: kubectl Commands
section:
- title: kubectl

@@ -174,14 +176,14 @@ toc:
- title: Kubernetes Design Docs
section:
- title: Kubernetes Architecture
path: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/architecture.md
path: https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/architecture.md
- title: Kubernetes Design Overview
path: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/
path: https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/
- title: Security in Kubernetes
path: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/security.md
path: https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/security.md
- title: Kubernetes Identity and Access Management
path: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/access.md
path: https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/access.md
- title: Security Contexts
path: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/security_context.md
path: https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/security_context.md
- title: Kubernetes OpenVSwitch GRE/VxLAN networking
path: /docs/admin/ovs-networking/
@@ -7,52 +7,56 @@ toc:
- title: Clustered Application Samples
section:
- title: Apache Cassandra Database
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/cassandra
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/cassandra
- title: Apache Spark
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/spark
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/spark
- title: Apache Storm
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/storm
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/storm
- title: Distributed Task Queue
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/celery-rabbitmq
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/celery-rabbitmq
- title: Hazelcast
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/hazelcast
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/hazelcast
- title: Meteor Applications
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/meteor/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/meteor/
- title: Redis
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/redis/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/redis/
- title: RethinkDB
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/rethinkdb/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/rethinkdb/
- title: Elasticsearch/Kibana Logging Demonstration
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/logging-demo/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/logging-demo/
- title: Elasticsearch
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/elasticsearch/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/elasticsearch/
- title: OpenShift Origin
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/openshift-origin/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/openshift-origin/
- title: Ceph
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/rbd/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/rbd/
- title: MEAN stack on Google Cloud Platform
path: /docs/getting-started-guides/meanstack/

- title: Persistent Volume Samples
section:
- title: WordPress on a Kubernetes Persistent Volume
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/mysql-wordpress-pd/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/mysql-wordpress-pd/
- title: GlusterFS
path: /https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/glusterfs/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/glusterfs/
- title: iSCSI
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/iscsi/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/iscsi/
- title: NFS
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/nfs/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/nfs/
- title: Downward API Volumes
path: /docs/user-guide/downward-api/volume/
path: /docs/user-guide/downward-api/volume

- title: Multi-tier Application Samples
section:
- title: Guestbook - Go Server
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/guestbook-go/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook-go/
- title: GuestBook - PHP Server
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/guestbook/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook/
- title: MySQL - Phabricator Server
path: https://github.com/kubernetes/kubernetes/tree/release-1.1/examples/phabricator/
- title: Elasticsearch/Kibana Logging Demo
path: https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/logging-demo
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/phabricator/

- title: Elasticsearch/Kibana Logging Demo
path: https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/logging-demo

- title: ConfigMap Example
path: https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/configmap
@@ -71,7 +71,7 @@ a node for testing.

If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
selector](/docs/user-guide/node-selection/).
selector](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection).

If you do not specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on all nodes.
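To make the `nodeSelector` behavior above concrete, a minimal DaemonSet manifest might look like the sketch below; the name, image tag, and `disktype: ssd` label are illustrative placeholders, not values taken from the docs in this change:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: example-daemon            # illustrative name
spec:
  template:
    metadata:
      labels:
        app: example-daemon
    spec:
      nodeSelector:
        disktype: ssd             # pods are created only on nodes that carry this label
      containers:
      - name: example-daemon
        image: gcr.io/google_containers/pause:0.8.0   # illustrative image
```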
@@ -85,7 +85,7 @@ The above example uses the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
`~/.kube` directory). Since cluster certificates are typically self-signed, it
make take special configuration to get your http client to use root
may take special configuration to get your http client to use root
certificate.

On some clusters, the apiserver does not require authentication; it may serve
@@ -119,6 +119,13 @@ is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.

If available, a certificate bundle is placed into the filesystem tree of each
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
used to verify the serving certificate of the apiserver.

Finally, the default namespace to be used for namespaced API operations is placed in a file
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
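As a sketch of how these mounted credentials fit together, a container could query the apiserver directly with something like the following (it assumes `curl` is available in the image and uses the standard `KUBERNETES_SERVICE_*` environment variables injected into every container):

```shell
# Read the service-account credentials mounted into the container
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# List pods in this pod's own namespace, verifying the apiserver's serving certificate
curl --cacert $CACERT \
     --header "Authorization: Bearer $TOKEN" \
     https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NAMESPACE/pods
```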
|
||||
From within a pod the recommended ways to connect to API are:
|
||||
|
||||
- run a kubectl proxy as one of the containers in the pod, or as a background
|
||||
|
@ -195,9 +202,9 @@ at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticse
|
|||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`*
|
||||
<!--- TODO: update this part of doc because it doesn't seem to be valid. What
|
||||
about namespaces? 'proxy' verb? -->
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/proxy/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL
|
||||
|
||||
##### Examples
|
||||
|
||||
|
@@ -205,7 +212,7 @@ about namespaces? 'proxy' verb? -->
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`

```json
{
{
  "cluster_name" : "kubernetes_logging",
  "status" : "yellow",
  "timed_out" : false,
```
|
|
@ -185,7 +185,7 @@ on the pod you are interested in:
|
|||
Name: simmemleak-hra99
|
||||
Namespace: default
|
||||
Image(s): saadali/simmemleak
|
||||
Node: kubernetes-minion-tf0f/10.240.216.66
|
||||
Node: kubernetes-node-tf0f/10.240.216.66
|
||||
Labels: name=simmemleak
|
||||
Status: Running
|
||||
Reason:
|
||||
|
@ -208,14 +208,14 @@ Containers:
|
|||
Restart Count: 5
|
||||
Conditions:
|
||||
Type Status
|
||||
Ready False
|
||||
Ready False
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-minion-tf0f
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
|
||||
```
|
||||
|
||||
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
|
||||
|
@ -225,7 +225,7 @@ You can call `get pod` with the `-o go-template=...` option to fetch the status
|
|||
```shell
|
||||
$ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
|
||||
Container Name: simmemleak
|
||||
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]][13:59:03] clusterScaleDoc ~/go/src/github.com/kubernetes/kubernetes $
|
||||
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
|
||||
```
|
||||
|
||||
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
|
||||
|
|
|
@ -3,23 +3,111 @@
|
|||
|
||||
This document is meant to highlight and consolidate in one place configuration best practices that are introduced throughout the user-guide and getting-started documentation and examples. This is a living document so if you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
|
||||
|
||||
1. When writing configuration, use the latest stable API version (currently v1).
|
||||
1. Configuration should be stored in version control before being pushed to the cluster. This allows configuration to be quickly rolled back if needed and will aid with cluster re-creation and restoration if the worst were to happen.
|
||||
1. Use YAML rather than JSON. They can be used interchangeably in almost all scenarios but YAML tends to be more user-friendly for config.
|
||||
1. Group related objects together in a single file. This is often better than separate files.
|
||||
1. Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to create.
|
||||
1. Create a service before corresponding replication controllers so that the scheduler can spread the pods comprising the service. You can also create the replication controller without specifying replicas, create the service, then scale up the replication controller, which may work better in an example using progressive disclosure and may have benefits in real scenarios also, such as ensuring one replica works before creating lots of them)
|
||||
1. Don't use `hostPort` unless absolutely necessary (e.g., for a node daemon) as it will prevent certain scheduling configurations due to port conflicts. Use the apiserver proxying or port forwarding for debug/admin access, or a service for external service access. If you need to expose a pod's port on the host machine, consider using a [NodePort](/docs/user-guide/services/#type--loadbalancer) service before resorting to `hostPort`. If you only need access to the port for debugging purposes, you can also use the [kubectl proxy and apiserver proxy](/docs/user-guide/connecting-to-applications-proxy) or [kubectl port-forward](/docs/user-guide/connecting-to-applications-port-forward).
|
||||
1. Don't use `hostNetwork` for the same reasons as `hostPort`.
|
||||
1. Don't specify default values unnecessarily, to simplify and minimize configs. For example, omit the selector and labels in ReplicationController if you want them to be the same as the labels in its podTemplate, since those fields are populated from the podTemplate labels by default.
|
||||
1. Instead of attaching one label to a set of pods to represent a service (e.g., `service: myservice`) and another to represent the replication controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify semantic attributes of your application or deployment and select the appropriate subsets in your service and replication controller, such as `{ app: myapp, tier: frontend, deployment: v3 }`. A service can be made to span multiple deployments, such as across rolling updates, by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully.
|
||||
1. Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/user-guide/managing-deployments/#using-labels-effectively).
|
||||
1. Use kubectl run and expose to quickly create and expose single container replication controllers. See the [quick start guide](/docs/user-guide/quick-start) for an example.
|
||||
1. Use headless services for easy service discovery when you don't need kube-proxy load balancing. See [headless services](/docs/user-guide/services/#headless-services).
|
||||
1. Use kubectl delete rather than stop. Delete has a superset of the functionality of stop and stop is deprecated.
|
||||
1. If there is a viable alternative to naked pods (i.e. pods not bound to a controller), go with the alternative. Controllers are almost always preferable to creating pods (except for some `restartPolicy: Never` scenarios). A minimal Job is coming. See [#1624](http://issue.k8s.io/1624). Naked pods will not be rescheduled in the event of node failure.
|
||||
1. Put a version number or hash as a suffix to the name and in a label on a replication controller to facilitate rolling update, as we do for [--image](/docs/user-guide/kubectl/kubectl_rolling-update). This is necessary because rolling-update actually creates a new controller as opposed to modifying the existing controller. This does not play well with version agnostic controller names.
|
||||
1. Put an object description in an annotation to allow better introspection.
|
||||
## General Config Tips
|
||||
|
||||
- When defining configurations, specify the latest stable API version (currently v1).
|
||||
|
||||
- Configuration files should be stored in version control before being pushed to the cluster. This allows a configuration to be quickly rolled back if needed, and will aid with cluster re-creation and restoration if necessary.
|
||||
|
||||
- Write your configuration files using YAML rather than JSON. They can be used interchangeably in almost all scenarios, but YAML tends to be more user-friendly for config.
|
||||
|
||||
- Group related objects together in a single file where this makes sense. This format is often easier to manage than separate files. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
|
||||
(Note also that many `kubectl` commands can be called on a directory, and so you can also call
|
||||
`kubectl create` on a directory of config files— see below for more detail).
|
||||
|
||||
- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to
|
||||
reduce error. For example, omit the selector and labels in a `ReplicationController` if you want
|
||||
them to be the same as the labels in its `podTemplate`, since those fields are populated from the
|
||||
`podTemplate` labels by default. See the [guestbook app's](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/frontend-controller.yaml) of this.
|
||||
|
||||
- Put an object description in an annotation to allow better introspection.
|
||||
|
||||
|
||||
## "Naked" Pods vs Replication Controllers and Jobs
|
||||
|
||||
- If there is a viable alternative to naked pods (i.e., pods not bound to a [replication controller
|
||||
](/docs/user-guide/replication-controller)), go with the alternative. Naked pods will not be rescheduled in the
|
||||
event of node failure.
|
||||
|
||||
Replication controllers are almost always preferable to creating pods, except for some explicit
|
||||
[`restartPolicy: Never`](/docs/user-guide/pod-states/#restartpolicy) scenarios. A
|
||||
[Job](/docs/user-guide/jobs/) object (currently in Beta), may also be appropriate.
|
||||
|
||||
|
||||
## Services
|
||||
|
||||
- It's typically best to create a [service](/docs/user-guide/services/) before corresponding [replication
|
||||
controllers](/docs/user-guide/replication-controller/), so that the scheduler can spread the pods comprising the
|
||||
service. You can also create a replication controller without specifying replicas (this will set
|
||||
replicas=1), create a service, then scale up the replication controller. This can be useful in
|
||||
ensuring that one replica works before creating lots of them.
|
||||
|
||||
- Don't use `hostPort` (which specifies the port number to expose on the host) unless absolutely
|
||||
necessary, e.g., for a node daemon. When you bind a Pod to a `hostPort`, there are a limited
|
||||
number of places that pod can be scheduled, due to port conflicts— you can only schedule as many
|
||||
such Pods as there are nodes in your Kubernetes cluster.
|
||||
|
||||
If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](/docs/user-guide/connecting-to-applications-proxy/) or [kubectl port-forward](/docs/user-guide/connecting-to-applications-port-forward/).
|
||||
You can use a [Service](/docs/user-guide/services/) object for external service access.
|
||||
If you do need to expose a pod's port on the host machine, consider using a [NodePort](/docs/user-guide/services/#type-nodeport) service before resorting to `hostPort`.
|
||||
|
||||
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
|
||||
|
||||
- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing.
|
||||
See [headless services](/docs/user-guide/services/#headless-services).
|
||||
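A sketch of the NodePort alternative mentioned in the `hostPort` bullet above (the names and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort        # exposes the service on a port of every node, instead of pinning a pod to a hostPort
  selector:
    app: myapp
    tier: frontend
  ports:
  - port: 80            # the abstracted service port
    targetPort: 80      # the port the container accepts traffic on
```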
|
||||
## Using Labels
|
||||
|
||||
- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or
|
||||
deployment. For example, instead of attaching a label to a set of pods to explicitly represent
|
||||
some service (e.g., `service: myservice`), or explicitly representing the replication
|
||||
controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify
|
||||
semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This
|
||||
will let you select the object groups appropriate to the context— e.g., a service for all "tier:
|
||||
frontend" pods, or all "test" phase components of app "myapp". See the
|
||||
[guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) app for an example of this approach.
|
||||
|
||||
A service can be made to span multiple deployments, as is done across [rolling updates](/docs/user-guide/kubectl/kubectl_rolling-update/), simply by omitting release-specific labels from its selector, rather than updating the service's selector to match the replication controller's selector fully.
|
||||
|
||||
- To facilitate rolling updates, include version info in replication controller names, e.g. as a
|
||||
suffix to the name. It is useful to set a 'version' label as well. The rolling update creates a
|
||||
new controller as opposed to modifying the existing controller. So, there will be issues with
|
||||
version-agnostic controller names. See the [documentation](/docs/user-guide/kubectl/kubectl_rolling-update/) on
|
||||
the rolling-update command for more detail.
|
||||
|
||||
Note that the [Deployment](/docs/user-guide/deployments/) object obviates the need to manage replication
|
||||
controller 'version names'. A desired state of an object is described by a Deployment, and if
|
||||
changes to that spec are _applied_, the deployment controller changes the actual state to the
|
||||
desired state at a controlled rate. (Deployment objects are currently part of the [`extensions`
|
||||
API Group](/docs/api/#api-groups), and are not enabled by default.)
|
||||
|
||||
- You can manipulate labels for debugging. Because Kubernetes replication controllers and services
|
||||
match to pods using labels, this allows you to remove a pod from being considered by a
|
||||
controller, or served traffic by a service, by removing the relevant selector labels. If you
|
||||
remove the labels of an existing pod, its controller will create a new pod to take its place.
|
||||
This is a useful way to debug a previously "live" pod in a quarantine environment. See the
|
||||
[`kubectl label`](/docs/user-guide/kubectl/kubectl_label/) command.
|
||||
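A sketch of the quarantine-by-relabeling technique described above (pod and label names are illustrative):

```shell
# Remove the selector label so the pod is no longer matched by its controller or service
kubectl label pods my-pod-abc12 app-
# The controller creates a replacement; the original pod can now be inspected in isolation
kubectl get pods -l app=myapp
```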
|
||||
## Container Images
|
||||
|
||||
- The [default container image pull policy](/docs/user-guide/images/) is `IfNotPresent`, which causes the
|
||||
[Kubelet](/docs/admin/kubelet/) to not pull an image if it already exists. If you would like to
|
||||
always force a pull, you must specify a pull image policy of `Always` in your .yaml file
|
||||
(`imagePullPolicy: Always`) or specify a `:latest` tag on your image.
|
||||
|
||||
That is, if you're specifying an image with other than the `:latest` tag, e.g. `myimage:v1`, and
|
||||
there is an image update to that same tag, the Kubelet won't pull the updated image. You can
|
||||
address this by ensuring that any updates to an image bump the image tag as well (e.g.
|
||||
`myimage:v2`), and ensuring that your configs point to the correct version.
|
||||
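For example, forcing a pull looks like the following fragment of a container spec (the image name is illustrative):

```yaml
containers:
- name: my-app
  image: myregistry.example.com/myimage:v1
  imagePullPolicy: Always   # pull on every pod start, even if the tag is already cached on the node
```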
|
||||
## Using kubectl
|
||||
|
||||
- Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to `create`.
|
||||
|
||||
- Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated.
|
||||
|
||||
- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/user-guide/managing-deployments/#using-labels-effectively).
|
||||
|
||||
- Use `kubectl run` and `expose` to quickly create and expose single container replication controllers. See the [quick start guide](/docs/user-guide/quick-start/) for an example.
|
||||
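A quick sketch tying the kubectl tips above together (resource names and labels are illustrative, and `kubectl run` may create a Deployment rather than a replication controller depending on your client version):

```shell
kubectl create -f ./my-app/                       # create every object defined in a directory of configs
kubectl run my-nginx --image=nginx --replicas=2   # quickly create a simple single-container controller
kubectl expose rc my-nginx --port=80              # expose it as a service
kubectl get pods -l app=my-app                    # bulk get by label
kubectl delete pods,services -l app=my-app        # bulk delete by label
```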
|
||||
|
||||
|
|
|
@ -0,0 +1,511 @@
|
|||
---
|
||||
---
|
||||
Many applications require configuration via some combination of config files, command line
|
||||
arguments, and environment variables. These configuration artifacts should be decoupled from image
|
||||
content in order to keep containerized applications portable. The ConfigMap API resource provides
|
||||
mechanisms to inject containers with configuration data while keeping containers agnostic of
|
||||
Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or
|
||||
coarse-grained information like entire config files or JSON blobs.
|
||||
|
||||
|
||||
## Overview of ConfigMap
|
||||
|
||||
The ConfigMap API resource holds key-value pairs of configuration data that can be consumed in pods
|
||||
or used to store configuration data for system components such as controllers. ConfigMap is similar
|
||||
to [Secrets](/docs/user-guide/secrets/), but designed to more conveniently support working with strings that do not
|
||||
contain sensitive information.
|
||||
|
||||
Let's look at a made-up example:
|
||||
|
||||
```yaml
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T19:14:38Z
|
||||
name: example-config
|
||||
namespace: default
|
||||
data:
|
||||
example.property.1: hello
|
||||
example.property.2: world
|
||||
example.property.file: |-
|
||||
property.1=value-1
|
||||
property.2=value-2
|
||||
property.3=value-3
|
||||
```
|
||||
|
||||
The `data` field contains the configuration data. As you can see, ConfigMaps can be used to hold
|
||||
fine-grained information like individual properties or coarse-grained information like the contents
|
||||
of configuration files.
|
||||
|
||||
Configuration data can be consumed in pods in a variety of ways. ConfigMaps can be used to:
|
||||
|
||||
1. Populate the value of environment variables
|
||||
2. Set command-line arguments in a container
|
||||
3. Populate config files in a volume
|
||||
|
||||
Both users and system components may store configuration data in ConfigMap.
|
||||
|
||||
## Creating ConfigMaps
|
||||
|
||||
You can use the `kubectl create configmap` command to create configmaps easily from literal values,
|
||||
files, or directories.
|
||||
|
||||
Let's take a look at some different ways to create a ConfigMap:
|
||||
|
||||
### Creating from directories
|
||||
|
||||
Say that we have a directory with some files that already contain the data we want to populate a ConfigMap with:
|
||||
|
||||
```shell
|
||||
$ ls docs/user-guide/configmap/kubectl/
|
||||
game.properties
|
||||
ui.properties
|
||||
|
||||
$ cat docs/user-guide/configmap/kubectl/game.properties
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
|
||||
$ cat docs/user-guide/configmap/kubectl/ui.properties
|
||||
color.good=purple
|
||||
color.bad=yellow
|
||||
allow.textmode=true
|
||||
how.nice.to.look=fairlyNice
|
||||
```
|
||||
|
||||
The `kubectl create configmap` command can be used to create a ConfigMap holding the content of each
|
||||
file in this directory:
|
||||
|
||||
```console
|
||||
|
||||
$ kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
|
||||
|
||||
```
|
||||
|
||||
When `--from-file` points to a directory, each file directly in that directory is used to populate a
|
||||
key in the ConfigMap, where the name of the key is the filename, and the value of the key is the
|
||||
content of the file.
|
||||
|
||||
Let's take a look at the ConfigMap that this command created:
|
||||
|
||||
```shell
|
||||
$ kubectl describe configmaps game-config
|
||||
Name: game-config
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 121 bytes
|
||||
ui.properties: 83 bytes
|
||||
```
|
||||
|
||||
You can see the two keys in the map are created from the filenames in the directory we pointed
|
||||
kubectl to. Since the content of those keys may be large, in the output of `kubectl describe`,
|
||||
you'll see only the names of the keys and their sizes.
|
||||
|
||||
If we want to see the values of the keys, we can simply `kubectl get` the resource:
|
||||
|
||||
```shell
|
||||
$ kubectl get configmaps game-config -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game.properties: |-
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
ui.properties: |
|
||||
color.good=purple
|
||||
color.bad=yellow
|
||||
allow.textmode=true
|
||||
how.nice.to.look=fairlyNice
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:34:05Z
|
||||
name: game-config
|
||||
namespace: default
|
||||
resourceVersion: "407"-
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config
|
||||
uid: 30944725-d66e-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
### Creating from files
|
||||
|
||||
We can also pass `--from-file` a specific file, and pass it multiple times to kubectl. The
|
||||
following command yields equivalent results to the above example:
|
||||
|
||||
```shell
|
||||
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
|
||||
|
||||
$ kubectl get configmaps game-config-2 -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game.properties: |-
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
ui.properties: |
|
||||
color.good=purple
|
||||
color.bad=yellow
|
||||
allow.textmode=true
|
||||
how.nice.to.look=fairlyNice
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:52:05Z
|
||||
name: game-config-2
|
||||
namespace: default
|
||||
resourceVersion: "516"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config-2
|
||||
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
We can also set the key to use for an individual file with `--from-file` by passing an expression
|
||||
of `key=value`: `--from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties`:
|
||||
|
||||
```shell
|
||||
$ kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
|
||||
|
||||
$ kubectl get configmaps game-config-3 -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game-special-key: |-
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:54:22Z
|
||||
name: game-config-3
|
||||
namespace: default
|
||||
resourceVersion: "530"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config-3
|
||||
uid: 05f8da22-d671-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
### Creating from literal values
|
||||
|
||||
It is also possible to supply literal values for ConfigMaps using `kubectl create configmap`. The
|
||||
`--from-literal` option takes a `key=value` syntax that allows literal values to be supplied
|
||||
directly on the command line:
|
||||
|
||||
```shell
|
||||
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
|
||||
|
||||
$ kubectl get configmaps special-config -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
special.how: very
|
||||
special.type: charm
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T19:14:38Z
|
||||
name: special-config
|
||||
namespace: default
|
||||
resourceVersion: "651"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/special-config
|
||||
uid: dadce046-d673-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
## Consuming ConfigMap in pods
|
||||
|
||||
### Use-Case: Consume ConfigMap in environment variables
|
||||
|
||||
ConfigMaps can be used to populate the value of environment variables. As an example, consider
|
||||
the following ConfigMap:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: special-config
|
||||
namespace: default
|
||||
data:
|
||||
special.how: very
|
||||
special.type: charm
|
||||
```
|
||||
|
||||
We can consume the keys of this ConfigMap in a pod like so:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dapi-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: gcr.io/google_containers/busybox
|
||||
command: [ "/bin/sh", "-c", "env" ]
|
||||
env:
|
||||
- name: SPECIAL_LEVEL_KEY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: special-config
|
||||
key: special.how
|
||||
- name: SPECIAL_TYPE_KEY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: special-config
|
||||
key: special.type
|
||||
restartPolicy: Never
|
||||
```
|
||||
|
||||
When this pod is run, its output will include the lines:
|
||||
|
||||
```shell
|
||||
SPECIAL_LEVEL_KEY=very
|
||||
SPECIAL_TYPE_KEY=charm
|
||||
```
|
||||
|
||||
### Use-Case: Set command-line arguments with ConfigMap
|
||||
|
||||
ConfigMaps can also be used to set the value of the command or arguments in a container. This is
|
||||
accomplished using the kubernetes substitution syntax `$(VAR_NAME)`. Consider the ConfigMap:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: special-config
|
||||
namespace: default
|
||||
data:
|
||||
special.how: very
|
||||
special.type: charm
|
||||
```
|
||||
|
||||
In order to inject values into the command line, we must consume the keys we want to use as
|
||||
environment variables, as in the last example. Then we can refer to them in a container's command
|
||||
using the `$(VAR_NAME)` syntax.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dapi-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: gcr.io/google_containers/busybox
|
||||
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
|
||||
env:
|
||||
- name: SPECIAL_LEVEL_KEY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: special-config
|
||||
key: special.how
|
||||
- name: SPECIAL_TYPE_KEY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: special-config
|
||||
key: special.type
|
||||
restartPolicy: Never
|
||||
```
|
||||
|
||||
When this pod is run, the output from the `test-container` container will be:
|
||||
|
||||
```shell
|
||||
very charm
|
||||
```
|
||||
|
||||
### Use-Case: Consume ConfigMap via volume plugin
|
||||
|
||||
ConfigMaps can also be consumed in volumes. Returning again to our example ConfigMap:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: special-config
|
||||
namespace: default
|
||||
data:
|
||||
special.how: very
|
||||
special.type: charm
|
||||
```
|
||||
|
||||
We have a couple different options for consuming this ConfigMap in a volume. The most basic
|
||||
way is to populate the volume with files where the key is the filename and the content of the file
|
||||
is the value of the key:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dapi-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: gcr.io/google_containers/busybox
|
||||
command: [ "/bin/sh", "cat", "/etc/config/special.how" ]
|
||||
volumeMounts:
|
||||
- name: config-volume
|
||||
mountPath: /etc/config
|
||||
volumes:
|
||||
- name: config-volume
|
||||
configMap:
|
||||
name: special-config
|
||||
restartPolicy: Never
|
||||
```
|
||||
|
||||
When this pod is run, the output will be:
|
||||
|
||||
```shell
|
||||
very
|
||||
```
|
||||
|
||||
We can also control the paths within the volume where ConfigMap keys are projected:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dapi-test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: gcr.io/google_containers/busybox
|
||||
command: [ "/bin/sh", "cat", "/etc/config/path/to/special-key" ]
|
||||
volumeMounts:
|
||||
- name: config-volume
|
||||
mountPath: /etc/config
|
||||
volumes:
|
||||
- name: config-volume
|
||||
configMap:
|
||||
name: special-config
|
||||
items:
|
||||
- key: special.how
|
||||
path: path/to/special-key
|
||||
restartPolicy: Never
|
||||
```
|
||||
|
||||
When this pod is run, the output will be:
|
||||
|
||||
```shell
|
||||
very
|
||||
```
|
||||
|
||||
## Real World Example: Configuring Redis
|
||||
|
||||
Let's take a look at a real-world example: configuring redis using ConfigMap. Say we want to inject
|
||||
redis with the recommended configuration for using redis as a cache. The redis config file
|
||||
should contain:
|
||||
|
||||
```conf
|
||||
maxmemory 2mb
|
||||
maxmemory-policy allkeys-lru
|
||||
```
|
||||
|
||||
Such a file is in `docs/user-guide/configmap/redis`; we can use the following command to create a
|
||||
ConfigMap instance with it:
|
||||
|
||||
```shell
|
||||
$ kubectl create configmap example-redis-config --from-file=docs/user-guide/configmap/redis/redis-config
|
||||
|
||||
$ kubectl get configmap example-redis-config -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
{
|
||||
"kind": "ConfigMap",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "example-redis-config",
|
||||
"namespace": "default",
|
||||
"selfLink": "/api/v1/namespaces/default/configmaps/example-redis-config",
|
||||
"uid": "07fd0419-d97b-11e5-b443-68f728db1985",
|
||||
"resourceVersion": "15",
|
||||
"creationTimestamp": "2016-02-22T15:43:34Z"
|
||||
},
|
||||
"data": {
|
||||
"redis-config": "maxmemory 2mb\nmaxmemory-policy allkeys-lru\n"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now, let's create a pod that uses this config:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: redis
|
||||
spec:
|
||||
containers:
|
||||
- name: redis
|
||||
image: kubernetes/redis:v1
|
||||
env:
|
||||
- name: MASTER
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
resources:
|
||||
limits:
|
||||
cpu: "0.1"
|
||||
volumeMounts:
|
||||
- mountPath: /redis-master-data
|
||||
name: data
|
||||
- mountPath: /redis-master
|
||||
name: config
|
||||
volumes:
|
||||
- name: data
|
||||
emptyDir: {}
|
||||
- name: config
|
||||
configMap:
|
||||
name: example-redis-config
|
||||
items:
|
||||
- key: redis-config
|
||||
path: redis.conf
|
||||
```
|
||||
|
||||
Notice that this pod has a ConfigMap volume that places the `redis-config` key of the
|
||||
`example-redis-config` ConfigMap into a file called `redis.conf`. This volume is mounted into the
|
||||
`/redis-master` directory in the redis container, placing our config file at
|
||||
`/redis-master/redis.conf`, which is where the image looks for the redis config file for the master.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/configmap/redis/redis-pod.yaml
|
||||
```
|
||||
|
||||
If we `kubectl exec` into this pod and run the `redis-cli` tool, we can check that our config was
|
||||
applied correctly:
|
||||
|
||||
```shell
|
||||
$ kubectl exec -it redis redis-cli
|
||||
127.0.0.1:6379> CONFIG GET maxmemory
|
||||
1) "maxmemory"
|
||||
2) "2097152"
|
||||
127.0.0.1:6379> CONFIG GET maxmemory-policy
|
||||
1) "maxmemory-policy"
|
||||
2) "allkeys-lru"
|
||||
```
|
||||
|
||||
## Restrictions
|
||||
|
||||
ConfigMaps must be created before they are consumed in pods. Controllers may be written to tolerate
|
||||
missing configuration data; consult individual components configured via ConfigMap on a case-by-case
|
||||
basis.
|
||||
|
||||
ConfigMaps reside in a namespace. They can only be referenced by pods in the same namespace.
|
||||
|
||||
Quota for ConfigMap size is a planned feature.
|
||||
|
||||
Kubelet only supports use of ConfigMap for pods it gets from the API server. This includes any pods
|
||||
created using kubectl, or indirectly via a replication controller. It does not include pods created
|
||||
via the Kubelet's `--manifest-url` flag, its `--config` flag, or its REST API (these are not common
|
||||
ways to create pods).
|
|
@ -1,8 +1,3 @@
|
|||
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
# ConfigMap example
|
||||
|
||||
|
||||
|
@ -11,21 +6,21 @@
|
|||
|
||||
This example assumes you have a Kubernetes cluster installed and running, and that you have
|
||||
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
|
||||
started](../../../docs/getting-started-guides/) for installation instructions for your platform.
|
||||
started](http://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
|
||||
|
||||
## Step One: Create the ConfigMap
|
||||
|
||||
A ConfigMap contains a set of named strings.
|
||||
|
||||
Use the [`examples/configmap/configmap.yaml`](configmap.yaml) file to create a ConfigMap:
|
||||
Use the [`configmap.yaml`](configmap.yaml) file to create a ConfigMap:
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/configmap/configmap.yaml
|
||||
```
|
||||
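The `configmap.yaml` file itself is not shown in this change; judging from the key names and values that appear in the output below, it presumably looks something like this sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  data-1: value-1
  data-2: value-2
```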
|
||||
You can use `kubectl` to see information about the ConfigMap:
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl get configmap
|
||||
NAME DATA
|
||||
test-configmap     2
|
||||
|
@ -43,7 +38,7 @@ data-2: 7 bytes
|
|||
|
||||
View the values of the keys with `kubectl get`:
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl get configmaps test-configmap -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
|
@ -61,16 +56,16 @@ metadata:
|
|||
|
||||
## Step Two: Create a pod that consumes a configMap in environment variables
|
||||
|
||||
Use the [`examples/configmap/env-pod.yaml`](env-pod.yaml) file to create a Pod that consumes the
|
||||
Use the [`env-pod.yaml`](env-pod.yaml) file to create a Pod that consumes the
|
||||
ConfigMap in environment variables.
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/configmap/env-pod.yaml
|
||||
```
|
||||
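The `env-pod.yaml` file is not reproduced in this change; based on the `KUBE_CONFIG_1`/`KUBE_CONFIG_2` output shown below, it presumably resembles the following sketch (the pod name is chosen to match the `kubectl logs` command below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: KUBE_CONFIG_1
      valueFrom:
        configMapKeyRef:
          name: test-configmap
          key: data-1
    - name: KUBE_CONFIG_2
      valueFrom:
        configMapKeyRef:
          name: test-configmap
          key: data-2
```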
|
||||
This pod runs the `env` command to display the environment of the container:
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl logs secret-test-pod
|
||||
KUBE_CONFIG_1=value-1
|
||||
KUBE_CONFIG_2=value-2
|
||||
|
@ -78,40 +73,29 @@ KUBE_CONFIG_2=value-2
|
|||
|
||||
## Step Three: Create a pod that sets the command line using ConfigMap
|
||||
|
||||
Use the [`examples/configmap/command-pod.yaml`](env-pod.yaml) file to create a Pod with a container
|
||||
Use the [`command-pod.yaml`](command-pod.yaml) file to create a Pod with a container
|
||||
whose command is injected with the keys of a ConfigMap.
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/configmap/command-pod.yaml
|
||||
```
|
||||
|
||||
This pod runs an `echo` command to display the keys:
|
||||
|
||||
```console
|
||||
```shell
|
||||
value-1 value-2
|
||||
```
|
||||
|
||||
## Step Four: Create a pod that consumes a configMap in a volume
|
||||
|
||||
Pods can also consume ConfigMaps in volumes. Use the [`examples/configmap/volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consume the ConfigMap in a volume.
|
||||
Pods can also consume ConfigMaps in volumes. Use the [`volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consumes the ConfigMap in a volume.
|
||||
|
||||
```console
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/configmap/volume-pod.yaml
|
||||
```
|
||||
|
||||
This pod runs a `cat` command to print the value of one of the keys in the volume:
|
||||
|
||||
```console
|
||||
```shell
|
||||
value-1
|
||||
```
|
||||
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: IS_VERSIONED -->
|
||||
<!-- TAG IS_VERSIONED -->
|
||||
<!-- END MUNGE: IS_VERSIONED -->
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
<!-- END MUNGE: GENERATED_ANALYTICS -->
|
||||
```
|
|
@ -34,7 +34,7 @@ The value of `metadata.name`, `hello-world`, will be the name of the pod resourc
|
|||
The [`command`](/docs/user-guide/containers/#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
|
||||
|
||||
```yaml
|
||||
command: ["/bin/echo"]
|
||||
command: ["/bin/echo"]
|
||||
args: ["hello","world"]
|
||||
```
|
||||
|
||||
|
@ -49,22 +49,17 @@ pods/hello-world
|
|||
|
||||
## Validating configuration
|
||||
|
||||
If you're not sure you specified the resource correctly, you can ask `kubectl` to validate it for you:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./hello-world.yaml --validate
|
||||
```
|
||||
Validation has been enabled by default in `kubectl` since v1.1.
|
||||
|
||||
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
|
||||
|
||||
```shell
|
||||
I0709 06:33:05.600829 14160 schema.go:126] unknown field: entrypoint
|
||||
I0709 06:33:05.600988 14160 schema.go:129] this may be a false alarm, see http://issue.k8s.io/6842 pods/hello-world
|
||||
error validating "./hello-world.yaml": error validating data: found invalid field Entrypoint for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
|
||||
```
|
||||
|
||||
`kubectl create --validate` currently warns about problems it detects, but creates the resource anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.
|
||||
If you use `kubectl create --validate=false` to turn validation off, the resource is created anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.
|
||||
View the [Pod API
|
||||
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions/#_v1_pod)
|
||||
object](/docs/api-reference/v1/definitions/#_v1_pod)
|
||||
to see the list of valid fields.
|
||||
|
||||
## Environment variables and variable expansion
|
||||
|
@ -76,7 +71,7 @@ apiVersion: v1
|
|||
kind: Pod
|
||||
metadata:
|
||||
name: hello-world
|
||||
spec: # specification of the pod's contents
|
||||
spec: # specification of the pod's contents
|
||||
restartPolicy: Never
|
||||
containers:
|
||||
- name: hello
|
||||
|
@ -91,7 +86,7 @@ spec: # specification of the pod's contents
|
|||
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/expansion):
|
||||
|
||||
```yaml
|
||||
command: ["/bin/echo"]
|
||||
command: ["/bin/echo"]
|
||||
args: ["$(MESSAGE)"]
|
||||
```
|
||||
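Putting the pieces together, a complete pod spec that expands an environment variable into the command arguments might look like the following sketch (the `MESSAGE` value is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]   # expanded by Kubernetes, no shell required
```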
|
||||
|
@ -129,7 +124,7 @@ hello-world 0/1 ExitCode:0 0 15s
|
|||
|
||||
## Viewing pod output
|
||||
|
||||
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/userguide/usingdocker/), `kubectl logs` will show you the output:
|
||||
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/), `kubectl logs` will show you the output:
|
||||
|
||||
```shell
|
||||
$ kubectl logs hello-world
|
||||
|
|
|
@ -12,7 +12,7 @@ By default, Docker uses host-private networking, so containers can talk to other
|
|||
|
||||
Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private IP address so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
|
||||
|
||||
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes).
|
||||
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html).
|
||||
|
||||
## Exposing pods to the cluster
|
||||
|
||||
|
@ -43,8 +43,8 @@ This makes it accessible from any node in your cluster. Check the nodes the pod
|
|||
```shell
|
||||
$ kubectl create -f ./nginxrc.yaml
|
||||
$ kubectl get pods -l app=nginx -o wide
|
||||
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-minion-93ly
|
||||
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-minion-93ly
|
||||
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-node-93ly
|
||||
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-node-93ly
|
||||
```
|
||||
|
||||
Check your pods' IPs:
|
||||
|
@ -83,7 +83,7 @@ spec:
|
|||
app: nginx
|
||||
```
|
||||
|
||||
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions/#_v1_service) to see the list of supported fields in service definition.
|
||||
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](/docs/api-reference/v1/definitions/#_v1_service) to see the list of supported fields in service definition.
|
||||
Check your Service:
|
||||
|
||||
```shell
|
||||
|
@ -289,7 +289,7 @@ Lets test this from a pod (the same secret is being reused for simplicity, the p
|
|||
|
||||
```shell
|
||||
$ cat curlpod.yaml
|
||||
vapiVersion: v1
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: curlrc
|
||||
|
@ -367,7 +367,7 @@ $ curl https://104.197.63.17:30645 -k
|
|||
Let's now recreate the Service to use a cloud load balancer; just change the `Type` of the Service in the nginx-app.yaml file from `NodePort` to `LoadBalancer`:
|
||||
|
||||
```shell
|
||||
$ kubectl delete rc, svc -l app=nginx
|
||||
$ kubectl delete rc,svc -l app=nginx
|
||||
$ kubectl create -f ./nginx-app.yaml
|
||||
$ kubectl get svc nginxsvc
|
||||
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
|
||||
|
@ -381,6 +381,18 @@ $ curl https://162.22.184.144 -k
|
|||
The IP address in the `EXTERNAL_IP` column is the one that is available on the public internet. The `CLUSTER_IP` is only available inside your
|
||||
cluster/private cloud network.
|
||||
|
||||
Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
|
||||
hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
|
||||
output, in fact, so you'll need to do `kubectl describe service nginxsvc` to
|
||||
see it. You'll see something like this:
|
||||
|
||||
```shell
|
||||
> kubectl describe service nginxsvc
|
||||
...
|
||||
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
|
||||
...
|
||||
```
|
||||
|
||||
## What's next?
|
||||
|
||||
[Learn about more Kubernetes features that will help you run containers reliably in production.](/docs/user-guide/production-pods)
|
||||
|
|
|
@ -18,9 +18,9 @@ we can use:
|
|||
|
||||
Docker images have metadata associated with them that is used to store information about the image.
|
||||
The image author may use this to define defaults for the command and arguments to run a container
|
||||
when the user does not supply values. Docker calls the fields for commands and arguments
|
||||
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
|
||||
describe here, mostly due to the fact that the Docker API allows users to specify both of these
|
||||
when the user does not supply values. Docker calls the fields for commands and arguments
|
||||
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
|
||||
describe here, mostly due to the fact that the docker API allows users to specify both of these
|
||||
fields as either a string array or a string and there are subtle differences in how those cases are
|
||||
handled. We encourage the curious to check out Docker's documentation for this feature.
|
||||
|
||||
|
@ -50,7 +50,7 @@ Here are examples for these rules in table format
|
|||
|
||||
By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. We can have fine-grained control over the capabilities using cap-add and cap-drop. More details [here](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration).
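For comparison, this is roughly what the same idea looks like with plain Docker; the image and the specific capabilities below are only examples:

```shell
# Grant NET_ADMIN and drop MKNOD for a single container run
$ docker run --cap-add=NET_ADMIN --cap-drop=MKNOD -it ubuntu /bin/bash
```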
|
||||
|
||||
The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7)
|
||||
The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html)
|
||||
|
||||
| Docker's capabilities | Linux capabilities |
|
||||
| ---- | ---- |
|
||||
|
|
|
@ -417,24 +417,16 @@ depends on your `Node` OS. On some OSes it is a file, such as
|
|||
should see something like:
|
||||
|
||||
```shell
|
||||
I0707 17:34:53.945651 30031 server.go:88] Running in resource-only container "/kube-proxy"
|
||||
I0707 17:34:53.945921 30031 proxier.go:121] Setting proxy IP to 10.240.115.247 and initializing iptables
|
||||
I0707 17:34:54.053023 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kubernetes: to [10.240.169.188:443]
|
||||
I0707 17:34:54.053175 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
|
||||
I0707 17:34:54.053284 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kube-dns:dns to [10.244.3.3:53]
|
||||
I0707 17:34:54.053310 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kube-dns:dns-tcp to [10.244.3.3:53]
|
||||
I0707 17:34:54.054780 30031 proxier.go:306] Adding new service "default/kubernetes:" at 10.0.0.1:443/TCP
|
||||
I0707 17:34:54.054903 30031 proxier.go:247] Proxying for service "default/kubernetes:" on TCP port 40074
|
||||
I0707 17:34:54.079181 30031 proxier.go:306] Adding new service "default/hostnames:default" at 10.0.1.175:80/TCP
|
||||
I0707 17:34:54.079273 30031 proxier.go:247] Proxying for service "default/hostnames:default" on TCP port 48577
|
||||
I0707 17:34:54.113665 30031 proxier.go:306] Adding new service "default/kube-dns:dns" at 10.0.0.10:53/UDP
|
||||
I0707 17:34:54.113776 30031 proxier.go:247] Proxying for service "default/kube-dns:dns" on UDP port 34149
|
||||
I0707 17:34:54.120224 30031 proxier.go:306] Adding new service "default/kube-dns:dns-tcp" at 10.0.0.10:53/TCP
|
||||
I0707 17:34:54.120297 30031 proxier.go:247] Proxying for service "default/kube-dns:dns-tcp" on TCP port 53476
|
||||
I0707 17:34:54.902313 30031 proxysocket.go:130] Accepted TCP connection from 10.244.3.3:42670 to 10.244.3.1:40074
|
||||
I0707 17:34:54.903107 30031 proxysocket.go:130] Accepted TCP connection from 10.244.3.3:42671 to 10.244.3.1:40074
|
||||
I0707 17:35:46.015868 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:57493
|
||||
I0707 17:35:46.017061 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:55471
|
||||
I1027 22:14:53.995134 5063 server.go:200] Running in resource-only container "/kube-proxy"
|
||||
I1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.
|
||||
I1027 22:14:53.999055 5063 server.go:255] Tearing down userspace rules. Errors here are acceptable.
|
||||
I1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.1.3:53]
|
||||
I1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.1.3:53]
|
||||
I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for "default/kubernetes:https" to [10.240.0.2:443]
|
||||
I1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master
|
||||
I1027 22:14:54.040048 5063 proxier.go:294] Adding new service "default/kubernetes:https" at 10.0.0.1:443/TCP
|
||||
I1027 22:14:54.040154 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns" at 10.0.0.10:53/UDP
|
||||
I1027 22:14:54.040223 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns-tcp" at 10.0.0.10:53/TCP
|
||||
```
|
||||
|
||||
If you see error messages about not being able to contact the master, you
|
||||
|
@ -446,6 +438,12 @@ One of the main responsibilities of `kube-proxy` is to write the `iptables`
|
|||
rules which implement `Service`s. Let's check that those rules are getting
|
||||
written.
|
||||
|
||||
The kube-proxy can run in either "userspace" mode or "iptables" mode.
|
||||
Hopefully you are using the newer, faster, more stable "iptables" mode. You
|
||||
should see one of the following cases.
|
||||
|
||||
#### Userspace
|
||||
|
||||
```shell
|
||||
u@node$ iptables-save | grep hostnames
|
||||
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
|
||||
|
@ -457,6 +455,27 @@ example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
|
|||
not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and
|
||||
then look at the logs again.
|
||||
|
||||
#### Iptables
|
||||
|
||||
```shell
|
||||
u@node$ iptables-save | grep hostnames
|
||||
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
|
||||
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
|
||||
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
|
||||
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1.7:9376
|
||||
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
|
||||
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2.3:9376
|
||||
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
|
||||
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ
|
||||
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3
|
||||
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
|
||||
```
|
||||
|
||||
There should be 1 rule in `KUBE-SERVICES`, 1 or 2 rules per endpoint in
|
||||
`KUBE-SVC-(hash)` (depending on `SessionAffinity`), one `KUBE-SEP-(hash)` chain
|
||||
per endpoint, and a few rules in each `KUBE-SEP-(hash)` chain. The exact rules
|
||||
will vary based on your exact config (including node-ports and load-balancers).
|
||||
|
||||
### Is kube-proxy proxying?
|
||||
|
||||
Assuming you do see the above rules, try again to access your `Service` by IP:
|
||||
|
@ -466,10 +485,12 @@ u@node$ curl 10.0.1.175:80
|
|||
hostnames-0uton
|
||||
```
|
||||
|
||||
If this fails, we can try accessing the proxy directly. Look back at the
|
||||
`iptables-save` output above, and extract the port number that `kube-proxy` is
|
||||
using for your `Service`. In the above examples it is "48577". Now connect to
|
||||
that:
|
||||
If this fails and you are using the userspace proxy, you can try accessing the
|
||||
proxy directly. If you are using the iptables proxy, skip this section.
|
||||
|
||||
Look back at the `iptables-save` output above, and extract the
|
||||
port number that `kube-proxy` is using for your `Service`. In the above
|
||||
examples it is "48577". Now connect to that:
|
||||
|
||||
```shell
|
||||
u@node$ curl localhost:48577
|
||||
|
|
|
@ -1,7 +1,6 @@
|
|||
---
|
||||
---
|
||||
|
||||
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](/docs/user-guide/quick-start) and how to configure and launch single-run containers using pods ([Configuring containers](/docs/user-guide/configuring-containers)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
@ -35,7 +34,7 @@ spec:
|
|||
|
||||
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
|
||||
View the [replication controller API
|
||||
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions/#_v1_replicationcontroller)
|
||||
object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller)
|
||||
to view the list of supported fields.
|
||||
|
||||
This replication controller can be created using `create`, just as with pods:
|
||||
|
|
|
@ -6,30 +6,16 @@
|
|||
|
||||
## What is a _Deployment_?
|
||||
|
||||
A _Deployment_ provides declarative update for Pods and ReplicationControllers.
|
||||
Users describe the desired state in deployment object and deployment
|
||||
controller changes the actual state to that at a controlled rate.
|
||||
Users can define deployments to create new resources, or replace existing ones
|
||||
A _Deployment_ provides declarative updates for Pods and ReplicationControllers.
|
||||
Users describe the desired state in a Deployment object, and the deployment
|
||||
controller changes the actual state to the desired state at a controlled rate.
|
||||
Users can define Deployments to create new resources, or replace existing ones
|
||||
by new ones.
|
||||
|
||||
A typical use case is:
|
||||
|
||||
* Create a deployment to bring up a replication controller and pods.
|
||||
* Later, update that deployment to recreate the pods (for ex: to use a new image).
|
||||
|
||||
## Enabling Deployments on kubernetes cluster
|
||||
|
||||
Deployments is part of the [`extensions` API Group](/docs/api/#api-groups) and is not enabled by default.
|
||||
Set `--runtime-config=extensions/v1beta1/deployments=true` on API server to
|
||||
enable it.
|
||||
This can be achieved by exporting `ENABLE_DEPLOYMENTS=true` before running
|
||||
`kube-up.sh` script on GCE.
|
||||
|
||||
Note that Deployment objects effectively have [API version
|
||||
`v1alpha1`](/docs/api/)#api-versioning).
|
||||
Alpha objects may change or even be discontinued in future software releases.
|
||||
However, due to to a known issue, they will appear as API version `v1beta1` if
|
||||
enabled.
|
||||
* Create a Deployment to bring up a replication controller and pods.
|
||||
* Later, update that Deployment to recreate the pods (for example, to use a new image).
|
||||
|
||||
## Creating a Deployment
|
||||
|
||||
|
@ -45,7 +31,13 @@ $ kubectl create -f docs/user-guide/nginx-deployment.yaml
|
|||
deployment "nginx-deployment" created
|
||||
```
|
||||
|
||||
Running a get immediately will give:
|
||||
Running
|
||||
|
||||
```console
|
||||
$ kubectl get deployments
|
||||
```
|
||||
|
||||
immediately will give:
|
||||
|
||||
```shell
|
||||
$ kubectl get deployments
|
||||
|
@ -53,10 +45,9 @@ NAME UPDATEDREPLICAS AGE
|
|||
nginx-deployment 0/3 8s
|
||||
```
|
||||
|
||||
This indicates that deployment is trying to update 3 replicas. It has not
|
||||
updated any one of those yet.
|
||||
This indicates that the Deployment is trying to update 3 replicas, and has not updated any of them yet.
|
||||
|
||||
Running a get again after a minute, will give:
|
||||
Running the `get` again after a minute, should give:
|
||||
|
||||
```shell
|
||||
$ kubectl get deployments
|
||||
|
@ -64,16 +55,14 @@ NAME UPDATEDREPLICAS AGE
|
|||
nginx-deployment 3/3 1m
|
||||
```
|
||||
|
||||
This indicates that deployent has created all the 3 replicas.
|
||||
Running ```kubectl get rc```
|
||||
and ```kubectl get pods```
|
||||
will show the replication controller (RC) and pods created.
|
||||
This indicates that the Deployment has created all three replicas.
|
||||
Running `kubectl get rc` and `kubectl get pods` will show the replication controller (RC) and pods created.
|
||||
|
||||
```shell
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
|
||||
REPLICAS AGE
|
||||
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 3 2m
|
||||
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 3 2m
|
||||
```
|
||||
|
||||
```shell
|
||||
|
@ -84,22 +73,24 @@ deploymentrc-1975012602-j975u 1/1 Running 0 1m
|
|||
deploymentrc-1975012602-uashb 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
The created RC will ensure that there are 3 nginx pods at all time.
|
||||
The created RC will ensure that there are three nginx pods at all times.
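You can see this self-healing in action by deleting one of the pods and listing them again; the pod name below is taken from the sample output above and will differ in your cluster:

```shell
$ kubectl delete pod deploymentrc-1975012602-j975u
$ kubectl get pods -l app=nginx     # a freshly created replacement appears
```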
|
||||
|
||||
## Updating a Deployment
|
||||
|
||||
Lets say, now we want to update the nginx pods to start using nginx:1.9.1 image
|
||||
instead of nginx:1.7.9.
|
||||
For this, we update our deployment to be as follows:
|
||||
Suppose that we now want to update the nginx pods to start using the `nginx:1.9.1` image
|
||||
instead of the `nginx:1.7.9` image.
|
||||
For this, we update our deployment file as follows:
|
||||
|
||||
{% include code.html language="yaml" file="new-nginx-deployment.yaml" ghlink="/docs/user-guide/new-nginx-deployment.yaml" %}
|
||||
|
||||
We can then `apply` the Deployment:
|
||||
|
||||
```shell
|
||||
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
|
||||
deployment "nginx-deployment" configured
|
||||
```
|
||||
|
||||
Running a get immediately will still give:
|
||||
Running a `get` immediately will still give:
|
||||
|
||||
```shell
|
||||
$ kubectl get deployments
|
||||
|
@ -109,7 +100,7 @@ nginx-deployment 3/3 8s
|
|||
|
||||
This indicates that deployment status has not been updated yet (it is still
|
||||
showing old status).
|
||||
Running a get again after a minute, will give:
|
||||
Running a `get` again after a minute, should show:
|
||||
|
||||
```shell
|
||||
$ kubectl get deployments
|
||||
|
@ -117,9 +108,9 @@ NAME UPDATEDREPLICAS AGE
|
|||
nginx-deployment 1/3 1m
|
||||
```
|
||||
|
||||
This indicates that deployment has updated one of the three pods that it needs
|
||||
This indicates that the Deployment has updated one of the three pods that it needs
|
||||
to update.
|
||||
Eventually, it will get around to updating all the pods.
|
||||
Eventually, it will update all the pods.
|
||||
|
||||
```shell
|
||||
$ kubectl get deployments
|
||||
|
@ -127,18 +118,17 @@ NAME UPDATEDREPLICAS AGE
|
|||
nginx-deployment 3/3 3m
|
||||
```
|
||||
|
||||
We can run `kubectl get rc`
|
||||
to see that deployment updated the pods by creating a new RC
|
||||
which it scaled up to 3 and scaled down the old RC to 0.
|
||||
We can run `kubectl get rc` to see that the Deployment updated the pods by creating a new RC,
|
||||
which it scaled up to 3 replicas, and has scaled down the old RC to 0 replicas.
|
||||
|
||||
```shell
|
||||
kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
|
||||
deploymentrc-1562004724 nginx nginx:1.9.1 deployment.kubernetes.io/podTemplateHash=1562004724,app=nginx 3 5m
|
||||
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 0 7m
|
||||
deploymentrc-1562004724 nginx nginx:1.9.1 pod-template-hash=1562004724,app=nginx 3 5m
|
||||
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 0 7m
|
||||
```
|
||||
|
||||
Running get pods, will only show the new pods.
|
||||
Running `get pods` should now show only the new pods:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
|
@ -148,7 +138,7 @@ deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
|
|||
deploymentrc-1562004724-6v702 1/1 Running 0 8m
|
||||
```
|
||||
|
||||
Next time we want to update pods, we can just update the deployment again.
|
||||
Next time we want to update these pods, we can just update and re-apply the Deployment again.
|
||||
|
||||
Deployment ensures that not all pods are down while they are being updated. By
|
||||
default, it ensures that minimum of 1 less than the desired number of pods are
|
||||
|
@ -177,27 +167,29 @@ Events:
|
|||
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
|
||||
```
|
||||
|
||||
Here we see that when we first created the deployment, it created an RC and scaled it up to 3 replicas directly.
|
||||
When we updated the deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
|
||||
Here we see that when we first created the Deployment, it created an RC and scaled it up to 3 replicas directly.
|
||||
When we updated the Deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
|
||||
It then scaled up the new RC to 3 and when those pods were ready, it scaled down the old RC to 0.
|
||||
|
||||
### Multiple Updates
|
||||
|
||||
Each time a new deployment object is observed, a replication controller is
|
||||
created to bring up the desired pods if there is no existing RC doing so.
|
||||
Existing RCs controlling pods whose labels match `.spec.selector` but the
|
||||
Existing RCs controlling pods whose labels match `.spec.selector` but whose
|
||||
template does not match `.spec.template` are scaled down.
|
||||
Eventually, the new RC will be scaled to `.spec.replicas` and all old RCs will
|
||||
be scaled to 0.
|
||||
If the user updates the deployment while an existing deployment was in progress,
|
||||
deployment will create a new RC as per the update and start scaling that up and
|
||||
will roll the RC that it was scaling up before in its list of old RCs and will
|
||||
|
||||
If the user updates a Deployment while an existing deployment is in progress,
|
||||
the Deployment will create a new RC as per the update and start scaling that up, and
|
||||
will roll the RC that it was scaling up previously into its list of old RCs, and will
|
||||
start scaling it down.
|
||||
For example: If user creates a deployment to create 5 replicas of nginx:1.7.9.
|
||||
But then updates the deployment to create 5 replicas of nging:1.9.1, when only 3
|
||||
replicas of nginx:1.7.9 had been created, then deployment will immediately start
|
||||
killing the 3 nginx:1.7.9 pods that it had created and will start creating
|
||||
nginx:1.9.1 pods. It will not wait for 5 replicas of nginx:1.7.9 to be created
|
||||
|
||||
For example, suppose the user creates a deployment to create 5 replicas of `nginx:1.7.9`,
|
||||
but then updates the deployment to create 5 replicas of `nginx:1.9.1`, when only 3
|
||||
replicas of `nginx:1.7.9` had been created. In that case, deployment will immediately start
|
||||
killing the 3 `nginx:1.7.9` pods that it had created, and will start creating
|
||||
`nginx:1.9.1` pods. It will not wait for 5 replicas of `nginx:1.7.9` to be created
|
||||
before changing course.
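A sketch of what that looks like in practice, assuming you keep each revision of the Deployment in its own file (the file names here are hypothetical):

```shell
$ kubectl apply -f nginx-deployment-1.9.1.yaml    # first update starts rolling out
$ kubectl apply -f nginx-deployment-1.10.yaml     # second update issued before the first finishes
$ kubectl describe deployment nginx-deployment    # events show the superseded RC being scaled down
```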
|
||||
|
||||
## Writing a Deployment Spec
|
||||
|
@ -218,7 +210,7 @@ the same schema as a [pod](/docs/user-guide/pods), except it is nested and does
|
|||
|
||||
### Replicas
|
||||
|
||||
`.spec.replicas` is an optional field that specifies the number of desired pods. Defaults
|
||||
`.spec.replicas` is an optional field that specifies the number of desired pods. It defaults
|
||||
to 1.
|
||||
|
||||
### Selector
|
||||
|
@ -229,20 +221,9 @@ template is different than `.spec.template` or if the total number of such pods
|
|||
exceeds `.spec.replicas`. It will bring up new pods with `.spec.template` if
|
||||
the number of pods is less than the desired number.
|
||||
|
||||
### Unique Label Key
|
||||
|
||||
`.spec.uniqueLabelKey` is an optional field specifying key of the selector that
|
||||
is added to existing RCs (and label key that is added to its pods) to prevent
|
||||
the existing RCs to select new pods (and old pods being selected by new RC).
|
||||
Users can set this to an empty string to indicate that the system should
|
||||
not add any selector and label. If unspecified, system uses
|
||||
"deployment.kubernetes.io/podTemplateHash".
|
||||
Value of this key is hash of `.spec.template`.
|
||||
No label is added if this is set to empty string.
|
||||
|
||||
### Strategy
|
||||
|
||||
`.spec.strategy` specifies the strategy to replace old pods by new ones.
|
||||
`.spec.strategy` specifies the strategy used to replace old pods by new ones.
|
||||
`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
|
||||
the default value.
|
||||
|
||||
|
@ -250,11 +231,11 @@ the default value.
|
|||
|
||||
All existing pods are killed before new ones are created when
|
||||
`.spec.strategy.type==Recreate`.
|
||||
Note: This is not implemented yet.
|
||||
__Note: This is not implemented yet__.
|
||||
|
||||
#### Rolling Update Deployment
|
||||
|
||||
Deployment updates pods in a [rolling update][update-demo/] fashion
|
||||
The Deployment updates pods in a [rolling update](/docs/user-guide/update-demo/) fashion
|
||||
when `.spec.strategy.type==RollingUpdate`.
|
||||
Users can specify `maxUnavailable`, `maxSurge` and `minReadySeconds` to control
|
||||
the rolling update process.
|
||||
|
@ -263,39 +244,41 @@ the rolling update process.
|
|||
|
||||
`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the
|
||||
maximum number of pods that can be unavailable during the update process.
|
||||
Value can be an absolute number (ex: 5) or a percentage of desired pods (ex:
|
||||
10%).
|
||||
Absolute number is calculated from percentage by rounding up.
|
||||
The value can be an absolute number (e.g. 5) or a percentage of desired pods
|
||||
(e.g. 10%).
|
||||
The absolute number is calculated from percentage by rounding up.
|
||||
This can not be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0.
|
||||
By default, a fixed value of 1 is used.
|
||||
Example: when this is set to 30%, the old RC can be scaled down to
|
||||
|
||||
For example, when this value is set to 30%, the old RC can be scaled down to
|
||||
70% of desired pods immediately when the rolling update starts. Once new pods are
|
||||
ready, old RC can be scaled down further, followed by scaling up the new RC,
|
||||
ensuring that the total number of pods available at all times during the
|
||||
update is at least 70% of desired pods.
|
||||
update is at least 70% of the desired pods.
|
||||
|
||||
##### Max Surge
|
||||
|
||||
`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the
|
||||
maximum number of pods that can be created above the desired number of pods.
|
||||
Value can be an absolute number (ex: 5) or a percentage of desired pods (ex:
|
||||
10%).
|
||||
This can not be 0 if MaxUnavailable is 0.
|
||||
Absolute number is calculated from percentage by rounding up.
|
||||
Value can be an absolute number (e.g. 5) or a percentage of desired pods
|
||||
(e.g. 10%).
|
||||
This can not be 0 if `MaxUnavailable` is 0.
|
||||
The absolute number is calculated from percentage by rounding up.
|
||||
By default, a value of 1 is used.
|
||||
Example: when this is set to 30%, the new RC can be scaled up immediately when
|
||||
|
||||
For example, when this value is set to 30%, the new RC can be scaled up immediately when
|
||||
the rolling update starts, such that the total number of old and new pods do not exceed
|
||||
130% of desired pods. Once old pods have been killed,
|
||||
new RC can be scaled up further, ensuring that total number of pods running
|
||||
at any time during the update is atmost 130% of desired pods.
|
||||
the new RC can be scaled up further, ensuring that the total number of pods running
|
||||
at any time during the update is at most 130% of desired pods.
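Putting the two knobs together, here is a hedged sketch of a Deployment that tolerates 30% unavailability and 30% surge; with `replicas: 10` that means at least 7 and at most 13 pods at any point during the update (the file name and image are illustrative):

```shell
$ cat > nginx-deployment-rolling.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%
      maxSurge: 30%
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
EOF
$ kubectl apply -f nginx-deployment-rolling.yaml
```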
|
||||
|
||||
##### Min Ready Seconds
|
||||
|
||||
`.spec.strategy.rollingUpdate.minReadySeconds` is an optional field that specifies the
|
||||
`.spec.minReadySeconds` is an optional field that specifies the
|
||||
minimum number of seconds for which a newly created pod should be ready
|
||||
without any of its container crashing, for it to be considered available.
|
||||
Defaults to 0 (pod will be considered available as soon as it is ready).
|
||||
Note: This is not implemented yet.
|
||||
without any of its containers crashing, for it to be considered available.
|
||||
This defaults to 0 (the pod will be considered available as soon as it is ready).
|
||||
To learn more about when a pod is considered ready, see [Container Probes](/docs/user-guide/pod-states/#container-probes).
|
||||
|
||||
## Alternative to Deployments
|
||||
|
||||
|
|
|
@ -84,12 +84,11 @@ In future, it will be possible to specify a specific annotation or label.
|
|||
|
||||
## Example
|
||||
|
||||
This is an example of a pod that consumes its labels and annotations via the downward API volume, labels and annotations are dumped in `/etc/podlabels` and in `/etc/annotations`, respectively:
|
||||
This is an example of a pod that consumes its labels and annotations via the downward API volume, labels and annotations are dumped in `/etc/labels` and in `/etc/annotations`, respectively:
|
||||
|
||||
{% include code.html language="yaml" file="downward-api/volume/dapi-volume.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume.yaml" %}
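Once that pod is running, the dumped metadata can be read straight back out of the volume; the pod name below comes from the example manifest and may differ if you changed it:

```shell
$ kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/labels
$ kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/annotations
```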
|
||||
|
||||
|
||||
Some more thorough examples:
|
||||
|
||||
* [environment variables](/docs/user-guide/environment-guide/)
|
||||
* [downward API](/docs/user-guide/downward-api/)
|
||||
* [downward API](/docs/user-guide/downward-api/)
|
||||
|
|
|
@ -1,21 +1,18 @@
|
|||
---
|
||||
---
|
||||
|
||||
Following this example, you will create a pod with a container that consumes the pod's name and
|
||||
namespace using the [downward API](/docs/user-guide/downward-api/).
|
||||
namespace using the [downward API](http://kubernetes.io/docs/user-guide/downward-api/).
|
||||
|
||||
## Step Zero: Prerequisites
|
||||
|
||||
This example assumes you have a Kubernetes cluster installed and running, and that you have
|
||||
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
|
||||
started](/docs/getting-started-guides/) for installation instructions for your platform.
|
||||
started](http://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
|
||||
|
||||
## Step One: Create the pod
|
||||
|
||||
Containers consume the downward API using environment variables. The downward API allows
|
||||
containers to be injected with the name and namespace of the pod the container is in.
|
||||
|
||||
Use the [`dapi-pod.yaml`](/docs/user-guide/downward-api/dapi-pod.yaml) file to create a Pod with a container that consumes the
|
||||
Use the [`dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
|
||||
downward API.
|
||||
|
||||
```shell
|
|
@ -13,8 +13,7 @@ Supported metadata fields:
|
|||
|
||||
### Step Zero: Prerequisites
|
||||
|
||||
This example assumes you have a Kubernetes cluster installed and running, and the `kubectl`
|
||||
command line tool somewhere in your path. Please see the [gettingstarted](/docs/getting-started-guides/) for installation instructions for your platform.
|
||||
This example assumes you have a Kubernetes cluster installed and running, and the `kubectl` command line tool somewhere in your path. Please see the [gettingstarted](/docs/getting-started-guides/) for installation instructions for your platform.
|
||||
|
||||
### Step One: Create the pod
|
||||
|
||||
|
|
|
@ -23,7 +23,7 @@ for your platform.
|
|||
## Optional: Build your own containers
|
||||
|
||||
The code for the containers is under
|
||||
[containers/](https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/environment-guide/containers/)
|
||||
[containers/](/docs/user-guide/environment-guide/containers/)
|
||||
|
||||
## Get everything running
|
||||
|
||||
|
@ -40,8 +40,8 @@ Use `kubectl describe service show-srv` to determine the public IP of
|
|||
your service.
|
||||
|
||||
> Note: If your platform does not support external load balancers,
|
||||
> you'll need to open the proper port and direct traffic to the
|
||||
> internal IP shown for the frontend service with the above command
|
||||
you'll need to open the proper port and direct traffic to the
|
||||
internal IP shown for the frontend service with the above command
|
||||
|
||||
Run `curl <public ip>:80` to query the service. You should get
|
||||
something like this back:
|
||||
|
|
|
@ -18,6 +18,9 @@ pull an image if it already exists. If you would like to always force a pull
|
|||
you must set a pull image policy of `Always` or specify a `:latest` tag on
|
||||
your image.
|
||||
|
||||
If you do not specify a tag for your image, it is assumed to be `:latest`, and a
|
||||
pull image policy of `Always` is applied correspondingly.
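A minimal sketch of pinning a tag and making the pull policy explicit; the pod name, image, and policy value are only examples:

```shell
$ cat > pinned-nginx.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pinned-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
EOF
$ kubectl create -f pinned-nginx.yaml
```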
|
||||
|
||||
## Using a Private Registry
|
||||
|
||||
Private registries may require keys to read images from them.
|
||||
|
@ -52,6 +55,21 @@ Google service account. The service account on the instance
|
|||
will have a `https://www.googleapis.com/auth/devstorage.read_only`,
|
||||
so it can pull from the project's GCR, but not push.
|
||||
|
||||
### Using AWS EC2 Container Registry
|
||||
|
||||
Kubernetes has native support for the [AWS EC2 Container
|
||||
Registry](https://aws.amazon.com/ecr/), when nodes are AWS instances.
|
||||
|
||||
Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)
|
||||
in the Pod definition.
|
||||
|
||||
All users of the cluster who can create pods will be able to run pods that use any of the
|
||||
images in the ECR registry.
|
||||
|
||||
The kubelet will fetch and periodically refresh ECR credentials. It needs the
|
||||
`ecr:GetAuthorizationToken` permission to do this.
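In other words, no extra configuration is needed in the pod itself; a hedged example (the account ID, region, repository, and tag are placeholders):

```shell
$ kubectl run ecr-test --image=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v1
```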
|
||||
|
||||
|
||||
### Configuring Nodes to Authenticate to a Private Repository
|
||||
|
||||
**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
|
||||
|
@ -61,18 +79,19 @@ with credentials for Google Container Registry. You cannot use this approach.
|
|||
will not work reliably on GCE, and any other cloud provider that does automatic
|
||||
node replacement.
|
||||
|
||||
Docker stores keys for private registries in the `$HOME/.dockercfg` file. If you put this
|
||||
in the `$HOME` of `root` on a kubelet, then docker will use it.
|
||||
Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put this
|
||||
in the `$HOME` of user `root` on a kubelet, then docker will use it.
|
||||
|
||||
Here are the recommended steps to configuring your nodes to use a private registry. In this
|
||||
example, run these on your desktop/laptop:
|
||||
|
||||
1. run `docker login [server]` for each set of credentials you want to use.
|
||||
1. view `$HOME/.dockercfg` in an editor to ensure it contains just the credentials you want to use.
|
||||
1. get a list of your nodes
|
||||
- for example: `nodes=$(kubectl get nodes -o template --template='{{range.items}}{{.metadata.name}} {{end}}')`
|
||||
1. copy your local `.dockercfg` to the home directory of root on each node.
|
||||
- for example: `for n in $nodes; do scp ~/.dockercfg root@$n:/root/.dockercfg; done`
|
||||
1. run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
|
||||
1. view `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
|
||||
1. get a list of your nodes, for example:
|
||||
- if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
|
||||
- if you want to get the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
|
||||
1. copy your local `.docker/config.json` to the home directory of root on each node.
|
||||
- for example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`
|
||||
|
||||
Verify by creating a pod that uses a private image, e.g.:
|
||||
|
||||
|
@ -108,12 +127,13 @@ $ kubectl describe pods/private-image-test-1 | grep "Failed"
|
|||
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
|
||||
```
|
||||
|
||||
You must ensure all nodes in the cluster have the same `.dockercfg`. Otherwise, pods will run on
|
||||
|
||||
You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on
|
||||
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
|
||||
template needs to include the `.dockercfg` or mount a drive that contains it.
|
||||
template needs to include the `.docker/config.json` or mount a drive that contains it.
|
||||
|
||||
All pods will have read access to images in any private registry once private
|
||||
registry keys are added to the `.dockercfg`.
|
||||
registry keys are added to the `.docker/config.json`.
|
||||
|
||||
**This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.
|
||||
It should also work for a private registry such as quay.io, but that has not been tested.**
|
||||
|
@ -145,43 +165,53 @@ where node creation is automated.
|
|||
|
||||
Kubernetes supports specifying registry keys on a pod.
|
||||
|
||||
First, create a `.dockercfg`, such as running `docker login <registry.domain>`.
|
||||
Then put the resulting `.dockercfg` file into a [secret resource](/docs/user-guide/secrets). For example:
|
||||
#### Creating a Secret with a Docker Config
|
||||
|
||||
Run the following command, substituting the appropriate uppercase values:
|
||||
|
||||
```shell
|
||||
$ docker login
|
||||
Username: janedoe
|
||||
Password: ********
|
||||
Email: jdoe@example.com
|
||||
WARNING: login credentials saved in /Users/jdoe/.dockercfg.
|
||||
Login Succeeded
|
||||
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
|
||||
secret "myregistrykey" created.
|
||||
```
|
||||
|
||||
$ echo $(cat ~/.dockercfg)
|
||||
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }
|
||||
If you need access to multiple registries, you can create one secret for each registry.
|
||||
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
|
||||
when pulling images for your Pods.
|
||||
|
||||
$ cat ~/.dockercfg | base64
|
||||
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
Pods can only reference image pull secrets in their own namespace,
|
||||
so this process needs to be done one time per namespace.
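For example, to make the same credentials available in the `awesomeapps` namespace used later on this page (all the DOCKER_* values are placeholders, as above):

```shell
$ kubectl create secret docker-registry myregistrykey \
    --namespace=awesomeapps \
    --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
```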
|
||||
|
||||
$ cat > /tmp/image-pull-secret.yaml <<EOF
|
||||
##### Bypassing kubectl create secrets
|
||||
|
||||
If for some reason you need multiple items in a single `.docker/config.json` or need
|
||||
control not given by the above command, then you can [create a secret using
|
||||
json or yaml](/docs/user-guide/secrets/#creating-a-secret-manually).
|
||||
|
||||
Be sure to:
|
||||
|
||||
- set the name of the data item to `.dockerconfigjson`
|
||||
- base64 encode the Docker config file and paste that string, unbroken
|
||||
as the value for field `data[".dockerconfigjson"]`
|
||||
- set `type` to `kubernetes.io/dockerconfigjson`
|
||||
|
||||
Example:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: myregistrykey
|
||||
namespace: awesomeapps
|
||||
data:
|
||||
.dockercfg: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
type: kubernetes.io/dockercfg
|
||||
EOF
|
||||
|
||||
$ kubectl create -f /tmp/image-pull-secret.yaml
|
||||
secrets/myregistrykey
|
||||
$
|
||||
.dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
|
||||
type: kubernetes.io/dockerconfigjson
|
||||
```
|
||||
|
||||
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
|
||||
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockercfg]: invalid value ...` it means
|
||||
the data was successfully un-base64 encoded, but could not be parsed as a dockercfg file.
|
||||
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...` it means
|
||||
the data was successfully un-base64 encoded, but could not be parsed as a `.docker/config.json` file.
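A common cause of the first error is a base64 string that was wrapped or truncated when pasting. One way to produce it without line breaks on Linux is shown below (the `-w 0` flag is GNU coreutils specific; on macOS the default output is already unwrapped):

```shell
$ base64 -w 0 ~/.docker/config.json
```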
|
||||
|
||||
This process only needs to be done one time (per namespace).
|
||||
#### Referring to imagePullSecrets on a Pod
|
||||
|
||||
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
|
||||
section to a pod definition.
|
||||
|
@ -191,6 +221,7 @@ apiVersion: v1
|
|||
kind: Pod
|
||||
metadata:
|
||||
name: foo
|
||||
namespace: awesomeapps
|
||||
spec:
|
||||
containers:
|
||||
- name: foo
|
||||
|
@ -200,15 +231,11 @@ spec:
|
|||
```
|
||||
|
||||
This needs to be done for each pod that is using a private registry.
|
||||
|
||||
However, setting of this field can be automated by setting the imagePullSecrets
|
||||
in a [serviceAccount](/docs/user-guide/service-accounts) resource.
|
||||
|
||||
Currently, all pods will potentially have read access to any images which were
|
||||
pulled using imagePullSecrets. That is, imagePullSecrets does *NOT* protect your
|
||||
images from being seen by other users in the cluster. Our intent
|
||||
is to fix that.
|
||||
|
||||
You can use this in conjunction with a per-node `.dockerfile`. The credentials
|
||||
You can use this in conjunction with a per-node `.docker/config.json`. The credentials
|
||||
will be merged. This approach will work on Google Container Engine (GKE).
|
||||
|
||||
### Use Cases
|
||||
|
@ -216,22 +243,25 @@ will be merged. This approach will work on Google Container Engine (GKE).
|
|||
There are a number of solutions for configuring private registries. Here are some
|
||||
common use cases and suggested solutions.
|
||||
|
||||
1. Cluster running only non-proprietary (e.g open-source) images. No need to hide images.
|
||||
1. Cluster running only non-proprietary (e.g open-source) images. No need to hide images.
|
||||
- Use public images on the Docker hub.
|
||||
- no configuration required
|
||||
- on GCE/GKE, a local mirror is automatically used for improved speed and availability
|
||||
1. Cluster running some proprietary images which should be hidden to those outside the company, but
|
||||
1. Cluster running some proprietary images which should be hidden to those outside the company, but
|
||||
visible to all cluster users.
|
||||
- Use a hosted private [Docker registry](https://docs.docker.com/registry/)
|
||||
- may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere.
|
||||
- manually configure .dockercfg on each node as described above
|
||||
- manually configure .docker/config.json on each node as described above
|
||||
- Or, run an internal private registry behind your firewall with open read access.
|
||||
- no Kubernetes configuration required
|
||||
- Or, when on GCE/GKE, use the project's Google Container Registry.
|
||||
- will work better with cluster autoscaling than manual node configuration
|
||||
- Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
|
||||
1. Cluster with a proprietary images, a few of which require stricter access control
|
||||
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
|
||||
- DO NOT use imagePullSecrets for this use case yet.
|
||||
1. A multi-tenant cluster where each tenant needs own private registry
|
||||
- NOT supported yet.
|
||||
1. Cluster with proprietary images, a few of which require stricter access control
|
||||
- ensure [AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages) is active, otherwise, all Pods potentially have access to all images
|
||||
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
|
||||
1. A multi-tenant cluster where each tenant needs its own private registry
|
||||
- ensure [AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages) is active, otherwise, all Pods of all tenants potentially have access to all images
|
||||
- run a private registry with authorization required.
|
||||
- generate registry credential for each tenant, put into secret, and populate secret to each tenant namespace.
|
||||
- tenant adds that secret to imagePullSecrets of each namespace.
|
||||
|
|
|
@ -4,106 +4,87 @@
|
|||
* TOC
|
||||
{:toc}
|
||||
|
||||
The user guide is intended for anyone who wants to run programs and services on an existing Kubernetes cluster. Setup and administration of a Kubernetes cluster is described in the [Cluster Admin Guide](/docs/admin/). The [Developer Guide](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/) is for anyone wanting to either write code which directly accesses the Kubernetes API, or to contribute directly to the Kubernetes project.
|
||||
|
||||
Please ensure you have completed the [prerequisites for running examples from the user guide](/docs/user-guide/prereqs).
|
||||
The user guide is intended for anyone who wants to run programs and services on an existing Kubernetes cluster. Setup and administration of a Kubernetes cluster is described in the [Cluster Admin Guide](/docs/admin/). The [Developer Guide](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel) is for anyone wanting to either write code which directly accesses the Kubernetes API, or to contribute directly to the Kubernetes project.
|
||||
|
||||
Please ensure you have completed the [prerequisites for running examples from the user guide](/docs/user-guide/prereqs/).
|
||||
|
||||
## Quick walkthrough
|
||||
|
||||
1. [Kubernetes 101](/docs/user-guide/walkthrough/)
|
||||
1. [Kubernetes 201](/docs/user-guide/walkthrough/k8s201)
|
||||
1. [Kubernetes 201](/docs/user-guide/walkthrough/k8s201/)
|
||||
|
||||
## Thorough walkthrough
|
||||
|
||||
If you don't have much familiarity with Kubernetes, we recommend you read the following sections in order:
|
||||
|
||||
1. [Quick start: launch and expose an application](/docs/user-guide/quick-start)
|
||||
1. [Configuring and launching containers: configuring common container parameters](/docs/user-guide/configuring-containers)
|
||||
1. [Deploying continuously running applications](/docs/user-guide/deploying-applications)
|
||||
1. [Connecting applications: exposing applications to clients and users](/docs/user-guide/connecting-applications)
|
||||
1. [Working with containers in production](/docs/user-guide/production-pods)
|
||||
1. [Managing deployments](/docs/user-guide/managing-deployments)
|
||||
1. [Application introspection and debugging](/docs/user-guide/introspection-and-debugging)
|
||||
1. [Using the Kubernetes web user interface](/docs/user-guide/ui)
|
||||
1. [Logging](/docs/user-guide/logging)
|
||||
1. [Monitoring](/docs/user-guide/monitoring)
|
||||
1. [Getting into containers via `exec`](/docs/user-guide/getting-into-containers)
|
||||
1. [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy)
|
||||
1. [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward)
|
||||
|
||||
## Overview
|
||||
|
||||
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. Kubernetes is intended to make deploying containerized/microservice-based applications easy but powerful.
|
||||
|
||||
Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure that the state of the cluster continually matches the user's intentions. An operations user should be able to launch a micro-service, letting the scheduler find the right placement. We also want to improve the tools and experience for how users can roll-out applications through patterns like canary deployments.
|
||||
|
||||
Kubernetes supports [Docker](http://www.docker.io) and [Rocket](https://coreos.com/blog/rocket/) containers, and other container image formats and container runtimes will be supported in the future.
|
||||
|
||||
While Kubernetes currently focuses on continuously-running stateless (e.g. web server or in-memory object cache) and "cloud native" stateful applications (e.g. NoSQL datastores), in the near future it will support all the other workload types commonly found in production cluster environments, such as batch, stream processing, and traditional databases.
|
||||
|
||||
In Kubernetes, all containers run inside [pods](/docs/user-guide/pods). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](/docs/user-guide/volumes), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](/docs/user-guide/replication-controller), which we discuss next.
|
||||
|
||||
Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](/docs/user-guide/replication-controller). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.
|
||||
|
||||
Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](/docs/user-guide/labels), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](/docs/user-guide/annotations).
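For instance, attaching a label and then selecting on it looks like this (the pod name and label are illustrative):

```shell
$ kubectl label pod my-nginx-6isf4 tier=frontend
$ kubectl get pods -l tier=frontend
```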
|
||||
|
||||
Kubernetes supports a unique [networking model](/docs/admin/networking). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
|
||||
|
||||
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](/docs/user-guide/services) abstraction, which provides a stable IP address and [DNS name](/docs/admin/dns) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
|
||||
|
||||
Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object's name, and the object's [namespace](/docs/user-guide/namespaces). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space.
|
||||
1. [Quick start: launch and expose an application](/docs/user-guide/quick-start/)
|
||||
1. [Configuring and launching containers: configuring common container parameters](/docs/user-guide/configuring-containers/)
|
||||
1. [Deploying continuously running applications](/docs/user-guide/deploying-applications/)
|
||||
1. [Connecting applications: exposing applications to clients and users](/docs/user-guide/connecting-applications/)
|
||||
1. [Working with containers in production](/docs/user-guide/production-pods/)
|
||||
1. [Managing deployments](/docs/user-guide/managing-deployments/)
|
||||
1. [Application introspection and debugging](/docs/user-guide/introspection-and-debugging/)
|
||||
1. [Using the Kubernetes web user interface](/docs/user-guide/ui/)
|
||||
1. [Logging](/docs/user-guide/logging/)
|
||||
1. [Monitoring](/docs/user-guide/monitoring/)
|
||||
1. [Getting into containers via `exec`](/docs/user-guide/getting-into-containers/)
|
||||
1. [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy/)
|
||||
1. [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward/)
|
||||
|
||||
## Concept guide
|
||||
|
||||
[**Cluster**](/docs/admin/)
|
||||
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.
|
||||
|
||||
[**Node**](/docs/admin/node)
|
||||
[**Node**](/docs/admin/node/)
|
||||
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
|
||||
|
||||
[**Pod**](/docs/user-guide/pods)
|
||||
[**Pod**](/docs/user-guide/pods/)
|
||||
: A pod is a co-located group of containers and volumes.
|
||||
|
||||
[**Label**](/docs/user-guide/labels)
|
||||
[**Label**](/docs/user-guide/labels/)
|
||||
: A label is a key/value pair that is attached to a resource, such as a pod, to convey a user-defined identifying attribute. Labels can be used to organize and to select subsets of resources.
|
||||
|
||||
[**Selector**](/docs/user-guide/labels/#label-selectors)
|
||||
: A selector is an expression that matches labels in order to identify related resources, such as which pods are targeted by a load-balanced service.
|
||||
|
||||
[**Replication Controller**](/docs/user-guide/replication-controller)
|
||||
[**Replication Controller**](/docs/user-guide/replication-controller/)
|
||||
: A replication controller ensures that a specified number of pod replicas are running at any one time. It both allows for easy scaling of replicated systems and handles re-creation of a pod when the machine it is on reboots or otherwise fails.
|
||||
|
||||
[**Service**](/docs/user-guide/services)
|
||||
[**Service**](/docs/user-guide/services/)
|
||||
: A service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name.
|
||||
|
||||
[**Volume**](/docs/user-guide/volumes)
|
||||
[**Volume**](/docs/user-guide/volumes/)
|
||||
: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the volume directory and/or device.
|
||||
|
||||
[**Secret**](/docs/user-guide/secrets)
|
||||
[**Secret**](/docs/user-guide/secrets/)
|
||||
: A secret stores sensitive data, such as authentication tokens, which can be made available to containers upon request.
|
||||
|
||||
[**Name**](/docs/user-guide/identifiers)
|
||||
[**Name**](/docs/user-guide/identifiers/)
|
||||
: A user- or client-provided name for a resource.
|
||||
|
||||
[**Namespace**](/docs/user-guide/namespaces)
|
||||
[**Namespace**](/docs/user-guide/namespaces/)
|
||||
: A namespace is like a prefix to the name of a resource. Namespaces help different projects, teams, or customers to share a cluster, such as by preventing name collisions between unrelated teams.
|
||||
|
||||
[**Annotation**](/docs/user-guide/annotations)
|
||||
[**Annotation**](/docs/user-guide/annotations/)
|
||||
: A key/value pair that can hold larger (compared to a label), and possibly non-human-readable, data. Annotations are intended to store non-identifying auxiliary data, especially data manipulated by tools and system extensions. Efficient filtering by annotation values is not supported.
|
||||
|
||||
## Further reading
|
||||
|
||||
* API resources
|
||||
* [Working with resources](/docs/user-guide/working-with-resources)
|
||||
API resources
|
||||
|
||||
* Pods and containers
|
||||
* [Pod lifecycle and restart policies](/docs/user-guide/pod-states)
|
||||
* [Lifecycle hooks](/docs/user-guide/container-environment)
|
||||
* [Compute resources, such as cpu and memory](/docs/user-guide/compute-resources)
|
||||
* [Specifying commands and requesting capabilities](/docs/user-guide/containers)
|
||||
* [Downward API: accessing system configuration from a pod](/docs/user-guide/downward-api)
|
||||
* [Images and registries](/docs/user-guide/images)
|
||||
* [Migrating from docker-cli to kubectl](/docs/user-guide/docker-cli-to-kubectl)
|
||||
* [Tips and tricks when working with config](/docs/user-guide/config-best-practices)
|
||||
* [Assign pods to selected nodes](/docs/user-guide/node-selection/)
|
||||
* [Perform a rolling update on a running group of pods](/docs/user-guide/update-demo/)
|
||||
* [Working with resources](/docs/user-guide/working-with-resources/)
|
||||
|
||||
Pods and containers
|
||||
|
||||
* [Pod lifecycle and restart policies](/docs/user-guide/pod-states/)
|
||||
* [Lifecycle hooks](/docs/user-guide/container-environment/)
|
||||
* [Compute resources, such as cpu and memory](/docs/user-guide/compute-resources/)
|
||||
* [Specifying commands and requesting capabilities](/docs/user-guide/containers/)
|
||||
* [Downward API: accessing system configuration from a pod](/docs/user-guide/downward-api/)
|
||||
* [Images and registries](/docs/user-guide/images/)
|
||||
* [Migrating from docker-cli to kubectl](/docs/user-guide/docker-cli-to-kubectl/)
|
||||
* [Configuration Best Practices and Tips](/docs/user-guide/config-best-practices/)
|
||||
* [Assign pods to selected nodes](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection/)
|
||||
* [Perform a rolling update on a running group of pods](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/update-demo/)
|
||||
|
|
|
@ -6,7 +6,7 @@
|
|||
|
||||
__Terminology__
|
||||
|
||||
Throughout this doc you will see a few terms that are sometimes used interchangably elsewhere, that might cause confusion. This section attempts to clarify them.
|
||||
Throughout this doc you will see a few terms that are sometimes used interchangeably elsewhere, that might cause confusion. This section attempts to clarify them.
|
||||
|
||||
* Node: A single virtual or physical machine in a Kubernetes cluster.
|
||||
* Cluster: A group of nodes firewalled from the internet, that are the primary compute resources managed by Kubernetes.
|
||||
|
@ -19,7 +19,7 @@ Throughout this doc you will see a few terms that are sometimes used interchanga
|
|||
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:
|
||||
|
||||
```
|
||||
internet
|
||||
internet
|
||||
|
|
||||
------------
|
||||
[ Services ]
|
||||
|
@ -41,7 +41,7 @@ It can be configured to give services externally-reachable urls, load balance tr
|
|||
|
||||
Before you start using the Ingress resource, there are a few things you should understand:
|
||||
|
||||
* The Ingress resource is not available in any Kubernetes release prior to 1.1
|
||||
* The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1.
|
||||
* You need an Ingress controller to satisfy an Ingress. Simply creating the resource will have no effect.
|
||||
* On GCE/GKE there should be an [L7 cluster addon](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-loadbalancing/glbc/README.md#prerequisites); on other platforms you either need to write your own controller or [deploy an existing controller](https://github.com/kubernetes/contrib/tree/master/Ingress) as a pod.
|
||||
* The resource currently does not support HTTPS, but will do so before it leaves beta.
|
||||
|
@ -51,18 +51,18 @@ Before you start using the Ingress resource, there are a few things you should u
|
|||
A minimal Ingress might look like:
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: test-ingress
|
||||
spec:
|
||||
rules:
|
||||
- http:
|
||||
paths:
|
||||
- path: /testpath
|
||||
backend:
|
||||
serviceName: test
|
||||
servicePort: 80
|
||||
01. apiVersion: extensions/v1beta1
|
||||
02. kind: Ingress
|
||||
03. metadata:
|
||||
04. name: test-ingress
|
||||
05. spec:
|
||||
06. rules:
|
||||
07. - http:
|
||||
08. paths:
|
||||
09. - path: /testpath
|
||||
10. backend:
|
||||
11. serviceName: test
|
||||
12. servicePort: 80
|
||||
```
|
||||
|
||||
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
|
||||
|
@ -179,7 +179,7 @@ __Default Backends__: An Ingress with no rules, like the one shown in the previo
|
|||
|
||||
### Loadbalancing
|
||||
|
||||
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distil loadbalancing patterns that are applicable cross platform into the Ingress resource.
|
||||
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
|
||||
|
||||
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/production-pods.md#liveness-and-readiness-probes-aka-health-checks) which allow you to achieve the same end result.
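For reference, a readiness probe is declared on the backend container itself rather than on the Ingress. A minimal sketch, assuming the backend exposes an HTTP health endpoint (the path and port are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /healthz        # assumed health endpoint
    port: 8080            # assumed container port
  initialDelaySeconds: 5
  timeoutSeconds: 1
```

This fragment belongs in the container spec of the pods behind the Service that the Ingress routes to.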
|
||||
|
||||
|
@ -245,7 +245,7 @@ Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kuberne
|
|||
|
||||
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
|
||||
|
||||
* Use [Service.Type=LoadBalancer](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-loadbalancer)
|
||||
* Use [Service.Type=NodePort](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-nodeport)
|
||||
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
|
||||
* Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
|
||||
* Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport) (a minimal sketch follows this list)
|
||||
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
|
||||
* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
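As a minimal sketch of the `NodePort` option above (the service name, label, and ports are illustrative, not taken from this guide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app               # assumes pods labeled app=my-app
  ports:
  - port: 80                  # port exposed inside the cluster
    targetPort: 8080          # port the pods listen on
```

Kubernetes allocates a port from the node port range on every node, and traffic arriving at that port on any node is forwarded to the service.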
|
|
@ -53,7 +53,7 @@ We can retrieve a lot more information about each of these pods using `kubectl d
|
|||
$ kubectl describe pod my-nginx-gy1ij
|
||||
Name: my-nginx-gy1ij
|
||||
Image(s): nginx
|
||||
Node: kubernetes-minion-y3vk/10.240.154.168
|
||||
Node: kubernetes-node-y3vk/10.240.154.168
|
||||
Labels: app=nginx
|
||||
Status: Running
|
||||
Reason:
|
||||
|
@ -75,13 +75,13 @@ Conditions:
|
|||
Ready True
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {scheduler } scheduled Successfully assigned my-nginx-gy1ij to kubernetes-minion-y3vk
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-minion-y3vk} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-minion-y3vk} implicitly required container POD created Created with docker id cd1644065066
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-minion-y3vk} implicitly required container POD started Started with docker id cd1644065066
|
||||
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} pulled Successfully pulled image "nginx"
|
||||
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
|
||||
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {scheduler } scheduled Successfully assigned my-nginx-gy1ij to kubernetes-node-y3vk
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD created Created with docker id cd1644065066
|
||||
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD started Started with docker id cd1644065066
|
||||
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} pulled Successfully pulled image "nginx"
|
||||
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
|
||||
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
|
||||
```
|
||||
|
||||
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
|
||||
|
@ -191,7 +191,7 @@ spec:
|
|||
name: default-token-zkhkk
|
||||
readOnly: true
|
||||
dnsPolicy: ClusterFirst
|
||||
nodeName: kubernetes-minion-u619
|
||||
nodeName: kubernetes-node-u619
|
||||
restartPolicy: Always
|
||||
serviceAccountName: default
|
||||
volumes:
|
||||
|
@ -226,14 +226,14 @@ Sometimes when debugging it can be useful to look at the status of a node -- for
|
|||
```shell
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kubernetes-minion-861h kubernetes.io/hostname=kubernetes-minion-861h NotReady
|
||||
kubernetes-minion-bols kubernetes.io/hostname=kubernetes-minion-bols Ready
|
||||
kubernetes-minion-st6x kubernetes.io/hostname=kubernetes-minion-st6x Ready
|
||||
kubernetes-minion-unaj kubernetes.io/hostname=kubernetes-minion-unaj Ready
|
||||
kubernetes-node-861h kubernetes.io/hostname=kubernetes-node-861h NotReady
|
||||
kubernetes-node-bols kubernetes.io/hostname=kubernetes-node-bols Ready
|
||||
kubernetes-node-st6x kubernetes.io/hostname=kubernetes-node-st6x Ready
|
||||
kubernetes-node-unaj kubernetes.io/hostname=kubernetes-node-unaj Ready
|
||||
|
||||
$ kubectl describe node kubernetes-minion-861h
|
||||
Name: kubernetes-minion-861h
|
||||
Labels: kubernetes.io/hostname=kubernetes-minion-861h
|
||||
$ kubectl describe node kubernetes-node-861h
|
||||
Name: kubernetes-node-861h
|
||||
Labels: kubernetes.io/hostname=kubernetes-node-861h
|
||||
CreationTimestamp: Fri, 10 Jul 2015 14:32:29 -0700
|
||||
Conditions:
|
||||
Type Status LastHeartbeatTime LastTransitionTime Reason Message
|
||||
|
@ -255,28 +255,28 @@ Pods: (0 in total)
|
|||
Namespace Name
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Fri, 10 Jul 2015 14:32:28 -0700 Fri, 10 Jul 2015 14:32:28 -0700 1 {kubelet kubernetes-minion-861h} NodeNotReady Node kubernetes-minion-861h status is now: NodeNotReady
|
||||
Fri, 10 Jul 2015 14:32:30 -0700 Fri, 10 Jul 2015 14:32:30 -0700 1 {kubelet kubernetes-minion-861h} NodeNotReady Node kubernetes-minion-861h status is now: NodeNotReady
|
||||
Fri, 10 Jul 2015 14:33:00 -0700 Fri, 10 Jul 2015 14:33:00 -0700 1 {kubelet kubernetes-minion-861h} starting Starting kubelet.
|
||||
Fri, 10 Jul 2015 14:33:02 -0700 Fri, 10 Jul 2015 14:33:02 -0700 1 {kubelet kubernetes-minion-861h} NodeReady Node kubernetes-minion-861h status is now: NodeReady
|
||||
Fri, 10 Jul 2015 14:35:15 -0700 Fri, 10 Jul 2015 14:35:15 -0700 1 {controllermanager } NodeNotReady Node kubernetes-minion-861h status is now: NodeNotReady
|
||||
Fri, 10 Jul 2015 14:32:28 -0700 Fri, 10 Jul 2015 14:32:28 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
|
||||
Fri, 10 Jul 2015 14:32:30 -0700 Fri, 10 Jul 2015 14:32:30 -0700 1 {kubelet kubernetes-node-861h} NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
|
||||
Fri, 10 Jul 2015 14:33:00 -0700 Fri, 10 Jul 2015 14:33:00 -0700 1 {kubelet kubernetes-node-861h} starting Starting kubelet.
|
||||
Fri, 10 Jul 2015 14:33:02 -0700 Fri, 10 Jul 2015 14:33:02 -0700 1 {kubelet kubernetes-node-861h} NodeReady Node kubernetes-node-861h status is now: NodeReady
|
||||
Fri, 10 Jul 2015 14:35:15 -0700 Fri, 10 Jul 2015 14:35:15 -0700 1 {controllermanager } NodeNotReady Node kubernetes-node-861h status is now: NodeNotReady
|
||||
|
||||
|
||||
$ kubectl get node kubernetes-minion-861h -o yaml
|
||||
$ kubectl get node kubernetes-node-861h -o yaml
|
||||
apiVersion: v1
|
||||
kind: Node
|
||||
metadata:
|
||||
creationTimestamp: 2015-07-10T21:32:29Z
|
||||
labels:
|
||||
kubernetes.io/hostname: kubernetes-minion-861h
|
||||
name: kubernetes-minion-861h
|
||||
kubernetes.io/hostname: kubernetes-node-861h
|
||||
name: kubernetes-node-861h
|
||||
resourceVersion: "757"
|
||||
selfLink: /api/v1/nodes/kubernetes-minion-861h
|
||||
selfLink: /api/v1/nodes/kubernetes-node-861h
|
||||
uid: 2a69374e-274b-11e5-a234-42010af0d969
|
||||
spec:
|
||||
externalID: "15233045891481496305"
|
||||
podCIDR: 10.244.0.0/24
|
||||
providerID: gce://striped-torus-760/us-central1-b/kubernetes-minion-861h
|
||||
providerID: gce://striped-torus-760/us-central1-b/kubernetes-node-861h
|
||||
status:
|
||||
addresses:
|
||||
- address: 10.240.115.55
|
||||
|
|
|
@ -1,16 +1,11 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: pi
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: pi
|
||||
template:
|
||||
metadata:
|
||||
name: pi
|
||||
labels:
|
||||
app: pi
|
||||
spec:
|
||||
containers:
|
||||
- name: pi
|
||||
|
|
|
@ -12,6 +12,9 @@ of successful completions is reached, the job itself is complete. Deleting a Jo
|
|||
pods it created.
|
||||
|
||||
A simple case is to create 1 Job object in order to reliably run one Pod to completion.
|
||||
The Job object will start a new Pod if the first pod fails or is deleted (for example
|
||||
due to a node hardware failure or a node reboot).
|
||||
|
||||
A Job can also be used to run multiple pods in parallel.
|
||||
|
||||
## Running an example Job
|
||||
|
@ -25,7 +28,7 @@ Run the example job by downloading the example file and then running this comman
|
|||
|
||||
```shell
|
||||
$ kubectl create -f ./job.yaml
|
||||
jobs/pi
|
||||
job "pi" created
|
||||
```
|
||||
|
||||
Check on the status of the job using this command:
|
||||
|
@ -35,14 +38,17 @@ $ kubectl describe jobs/pi
|
|||
Name: pi
|
||||
Namespace: default
|
||||
Image(s): perl
|
||||
Selector: app=pi
|
||||
Parallelism: 2
|
||||
Selector: app in (pi)
|
||||
Parallelism: 1
|
||||
Completions: 1
|
||||
Labels: <none>
|
||||
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
|
||||
Start Time: Mon, 11 Jan 2016 15:35:52 -0800
|
||||
Labels: app=pi
|
||||
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
|
||||
No volumes.
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
1m 1m 1 {job } SuccessfulCreate Created pod: pi-z548a
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q
|
||||
```
|
||||
|
||||
To view completed pods of a job, use `kubectl get pods --show-all`; without the `--show-all` flag, completed pods are omitted from the listing.
|
||||
|
@ -61,7 +67,7 @@ that just gets the name from each pod in the returned list.
|
|||
View the standard output of one of the pods:
|
||||
|
||||
```shell
|
||||
$ kubectl logs pi-aiw0a
|
||||
$ kubectl logs $pods
|
||||
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
|
||||
```
|
||||
|
||||
|
@ -69,7 +75,7 @@ $ kubectl logs pi-aiw0a
|
|||
|
||||
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
|
||||
general information about working with config files, see [here](/docs/user-guide/simple-yaml),
|
||||
[here](/docs/user-guide/configuring-containers), and [here](working-with-resources).
|
||||
[here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).
|
||||
|
||||
A Job also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
|
||||
|
||||
|
@ -82,52 +88,75 @@ the same schema as a [pod](/docs/user-guide/pods), except it is nested and does
|
|||
`kind`.
|
||||
|
||||
In addition to required fields for a Pod, a pod template in a job must specify appropriate
|
||||
lables (see [pod selector](#pod-selector) and an appropriate restart policy.
|
||||
labels (see [pod selector](#pod-selector) and an appropriate restart policy.
|
||||
|
||||
Only a [`RestartPolicy`](/docs/user-guide/pod-states) equal to `Never` or `OnFailure` is allowed.
|
||||
|
||||
### Pod Selector
|
||||
|
||||
The `.spec.selector` field is a label query over a set of pods.
|
||||
The `.spec.selector` field is optional. In almost all cases you should not specify it.
|
||||
See section [specifying your own pod selector](#specifying-your-own-pod-selector).
|
||||
|
||||
The `spec.selector` is an object consisting of two fields:
|
||||
|
||||
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](/docs/user-guide/replication-controller)
|
||||
* `matchExpressions` - allows you to build more sophisticated selectors by specifying a key,
|
||||
list of values and an operator that relates the key and values.
|
||||
### Parallel Jobs
|
||||
|
||||
When the two are specified the result is ANDed.
|
||||
There are three main types of jobs:
|
||||
|
||||
If `.spec.selector` is unspecified, `.spec.selector.matchLabels` will be defaulted to
|
||||
`.spec.template.metadata.labels`.
|
||||
1. Non-parallel Jobs
|
||||
- normally only one pod is started, unless the pod fails.
|
||||
- the job is complete as soon as its Pod terminates successfully.
|
||||
1. Parallel Jobs with a *fixed completion count*:
|
||||
- specify a non-zero positive value for `.spec.completions`
|
||||
- the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`.
|
||||
- **not implemented yet:** each pod is passed a different index in the range 1 to `.spec.completions`.
|
||||
1. Parallel Jobs with a *work queue*:
|
||||
- do not specify `.spec.completions`
|
||||
- the pods must coordinate among themselves or with an external service to determine what each should work on
|
||||
- each pod is independently capable of determining whether or not all its peers are done, and thus whether the entire Job is done.
|
||||
- when _any_ pod terminates with success, no new pods are created.
|
||||
- once at least one pod has terminated with success and all pods are terminated, then the job is completed with success.
|
||||
- once any pod has exited with success, no other pod should still be doing any work or writing any output. They should all be
|
||||
in the process of exiting.
|
||||
|
||||
Also you should not normally create any pods whose labels match this selector, either directly,
|
||||
via another Job, or via another controller such as ReplicationController. Otherwise, the Job will
|
||||
think that those pods were created by it. Kubernetes will not stop you from doing this.
|
||||
For a Non-parallel job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
|
||||
unset, both are defaulted to 1.
|
||||
|
||||
### Multiple Completions
|
||||
For a Fixed Completion Count job, you should set `.spec.completions` to the number of completions needed.
|
||||
You can set `.spec.parallelism`, or leave it unset and it will default to 1.
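As a sketch of a fixed completion count Job (the name, image, and command are illustrative and not part of the pi example above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-items          # illustrative name
spec:
  completions: 5               # the job is done after five pods succeed
  parallelism: 2               # run at most two pods at a time
  template:
    metadata:
      labels:
        app: process-items
    spec:
      containers:
      - name: worker
        image: busybox         # illustrative image
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: OnFailure
```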
|
||||
|
||||
By default, a Job is complete when one Pod runs to successful completion. You can also specify that
|
||||
this needs to happen multiple times by specifying `.spec.completions` with a value greater than 1.
|
||||
When multiple completions are requested, each Pod created by the Job controller has an identical
|
||||
[`spec`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status). In particular, all pods will have
|
||||
the same command line and the same image, the same volumes, and mostly the same environment
|
||||
variables. It is up to the user to arrange for the pods to do work on different things. For
|
||||
example, the pods might all access a shared work queue service to acquire work units.
|
||||
For a Work Queue Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
|
||||
a non-negative integer.
|
||||
|
||||
To create multiple pods which are similar, but have slightly different arguments, environment
|
||||
variables or images, use multiple Jobs.
|
||||
For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
|
||||
|
||||
### Parallelism
|
||||
|
||||
You can suggest how many pods should run concurrently by setting `.spec.parallelism` to the number
|
||||
of pods you would like to have running concurrently. This number is a suggestion. The number
|
||||
running concurrently may be lower or higher for a variety of reasons. For example, it may be lower
|
||||
if the number of remaining completions is less, or as the controller is ramping up, or if it is
|
||||
throttling the job due to excessive failures. It may be higher for example if a pod is gracefully
|
||||
shutdown, and the replacement starts early.
|
||||
#### Controlling Parallelism
|
||||
|
||||
If you do not specify `.spec.parallelism`, then it defaults to `.spec.completions`.
|
||||
The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
|
||||
If it is unspecified, it defaults to 1.
|
||||
If it is specified as 0, then the Job is effectively paused until it is increased.
|
||||
|
||||
A job can be scaled up using the `kubectl scale` command. For example, the following
|
||||
command sets `.spec.parallelism` of a job called `myjob` to 10:
|
||||
|
||||
```shell
|
||||
$ kubectl scale --replicas=10 jobs/myjob
|
||||
job "myjob" scaled
|
||||
```
|
||||
|
||||
You can also use the `scale` subresource of the Job resource.
|
||||
|
||||
Actual parallelism (number of pods running at any instant) may be more or less than requested
|
||||
parallelism, for a variety of reasons:
|
||||
|
||||
- For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of
|
||||
remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
|
||||
- For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however.
|
||||
- If the controller has not had time to react.
|
||||
- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc),
|
||||
then there may be fewer pods than requested.
|
||||
- The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
|
||||
- When a pod is gracefully shut down, it may take time to stop.
|
||||
|
||||
## Handling Pod and Container Failures
|
||||
|
||||
|
@ -139,7 +168,7 @@ restarted locally, or else specify `.spec.template.containers[].restartPolicy =
|
|||
See [pod states](/docs/user-guide/pod-states) for more information on `restartPolicy`.
|
||||
|
||||
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
|
||||
(node is upgraded, rebooted, delelted, etc.), or if a container of the Pod fails and the
|
||||
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
|
||||
`.spec.template.containers[].restartPolicy = "Never"`. When a Pod fails, then the Job controller
|
||||
starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
|
||||
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
|
||||
|
@ -152,7 +181,125 @@ sometimes be started twice.
|
|||
If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
|
||||
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
|
||||
|
||||
## Alternatives to Job
|
||||
## Job Patterns
|
||||
|
||||
The Job object can be used to support reliable parallel execution of Pods. The Job object is not
|
||||
designed to support closely-communicating parallel processes, as commonly found in scientific
|
||||
computing. It does support parallel processing of a set of independent but related *work items*.
|
||||
These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
|
||||
NoSQL database to scan, and so on.
|
||||
|
||||
In a complex system, there may be multiple different sets of work items. Here we are just
|
||||
considering one set of work items that the user wants to manage together — a *batch job*.
|
||||
|
||||
There are several different patterns for parallel computation, each with strengths and weaknesses.
|
||||
The tradeoffs are:
|
||||
|
||||
- One Job object for each work item, vs a single Job object for all work items. The latter is
|
||||
better for large numbers of work items. The former creates some overhead for the user and for the
|
||||
system to manage large numbers of Job objects. Also, with the latter, the resource usage of the job
|
||||
(number of concurrently running pods) can be easily adjusted using the `kubectl scale` command.
|
||||
- Number of pods created equals number of work items, vs each pod can process multiple work items.
|
||||
The former typically requires less modification to existing code and containers. The latter
|
||||
is better for large numbers of work items, for similar reasons to the previous bullet.
|
||||
- Several approaches use a work queue. This requires running a queue service,
|
||||
and modifications to the existing program or container to make it use the work queue.
|
||||
Other approaches are easier to adapt to an existing containerised application.
|
||||
|
||||
|
||||
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
|
||||
The pattern names are also links to examples and more detailed description.
|
||||
|
||||
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
|
||||
| -------------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
|
||||
| [Job Template Expansion](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/expansions/README.md) | | | ✓ | ✓ |
|
||||
| [Queue with Pod Per Work Item](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-1/README.md) | ✓ | | sometimes | ✓ |
|
||||
| [Queue with Variable Pod Count](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-2/README.md) | ✓ | ✓ | | ✓ |
|
||||
| Single Job with Static Work Assignment | ✓ | | ✓ | |
|
||||
|
||||
When you specify completions with `.spec.completions`, each Pod created by the Job controller
|
||||
has an identical [`spec`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status). This means that
|
||||
all pods will have the same command line and the same
|
||||
image, the same volumes, and (almost) the same environment variables. These patterns
|
||||
are different ways to arrange for pods to work on different things.
|
||||
|
||||
This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
|
||||
Here, `W` is the number of work items.
|
||||
|
||||
| Pattern | `.spec.completions` | `.spec.parallelism` |
|
||||
| -------------------------------------------------------------------------- |:-------------------:|:--------------------:|
|
||||
| [Job Template Expansion](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/expansions/README.md) | 1 | should be 1 |
|
||||
| [Queue with Pod Per Work Item](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-1/README.md) | W | any |
|
||||
| [Queue with Variable Pod Count](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-2/README.md) | 1 | any |
|
||||
| Single Job with Static Work Assignment | W | any |
|
||||
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Specifying your own pod selector
|
||||
|
||||
Normally, when you create a job object, you do not specify `spec.selector`.
|
||||
The system defaulting logic adds this field when the job is created.
|
||||
It picks a selector value that will not overlap with any other jobs.
|
||||
|
||||
However, in some cases, you might need to override this automatically set selector.
|
||||
To do this, you can specify the `spec.selector` of the job.
|
||||
|
||||
Be very careful when doing this. If you specify a label selector which is not
|
||||
unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated
|
||||
job may be deleted, or this job may count other pods as completing it, or one or both
|
||||
of the jobs may refuse to create pods or run to completion. If a non-unique selector is
|
||||
chosen, then other controllers (e.g. ReplicationController) and their pods may behave
|
||||
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
|
||||
specifying `spec.selector`.
|
||||
|
||||
Here is an example of a case when you might want to use this feature.
|
||||
|
||||
Say job `old` is already running. You want existing pods
|
||||
to keep running, but you want the rest of the pods it creates
|
||||
to use a different pod template and for the job to have a new name.
|
||||
You cannot update the job because these fields are not updatable.
|
||||
Therefore, you delete job `old` but leave its pods
|
||||
running, using `kubectl delete jobs/old --cascade=false`.
|
||||
Before deleting it, you make a note of what selector it uses:
|
||||
|
||||
```
|
||||
kind: Job
|
||||
metadata:
|
||||
name: old
|
||||
...
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
|
||||
...
|
||||
```
|
||||
|
||||
Then you create a new job with name `new` and you explicitly specify the same selector.
|
||||
Since the existing pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
|
||||
they are controlled by job `new` as well.
|
||||
|
||||
You need to specify `manualSelector: true` in the new job since you are not using
|
||||
the selector that the system normally generates for you automatically.
|
||||
|
||||
```
|
||||
kind: Job
|
||||
metadata:
|
||||
name: new
|
||||
...
|
||||
spec:
|
||||
manualSelector: true
|
||||
selector:
|
||||
matchLabels:
|
||||
job-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
|
||||
...
|
||||
```
|
||||
|
||||
The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
|
||||
`manualSelector: true` tells the system that you know what you are doing and to allow this
|
||||
mismatch.
|
||||
|
||||
## Alternatives
|
||||
|
||||
### Bare Pods
|
||||
|
||||
|
@ -171,6 +318,19 @@ As discussed in [life of a pod](/docs/user-guide/pod-states), `Job` is *only* ap
|
|||
`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default
|
||||
value is `Always`.)
|
||||
|
||||
### Single Job starts Controller Pod
|
||||
|
||||
Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort
|
||||
of custom controller for those pods. This allows the most flexibility, but may be somewhat
|
||||
complicated to get started with and offers less integration with Kubernetes.
|
||||
|
||||
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
|
||||
starts a Spark master controller (see [spark example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/spark/README.md)), runs a Spark
|
||||
driver, and then cleans up.
|
||||
|
||||
An advantage of this approach is that the overall process gets the completion guarantee of a Job
|
||||
object, while retaining complete control over what pods are created and how work is assigned to them.
|
||||
|
||||
## Caveats
|
||||
|
||||
Job objects are in the [`extensions` API Group](/docs/api/#api-groups).
|
||||
|
|
|
@ -52,7 +52,6 @@ Given the input:
|
|||
Function | Description | Example | Result
|
||||
---------|--------------------|--------------------|------------------
|
||||
text | the plain text | kind is {.kind} | kind is List
|
||||
"" | quote | {"{"} | {
|
||||
@ | the current object | {@} | the same as input
|
||||
. or [] | child operator | {.kind} or {['kind']}| List
|
||||
.. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e
|
||||
|
@ -60,4 +59,5 @@ text | the plain text | kind is {.kind} | kind is List
|
|||
[start:end :step] | subscript operator | {.users[0].name}| myself
|
||||
[,] | union operator | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]
|
||||
?() | filter | {.users[?(@.name=="e2e")].user.password} | secret
|
||||
range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]
|
||||
range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]
|
||||
"" | quote interpreted string | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2
|
||||
|
|
|
@ -12,15 +12,15 @@ So in order to easily switch between multiple clusters, for multiple users, a ku
|
|||
|
||||
This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
|
||||
|
||||
Multiple kubeconfig files are allowed. At runtime they are loaded and merged together along with override options specified from the command line (see rules below).
|
||||
Multiple kubeconfig files are allowed, if specified explicitly. At runtime they are loaded and merged together along with override options specified from the command line (see [rules](#loading-and-merging) below).
|
||||
|
||||
## Related discussion
|
||||
|
||||
http://issue.k8s.io/1755
|
||||
|
||||
## Example kubeconfig file
|
||||
## Components of a kubeconfig file
|
||||
|
||||
The below file contains a `current-context` which will be used by default by clients which are using the file to connect to a cluster. Thus, this kubeconfig file has more information in it then we will necessarily have to use in a given session. You can see it defines many clusters, and users associated with those clusters. The context itself is associated with both a cluster AND a user.
|
||||
### Example kubeconfig file
|
||||
|
||||
```yaml
|
||||
current-context: federal-context
|
||||
|
@ -62,7 +62,103 @@ users:
|
|||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
### Building your own kubeconfig file
|
||||
### Breakdown/explanation of components
|
||||
|
||||
#### cluster
|
||||
|
||||
```yaml
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority: path/to/my/cafile
|
||||
server: https://horse.org:4443
|
||||
name: horse-cluster
|
||||
- cluster:
|
||||
insecure-skip-tls-verify: true
|
||||
server: https://pig.org:443
|
||||
name: pig-cluster
|
||||
```
|
||||
|
||||
A `cluster` contains endpoint data for a kubernetes cluster. This includes the fully
|
||||
qualified url for the kubernetes apiserver, as well as the cluster's certificate
|
||||
authority or `insecure-skip-tls-verify: true`, if the cluster's serving
|
||||
certificate is not signed by a system trusted certificate authority.
|
||||
A `cluster` has a name (nickname) which acts as a dictionary key for the cluster
|
||||
within this kubeconfig file. You can add or modify `cluster` entries using
|
||||
[`kubectl config set-cluster`](/docs/user-guide/kubectl/kubectl_config_set-cluster/).
|
||||
|
||||
#### user
|
||||
|
||||
```yaml
|
||||
users:
|
||||
- name: blue-user
|
||||
user:
|
||||
token: blue-token
|
||||
- name: green-user
|
||||
user:
|
||||
client-certificate: path/to/my/client/cert
|
||||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
A `user` defines client credentials for authenticating to a kubernetes cluster. A
|
||||
`user` has a name (nickname) which acts as its key within the list of user entries
|
||||
after kubeconfig is loaded/merged. Available credentials are `client-certificate`,
|
||||
`client-key`, `token`, and `username/password`. `username/password` and `token`
|
||||
are mutually exclusive, but client certs and keys can be combined with them.
|
||||
You can add or modify `user` entries using
|
||||
[`kubectl config set-credentials`](kubectl/kubectl_config_set-credentials.md).
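For example, a hypothetical `user` entry that authenticates with `username/password` rather than a token or client certificate (the nickname and credentials are made up):

```yaml
users:
- name: red-user              # illustrative nickname
  user:
    username: admin           # assumed basic-auth username
    password: somepassword    # assumed basic-auth password
```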
|
||||
|
||||
#### context
|
||||
|
||||
```
|
||||
contexts:
|
||||
- context:
|
||||
cluster: horse-cluster
|
||||
namespace: chisel-ns
|
||||
user: green-user
|
||||
name: federal-context
|
||||
```
|
||||
|
||||
A `context` defines a named [`cluster`](#cluster), [`user`](#user), [`namespace`](namespaces.md) tuple
|
||||
which is used to send requests to the specified cluster using the provided authentication info and
|
||||
namespace. Each of the three is optional; it is valid to specify a context with only one of `cluster`,
|
||||
`user`,`namespace`, or to specify none. Unspecified values, or named values that don't have corresponding
|
||||
entries in the loaded kubeconfig (e.g. if the context specified a `pink-user` for the above kubeconfig file)
|
||||
will be replaced with the default. See [Loading and merging rules](#loading-and-merging) below for override/merge behavior.
|
||||
You can add or modify `context` entries with [`kubectl config set-context`](kubectl/kubectl_config_set-context.md).
|
||||
|
||||
#### current-context
|
||||
|
||||
```yaml
|
||||
current-context: federal-context
|
||||
```
|
||||
|
||||
`current-context` is the nickname or 'key' for the cluster,user,namespace tuple that kubectl
|
||||
will use by default when loading config from this file. You can override any of the values in kubectl
|
||||
from the commandline, by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
|
||||
You can change the `current-context` with [`kubectl config use-context`](kubectl/kubectl_config_use-context.md).
|
||||
|
||||
#### miscellaneous
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
preferences:
|
||||
colors: true
|
||||
```
|
||||
|
||||
`apiVersion` and `kind` identify the version and schema for the client parser and should not
|
||||
be edited manually.
|
||||
|
||||
`preferences` specify optional (and currently unused) kubectl preferences.
|
||||
|
||||
## Viewing kubeconfig files
|
||||
|
||||
`kubectl config view` will display the current kubeconfig settings. By default
|
||||
it will show you all loaded kubeconfig settings; you can filter the view to just
|
||||
the settings relevant to the `current-context` by passing `--minify`. See
|
||||
[`kubectl config view`](kubectl/kubectl_config_view.md) for other options.
|
||||
|
||||
## Building your own kubeconfig file
|
||||
|
||||
Note that if you are deploying k8s via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
|
||||
|
||||
|
@ -79,7 +175,7 @@ mister-red,mister-red,2
|
|||
|
||||
Also, since we have other users who authenticate using **other** mechanisms, the api-server would probably have been launched with other authentication options (there are many such options; make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
|
||||
|
||||
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in succesfully, because we are providigin the green-user's client credentials.
|
||||
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
|
||||
- Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
|
||||
|
||||
In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All this information would be handled for us by the
|
||||
|
@ -118,7 +214,12 @@ The rules for loading and merging the kubeconfig files are straightforward, but
|
|||
1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`.
|
||||
1. If there are two conflicting techniques, fail.
|
||||
1. For any information still missing, use default values and potentially prompt for authentication information
|
||||
1. All file references inside of a kubeconfig file are resolved relative to the location of the kubeconfig file itself. When file references are presented on the command line
|
||||
they are resolved relative to the current working directory. When paths are saved in the ~/.kube/config, relative paths are stored relatively while absolute paths are stored absolutely.
|
||||
|
||||
Any path in a kubeconfig file is resolved relative to the location of the kubeconfig file itself.
|
||||
|
||||
|
||||
## Manipulation of kubeconfig via `kubectl config <subcommand>`
|
||||
|
||||
In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
|
||||
|
|
|
@ -0,0 +1,122 @@
|
|||
---
|
||||
---
|
||||
An assortment of compact kubectl examples
|
||||
|
||||
See also: [Kubectl overview](/docs/user-guide/kubectl-overview/) and [JsonPath guide](/docs/user-guide/jsonpath).
|
||||
|
||||
## Creating Objects
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./file.yml # create resource(s) in a json or yaml file
|
||||
|
||||
$ kubectl create -f ./file1.yml -f ./file2.yaml # create resource(s) in a json or yaml file
|
||||
|
||||
$ kubectl create -f ./dir # create resources in all .json, .yml, and .yaml files in dir
|
||||
|
||||
# Create from a URL
|
||||
|
||||
$ kubectl create -f http://www.fpaste.org/279276/48569091/raw/
|
||||
|
||||
# Create multiple YAML objects from stdin
|
||||
$ cat <<EOF | kubectl create -f -
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox-sleep
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox
|
||||
args:
|
||||
- sleep
|
||||
- "1000000"
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox-sleep-less
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox
|
||||
args:
|
||||
- sleep
|
||||
- "1000"
|
||||
EOF
|
||||
|
||||
# Create a secret with several keys
|
||||
$ cat <<EOF | kubectl create -f -
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: mysecret
|
||||
type: Opaque
|
||||
data:
|
||||
password: $(echo -n "s33msi4" | base64)
|
||||
username: $(echo -n "jane" | base64)
|
||||
EOF
|
||||
|
||||
# TODO: kubectl-explain example
|
||||
```
|
||||
|
||||
|
||||
## Viewing, Finding Resources
|
||||
|
||||
```shell
|
||||
# Columnar output
|
||||
$ kubectl get services # List all services in the namespace
|
||||
$ kubectl get pods --all-namespaces # List all pods in all namespaces
|
||||
$ kubectl get pods -o wide # List all pods in the namespace, with more details
|
||||
$ kubectl get rc <rc-name> # List a particular replication controller
|
||||
$ kubectl get replicationcontroller <rc-name> # List a particular RC
|
||||
|
||||
# Verbose output
|
||||
$ kubectl describe nodes <node-name>
|
||||
$ kubectl describe pods <pod-name>
|
||||
$ kubectl describe pods/<pod-name> # Equivalent to previous
|
||||
$ kubectl describe pods <rc-name> # Lists pods created by <rc-name> using common prefix
|
||||
|
||||
# List Services Sorted by Name
|
||||
$ kubectl get services --sort-by=.metadata.name
|
||||
|
||||
# List pods Sorted by Restart Count
|
||||
$ kubectl get pods --sort-by=.status.containerStatuses[0].restartCount
|
||||
|
||||
# Get the version label of all pods with label app=cassandra
|
||||
$ kubectl get pods --selector=app=cassandra rc -o 'jsonpath={.items[*].metadata.labels.version}'
|
||||
|
||||
# Get ExternalIPs of all nodes
|
||||
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
|
||||
|
||||
# List Names of Pods that belong to Particular RC
|
||||
# "jq" command useful for transformations that are too complex for jsonpath
|
||||
$ sel=$(kubectl get rc <rc-name> --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')
|
||||
$ sel=${sel%?} # Remove trailing comma
|
||||
$ pods=$(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
|
||||
|
||||
# Check which nodes are ready
|
||||
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'| tr ';' "\n" | grep "Ready=True"
|
||||
```
|
||||
|
||||
## Modifying and Deleting Resources
|
||||
|
||||
```shell
|
||||
$ kubectl label pods <pod-name> new-label=awesome # Add a Label
|
||||
$ kubectl annotate pods <pod-name> icon-url=http://goo.gl/XXBTWq # Add an annotation
|
||||
|
||||
# TODO: examples of kubectl edit, patch, delete, replace, scale, and rolling-update commands.
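# A few illustrative examples of those commands (resource names are hypothetical):
$ kubectl edit svc/docker-registry                        # Edit the service named docker-registry in your editor
$ kubectl scale --replicas=3 rc/my-nginx                  # Scale a replication controller to 3 replicas
$ kubectl rolling-update frontend-v1 -f frontend-v2.json  # Rolling-update the pods of frontend-v1
$ kubectl delete -f ./pod.yaml                            # Delete the resource(s) defined in pod.yaml
$ kubectl delete pods,services -l name=myLabel            # Delete pods and services with label name=myLabel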
|
||||
```
|
||||
|
||||
## Interacting with running Pods
|
||||
|
||||
```shell
|
||||
$ kubectl logs <pod-name> # dump pod logs (stdout)
|
||||
$ kubectl logs -f <pod-name> # stream pod logs (stdout) until canceled (ctrl-c) or timeout
|
||||
|
||||
$ kubectl run -i --tty busybox --image=busybox -- sh # Run pod as interactive shell
|
||||
$ kubectl attach <podname> -i # Attach to Running Container
|
||||
$ kubectl port-forward <podname> <local-and-remote-port> # Forward port of Pod to your local machine
|
||||
$ kubectl port-forward <servicename> <port> # Forward port to service
|
||||
$ kubectl exec <pod-name> -- ls / # Run command in existing pod (1 container case)
|
||||
$ kubectl exec <pod-name> -c <container-name> -- ls / # Run command in existing pod (multi-container case)
|
||||
```
|
|
@ -18,26 +18,25 @@ where `command`, `TYPE`, `NAME`, and `flags` are:
|
|||
* `command`: Specifies the operation that you want to perform on one or more resources, for example `create`, `get`, `describe`, `delete`.
|
||||
* `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-sensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:
|
||||
|
||||
```shell
|
||||
$ kubectl get pod pod1
|
||||
$ kubectl get pods pod1
|
||||
$ kubectl get po pod1
|
||||
```
|
||||
```shell
|
||||
$ kubectl get pod pod1
|
||||
$ kubectl get pods pod1
|
||||
$ kubectl get po pod1
|
||||
```
|
||||
|
||||
* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `$ kubectl get pods`.
|
||||
|
||||
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
|
||||
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
|
||||
|
||||
* To specify resources by type and name:
|
||||
* To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`<br/>
|
||||
* To specify resources by type and name:
|
||||
* To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`<br/>
|
||||
Example: `$ kubectl get pod example-pod1 example-pod2`
|
||||
* To specify multiple resource types individually: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`<br/>
|
||||
Example: `$ kubectl get pod/example-pod1 replicationcontroller/example-rc1`
|
||||
* To specify resources with one or more files: `-f file1 -f file2 -f file<#>`
|
||||
[Use YAML rather than JSON](/docs/user-guide/config-best-practices) since YAML tends to be more user-friendly, especially for configuration files.<br/>
|
||||
[Use YAML rather than JSON](/docs/user-guide/config-best-practices/#general-config-tips) since YAML tends to be more user-friendly, especially for configuration files.<br/>
|
||||
Example: `$ kubectl get pod -f ./pod.yaml`
|
||||
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.
|
||||
|
||||
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.<br/>
|
||||
**Important**: Flags that you specify from the command line override default values and any corresponding environment variables.
|
||||
|
||||
If you need help, just run `kubectl help` from the terminal window.
|
||||
|
@ -83,21 +82,24 @@ The following table includes a list of all the supported resource types and thei
|
|||
Resource type | Abbreviated alias
|
||||
-------------------- | --------------------
|
||||
`componentstatuses` | `cs`
|
||||
`events` | `ev`
|
||||
`endpoints` | `ep`
|
||||
`horizontalpodautoscalers` | `hpa`
|
||||
`limitranges` | `limits`
|
||||
`nodes` | `no`
|
||||
`namespaces` | `ns`
|
||||
`pods` | `po`
|
||||
`persistentvolumes` | `pv`
|
||||
`persistentvolumeclaims` | `pvc`
|
||||
`resourcequotas` | `quota`
|
||||
`replicationcontrollers` | `rc`
|
||||
`secrets` |
|
||||
`serviceaccounts` |
|
||||
`services` | `svc`
|
||||
`ingress` | `ing`
|
||||
`daemonsets` | `ds`
|
||||
`deployments` |
|
||||
`events` | `ev`
|
||||
`endpoints` | `ep`
|
||||
`horizontalpodautoscalers` | `hpa`
|
||||
`ingresses` | `ing`
|
||||
`jobs` |
|
||||
`limitranges` | `limits`
|
||||
`nodes` | `no`
|
||||
`namespaces` | `ns`
|
||||
`pods` | `po`
|
||||
`persistentvolumes` | `pv`
|
||||
`persistentvolumeclaims` | `pvc`
|
||||
`resourcequotas` | `quota`
|
||||
`replicationcontrollers` | `rc`
|
||||
`secrets` |
|
||||
`serviceaccounts` |
|
||||
`services` | `svc`
|
||||
|
||||
## Output options
|
||||
|
||||
|
|
|
@ -99,7 +99,7 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
|
|||
* _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend`
|
||||
* _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29`
|
||||
|
||||
Both label selector styles can be used to list or watch resources via a REST client. For example targetting `apiserver` with `kubectl` and using _equality-based_ one may write:
|
||||
Both label selector styles can be used to list or watch resources via a REST client. For example targeting `apiserver` with `kubectl` and using _equality-based_ one may write:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods -l environment=production,tier=frontend
|
||||
|
@ -160,4 +160,9 @@ selector:
|
|||
- {key: environment, operator: NotIn, values: [dev]}
|
||||
```
|
||||
|
||||
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
|
||||
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
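As a short sketch, a selector that uses both forms at once (the keys and values are illustrative); a pod matches only if it satisfies every listed requirement:

```yaml
selector:
  matchLabels:
    component: redis
  matchExpressions:
    - {key: tier, operator: In, values: [cache]}
    - {key: environment, operator: NotIn, values: [dev]}
```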
|
||||
|
||||
#### Selecting sets of nodes
|
||||
|
||||
One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
|
||||
See the documentation on [node selection](node-selection/README.md) for more information.
|
||||
|
|
|
@ -5,15 +5,7 @@ This example shows two types of pod [health checks](/docs/user-guide/production-
|
|||
|
||||
The [exec-liveness.yaml](/docs/user-guide/liveness/exec-liveness.yaml) demonstrates the container execution check.
|
||||
|
||||
```yaml
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- cat
|
||||
- /tmp/health
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 1
|
||||
```
|
||||
{% include code.html language="yaml" file="exec-liveness.yaml" ghlink="/docs/user-guide/liveness/exec-liveness.yaml" %}
|
||||
|
||||
Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
|
||||
|
||||
|
@ -27,17 +19,10 @@ so when Kubelet executes the health check 15 seconds (defined by initialDelaySec
|
|||
|
||||
|
||||
The [http-liveness.yaml](/docs/user-guide/liveness/http-liveness.yaml) demonstrates the HTTP check.
|
||||
{% include code.html language="yaml" file="http-liveness.yaml" ghlink="/docs/user-guide/liveness/http-liveness.yaml" %}
|
||||
|
||||
```yaml
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 1
|
||||
```
|
||||
|
||||
The Kubelet sends an HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see the server starts to respond with an error code 500 after 10 seconds, so the check fails. The Kubelet sends the probe to the container's ip address by default which could be specified with `host` as part of httpGet probe. If the container listens on `127.0.0.1`, `host` should be specified as `127.0.0.1`. In general, if the container listens on its ip address or on all interfaces (0.0.0.0), there is no need to specify the `host` as part of the httpGet probe.
|
||||
The Kubelet sends an HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see the server starts to respond with an error code 500 after 10 seconds, so the check fails. The Kubelet sends probes to the container's IP address, unless overridden by the optional `host` field in httpGet. If the container listens on `127.0.0.1` and `hostNetwork` is `true` (i.e., it does not use the pod-specific network), then `host` should be specified as `127.0.0.1`. Be warned that, outside of less common cases like that, `host` probably will not do what you expect. If you set it to a non-existent hostname (or your competitor's!), probes will never reach the pod, defeating the whole point of health checks. If your pod relies on virtual hosts, which is probably the more common case, you should not use `host`; instead, set the `Host` header in `httpHeaders`.
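For example (a hedged sketch; the virtual host name is a placeholder), setting the `Host` header via `httpHeaders` rather than using `host` looks like this:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    # Send the virtual host name as the Host header instead of overriding the probe target
    - name: Host
      value: www.example.com
  initialDelaySeconds: 15
  timeoutSeconds: 1
```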
|
||||
|
||||
This [guide](/docs/user-guide/k8s201/#health-checking) has more information on health checks.
|
||||
|
||||
|
@ -75,8 +60,8 @@ At the bottom of the *kubectl describe* output there are messages indicating tha
|
|||
```shell
|
||||
$ kubectl describe pods liveness-exec
|
||||
[...]
|
||||
Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
|
||||
Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kubelet kubernetes-node-6fbi} spec.containers{liveness} unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-node-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-node-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
|
||||
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-node-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
|
||||
```
|
|
@ -1,4 +1,4 @@
|
|||
This directory contains two [pod](/docs/user-guide/pods) specifications which can be used as synthetic
|
||||
This directory contains two [pod](https://kubernetes.io/docs/user-guide/pods) specifications which can be used as synthetic
|
||||
logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
|
||||
describes a pod that just emits a log message once every 4 seconds. The pod specification in
|
||||
[synthetic_10lps.yaml](synthetic_10lps.yaml)
|
||||
|
@ -10,6 +10,3 @@ at [Cluster Level Logging to Google Cloud Logging](https://kubernetes.io/docs/ge
|
|||
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting
|
||||
started instructions
|
||||
at [Cluster Level Logging with Elasticsearch and Kibana](https://kubernetes.io/docs/getting-started-guides/logging-elasticsearch).
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -11,7 +11,7 @@ Kubernetes components, such as kubelet and apiserver, use the [glog](https://god
|
|||
|
||||
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
|
||||
this pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml), which has a container that writes out some text to standard
|
||||
output every second. (You can find different pod specifications [here](https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/logging-demo).)
|
||||
output every second. (You can find different pod specifications [here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/logging-demo/).)
|
||||
|
||||
{% include code.html language="yaml" file="counter-pod.yaml" k8slink="/examples/blog-logging/counter-pod.yaml" %}
|
||||
|
||||
|
@ -70,7 +70,7 @@ describes how to ingest cluster level logs into Elasticsearch and view them usin
|
|||
## Ingesting Application Log Files
|
||||
|
||||
Cluster level logging only collects the standard output and standard error output of the applications
|
||||
running in containers. The guide [Collecting log files within containers with Fluentd](http://releases.k8s.io/{{page.githubbranch}}/contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
|
||||
running in containers. The guide [Collecting log files from within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
|
||||
|
||||
## Known issues
|
||||
|
||||
|
|
|
@ -71,8 +71,8 @@ It is a recommended practice to put resources related to the same microservice o
|
|||
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/user-guide/replication.yaml
|
||||
replicationcontrollers/nginx
|
||||
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/user-guide/pod.yaml
|
||||
pods/nginx
|
||||
```
|
||||
|
||||
## Bulk operations in kubectl
|
||||
|
@ -81,8 +81,8 @@ Resource creation isn't the only operation that `kubectl` can perform in bulk. I
|
|||
|
||||
```shell
|
||||
$ kubectl delete -f ./nginx/
|
||||
replicationcontrollers/my-nginx
|
||||
services/my-nginx-svc
|
||||
replicationcontrollers "my-nginx" deleted
|
||||
services "my-nginx-svc" deleted
|
||||
```
|
||||
|
||||
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
|
||||
|
@ -95,14 +95,14 @@ For larger numbers of resources, one can use labels to filter resources. The sel
|
|||
|
||||
```shell
|
||||
$ kubectl delete all -lapp=nginx
|
||||
replicationcontrollers/my-nginx
|
||||
services/my-nginx-svc
|
||||
replicationcontrollers "my-nginx" deleted
|
||||
services "my-nginx-svc" deleted
|
||||
```
|
||||
|
||||
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
|
||||
|
||||
```shell
|
||||
$ kubectl get $(kubectl create -f ./nginx/ | grep my-nginx)
|
||||
$ kubectl get $(kubectl create -f ./nginx/ -o name | grep my-nginx)
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
my-nginx nginx nginx app=nginx 2
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
|
@ -116,7 +116,7 @@ The examples we've used so far apply at most a single label to any resource. The
|
|||
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
```
|
||||
|
@ -124,7 +124,7 @@ labels:
|
|||
while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: backend
|
||||
role: master
|
||||
|
@ -133,7 +133,7 @@ labels:
|
|||
and
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: backend
|
||||
role: slave
|
||||
|
@ -143,19 +143,19 @@ The labels allow us to slice and dice our resources along any dimension specifie
|
|||
|
||||
```shell
|
||||
$ kubectl create -f ./guestbook-fe.yaml -f ./redis-master.yaml -f ./redis-slave.yaml
|
||||
replicationcontrollers/guestbook-fe
|
||||
replicationcontrollers/guestbook-redis-master
|
||||
replicationcontrollers/guestbook-redis-slave
|
||||
replicationcontrollers "guestbook-fe" created
|
||||
replicationcontrollers "guestbook-redis-master" created
|
||||
replicationcontrollers "guestbook-redis-slave" created
|
||||
$ kubectl get pods -Lapp -Ltier -Lrole
|
||||
NAME READY STATUS RESTARTS AGE APP TIER ROLE
|
||||
guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <n/a>
|
||||
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <n/a>
|
||||
guestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <n/a>
|
||||
guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>
|
||||
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>
|
||||
guestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>
|
||||
guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master
|
||||
guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestbook backend slave
|
||||
guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave
|
||||
my-nginx-divi2 1/1 Running 0 29m nginx <n/a> <n/a>
|
||||
my-nginx-o0ef1 1/1 Running 0 29m nginx <n/a> <n/a>
|
||||
my-nginx-divi2 1/1 Running 0 29m nginx <none> <none>
|
||||
my-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>
|
||||
$ kubectl get pods -lapp=guestbook,role=slave
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
|
||||
|
@ -167,7 +167,7 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m
|
|||
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. For example, it is common practice to deploy a *canary* of a new application release (specified via image tag) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. For instance, a new release of the guestbook frontend might carry the following labels:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
track: canary
|
||||
|
@ -176,7 +176,7 @@ labels:
|
|||
and the primary, stable release would have a different value of the `track` label, so that the sets of pods controlled by the two replication controllers would not overlap:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
track: stable
|
||||
|
@ -185,7 +185,7 @@ labels:
|
|||
The frontend service would span both sets of replicas by selecting the common subset of their labels, omitting the `track` label:
|
||||
|
||||
```yaml
|
||||
selector:
|
||||
selector:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
```
|
||||
|
@ -196,16 +196,11 @@ Sometimes existing pods and other resources need to be relabeled before creating
|
|||
|
||||
```shell
|
||||
$ kubectl label pods -lapp=nginx tier=fe
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-v4-9gw19 1/1 Running 0 14m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-v4-hayza 1/1 Running 0 13m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-v4-mde6m 1/1 Running 0 17m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-v4-sh6m8 1/1 Running 0 18m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-v4-wfof4 1/1 Running 0 16m
|
||||
pod "my-nginx-v4-9gw19" labeled
|
||||
pod "my-nginx-v4-hayza" labeled
|
||||
pod "my-nginx-v4-mde6m" labeled
|
||||
pod "my-nginx-v4-sh6m8" labeled
|
||||
pod "my-nginx-v4-wfof4" labeled
|
||||
$ kubectl get pods -lapp=nginx -Ltier
|
||||
NAME READY STATUS RESTARTS AGE TIER
|
||||
my-nginx-v4-9gw19 1/1 Running 0 15m fe
|
||||
|
@ -215,13 +210,32 @@ my-nginx-v4-sh6m8 1/1 Running 0 19m fe
|
|||
my-nginx-v4-wfof4 1/1 Running 0 16m fe
|
||||
```
|
||||
|
||||
For more information, please see the [labels](/docs/user-guide/labels/) and [kubectl label](/docs/kubectl/kubectl_label/) documents.
|
||||
|
||||
## Updating annotations
|
||||
|
||||
Sometimes you want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example:
|
||||
|
||||
```console
|
||||
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
|
||||
$ kubectl get pods my-nginx-v4-9gw19 -o yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
annotations:
|
||||
description: my frontend running nginx
|
||||
...
|
||||
```
|
||||
|
||||
For more information, please see the [annotations](annotations.md) and [kubectl annotate](kubectl/kubectl_annotate.md) documents.
|
||||
|
||||
## Scaling your application
|
||||
|
||||
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:
|
||||
|
||||
```shell
|
||||
$ kubectl scale rc my-nginx --replicas=3
|
||||
scaled
|
||||
replicationcontroller "my-nginx" scaled
|
||||
$ kubectl get pods -lapp=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-1jgkf 1/1 Running 0 3m
|
||||
|
@ -229,6 +243,22 @@ my-nginx-divi2 1/1 Running 0 1h
|
|||
my-nginx-o0ef1 1/1 Running 0 1h
|
||||
```
|
||||
|
||||
To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:
|
||||
|
||||
```shell
|
||||
$ kubectl autoscale rc my-nginx --min=1 --max=3
|
||||
replicationcontroller "my-nginx" autoscaled
|
||||
$ kubectl get pods -lapp=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-1jgkf 1/1 Running 0 3m
|
||||
my-nginx-divi2 1/1 Running 0 3m
|
||||
$ kubectl get horizontalpodautoscaler
|
||||
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
|
||||
nginx ReplicationController/nginx/scale 80% <waiting> 1 3 1m
|
||||
```
|
||||
|
||||
For more information, please see the [kubectl scale](kubectl/kubectl_scale.md), [kubectl autoscale](kubectl/kubectl_autoscale.md) and [horizontal pod autoscaler](horizontal-pod-autoscaling/README.md) documents.
|
||||
|
||||
## Updating your application without a service outage
|
||||
|
||||
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
|
||||
|
@ -260,7 +290,7 @@ To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https
|
|||
|
||||
```shell
|
||||
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
|
||||
Creating my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
|
||||
Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
|
||||
```
|
||||
|
||||
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
|
||||
|
@ -268,7 +298,6 @@ In another window, you can see that `kubectl` added a `deployment` label to the
|
|||
```shell
|
||||
$ kubectl get pods -lapp=nginx -Ldeployment
|
||||
NAME READY STATUS RESTARTS AGE DEPLOYMENT
|
||||
my-nginx-1jgkf 1/1 Running 0 1h 2d1d7a8f682934a254002b56404b813e
|
||||
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z 1/1 Running 0 1m ccba8fbd8cc8160970f63f9a2696fc46
|
||||
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0 35s ccba8fbd8cc8160970f63f9a2696fc46
|
||||
my-nginx-divi2 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e
|
||||
|
@ -279,34 +308,28 @@ my-nginx-q6all 1/1 Running 0
|
|||
`kubectl rolling-update` reports progress as it runs:
|
||||
|
||||
```shell
|
||||
Updating my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
|
||||
At end of loop: my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
|
||||
At beginning of loop: my-nginx replicas: 3, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 2
|
||||
Updating my-nginx replicas: 3, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 2
|
||||
At end of loop: my-nginx replicas: 3, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 2
|
||||
At beginning of loop: my-nginx replicas: 2, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 3
|
||||
Updating my-nginx replicas: 2, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 3
|
||||
At end of loop: my-nginx replicas: 2, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 3
|
||||
At beginning of loop: my-nginx replicas: 1, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 4
|
||||
Updating my-nginx replicas: 1, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 4
|
||||
At end of loop: my-nginx replicas: 1, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 4
|
||||
At beginning of loop: my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 5
|
||||
Updating my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 5
|
||||
At end of loop: my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 5
|
||||
Scaling up my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 0 to 3, scaling down my-nginx from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
|
||||
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 1
|
||||
Scaling my-nginx down to 2
|
||||
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 2
|
||||
Scaling my-nginx down to 1
|
||||
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 3
|
||||
Scaling my-nginx down to 0
|
||||
Update succeeded. Deleting old controller: my-nginx
|
||||
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
|
||||
my-nginx
|
||||
replicationcontroller "my-nginx" rolling updated
|
||||
```
|
||||
|
||||
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:
|
||||
|
||||
```shell
|
||||
$ kubectl kubectl rolling-update my-nginx --image=nginx:1.9.1 --rollback
|
||||
Found existing update in progress (my-nginx-ccba8fbd8cc8160970f63f9a2696fc46), resuming.
|
||||
Found desired replicas.Continuing update with existing controller my-nginx.
|
||||
Stopping my-nginx-02ca3e87d8685813dbe1f8c164a46f02 replicas: 1 -> 0
|
||||
$ kubectl rolling-update my-nginx --rollback
|
||||
Setting "my-nginx" replicas to 1
|
||||
Continuing update with existing controller my-nginx.
|
||||
Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
|
||||
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 down to 0
|
||||
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
|
||||
my-nginx
|
||||
replicationcontroller "my-nginx" rolling updated
|
||||
```
|
||||
|
||||
This is one example where the immutability of containers is a huge asset.
|
||||
|
@ -332,7 +355,7 @@ spec:
|
|||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.9.2
|
||||
args: ['nginx','-T']
|
||||
args: ["nginx","-T"]
|
||||
ports:
|
||||
- containerPort: 80
|
||||
```
|
||||
|
@ -341,57 +364,114 @@ and roll it out:
|
|||
|
||||
```shell
|
||||
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml
|
||||
Creating my-nginx-v4
|
||||
At beginning of loop: my-nginx replicas: 4, my-nginx-v4 replicas: 1
|
||||
Updating my-nginx replicas: 4, my-nginx-v4 replicas: 1
|
||||
At end of loop: my-nginx replicas: 4, my-nginx-v4 replicas: 1
|
||||
At beginning of loop: my-nginx replicas: 3, my-nginx-v4 replicas: 2
|
||||
Updating my-nginx replicas: 3, my-nginx-v4 replicas: 2
|
||||
At end of loop: my-nginx replicas: 3, my-nginx-v4 replicas: 2
|
||||
At beginning of loop: my-nginx replicas: 2, my-nginx-v4 replicas: 3
|
||||
Updating my-nginx replicas: 2, my-nginx-v4 replicas: 3
|
||||
At end of loop: my-nginx replicas: 2, my-nginx-v4 replicas: 3
|
||||
At beginning of loop: my-nginx replicas: 1, my-nginx-v4 replicas: 4
|
||||
Updating my-nginx replicas: 1, my-nginx-v4 replicas: 4
|
||||
At end of loop: my-nginx replicas: 1, my-nginx-v4 replicas: 4
|
||||
At beginning of loop: my-nginx replicas: 0, my-nginx-v4 replicas: 5
|
||||
Updating my-nginx replicas: 0, my-nginx-v4 replicas: 5
|
||||
At end of loop: my-nginx replicas: 0, my-nginx-v4 replicas: 5
|
||||
Update succeeded. Deleting my-nginx
|
||||
my-nginx-v4
|
||||
Created my-nginx-v4
|
||||
Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
|
||||
Scaling my-nginx-v4 up to 1
|
||||
Scaling my-nginx down to 3
|
||||
Scaling my-nginx-v4 up to 2
|
||||
Scaling my-nginx down to 2
|
||||
Scaling my-nginx-v4 up to 3
|
||||
Scaling my-nginx down to 1
|
||||
Scaling my-nginx-v4 up to 4
|
||||
Scaling my-nginx down to 0
|
||||
Scaling my-nginx-v4 up to 5
|
||||
Update succeeded. Deleting old controller: my-nginx
|
||||
replicationcontroller "my-nginx-v4" rolling updated
|
||||
```
|
||||
|
||||
You can also run the [update demo](/docs/user-guide/update-demo/) to see a visual representation of the rolling update process.
|
||||
|
||||
## In-place updates of resources
|
||||
|
||||
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to add an [annotation](/docs/user-guide/annotations) with a description of your object. That's easiest to do with `kubectl patch`:
|
||||
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to update the container's image of your pod.
|
||||
|
||||
### kubectl patch
|
||||
|
||||
Suppose you want to fix a typo of the container's image of a pod. One way to do that is with `kubectl patch`:
|
||||
|
||||
```shell
|
||||
$ kubectl patch rc my-nginx-v4 -p '{"metadata": {"annotations": {"description": "my frontend running nginx"}}}'
|
||||
my-nginx-v4
|
||||
$ kubectl get rc my-nginx-v4 -o yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
annotations:
|
||||
description: my frontend running nginx
|
||||
# Suppose you have a pod with a container named "nginx" and its image "nignx" (typo),
|
||||
# use container name "nginx" as a key to update the image from "nignx" (typo) to "nginx"
|
||||
$ kubectl get pod my-nginx-1jgkf -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
...
|
||||
spec:
|
||||
containers:
|
||||
- image: nignx
|
||||
name: nginx
|
||||
...
|
||||
```
|
||||
|
||||
```shell
|
||||
$ kubectl patch pod my-nginx-1jgkf -p '{"spec":{"containers":[{"name":"nginx","image":"nginx"}]}}'
|
||||
"my-nginx-1jgkf" patched
|
||||
$ kubectl get pod my-nginx-1jgkf -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: pod
|
||||
...
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
name: nginx
|
||||
...
|
||||
```
|
||||
|
||||
The patch is specified using JSON.
|
||||
|
||||
For more significant changes, you can `get` the resource, edit it, and then `replace` the resource with the updated version:
|
||||
The system ensures that you don’t clobber changes made by other users or components by confirming that the `resourceVersion` doesn’t differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don’t use your original configuration file as the source since additional fields most likely were set in the live state.
|
||||
|
||||
```shell
|
||||
$ kubectl get rc my-nginx-v4 -o yaml > /tmp/nginx.yaml
|
||||
$ vi /tmp/nginx.yaml
|
||||
$ kubectl replace -f /tmp/nginx.yaml
|
||||
replicationcontrollers/my-nginx-v4
|
||||
$ rm $TMP
|
||||
For more information, please see [kubectl patch](/docs/user-guide/kubectl/kubectl_patch/) document.
|
||||
|
||||
### kubectl edit
|
||||
|
||||
Alternatively, you may also update resources with `kubectl edit`:
|
||||
|
||||
```console
|
||||
$ kubectl edit pod my-nginx-1jgkf
|
||||
```
|
||||
|
||||
The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state.
|
||||
This is equivalent to first using `get` to fetch the resource, editing it in a text editor, and then using `replace` to update the resource with the edited version:
|
||||
|
||||
```shell
|
||||
$ kubectl get pod my-nginx-1jgkf -o yaml > /tmp/nginx.yaml
|
||||
$ vi /tmp/nginx.yaml
|
||||
# do some edit, and then save the file
|
||||
$ kubectl replace -f /tmp/nginx.yaml
|
||||
pod "my-nginx-1jgkf" replaced
|
||||
$ rm /tmp/nginx.yaml
|
||||
```
|
||||
|
||||
This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
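For instance (a one-line sketch), to use a particular editor for a single invocation:

```shell
# KUBE_EDITOR takes precedence over EDITOR for kubectl edit
$ KUBE_EDITOR="vim" kubectl edit pod my-nginx-1jgkf
```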
|
||||
|
||||
For more information, please see [kubectl edit](/docs/user-guide/kubectl/kubectl_edit/) document.
|
||||
|
||||
## Using configuration files
|
||||
|
||||
A more disciplined alternative to patch and edit is `kubectl apply`.
|
||||
|
||||
With apply, you can keep a set of configuration files in source control, where they can be maintained and versioned along with the code for the resources they configure. Then, when you're ready to push configuration changes to the cluster, you can run `kubectl apply`.
|
||||
|
||||
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
|
||||
|
||||
```console
|
||||
$ kubectl apply -f ./nginx-rc.yaml
|
||||
replicationcontroller "my-nginx-v4" configured
|
||||
```
|
||||
|
||||
As shown in the example above, the configuration used with `kubectl apply` is the same as the one used with `kubectl replace`. However, instead of deleting the existing resource and replacing it with a new one, `kubectl apply` modifies the configuration of the existing resource.
|
||||
|
||||
Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource.
|
||||
|
||||
Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them.
|
||||
|
||||
All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.
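To see what was recorded, you can inspect the resource's annotations (a sketch; the annotation key shown below is the one commonly attached by `kubectl apply` and may vary by release):

```shell
# Look for the last-applied-configuration annotation attached by kubectl apply
$ kubectl get rc my-nginx-v4 -o yaml | grep -A 1 last-applied-configuration
```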
|
||||
|
||||
## Disruptive updates
|
||||
|
||||
|
@ -399,11 +479,10 @@ In some cases, you may need to update resource fields that cannot be updated onc
|
|||
|
||||
```shell
|
||||
$ kubectl replace -f ./nginx-rc.yaml --force
|
||||
replicationcontrollers/my-nginx-v4
|
||||
replicationcontrollers/my-nginx-v4
|
||||
replicationcontrollers "my-nginx-v4" replaced
|
||||
```
|
||||
|
||||
## What's next?
|
||||
|
||||
- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/user-guide/introspection-and-debugging)
|
||||
- [Tips and tricks when working with config](/docs/user-guide/config-best-practices
|
||||
- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/user-guide/introspection-and-debugging/)
|
||||
- [Configuration Best Practices and Tips](/docs/user-guide/config-best-practices/)
|
||||
|
|
|
@ -57,4 +57,4 @@ Now that you've learned a bit about Heapster, feel free to try it out on your ow
|
|||
|
||||
***
|
||||
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
|
||||
*This article was originally posted in [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes).*
|
||||
*This article was originally posted in [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html).*
|
||||
|
|
|
@ -68,7 +68,9 @@ $ export CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}'
|
|||
Then update the default namespace:
|
||||
|
||||
```shell
|
||||
$ kubectl config set-context $(CONTEXT) --namespace=<insert-namespace-name-here>
|
||||
$ kubectl config set-context $CONTEXT --namespace=<insert-namespace-name-here>
|
||||
# Validate it
|
||||
$ kubectl config view | grep namespace:
|
||||
```
|
||||
|
||||
## Namespaces and DNS
|
||||
|
|
|
@ -3,6 +3,9 @@
|
|||
|
||||
This example shows how to assign a [pod](/docs/user-guide/pods/) to a specific [node](/docs/admin/node/) or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances like to ensure that your pod ends up on a machine with an SSD attached to it.
|
||||
|
||||
You can find all the files for this example [in our docs
|
||||
repo here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection).
|
||||
|
||||
### Step Zero: Prerequisites
|
||||
|
||||
This example assumes that you have a basic understanding of Kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/kubernetes/kubernetes#documentation).
|
||||
|
@ -38,24 +41,53 @@ spec:
|
|||
|
||||
Then add a nodeSelector like so:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
env: test
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
<b>nodeSelector:
|
||||
disktype: ssd</b>
|
||||
```
|
||||
{% include code.html language="yaml" file="pod.yaml" ghlink="/docs/user-guide/node-selection/pod.yaml" %}
|
||||
|
||||
When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
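As a sketch of the full workflow (the node name is a placeholder; use a real node from `kubectl get nodes`):

```shell
# Attach the label that the pod's nodeSelector expects
$ kubectl label nodes kubernetes-node-1 disktype=ssd
# After creating the pod, check which node it was assigned to
$ kubectl get pods -o wide
```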
|
||||
|
||||
#### Alpha feature in Kubernetes v1.2: Node Affinity
|
||||
|
||||
During the first half of 2016 we are rolling out a new mechanism, called *affinity*, for controlling which nodes your pods will be scheduled onto.
|
||||
Like `nodeSelector`, affinity is based on labels. But it allows you to write much more expressive rules.
|
||||
`nodeSelector` will continue to work during the transition, but will eventually be deprecated.
|
||||
|
||||
Kubernetes v1.2 offers an alpha version of the first piece of the affinity mechanism, called [node affinity](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/nodeaffinity.md).
|
||||
There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
|
||||
`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
|
||||
in the sense that the former specifies rules that *must* be met for a pod to schedule onto a node (just like
|
||||
`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
|
||||
will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
|
||||
to how `nodeSelector` works, if labels on a node change at runtime such that the rules on a pod are no longer
|
||||
met, the pod will still continue to run on the node. In the future we plan to offer
|
||||
`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution`
|
||||
except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
|
||||
|
||||
Node affinity is currently expressed using an annotation on Pod. In v1.3 it will use a field, and we will
|
||||
also introduce the second piece of the affinity mechanism, called pod affinity,
|
||||
which allows you to control whether a pod schedules onto a particular node based on which other pods are
|
||||
running on the node, rather than the labels on the node.
|
||||
|
||||
Here's an example of a pod that uses node affinity:
|
||||
|
||||
{% include code.html language="yaml" file="pod-with-node-affinity.yaml" ghlink="/docs/user-guide/node-selection/pod-with-node-affinity.yaml" %}
|
||||
|
||||
This node affinity rule says the pod can only be placed on a node with a label whose key is
|
||||
`kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
|
||||
among nodes that meet that criteria, nodes with a label whose key is `foo` and whose
|
||||
value is `bar` should be preferred.
|
||||
|
||||
If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
|
||||
to be scheduled onto a candidate node.
|
||||
|
||||
### Built-in node labels
|
||||
|
||||
In addition to labels you [attach yourself](#step-one-attach-label-to-the-node), nodes come pre-populated
|
||||
with a standard set of labels. As of Kubernetes v1.2 these labels are:
|
||||
* `kubernetes.io/hostname`
|
||||
* `failure-domain.beta.kubernetes.io/zone`
|
||||
* `failure-domain.beta.kubernetes.io/region`
|
||||
* `beta.kubernetes.io/instance-type`
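For example (a hedged sketch; the zone value is a placeholder), built-in labels can be used in a `nodeSelector` just like labels you attach yourself:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
  # Constrain the pod to nodes in a particular zone using a built-in label
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: us-central1-a
```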
|
||||
|
||||
### Conclusion
|
||||
|
||||
While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Take care, however, that the nodeSelector matches at least one node, because if it doesn't, the pod won't be scheduled at all.
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
---
|
||||
|
||||
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/user-guide/volumes) is suggested.
|
||||
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](volumes.md) is suggested.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
@ -63,7 +63,7 @@ The reclaim policy for a `PersistentVolume` tells the cluster what to do with th
|
|||
Each PV contains a spec and status, which is the specification and status of the volume.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: pv0003
|
||||
|
|
|
@ -4,6 +4,9 @@
|
|||
The purpose of this guide is to help you become familiar with [Kubernetes Persistent Volumes](/docs/user-guide/persistent-volumes/). By the end of the guide, we'll have
|
||||
nginx serving content from your persistent volume.
|
||||
|
||||
You can view all the files for this example in [the docs repo
|
||||
here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/persistent-volumes).
|
||||
|
||||
This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.
|
||||
|
||||
See [Persistent Storage design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/persistent-storage.md) for more information.
|
||||
|
|
|
@ -50,11 +50,10 @@ The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If
|
|||
|
||||
Three types of controllers are currently available:
|
||||
|
||||
- Use a [`Job`](/docs/user-guide/jobs) for pods which are expected to terminate (e.g. batch computations).
|
||||
- Use a [`ReplicationController`](/docs/user-guide/replication-controller) for pods which are not expected to
|
||||
|
||||
terminate, and where (e.g. web servers).
|
||||
- Use a [`DaemonSet`](/docs/admin/daemons): Use for pods which need to run 1 per machine because they provide a
|
||||
- Use a [`Job`](/docs/user-guide/jobs/) for pods which are expected to terminate (e.g. batch computations).
|
||||
- Use a [`ReplicationController`](/docs/user-guide/replication-controller/) for pods which are not expected to
|
||||
terminate (e.g. web servers).
|
||||
- Use a [`DaemonSet`](/docs/admin/daemons/) for pods which need to run 1 per machine because they provide a
|
||||
machine-specific system service.
|
||||
If you are unsure whether to use ReplicationController or Daemon, then see [Daemon Set versus
|
||||
Replication Controller](/docs/admin/daemons/#daemon-set-versus-replication-controller).
|
||||
|
@ -121,5 +120,3 @@ If a node dies or is disconnected from the rest of the cluster, some entity with
|
|||
* NodeController marks pod `Failed`
|
||||
* If running under a controller, pod will be recreated elsewhere
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
---
|
||||
---
|
||||
Pod templates are [pod](/docs/user-guide/pods/) specifications which are included in other objects, such as
|
||||
[Replication Controllers](/docs/user-guide/replication-controller/), [Jobs](/docs/user-guide/jobs/), and
|
||||
[DaemonSets](/docs/admin/daemons/). Controllers use Pod Templates to make actual pods.
|
||||
|
||||
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive.
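As an illustration (a minimal sketch, not taken from the referenced examples), a pod template embedded in a `ReplicationController` looks like this:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  # The pod template; changing it later does not affect pods already created from it
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```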
|
||||
|
||||
|
||||
## Future Work
|
||||
|
||||
A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](http://issue.k8s.io/170).
|
|
@ -4,42 +4,92 @@
|
|||
* TOC
|
||||
{:toc}
|
||||
|
||||
In Kubernetes, rather than individual application containers, _pods_ are the smallest deployable units that can be created, scheduled, and managed.
|
||||
|
||||
_pods_ are the smallest deployable units of computing that can be created and
|
||||
managed in Kubernetes.
|
||||
|
||||
## What is a _pod_?
|
||||
|
||||
A _pod_ (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context. Within that context, the applications may also have individual cgroup isolations applied. A pod models an application-specific "logical host" in a containerized environment. It may contain one or more applications which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual host.
|
||||
A _pod_ (as in a pod of whales or pea pod) is a group of one or more containers
|
||||
(such as Docker containers), the shared storage for those containers, and
|
||||
options about how to run the containers. Pods are always co-located and
|
||||
co-scheduled, and run in a shared context. A pod models an
|
||||
application-specific "logical host" - it contains one or more application
|
||||
containers which are relatively tightly coupled — in a pre-container
|
||||
world, they would have executed on the same physical or virtual machine.
|
||||
|
||||
The context of the pod can be defined as the conjunction of several Linux namespaces:
|
||||
While Kubernetes supports more container runtimes than just Docker, Docker is
|
||||
the most commonly known runtime, and it helps to describe pods in Docker terms.
|
||||
|
||||
* PID namespace (applications within the pod can see each other's processes)
|
||||
* network namespace (applications within the pod have access to the same IP and port space)
|
||||
* IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate)
|
||||
* UTS namespace (applications within the pod share a hostname)
|
||||
The shared context of a pod is a set of Linux namespaces, cgroups, and
|
||||
potentially other facets of isolation - the same things that isolate a Docker
|
||||
container. Within a pod's context, the individual applications may have
|
||||
further sub-isolations applied.
|
||||
|
||||
Applications within a pod also have access to shared volumes, which are defined at the pod level and made available in each application's filesystem. Additionally, a pod may define top-level cgroup isolations which form an outer bound to any individual isolation applied to constituent applications.
|
||||
Containers within a pod share an IP address and port space, and
|
||||
can find each other via `localhost`. They can also communicate with each
|
||||
other using standard inter-process communications like SystemV semaphores or
|
||||
POSIX shared memory. Containers in different pods have distinct IP addresses
|
||||
and can not communicate by IPC.
|
||||
|
||||
In terms of [Docker](https://www.docker.com/) constructs, a pod consists of a colocated group of Docker containers with shared [volumes](/docs/user-guide/volumes). PID namespace sharing is not yet implemented with Docker.
|
||||
Applications within a pod also have access to shared volumes, which are defined
|
||||
as part of a pod and are made available to be mounted into each application's
|
||||
filesystem.
|
||||
|
||||
Like individual application containers, pods are considered to be relatively ephemeral rather than durable entities. As discussed in [life of a pod](/docs/user-guide/pod-states), pods are scheduled to nodes and remain there until termination (according to restart policy) or deletion. When a node dies, the pods scheduled to that node are deleted. Specific pods are never rescheduled to new nodes; instead, they must be replaced (see [/{{page.version}}/docs/user-guide/replication controller](/docs/user-guide/replication-controller) for more details). (In the future, a higher-level API may support pod migration.)
|
||||
In terms of [Docker](https://www.docker.com/) constructs, a pod is modelled as
|
||||
a group of Docker containers with shared namespaces and shared
|
||||
[volumes](/docs/user-guide/volumes/). PID namespace sharing is not yet implemented in Docker.
|
||||
|
||||
Like individual application containers, pods are considered to be relatively
|
||||
ephemeral (rather than durable) entities. As discussed in [life of a
|
||||
pod](/docs/user-guide/pod-states/), pods are created, assigned a unique ID (UID), and
|
||||
scheduled to nodes where they remain until termination (according to restart
|
||||
policy) or deletion. If a node dies, the pods scheduled to that node are
|
||||
deleted, after a timeout period. A given pod (as defined by a UID) is not
|
||||
"rescheduled" to a new node; instead, it can be replaced by an identical pod,
|
||||
with even the same name if desired, but with a new UID (see [replication
|
||||
controller](/docs/user-guide/replication-controller/) for more details). (In the future, a
|
||||
higher-level API may support pod migration.)
|
||||
|
||||
When something is said to have the same lifetime as a pod, such as a volume,
|
||||
that means that it exists as long as that pod (with that UID) exists. If that
|
||||
pod is deleted for any reason, even if an identical replacement is created, the
|
||||
related thing (e.g. volume) is also destroyed and created anew.
|
||||
|
||||
## Motivation for pods
|
||||
|
||||
### Resource sharing and communication
|
||||
|
||||
Pods facilitate data sharing and communication among their constituents.
|
||||
|
||||
The applications in the pod all use the same network namespace/IP and port space, and can find and communicate with each other using localhost. Each pod has an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. The hostname is set to the pod's Name for the application containers within the pod. [More details on networking](/docs/admin/networking).
|
||||
|
||||
In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod.
|
||||
|
||||
### Management
|
||||
|
||||
Pods also simplify application deployment and management by providing a higher-level abstraction than the raw, low-level container interface. Pods serve as units of deployment and horizontal scaling/replication. Co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management are handled automatically.
|
||||
Pods are a model of the pattern of multiple cooperating processes which form a
|
||||
cohesive unit of service. They simplify application deployment and management
|
||||
by providing a higher-level abstraction than the set of their constituent
|
||||
applications. Pods serve as units of deployment, horizontal scaling, and
|
||||
replication. Colocation (co-scheduling), shared fate (e.g. termination),
|
||||
coordinated replication, resource sharing, and dependency management are
|
||||
handled automatically for containers in a pod.
|
||||
|
||||
### Resource sharing and communication
|
||||
|
||||
Pods enable data sharing and communication among their constituents.
|
||||
|
||||
The applications in a pod all use the same network namespace (same IP and port
|
||||
space), and can thus "find" each other and communicate using `localhost`.
|
||||
Because of this, applications in a pod must coordinate their usage of ports.
|
||||
Each pod has an IP address in a flat shared networking space that has full
|
||||
communication with other physical computers and pods across the network.
|
||||
|
||||
The hostname is set to the pod's Name for the application containers within the
|
||||
pod. [More details on networking](/docs/admin/networking/).
|
||||
|
||||
In addition to defining the application containers that run in the pod, the pod
|
||||
specifies a set of shared storage volumes. Volumes enable data to survive
|
||||
container restarts and to be shared among the applications within the pod.
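As a concrete sketch (names, images, and paths are illustrative), a two-container pod sharing a volume and the pod's network namespace might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  # A pod-level volume shared by both containers
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  # A helper container writing into the shared volume every few seconds
  - name: content-puller
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date > /pod-data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```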
|
||||
|
||||
## Uses of pods
|
||||
|
||||
Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs, such as:
|
||||
Pods can be used to host vertically integrated application stacks (e.g. LAMP),
|
||||
but their primary motivation is to support co-located, co-managed helper
|
||||
programs, such as:
|
||||
|
||||
* content management systems, file and data loaders, local cache managers, etc.
|
||||
* log and checkpoint backup, compression, rotation, snapshotting, etc.
|
||||
|
@ -47,30 +97,42 @@ Pods can be used to host vertically integrated application stacks, but their pri
|
|||
* proxies, bridges, and adapters
|
||||
* controllers, managers, configurators, and updaters
|
||||
|
||||
Individual pods are not intended to run multiple instances of the same application, in general.
|
||||
Individual pods are not intended to run multiple instances of the same
|
||||
application, in general.
|
||||
|
||||
For a longer explanation, see [The Distributed System ToolKit: Patterns for Composite Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns).
|
||||
For a longer explanation, see [The Distributed System ToolKit: Patterns for
|
||||
Composite
|
||||
Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html).
|
||||
|
||||
## Alternatives considered
|
||||
|
||||
_Why not just run multiple programs in a single (Docker) container?_
|
||||
|
||||
1. Transparency. Making the containers within the pod visible to the infrastructure enables the infrastructure to provide services to those containers, such as process management and resource monitoring. This facilitates a number of conveniences for users.
|
||||
2. Decoupling software dependencies. The individual containers may be rebuilt and redeployed independently. Kubernetes may even support live updates of individual containers someday.
|
||||
3. Ease of use. Users don't need to run their own process managers, worry about signal and exit-code propagation, etc.
|
||||
4. Efficiency. Because the infrastructure takes on more responsibility, containers can be lighter weight.
|
||||
1. Transparency. Making the containers within the pod visible to the
|
||||
infrastructure enables the infrastructure to provide services to those
|
||||
containers, such as process management and resource monitoring. This
|
||||
facilitates a number of conveniences for users.
|
||||
2. Decoupling software dependencies. The individual containers may be
|
||||
versioned, rebuilt and redeployed independently. Kubernetes may even support
|
||||
live updates of individual containers someday.
|
||||
3. Ease of use. Users don't need to run their own process managers, worry about
|
||||
signal and exit-code propagation, etc.
|
||||
4. Efficiency. Because the infrastructure takes on more responsibility,
|
||||
containers can be lighter weight.
|
||||
|
||||
_Why not support affinity-based co-scheduling of containers?_
|
||||
|
||||
That approach would provide co-location, but would not provide most of the benefits of pods, such as resource sharing, IPC, guaranteed fate sharing, and simplified management.
|
||||
That approach would provide co-location, but would not provide most of the
|
||||
benefits of pods, such as resource sharing, IPC, guaranteed fate sharing, and
|
||||
simplified management.
|
||||
|
||||
## Durability of pods (or lack thereof)
|
||||
|
||||
Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
|
||||
|
||||
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](/docs/user-guide/replication-controller)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
|
||||
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](/docs/user-guide/replication-controller/)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
|
||||
|
||||
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
|
||||
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
|
||||
|
||||
Pod is exposed as a primitive in order to facilitate:
|
||||
|
||||
|
@ -103,7 +165,7 @@ By default, all deletes are graceful within 30 seconds. The `kubectl delete` com
|
|||
|
||||
## Privileged mode for pod containers
|
||||
|
||||
From kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as seperate pods that don't need to be compiled into the kubelet.
|
||||
From kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate pods that don't need to be compiled into the kubelet.
|
||||
|
||||
If the master is running kubernetes v1.1 or higher, and the nodes are running a version lower than v1.1, then new privileged pods will be accepted by api-server, but will not be launched. They will be in a pending state.
|
||||
If a user calls `kubectl describe pod FooPodName`, the user can see the reason why the pod is in the pending state. The events table in the describe command output will say:
|
||||
|
@ -118,7 +180,4 @@ spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20b222db0)true'
|
|||
|
||||
Pod is a top-level resource in the kubernetes REST API. More details about the
|
||||
API object can be found at: [Pod API
|
||||
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions/#_v1_pod).
|
||||
|
||||
|
||||
|
||||
object](/docs/api-reference/v1/definitions/#_v1_pod).
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
---
|
||||
|
||||
To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/kubectl). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
|
||||
To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](kubectl/kubectl.md). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
|
||||
|
||||
## Installing kubectl
|
||||
|
||||
|
|
|
@ -110,21 +110,21 @@ For more details, see the [secrets document](/docs/user-guide/secrets), [example
|
|||
|
||||
Secrets can also be used to pass [image registry credentials](/docs/user-guide/images/#using-a-private-registry).
|
||||
|
||||
First, create a `.dockercfg` file, such as running `docker login <registry.domain>`.
|
||||
Then put the resulting `.dockercfg` file into a [secret resource](/docs/user-guide/secrets). For example:
|
||||
First, create a `.docker/config.json`, such as by running `docker login <registry.domain>`.
|
||||
Then put the resulting `.docker/config.json` file into a [secret resource](secrets.md). For example:
|
||||
|
||||
```shell
|
||||
$ docker login
|
||||
Username: janedoe
|
||||
Password: *******
|
||||
Password: ●●●●●●●●●●●
|
||||
Email: jdoe@example.com
|
||||
WARNING: login credentials saved in /Users/jdoe/.dockercfg.
|
||||
WARNING: login credentials saved in /Users/jdoe/.docker/config.json.
|
||||
Login Succeeded
|
||||
|
||||
$ echo $(cat ~/.dockercfg)
|
||||
$ echo $(cat ~/.docker/config.json)
|
||||
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }
|
||||
|
||||
$ cat ~/.dockercfg | base64
|
||||
$ cat ~/.docker/config.json | base64
|
||||
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
|
||||
$ cat > /tmp/image-pull-secret.yaml <<EOF
|
||||
|
@ -133,11 +133,11 @@ kind: Secret
|
|||
metadata:
|
||||
name: myregistrykey
|
||||
data:
|
||||
.dockercfg: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
type: kubernetes.io/dockercfg
|
||||
.dockerconfigjson: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
|
||||
type: kubernetes.io/dockerconfigjson
|
||||
EOF
|
||||
|
||||
$ kubectl create -f ./image-pull-secret.yaml
|
||||
$ kubectl create -f /tmp/image-pull-secret.yaml
|
||||
secrets/myregistrykey
|
||||
```
|
||||
|
||||
|
@ -196,7 +196,7 @@ spec:
|
|||
name: www-data
|
||||
```
|
||||
|
||||
More examples can be found in our [blog article](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns) and [presentation slides](http://www.slideshare.net/Docker/slideshare-burns).
|
||||
More examples can be found in our [blog article](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) and [presentation slides](http://www.slideshare.net/Docker/slideshare-burns).
|
||||
|
||||
## Resource management
|
||||
|
||||
|
|
|
@ -50,6 +50,11 @@ You may need to wait for a minute or two for the external ip address to be provi
|
|||
|
||||
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a [firewall to allow traffic on port 80](/docs/user-guide/services-firewalls).
|
||||
|
||||
If you're running on AWS, Kubernetes creates an ELB for you. ELBs use host
|
||||
names, not IPs, so you will have to do `kubectl describe svc my-nginx` and look
|
||||
for the `LoadBalancer Ingress` host name. Traffic from external IPs is allowed
|
||||
automatically.
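For example (the service name `my-nginx` comes from this walkthrough; the hostname shown is made up):

```shell
$ kubectl describe svc my-nginx | grep "LoadBalancer Ingress"
LoadBalancer Ingress:   a12345abcdef.us-west-2.elb.amazonaws.com
```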
|
||||
|
||||
## Killing the application
|
||||
|
||||
To kill the application and delete its containers and public IP address, do:
|
||||
|
|
|
@ -6,42 +6,171 @@
|
|||
|
||||
## What is a _replication controller_?
|
||||
|
||||
A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
|
||||
A _replication controller_ ensures that a specified number of pod "replicas" are running at any one
|
||||
time. In other words, a replication controller makes sure that a pod or homogeneous set of pods are
|
||||
always up and available.
|
||||
If there are too many pods, it will kill some. If there are too few, the
|
||||
replication controller will start more. Unlike manually created pods, the pods maintained by a
|
||||
replication controller are automatically replaced if they fail, get deleted, or are terminated.
|
||||
For example, your pods get re-created on a node after disruptive maintenance such as a kernel upgrade.
|
||||
For this reason, we recommend that you use a replication controller even if your application requires
|
||||
only a single pod. You can think of a replication controller as something similar to a process supervisor,
|
||||
but rather than individual processes on a single node, the replication controller supervises multiple pods
|
||||
across multiple nodes.
|
||||
|
||||
As discussed in [life of a pod](/docs/user-guide/pod-states), `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`. (Note: If `RestartPolicy` is not set, the default value is `Always`.) `ReplicationController` should refuse to instantiate any pod that has a different restart policy. As discussed in [issue #503](http://issue.k8s.io/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future.
|
||||
Replication Controller is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in
|
||||
kubectl commands.
|
||||
|
||||
A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
|
||||
A simple case is to create 1 Replication Controller object in order to reliably run one instance of
|
||||
a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated
|
||||
service, such as web servers.
|
||||
|
||||
## How does a replication controller work?
|
||||
## Running an example Replication Controller
|
||||
|
||||
### Pod template
|
||||
Here is an example Replication Controller config. It runs 3 copies of the nginx web server.
|
||||
|
||||
A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](http://issue.k8s.io/170).
|
||||
{% include code.html language="yaml" file="replication.yaml" ghlink="/docs/user-guide/replication.yaml" %}
|
||||
|
||||
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive, as demonstrated by the use cases explained below.
|
||||
|
||||
Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself.
|
||||
|
||||
### Labels
|
||||
Run the example replication controller by downloading the example file and then running this command:
|
||||
|
||||
The population of pods that a replication controller is monitoring is defined with a [label selector](/docs/user-guide/labels/#label-selectors), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled to their definition. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that approach increases complexity of management operations, for both clients and the system.
|
||||
```shell
|
||||
$ kubectl create -f ./replication.yaml
|
||||
replicationcontrollers/nginx
|
||||
```
|
||||
|
||||
The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets. If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself with --cascade=false until there are no controllers with an overlapping superset of selectors.
|
||||
Check on the status of the replication controller using this command:
|
||||
|
||||
Note that replication controllers may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers.
|
||||
```shell
|
||||
$ kubectl describe replicationcontrollers/nginx
|
||||
Name: nginx
|
||||
Namespace: default
|
||||
Image(s): nginx
|
||||
Selector: app=nginx
|
||||
Labels: app=nginx
|
||||
Replicas: 3 current / 3 desired
|
||||
Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
|
||||
Events:
|
||||
FirstSeen LastSeen Count From
|
||||
SubobjectPath Reason Message
|
||||
Thu, 24 Sep 2015 10:38:20 -0700 Thu, 24 Sep 2015 10:38:20 -0700 1
|
||||
{replication-controller } SuccessfulCreate Created pod: nginx-qrm3m
|
||||
Thu, 24 Sep 2015 10:38:20 -0700 Thu, 24 Sep 2015 10:38:20 -0700 1
|
||||
{replication-controller } SuccessfulCreate Created pod: nginx-3ntk0
|
||||
Thu, 24 Sep 2015 10:38:20 -0700 Thu, 24 Sep 2015 10:38:20 -0700 1
|
||||
{replication-controller } SuccessfulCreate Created pod: nginx-4ok8v
|
||||
```
|
||||
|
||||
Here, 3 pods have been made, but none are running yet, perhaps because the image is being pulled.
|
||||
A little later, the same command may show:
|
||||
|
||||
```shell
|
||||
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
```
|
||||
|
||||
To list all the pods that belong to the rc in a machine readable form, you can use a command like this:
|
||||
|
||||
```shell
|
||||
$ pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
|
||||
echo $pods
|
||||
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
|
||||
```
|
||||
|
||||
Here, the selector is the same as the selector for the replication controller (seen in the
|
||||
`kubectl describe` output, and in a different form in `replication.yaml`). The `--output=jsonpath` option
|
||||
specifies an expression that just gets the name from each pod in the returned list.
|
||||
|
||||
|
||||
## Writing a Replication Controller Spec
|
||||
|
||||
As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields. For
|
||||
general information about working with config files, see [here](/docs/user-guide/simple-yaml/),
|
||||
[here](/docs/user-guide/configuring-containers/), and [here](/docs/user-guide/working-with-resources/).
|
||||
|
||||
A Replication Controller also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
|
||||
|
||||
### Pod Template
|
||||
|
||||
The `.spec.template` is the only required field of the `.spec`.
|
||||
|
||||
The `.spec.template` is a [pod template](/docs/user-guide/replication-controller/#pod-template). It has exactly
|
||||
the same schema as a [pod](pods.md), except it is nested and does not have an `apiVersion` or
|
||||
`kind`.
|
||||
|
||||
In addition to required fields for a Pod, a pod template in a replication controller must specify appropriate
|
||||
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
|
||||
|
||||
Only a [`RestartPolicy`](pod-states.md) equal to `Always` is allowed, which is the default
|
||||
if not specified.
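As a rough sketch of how the template nests inside the spec (the controller name and image are illustrative, not taken from this page):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx                  # illustrative name
spec:
  replicas: 3
  selector:
    app: nginx
  template:                    # pod template: same schema as a Pod, minus apiVersion and kind
    metadata:
      labels:
        app: nginx             # must match .spec.selector
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always    # the only allowed value; may be omitted
```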
|
||||
|
||||
For local container restarts, replication controllers delegate to an agent on the node,
|
||||
for example the [Kubelet](/docs/admin/kubelet.md) or Docker.
|
||||
|
||||
### Labels on the Replication Controller
|
||||
|
||||
The replication controller can itself have labels (`.metadata.labels`). Typically, you
|
||||
would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
|
||||
then it is defaulted to `.spec.template.metadata.labels`. However, they are allowed to be
|
||||
different, and the `.metadata.labels` do not affect the behavior of the replication controller.
|
||||
|
||||
### Pod Selector
|
||||
|
||||
The `.spec.selector` field is a [label selector](/docs/user-guide/labels/#label-selectors). A replication
|
||||
controller manages all the pods with labels which match the selector. It does not distinguish
|
||||
between pods which it created or deleted versus pods which some other person or process created or
|
||||
deleted. This allows the replication controller to be replaced without affecting the running pods.
|
||||
|
||||
If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
|
||||
be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to
|
||||
`.spec.template.metadata.labels`.
|
||||
|
||||
Also, you should not normally create any pods whose labels match this selector, either directly, via
|
||||
another ReplicationController or via another controller such as Job. Otherwise, the
|
||||
ReplicationController will think that those pods were created by it. Kubernetes will not stop you
|
||||
from doing this.
|
||||
|
||||
If you do end up with multiple controllers that have overlapping selectors, you
|
||||
will have to manage the deletion yourself (see [below](#updating-a-replication-controller)).
|
||||
|
||||
### Multiple Replicas
|
||||
|
||||
You can specify how many pods should run concurrently by setting `.spec.replicas` to the number
|
||||
of pods you would like to have running concurrently. The number running at any time may be higher
|
||||
or lower, for example if the replica count was just increased or decreased, or if a pod is gracefully
|
||||
shut down and a replacement starts early.
|
||||
|
||||
If you do not specify `.spec.replicas`, then it defaults to 1.
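For example, using the `nginx` controller created above, you could change the desired count with `kubectl scale` (a sketch; the reported output may differ slightly between releases):

```shell
$ kubectl scale rc nginx --replicas=5
replicationcontroller "nginx" scaled
```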
|
||||
|
||||
## Working with Replication Controllers
|
||||
|
||||
### Deleting a Replication Controller and its Pods
|
||||
|
||||
To delete a replication controller and all its pods, use [`kubectl
|
||||
delete`](/docs/user-guide/kubectl/kubectl_delete/). Kubectl will scale the replication controller to zero and wait
|
||||
for it to delete each pod before deleting the replication controller itself. If this kubectl
|
||||
command is interrupted, it can be restarted.
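For the `nginx` controller from the example above, that looks like:

```shell
$ kubectl delete rc nginx
replicationcontroller "nginx" deleted
```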
|
||||
|
||||
When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
|
||||
0, wait for pod deletions, then delete the replication controller).
|
||||
|
||||
### Deleting just a Replication Controller
|
||||
|
||||
You can delete a replication controller without affecting any of its pods.
|
||||
|
||||
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/).
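For example, again using the `nginx` controller from above:

```shell
$ kubectl delete rc nginx --cascade=false
replicationcontroller "nginx" deleted
```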
|
||||
|
||||
When using the REST API or go client library, simply delete the replication controller object.
|
||||
|
||||
Once the original is deleted, you can create a new replication controller to replace it. As long
|
||||
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
|
||||
However, it will not make any effort to make existing pods match a new, different pod template.
|
||||
To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).
|
||||
|
||||
### Isolating pods from a Replication Controller
|
||||
|
||||
Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
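A sketch using the pod names from the example earlier on this page (the replacement label value is invented):

```shell
# Relabel one pod so it no longer matches the controller's app=nginx selector.
# The controller starts a replacement, and the relabeled pod can be inspected in isolation.
$ kubectl label pod nginx-qrm3m app=nginx-debug --overwrite
pod "nginx-qrm3m" labeled
```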
|
||||
|
||||
Similarly, deleting a replication controller using the API does not affect the pods it created. Its `replicas` field must first be set to `0` in order to delete the pods controlled. (Note that the client tool, `kubectl`, provides a single operation, [delete](/docs/user-guide/kubectl/kubectl_delete), to delete both the replication controller and the pods it controls. If you want to leave the pods running when deleting a replication controller, specify `--cascade=false`. However, there is no such operation in the API at the moment.)
|
||||
|
||||
## Responsibilities of the replication controller
|
||||
|
||||
The replication controller simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
|
||||
|
||||
The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
|
||||
|
||||
The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Spinnaker](http://spinnaker.io/) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
|
||||
|
||||
## Common usage patterns
|
||||
|
||||
### Rescheduling
|
||||
|
@ -71,8 +200,46 @@ In addition to running multiple releases of an application while a rolling updat
|
|||
|
||||
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a replication controller with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another replication controller with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the replication controllers separately to test things out, monitor the results, etc.
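A hedged sketch of the labels and selectors involved (controller names and images are invented; only the fields relevant to the pattern matter here). A service selecting just `tier=frontend, environment=prod` then spans both tracks:

```yaml
# Stable track: carries the bulk of the replicas.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable        # illustrative name
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example.com/frontend:stable    # illustrative image
---
# Canary track: a single replica of the new version.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary        # illustrative name
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example.com/frontend:canary    # illustrative image
```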
|
||||
|
||||
### Using Replication Controllers with Services
|
||||
|
||||
Multiple replication controllers can sit behind a single service, so that, for example, some traffic
|
||||
goes to the old version, and some goes to the new version.
|
||||
|
||||
A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
|
||||
|
||||
## Writing programs for Replication
|
||||
|
||||
Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself.
|
||||
|
||||
## Responsibilities of the replication controller
|
||||
|
||||
The replication controller simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
|
||||
|
||||
The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
|
||||
|
||||
The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
|
||||
|
||||
|
||||
## API Object
|
||||
|
||||
Replication controller is a top-level resource in the kubernetes REST API. More details about the
|
||||
API object can be found at: [ReplicationController API
|
||||
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions/#_v1_replicationcontroller).
|
||||
object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller).
|
||||
|
||||
## Alternatives to Replication Controller
|
||||
|
||||
### Bare Pods
|
||||
|
||||
Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
|
||||
|
||||
### Job
|
||||
|
||||
Use a [Job](/docs/user-guide/jobs/) instead of a replication controller for pods that are expected to terminate on their own
|
||||
(i.e. batch jobs).
|
||||
|
||||
### DaemonSet
|
||||
|
||||
Use a [DaemonSet](/docs/admin/daemons/) instead of a replication controller for pods that provide a
|
||||
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
|
||||
to the machine lifetime: the pod needs to be running on the machine before other pods start, and is
|
||||
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
|
||||
|
|
|
@ -19,10 +19,12 @@ more control over how it is used, and reduces the risk of accidental exposure.
|
|||
Users can create secrets, and the system also creates some secrets.
|
||||
|
||||
To use a secret, a pod needs to reference the secret.
|
||||
A secret can be used with a pod in two ways: either as files in a [volume](/docs/user-guide/volumes) mounted on one or more of
|
||||
A secret can be used with a pod in two ways: as files in a [volume](/docs/user-guide/volumes) mounted on one or more of
|
||||
its containers, or used by kubelet when pulling images for the pod.
|
||||
|
||||
### Service Accounts Automatically Create and Attach Secrets with API Credentials
|
||||
### Built-in Secrets
|
||||
|
||||
#### Service Accounts Automatically Create and Attach Secrets with API Credentials
|
||||
|
||||
Kubernetes automatically creates secrets which contain credentials for
|
||||
accessing the API and it automatically modifies your pods to use this type of
|
||||
|
@ -35,9 +37,70 @@ this is the recommended workflow.
|
|||
See the [Service Account](/docs/user-guide/service-accounts) documentation for more
|
||||
information on how Service Accounts work.
|
||||
|
||||
### Creating a Secret Manually
|
||||
### Creating your own Secrets
|
||||
|
||||
This is an example of a simple secret, in yaml format:
|
||||
#### Creating a Secret Using kubectl create secret
|
||||
|
||||
Say that some pods need to access a database. The
|
||||
username and password that the pods should use are in the files
|
||||
`./username.txt` and `./password.txt` on your local machine.
|
||||
|
||||
```shell
|
||||
# Create files needed for rest of example.
|
||||
$ echo "admin" > ./username.txt
|
||||
$ echo "1f2d1e2e67df" > ./password.txt
|
||||
```
|
||||
|
||||
The `kubectl create secret` command
|
||||
packages these files into a Secret and creates
|
||||
the object on the Apiserver.
|
||||
|
||||
```shell
|
||||
$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
|
||||
secret "db-user-pass" created
|
||||
```
|
||||
|
||||
You can check that the secret was created like this:
|
||||
|
||||
```shell
|
||||
$ kubectl get secrets
|
||||
NAME TYPE DATA AGE
|
||||
db-user-pass Opaque 2 51s
|
||||
$ kubectl describe secrets/db-user-pass
|
||||
Name: db-user-pass
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Type: Opaque
|
||||
|
||||
Data
|
||||
====
|
||||
password.txt: 13 bytes
|
||||
username.txt: 6 bytes
|
||||
```
|
||||
|
||||
Note that neither `get` nor `describe` shows the contents of the file by default.
|
||||
This is to protect the secret from being exposed accidentally to an onlooker,
|
||||
or from being stored in a terminal log.
|
||||
|
||||
See [decoding a secret](#decoding-a-secret) for how to see the contents.
|
||||
|
||||
#### Creating a Secret Manually
|
||||
|
||||
You can also create a secret object in a file first,
|
||||
in json or yaml format, and then create that object.
|
||||
|
||||
Each item must be base64 encoded:
|
||||
|
||||
```shell
|
||||
$ echo "admin" | base64
|
||||
YWRtaW4K
|
||||
$ echo "1f2d1e2e67df" | base64
|
||||
MWYyZDFlMmU2N2RmCg==
|
||||
```
|
||||
|
||||
Now write a secret object that looks like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -46,22 +109,69 @@ metadata:
|
|||
name: mysecret
|
||||
type: Opaque
|
||||
data:
|
||||
password: dmFsdWUtMg0K
|
||||
username: dmFsdWUtMQ0K
|
||||
password: MWYyZDFlMmU2N2RmCg==
|
||||
username: YWRtaW4K
|
||||
```
|
||||
|
||||
The data field is a map. Its keys must match
|
||||
[`DNS_SUBDOMAIN`](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md), except that leading dots are also
|
||||
allowed. The values are arbitrary data, encoded using base64. The values of
|
||||
username and password in the example above, before base64 encoding,
|
||||
are `value-1` and `value-2`, respectively, with carriage return and newline characters at the end.
|
||||
[`DNS_SUBDOMAIN`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/design/identifiers.md), except that leading dots are also
|
||||
allowed. The values are arbitrary data, encoded using base64.
|
||||
|
||||
Create the secret using [`kubectl create`](/docs/user-guide/kubectl/kubectl_create).
|
||||
Create the secret using [`kubectl create`](kubectl/kubectl_create.md):
|
||||
|
||||
Once the secret is created, you need to modify your pod to specify
|
||||
that it should use the secret.
|
||||
```shell
|
||||
$ kubectl create -f ./secret.yaml
|
||||
secret "mysecret" created
|
||||
```
|
||||
|
||||
### Manually specifying a Secret to be Mounted on a Pod
|
||||
**Encoding Note:** The serialized JSON and YAML values of secret data are encoded as
|
||||
base64 strings. Newlines are not valid within these strings and must be
|
||||
omitted (i.e., do not use the `-b` option of `base64`, which breaks long lines).
|
||||
|
||||
#### Decoding a Secret
|
||||
|
||||
Get back the secret created in the previous section:
|
||||
|
||||
```shell
|
||||
$ kubectl get secret mysecret -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
password: MWYyZDFlMmU2N2RmCg==
|
||||
username: YWRtaW4K
|
||||
kind: Secret
|
||||
metadata:
|
||||
creationTimestamp: 2016-01-22T18:41:56Z
|
||||
name: mysecret
|
||||
namespace: default
|
||||
resourceVersion: "164619"
|
||||
selfLink: /api/v1/namespaces/default/secrets/mysecret
|
||||
uid: cfee02d6-c137-11e5-8d73-42010af00002
|
||||
type: Opaque
|
||||
```
|
||||
|
||||
Decode the password field:
|
||||
|
||||
```shell
|
||||
$ echo "MWYyZDFlMmU2N2RmCg==" | base64 -D
|
||||
1f2d1e2e67df
|
||||
```
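The `-D` flag above is the BSD/macOS spelling; on Linux, the GNU coreutils equivalent is typically `-d` or `--decode`:

```shell
$ echo "MWYyZDFlMmU2N2RmCg==" | base64 --decode
1f2d1e2e67df
```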
|
||||
|
||||
### Using Secrets
|
||||
|
||||
Secrets can be mounted as data volumes or be exposed as environment variables to
|
||||
be used by a container in a pod. They can also be used by other parts of the
|
||||
system, without being directly exposed to the pod. For example, they can hold
|
||||
credentials that other parts of the system should use to interact with external
|
||||
systems on your behalf.
|
||||
|
||||
#### Using Secrets as Files from a Pod
|
||||
|
||||
To consume a Secret in a volume in a Pod:
|
||||
|
||||
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
|
||||
1. Modify your Pod definition to add a volume under `spec.volumes[]`. Name the volume anything, and have a `spec.volumes[].secret.secretName` field equal to the name of the secret object.
|
||||
1. Add a `spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `spec.containers[].volumeMounts[].readOnly = true` and `spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear.
|
||||
1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`.
|
||||
|
||||
This is an example of a pod that mounts a secret in a volume:
|
||||
|
||||
|
@ -93,17 +203,87 @@ This is an example of a pod that mounts a secret in a volume:
|
|||
}
|
||||
```
|
||||
|
||||
Each secret you want to use needs its own `spec.volumes`.
|
||||
Each secret you want to use needs to be referred to in `spec.volumes`.
|
||||
|
||||
If there are multiple containers in the pod, then each container needs its
|
||||
own `volumeMounts` block, but only one `spec.volumes` is needed per secret.
|
||||
|
||||
You can package many files into one secret, or use many secrets,
|
||||
whichever is convenient.
|
||||
You can package many files into one secret, or use many secrets, whichever is convenient.
|
||||
|
||||
See another example of creating a secret and a pod that consumes that secret in a volume [here](/docs/user-guide/secrets/).
|
||||
|
||||
### Manually specifying an imagePullSecret
|
||||
##### Consuming Secret Values from Volumes
|
||||
|
||||
Inside the container that mounts a secret volume, the secret keys appear as
|
||||
files and the secret values are base-64 decoded and stored inside these files.
|
||||
This is the result of commands
|
||||
executed inside the container from the example above:
|
||||
|
||||
```shell
|
||||
$ ls /etc/foo/
|
||||
username
|
||||
password
|
||||
$ cat /etc/foo/username
|
||||
admin
|
||||
$ cat /etc/foo/password
|
||||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
The program in a container is responsible for reading the secret(s) from the
|
||||
files.
|
||||
|
||||
#### Using Secrets as Environment Variables
|
||||
|
||||
To use a secret in an environment variable in a pod:
|
||||
|
||||
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
|
||||
1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[x].valueFrom.secretKeyRef`.
|
||||
1. Modify your image and/or command line so that the program looks for values in the specified environment variables.
|
||||
|
||||
This is an example of a pod that consumes secrets through environment variables:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: secret-env-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: mycontainer
|
||||
image: redis
|
||||
env:
|
||||
- name: SECRET_USERNAME
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: mysecret
|
||||
key: username
|
||||
- name: SECRET_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: mysecret
|
||||
key: password
|
||||
restartPolicy: Never
|
||||
```
|
||||
|
||||
##### Consuming Secret Values from Environment Variables
|
||||
|
||||
Inside a container that consumes a secret via environment variables, the secret keys appear as
|
||||
normal environment variables containing the base-64 decoded values of the secret data.
|
||||
This is the result of commands executed inside the container from the example above:
|
||||
|
||||
```console
|
||||
$ echo $SECRET_USERNAME
|
||||
admin
|
||||
$ echo $SECRET_PASSWORD
|
||||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
#### Using imagePullSecrets
|
||||
|
||||
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
|
||||
password to the Kubelet so it can pull a private image on behalf of your Pod.
|
||||
|
||||
##### Manually specifying an imagePullSecret
|
||||
|
||||
Use of imagePullSecrets is described in the [images documentation](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod).
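As a rough sketch, a pod that references the `myregistrykey` secret created earlier on this page (the pod name and image are invented):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod        # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # image hosted in a private registry (made up)
  imagePullSecrets:
  - name: myregistrykey          # kubelet uses this secret when pulling the image
```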
|
||||
|
||||
|
@ -111,13 +291,12 @@ Use of imagePullSecrets is described in the [images documentation](/docs/user-gu
|
|||
|
||||
You can manually create an imagePullSecret, and reference it from
|
||||
a serviceAccount. Any pods created with that serviceAccount
|
||||
or that default to use that serviceAccount, will get have the imagePullSecret of the
|
||||
or that default to use that serviceAccount, will get their imagePullSecret
|
||||
field set to that of the service account.
|
||||
See [here](/docs/user-guide/service-accounts/#adding-imagepullsecrets-to-a-service-account)
|
||||
for a detailed explanation of that process.
|
||||
|
||||
|
||||
### Automatic Mounting of Manually Created Secrets
|
||||
#### Automatic Mounting of Manually Created Secrets
|
||||
|
||||
We plan to extend the service account behavior so that manually created
|
||||
secrets (e.g. one containing a token for accessing a github account)
|
||||
|
@ -146,31 +325,6 @@ controller. It does not include pods created via the kubelets
|
|||
`--manifest-url` flag, its `--config` flag, or its REST API (these are
|
||||
not common ways to create pods.)
|
||||
|
||||
### Consuming Secret Values
|
||||
|
||||
Inside the container that mounts a secret volume, the secret keys appear as
|
||||
files and the secret values are base-64 decoded and stored inside these files.
|
||||
This is the result of commands
|
||||
executed inside the container from the example above:
|
||||
|
||||
```shell
|
||||
$ ls /etc/foo/
|
||||
username
|
||||
password
|
||||
$ cat /etc/foo/username
|
||||
value-1
|
||||
$ cat /etc/foo/password
|
||||
value-2
|
||||
```
|
||||
|
||||
The program in a container is responsible for reading the secret(s) from the
|
||||
files. Currently, if a program expects a secret to be stored in an environment
|
||||
variable, then the user needs to modify the image to populate the environment
|
||||
variable from the file as an step before running the main program. Future
|
||||
versions of Kubernetes are expected to provide more automation for populating
|
||||
environment variables from files.
|
||||
|
||||
|
||||
### Secret and Pod Lifetime interaction
|
||||
|
||||
When a pod is created via the API, there is no check whether a referenced
|
||||
|
@ -203,25 +357,14 @@ update the data of existing secrets, but to create new ones with distinct names.
|
|||
|
||||
### Use-Case: Pod with ssh keys
|
||||
|
||||
To create a pod that uses an ssh key stored as a secret, we first need to create a secret:
|
||||
Create a secret containing some ssh keys:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "ssh-key-secret"
|
||||
},
|
||||
"data": {
|
||||
"id-rsa": "dmFsdWUtMg0KDQo=",
|
||||
"id-rsa.pub": "dmFsdWUtMQ0K"
|
||||
}
|
||||
}
|
||||
```shell
|
||||
$ kubectl create secret generic my-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
|
||||
```
|
||||
|
||||
**Note:** The serialized JSON and YAML values of secret data are encoded as
|
||||
base64 strings. Newlines are not valid within these strings and must be
|
||||
omitted.
|
||||
**Security Note:** Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and which you can revoke if the keys are compromised.
|
||||
|
||||
|
||||
Now we can create a pod which references the secret with the ssh key and
|
||||
consumes it in a volume:
|
||||
|
@ -277,39 +420,16 @@ This example illustrates a pod which consumes a secret containing prod
|
|||
credentials and another pod which consumes a secret with test environment
|
||||
credentials.
|
||||
|
||||
The secrets:
|
||||
Make the secrets:
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "List",
|
||||
"items":
|
||||
[{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "prod-db-secret"
|
||||
},
|
||||
"data": {
|
||||
"password": "dmFsdWUtMg0KDQo=",
|
||||
"username": "dmFsdWUtMQ0K"
|
||||
}
|
||||
},
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "test-db-secret"
|
||||
},
|
||||
"data": {
|
||||
"password": "dmFsdWUtMg0KDQo=",
|
||||
"username": "dmFsdWUtMQ0K"
|
||||
}
|
||||
}]
|
||||
}
|
||||
```shell
|
||||
$ kubectl create secret generic prod-db-password --from-literal=user=produser --from-literal=password=Y4nys7f11
|
||||
secret "prod-db-password" created
|
||||
$ kubectl create secret generic test-db-password --from-literal=user=testuser --from-literal=password=iluvtests
|
||||
secret "test-db-password" created
|
||||
```
|
||||
|
||||
The pods:
|
||||
Now make the pods:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -414,12 +534,73 @@ one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
|
|||
"containers": [
|
||||
{
|
||||
"name": "db-client-container",
|
||||
"image": "myClientImage",
|
||||
"image": "myClientImage"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Use-case: Dotfiles in secret volume
|
||||
|
||||
In order to make a piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply
|
||||
make that key begin with a dot. For example, when the following secret is mounted into a volume:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "dotfile-secret"
|
||||
},
|
||||
"data": {
|
||||
".secret-file": "dmFsdWUtMg0KDQo=",
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "secret-dotfiles-pod",
|
||||
},
|
||||
"spec": {
|
||||
"volumes": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"secret": {
|
||||
"secretName": "dotfile-secret"
|
||||
}
|
||||
}
|
||||
],
|
||||
"containers": [
|
||||
{
|
||||
"name": "dotfile-test-container",
|
||||
"image": "gcr.io/google_containers/busybox",
|
||||
"command": "ls -l /etc/secret-volume"
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "secret-volume",
|
||||
"readOnly": true,
|
||||
"mountPath": "/etc/secret-volume"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
The `secret-volume` will contain a single file, called `.secret-file`, and
|
||||
the `dotfile-test-container` will have this file present at the path
|
||||
`/etc/secret-volume/.secret-file`.
|
||||
|
||||
**NOTE**
|
||||
|
||||
Files beginning with dot characters are hidden from the output of `ls -l`;
|
||||
you must use `ls -la` to see them when listing directory contents.
|
||||
|
||||
|
||||
### Use-case: Secret visible to one container in a pod
|
||||
|
||||
<a name="use-case-two-containers"></a>
|
||||
|
@ -488,4 +669,4 @@ Pod level](#use-case-two-containers).
|
|||
- Currently, anyone with root on any node can read any secret from the apiserver,
|
||||
by impersonating the kubelet. It is a planned feature to only send secrets to
|
||||
nodes that actually require them, to restrict the impact of a root exploit on a
|
||||
single node.
|
||||
single node.
|
||||
|
|
|
@ -13,7 +13,7 @@ started](/docs/getting-started-guides/) for installation instructions for your p
|
|||
|
||||
A secret contains a set of named byte arrays.
|
||||
|
||||
Use the [`examples/secrets/secret.yaml`](/docs/user-guide/secrets/secret.yaml) file to create a secret:
|
||||
Use the [`secret.yaml`](/docs/user-guide/secrets/secret.yaml) file to create a secret:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/secrets/secret.yaml
|
||||
|
@ -44,7 +44,7 @@ data-2: 11 bytes
|
|||
Pods consume secrets in volumes. Now that you have created a secret, you can create a pod that
|
||||
consumes it.
|
||||
|
||||
Use the [`examples/secrets/secret-pod.yaml`](/docs/user-guide/secrets/secret-pod.yaml) file to create a Pod that consumes the secret.
|
||||
Use the [`secret-pod.yaml`](/docs/user-guide/secrets/secret-pod.yaml) file to create a Pod that consumes the secret.
|
||||
|
||||
```shell
|
||||
$ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
|
||||
|
|
|
@ -123,7 +123,8 @@ Type: kubernetes.io/service-account-token
|
|||
Data
|
||||
====
|
||||
ca.crt: 1220 bytes
|
||||
token:
|
||||
token: ...
|
||||
namespace: 7 bytes
|
||||
```
|
||||
|
||||
> Note that the content of `token` is elided here.
|
||||
|
@ -135,8 +136,8 @@ Next, verify it has been created. For example:
|
|||
|
||||
```shell
|
||||
$ kubectl get secrets myregistrykey
|
||||
NAME TYPE DATA
|
||||
myregistrykey kubernetes.io/dockercfg 1
|
||||
NAME TYPE DATA
|
||||
myregistrykey kubernetes.io/dockerconfigjson 1
|
||||
```
|
||||
|
||||
Next, read/modify/write the service account for the namespace to use this secret as an imagePullSecret
|
||||
|
|
|
@ -114,7 +114,7 @@ In any of these scenarios you can define a service without a selector:
|
|||
}
|
||||
```
|
||||
|
||||
Because this has no selector, the corresponding `Endpoints` object will not be
|
||||
Because this service has no selector, the corresponding `Endpoints` object will not be
|
||||
created. You can manually map the service to your own specific endpoints:
|
||||
|
||||
```json
|
||||
|
@ -127,10 +127,10 @@ created. You can manually map the service to your own specific endpoints:
|
|||
"subsets": [
|
||||
{
|
||||
"addresses": [
|
||||
{ "IP": "1.2.3.4" }
|
||||
{ "ip": "1.2.3.4" }
|
||||
],
|
||||
"ports": [
|
||||
{ "port": 80 }
|
||||
{ "port": 9376 }
|
||||
]
|
||||
}
|
||||
]
|
||||
|
@ -141,32 +141,61 @@ NOTE: Endpoint IPs may not be loopback (127.0.0.0/8), link-local
|
|||
(169.254.0.0/16), or link-local multicast (224.0.0.0/24).
|
||||
|
||||
Accessing a `Service` without a selector works the same as if it had a selector.
|
||||
The traffic will be routed to endpoints defined by the user (`1.2.3.4:80` in
|
||||
The traffic will be routed to endpoints defined by the user (`1.2.3.4:9376` in
|
||||
this example).
|
||||
|
||||
## Virtual IPs and service proxies
|
||||
|
||||
Every node in a Kubernetes cluster runs a `kube-proxy`. This application
|
||||
watches the Kubernetes master for the addition and removal of `Service`
|
||||
and `Endpoints` objects. For each `Service` it opens a port (randomly chosen)
|
||||
on the local node. Any connections made to that port will be proxied to one of
|
||||
the corresponding backend `Pods`. Which backend to use is decided based on the
|
||||
is responsible for implementing a form of virtual IP for `Service`s. In
|
||||
Kubernetes v1.0 the proxy was purely in userspace. In Kubernetes v1.1 an
|
||||
iptables proxy was added, but was not the default operating mode. In
|
||||
Kubernetes v1.2 we expect the iptables proxy to be the default.
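If you need to pin the mode explicitly rather than rely on the default, kube-proxy accepts a `--proxy-mode` flag in these releases (a sketch; check the flags of your exact version):

```shell
# Run the node proxy in iptables mode; use "userspace" for the older behavior.
kube-proxy --master=https://<kubernetes-master> --proxy-mode=iptables
```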
|
||||
|
||||
As of Kubernetes v1.0, `Services` are a "layer 3" (TCP/UDP over IP) construct.
|
||||
In Kubernetes v1.1 the `Ingress` API was added (beta) to represent "layer 7"
|
||||
(HTTP) services.
|
||||
|
||||
### Proxy-mode: userspace
|
||||
|
||||
In this mode, kube-proxy watches the Kubernetes master for the addition and
|
||||
removal of `Service` and `Endpoints` objects. For each `Service` it opens a
|
||||
port (randomly chosen) on the local node. Any connections to this "proxy port"
|
||||
will be proxied to one of the `Service`'s backend `Pods` (as reported in
|
||||
`Endpoints`). Which backend `Pod` to use is decided based on the
|
||||
`SessionAffinity` of the `Service`. Lastly, it installs iptables rules which
|
||||
capture traffic to the `Service`'s cluster IP (which is virtual) and `Port` and
|
||||
redirects that traffic to the previously described port.
|
||||
capture traffic to the `Service`'s `clusterIP` (which is virtual) and `Port`
|
||||
and redirects that traffic to the proxy port, which proxies to a backend `Pod`.
|
||||
|
||||
The net result is that any traffic bound for the `Service` is proxied to an
|
||||
appropriate backend without the clients knowing anything about Kubernetes or
|
||||
`Services` or `Pods`.
|
||||
|
||||

|
||||
The net result is that any traffic bound for the `Service`'s IP:Port is proxied
|
||||
to an appropriate backend without the clients knowing anything about Kubernetes
|
||||
or `Services` or `Pods`.
|
||||
|
||||
By default, the choice of backend is round robin. Client-IP based session affinity
|
||||
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
|
||||
default is `"None"`).
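For example, the relevant part of a `Service` spec (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  sessionAffinity: ClientIP    # default is None
  selector:
    app: MyApp
  ports:
  - port: 80
    targetPort: 9376
```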
|
||||
|
||||
As of Kubernetes 1.0, `Services` are a "layer 3" (TCP/UDP over IP) construct. We do not
|
||||
yet have a concept of "layer 7" (HTTP) services.
|
||||

|
||||
|
||||
### Proxy-mode: iptables
|
||||
|
||||
In this mode, kube-proxy watches the Kubernetes master for the addition and
|
||||
removal of `Service` and `Endpoints` objects. For each `Service` it installs
|
||||
iptables rules which capture traffic to the `Service`'s `clusterIP` (which is
|
||||
virtual) and `Port` and redirects that traffic to one of the `Service`'s
|
||||
backend sets. For each `Endpoints` object it installs iptables rules which
|
||||
select a backend `Pod`.
|
||||
|
||||
By default, the choice of backend is random. Client-IP based session affinity
|
||||
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
|
||||
default is `"None"`).
|
||||
|
||||
As with the userspace proxy, the net result is that any traffic bound for the
|
||||
`Service`'s IP:Port is proxied to an appropriate backend without the clients
|
||||
knowing anything about Kubernetes or `Services` or `Pods`. This should be
|
||||
faster and more reliable than the userspace proxy.
|
||||
|
||||

|
||||
|
||||
## Multi-Port Services
|
||||
|
||||
|
@ -325,9 +354,6 @@ Valid values for the `ServiceType` field are:
|
|||
which forwards to the `Service` exposed as a `<NodeIP>:NodePort`
|
||||
for each Node.
|
||||
|
||||
Note that while `NodePort`s can be TCP or UDP, `LoadBalancer`s only support TCP
|
||||
as of Kubernetes 1.0.
|
||||
|
||||
### Type NodePort
|
||||
|
||||
If you set the `type` field to `"NodePort"`, the Kubernetes master will
|
||||
|
@ -435,16 +461,14 @@ In the example below, my-service can be accessed by clients on 80.11.12.10:80 (e
|
|||
|
||||
## Shortcomings
|
||||
|
||||
We expect that using iptables and userspace proxies for VIPs will work at
|
||||
small to medium scale, but may not scale to very large clusters with thousands
|
||||
of Services. See [the original design proposal for
|
||||
portals](http://issue.k8s.io/1107) for more
|
||||
details.
|
||||
Using the userspace proxy for VIPs will work at small to medium scale, but will
|
||||
not scale to very large clusters with thousands of Services. See [the original
|
||||
design proposal for portals](http://issue.k8s.io/1107) for more details.
|
||||
|
||||
Using the kube-proxy obscures the source-IP of a packet accessing a `Service`.
|
||||
This makes some kinds of firewalling impossible.
|
||||
|
||||
LoadBalancers only support TCP, not UDP.
|
||||
Using the userspace proxy obscures the source-IP of a packet accessing a `Service`.
|
||||
This makes some kinds of firewalling impossible. The iptables proxier does not
|
||||
obscure in-cluster source IPs, but it does still impact clients coming through
|
||||
a load-balancer or node-port.
|
||||
|
||||
The `Type` field is designed as nested functionality - each level adds to the
|
||||
previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
|
||||
|
@ -458,13 +482,7 @@ simple round robin balancing, for example master-elected or sharded. We also
|
|||
envision that some `Services` will have "real" load balancers, in which case the
|
||||
VIP will simply transport the packets there.
|
||||
|
||||
There's a
|
||||
[proposal](http://issue.k8s.io/3760) to
|
||||
eliminate userspace proxying in favor of doing it all in iptables. This should
|
||||
perform better and fix the source-IP obfuscation, though is less flexible than
|
||||
arbitrary userspace code.
|
||||
|
||||
We intend to have first-class support for L7 (HTTP) `Services`.
|
||||
We intend to improve our support for L7 (HTTP) `Services`.
|
||||
|
||||
We intend to have more flexible ingress modes for `Services` which encompass
|
||||
the current `ClusterIP`, `NodePort`, and `LoadBalancer` modes and more.
|
||||
|
@ -506,6 +524,11 @@ VIP, their traffic is automatically transported to an appropriate endpoint.
|
|||
The environment variables and DNS for `Services` are actually populated in
|
||||
terms of the `Service`'s VIP and port.
|
||||
|
||||
We support two proxy modes - userspace and iptables, which operate slightly
|
||||
differently.
|
||||
|
||||
#### Userspace
|
||||
|
||||
As an example, consider the image processing application described above.
|
||||
When the backend `Service` is created, the Kubernetes master assigns a virtual
|
||||
IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the
|
||||
|
@ -522,10 +545,27 @@ This means that `Service` owners can choose any port they want without risk of
|
|||
collision. Clients can simply connect to an IP and port, without being aware
|
||||
of which `Pods` they are actually accessing.
|
||||
|
||||

|
||||
#### Iptables
|
||||
|
||||
Again, consider the image processing application described above.
|
||||
When the backend `Service` is created, the Kubernetes master assigns a virtual
|
||||
IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the
|
||||
`Service` is observed by all of the `kube-proxy` instances in the cluster.
|
||||
When a proxy sees a new `Service`, it installs a series of iptables rules which
|
||||
redirect from the VIP to per-`Service` rules. The per-`Service` rules link to
|
||||
per-`Endpoint` rules which redirect (Destination NAT) to the backends.
|
||||
|
||||
When a client connects to the VIP, the iptables rule kicks in. A backend is
|
||||
chosen (either based on session affinity or randomly) and packets are
|
||||
redirected to the backend. Unlike the userspace proxy, packets are never
|
||||
copied to userspace, the kube-proxy does not have to be running for the VIP to
|
||||
work, and the client IP is not altered.
|
||||
|
||||
This same basic flow executes when traffic comes in through a node-port or
|
||||
through a load-balancer, though in those cases the client IP does get altered.
|
||||
|
||||
## API Object
|
||||
|
||||
Service is a top-level resource in the kubernetes REST API. More details about the
|
||||
API object can be found at: [Service API
|
||||
object](/docs/api-reference/v1/definitions/#_v1_service).
|
||||
object](/docs/api-reference/v1/definitions/#_v1_service).
|
||||
|
|
|
@@ -27,10 +27,10 @@ You can also see the replication controller that was created:
kubectl get rc
```

To stop the two replicated containers, stop the replication controller:
To stop the two replicated containers, delete the replication controller:

```shell
kubectl stop rc my-nginx
kubectl delete rc my-nginx
```

### Exposing your pods to the internet.

@@ -52,5 +52,5 @@ In order to access your nginx landing page, you also have to make sure that traf

### Next: Configuration files

Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](/docs/user-guide/simple-yaml)
Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](/docs/user-guide/deploying-applications/)
is given in a different document.

@@ -1,51 +1,3 @@
---
---

In addition to the imperative style commands described [elsewhere](/docs/user-guide/simple-nginx), Kubernetes
supports declarative YAML or JSON configuration files. Often times config files are preferable
to imperative commands, since they can be checked into version control and changes to the files
can be code reviewed, producing a more robust, reliable and archival system.

### Running a container from a pod configuration file

```shell
$ cd kubernetes
$ kubectl create -f ./pod.yaml
```

Where pod.yaml contains something like:

{% include code.html language="yaml" file="pod.yaml" ghlink="/docs/user-guide/pod.yaml" %}
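
The included `pod.yaml` is not reproduced in this diff; as a sketch, it defines a single-container nginx pod along these lines (field values assumed):

```yaml
# Approximation of docs/user-guide/pod.yaml; treat the exact fields as illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: nginx            # matches the "kubectl delete pods nginx" step below
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```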

You can see your cluster's pods:

```shell
$ kubectl get pods
```

and delete the pod you just created:

```shell
$ kubectl delete pods nginx
```

### Running a replicated set of containers from a configuration file

To run replicated containers, you need a [Replication Controller](/docs/user-guide/replication-controller).
A replication controller is responsible for ensuring that a specific number of pods exist in the
cluster.

```shell
$ cd kubernetes
$ kubectl create -f ./replication.yaml
```

Where `replication.yaml` contains:

{% include code.html language="yaml" file="replication.yaml" ghlink="/docs/user-guide/replication.yaml" %}

To delete the replication controller (and the pods it created):

```shell
$ kubectl delete rc nginx
```
### This document has been subsumed by [deploying-applications.md](/docs/user-guide/deploying-applications/)

@@ -1,59 +1,138 @@
---
---

Kubernetes has a web-based user interface that allows you to deploy containerized
applications to a Kubernetes cluster, troubleshoot them, and manage the cluster itself.

Kubernetes has a web-based user interface that displays the current cluster state graphically.
## Accessing the Dashboard

## Accessing the UI
By default, the Kubernetes Dashboard is deployed as a cluster addon. To access it, visit
`https://<kubernetes-master>/ui`, which redirects to
`https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard`.

By default, the Kubernetes UI is deployed as a cluster addon. To access it, visit `https://<kubernetes-master>/ui`, which redirects to `https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/`.

If you find that you're not able to access the UI, it may be because the kube-ui service has not been started on your cluster. In that case, you can start it manually with:
If you find that you're not able to access the Dashboard, it may be because the
`kubernetes-dashboard` service has not been started on your cluster. In that case,
you can start it manually as follows:

```shell
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
kubectl create -f cluster/addons/dashboard/dashboard-controller.yaml --namespace=kube-system
kubectl create -f cluster/addons/dashboard/dashboard-service.yaml --namespace=kube-system
```

Normally, this should be taken care of automatically by the [`kube-addons.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master.
Normally, this should be taken care of automatically by the
[`kube-addons.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kube-addons/kube-addons.sh)
script that runs on the master. Release notes and development versions of the Dashboard can be
found at https://github.com/kubernetes/dashboard/releases.

## Using the UI
## Using the Dashboard

The Kubernetes UI can be used to introspect your current cluster, such as checking how resources are used, or looking at error messages. You cannot, however, use the UI to modify your cluster.
The Dashboard can be used to get an overview of applications running on the cluster, and to provide information on any errors that have occurred. You can also inspect your replication controllers and corresponding services, change the number of replicated Pods, and deploy new applications using a deploy wizard.

### Node Resource Usage
When accessing the Dashboard on an empty cluster for the first time, the Welcome page is displayed. This page contains a link to this document as well as a button to deploy your first application.

After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file systems usage.



### Deploying applications

The Dashboard lets you create and deploy a containerized application as a Replication Controller with a simple wizard:

### Dashboard Views


Click on the "Views" button in the top-right of the page to see other views available, which include: Explore, Pods, Nodes, Replication Controllers, Services, and Events.
#### Specifying application details

#### Explore View
The wizard expects that you provide the following information:

The "Explore" view allows your to see the pods, replication controllers, and services in current cluster easily.
- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Replication Controller and Service, if any, that will be deployed.


The application name must be unique within the selected Kubernetes namespace. It must start with a lowercase character, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters.

The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
- **Container image** (mandatory): The URL of a public Docker [container image](/docs/user-guide/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub).


- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer.

You can also create filters by clicking on the down triangle of any listed resource instances and choose which filters you want to add.
A [Replication Controller](/docs/user-guide/replication-controller/) will be created to maintain the desired number of Pods across your cluster.

- **Ports** (optional): If your container listens on a port, you can provide a port and target port. The wizard will create a corresponding Kubernetes [Service](http://kubernetes.io/v1.1/docs/user-guide/services.html) which will route to your deployed Pods. Supported protocols are TCP and UDP. In case you specify ports, the internal DNS name for this Service will be the value you specified as application name above.


Be aware that if you specify ports, you need to provide both port and target port.

To see more details of each resource instance, simply click on it.
- For some parts of your application (e.g. frontends), you can expose the Service onto an external, maybe public IP address by selecting the **Expose service externally** option. You may need to open up one or more ports to do so. Find more details [here](/docs/user-guide/services-firewalls/).

If needed, you can expand the **Advanced options** section where you can specify more settings:



- **Description**: The text you enter here will be added as an [annotation](/docs/user-guide/annotations/) to the Replication Controller and displayed in the application's details.

- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Replication Controller, Service (if any), and Pods, such as release, environment, tier, partition, and release track.

Example:

```conf
release=1.0
tier=frontend
environment=pod
track=stable
```

- **Kubernetes namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](/docs/admin/namespaces/). They let you partition resources into logically named groups.

The Dashboard offers all available namespaces in a dropdown list and allows you to create a new namespace. The namespace name may contain alphanumeric characters and dashes (-).

- **Image pull secrets**: In case the Docker container image is private, it may require [pull secret](/docs/user-guide/secrets/) credentials.

The Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) file.
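
As a sketch of the kind of pull secret this refers to (the name and data below are placeholders, not taken from this diff):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: new.image-pull.secret          # example name following the DNS domain syntax
type: kubernetes.io/dockercfg
data:
  # base64-encoded contents of a .dockercfg file; placeholder value shown here
  .dockercfg: eyJodHRwczovL...
```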
|
||||
|
||||
- **CPU requirement** and **Memory requirement**: You can specify the minimum [resource limits](/docs/admin/limitrange/) for the container. By default, Pods run with unbounded CPU and memory limits.
|
||||
|
||||
- **Run command** and **Run command arguments**: By default, your containers run the selected Docker image's default [entrypoint command](/docs/user-guide/containers/#containers-and-commands). You can use the command options and arguments to override the default.
|
||||
|
||||
- **Run as privileged**: This setting determines whether processes in [privileged containers](/docs/user-guide/pods/#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.
|
||||
|
||||
- **Environment variables**: Kubernetes exposes Services through [environment variables](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/design/expansion.md). You can compose environment variable or pass arguments to your commands using the values of environnment variables. They can be used in applications to find a Service. Environment variables are also useful for decreasing coupling and the use of workarounds. Values can reference other variables using the `$(VAR_NAME)` syntax.
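
A minimal sketch of the `$(VAR_NAME)` syntax in a container's environment (variable names and values are assumed):

```yaml
env:
  - name: REDIS_HOST
    value: redis-master                    # e.g. the DNS name of a Service
  - name: REDIS_URL
    value: "redis://$(REDIS_HOST):6379"    # expands to the value of REDIS_HOST
```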

#### Uploading a YAML or JSON file

Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes [API](http://kubernetes.io/v1.1/docs/api.html) resource schemas as the configuration schemas.

As an alternative to specifying application details in the deploy wizard, you can define your Replication Controllers and Services in YAML or JSON files, and upload the files to your Pods:



### Applications view

As soon as applications are running on your cluster, the initial view of the Dashboard defaults to showing an overview of them, for example:

### Other Views


Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
Individual applications are shown as cards - where an application is defined as a Replication Controller and its corresponding Services. Each card shows the current number of running and desired replicas, along with errors reported by Kubernetes, if any.

You can view application details (**View details**), make quick changes to the number of replicas (**Edit pod count**) or delete the application directly (**Delete**) from the menu in each card's corner:



#### View details

Selecting this option from the card menu will take you to the following page where you can view more information about the Pods that make up your application:



The **EVENTS** tab can be useful for debugging flapping applications.

Clicking the plus sign in the right corner of the screen leads you back to the page for deploying a new application.

#### Edit pod count

If you choose to change the number of Pods, the respective Replication Controller will be updated to reflect the newly specified number.

#### Delete

Deleting a Replication Controller also deletes the Pods managed by it. It is currently not supported to leave the Pods running.

You have the option to also delete Services related to the Replication Controller if the label selector targets only the Replication Controller to be deleted.

## More Information

For more information, see the [Kubernetes UI development document](http://releases.k8s.io/{{page.githubbranch}}/www/README.md) in the www directory.
For more information, see the
[Kubernetes Dashboard repository](https://github.com/kubernetes/dashboard).

@@ -1,25 +1,11 @@
---
---

<!--
Copyright 2014 Google Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

-->

This example demonstrates the usage of Kubernetes to perform a [rolling update](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_rolling-update.md) on a running group of [pods](/docs/user-guide/pods). See [here](/docs/user-guide/managing-deployments/#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) for more information.

This example demonstrates the usage of Kubernetes to perform a [rolling update](/docs/user-guide/kubectl/kubectl_rolling-update/) on a running group of [pods](/docs/user-guide/pods/). See [here](/docs/user-guide/managing-deployments/#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) for more information.

The files for this example are viewable in [our docs repo
here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/update-demo).

### Step Zero: Prerequisites

@@ -62,7 +48,7 @@ Now we will increase the number of replicas from two to four:
$ kubectl scale rc update-demo-nautilus --replicas=4
```

If you go back to the [demo website](http://localhost:8001/static/index) you should eventually see four boxes, one for each pod.
If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod.

### Step Four: Update the docker image

@@ -74,10 +60,10 @@ $ kubectl rolling-update update-demo-nautilus --update-period=10s -f docs/user-g

The rolling-update command in kubectl will do 2 things:

1. Create a new [replication controller](/docs/user-guide/replication-controller) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
1. Create a new [replication controller](/docs/user-guide/replication-controller/) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them.

Watch the [demo website](http://localhost:8001/static/index), it will update one pod every 10 seconds until all of the pods have the new image.
Watch the [demo website](http://localhost:8001/static/index.html), it will update one pod every 10 seconds until all of the pods have the new image.
Note that the new replication controller definition does not include the replica count, so the current replica count of the old replication controller is preserved.
But if the replica count had been specified, the final replica count of the new replication controller will be equal to this number.
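
As a sketch of what the new controller's definition looks like (label keys and values are assumptions; note the missing `replicas` field, which is why the old controller's count is preserved):

```yaml
# Rough sketch of the new controller created by rolling-update; labels are illustrative.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-kitten
spec:
  # no "replicas" here, so the current replica count of the old controller is kept
  selector:
    run: update-demo
    version: kitten
  template:
    metadata:
      labels:
        run: update-demo
        version: kitten
    spec:
      containers:
        - name: update-demo
          image: gcr.io/google_containers/update-demo:kitten
          ports:
            - containerPort: 80
```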

@@ -67,6 +67,8 @@ Kubernetes supports several types of Volumes:
* `gitRepo`
* `secret`
* `persistentVolumeClaim`
* `downwardAPI`
* `azureFileVolume`

We welcome additional contributions.

@@ -83,7 +85,7 @@ volume is safe across container crashes.

Some uses for an `emptyDir` are:

* scratch space, such as for a disk-based mergesortcw
* scratch space, such as for a disk-based merge sort
* checkpointing a long computation for recovery from crashes
* holding files that a content-manager container fetches while a webserver
container serves the data
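
A minimal sketch of a Pod using an `emptyDir` for scratch space (image and paths assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/test-webserver   # assumed image
      volumeMounts:
        - name: cache-volume
          mountPath: /cache        # scratch space visible inside the container
  volumes:
    - name: cache-volume
      emptyDir: {}                 # created when the Pod is assigned to a node
```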

@@ -135,7 +137,7 @@ A feature of PD is that they can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode - no
simultaneous readers allowed.
simultaneous writers allowed.

Using a PD on a pod controlled by a ReplicationController will fail unless
the PD is read-only or the replica count is 0 or 1.
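
A sketch of the read-only, many-consumers pattern described above (the disk name is assumed and the PD must already exist and be pre-populated):

```yaml
volumes:
  - name: shared-dataset
    gcePersistentDisk:
      pdName: my-data-disk   # assumed, pre-populated persistent disk
      fsType: ext4
      readOnly: true         # many pods may mount the same PD read-only
```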

@@ -217,12 +219,10 @@ spec:
    - name: test-volume
      # This AWS EBS volume must already exist.
      awsElasticBlockStore:
        volumeID: aws://<availability-zone>/<volume-id>
        volumeID: <volume-id>
        fsType: ext4
```

(Note: the syntax of volumeID is currently awkward; #10181 fixes it)

### nfs

An `nfs` volume allows an existing NFS (Network File System) share to be

@@ -252,7 +252,7 @@ A feature of iSCSI is that it can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a volume with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
iSCSI volumes can only be mounted by a single consumer in read-write mode - no
simultaneous readers allowed.
simultaneous writers allowed.

See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/iscsi/) for more details.

@@ -262,8 +262,8 @@ See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.git
and orchestration of data volumes backed by a variety of storage backends.

A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the
dataset does not already exist in Flocker, it needs to be created with Flocker
CLI or the using the Flocker API. If the dataset already exists it will
dataset does not already exist in Flocker, it needs to be first created with the Flocker
CLI or by using the Flocker API. If the dataset already exists it will be
reattached by Flocker to the node on which the pod is scheduled. This means data
can be "handed off" between pods as required.

@@ -364,6 +364,21 @@ It mounts a directory and writes the requested data in plain text files.

See the [`downwardAPI` volume example](/docs/user-guide/downward-api/volume/) for more details.

### FlexVolume

A `FlexVolume` enables users to mount vendor volumes into a pod. It expects that vendor
drivers are installed in the volume plugin path on each kubelet node. This is
an alpha feature and may change in the future.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/flexvolume/README.md)

### AzureFileVolume

An `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
into a Pod.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/azure_file/README.md)
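
A sketch of how such a volume is referenced from a Pod spec (share and secret names assumed; the secret is expected to hold the Azure storage account credentials):

```yaml
volumes:
  - name: azure
    azureFile:
      secretName: azure-secret   # assumed Secret with the storage account name and key
      shareName: k8stest         # assumed existing Azure File share
      readOnly: false
```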

## Resources

The storage media (Disk, SSD, etc) of an `emptyDir` volume is determined by the

@@ -1,6 +1,8 @@
---
---

## Kubectl CLI and Pods

For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers

In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).

@@ -74,6 +76,7 @@ Delete the pod by name:
$ kubectl delete pod nginx
```


#### Volumes

That's great for a simple static web server, but what about persistent storage?

@@ -106,15 +109,15 @@ Example Redis pod definition with a persistent storage volume ([pod-redis.yaml](

Notes:

- The volume mount name is a reference to a specific empty dir volume.
- The volume mount path is the path to mount the empty dir volume within the container.
- The `volumeMounts` `name` is a reference to a specific `volumes` `name`.
- The `volumeMounts` `mountPath` is the path to mount the volume within the container.
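
In the Redis pod referenced above, that correspondence looks roughly like this (a trimmed sketch, not the literal `pod-redis.yaml`):

```yaml
spec:
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: redis-persistent-storage   # must match a volume name under "volumes"
          mountPath: /data/redis           # where the volume appears in the container
  volumes:
    - name: redis-persistent-storage
      emptyDir: {}
```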

##### Volume Types

- **EmptyDir**: Creates a new directory that will persist across container failures and restarts.
- **EmptyDir**: Creates a new directory that will exist as long as the Pod is running on the node, but it can persist across container failures and restarts.
- **HostPath**: Mounts an existing directory on the node's file system (e.g. `/var/logs`).

See [volumes](/docs/user-guide/volumes) for more details.
See [volumes](/docs/user-guide/volumes/) for more details.


#### Multiple Containers

@@ -1,6 +1,7 @@
---
---

## Labels, Replication Controllers, Services and Health Checking

If you went through [Kubernetes 101](/docs/user-guide/walkthrough/), you learned about kubectl, pods, volumes, and multiple containers.
For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, deployment and

@@ -11,6 +12,7 @@ In order for the kubectl usage examples to work, make sure you have an examples
* TOC
{:toc}


## Labels

Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.

@@ -18,7 +20,7 @@ To add a label, add a labels section under metadata in the pod definition:

```yaml
labels:
  labels:
    app: nginx
```

@@ -44,7 +46,7 @@ They are a core concept used by two additional Kubernetes building blocks: Repli

## Replication Controllers

OK, now you know how to make awesome, multi-container, labeled pods and you want to use them to build an application, you might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogenous?
OK, now you know how to make awesome, multi-container, labeled pods and you want to use them to build an application, you might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogeneous?

Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
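
As a minimal sketch of such an object, keeping two of the labeled nginx pods from above running (values assumed for illustration):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2              # desired number of pods
  selector:
    app: nginx             # identifies the pods this controller manages
  template:                # "cookie-cutter" used to create new pods
    metadata:
      labels:
        app: nginx         # must match the selector
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```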

@@ -49,6 +49,6 @@ The system adds fields in several ways:
The API will generally not modify fields that you have set; it just sets ones which were unspecified.

## <a name="finding_schema_docs"></a>Finding Documentation on Resource Fields
## Finding Documentation on Resource Fields

You can browse auto-generated API documentation at the [project website](/docs/api/) or on [github](https://releases.k8s.io/{{page.githubbranch}}/docs/api-reference).
You can browse auto-generated API documentation [here](/docs/api/).

Binary file not shown.
After Width: | Height: | Size: 81 KiB |