Merge pull request #5960 from zhangxiaoyu-zidif/master
delete un-translated docs
commit
47a377a688
@@ -1,93 +0,0 @@
---
approvers:
- mikedanese
title: Configuration Best Practices
---

{% capture overview %}
This document highlights and consolidates configuration best practices that are introduced throughout the user-guide, getting-started documentation, and examples.

This is a living document. If you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
{% endcapture %}

{% capture body %}
## General Config Tips

- When defining configurations, specify the latest stable API version (currently v1).

- Configuration files should be stored in version control before being pushed to the cluster. This allows you to quickly roll back a configuration change if needed, and it aids cluster re-creation and restoration.

- Write your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.

- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.

Note also that many `kubectl` commands can be called on a directory, so you can also call `kubectl create` on a directory of config files. See below for more details.

- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to reduce errors. For example, omit the selector and labels in a `ReplicationController` if you want them to be the same as the labels in its `podTemplate`, since those fields are populated from the `podTemplate` labels by default. See the [guestbook app's](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/frontend-deployment.yaml) of this.

- Put an object description in an annotation to allow better introspection, as in the sketch below.
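
For example, a hypothetical replication controller manifest might record its description like this (the name, labels, and image are illustrative, and the selector is intentionally omitted so it defaults from the `podTemplate` labels):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend                     # hypothetical name
  annotations:
    description: "Serves the public web frontend; owned by the web team."
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example/frontend:v1   # hypothetical image
```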


## "Naked" Pods vs Replication Controllers and Jobs

- If there is a viable alternative to naked pods (in other words: pods not bound to a [replication controller](/docs/user-guide/replication-controller)), go with the alternative. Naked pods will not be rescheduled in the event of node failure.

Replication controllers are almost always preferable to creating pods, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) object (currently in Beta) may also be appropriate.


## Services

- It's typically best to create a [service](/docs/concepts/services-networking/service/) before the corresponding [replication controllers](/docs/concepts/workloads/controllers/replicationcontroller/). This lets the scheduler spread the pods that comprise the service.

You can also use this process to ensure that at least one replica works before creating lots of them (see the sketch below):

1. Create a replication controller without specifying replicas (this will set replicas=1);
2. Create a service;
3. Then scale up the replication controller.
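
A minimal sketch of that sequence with `kubectl`, assuming a controller named `frontend` defined in `frontend-controller.yaml` and a matching service in `frontend-service.yaml` (all names are hypothetical):

```shell
# 1. Create the replication controller with no replicas field (defaults to 1)
kubectl create -f frontend-controller.yaml

# 2. Create the service that selects those pods
kubectl create -f frontend-service.yaml

# 3. Once the single replica is confirmed working, scale up
kubectl scale rc frontend --replicas=3
```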

- Don't use `hostPort` unless it is absolutely necessary (for example, for a node daemon). It specifies the port number to expose on the host. When you bind a Pod to a `hostPort`, there are a limited number of places to schedule a pod due to port conflicts: you can only schedule as many such Pods as there are nodes in your Kubernetes cluster.

If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) or [kubectl port-forward](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
You can use a [Service](/docs/concepts/services-networking/service/) object for external service access.

If you explicitly need to expose a pod's port on the host machine, consider using a [NodePort](/docs/user-guide/services/#type-nodeport) service before resorting to `hostPort`.

- Avoid using `hostNetwork`, for the same reasons as `hostPort`.

- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing. See [headless services](/docs/user-guide/services/#headless-services).

## Using Labels

- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (for example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This lets you select the object groups appropriate to the context: for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) app for an example of this approach.

A service can be made to span multiple deployments, as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), simply by omitting release-specific labels from its selector, rather than updating the service's selector to match the replication controller's selector in full.

- To facilitate rolling updates, include version info in replication controller names, for example as a suffix to the name. It is useful to set a 'version' label as well. The rolling update creates a new controller as opposed to modifying the existing controller, so version-agnostic controller names cause problems. See the [documentation](/docs/tasks/run-application/rolling-update-replication-controller/) on the rolling-update command for more detail.

Note that the [Deployment](/docs/concepts/workloads/controllers/deployment/) object obviates the need to manage replication controller 'version names'. A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate. (Deployment objects are currently part of the [`extensions` API Group](/docs/concepts/overview/kubernetes-api/#api-groups).)

- You can manipulate labels for debugging. Because Kubernetes replication controllers and services match pods using labels, you can remove a pod from being considered by a controller, or served traffic by a service, by removing the relevant selector labels. If you remove the labels of an existing pod, its controller will create a new pod to take its place. This is a useful way to debug a previously "live" pod in a quarantine environment; see the sketch below and the [`kubectl label`](/docs/concepts/overview/working-with-objects/labels/) command.
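
For example, assuming a misbehaving pod named `my-pod-x1b2z` that its controller and service select via an `app` label (names are hypothetical):

```shell
# Remove the 'app' label: the controller replaces the pod, and the service stops sending it traffic
kubectl label pods my-pod-x1b2z app-

# Optionally mark it so it stays easy to find while you debug
kubectl label pods my-pod-x1b2z quarantine=true
```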

## Container Images

- The [default container image pull policy](/docs/concepts/containers/images/) is `IfNotPresent`, which causes the [Kubelet](/docs/admin/kubelet/) to skip pulling an image that already exists locally. If you would like to always force a pull, you must specify a pull policy of `Always` in your .yaml file (`imagePullPolicy: Always`) or specify a `:latest` tag on your image.

That is, if you specify an image with a tag other than `:latest`, for example `myimage:v1`, and that tag is later updated, the Kubelet won't pull the updated image. You can address this by ensuring that any update to an image also bumps the image tag (for example, `myimage:v2`), and ensuring that your configs point to the correct version.

**Note:** You should avoid using the `:latest` tag when deploying containers in production, because it makes it hard to track which version of the image is running and hard to roll back.

- To work only with a specific version of an image, you can specify an image by its digest (SHA256). This guarantees that the image you run never changes. For detailed information about working with image digests, see [the Docker documentation](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier). A short example of both options follows below.
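
For example, a container spec fragment that always pulls and pins a specific version might look like this (the names are placeholders; an `@sha256:` digest can be used in place of the tag):

```yaml
containers:
- name: myapp                            # hypothetical container name
  image: myregistry.example.com/myapp:v1.2.3
  imagePullPolicy: Always                # always pull, even if a local copy exists
```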

## Using kubectl

- Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to `create`.

- Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated.

- Use kubectl bulk operations (via files and/or labels) for get and delete (see the sketch below). See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively).

- Use `kubectl run` and `expose` to quickly create and expose single-container Deployments. See the [quick start guide](/docs/user-guide/quick-start/) for an example.
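
A short sketch of label-based bulk operations, assuming the objects share an `app=myapp` label (the label and directory name are illustrative):

```shell
# Create everything defined in a directory of config files
kubectl create -f ./myapp-configs/

# List, then delete, all resources of these kinds that carry the label
kubectl get pods,services,replicationcontrollers -l app=myapp
kubectl delete pods,services,replicationcontrollers -l app=myapp
```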

{% endcapture %}

{% include templates/concept.md %}

@@ -1,298 +0,0 @@
---
approvers:
- bprashanth
title: Ingress Resources
---

* TOC
|
||||
{:toc}
|
||||
|
||||
__Terminology__
|
||||
|
||||
Throughout this doc you will see a few terms that are sometimes used interchangeably elsewhere and that might cause confusion. This section attempts to clarify them.
|
||||
|
||||
* Node: A single virtual or physical machine in a Kubernetes cluster.
|
||||
* Cluster: A group of nodes firewalled from the internet; these are the primary compute resources managed by Kubernetes.
|
||||
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
|
||||
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](/docs/concepts/cluster-administration/networking/). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](/docs/admin/ovs-networking/).
|
||||
* Service: A Kubernetes [Service](/docs/concepts/services-networking/service/) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
|
||||
|
||||
## What is Ingress?
|
||||
|
||||
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:
|
||||
|
||||
```
    internet
        |
  ------------
  [ Services ]
```
|
||||
|
||||
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
|
||||
|
||||
```
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
```
|
||||
|
||||
It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An [Ingress controller](#ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress; simply creating the resource will have no effect.
|
||||
|
||||
GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://git.k8s.io/ingress/controllers/nginx#running-multiple-ingress-controllers) and [here](https://git.k8s.io/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).
|
||||
|
||||
Make sure you review the [beta limitations](https://git.k8s.io/ingress/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://git.k8s.io/ingress/controllers) as a pod.
|
||||
|
||||
## The Ingress Resource
|
||||
|
||||
A minimal Ingress might look like:
|
||||
|
||||
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```
|
||||
|
||||
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
|
||||
|
||||
__Lines 1-6__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/) and [ingress configuration rewrite](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md#rewrite).
|
||||
|
||||
__Lines 7-9__: Ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
|
||||
|
||||
__Lines 10-11__: Each http rule contains the following information: A host (e.g.: foo.bar.com, defaults to * in this example), a list of paths (e.g.: /testpath) each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
|
||||
|
||||
__Lines 12-14__: A backend is a service:port combination as described in the [services doc](/docs/concepts/services-networking/service/). Ingress traffic is typically sent directly to the endpoints matching a backend.
|
||||
|
||||
__Global Parameters__: For the sake of simplicity, the example Ingress has no global parameters; see the [API reference](https://releases.k8s.io/{{page.githubbranch}}/staging/src/k8s.io/api/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend, in the absence of which requests that don't match a path in the spec are sent to the default backend of the Ingress controller.
|
||||
|
||||
## Ingress controllers
|
||||
|
||||
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://git.k8s.io/ingress/controllers).
|
||||
|
||||
## Before you begin
|
||||
|
||||
The following document describes a set of cross platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we're not there yet. The docs for the GCE and nginx controllers are [here](https://git.k8s.io/ingress/controllers/gce/README.md) and [here](https://git.k8s.io/ingress/controllers/nginx/README.md) respectively. **Make sure you review controller specific docs so you understand the caveats of each one**.
|
||||
|
||||
## Types of Ingress
|
||||
|
||||
### Single Service Ingress
|
||||
|
||||
There are existing Kubernetes concepts that allow you to expose a single service (see [alternatives](#alternatives)); however, you can do so through an Ingress as well, by specifying a *default backend* with no rules.
|
||||
|
||||
{% include code.html language="yaml" file="ingress.yaml" ghlink="/docs/concepts/services-networking/ingress.yaml" %}
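
The referenced file is not reproduced here, but a minimal sketch of such a default-backend-only Ingress, assuming a Service named `testsvc` serving port 80 (consistent with the output below), might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
```
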
If you create it using `kubectl create -f` you should see:
|
||||
|
||||
```shell
$ kubectl get ing
NAME                RULE          BACKEND        ADDRESS
test-ingress        -             testsvc:80     107.178.254.228
```

Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
|
||||
|
||||
### Simple fanout
|
||||
|
||||
As described previously, pods within Kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually a highly available loadbalancer. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like:
|
||||
|
||||
```shell
foo.bar.com -> 178.91.123.132 -> / foo    s1:80
                                 / bar    s2:80
```
|
||||
|
||||
would require an Ingress such as:
|
||||
|
||||
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
```
|
||||
|
||||
When you create the Ingress with `kubectl create -f`:
|
||||
|
||||
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -
          foo.bar.com
          /foo          s1:80
          /bar          s2:80
```
The Ingress controller will provision an implementation-specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer under the last column of the Ingress.
|
||||
|
||||
### Name based virtual hosting
|
||||
|
||||
Name-based virtual hosts use multiple host names for the same IP address.
|
||||
|
||||
```
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
```
|
||||
|
||||
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
|
||||
|
||||
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
```
|
||||
|
||||
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
|
||||
|
||||
### TLS
|
||||
|
||||
You can secure an Ingress by specifying a [secret](/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS configuration section in an Ingress specifies different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:
|
||||
|
||||
```yaml
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```
|
||||
|
||||
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
|
||||
|
||||
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
|
||||
|
||||
Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://git.k8s.io/ingress/controllers/nginx/README.md#https), [GCE](https://git.k8s.io/ingress/controllers/gce/README.md#tls), or any other platform specific Ingress controller to understand how TLS works in your environment.
|
||||
|
||||
### Loadbalancing
|
||||
|
||||
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
|
||||
|
||||
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://git.k8s.io/ingress/controllers/nginx/README.md), [GCE](https://git.k8s.io/ingress/controllers/gce/README.md#health-checks)).
|
||||
|
||||
## Updating an Ingress
|
||||
|
||||
Say you'd like to add a new Host to an existing Ingress. You can update it by editing the resource:
|
||||
|
||||
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
$ kubectl edit ing test
```

This should pop up an editor with the existing yaml. Modify it to include the new Host:
|
||||
|
||||
```yaml
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
        path: /foo
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
        path: /foo
..
```
|
||||
|
||||
Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.
|
||||
|
||||
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
          bar.baz.com
          /foo          s2:80
```
|
||||
|
||||
You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.
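
For example, assuming the modified manifest is saved locally as `test-ingress.yaml` (a hypothetical file name):

```shell
kubectl replace -f test-ingress.yaml
```
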
## Failing across availability zones
|
||||
|
||||
Techniques for spreading traffic across failure domains differ between cloud providers. Please check the documentation of the relevant Ingress controller for details. Please refer to the federation [doc](/docs/concepts/cluster-administration/federation/) for details on deploying Ingress in a federated cluster.
|
||||
|
||||
## Future Work
|
||||
|
||||
* Various modes of HTTPS/TLS support (e.g.: SNI, re-encryption)
|
||||
* Requesting an IP or Hostname via claims
|
||||
* Combining L4 and L7 Ingress
|
||||
* More Ingress controllers
|
||||
|
||||
Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the evolution of various Ingress controllers.
|
||||
|
||||
## Alternatives
|
||||
|
||||
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
|
||||
|
||||
* Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
|
||||
* Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport)
|
||||
* Use a [Port Proxy](https://git.k8s.io/contrib/for-demos/proxy-to-service)
|
||||
* Deploy the [Service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.

@@ -1,234 +0,0 @@
---
approvers:
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: StatefulSets
---

{% capture overview %}
|
||||
**StatefulSet is the workload API object used to manage stateful applications.
|
||||
StatefulSets are beta in 1.8.**
|
||||
|
||||
{% include templates/glossary/snippet.md term="statefulset" length="long" %}
|
||||
{% endcapture %}
|
||||
|
||||
{% capture body %}
|
||||
|
||||
## Using StatefulSets
|
||||
|
||||
StatefulSets are valuable for applications that require one or more of the
|
||||
following.
|
||||
|
||||
* Stable, unique network identifiers.
|
||||
* Stable, persistent storage.
|
||||
* Ordered, graceful deployment and scaling.
|
||||
* Ordered, graceful deletion and termination.
|
||||
* Ordered, automated rolling updates.
|
||||
|
||||
In the above, stable is synonymous with persistence across Pod (re)scheduling.
|
||||
If an application doesn't require any stable identifiers or ordered deployment,
|
||||
deletion, or scaling, you should deploy your application with a controller that
|
||||
provides a set of stateless replicas. Controllers such as
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/) or
|
||||
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) may be better suited to your stateless needs.
|
||||
|
||||
## Limitations
|
||||
|
||||
* StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.
|
||||
* As with all alpha/beta resources, you can disable StatefulSet through the `--runtime-config` option passed to the apiserver.
|
||||
* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
|
||||
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
|
||||
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
|
||||
|
||||
## Components
|
||||
The example below demonstrates the components of a StatefulSet.
|
||||
|
||||
* A Headless Service, named nginx, is used to control the network domain.
|
||||
* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
|
||||
* The volumeClaimTemplates will provide stable storage using [PersistentVolumes](/docs/concepts/storage/volumes/) provisioned by a PersistentVolume Provisioner.
|
||||
|
||||
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: my-storage-class
      resources:
        requests:
          storage: 1Gi
```
|
||||
|
||||
## Pod Selector
|
||||
You must set the `spec.selector` field of a StatefulSet to match the labels of its `.spec.template.metadata.labels`. Prior to Kubernetes 1.8, the `spec.selector` field was defaulted when omitted. In 1.8 and later versions, failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation.
|
||||
|
||||
## Pod Identity
|
||||
StatefulSet Pods have a unique identity that is comprised of an ordinal, a
|
||||
stable network identity, and stable storage. The identity sticks to the Pod,
|
||||
regardless of which node it's (re)scheduled on.
|
||||
|
||||
### Ordinal Index
|
||||
|
||||
For a StatefulSet with N replicas, each Pod in the StatefulSet will be
|
||||
assigned an integer ordinal, in the range [0,N), that is unique over the Set.
|
||||
|
||||
### Stable Network ID
|
||||
|
||||
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet
|
||||
and the ordinal of the Pod. The pattern for the constructed hostname
|
||||
is `$(statefulset name)-$(ordinal)`. The example above will create three Pods
|
||||
named `web-0,web-1,web-2`.
|
||||
A StatefulSet can use a [Headless Service](/docs/concepts/services-networking/service/#headless-services)
|
||||
to control the domain of its Pods. The domain managed by this Service takes the form:
|
||||
`$(service name).$(namespace).svc.cluster.local`, where "cluster.local"
|
||||
is the [cluster domain](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
|
||||
As each Pod is created, it gets a matching DNS subdomain, taking the form:
|
||||
`$(podname).$(governing service domain)`, where the governing service is defined
|
||||
by the `serviceName` field on the StatefulSet.
|
||||
|
||||
Here are some examples of choices for Cluster Domain, Service name,
|
||||
StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
|
||||
|
||||
Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS | Pod Hostname |
|
||||
-------------- | ----------------- | ----------------- | -------------- | ------- | ------------ |
|
||||
cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1} |
|
||||
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
|
||||
kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1} |
|
||||
|
||||
Note that Cluster Domain will be set to `cluster.local` unless
|
||||
[otherwise configured](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
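
One way to check these names from inside the cluster is to resolve them from a temporary pod; as a sketch, assuming the `default/nginx` example above (the pod name and busybox image are illustrative):

```shell
kubectl run -i --tty dns-test --image=busybox --restart=Never -- sh
# then, inside that pod:
nslookup web-0.nginx.default.svc.cluster.local
```
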
### Stable Storage
|
||||
|
||||
Kubernetes creates one [PersistentVolume](/docs/concepts/storage/volumes/) for each
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that the PersistentVolumes associated with the
Pods' PersistentVolume Claims are not deleted when the Pods, or the StatefulSet, are deleted.
This must be done manually.
|
||||
|
||||
## Deployment and Scaling Guarantees
|
||||
|
||||
* For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
|
||||
* When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
|
||||
* Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
|
||||
* Before a Pod is terminated, all of its successors must be completely shut down.
|
||||
|
||||
The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
|
||||
|
||||
When the nginx example above is created, three Pods will be deployed in the order
|
||||
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
|
||||
[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be deployed until
|
||||
web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
|
||||
web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
|
||||
becomes Running and Ready.
|
||||
|
||||
If a user were to scale the deployed example by patching the StatefulSet such that
`replicas=1`, web-2 would be terminated first. web-1 would not be terminated until web-2
is fully shut down and deleted. If web-0 were to fail after web-2 has been terminated and
is completely shut down, but prior to web-1's termination, web-1 would not be terminated
until web-0 is Running and Ready.
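
As a sketch, that scale-down could be issued in either of these ways (using the `web` StatefulSet from the example above):

```shell
kubectl patch statefulset web -p '{"spec":{"replicas":1}}'
# or, equivalently:
kubectl scale statefulset web --replicas=1
```
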
### Pod Management Policies
|
||||
In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while
|
||||
preserving its uniqueness and identity guarantees via its `.spec.podManagementPolicy` field.
|
||||
|
||||
#### OrderedReady Pod Management
|
||||
|
||||
`OrderedReady` pod management is the default for StatefulSets. It implements the behavior
|
||||
described [above](#deployment-and-scaling-guarantees).
|
||||
|
||||
#### Parallel Pod Management
|
||||
|
||||
`Parallel` pod management tells the StatefulSet controller to launch or
|
||||
terminate all Pods in parallel, and to not wait for Pods to become Running
|
||||
and Ready or completely terminated prior to launching or terminating another
|
||||
Pod.
|
||||
|
||||
## Update Strategies
|
||||
|
||||
In Kubernetes 1.7 and later, StatefulSet's `.spec.updateStrategy` field allows you to configure
|
||||
and disable automated rolling updates for containers, labels, resource request/limits, and
|
||||
annotations for the Pods in a StatefulSet.
|
||||
|
||||
### On Delete
|
||||
|
||||
The `OnDelete` update strategy implements the legacy (1.6 and prior) behavior. It is the default
|
||||
strategy when `spec.updateStrategy` is left unspecified. When a StatefulSet's
|
||||
`.spec.updateStrategy.type` is set to `OnDelete`, the StatefulSet controller will not automatically
|
||||
update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to
|
||||
create new Pods that reflect modifications made to a StatefulSet's `.spec.template`.
|
||||
|
||||
### Rolling Updates
|
||||
|
||||
The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a
|
||||
StatefulSet. When a StatefulSet's `.spec.updateStrategy.type` is set to `RollingUpdate`, the
|
||||
StatefulSet controller will delete and recreate each Pod in the StatefulSet. It will proceed
|
||||
in the same order as Pod termination (from the largest ordinal to the smallest), updating
|
||||
each Pod one at a time. It will wait until an updated Pod is Running and Ready prior to
|
||||
updating its predecessor.
|
||||
|
||||
#### Partitions
|
||||
|
||||
The `RollingUpdate` update strategy can be partitioned, by specifying a
|
||||
`.spec.updateStrategy.rollingUpdate.partition`. If a partition is specified, all Pods with an
|
||||
ordinal that is greater than or equal to the partition will be updated when the StatefulSet's
|
||||
`.spec.template` is updated. All Pods with an ordinal that is less than the partition will not
|
||||
be updated, and, even if they are deleted, they will be recreated at the previous version. If a
|
||||
StatefulSet's `.spec.updateStrategy.rollingUpdate.partition` is greater than its `.spec.replicas`,
|
||||
updates to its `.spec.template` will not be propagated to its Pods.
|
||||
In most cases you will not need to use a partition, but they are useful if you want to stage an
|
||||
update, roll out a canary, or perform a phased roll out.
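
For illustration, a partitioned rolling update for the `web` StatefulSet above could be declared with a spec fragment like this (the partition value of 2 is arbitrary):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only Pods with ordinal >= 2 are updated when .spec.template changes
```
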
{% endcapture %}
|
||||
{% capture whatsnext %}
|
||||
|
||||
* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set).
|
||||
|
||||
{% endcapture %}
|
||||
{% include templates/concept.md %}

@@ -1,318 +0,0 @@
---
title: Accessing Clusters
---

* TOC
|
||||
{:toc}
|
||||
|
||||
## Accessing the cluster API
|
||||
|
||||
### Accessing for the first time with kubectl
|
||||
|
||||
When accessing the Kubernetes API for the first time, we suggest using the
|
||||
Kubernetes CLI, `kubectl`.
|
||||
|
||||
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set up when you work through
a [Getting started guide](/docs/getting-started-guides/),
or someone else set up the cluster and provided you with credentials and a location.
|
||||
|
||||
Check the location and credentials that kubectl knows about with this command:
|
||||
|
||||
```shell
|
||||
$ kubectl config view
|
||||
```
|
||||
|
||||
Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
|
||||
kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
|
||||
|
||||
### Directly accessing the REST API
|
||||
|
||||
Kubectl handles locating and authenticating to the apiserver.
|
||||
If you want to directly access the REST API with an http client like
|
||||
curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
|
||||
#### Using kubectl proxy
|
||||
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
|
||||
```shell
|
||||
$ kubectl proxy --port=8080 &
|
||||
```
|
||||
|
||||
See [kubectl proxy](/docs/user-guide/kubectl/v1.6/#proxy) for more details.
|
||||
|
||||
Then you can explore the API with curl, wget, or a browser, like so:
|
||||
|
||||
```shell
|
||||
$ curl http://localhost:8080/api/
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (before v1.3.x)
|
||||
|
||||
It is possible to avoid using kubectl proxy by passing an authentication token
|
||||
directly to the apiserver, like this:
|
||||
|
||||
```shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (post v1.3.x)
|
||||
|
||||
In Kubernetes version 1.3 or later, `kubectl config view` no longer displays the token. Use `kubectl describe secret...` to get the token for the default service account, like this:
|
||||
|
||||
``` shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"kind": "APIVersions",
|
||||
"versions": [
|
||||
"v1"
|
||||
],
|
||||
"serverAddressByClientCIDRs": [
|
||||
{
|
||||
"clientCIDR": "0.0.0.0/0",
|
||||
"serverAddress": "10.0.1.149:443"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The above examples use the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
`~/.kube` directory). Since cluster certificates are typically self-signed, it
may take special configuration to get your http client to use the root
certificate.
|
||||
|
||||
On some clusters, the apiserver does not require authentication; it may serve
|
||||
on localhost, or be protected by a firewall. There is not a standard
|
||||
for this. [Configuring Access to the API](/docs/admin/accessing-the-api)
|
||||
describes how a cluster admin can configure this. Such approaches may conflict
|
||||
with future high-availability support.
|
||||
|
||||
### Programmatic access to the API
|
||||
|
||||
Kubernetes officially supports [Go](#go-client) and [Python](#python-client)
|
||||
client libraries.
|
||||
|
||||
#### Go client
|
||||
|
||||
* To get the library, run the following command: `go get k8s.io/client-go/<version number>/kubernetes`. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) to see which versions are supported.
|
||||
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.
|
||||
|
||||
The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).
|
||||
|
||||
If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
|
||||
|
||||
#### Python client
|
||||
|
||||
To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options.
|
||||
|
||||
The Python client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py).
|
||||
|
||||
#### Other languages
|
||||
|
||||
There are [client libraries](/docs/reference/client-libraries/) for accessing the API from other languages.
|
||||
See documentation for other libraries for how they authenticate.
|
||||
|
||||
### Accessing the API from a Pod
|
||||
|
||||
When accessing the API from a pod, locating and authenticating
|
||||
to the apiserver are somewhat different.
|
||||
|
||||
The recommended way to locate the apiserver within the pod is with
|
||||
the `kubernetes` DNS name, which resolves to a Service IP which in turn
|
||||
will be routed to an apiserver.
|
||||
|
||||
The recommended way to authenticate to the apiserver is with a
[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
|
||||
|
||||
If available, a certificate bundle is placed into the filesystem tree of each
|
||||
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
|
||||
used to verify the serving certificate of the apiserver.
|
||||
|
||||
Finally, the default namespace to be used for namespaced API operations is placed in a file
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
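
As a sketch of how these pieces fit together, a process in a container could call the API directly with the mounted credentials (`kubernetes.default.svc` is the in-cluster DNS name for the apiserver Service mentioned above):

```shell
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Verify the apiserver's serving certificate and authenticate with the service account token
curl --cacert $CACERT --header "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api
```
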
From within a pod the recommended ways to connect to API are:
|
||||
|
||||
- run a kubectl proxy as one of the containers in the pod, or as a background
|
||||
process within a container. This proxies the
|
||||
Kubernetes API to the localhost interface of the pod, so that other processes
|
||||
in any container of the pod can access it. See this [example of using kubectl proxy
|
||||
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
|
||||
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
|
||||
They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
|
||||
|
||||
In each case, the credentials of the pod are used to communicate securely with the apiserver.
|
||||
|
||||
|
||||
## Accessing services running on the cluster
|
||||
|
||||
The previous section was about connecting the Kubernetes API server. This section is about
|
||||
connecting to other services running on Kubernetes cluster. In Kubernetes, the
|
||||
[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have
|
||||
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
|
||||
routable, so they will not be reachable from a machine outside the cluster,
|
||||
such as your desktop machine.
|
||||
|
||||
### Ways to connect
|
||||
|
||||
You have several options for connecting to nodes, pods and services from outside the cluster:
|
||||
|
||||
- Access services through public IPs.
|
||||
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
|
||||
the cluster. See the [services](/docs/user-guide/services) and
|
||||
[kubectl expose](/docs/user-guide/kubectl/v1.6/#expose) documentation.
|
||||
- Depending on your cluster environment, this may just expose the service to your corporate network,
|
||||
or it may expose it to the internet. Think about whether the service being exposed is secure.
|
||||
Does it do its own authentication?
|
||||
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
|
||||
place a unique label on the pod and create a new service which selects this label.
|
||||
- In most cases, it should not be necessary for an application developer to directly access
nodes via their nodeIPs.
|
||||
- Access services, nodes, or pods using the Proxy Verb.
|
||||
- Does apiserver authentication and authorization prior to accessing the remote service.
|
||||
Use this if the services are not secure enough to expose to the internet, or to gain
|
||||
access to ports on the node IP, or for debugging.
|
||||
- Proxies may cause problems for some web applications.
|
||||
- Only works for HTTP/HTTPS.
|
||||
- Described [here](#manually-constructing-apiserver-proxy-urls).
|
||||
- Access from a node or pod in the cluster.
|
||||
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/user-guide/kubectl/v1.6/#exec).
|
||||
Connect to other nodes, pods, and services from that shell.
|
||||
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
|
||||
access cluster services. This is a non-standard method, and will work on some clusters but
|
||||
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
|
||||
|
||||
### Discovering builtin services
|
||||
|
||||
Typically, there are several services which are started on a cluster by kube-system. Get a list of these
|
||||
with the `kubectl cluster-info` command:
|
||||
|
||||
```shell
|
||||
$ kubectl cluster-info
|
||||
|
||||
Kubernetes master is running at https://104.197.5.247
|
||||
elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
|
||||
kibana-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy
|
||||
kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy
|
||||
grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
|
||||
heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy
|
||||
```
|
||||
|
||||
This shows the proxy-verb URL for accessing each service.
|
||||
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
|
||||
at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed. Logging can also be reached through a kubectl proxy, for example at:
|
||||
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
|
||||
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
|
||||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
|
||||
|
||||
##### Examples
|
||||
|
||||
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy`
|
||||
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true`
|
||||
|
||||
```json
|
||||
{
|
||||
"cluster_name" : "kubernetes_logging",
|
||||
"status" : "yellow",
|
||||
"timed_out" : false,
|
||||
"number_of_nodes" : 1,
|
||||
"number_of_data_nodes" : 1,
|
||||
"active_primary_shards" : 5,
|
||||
"active_shards" : 5,
|
||||
"relocating_shards" : 0,
|
||||
"initializing_shards" : 0,
|
||||
"unassigned_shards" : 5
|
||||
}
|
||||
```
|
||||
|
||||
#### Using web browsers to access services running on the cluster
|
||||
|
||||
You may be able to put an apiserver proxy url into the address bar of a browser. However:
|
||||
|
||||
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
|
||||
but your cluster may not be configured to accept basic auth.
|
||||
- Some web apps may not work, particularly those with client side javascript that construct urls in a
|
||||
way that is unaware of the proxy path prefix.
|
||||
|
||||
## Requesting redirects
|
||||
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
|
||||
## So Many Proxies
|
||||
|
||||
There are several different proxies you may encounter when using Kubernetes:
|
||||
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
1. The [apiserver proxy](#discovering-builtin-services):
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
1. The [kube proxy](/docs/user-guide/services/#ips-and-vips):
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is just used to reach services
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
1. Cloud Load Balancers on external services:
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are set up correctly.
|

@@ -1,94 +0,0 @@
---
title: Use Port Forwarding to Access Applications in a Cluster
---

{% capture overview %}
|
||||
|
||||
This page shows how to use `kubectl port-forward` to connect to a Redis
|
||||
server running in a Kubernetes cluster. This type of connection can be useful
|
||||
for database debugging.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* {% include task-tutorial-prereqs.md %}
|
||||
|
||||
* Install [redis-cli](http://redis.io/topics/rediscli).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Creating a pod to run a Redis server
|
||||
|
||||
1. Create a pod:
|
||||
|
||||
kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/redis-master.yaml
|
||||
|
||||
The output of a successful command verifies that the pod was created:
|
||||
|
||||
pod "redis-master" created
|
||||
|
||||
1. Check to see whether the pod is running and ready:
|
||||
|
||||
kubectl get pods
|
||||
|
||||
When the pod is ready, the output displays a STATUS of Running:
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-master 2/2 Running 0 41s
|
||||
|
||||
1. Verify that the Redis server is running in the pod and listening on port 6379:
|
||||
|
||||
{% raw %}
|
||||
kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
|
||||
{% endraw %}
|
||||
|
||||
The output displays the port:
|
||||
|
||||
6379
|
||||
|
||||
## Forward a local port to a port on the pod
|
||||
|
||||
1. Forward port 6379 on the local workstation to port 6379 of the redis-master pod:
|
||||
|
||||
kubectl port-forward redis-master 6379:6379
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
|
||||
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
|
||||
|
||||
1. Start the Redis command line interface:
|
||||
|
||||
redis-cli
|
||||
|
||||
1. At the Redis command line prompt, enter the `ping` command:
|
||||
|
||||
127.0.0.1:6379>ping
|
||||
|
||||
A successful ping request returns PONG.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture discussion %}
|
||||
|
||||
## Discussion
|
||||
|
||||
Connections made to local port 6379 are forwarded to port 6379 of the pod that
|
||||
is running the Redis server. With this connection in place you can use your
|
||||
local workstation to debug the database that is running in the pod.
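If local port 6379 is already in use on your workstation, you can forward from a different local port; the local port below (7000) is only an example:

    kubectl port-forward redis-master 7000:6379

    # in a second terminal
    redis-cli -p 7000 ping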
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture whatsnext %}
|
||||
Learn more about [kubectl port-forward](/docs/user-guide/kubectl/v1.6/#port-forward).
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -1,147 +0,0 @@
|
|||
---
|
||||
title: Use a Service to Access an Application in a Cluster
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to create a Kubernetes Service object that external
|
||||
clients can use to access an application running in a cluster. The Service
|
||||
provides load balancing for an application that has two running instances.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture objectives %}
|
||||
|
||||
* Run two instances of a Hello World application.
|
||||
* Create a Service object that exposes a node port.
|
||||
* Use the Service object to access the running application.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture lessoncontent %}
|
||||
|
||||
## Creating a service for an application running in two pods
|
||||
|
||||
1. Run a Hello World application in your cluster:
|
||||
|
||||
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
|
||||
|
||||
The preceding command creates a
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/)
|
||||
object and an associated
|
||||
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
|
||||
object. The ReplicaSet has two
|
||||
[Pods](/docs/concepts/workloads/pods/pod/),
|
||||
each of which runs the Hello World application.
|
||||
|
||||
1. Display information about the Deployment:
|
||||
|
||||
kubectl get deployments hello-world
|
||||
kubectl describe deployments hello-world
|
||||
|
||||
1. Display information about your ReplicaSet objects:
|
||||
|
||||
kubectl get replicasets
|
||||
kubectl describe replicasets
|
||||
|
||||
1. Create a Service object that exposes the deployment:
|
||||
|
||||
kubectl expose deployment hello-world --type=NodePort --name=example-service
|
||||
|
||||
1. Display information about the Service:
|
||||
|
||||
kubectl describe services example-service
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
Name: example-service
|
||||
Namespace: default
|
||||
Labels: run=load-balancer-example
|
||||
Selector: run=load-balancer-example
|
||||
Type: NodePort
|
||||
IP: 10.32.0.16
|
||||
Port: <unset> 8080/TCP
|
||||
NodePort: <unset> 31496/TCP
|
||||
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
|
||||
Session Affinity: None
|
||||
No events.
|
||||
|
||||
Make a note of the NodePort value for the service. For example,
|
||||
in the preceding output, the NodePort value is 31496.
|
||||
|
||||
1. List the pods that are running the Hello World application:
|
||||
|
||||
kubectl get pods --selector="run=load-balancer-example" --output=wide
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME READY STATUS ... IP NODE
|
||||
hello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1
|
||||
hello-world-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2
|
||||
|
||||
1. Get the public IP address of one of your nodes that is running
|
||||
a Hello World pod. How you get this address depends on how you set
|
||||
up your cluster. For example, if you are using Minikube, you can
|
||||
see the node address by running `kubectl cluster-info`. If you are
|
||||
using Google Compute Engine instances, you can use the
|
||||
`gcloud compute instances list` command to see the public addresses of your
|
||||
nodes. For more information about this command, see the [GCE documentation](https://cloud.google.com/sdk/gcloud/reference/compute/instances/list).
|
||||
|
||||
1. On your chosen node, create a firewall rule that allows TCP traffic
|
||||
on your node port. For example, if your Service has a NodePort value of
|
||||
31496, create a firewall rule that allows TCP traffic on port 31496. Different
|
||||
cloud providers offer different ways of configuring firewall rules. See [the
|
||||
GCE documentation on firewall rules](https://cloud.google.com/compute/docs/vpc/firewalls),
|
||||
for example.
|
||||
|
||||
1. Use the node address and node port to access the Hello World application:
|
||||
|
||||
curl http://<public-node-ip>:<node-port>
|
||||
|
||||
where `<public-node-ip>` is the public IP address of your node,
|
||||
and `<node-port>` is the NodePort value for your service.
|
||||
|
||||
The response to a successful request is a hello message:
|
||||
|
||||
Hello Kubernetes!
|
||||
|
||||
## Using a service configuration file
|
||||
|
||||
As an alternative to using `kubectl expose`, you can use a
|
||||
[service configuration file](/docs/user-guide/services/operations)
|
||||
to create a Service.
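A minimal sketch of such a configuration file for the Deployment above (the Service name and ports mirror the earlier example; treat it as illustrative rather than canonical) could be saved to a file and created with `kubectl create -f`:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
    spec:
      type: NodePort
      selector:
        run: load-balancer-example
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 8080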
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture cleanup %}
|
||||
|
||||
To delete the Service, enter this command:
|
||||
|
||||
kubectl delete services example-service
|
||||
|
||||
To delete the Deployment, the ReplicaSet, and the Pods that are running
|
||||
the Hello World application, enter this command:
|
||||
|
||||
kubectl delete deployment hello-world
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about
|
||||
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/tutorial.md %}
|
||||
|
|
@ -1,112 +0,0 @@
|
|||
---
|
||||
title: IP Masquerade Agent User Guide
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to configure and enable the ip-masq-agent.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture discussion %}
|
||||
## IP Masquerade Agent User Guide
|
||||
|
||||
The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
|
||||
|
||||
### **Key Terms**
|
||||
|
||||
* **NAT (Network Address Translation)**
|
||||
A method of remapping one IP address to another by modifying the source and/or destination address information in the IP header. It is typically performed by a device doing IP routing.
|
||||
* **Masquerading**
|
||||
A form of NAT that is typically used to perform a many-to-one address translation, where multiple source IP addresses are masked behind a single address, which is typically the address of the device doing the IP routing. In Kubernetes this is the Node's IP address.
|
||||
* **CIDR (Classless Inter-Domain Routing)**
|
||||
Based on variable-length subnet masking, CIDR allows specifying arbitrary-length prefixes. It introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
|
||||
* **Link Local**
|
||||
A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
|
||||
|
||||
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in GKE, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address, generally known as "masquerading". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent also treats link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent reloads its configuration from */etc/config/ip-masq-agent* every 60 seconds; this interval is also configurable.
|
||||
|
||||
![masq/non-masq example](/images/docs/ip-masq.png)
|
||||
|
||||
The agent configuration file must be written in YAML or JSON syntax, and may contain three optional keys:
|
||||
|
||||
* **nonMasqueradeCIDRs:** A list of strings in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
|
||||
* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default.
|
||||
* **resyncInterval:** The interval at which the agent attempts to reload its configuration from disk, for example '30s', where 's' means seconds and 'ms' means milliseconds.
|
||||
|
||||
Traffic to the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 ranges will NOT be masqueraded; any other traffic (assumed to be internet-bound) will be masqueraded by default. Examples of local destinations from a pod include its node's IP address, another node's address, and the IP addresses in the cluster's IP range. The entries below show the default set of rules applied by the ip-masq-agent:
|
||||
|
||||
```
|
||||
iptables -t nat -L IP-MASQ-AGENT
|
||||
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
|
||||
|
||||
```
|
||||
|
||||
By default, in GCE/GKE starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to your cluster:
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Create an ip-masq-agent
|
||||
To create an ip-masq-agent, run the following kubectl command:
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes-incubator/ip-masq-agent/master/ip-masq-agent.yaml
|
||||
```
|
||||
|
||||
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
|
||||
|
||||
```
|
||||
kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true
|
||||
```
|
||||
|
||||
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-incubator/ip-masq-agent).
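To confirm the agent is running, you can check its DaemonSet; the name below assumes the default manifest was used unmodified:

```
kubectl get daemonset ip-masq-agent --namespace kube-system
```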
|
||||
|
||||
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configmap/) in a file called "config".
|
||||
**Note:** It is important that the file is called `config` since, by default, that is the key the ip-masq-agent uses for lookup:
|
||||
|
||||
```
|
||||
nonMasqueradeCIDRs:
|
||||
- 10.0.0.0/8
|
||||
resyncInterval: 60s
|
||||
|
||||
```
|
||||
|
||||
Run the following command to add the config map to your cluster:
|
||||
|
||||
```
|
||||
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
|
||||
```
|
||||
|
||||
This updates a file located at */etc/config/ip-masq-agent*, which is checked every *resyncInterval* and applied to the cluster node.
|
||||
After the resync interval has expired, you should see the iptables rules reflect your changes:
|
||||
|
||||
```
|
||||
iptables -t nat -L IP-MASQ-AGENT
|
||||
Chain IP-MASQ-AGENT (1 references)
|
||||
target prot opt source destination
|
||||
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local
|
||||
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
|
||||
```
|
||||
|
||||
By default, the link-local range (169.254.0.0/16) is also treated as a non-masquerade CIDR by the ip-masq-agent, which sets up the appropriate iptables rules. To have the ip-masq-agent masquerade link-local traffic instead, set *masqLinkLocal* to true in the ConfigMap.
|
||||
|
||||
```
|
||||
nonMasqueradeCIDRs:
|
||||
- 10.0.0.0/8
|
||||
resyncInterval: 60s
|
||||
masqLinkLocal: true
|
||||
```
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -1,256 +0,0 @@
|
|||
---
|
||||
approvers:
|
||||
- eparis
|
||||
- pmorie
|
||||
title: Configure Containers Using a ConfigMap
|
||||
---
|
||||
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows you how to configure an application using a ConfigMap. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* {% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Use kubectl to create a ConfigMap
|
||||
|
||||
Use the `kubectl create configmap` command to create configmaps from [directories](#creating-configmaps-from-directories), [files](#creating-configmaps-from-files), or [literal values](#creating-configmaps-from-literal-values):
|
||||
|
||||
```shell
|
||||
kubectl create configmap <map-name> <data-source>
|
||||
```
|
||||
|
||||
where \<map-name> is the name you want to assign to the ConfigMap and \<data-source> is the directory, file, or literal value to draw the data from.
|
||||
|
||||
The data source corresponds to a key-value pair in the ConfigMap, where
|
||||
|
||||
* key = the file name or the key you provided on the command line, and
|
||||
* value = the file contents or the literal value you provided on the command line.
|
||||
|
||||
You can use [`kubectl describe`](/docs/user-guide/kubectl/v1.6/#describe) or [`kubectl get`](/docs/user-guide/kubectl/v1.6/#get) to retrieve information about a ConfigMap. The former shows a summary of the ConfigMap, while the latter returns the full contents of the ConfigMap.
|
||||
|
||||
### Create ConfigMaps from directories
|
||||
|
||||
You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same directory.
|
||||
|
||||
For example:
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
|
||||
```
|
||||
|
||||
combines the contents of the `docs/user-guide/configmap/kubectl/` directory
|
||||
|
||||
```shell
|
||||
ls docs/user-guide/configmap/kubectl/
|
||||
game.properties
|
||||
ui.properties
|
||||
```
|
||||
|
||||
into the following ConfigMap:
|
||||
|
||||
```shell
|
||||
kubectl describe configmaps game-config
|
||||
Name: game-config
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 158 bytes
|
||||
ui.properties: 83 bytes
|
||||
```
|
||||
|
||||
The `game.properties` and `ui.properties` files in the `docs/user-guide/configmap/kubectl/` directory are represented in the `data` section of the ConfigMap.
|
||||
|
||||
```shell
|
||||
kubectl get configmaps game-config -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game.properties: |
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
ui.properties: |
|
||||
color.good=purple
|
||||
color.bad=yellow
|
||||
allow.textmode=true
|
||||
how.nice.to.look=fairlyNice
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:52:05Z
|
||||
name: game-config
|
||||
namespace: default
|
||||
resourceVersion: "516"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config-2
|
||||
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
### Create ConfigMaps from files
|
||||
|
||||
You can use `kubectl create configmap` to create a ConfigMap from an individual file, or from multiple files.
|
||||
|
||||
For example,
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties
|
||||
```
|
||||
|
||||
would produce the following ConfigMap:
|
||||
|
||||
```shell
|
||||
kubectl describe configmaps game-config-2
|
||||
Name: game-config-2
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 158 bytes
|
||||
```
|
||||
|
||||
You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple data sources.
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl describe configmaps game-config-2
|
||||
Name: game-config-2
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 158 bytes
|
||||
ui.properties: 83 bytes
|
||||
```
|
||||
|
||||
#### Define the key to use when creating a ConfigMap from a file
|
||||
|
||||
You can define a key other than the file name to use in the `data` section of your ConfigMap when using the `--from-file` argument:
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file>
|
||||
```
|
||||
|
||||
where `<my-key-name>` is the key you want to use in the ConfigMap and `<path-to-file>` is the location of the data source file you want the key to represent.
|
||||
|
||||
For example:
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
|
||||
|
||||
kubectl get configmaps game-config-3 -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game-special-key: |
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:54:22Z
|
||||
name: game-config-3
|
||||
namespace: default
|
||||
resourceVersion: "530"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config-3
|
||||
uid: 05f8da22-d671-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
### Create ConfigMaps from literal values
|
||||
|
||||
You can use `kubectl create configmap` with the `--from-literal` argument to define a literal value from the command line:
|
||||
|
||||
```shell
|
||||
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
|
||||
```
|
||||
|
||||
You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the `data` section of the ConfigMap.
|
||||
|
||||
```shell
|
||||
kubectl get configmaps special-config -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
special.how: very
|
||||
special.type: charm
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T19:14:38Z
|
||||
name: special-config
|
||||
namespace: default
|
||||
resourceVersion: "651"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/special-config
|
||||
uid: dadce046-d673-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture discussion %}
|
||||
|
||||
## Understanding ConfigMaps
|
||||
|
||||
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
|
||||
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to [Secrets](/docs/concepts/configuration/secret/), but provides a means of working with strings that don't contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
|
||||
|
||||
**Note:** ConfigMaps should reference properties files, not replace them. Think of the ConfigMap as representing something similar to the Linux `/etc` directory and its contents. For example, if you create a [Kubernetes Volume](/docs/concepts/storage/volumes/) from a ConfigMap, each data item in the ConfigMap is represented by an individual file in the volume.
|
||||
{: .note}
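As a hedged sketch of that volume behavior (the pod name is a placeholder; the ConfigMap name reuses `game-config` from above), each key appears as a file under the mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: demo
    image: gcr.io/google_containers/busybox
    # Lists the files created from the ConfigMap keys, then idles.
    command: ["/bin/sh", "-c", "ls /etc/config && sleep 3600"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: game-config
  restartPolicy: Never
```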
|
||||
|
||||
The ConfigMap's `data` field contains the configuration data. As shown in the example below, this can be simple -- like individual properties defined using `--from-literal` -- or complex -- like configuration files or JSON blobs defined using `--from-file`.
|
||||
|
||||
```yaml
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T19:14:38Z
|
||||
name: example-config
|
||||
namespace: default
|
||||
data:
|
||||
# example of a simple property defined using --from-literal
|
||||
example.property.1: hello
|
||||
example.property.2: world
|
||||
# example of a complex property defined using --from-file
|
||||
example.property.file: |-
|
||||
property.1=value-1
|
||||
property.2=value-2
|
||||
property.3=value-3
|
||||
```
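A pod can also surface individual keys as environment variables; this is a sketch reusing `example-config` from above (the pod name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  containers:
  - name: demo
    image: gcr.io/google_containers/busybox
    # Prints the injected variable, then exits.
    command: ["/bin/sh", "-c", "echo example.property.1=$EXAMPLE_PROPERTY_1"]
    env:
    - name: EXAMPLE_PROPERTY_1
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: example.property.1
  restartPolicy: Never
```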
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
* See [Using ConfigMap Data in Pods](/docs/tasks/configure-pod-container/configure-pod-configmap).
|
||||
* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -1,301 +0,0 @@
|
|||
---
|
||||
title: Configure Liveness and Readiness Probes
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to configure liveness and readiness probes for Containers.
|
||||
|
||||
The [kubelet](/docs/admin/kubelet/) uses liveness probes to know when to
|
||||
restart a Container. For example, liveness probes could catch a deadlock,
|
||||
where an application is running, but unable to make progress. Restarting a
|
||||
Container in such a state can help to make the application more available
|
||||
despite bugs.
|
||||
|
||||
The kubelet uses readiness probes to know when a Container is ready to start
|
||||
accepting traffic. A Pod is considered ready when all of its Containers are ready.
|
||||
One use of this signal is to control which Pods are used as backends for Services.
|
||||
When a Pod is not ready, it is removed from Service load balancers.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Define a liveness command
|
||||
|
||||
Many applications running for long periods of time eventually transition to
|
||||
broken states, and cannot recover except by being restarted. Kubernetes provides
|
||||
liveness probes to detect and remedy such situations.
|
||||
|
||||
In this exercise, you create a Pod that runs a Container based on the
|
||||
`gcr.io/google_containers/busybox` image. Here is the configuration file for the Pod:
|
||||
|
||||
{% include code.html language="yaml" file="exec-liveness.yaml" ghlink="/docs/tasks/configure-pod-container/exec-liveness.yaml" %}
|
||||
|
||||
In the configuration file, you can see that the Pod has a single Container.
|
||||
The `periodSeconds` field specifies that the kubelet should perform a liveness
|
||||
probe every 5 seconds. The `initialDelaySeconds` field tells the kubelet that it
|
||||
should wait 5 seconds before performing the first probe. To perform a probe, the
|
||||
kubelet executes the command `cat /tmp/healthy` in the Container. If the
|
||||
command succeeds, it returns 0, and the kubelet considers the Container to be alive and
|
||||
healthy. If the command returns a non-zero value, the kubelet kills the Container
|
||||
and restarts it.
|
||||
|
||||
When the Container starts, it executes this command:
|
||||
|
||||
```shell
|
||||
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
|
||||
```
|
||||
|
||||
For the first 30 seconds of the Container's life, there is a `/tmp/healthy` file.
|
||||
So during the first 30 seconds, the command `cat /tmp/healthy` returns a success
|
||||
code. After 30 seconds, `cat /tmp/healthy` returns a failure code.
|
||||
|
||||
Create the Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
|
||||
```
|
||||
|
||||
Within 30 seconds, view the Pod events:
|
||||
|
||||
```
|
||||
kubectl describe pod liveness-exec
|
||||
```
|
||||
|
||||
The output indicates that no liveness probes have failed yet:
|
||||
|
||||
```shell
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
|
||||
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
|
||||
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
|
||||
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
|
||||
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
|
||||
```
|
||||
|
||||
After 35 seconds, view the Pod events again:
|
||||
|
||||
```shell
|
||||
kubectl describe pod liveness-exec
|
||||
```
|
||||
|
||||
At the bottom of the output, there are messages indicating that the liveness
|
||||
probes have failed, and the containers have been killed and recreated.
|
||||
|
||||
```shell
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
|
||||
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
|
||||
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
|
||||
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
|
||||
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
|
||||
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
|
||||
```
|
||||
|
||||
Wait another 30 seconds, and verify that the Container has been restarted:
|
||||
|
||||
```shell
|
||||
kubectl get pod liveness-exec
|
||||
```
|
||||
|
||||
The output shows that `RESTARTS` has been incremented:
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
liveness-exec 1/1 Running 1 1m
|
||||
```
|
||||
|
||||
## Define a liveness HTTP request
|
||||
|
||||
Another kind of liveness probe uses an HTTP GET request. Here is the configuration
|
||||
file for a Pod that runs a container based on the `gcr.io/google_containers/liveness`
|
||||
image.
|
||||
|
||||
{% include code.html language="yaml" file="http-liveness.yaml" ghlink="/docs/tasks/configure-pod-container/http-liveness.yaml" %}
|
||||
|
||||
In the configuration file, you can see that the Pod has a single Container.
|
||||
The `periodSeconds` field specifies that the kubelet should perform a liveness
|
||||
probe every 3 seconds. The `initialDelaySeconds` field tells the kubelet that it
|
||||
should wait 3 seconds before performing the first probe. To perform a probe, the
|
||||
kubelet sends an HTTP GET request to the server that is running in the Container
|
||||
and listening on port 8080. If the handler for the server's `/healthz` path
|
||||
returns a success code, the kubelet considers the Container to be alive and
|
||||
healthy. If the handler returns a failure code, the kubelet kills the Container
|
||||
and restarts it.
|
||||
|
||||
Any code greater than or equal to 200 and less than 400 indicates success. Any
|
||||
other code indicates failure.
|
||||
|
||||
You can see the source code for the server in
|
||||
[server.go](https://github.com/kubernetes/kubernetes/blob/master/test/images/liveness/server.go).
|
||||
|
||||
For the first 10 seconds that the Container is alive, the `/healthz` handler
|
||||
returns a status of 200. After that, the handler returns a status of 500.
|
||||
|
||||
```go
|
||||
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
|
||||
duration := time.Now().Sub(started)
|
||||
if duration.Seconds() > 10 {
|
||||
w.WriteHeader(500)
|
||||
w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
|
||||
} else {
|
||||
w.WriteHeader(200)
|
||||
w.Write([]byte("ok"))
|
||||
}
|
||||
})
|
||||
```
|
||||
|
||||
The kubelet starts performing health checks 3 seconds after the Container starts.
|
||||
So the first couple of health checks will succeed. But after 10 seconds, the health
|
||||
checks will fail, and the kubelet will kill and restart the Container.
|
||||
|
||||
To try the HTTP liveness check, create a Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/http-liveness.yaml
|
||||
```
|
||||
|
||||
After 10 seconds, view Pod events to verify that liveness probes have failed and
|
||||
the Container has been restarted:
|
||||
|
||||
```shell
|
||||
kubectl describe pod liveness-http
|
||||
```
|
||||
|
||||
## Define a TCP liveness probe
|
||||
|
||||
A third type of liveness probe uses a TCP Socket. With this configuration, the
|
||||
kubelet will attempt to open a socket to your container on the specified port.
|
||||
If it can establish a connection, the container is considered healthy; if it
|
||||
can't, the probe is considered a failure.
|
||||
|
||||
{% include code.html language="yaml" file="tcp-liveness-readiness.yaml" ghlink="/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml" %}
|
||||
|
||||
As you can see, configuration for a TCP check is quite similar to an HTTP check.
|
||||
This example uses both readiness and liveness probes. The kubelet will send the
|
||||
first readiness probe 5 seconds after the container starts. This will attempt to
|
||||
connect to the `goproxy` container on port 8080. If the probe succeeds, the pod
|
||||
will be marked as ready. The kubelet will continue to run this check every 10
|
||||
seconds.
|
||||
|
||||
In addition to the readiness probe, this configuration includes a liveness probe.
|
||||
The kubelet will run the first liveness probe 15 seconds after the container
|
||||
starts. Just like the readiness probe, this will attempt to connect to the
|
||||
`goproxy` container on port 8080. If the liveness probe fails, the container
|
||||
will be restarted.
|
||||
|
||||
## Use a named port
|
||||
|
||||
You can use a named
|
||||
[ContainerPort](/docs/api-reference/{{page.version}}/#containerport-v1-core)
|
||||
for HTTP or TCP liveness checks:
|
||||
|
||||
```yaml
|
||||
ports:
|
||||
- name: liveness-port
|
||||
containerPort: 8080
|
||||
hostPort: 8080
|
||||
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: liveness-port
|
||||
```
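The same named port can back a TCP check; a minimal sketch:

```yaml
livenessProbe:
  tcpSocket:
    port: liveness-port
  initialDelaySeconds: 15
  periodSeconds: 20
```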
|
||||
|
||||
## Define readiness probes
|
||||
|
||||
Sometimes, applications are temporarily unable to serve traffic.
|
||||
For example, an application might need to load large data or configuration
|
||||
files during startup. In such cases, you don't want to kill the application,
|
||||
but you don’t want to send it requests either. Kubernetes provides
|
||||
readiness probes to detect and mitigate these situations. A pod with containers
|
||||
reporting that they are not ready does not receive traffic through Kubernetes
|
||||
Services.
|
||||
|
||||
Readiness probes are configured similarly to liveness probes. The only difference
|
||||
is that you use the `readinessProbe` field instead of the `livenessProbe` field.
|
||||
|
||||
```yaml
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- cat
|
||||
- /tmp/healthy
|
||||
initialDelaySeconds: 5
|
||||
periodSeconds: 5
|
||||
```
|
||||
|
||||
Configuration for HTTP and TCP readiness probes also remains identical to
|
||||
liveness probes.
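For example, an HTTP readiness probe reuses the `httpGet` fields shown earlier (the values here are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```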
|
||||
|
||||
Readiness and liveness probes can be used in parallel for the same container.
|
||||
Using both can ensure that traffic does not reach a container that is not ready
|
||||
for it, and that containers are restarted when they fail.
|
||||
|
||||
## Configure Probes
|
||||
|
||||
{% comment %}
|
||||
Eventually, some of this section could be moved to a concept topic.
|
||||
{% endcomment %}
|
||||
|
||||
[Probes](/docs/api-reference/{{page.version}}/#probe-v1-core) have a number of fields that
|
||||
you can use to more precisely control the behavior of liveness and readiness
|
||||
checks:
|
||||
|
||||
* `initialDelaySeconds`: Number of seconds after the container has started
|
||||
before liveness probes are initiated.
|
||||
* `periodSeconds`: How often (in seconds) to perform the probe. Defaults to 10
|
||||
seconds. Minimum value is 1.
|
||||
* `timeoutSeconds`: Number of seconds after which the probe times out. Defaults
|
||||
to 1 second. Minimum value is 1.
|
||||
* `successThreshold`: Minimum consecutive successes for the probe to be
|
||||
considered successful after having failed. Defaults to 1. Must be 1 for
|
||||
liveness. Minimum value is 1.
|
||||
* `failureThreshold`: Minimum consecutive failures for the probe to be
|
||||
considered failed after having succeeded. Defaults to 3. Minimum value is 1.
|
||||
|
||||
[HTTP probes](/docs/api-reference/{{page.version}}/#httpgetaction-v1-core)
|
||||
have additional fields that can be set on `httpGet`:
|
||||
|
||||
* `host`: Host name to connect to, defaults to the pod IP. You probably want to
|
||||
set "Host" in httpHeaders instead.
|
||||
* `scheme`: Scheme to use for connecting to the host. Defaults to HTTP.
|
||||
* `path`: Path to access on the HTTP server.
|
||||
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
|
||||
* `port`: Name or number of the port to access on the container. Number must be
|
||||
in the range 1 to 65535.
|
||||
|
||||
For an HTTP probe, the kubelet sends an HTTP request to the specified path and
|
||||
port to perform the check. The kubelet sends the probe to the pod’s IP address,
|
||||
unless the address is overridden by the optional `host` field in `httpGet`.
|
||||
In most scenarios, you do not want to set the `host` field. Here's one scenario
|
||||
where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod's
|
||||
`hostNetwork` field is true. Then `host`, under `httpGet`, should be set to 127.0.0.1.
|
||||
If your pod relies on virtual hosts, which is probably the more common case,
|
||||
you should not use `host`, but rather set the `Host` header in `httpHeaders`.
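For example, a probe aimed at a virtual host could look like this (the hostname is illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: my-vhost.example.com
  initialDelaySeconds: 3
  periodSeconds: 3
```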
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
* Learn more about
|
||||
[Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
|
||||
|
||||
### Reference
|
||||
|
||||
* [Pod](/docs/api-reference/{{page.version}}/#pod-v1-core)
|
||||
* [Container](/docs/api-reference/{{page.version}}/#container-v1-core)
|
||||
* [Probe](/docs/api-reference/{{page.version}}/#probe-v1-core)
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -1,231 +0,0 @@
|
|||
---
|
||||
approvers:
|
||||
- bprashanth
|
||||
- liggitt
|
||||
- thockin
|
||||
title: Configure Service Accounts for Pods
|
||||
---
|
||||
|
||||
A service account provides an identity for processes that run in a Pod.
|
||||
|
||||
*This is a user introduction to Service Accounts. See also the
|
||||
[Cluster Admin Guide to Service Accounts](/docs/admin/service-accounts-admin).*
|
||||
|
||||
**Note:** This document describes how service accounts behave in a cluster set up
|
||||
as recommended by the Kubernetes project. Your cluster administrator may have
|
||||
customized the behavior in your cluster, in which case this documentation may
|
||||
not apply.
|
||||
{: .note}
|
||||
|
||||
When you (a human) access the cluster (e.g. using `kubectl`), you are
|
||||
authenticated by the apiserver as a particular User Account (currently this is
|
||||
usually `admin`, unless your cluster administrator has customized your
|
||||
cluster). Processes in containers inside pods can also contact the apiserver.
|
||||
When they do, they are authenticated as a particular Service Account (e.g.
|
||||
`default`).
|
||||
|
||||
## Use the Default Service Account to access the API server.
|
||||
|
||||
When you create a pod, if you do not specify a service account, it is
|
||||
automatically assigned the `default` service account in the same namespace.
|
||||
If you get the raw json or yaml for a pod you have created (e.g. `kubectl get pods/podname -o yaml`),
|
||||
you can see the `spec.serviceAccountName` field has been
|
||||
[automatically set](/docs/user-guide/working-with-resources/#resources-are-automatically-modified).
|
||||
|
||||
You can access the API from inside a pod using automatically mounted service account credentials,
|
||||
as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod).
|
||||
The API permissions a service account has depend on the [authorization plugin and policy](/docs/admin/authorization/#a-quick-note-on-service-accounts) in use.
|
||||
|
||||
In version 1.6+, you can opt out of automounting API credentials for a service account by setting
|
||||
`automountServiceAccountToken: false` on the service account:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: build-robot
|
||||
automountServiceAccountToken: false
|
||||
...
|
||||
```
|
||||
|
||||
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: my-pod
|
||||
spec:
|
||||
serviceAccountName: build-robot
|
||||
automountServiceAccountToken: false
|
||||
...
|
||||
```
|
||||
|
||||
The pod spec takes precedence over the service account if both specify an `automountServiceAccountToken` value.
|
||||
|
||||
## Use Multiple Service Accounts.
|
||||
|
||||
Every namespace has a default service account resource called `default`.
|
||||
You can list this and any other serviceAccount resources in the namespace with this command:
|
||||
|
||||
```shell
|
||||
$ kubectl get serviceAccounts
|
||||
NAME SECRETS AGE
|
||||
default 1 1d
|
||||
```
|
||||
|
||||
You can create additional ServiceAccount objects like this:
|
||||
|
||||
```shell
|
||||
$ cat > /tmp/serviceaccount.yaml <<EOF
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: build-robot
|
||||
EOF
|
||||
$ kubectl create -f /tmp/serviceaccount.yaml
|
||||
serviceaccount "build-robot" created
|
||||
```
|
||||
|
||||
If you get a complete dump of the service account object, like this:
|
||||
|
||||
```shell
|
||||
$ kubectl get serviceaccounts/build-robot -o yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
creationTimestamp: 2015-06-16T00:12:59Z
|
||||
name: build-robot
|
||||
namespace: default
|
||||
resourceVersion: "272500"
|
||||
selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
|
||||
uid: 721ab723-13bc-11e5-aec2-42010af0021e
|
||||
secrets:
|
||||
- name: build-robot-token-bvbk5
|
||||
```
|
||||
|
||||
then you will see that a token has automatically been created and is referenced by the service account.
|
||||
|
||||
You may use authorization plugins to [set permissions on service accounts](/docs/admin/authorization/#a-quick-note-on-service-accounts).
|
||||
|
||||
To use a non-default service account, simply set the `spec.serviceAccountName`
|
||||
field of a pod to the name of the service account you wish to use.
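For example, a minimal pod spec that runs under the `build-robot` account created above (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod
spec:
  serviceAccountName: build-robot
  containers:
  - name: main
    image: gcr.io/google_containers/busybox
    command: ["sleep", "3600"]
```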
|
||||
|
||||
The service account has to exist at the time the pod is created, or it will be rejected.
|
||||
|
||||
You cannot update the service account of an already created pod.
|
||||
|
||||
You can clean up the service account from this example like this:
|
||||
|
||||
```shell
|
||||
$ kubectl delete serviceaccount/build-robot
|
||||
```
|
||||
|
||||
## Manually create a service account API token.
|
||||
|
||||
Suppose we have an existing service account named "build-robot" as mentioned above, and we create
|
||||
a new secret manually.
|
||||
|
||||
```shell
|
||||
$ cat > /tmp/build-robot-secret.yaml <<EOF
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: build-robot-secret
|
||||
annotations:
|
||||
kubernetes.io/service-account.name: build-robot
|
||||
type: kubernetes.io/service-account-token
|
||||
EOF
|
||||
$ kubectl create -f /tmp/build-robot-secret.yaml
|
||||
secret "build-robot-secret" created
|
||||
```
|
||||
|
||||
Now you can confirm that the newly built secret is populated with an API token for the "build-robot" service account.
|
||||
|
||||
Any tokens for non-existent service accounts will be cleaned up by the token controller.
|
||||
|
||||
```shell
|
||||
$ kubectl describe secrets/build-robot-secret
|
||||
Name: build-robot-secret
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: kubernetes.io/service-account.name=build-robot,kubernetes.io/service-account.uid=870ef2a5-35cf-11e5-8d06-005056b45392
|
||||
|
||||
Type: kubernetes.io/service-account-token
|
||||
|
||||
Data
|
||||
====
|
||||
ca.crt: 1220 bytes
|
||||
token: ...
|
||||
namespace: 7 bytes
|
||||
```
|
||||
|
||||
**Note:** The content of `token` is elided here.
|
||||
{: .note}
|
||||
|
||||
## Add ImagePullSecrets to a service account
|
||||
|
||||
First, create an imagePullSecret, as described [here](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
|
||||
Next, verify it has been created. For example:
|
||||
|
||||
```shell
|
||||
$ kubectl get secrets myregistrykey
|
||||
NAME TYPE DATA AGE
|
||||
myregistrykey kubernetes.io/.dockerconfigjson 1 1d
|
||||
```
|
||||
|
||||
Next, modify the default service account for the namespace to use this secret as an imagePullSecret.
|
||||
|
||||
```shell
|
||||
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
|
||||
```
|
||||
|
||||
Interactive version requiring manual edit:
|
||||
|
||||
```shell
|
||||
$ kubectl get serviceaccounts default -o yaml > ./sa.yaml
|
||||
$ cat sa.yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
creationTimestamp: 2015-08-07T22:02:39Z
|
||||
name: default
|
||||
namespace: default
|
||||
resourceVersion: "243024"
|
||||
selfLink: /api/v1/namespaces/default/serviceaccounts/default
|
||||
uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
|
||||
secrets:
|
||||
- name: default-token-uudge
|
||||
$ vi sa.yaml
|
||||
[editor session not shown]
|
||||
[delete line with key "resourceVersion"]
|
||||
[add lines with "imagePullSecret:"]
|
||||
$ cat sa.yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
creationTimestamp: 2015-08-07T22:02:39Z
|
||||
name: default
|
||||
namespace: default
|
||||
selfLink: /api/v1/namespaces/default/serviceaccounts/default
|
||||
uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
|
||||
secrets:
|
||||
- name: default-token-uudge
|
||||
imagePullSecrets:
|
||||
- name: myregistrykey
|
||||
$ kubectl replace serviceaccount default -f ./sa.yaml
|
||||
serviceaccounts/default
|
||||
```
|
||||
|
||||
Now, any new pods created in the current namespace will have this added to their spec:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
imagePullSecrets:
|
||||
- name: myregistrykey
|
||||
```
|
||||
|
||||
<!--## Adding Secrets to a service account.
|
||||
|
||||
TODO: Test and explain how to use additional non-K8s secrets with an existing service account.
|
||||
-->
|
|
@ -1,154 +0,0 @@
|
|||
---
|
||||
approvers:
|
||||
- fgrzadkowski
|
||||
- jszczepkowski
|
||||
- directxman12
|
||||
title: Horizontal Pod Autoscaling
|
||||
---
|
||||
|
||||
This document describes the current state of Horizontal Pod Autoscaling in Kubernetes.
|
||||
|
||||
## What is Horizontal Pod Autoscaling?
|
||||
|
||||
With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods
|
||||
in a replication controller, deployment or replica set based on observed CPU utilization
|
||||
(or, with alpha support, on some other, application-provided metrics). Note that Horizontal
|
||||
Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSet.
|
||||
|
||||
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
|
||||
The resource determines the behavior of the controller.
|
||||
The controller periodically adjusts the number of replicas in a replication controller or deployment
|
||||
to match the observed average CPU utilization to the target specified by the user.
|
||||
|
||||
## How does the Horizontal Pod Autoscaler work?
|
||||
|
||||
![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg)
|
||||
|
||||
The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled
|
||||
by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (with a default
|
||||
value of 30 seconds).
|
||||
|
||||
During each period, the controller manager queries the resource utilization against the
|
||||
metrics specified in each HorizontalPodAutoscaler definition. The controller manager
|
||||
obtains the metrics from either the resource metrics API (for per-pod resource metrics),
|
||||
or the custom metrics API (for all other metrics).
|
||||
|
||||
* For per-pod resource metrics (like CPU), the controller fetches the metrics
|
||||
from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler.
|
||||
Then, if a target utilization value is set, the controller calculates the utilization
|
||||
value as a percentage of the equivalent resource request on the containers in
|
||||
each pod. If a target raw value is set, the raw metric values are used directly.
|
||||
The controller then takes the mean of the utilization or the raw value (depending on the type
|
||||
of target specified) across all targeted pods, and produces a ratio used to scale
|
||||
the number of desired replicas.
|
||||
|
||||
Please note that if some of the pod's containers do not have the relevant resource request set,
|
||||
CPU utilization for the pod will not be defined and the autoscaler will not take any action
|
||||
for that metric. See the [autoscaling algorithm design document](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm) for further
|
||||
details about how the autoscaling algorithm works.
|
||||
|
||||
* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics,
|
||||
except that it works with raw values, not utilization values.
|
||||
|
||||
* For object metrics, a single metric is fetched (which describes the object
|
||||
in question), and compared to the target value, to produce a ratio as above.
|
||||
|
||||
The HorizontalPodAutoscaler controller can fetch metrics in two different ways: direct Heapster
|
||||
access, and REST client access.
|
||||
|
||||
When using direct Heapster access, the HorizontalPodAutoscaler queries Heapster directly
|
||||
through the API server's service proxy subresource. Heapster needs to be deployed on the
|
||||
cluster and running in the kube-system namespace.
|
||||
|
||||
See [Support for custom metrics](#support-for-custom-metrics) for more details on REST client access.
|
||||
|
||||
The autoscaler accesses corresponding replication controller, deployment or replica set by scale sub-resource.
|
||||
Scale is an interface that allows you to dynamically set the number of replicas and examine their current state.
|
||||
More details on scale sub-resource can be found [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
|
||||
|
||||
|
||||
## API Object
|
||||
|
||||
The Horizontal Pod Autoscaler is an API resource in the Kubernetes `autoscaling` API group.
|
||||
The current stable version, which only includes support for CPU autoscaling,
|
||||
can be found in the `autoscaling/v1` API version.
|
||||
|
||||
The alpha version, which includes support for scaling on memory and custom metrics,
|
||||
can be found in `autoscaling/v2alpha1`. The new fields introduced in `autoscaling/v2alpha1`
|
||||
are preserved as annotations when working with `autoscaling/v1`.
|
||||
|
||||
More details about the API object can be found at
|
||||
[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
|
||||
|
||||
## Support for Horizontal Pod Autoscaler in kubectl
|
||||
|
||||
Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`.
|
||||
We can create a new autoscaler using the `kubectl create` command.
|
||||
We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`.
|
||||
Finally, we can delete an autoscaler using `kubectl delete hpa`.
|
||||
|
||||
In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
|
||||
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
|
||||
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
|
||||
and the number of replicas between 2 and 5.
|
||||
The detailed documentation of `kubectl autoscale` can be found [here](/docs/user-guide/kubectl/v1.6/#autoscale).
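As a quick illustration, assuming a deployment named `my-app` already exists (the name is a placeholder):

```shell
# Create an autoscaler that targets 80% average CPU utilization
kubectl autoscale deployment my-app --min=2 --max=5 --cpu-percent=80

# Inspect it, and delete it when no longer needed
kubectl get hpa
kubectl describe hpa my-app
kubectl delete hpa my-app
```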
|
||||
|
||||
|
||||
## Autoscaling during rolling update
|
||||
|
||||
Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly,
|
||||
or by using the deployment object, which manages the underlying replication controllers for you.
|
||||
Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
|
||||
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replication controllers.
|
||||
|
||||
Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
|
||||
i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
|
||||
The reason this doesn't work is that when rolling update creates a new replication controller,
|
||||
the Horizontal Pod Autoscaler will not be bound to the new replication controller.
|
||||
|
||||
## Support for multiple metrics
|
||||
|
||||
Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2alpha1` API
|
||||
version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod
|
||||
Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the
|
||||
proposed scales will be used as the new scale.
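A sketch of what such a manifest could look like under `autoscaling/v2alpha1` as described in this section; the object names and the custom metric are placeholders, and the exact field names should be verified against your cluster's API:

```yaml
apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  # Per-pod resource metric: scale on average CPU utilization.
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  # Per-pod custom metric: scale on a raw value.
  - type: Pods
    pods:
      metricName: requests-per-second
      targetAverageValue: 1k
```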
|
||||
|
||||
## Support for custom metrics
|
||||
|
||||
**Note**: Kubernetes 1.2 added alpha support for scaling based on application-specific metrics using special annotations.
|
||||
Support for these annotations was removed in Kubernetes 1.6 in favor of the `autoscaling/v2alpha1` API. While the old method for collecting
|
||||
custom metrics is still available, these metrics will not be available for use by the Horizontal Pod Autoscaler, and the former
|
||||
annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod Autoscaler controller.
|
||||
|
||||
Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler.
|
||||
You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2alpha1` API.
|
||||
Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics.
|
||||
|
||||
### Requirements
|
||||
|
||||
To use custom metrics with your Horizontal Pod Autoscaler, you must set the necessary configurations when deploying your cluster:
|
||||
|
||||
* [Enable the API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) if you have not already done so.
|
||||
|
||||
* Register your resource metrics API and your
|
||||
custom metrics API with the API aggregation layer. Both of these API servers must be running *on* your cluster.
|
||||
|
||||
* *Resource Metrics API*: You can use Heapster's implementation of the resource metrics API by running Heapster with its `--api-server` flag set to true.
|
||||
|
||||
* *Custom Metrics API*: This must be provided by a separate component. To get started with boilerplate code, see the [kubernetes-incubator/custom-metrics-apiserver](https://github.com/kubernetes-incubator/custom-metrics-apiserver) and the [k8s.io/metrics](https://github.com/kubernetes/metrics) repositories.
|
||||
|
||||
* Set the appropriate flags for kube-controller-manager:
|
||||
|
||||
* `--horizontal-pod-autoscaler-use-rest-clients` should be true.
|
||||
|
||||
* `--kubeconfig <path-to-kubeconfig>` OR `--master <ip-address-of-apiserver>`
|
||||
|
||||
Note that either the `--master` or `--kubeconfig` flag can be used; `--master` will override `--kubeconfig` if both are specified. These flags specify the location of the API aggregation layer, allowing the controller manager to communicate to the API server.
|
||||
|
||||
In Kubernetes 1.7, the standard aggregation layer that Kubernetes provides runs in-process with the kube-apiserver, so the target IP address can be found with `kubectl get pods --selector k8s-app=kube-apiserver --namespace kube-system -o jsonpath='{.items[0].status.podIP}'`.
|
||||
|
||||
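A minimal sketch of the resulting kube-controller-manager flags; the kubeconfig path is a placeholder, and any other flags your cluster requires are omitted:

```console
$ kube-controller-manager \
    --horizontal-pod-autoscaler-use-rest-clients=true \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
```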
## Further reading

* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/user-guide/kubectl/v1.6/#autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
@ -1,205 +0,0 @@
---
title: Manage TLS Certificates in a Cluster
approvers:
- mikedanese
- beacham
- liggit
---

* TOC
{:toc}
## Overview

Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server's certificate, by the API server to validate kubelet client certificates, etc. To support this, the CA certificate bundle is distributed to every node in the cluster and is distributed as a secret attached to default service accounts. Optionally, your workloads can use this CA to establish trust. Your application can request a certificate signing through the `certificates.k8s.io` API, using a protocol that is similar to the [ACME draft](https://github.com/ietf-wg-acme/acme/).
## Trusting TLS in a Cluster

Trusting the cluster root CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, with a golang TLS config you would do this by parsing the CA certificate bundle into an `x509.CertPool` and using that pool as the `RootCAs` field of the [`tls.Config`](https://godoc.org/crypto/tls#Config) struct.
The CA certificate bundle is automatically mounted into pods using the default service account at the path `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`. If you are not using the default service account, ask a cluster administrator to build a configmap containing the certificate bundle that you have access to use.
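For example, you can verify that the bundle is mounted from inside a running pod (the pod name `my-pod` is a placeholder):

```console
$ kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```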
## Requesting a Certificate

The following section demonstrates how to create a TLS certificate for a Kubernetes service accessed through DNS.

### Step 0. Download and install CFSSL

The cfssl tools used in this example can be downloaded at [https://pkg.cfssl.org/](https://pkg.cfssl.org/).

### Step 1. Create a Certificate Signing Request

Generate a private key and certificate signing request (or CSR) by running the following command:
```console
$ cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "172.168.0.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
```
Where `172.168.0.24` is the service's cluster IP, `my-svc.my-namespace.svc.cluster.local` is the service's DNS name, `10.0.34.2` is the pod's IP and `my-pod.my-namespace.pod.cluster.local` is the pod's DNS name. You should see the following output:

```
2017/03/21 06:48:17 [INFO] generate received request
2017/03/21 06:48:17 [INFO] received CSR
2017/03/21 06:48:17 [INFO] generating key: ecdsa-256
2017/03/21 06:48:17 [INFO] encoded CSR
```
This command generates two files: `server.csr`, containing the PEM encoded [pkcs#10](https://tools.ietf.org/html/rfc2986) certification request, and `server-key.pem`, containing the PEM encoded private key for the certificate that is still to be created.

### Step 2. Create a Certificate Signing Request object to send to the Kubernetes API

Generate a CSR yaml blob and send it to the apiserver by running the following command:
```console
$ cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  groups:
  - system:authenticated
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
```
Notice that the `server.csr` file created in step 1 is base64 encoded and stashed in the `.spec.request` field. We are also requesting a certificate with the "digital signature", "key encipherment", and "server auth" key usages. We support all key usages and extended key usages listed [here](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage) so you can request client certificates and other certificates using this same API.

The CSR should now be visible from the API in a Pending state. You can see it by running:
```console
$ kubectl describe csr my-svc.my-namespace
Name:                   my-svc.my-namespace
Labels:                 <none>
Annotations:            <none>
CreationTimestamp:      Tue, 21 Mar 2017 07:03:51 -0700
Requesting User:        yourname@example.com
Status:                 Pending
Subject:
        Common Name:    my-svc.my-namespace.svc.cluster.local
        Serial Number:
Subject Alternative Names:
        DNS Names:      my-svc.my-namespace.svc.cluster.local
        IP Addresses:   172.168.0.24
                        10.0.34.2
Events: <none>
```
### Step 3. Get the Certificate Signing Request Approved

Approving the certificate signing request is either done by an automated approval process or on a one-off basis by a cluster administrator. More information on what this involves is covered below.

### Step 4. Download the Certificate and Use It

Once the CSR is signed and approved, you should see the following:
```console
$ kubectl get csr
NAME                  AGE       REQUESTOR               CONDITION
my-svc.my-namespace   10m       yourname@example.com    Approved,Issued
```
You can download the issued certificate and save it to a `server.crt` file by running the following:

```console
$ kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
    | base64 -d > server.crt
```

Now you can use `server.crt` and `server-key.pem` as the keypair to start your HTTPS server.
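If you want to inspect what was issued, the certificate can be examined with standard TLS tooling; this assumes `openssl` is available locally and is not part of the original walkthrough:

```console
$ openssl x509 -in server.crt -noout -text
```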
## Approving Certificate Signing Requests

A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) Certificate Signing Requests by using the `kubectl certificate approve` and `kubectl certificate deny` commands. However, if you intend to make heavy usage of this API, you might consider writing an automated certificates controller.
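For the CSR created above, manual approval would look like this (assuming your account has permission to approve CSRs):

```console
$ kubectl certificate approve my-svc.my-namespace
```

Running `kubectl certificate deny my-svc.my-namespace` would reject it instead.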
Whether approval is done by a machine or by a human using kubectl as above, the role of the approver is to verify that the CSR satisfies two requirements:

1. The subject of the CSR controls the private key used to sign the CSR. This addresses the threat of a third party masquerading as an authorized subject. In the above example, this step would be to verify that the pod controls the private key used to generate the CSR.
2. The subject of the CSR is authorized to act in the requested context. This addresses the threat of an undesired subject joining the cluster. In the above example, this step would be to verify that the pod is allowed to participate in the requested service.

If and only if these two requirements are met, the approver should approve the CSR; otherwise, the approver should deny the CSR.
## A Word of **Warning** on the Approval Permission

The ability to approve CSRs determines who trusts whom within the cluster, including whom the Kubernetes API trusts. This ability should not be granted broadly or lightly. The requirements of the challenge noted in the previous section and the repercussions of issuing a specific certificate should be fully understood before granting this permission. See [here](/docs/admin/authentication#x509-client-certs) for information on how certificates interact with authentication.
## A Note to Cluster Administrators

This tutorial assumes that a signer is set up to serve the certificates API. The Kubernetes controller manager provides a default implementation of a signer. To enable it, pass the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters to the controller manager with paths to your Certificate Authority's keypair, as in the sketch below.
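A minimal sketch of those flags; the CA file paths are placeholders and depend on how your cluster was provisioned:

```console
$ kube-controller-manager \
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
    --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
```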