Merge branch 'release-1.6' of https://github.com/kubernetes/kubernetes.github.io into release-1.6
commit 7667b6d217
@ -14,16 +14,6 @@ toc:
|
|||
- docs/concepts/overview/working-with-objects/annotations.md
|
||||
- docs/concepts/overview/kubernetes-api.md
|
||||
|
||||
- title: Kubernetes Objects
|
||||
section:
|
||||
- title: Pods
|
||||
section:
|
||||
- title: Controllers
|
||||
section:
|
||||
- docs/concepts/abstractions/controllers/statefulsets.md
|
||||
- docs/concepts/abstractions/controllers/petsets.md
|
||||
- docs/concepts/abstractions/controllers/garbage-collection.md
|
||||
|
||||
- title: Workloads
|
||||
section:
|
||||
- title: Pods
|
||||
|
@ -31,6 +21,11 @@ toc:
|
|||
- docs/concepts/workloads/pods/pod-overview.md
|
||||
- docs/concepts/workloads/pods/pod-lifecycle.md
|
||||
- docs/concepts/workloads/pods/init-containers.md
|
||||
- title: Controllers
|
||||
section:
|
||||
- docs/concepts/workloads/controllers/statefulset.md
|
||||
- docs/concepts/workloads/controllers/petset.md
|
||||
- docs/concepts/workloads/controllers/garbage-collection.md
|
||||
- title: Jobs
|
||||
section:
|
||||
- docs/concepts/jobs/run-to-completion-finite-workloads.md
|
||||
|
@ -42,16 +37,21 @@ toc:
|
|||
- docs/concepts/cluster-administration/network-plugins.md
|
||||
- docs/concepts/cluster-administration/logging.md
|
||||
- docs/concepts/cluster-administration/audit.md
|
||||
- docs/concepts/cluster-administration/resource-usage-monitoring.md
|
||||
- docs/concepts/cluster-administration/out-of-resource.md
|
||||
- docs/concepts/cluster-administration/multiple-clusters.md
|
||||
- docs/concepts/cluster-administration/federation.md
|
||||
- docs/concepts/cluster-administration/federation-service-discovery.md
|
||||
- docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods.md
|
||||
- docs/concepts/cluster-administration/static-pod.md
|
||||
- docs/concepts/cluster-administration/sysctl-cluster.md
|
||||
- docs/concepts/cluster-administration/access-cluster.md
|
||||
- docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig.md
|
||||
|
||||
- title: Services, Load Balancing, and Networking
|
||||
section:
|
||||
- docs/concepts/services-networking/dns-pod-service.md
|
||||
- docs/concepts/services-networking/connect-applications-service.md
|
||||
|
||||
- title: Configuration
|
||||
section:
|
||||
|
|
|
@ -25,6 +25,7 @@ toc:
|
|||
section:
|
||||
- docs/admin/accessing-the-api.md
|
||||
- docs/admin/authentication.md
|
||||
- docs/admin/bootstrap-tokens.md
|
||||
- title: Authorization Plugins
|
||||
section:
|
||||
- docs/admin/authorization/index.md
|
||||
|
|
|
@ -5,6 +5,7 @@ toc:
|
|||
|
||||
- title: Using the kubectl Command-Line
|
||||
section:
|
||||
- docs/tasks/kubectl/install.md
|
||||
- docs/tasks/kubectl/list-all-running-container-images.md
|
||||
- docs/tasks/kubectl/get-shell-running-container.md
|
||||
|
||||
|
@ -34,13 +35,15 @@ toc:
|
|||
- title: Running Jobs
|
||||
section:
|
||||
- docs/tasks/job/parallel-processing-expansion.md
|
||||
- docs/tasks/job/work-queue-1/index.md
|
||||
- docs/tasks/job/coarse-parallel-processing-work-queue/index.md
|
||||
- docs/tasks/job/fine-parallel-processing-work-queue/index.md
|
||||
|
||||
- title: Accessing Applications in a Cluster
|
||||
section:
|
||||
- docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
|
||||
- docs/tasks/access-application-cluster/load-balance-access-application-cluster.md
|
||||
- docs/tasks/access-application-cluster/create-external-load-balancer.md
|
||||
- docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md
|
||||
|
||||
- title: Monitoring, Logging, and Debugging
|
||||
section:
|
||||
|
@ -56,11 +59,13 @@ toc:
|
|||
|
||||
- title: Administering a Cluster
|
||||
section:
|
||||
- docs/tasks/administer-cluster/overview.md
|
||||
- docs/tasks/administer-cluster/assign-pods-nodes.md
|
||||
- docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
|
||||
- docs/tasks/administer-cluster/safely-drain-node.md
|
||||
- docs/tasks/administer-cluster/change-pv-reclaim-policy.md
|
||||
- docs/tasks/administer-cluster/limit-storage-consumption.md
|
||||
- docs/tasks/administer-cluster/share-configuration.md
|
||||
|
||||
- title: Administering Federation
|
||||
section:
|
||||
|
|
|
@ -42,7 +42,7 @@ The input to the authentication step is the entire HTTP request, however, it typ
|
|||
just examines the headers and/or client certificate.
|
||||
|
||||
Authentication modules include Client Certificates, Password, and Plain Tokens,
|
||||
-and JWT Tokens (used for service accounts).
+Bootstrap Tokens, and JWT Tokens (used for service accounts).
|
||||
|
||||
Multiple authentication modules can be specified, in which case each one is tried in sequence,
|
||||
until one of them succeeds.
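
As a hedged illustration only (the file paths are placeholders, and every other flag a real API server needs is omitted), combining several modules might look like this:

```shell
# Sketch: an API server configured with several authentication modules at
# once (client certificates, a static token file, and basic auth).
kube-apiserver \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --token-auth-file=/srv/kubernetes/known_tokens.csv \
  --basic-auth-file=/srv/kubernetes/basic_auth.csv
```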
|
||||
|
@ -63,7 +63,7 @@ users in its object store.
|
|||
|
||||
Once the request is authenticated as coming from a specific user,
|
||||
it moves to a generic authorization step. This is shown as step **2** in the
|
||||
diagram.
|
||||
|
||||
The input to the Authorization step are attributes of the REST request, including:
|
||||
- the username determined by the Authentication step.
|
||||
|
@ -75,12 +75,12 @@ The input to the Authorization step are attributes of the REST request, includin
|
|||
|
||||
There are multiple supported Authorization Modules. The cluster creator configures the API
|
||||
server with which Authorization Modules should be used. When multiple Authorization Modules
|
||||
are configured, each is checked in sequence, and if any Module authorizes the request,
|
||||
then the request can proceed. If all deny the request, then the request is denied (HTTP status
|
||||
code 403).
|
||||
|
||||
The [Authorization Modules](/docs/admin/authorization) page describes what authorization modules
|
||||
are available and how to configure them.
|
||||
|
||||
For version 1.2, clusters created by `kube-up.sh` are configured so that no authorization is
|
||||
required for any request.
|
||||
|
@ -108,7 +108,7 @@ They act on objects being created, deleted, updated or connected (proxy), but no
|
|||
|
||||
Multiple admission controllers can be configured. Each is called in order.
|
||||
|
||||
This is shown as step **3** in the diagram.
|
||||
|
||||
Unlike Authentication and Authorization Modules, if any admission controller module
|
||||
rejects, then the request is immediately rejected.
|
||||
|
@ -122,7 +122,7 @@ Once a request passes all admission controllers, it is validated using the valid
|
|||
for the corresponding API object, and then written to the object store (shown as step **4**).
|
||||
|
||||
|
||||
## API Server Ports and IPs
|
||||
|
||||
The previous discussion applies to requests sent to the secure port of the API server
|
||||
(the typical case). The API server can actually serve on 2 ports:
|
||||
|
@ -141,8 +141,8 @@ By default the Kubernetes API server serves HTTP on 2 ports:
|
|||
- protected by need to have host access
|
||||
|
||||
2. `Secure Port`:
|
||||
|
||||
- use whenever possible
|
||||
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
|
||||
- default is port 6443, change with `--secure-port` flag.
|
||||
- default IP is first non-localhost network interface, change with `--bind-address` flag.
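
For illustration, a minimal sketch of those flags together (the certificate paths are placeholders, and the rest of the API server configuration is omitted):

```shell
# Sketch: the secure-port flags listed above, with placeholder paths.
kube-apiserver \
  --tls-cert-file=/srv/kubernetes/server.cert \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --secure-port=6443 \
  --bind-address=0.0.0.0
```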
|
||||
|
|
|
@ -101,7 +101,7 @@ ImagePolicyWebhook uses the admission config file `--admission-controller-config
|
|||
}
|
||||
```
|
||||
|
||||
-The config file must reference a [kubeconfig](/docs/user-guide/kubeconfig-file/) formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.
+The config file must reference a [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.
|
||||
|
||||
The kubeconfig file's cluster field must point to the remote service, and the user field must contain the returned authorizer.
|
||||
|
||||
|
@ -120,7 +120,7 @@ users:
|
|||
client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
|
||||
client-key: /path/to/key.pem # key matching the cert
|
||||
```
|
||||
-For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/kubeconfig-file/) documentation.
+For additional HTTP configuration, refer to the [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) documentation.
|
||||
|
||||
#### Request Payloads
|
||||
|
||||
|
|
|
@ -107,6 +107,41 @@ header as shown below.
|
|||
Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
|
||||
```
|
||||
|
||||
### Bootstrap Tokens
|
||||
|
||||
This feature is currently in **alpha**.
|
||||
|
||||
To allow for streamlined bootstrapping for new clusters, Kubernetes includes a
|
||||
dynamically-managed Bearer token type called a *Bootstrap Token*. These tokens
|
||||
are stored as Secrets in the `kube-system` namespace, where they can be
|
||||
dynamically managed and created. Controller Manager contains a TokenCleaner
|
||||
controller that deletes bootstrap tokens as they expire.
|
||||
|
||||
The tokens are of the form `[a-z0-9]{6}.[a-z0-9]{16}`. The first component is a
|
||||
Token ID and the second component is the Token Secret. You specify the token
|
||||
in an HTTP header as follows:
|
||||
|
||||
```http
|
||||
Authorization: Bearer 781292.db7bc3a58fc5f07e
|
||||
```
|
||||
|
||||
You must enable the Bootstrap Token Authenticator with the
|
||||
`--experimental-bootstrap-token-auth` flag on the API Server. You must enable
|
||||
the TokenCleaner controller via the `--controllers` flag on the Controller
|
||||
Manager. This is done with something like `--controllers=*,tokencleaner`.
|
||||
`kubeadm` will do this for you if you are using it to bootstrap a cluster.
|
||||
|
||||
The authenticator authenticates as `system:bootstrap:<Token ID>`. It is
|
||||
included in the `system:bootstrappers` group. The naming and groups are
|
||||
intentionally limited to discourage users from using these tokens past
|
||||
bootstrapping. The user names and group can be used (and are used by `kubeadm`)
|
||||
to craft the appropriate authorization policies to support bootstrapping a
|
||||
cluster.
|
||||
|
||||
Please see [Bootstrap Tokens](/docs/admin/bootstrap-tokens/) for in depth
|
||||
documentation on the Bootstrap Token authenticator and controllers along with
|
||||
how to manage these tokens with `kubeadm`.
|
||||
|
||||
### Static Password File
|
||||
|
||||
Basic authentication is enabled by passing the `--basic-auth-file=SOMEFILE`
|
||||
|
@ -346,7 +381,7 @@ Webhook authentication is a hook for verifying bearer tokens.
|
|||
* `--authentication-token-webhook-config-file` a kubeconfig file describing how to access the remote webhook service.
|
||||
* `--authentication-token-webhook-cache-ttl` how long to cache authentication decisions. Defaults to two minutes.
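
A hedged sketch of wiring these two flags up (the file path is illustrative, and all other required flags are omitted):

```shell
# Sketch: enable webhook token authentication on the API server.
kube-apiserver \
  --authentication-token-webhook-config-file=/etc/kubernetes/webhook-token-auth.kubeconfig \
  --authentication-token-webhook-cache-ttl=2m
```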
|
||||
|
||||
-The configuration file uses the [kubeconfig](/docs/user-guide/kubeconfig-file/)
+The configuration file uses the [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
file format. Within the file "users" refers to the API server webhook and
|
||||
"clusters" refers to the remote service. An example would be:
|
||||
|
||||
|
@ -512,7 +547,7 @@ using an existing deployment script or manually through `easyrsa` or `openssl.`
|
|||
#### Using an Existing Deployment Script
|
||||
|
||||
**Using an existing deployment script** is implemented at
|
||||
`cluster/saltbase/salt/generate-cert/make-ca-cert.sh`.
|
||||
|
||||
Execute this script with two parameters. The first is the IP address
|
||||
of the API server. The second is a list of subject alternate names in the form `IP:<ip-address> or DNS:<dns-name>`.
|
||||
|
|
|
@ -230,7 +230,7 @@ service when determining user privileges.
|
|||
Mode `Webhook` requires a file for HTTP configuration, specified by the
|
||||
`--authorization-webhook-config-file=SOME_FILENAME` flag.
|
||||
|
||||
-The configuration file uses the [kubeconfig](/docs/user-guide/kubeconfig-file/)
+The configuration file uses the [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
file format. Within the file "users" refers to the API Server webhook and
|
||||
"clusters" refers to the remote service.
|
||||
|
||||
|
|
|
@ -0,0 +1,171 @@
|
|||
---
|
||||
assignees:
|
||||
- jbeda
|
||||
title: Authenticating with Bootstrap Tokens
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Overview
|
||||
|
||||
Bootstrap tokens are simple bearer tokens that are meant to be used when
creating new clusters or joining new nodes to an existing cluster. They were built
to support [`kubeadm`](/docs/admin/kubeadm/), but can be used in other contexts
for users who wish to start clusters without `kubeadm`. They are also built to
work, via RBAC policy, with the [Kubelet TLS
Bootstrap](/docs/admin/kubelet-tls-bootstrap/) system.
|
||||
|
||||
Bootstrap Tokens are defined with a specific type
(`bootstrap.kubernetes.io/token`) of Secret that lives in the `kube-system`
|
||||
namespace. These Secrets are then read by the Bootstrap Authenticator in the
|
||||
API Server. Expired tokens are removed with the TokenCleaner controller in the
|
||||
Controller Manager. The tokens are also used to create a signature for a
|
||||
specific ConfigMap used in a "discovery" process through a BootstrapSigner
|
||||
controller.
|
||||
|
||||
Currently, Bootstrap Tokens are **alpha** but there are no large breaking
|
||||
changes expected.
|
||||
|
||||
## Token Format
|
||||
|
||||
Bootstrap Tokens take the form of `abcdef.0123456789abcdef`. More formally,
|
||||
they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.
|
||||
|
||||
The first part of the token is the "Token ID" and is considered public
|
||||
information. It is used when referring to a token without leaking the secret
|
||||
part used for authentication. The second part is the "Token Secret" and should
|
||||
only be shared with trusted parties.
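
For example, a hedged shell sketch of generating a token in this format (any generator that matches the regular expression will do):

```shell
# Sketch: generate a random token matching [a-z0-9]{6}.[a-z0-9]{16}.
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```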
|
||||
|
||||
## Enabling Bootstrap Tokens
|
||||
|
||||
All features for Bootstrap Tokens are disabled by default in Kubernetes v1.6.
|
||||
|
||||
You can enable the Bootstrap Token authenticator with the
|
||||
`--experimental-bootstrap-token-auth` flag on the API server. You can enable
|
||||
the Bootstrap controllers by specifying them with the `--controllers` flag on the
|
||||
controller manager with something like
|
||||
`--controllers=*,tokencleaner,bootstrapsigner`. This is done automatically when
|
||||
using `kubeadm`.
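
A minimal sketch of those two flags in isolation (every other flag a real control plane needs is omitted):

```shell
# Sketch: the flags discussed above, shown without the rest of the
# control-plane configuration.
kube-apiserver --experimental-bootstrap-token-auth=true
kube-controller-manager --controllers='*,tokencleaner,bootstrapsigner'
```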
|
||||
|
||||
Tokens are used in an HTTPS call as follows:
|
||||
|
||||
```http
|
||||
Authorization: Bearer 07401b.f395accd246ae52d
|
||||
```
|
||||
|
||||
## Bootstrap Token Secret Format
|
||||
|
||||
Each valid token is backed by a secret in the `kube-system` namespace. You can
|
||||
find the full design doc
|
||||
[here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md).
|
||||
|
||||
Here is what the secret looks like. Note that `base64(string)` indicates the
|
||||
value should be base64 encoded. The undecoded version is provided here for
|
||||
readability.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: bootstrap-token-07401b
|
||||
namespace: kube-system
|
||||
type: bootstrap.kubernetes.io/token
|
||||
data:
|
||||
description: base64(The default bootstrap token generated by 'kubeadm init'.)
|
||||
token-id: base64(07401b)
|
||||
token-secret: base64(f395accd246ae52d)
|
||||
expiration: base64(2017-03-10T03:22:11Z)
|
||||
usage-bootstrap-authentication: base64(true)
|
||||
usage-bootstrap-signing: base64(true)
|
||||
```
|
||||
|
||||
The type of the secret must be `bootstrap.kubernetes.io/token` and the name must
|
||||
be `bootstrap-token-<token id>`. It must also exist in the `kube-system`
|
||||
namespace. `description` is a human-readable description that should not be
|
||||
used for machine readable information. The Token ID and Secret are included in
|
||||
the data dictionary.
|
||||
|
||||
The `usage-bootstrap-*` members indicate what this secret is intended to be used
|
||||
for. A value must be set to `true` to be enabled.
|
||||
|
||||
`usage-bootstrap-authentication` indicates that the token can be used to
|
||||
authenticate to the API server. The authenticator authenticates as
|
||||
`system:bootstrap:<Token ID>`. It is included in the `system:bootstrappers`
|
||||
group. The naming and groups are intentionally limited to discourage users from
|
||||
using these tokens past bootstrapping.
|
||||
|
||||
`usage-bootstrap-signing` indicates that the token should be used to sign the
|
||||
`cluster-info` ConfigMap as described below.
|
||||
|
||||
The `expiration` data member lists a time after which the token is no longer
|
||||
valid. This is encoded as an absolute UTC time using RFC3339. The TokenCleaner
|
||||
controller will delete expired tokens.
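
For instance, a hedged example of inspecting the secret shown above with `kubectl` (the token ID is the one from the example):

```shell
# Sketch: look at the bootstrap token secret and decode one of its fields.
kubectl -n kube-system get secret bootstrap-token-07401b -o yaml
kubectl -n kube-system get secret bootstrap-token-07401b -o jsonpath='{.data.expiration}' | base64 --decode
```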
|
||||
|
||||
## Token Management with `kubeadm`
|
||||
|
||||
You can use the `kubeadm` tool to manage tokens on a running cluster. It will
|
||||
automatically grab the default admin credentials on a master from a `kubeadm`
|
||||
created cluster (`/etc/kubernetes/admin.conf`). You can specify an alternate
|
||||
kubeconfig file for credentials with the `--kubeconfig` flag to the following
commands (see the example session after this list).
|
||||
|
||||
* `kubeadm token list` Lists the tokens along with when they expire and what the
|
||||
approved usages are.
|
||||
* `kubeadm token create` Creates a new token.
|
||||
* `--description` Set the description on the new token.
|
||||
* `--ttl duration` Set expiration time of the token as a delta from "now".
|
||||
Default is 0 for no expiration.
|
||||
* `--usages` Set the ways that the token can be used. The default is
|
||||
`signing,authentication`. These are the usages as described above.
|
||||
* `kubeadm token delete <token id>|<token id>.<token secret>` Delete a token.
|
||||
The token can either be identified with just an ID or with the entire token
|
||||
value. Only the ID is used; the token is still deleted if the secret does not
|
||||
match.
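
Putting these together, a hedged example session (the token ID and values are illustrative):

```shell
# Sketch: create, list, and delete a token using the flags described above.
kubeadm token create --ttl 24h --description "token for joining worker nodes"
kubeadm token list
kubeadm token delete 07401b
```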
|
||||
|
||||
## ConfigMap Signing
|
||||
|
||||
In addition to authentication, the tokens can be used to sign a ConfigMap. This
|
||||
is used early in a cluster bootstrap process before the client trusts the API
|
||||
server. The signed ConfigMap can be authenticated by the shared token.
|
||||
|
||||
The ConfigMap that is signed is `cluster-info` in the `kube-public` namespace.
|
||||
The typical flow is that a client reads this ConfigMap while unauthenticated and
|
||||
ignoring TLS errors. It then validates the payload of the ConfigMap by looking
|
||||
at a signature embedded in the ConfigMap.
|
||||
|
||||
The ConfigMap may look like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: cluster-info
|
||||
namespace: kube-public
|
||||
data:
|
||||
jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U
|
||||
kubeconfig: |
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: <really long certificate data>
|
||||
server: https://10.138.0.2:6443
|
||||
name: ""
|
||||
contexts: []
|
||||
current-context: ""
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users: []
|
||||
```
|
||||
|
||||
The `kubeconfig` member of the ConfigMap is a config file with just the cluster
|
||||
information filled out. The key thing being communicated here is the
|
||||
`certificate-authority-data`. This may be expanded in the future.
|
||||
|
||||
The signature is a JWS signature using the "detached" mode. To validate the
|
||||
signature, the user should encode the `kubeconfig` payload according to JWS
|
||||
rules (base64 encoded while discarding any trailing `=`). That encoded payload
|
||||
is then used to form a whole JWS by inserting it between the 2 dots. You can
|
||||
verify the JWS using the `HS256` scheme (HMAC-SHA256) with the full token (e.g.
|
||||
`07401b.f395accd246ae52d`) as the shared secret. Users _must_ verify that HS256
|
||||
is used.
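
A hedged sketch of that check with `openssl`, assuming the `kubeconfig` payload has been saved to a local file named `kubeconfig.yaml` and using the header and token from the example above:

```shell
# Sketch: recompute the detached JWS signature and compare it with the
# jws-kubeconfig-<token id> value in the ConfigMap.
TOKEN='07401b.f395accd246ae52d'                    # the full token is the HMAC key
HEADER='eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9'  # protected header (the part before the two dots)
PAYLOAD=$(base64 -w0 < kubeconfig.yaml | tr '+/' '-_' | tr -d '=')
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" \
  | openssl dgst -sha256 -hmac "$TOKEN" -binary \
  | base64 -w0 | tr '+/' '-_' | tr -d '=')
echo "${HEADER}..${SIG}"                           # detached form: header..signature
```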
|
|
@ -1,92 +1,7 @@
|
|||
---
|
||||
assignees:
|
||||
- davidopp
|
||||
- lavalamp
|
||||
title: Admin Guide
|
||||
---
|
||||
|
||||
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
|
||||
It assumes some familiarity with concepts in the [User Guide](/docs/user-guide/).
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Planning a cluster
|
||||
|
||||
There are many different examples of how to setup a Kubernetes cluster. Many of them are listed in this
|
||||
[matrix](/docs/getting-started-guides/). We call each of the combinations in this matrix a *distro*.
|
||||
|
||||
Before choosing a particular guide, here are some things to consider:
|
||||
|
||||
- Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
|
||||
models are supported, but some distros are better for one case or the other.
|
||||
- Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting
|
||||
one up yourself?
|
||||
- Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We
|
||||
recommend setting up multiple clusters rather than spanning distant locations.
|
||||
- Will you be running Kubernetes on "bare metal" or virtual machines? Kubernetes supports both, via different distros.
|
||||
- Do you just want to run a cluster, or do you expect to do active development of Kubernetes project code? If the
|
||||
latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but
|
||||
offer a greater variety of choices.
|
||||
- Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of
|
||||
Kubernetes.
|
||||
- If you are configuring Kubernetes on-premises, you will need to consider what [networking
|
||||
model](/docs/admin/networking) fits best.
|
||||
- If you are designing for very high-availability, you may want [clusters in multiple zones](/docs/admin/multi-cluster).
|
||||
- You may want to familiarize yourself with the various
|
||||
[components](/docs/admin/cluster-components) needed to run a cluster.
|
||||
|
||||
## Setting up a cluster
|
||||
|
||||
Pick one of the Getting Started Guides from the [matrix](/docs/getting-started-guides/) and follow it.
|
||||
If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.
|
||||
|
||||
One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](/docs/admin/ovs-networking)), which
|
||||
uses OpenVSwitch to set up networking between pods across
|
||||
Kubernetes nodes.
|
||||
|
||||
If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
|
||||
project](/docs/admin/salt).
|
||||
|
||||
## Managing a cluster, including upgrades
|
||||
|
||||
[Managing a cluster](/docs/admin/cluster-management).
|
||||
|
||||
## Managing nodes
|
||||
|
||||
[Managing nodes](/docs/admin/node).
|
||||
|
||||
## Optional Cluster Services
|
||||
|
||||
* **DNS Integration with SkyDNS** ([dns.md](/docs/admin/dns)):
|
||||
Resolving a DNS name directly to a Kubernetes service.
|
||||
|
||||
* [**Cluster-level logging**](/docs/user-guide/logging/overview):
|
||||
Saving container logs to a central log store with search/browsing interface.
|
||||
|
||||
## Multi-tenant support
|
||||
|
||||
* **Resource Quota** ([resourcequota/](/docs/admin/resourcequota/))
|
||||
|
||||
## Security
|
||||
|
||||
* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](/docs/user-guide/container-environment)):
|
||||
Describes the environment for Kubelet managed containers on a Kubernetes
|
||||
node.
|
||||
|
||||
* **Securing access to the API Server** [accessing the api](/docs/admin/accessing-the-api)
|
||||
|
||||
* **Authentication** [authentication](/docs/admin/authentication)
|
||||
|
||||
* **Authorization** [authorization](/docs/admin/authorization)
|
||||
|
||||
* **Admission Controllers** [admission controllers](/docs/admin/admission-controllers)
|
||||
|
||||
* **Sysctls** [sysctls](/docs/admin/sysctls.md)
|
||||
|
||||
* **Audit** [audit](/docs/admin/audit)
|
||||
|
||||
* **Securing the kubelet**
|
||||
* [Master-Node communication](/docs/admin/master-node-communication/)
|
||||
* [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/)
|
||||
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
|
||||
[Cluster Administration Overview](/docs/tasks/administer-cluster/overview/)
|
||||
|
|
|
@ -6,4 +6,6 @@ assignees:
|
|||
title: Kubernetes API Overview
|
||||
---
|
||||
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
[The Kubernetes API](/docs/concepts/overview/kubernetes-api/)
|
|
@ -0,0 +1,325 @@
|
|||
---
|
||||
title: Accessing Clusters
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Accessing the cluster API
|
||||
|
||||
### Accessing for the first time with kubectl
|
||||
|
||||
When accessing the Kubernetes API for the first time, we suggest using the
|
||||
Kubernetes CLI, `kubectl`.
|
||||
|
||||
To access a cluster, you need to know the location of the cluster and have credentials
|
||||
to access it. Typically, this is automatically set up when you work through
a [Getting started guide](/docs/getting-started-guides/),
or someone else set up the cluster and provided you with credentials and a location.
|
||||
|
||||
Check the location and credentials that kubectl knows about with this command:
|
||||
|
||||
```shell
|
||||
$ kubectl config view
|
||||
```
|
||||
|
||||
Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
|
||||
kubectl, and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
|
||||
|
||||
### Directly accessing the REST API
|
||||
|
||||
Kubectl handles locating and authenticating to the apiserver.
|
||||
If you want to directly access the REST API with an http client like
|
||||
curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
|
||||
#### Using kubectl proxy
|
||||
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
|
||||
```shell
|
||||
$ kubectl proxy --port=8080 &
|
||||
```
|
||||
|
||||
See [kubectl proxy](/docs/user-guide/kubectl/kubectl_proxy) for more details.
|
||||
|
||||
Then you can explore the API with curl, wget, or a browser, like so:
|
||||
|
||||
```shell
|
||||
$ curl http://localhost:8080/api/
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (before v1.3.x)
|
||||
|
||||
It is possible to avoid using kubectl proxy by passing an authentication token
|
||||
directly to the apiserver, like this:
|
||||
|
||||
```shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (post v1.3.x)
|
||||
|
||||
In Kubernetes version 1.3 or later, `kubectl config view` no longer displays the token. Use `kubectl describe secret...` to get the token for the default service account, like this:
|
||||
|
||||
``` shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"kind": "APIVersions",
|
||||
"versions": [
|
||||
"v1"
|
||||
],
|
||||
"serverAddressByClientCIDRs": [
|
||||
{
|
||||
"clientCIDR": "0.0.0.0/0",
|
||||
"serverAddress": "10.0.1.149:443"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The above examples use the `--insecure` flag. This leaves it subject to MITM
|
||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||
and client certificates to access the server. (These are installed in the
|
||||
`~/.kube` directory). Since cluster certificates are typically self-signed, it
|
||||
may take special configuration to get your http client to use the root
certificate.
|
||||
|
||||
On some clusters, the apiserver does not require authentication; it may serve
|
||||
on localhost, or be protected by a firewall. There is not a standard
|
||||
for this. [Configuring Access to the API](/docs/admin/accessing-the-api)
|
||||
describes how a cluster admin can configure this. Such approaches may conflict
|
||||
with future high-availability support.
|
||||
|
||||
### Programmatic access to the API
|
||||
|
||||
The Kubernetes project-supported Go client library is at [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go).
|
||||
|
||||
To use it,
|
||||
|
||||
* To get the library, run the following command: `go get k8s.io/client-go/<version number>/kubernetes`. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) to check which versions are supported.
* Write an application atop the client-go clients. Note that client-go defines its own API objects, so if needed, import API definitions from client-go rather than from the main repository; e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.
|
||||
|
||||
The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster/main.go):
|
||||
|
||||
```golang
|
||||
import (
|
||||
"fmt"
|
||||
"k8s.io/client-go/1.4/kubernetes"
|
||||
"k8s.io/client-go/1.4/pkg/api/v1"
|
||||
"k8s.io/client-go/1.4/tools/clientcmd"
|
||||
)
|
||||
...
|
||||
// uses the current context in kubeconfig
|
||||
config, _ := clientcmd.BuildConfigFromFlags("", "path to kubeconfig")
|
||||
// creates the clientset
|
||||
clientset, _:= kubernetes.NewForConfig(config)
|
||||
// access the API to list pods
|
||||
pods, _:= clientset.Core().Pods("").List(v1.ListOptions{})
|
||||
fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
|
||||
...
|
||||
```
|
||||
|
||||
If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
|
||||
|
||||
There are [client libraries](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.
|
||||
|
||||
### Accessing the API from a Pod
|
||||
|
||||
When accessing the API from a pod, locating and authenticating
|
||||
to the api server are somewhat different.
|
||||
|
||||
The recommended way to locate the apiserver within the pod is with
|
||||
the `kubernetes` DNS name, which resolves to a Service IP which in turn
|
||||
will be routed to an apiserver.
|
||||
|
||||
The recommended way to authenticate to the apiserver is with a
|
||||
[service account](/docs/user-guide/service-accounts) credential. By default, a pod
|
||||
is associated with a service account, and a credential (token) for that
|
||||
service account is placed into the filesystem tree of each container in that pod,
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
|
||||
|
||||
If available, a certificate bundle is placed into the filesystem tree of each
|
||||
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
|
||||
used to verify the serving certificate of the apiserver.
|
||||
|
||||
Finally, the default namespace to be used for namespaced API operations is placed in a file
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
|
||||
|
||||
From within a pod the recommended ways to connect to API are:
|
||||
|
||||
- run a kubectl proxy as one of the containers in the pod, or as a background
|
||||
process within a container. This proxies the
|
||||
Kubernetes API to the localhost interface of the pod, so that other processes
|
||||
in any container of the pod can access it. See this [example of using kubectl proxy
|
||||
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
|
||||
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
|
||||
They handle locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/blob/master/examples/in-cluster/main.go)
|
||||
|
||||
In each case, the credentials of the pod are used to communicate securely with the apiserver.
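
As a hedged illustration of using those mounted credentials directly from inside a pod (the in-cluster DNS name `kubernetes.default.svc` and the listed API path are just examples):

```shell
# Run from inside a pod: the service account files described above provide the credentials.
$ APISERVER=https://kubernetes.default.svc
$ SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
$ TOKEN=$(cat $SA_DIR/token)
$ NAMESPACE=$(cat $SA_DIR/namespace)
$ curl --cacert $SA_DIR/ca.crt --header "Authorization: Bearer $TOKEN" $APISERVER/api/v1/namespaces/$NAMESPACE/pods
```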
|
||||
|
||||
|
||||
## Accessing services running on the cluster
|
||||
|
||||
The previous section was about connecting to the Kubernetes API server. This section is about
connecting to other services running on a Kubernetes cluster. In Kubernetes, the
|
||||
[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have
|
||||
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
|
||||
routable, so they will not be reachable from a machine outside the cluster,
|
||||
such as your desktop machine.
|
||||
|
||||
### Ways to connect
|
||||
|
||||
You have several options for connecting to nodes, pods and services from outside the cluster:
|
||||
|
||||
- Access services through public IPs.
|
||||
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
|
||||
the cluster. See the [services](/docs/user-guide/services) and
|
||||
[kubectl expose](/docs/user-guide/kubectl/kubectl_expose) documentation.
|
||||
- Depending on your cluster environment, this may just expose the service to your corporate network,
|
||||
or it may expose it to the internet. Think about whether the service being exposed is secure.
|
||||
Does it do its own authentication?
|
||||
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
|
||||
place a unique label on the pod and create a new service which selects this label.
|
||||
- In most cases, it should not be necessary for an application developer to directly access
|
||||
nodes via their nodeIPs.
|
||||
- Access services, nodes, or pods using the Proxy Verb.
|
||||
- Does apiserver authentication and authorization prior to accessing the remote service.
|
||||
Use this if the services are not secure enough to expose to the internet, or to gain
|
||||
access to ports on the node IP, or for debugging.
|
||||
- Proxies may cause problems for some web applications.
|
||||
- Only works for HTTP/HTTPS.
|
||||
- Described [here](#manually-constructing-apiserver-proxy-urls).
|
||||
- Access from a node or pod in the cluster.
|
||||
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/user-guide/kubectl/kubectl_exec).
|
||||
Connect to other nodes, pods, and services from that shell.
|
||||
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
|
||||
access cluster services. This is a non-standard method, and will work on some clusters but
|
||||
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
|
||||
|
||||
### Discovering builtin services
|
||||
|
||||
Typically, there are several services which are started on a cluster in the `kube-system` namespace. Get a list of these
|
||||
with the `kubectl cluster-info` command:
|
||||
|
||||
```shell
|
||||
$ kubectl cluster-info
|
||||
|
||||
Kubernetes master is running at https://104.197.5.247
|
||||
elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
|
||||
kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kibana-logging
|
||||
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
|
||||
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
|
||||
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
|
||||
```
|
||||
|
||||
This shows the proxy-verb URL for accessing each service.
|
||||
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
|
||||
at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
|
||||
`http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/`.
|
||||
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
|
||||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/proxy/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
|
||||
|
||||
##### Examples
|
||||
|
||||
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy`
|
||||
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`
|
||||
|
||||
```json
|
||||
{
|
||||
"cluster_name" : "kubernetes_logging",
|
||||
"status" : "yellow",
|
||||
"timed_out" : false,
|
||||
"number_of_nodes" : 1,
|
||||
"number_of_data_nodes" : 1,
|
||||
"active_primary_shards" : 5,
|
||||
"active_shards" : 5,
|
||||
"relocating_shards" : 0,
|
||||
"initializing_shards" : 0,
|
||||
"unassigned_shards" : 5
|
||||
}
|
||||
```
|
||||
|
||||
#### Using web browsers to access services running on the cluster
|
||||
|
||||
You may be able to put an apiserver proxy url into the address bar of a browser. However:
|
||||
|
||||
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. The apiserver can be configured to accept basic auth,
but your cluster may not be configured to do so.
|
||||
- Some web apps may not work, particularly those with client side javascript that construct urls in a
|
||||
way that is unaware of the proxy path prefix.
|
||||
|
||||
## Requesting redirects
|
||||
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
|
||||
## So Many Proxies
|
||||
|
||||
There are several different proxies you may encounter when using Kubernetes:
|
||||
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
1. The [apiserver proxy](#discovering-builtin-services):
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
1. The [kube proxy](/docs/user-guide/services/#ips-and-vips):
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is just used to reach services
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
1. Cloud Load Balancers on external services:
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
|
||||
will typically ensure that the latter types are set up correctly.
|
|
@ -0,0 +1,314 @@
|
|||
---
|
||||
assignees:
|
||||
- mikedanese
|
||||
- thockin
|
||||
title: Authenticating Across Clusters with kubeconfig
|
||||
---
|
||||
|
||||
Authentication in Kubernetes can differ for different individuals.
|
||||
|
||||
- A running kubelet might have one way of authenticating (i.e. certificates).
|
||||
- Users might have a different way of authenticating (i.e. tokens).
|
||||
- Administrators might have a list of certificates which they provide individual users.
|
||||
- There may be multiple clusters, and we may want to define them all in one place - giving users the ability to use their own certificates and reusing the same global configuration.
|
||||
|
||||
So in order to easily switch between multiple clusters, for multiple users, a kubeconfig file was defined.
|
||||
|
||||
This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
|
||||
|
||||
Multiple kubeconfig files are allowed, if specified explicitly. At runtime they are loaded and merged along with override options specified from the command line (see [rules](#loading-and-merging-rules) below).
|
||||
|
||||
## Related discussion
|
||||
|
||||
http://issue.k8s.io/1755
|
||||
|
||||
## Components of a kubeconfig file
|
||||
|
||||
### Example kubeconfig file
|
||||
|
||||
```yaml
|
||||
current-context: federal-context
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
api-version: v1
|
||||
server: http://cow.org:8080
|
||||
name: cow-cluster
|
||||
- cluster:
|
||||
certificate-authority: path/to/my/cafile
|
||||
server: https://horse.org:4443
|
||||
name: horse-cluster
|
||||
- cluster:
|
||||
insecure-skip-tls-verify: true
|
||||
server: https://pig.org:443
|
||||
name: pig-cluster
|
||||
contexts:
|
||||
- context:
|
||||
cluster: horse-cluster
|
||||
namespace: chisel-ns
|
||||
user: green-user
|
||||
name: federal-context
|
||||
- context:
|
||||
cluster: pig-cluster
|
||||
namespace: saw-ns
|
||||
user: black-user
|
||||
name: queen-anne-context
|
||||
kind: Config
|
||||
preferences:
|
||||
colors: true
|
||||
users:
|
||||
- name: blue-user
|
||||
user:
|
||||
token: blue-token
|
||||
- name: green-user
|
||||
user:
|
||||
client-certificate: path/to/my/client/cert
|
||||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
### Breakdown/explanation of components
|
||||
|
||||
#### cluster
|
||||
|
||||
```yaml
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority: path/to/my/cafile
|
||||
server: https://horse.org:4443
|
||||
name: horse-cluster
|
||||
- cluster:
|
||||
insecure-skip-tls-verify: true
|
||||
server: https://pig.org:443
|
||||
name: pig-cluster
|
||||
```
|
||||
|
||||
A `cluster` contains endpoint data for a kubernetes cluster. This includes the fully
|
||||
qualified url for the kubernetes apiserver, as well as the cluster's certificate
|
||||
authority or `insecure-skip-tls-verify: true`, if the cluster's serving
|
||||
certificate is not signed by a system trusted certificate authority.
|
||||
A `cluster` has a name (nickname) which acts as a dictionary key for the cluster
|
||||
within this kubeconfig file. You can add or modify `cluster` entries using
|
||||
[`kubectl config set-cluster`](/docs/user-guide/kubectl/kubectl_config_set-cluster/).
|
||||
|
||||
#### user
|
||||
|
||||
```yaml
|
||||
users:
|
||||
- name: blue-user
|
||||
user:
|
||||
token: blue-token
|
||||
- name: green-user
|
||||
user:
|
||||
client-certificate: path/to/my/client/cert
|
||||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
A `user` defines client credentials for authenticating to a kubernetes cluster. A
|
||||
`user` has a name (nickname) which acts as its key within the list of user entries
|
||||
after kubeconfig is loaded/merged. Available credentials are `client-certificate`,
|
||||
`client-key`, `token`, and `username/password`. `username/password` and `token`
|
||||
are mutually exclusive, but client certs and keys can be combined with them.
|
||||
You can add or modify `user` entries using
|
||||
[`kubectl config set-credentials`](/docs/user-guide/kubectl/kubectl_config_set-credentials).
|
||||
|
||||
#### context
|
||||
|
||||
```yaml
|
||||
contexts:
|
||||
- context:
|
||||
cluster: horse-cluster
|
||||
namespace: chisel-ns
|
||||
user: green-user
|
||||
name: federal-context
|
||||
```
|
||||
|
||||
A `context` defines a named [`cluster`](#cluster),[`user`](#user),[`namespace`](/docs/user-guide/namespaces) tuple
|
||||
which is used to send requests to the specified cluster using the provided authentication info and
|
||||
namespace. Each of the three is optional; it is valid to specify a context with only one of `cluster`,
|
||||
`user`,`namespace`, or to specify none. Unspecified values, or named values that don't have corresponding
|
||||
entries in the loaded kubeconfig (e.g. if the context specified a `pink-user` for the above kubeconfig file)
|
||||
will be replaced with the default. See [Loading and merging rules](#loading-and-merging) below for override/merge behavior.
|
||||
You can add or modify `context` entries with [`kubectl config set-context`](/docs/user-guide/kubectl/kubectl_config_set-context).
|
||||
|
||||
#### current-context
|
||||
|
||||
```yaml
|
||||
current-context: federal-context
|
||||
```
|
||||
|
||||
`current-context` is the nickname or 'key' for the cluster,user,namespace tuple that kubectl
|
||||
will use by default when loading config from this file. You can override any of the values in kubectl
|
||||
from the commandline, by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
|
||||
You can change the `current-context` with [`kubectl config use-context`](/docs/user-guide/kubectl/kubectl_config_use-context).
|
||||
|
||||
#### miscellaneous
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
preferences:
|
||||
colors: true
|
||||
```
|
||||
|
||||
`apiVersion` and `kind` identify the version and schema for the client parser and should not
|
||||
be edited manually.
|
||||
|
||||
`preferences` specify optional (and currently unused) kubectl preferences.
|
||||
|
||||
## Viewing kubeconfig files
|
||||
|
||||
`kubectl config view` will display the current kubeconfig settings. By default
|
||||
it will show you all loaded kubeconfig settings; you can filter the view to just
|
||||
the settings relevant to the `current-context` by passing `--minify`. See
|
||||
[`kubectl config view`](/docs/user-guide/kubectl/kubectl_config_view) for other options.
|
||||
|
||||
## Building your own kubeconfig file
|
||||
|
||||
Note that if you are deploying Kubernetes via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
|
||||
|
||||
In any case, you can easily use this file as a template to create your own kubeconfig files.
|
||||
|
||||
So, let's do a quick walk through the basics of the above file so you can easily modify it as needed...
|
||||
|
||||
The above file would likely correspond to an api-server which was launched using the `--token-auth-file=tokens.csv` option, where the tokens.csv file looked something like this:
|
||||
|
||||
```conf
|
||||
blue-user,blue-user,1
|
||||
mister-red,mister-red,2
|
||||
```
|
||||
|
||||
Also, since we have other users who validate using **other** mechanisms, the api-server would have probably been launched with other authentication options (there are many such options, make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
|
||||
|
||||
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
|
||||
- Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
|
||||
|
||||
In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All of this information would be handled for us by the kubeconfig file.
|
||||
|
||||
## Loading and merging rules
|
||||
|
||||
The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order:
|
||||
|
||||
1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules:
|
||||
|
||||
|
||||
If the `CommandLineLocation` (the value of the `kubeconfig` command line option) is set, use this file only. No merging. Only one instance of this flag is allowed.
|
||||
|
||||
|
||||
Else, if `EnvVarLocation` (the value of `$KUBECONFIG`) is available, use it as a list of files that should be merged.
|
||||
Merge files together based on the following rules.
|
||||
Empty filenames are ignored. Files with non-deserializable content produce errors.
|
||||
The first file to set a particular value or map key wins and the value or map key is never changed.
|
||||
This means that the first file to set `CurrentContext` will have its context preserved. It also means that if two files specify a "red-user", only values from the first file's red-user are used. Even non-conflicting entries from the second file's "red-user" are discarded.
|
||||
|
||||
|
||||
Otherwise, use HomeDirectoryLocation (`~/.kube/config`) with no merging.
|
||||
1. Determine the context to use based on the first hit in this chain
|
||||
1. command line argument - the value of the `context` command line option
|
||||
1. `current-context` from the merged kubeconfig file
|
||||
1. Empty is allowed at this stage
|
||||
1. Determine the cluster info and user to use. At this point, we may or may not have a context. They are built based on the first hit in this chain. (run it twice, once for user, once for cluster)
|
||||
1. command line argument - `user` for user name and `cluster` for cluster name
|
||||
1. If context is present, then use the context's value
|
||||
1. Empty is allowed
|
||||
1. Determine the actual cluster info to use. At this point, we may or may not have a cluster info. Build each piece of the cluster info based on the chain (first hit wins):
|
||||
1. command line arguments - `server`, `api-version`, `certificate-authority`, and `insecure-skip-tls-verify`
|
||||
1. If cluster info is present and a value for the attribute is present, use it.
|
||||
1. If you don't have a server location, error.
|
||||
1. Determine the actual user info to use. User is built using the same rules as cluster info, EXCEPT that you can only have one authentication technique per user.
|
||||
1. Load precedence is 1) command line flag, 2) user fields from kubeconfig
|
||||
1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`.
|
||||
1. If there are two conflicting techniques, fail.
|
||||
1. For any information still missing, use default values and potentially prompt for authentication information
|
||||
1. All file references inside of a kubeconfig file are resolved relative to the location of the kubeconfig file itself. When file references are presented on the command line
|
||||
they are resolved relative to the current working directory. When paths are saved in the ~/.kube/config, relative paths are stored relatively while absolute paths are stored absolutely.
|
||||
|
||||
Any path in a kubeconfig file is resolved relative to the location of the kubeconfig file itself.
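
To see the `$KUBECONFIG` merge behaviour described above in practice, a hedged example (the file names are illustrative):

```shell
# The first file to set a given value (for example current-context) wins.
$ export KUBECONFIG=$HOME/.kube/config-a:$HOME/.kube/config-b
$ kubectl config view            # the merged result
$ kubectl config view --minify   # only the entries used by the current context
```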
|
||||
|
||||
|
||||
## Manipulation of kubeconfig via `kubectl config <subcommand>`
|
||||
|
||||
In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
|
||||
See [kubectl/kubectl_config.md](/docs/user-guide/kubectl/kubectl_config) for help.
|
||||
|
||||
### Example
|
||||
|
||||
```shell
|
||||
$ kubectl config set-credentials myself --username=admin --password=secret
|
||||
$ kubectl config set-cluster local-server --server=http://localhost:8080
|
||||
$ kubectl config set-context default-context --cluster=local-server --user=myself
|
||||
$ kubectl config use-context default-context
|
||||
$ kubectl config set contexts.default-context.namespace the-right-prefix
|
||||
$ kubectl config view
|
||||
```
|
||||
|
||||
produces this output
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
server: http://localhost:8080
|
||||
name: local-server
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local-server
|
||||
namespace: the-right-prefix
|
||||
user: myself
|
||||
name: default-context
|
||||
current-context: default-context
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: myself
|
||||
user:
|
||||
password: secret
|
||||
username: admin
|
||||
```
|
||||
|
||||
and a kubeconfig file that looks like this
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
server: http://localhost:8080
|
||||
name: local-server
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local-server
|
||||
namespace: the-right-prefix
|
||||
user: myself
|
||||
name: default-context
|
||||
current-context: default-context
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: myself
|
||||
user:
|
||||
password: secret
|
||||
username: admin
|
||||
```
|
||||
|
||||
#### Commands for the example file
|
||||
|
||||
```shell
|
||||
$ kubectl config set preferences.colors true
|
||||
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
|
||||
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
|
||||
$ kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true
|
||||
$ kubectl config set-credentials blue-user --token=blue-token
|
||||
$ kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key
|
||||
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
|
||||
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
|
||||
$ kubectl config use-context federal-context
|
||||
```
|
||||
|
||||
### Final notes for tying it all together
|
||||
|
||||
So, tying this all together, a quick start to create your own kubeconfig file:
|
||||
|
||||
- Take a good look and understand how your api-server is being launched: You need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.
|
||||
|
||||
- Replace the snippet above with information for your cluster's api-server endpoint.
|
||||
|
||||
- Make sure your api-server is launched in such a way that credentials for at least one user (for example, green-user) are provided to it. You will of course have to consult the api-server documentation to determine the current state of the art for providing authentication details; one illustrative way of doing this is sketched below.
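For illustration only, a hedged sketch using a static token file (the flag, path, and token value below are assumptions for this sketch; check the authentication documentation for what your api-server actually supports):

```shell
# Hypothetical: register a bearer token for green-user with the api-server
$ echo 'green-user-token,green-user,green-user-uid' > /var/lib/kube-apiserver/known_tokens.csv
$ kube-apiserver --token-auth-file=/var/lib/kube-apiserver/known_tokens.csv ...
```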
|
|
@ -0,0 +1,382 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- quinton-hoole
|
||||
title: Cross-cluster Service Discovery using Federated Services
|
||||
---
|
||||
|
||||
This guide explains how to use Kubernetes Federated Services to deploy
|
||||
a common Service across multiple Kubernetes clusters. This makes it
|
||||
easy to achieve cross-cluster service discovery and availability zone
|
||||
fault tolerance for your Kubernetes applications.
|
||||
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes Cluster
|
||||
Federation installation. If not, then head over to the
|
||||
[federation admin guide](/docs/admin/federation/) to learn how to
|
||||
bring up a cluster federation (or have your cluster administrator do
|
||||
this for you). Other tutorials, for example
|
||||
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
|
||||
by Kelsey Hightower, are also available to help you.
|
||||
|
||||
You are also expected to have a basic
|
||||
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
|
||||
general, and [Services](/docs/user-guide/services/) in particular.
|
||||
|
||||
## Overview
|
||||
|
||||
Federated Services are created in much the same way as traditional
|
||||
[Kubernetes Services](/docs/user-guide/services/) by making an API
|
||||
call which specifies the desired properties of your service. In the
|
||||
case of Federated Services, this API call is directed to the
|
||||
Federation API endpoint, rather than a Kubernetes cluster API
|
||||
endpoint. The API for Federated Services is 100% compatible with the
|
||||
API for traditional Kubernetes Services.
|
||||
|
||||
Once created, the Federated Service automatically:
|
||||
|
||||
1. Creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
|
||||
2. Monitors the health of those service "shards" (and the clusters in which they reside), and
|
||||
3. Manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
|
||||
of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster,
|
||||
availability zone or regional outages.
|
||||
|
||||
Clients inside your federated Kubernetes clusters (i.e. Pods) will
|
||||
automatically find the local shard of the Federated Service in their
|
||||
cluster if it exists and is healthy, or the closest healthy shard in a
|
||||
different cluster if it does not.
|
||||
|
||||
## Hybrid cloud capabilities
|
||||
|
||||
Federations of Kubernetes Clusters can include clusters running in
|
||||
different cloud providers (e.g. Google Cloud, AWS), and on-premises
|
||||
(e.g. on OpenStack). Simply create all of the clusters that you
|
||||
require, in the appropriate cloud providers and/or locations, and
|
||||
register each cluster's API endpoint and credentials with your
|
||||
Federation API Server (See the
|
||||
[federation admin guide](/docs/admin/federation/) for details).
|
||||
|
||||
Thereafter, your applications and services can span different clusters
|
||||
and cloud providers as described in more detail below.
|
||||
|
||||
## Creating a federated service
|
||||
|
||||
This is done in the usual way, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster create -f services/nginx.yaml
|
||||
```
|
||||
|
||||
The '--context=federation-cluster' flag tells kubectl to submit the
|
||||
request to the Federation API endpoint, with the appropriate
|
||||
credentials. If you have not yet configured such a context, visit the
|
||||
[federation admin guide](/docs/admin/federation/) or one of the
|
||||
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
|
||||
to find out how to do so.
|
||||
|
||||
As described above, the Federated Service will automatically create
|
||||
and maintain matching Kubernetes services in all of the clusters
|
||||
underlying your federation.
|
||||
|
||||
You can verify this by checking in each of the underlying clusters, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=gce-asia-east1a get services nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
nginx 10.63.250.98 104.199.136.89 80/TCP 9m
|
||||
```
|
||||
|
||||
The above assumes that you have a context named 'gce-asia-east1a'
|
||||
configured in your client for your cluster in that zone. The name and
|
||||
namespace of the underlying services will automatically match those of
|
||||
the Federated Service that you created above (and if you happen to
|
||||
have had services of the same name and namespace already existing in
|
||||
any of those clusters, they will be automatically adopted by the
|
||||
Federation and updated to conform with the specification of your
|
||||
Federated Service - either way, the end result will be the same).
|
||||
|
||||
The status of your Federated Service will automatically reflect the
|
||||
real-time status of the underlying Kubernetes services, for example:
|
||||
|
||||
``` shell
|
||||
$ kubectl --context=federation-cluster describe services nginx
|
||||
|
||||
Name: nginx
|
||||
Namespace: default
|
||||
Labels: run=nginx
|
||||
Selector: run=nginx
|
||||
Type: LoadBalancer
|
||||
IP:
|
||||
LoadBalancer Ingress: 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89, ...
|
||||
Port: http 80/TCP
|
||||
Endpoints: <none>
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
Note the 'LoadBalancer Ingress' addresses of your Federated Service
|
||||
correspond with the 'LoadBalancer Ingress' addresses of all of the
|
||||
underlying Kubernetes services (once these have been allocated - this
|
||||
may take a few seconds). For inter-cluster and inter-cloud-provider
|
||||
networking between service shards to work correctly, your services
|
||||
need to have an externally visible IP address. [Service Type:
|
||||
LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
|
||||
is typically used for this, although other options
|
||||
(e.g. [External IPs](/docs/user-guide/services/#external-ips)) exist.
|
||||
|
||||
Note also that we have not yet provisioned any backend Pods to receive
|
||||
the network traffic directed to these addresses (i.e. 'Service
|
||||
Endpoints'), so the Federated Service does not yet consider these to
|
||||
be healthy service shards, and has accordingly not yet added their
|
||||
addresses to the DNS records for this Federated Service (more on this
|
||||
aspect later).
|
||||
|
||||
## Adding backend pods
|
||||
|
||||
To render the underlying service shards healthy, we need to add
|
||||
backend Pods behind them. This is currently done directly against the
|
||||
API endpoints of the underlying clusters (although in future the
|
||||
Federation server will be able to do all this for you with a single
|
||||
command, to save you the trouble). For example, to create backend Pods
|
||||
in 13 underlying clusters:
|
||||
|
||||
``` shell
|
||||
for CLUSTER in asia-east1-c asia-east1-a asia-east1-b \
|
||||
europe-west1-d europe-west1-c europe-west1-b \
|
||||
us-central1-f us-central1-a us-central1-b us-central1-c \
|
||||
us-east1-d us-east1-c us-east1-b
|
||||
do
|
||||
kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
|
||||
done
|
||||
```
|
||||
|
||||
Note that `kubectl run` automatically adds the `run=nginx` labels required to associate the backend pods with their services.
|
||||
|
||||
## Verifying public DNS records
|
||||
|
||||
Once the above Pods have successfully started and have begun listening
|
||||
for connections, Kubernetes will report them as healthy endpoints of
|
||||
the service in that cluster (via automatic health checks). The Cluster
|
||||
Federation will in turn consider each of these
|
||||
service 'shards' to be healthy, and place them into service by
|
||||
automatically configuring corresponding public DNS records. You can
|
||||
use your preferred interface to your configured DNS provider to verify
|
||||
this. For example, if your Federation is configured to use Google
|
||||
Cloud DNS, and a managed DNS domain 'example.com':
|
||||
|
||||
``` shell
|
||||
$ gcloud dns managed-zones describe example-dot-com
|
||||
creationTime: '2016-06-26T18:18:39.229Z'
|
||||
description: Example domain for Kubernetes Cluster Federation
|
||||
dnsName: example.com.
|
||||
id: '3229332181334243121'
|
||||
kind: dns#managedZone
|
||||
name: example-dot-com
|
||||
nameServers:
|
||||
- ns-cloud-a1.googledomains.com.
|
||||
- ns-cloud-a2.googledomains.com.
|
||||
- ns-cloud-a3.googledomains.com.
|
||||
- ns-cloud-a4.googledomains.com.
|
||||
```
|
||||
|
||||
``` shell
|
||||
$ gcloud dns record-sets list --zone example-dot-com
|
||||
NAME TYPE TTL DATA
|
||||
example.com. NS 21600 ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
|
||||
example.com. SOA 21600 ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
|
||||
nginx.mynamespace.myfederation.svc.example.com. A 180 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89,...
|
||||
nginx.mynamespace.myfederation.svc.us-central1-a.example.com. A 180 104.197.247.191
|
||||
nginx.mynamespace.myfederation.svc.us-central1-b.example.com. A 180 104.197.244.180
|
||||
nginx.mynamespace.myfederation.svc.us-central1-c.example.com. A 180 104.197.245.170
|
||||
nginx.mynamespace.myfederation.svc.us-central1-f.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.us-central1.example.com.
|
||||
nginx.mynamespace.myfederation.svc.us-central1.example.com. A 180 104.197.247.191, 104.197.244.180, 104.197.245.170
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-a.example.com. A 180 130.211.57.243
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-b.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.asia-east1.example.com.
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-c.example.com. A 180 130.211.56.221
|
||||
nginx.mynamespace.myfederation.svc.asia-east1.example.com. A 180 130.211.57.243, 130.211.56.221
|
||||
nginx.mynamespace.myfederation.svc.europe-west1.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.example.com.
|
||||
nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.europe-west1.example.com.
|
||||
... etc.
|
||||
```
|
||||
|
||||
Note: If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example:
|
||||
|
||||
``` shell
|
||||
$ aws route53 list-hosted-zones
|
||||
```
|
||||
and
|
||||
``` shell
|
||||
$ aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
|
||||
```
|
||||
|
||||
Whatever DNS provider you use, any DNS query tool (for example 'dig'
|
||||
or 'nslookup') will of course also allow you to see the records
|
||||
created by the Federation for you. Note that you should either point
|
||||
these tools directly at your DNS provider (e.g. `dig
|
||||
@ns-cloud-e1.googledomains.com...`) or expect delays in the order of
|
||||
your configured TTL (180 seconds, by default) before seeing updates,
|
||||
due to caching by intermediate DNS servers.
|
||||
|
||||
### Some notes about the above example
|
||||
|
||||
1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example, in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
|
||||
2. Similarly, there are regional 'A' records which include all healthy shards in that region. For example, 'us-central1'. These regional records are useful for clients which do not have a particular zone preference, and as a building block for the automated locality and failover mechanism described below.
|
||||
3. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (i.e. Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
|
||||
4. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses ('nginx.mynamespace.myfederation.svc.example.com.').
|
||||
|
||||
The above set of DNS records is automatically kept in sync with the
|
||||
current state of health of all service shards globally by the
|
||||
Federated Service system. DNS resolver libraries (which are invoked by
|
||||
all clients) automatically traverse the hierarchy of 'CNAME' and 'A'
|
||||
records to return the correct set of healthy IP addresses. Clients can
|
||||
then select any one of the returned addresses to initiate a network
|
||||
connection (and fail over automatically to one of the other equivalent
|
||||
addresses if required).
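To make the traversal concrete, here is what a lookup of the zonal name for a zone with no healthy shards might look like, using the example records above (illustrative output only; your names, addresses and nameservers will differ):

```shell
$ dig +noall +answer @ns-cloud-e1.googledomains.com nginx.mynamespace.myfederation.svc.us-central1-f.example.com
nginx.mynamespace.myfederation.svc.us-central1-f.example.com. 180 IN CNAME nginx.mynamespace.myfederation.svc.us-central1.example.com.
nginx.mynamespace.myfederation.svc.us-central1.example.com.   180 IN A     104.197.247.191
nginx.mynamespace.myfederation.svc.us-central1.example.com.   180 IN A     104.197.244.180
nginx.mynamespace.myfederation.svc.us-central1.example.com.   180 IN A     104.197.245.170
```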
|
||||
|
||||
## Discovering a federated service
|
||||
|
||||
### From pods inside your federated clusters
|
||||
|
||||
By default, Kubernetes clusters come pre-configured with a
|
||||
cluster-local DNS server ('KubeDNS'), as well as an intelligently
|
||||
constructed DNS search path which together ensure that DNS queries
|
||||
like "myservice", "myservice.mynamespace",
|
||||
"bobsservice.othernamespace" etc issued by your software running
|
||||
inside Pods are automatically expanded and resolved correctly to the
|
||||
appropriate service IP of services running in the local cluster.
|
||||
|
||||
With the introduction of Federated Services and Cross-Cluster Service
|
||||
Discovery, this concept is extended to cover Kubernetes services
|
||||
running in any other cluster across your Cluster Federation, globally.
|
||||
To take advantage of this extended range, you use a slightly different
|
||||
DNS name (of the form "<servicename>.<namespace>.<federationname>",
|
||||
e.g. myservice.mynamespace.myfederation) to resolve Federated
|
||||
Services. Using a different DNS name also avoids having your existing applications accidentally traverse cross-zone or cross-region networks, and thereby incur unwanted network charges or latency, unless you explicitly opt in to this behavior.
|
||||
|
||||
So, using our NGINX example service above, and the Federated Service
|
||||
DNS name form just described, let's consider an example: A Pod in a
|
||||
cluster in the `us-central1-f` availability zone needs to contact our
|
||||
NGINX service. Rather than use the service's traditional cluster-local
|
||||
DNS name (```"nginx.mynamespace"```, which is automatically expanded
|
||||
to ```"nginx.mynamespace.svc.cluster.local"```) it can now use the
|
||||
service's Federated DNS name, which is
|
||||
```"nginx.mynamespace.myfederation"```. This will be automatically
|
||||
expanded and resolved to the closest healthy shard of my NGINX
|
||||
service, wherever in the world that may be. If a healthy shard exists
|
||||
in the local cluster, that service's cluster-local (typically
|
||||
10.x.y.z) IP address will be returned (by the cluster-local KubeDNS).
|
||||
This is almost exactly equivalent to non-federated service resolution
|
||||
(almost because KubeDNS actually returns both a CNAME and an A record
|
||||
for local federated services, but applications will be oblivious
|
||||
to this minor technical difference).
|
||||
|
||||
But if the service does not exist in the local cluster (or it exists
|
||||
but has no healthy backend pods), the DNS query is automatically
|
||||
expanded to
|
||||
```"nginx.mynamespace.myfederation.svc.us-central1-f.example.com"```
|
||||
(i.e. logically "find the external IP of one of the shards closest to
|
||||
my availability zone"). This expansion is performed automatically by
|
||||
KubeDNS, which returns the associated CNAME record. This results in
|
||||
automatic traversal of the hierarchy of DNS records in the above
|
||||
example, and ends up at one of the external IP's of the Federated
|
||||
Service in the local us-central1 region (i.e. 104.197.247.191,
|
||||
104.197.244.180 or 104.197.245.170).
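As a concrete (purely illustrative) sketch, a lookup from a shell inside such a Pod, using the example records above, might look like this:

```shell
# From inside a Pod in a us-central1-f cluster with no local healthy nginx shard
$ nslookup nginx.mynamespace.myfederation
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      nginx.mynamespace.myfederation
Address 1: 104.197.247.191
Address 2: 104.197.244.180
Address 3: 104.197.245.170
```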
|
||||
|
||||
It is of course possible to explicitly target service shards in
|
||||
availability zones and regions other than the ones local to a Pod by
|
||||
specifying the appropriate DNS names explicitly, and not relying on
|
||||
automatic DNS expansion. For example,
|
||||
"nginx.mynamespace.myfederation.svc.europe-west1.example.com" will
|
||||
resolve to all of the currently healthy service shards in Europe, even
|
||||
if the Pod issuing the lookup is located in the U.S., and irrespective
|
||||
of whether or not there are healthy shards of the service in the U.S.
|
||||
This is useful for remote monitoring and other similar applications.
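Sketching this against the example records above (where us-central1 is the region with healthy shards), a client anywhere in the world could explicitly query that regional record, irrespective of where it runs (illustrative output):

```shell
$ dig +short nginx.mynamespace.myfederation.svc.us-central1.example.com
104.197.247.191
104.197.244.180
104.197.245.170
```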
|
||||
|
||||
### From other clients outside your federated clusters
|
||||
|
||||
Much of the above discussion applies equally to external clients,
|
||||
except that the automatic DNS expansion described is no longer
|
||||
possible. So external clients need to specify one of the fully
|
||||
qualified DNS names of the Federated Service, be that a zonal,
|
||||
regional or global name. For convenience reasons, it is often a good
|
||||
idea to manually configure additional static CNAME records in your own DNS zone, pointing at your service. For example:
|
||||
|
||||
``` shell
|
||||
eu.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.europe-west1.example.com.
|
||||
us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.example.com.
|
||||
nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com.
|
||||
```
|
||||
That way your clients can always use the short form on the left, and
|
||||
always be automatically routed to the closest healthy shard on their
|
||||
home continent. All of the required failover is handled for you
|
||||
automatically by Kubernetes Cluster Federation. Future releases will
|
||||
improve upon this even further.
|
||||
|
||||
## Handling failures of backend pods and whole clusters
|
||||
|
||||
Standard Kubernetes Service cluster IPs already ensure that
|
||||
non-responsive individual Pod endpoints are automatically taken out of
|
||||
service with low latency (a few seconds). In addition, as alluded to
|
||||
above, the Kubernetes Cluster Federation system automatically monitors
|
||||
the health of clusters and the endpoints behind all of the shards of
|
||||
your Federated Service, taking shards in and out of service as
|
||||
required (e.g. when all of the endpoints behind a service, or perhaps
|
||||
the entire cluster or availability zone go down, or conversely recover
|
||||
from an outage). Due to the latency inherent in DNS caching (the cache
|
||||
timeout, or TTL for Federated Service DNS records is configured to 3
|
||||
minutes, by default, but can be adjusted), it may take up to that long
|
||||
for all clients to completely fail over to an alternative cluster in
|
||||
the case of catastrophic failure. However, given the number of
|
||||
discrete IP addresses which can be returned for each regional service
|
||||
endpoint (see e.g. us-central1 above, which has three alternatives)
|
||||
many clients will fail over automatically to one of the alternative
|
||||
IPs in less time than that, given appropriate configuration.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
#### I cannot connect to my cluster federation API
|
||||
Check that your
|
||||
|
||||
1. Client (typically kubectl) is correctly configured (including API endpoints and login credentials), and
|
||||
2. Cluster Federation API server is running and network-reachable.
|
||||
|
||||
See the [federation admin guide](/docs/admin/federation/) to learn
|
||||
how to bring up a cluster federation correctly (or have your cluster administrator do this for you), and how to correctly configure your client.
|
||||
|
||||
#### I can create a federated service successfully against the cluster federation API, but no matching services are created in my underlying clusters
|
||||
Check that:
|
||||
|
||||
1. Your clusters are correctly registered in the Cluster Federation API (`kubectl describe clusters`)
|
||||
2. Your clusters are all 'Active'. This means that the Cluster Federation system was able to connect and authenticate against the clusters' endpoints. If not, consult the logs of the federation-controller-manager pod to ascertain what the failure might be. (`kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -oname)`)
|
||||
3. That the login credentials provided to the Cluster Federation API for the clusters have the correct authorization and quota to create services in the relevant namespace in the clusters. Again you should see associated error messages providing more detail in the above log file if this is not the case.
|
||||
4. Whether any other error is preventing the service creation operation from succeeding (look for `service-controller` errors in the output of `kubectl logs federation-controller-manager --namespace federation`).
|
||||
|
||||
#### I can create a federated service successfully, but no matching DNS records are created in my DNS provider.
|
||||
Check that:
|
||||
|
||||
1. Your federation name, DNS provider, and DNS domain name are configured correctly. Consult the [federation admin guide](/docs/admin/federation/) or [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) to learn
|
||||
how to configure your Cluster Federation system's DNS provider (or have your cluster administrator do this for you).
|
||||
2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`)
|
||||
3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in its logs explaining in more detail what's failing).
|
||||
|
||||
#### Matching DNS records are created in my DNS provider, but clients are unable to resolve against those names
|
||||
Check that:
|
||||
|
||||
1. The DNS registrar that manages your federation DNS domain has been correctly configured to point to your configured DNS provider's nameservers. See for example [Google Domains Documentation](https://support.google.com/domains/answer/3290309?hl=en&ref_topic=3251230) and [Google Cloud DNS Documentation](https://cloud.google.com/dns/update-name-servers), or equivalent guidance from your domain registrar and DNS provider.
|
||||
|
||||
#### This troubleshooting guide did not help me solve my problem
|
||||
|
||||
1. Please use one of our [support channels](http://kubernetes.io/docs/troubleshooting/) to seek assistance.
|
||||
|
||||
## For more information
|
||||
|
||||
* [Federation proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md) details use cases that motivated this work.
|
|
@ -0,0 +1,63 @@
|
|||
---
|
||||
assignees:
|
||||
- mikedanese
|
||||
title: Resource Usage Monitoring
|
||||
---
|
||||
|
||||
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes.
|
||||
|
||||
## Overview
|
||||
|
||||
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](https://releases.k8s.io/{{page.githubbranch}}/DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization), [Google Cloud Monitoring](https://cloud.google.com/monitoring/) and many others described in more detail [here](https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md). The overall architecture of the service can be seen below:
|
||||
|
||||

|
||||
|
||||
Let's look at some of the other components in more detail.
|
||||
|
||||
### cAdvisor
|
||||
|
||||
cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers in the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the 'root' container on the machine.
|
||||
|
||||
On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor's UI that shows the overall machine usage:
|
||||
|
||||

|
||||
|
||||
### Kubelet
|
||||
|
||||
The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API.
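As a rough sketch of what that API looks like (this assumes you have network access to a node and that the kubelet's read-only port, 10255 in this era, is enabled; both are assumptions about your setup):

```shell
# Hypothetical direct query of a node's kubelet summary endpoint
$ curl http://<node-ip>:10255/stats/summary
{
  "node": { "nodeName": "kubernetes-minion-905m", "cpu": {...}, "memory": {...} },
  "pods": [ ... ]
}
```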
|
||||
|
||||
## Storage Backends
|
||||
|
||||
### InfluxDB and Grafana
|
||||
|
||||
A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy-to-use API to write and fetch time series data. Heapster is set up to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in Pods. The pod exposes itself as a Kubernetes service, which is how Heapster discovers it.
|
||||
|
||||
The Grafana container serves Grafana's UI which provides an easy to configure dashboard interface. The default dashboard for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/storage-schema.md#metrics).
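If these addons run under their usual names in the `kube-system` namespace (an assumption that depends on how your cluster was provisioned), the Grafana UI is typically reachable through the API server proxy; for example:

```shell
$ kubectl cluster-info
Kubernetes master is running at https://<master-ip>
...
monitoring-grafana is running at https://<master-ip>/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
```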
|
||||
|
||||
Here is a video showing how to monitor a Kubernetes cluster using Heapster, InfluxDB and Grafana:
|
||||
|
||||
[](http://www.youtube.com/watch?v=SZgqjMrxo3g)
|
||||
|
||||
Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
|
||||
|
||||

|
||||
|
||||
### Google Cloud Monitoring
|
||||
|
||||
Google Cloud Monitoring is a hosted monitoring service that allows you to visualize and alert on important metrics in your application. Heapster can be setup to automatically push all collected metrics to Google Cloud Monitoring. These metrics are then available in the [Cloud Monitoring Console](https://app.google.stackdriver.com/). This storage backend is the easiest to setup and maintain. The monitoring console allows you to easily create and customize dashboards using the exported data.
|
||||
|
||||
Here is a video showing how to setup and run a Google Cloud Monitoring backed Heapster:
|
||||
|
||||
[](http://www.youtube.com/watch?v=xSMNR2fcoLs)
|
||||
|
||||
Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide resource usage:
|
||||
|
||||

|
||||
|
||||
## Try it out!
|
||||
|
||||
Now that you've learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/kubernetes/heapster) is available on GitHub. It contains detailed instructions to setup Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues via the troubleshooting [channels](/docs/troubleshooting/).
|
||||
|
||||
***
|
||||
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
|
||||
*This article was originally posted in [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html).*
|
|
@ -262,7 +262,7 @@ The pattern names are also links to examples and more detailed description.
|
|||
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
|
||||
| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
|
||||
| [Job Template Expansion](/docs/user-guide/jobs/expansions) | | | ✓ | ✓ |
|
||||
| [Queue with Pod Per Work Item](/docs/tasks/job/work-queue-1/) | ✓ | | sometimes | ✓ |
|
||||
| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ |
|
||||
| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
|
||||
| Single Job with Static Work Assignment | ✓ | | ✓ | |
|
||||
|
||||
|
@ -278,7 +278,7 @@ Here, `W` is the number of work items.
|
|||
| Pattern | `.spec.completions` | `.spec.parallelism` |
|
||||
| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
|
||||
| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 |
|
||||
| [Queue with Pod Per Work Item](/docs/tasks/job/work-queue-1/) | W | any |
|
||||
| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
|
||||
| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
|
||||
| Single Job with Static Work Assignment | W | any |
|
||||
|
||||
|
|
|
@ -0,0 +1,301 @@
|
|||
---
|
||||
assignees:
|
||||
- caesarxuchao
|
||||
- lavalamp
|
||||
- thockin
|
||||
title: Connecting Applications with Services
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## The Kubernetes model for connecting containers
|
||||
|
||||
Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker.
|
||||
|
||||
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, they must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or else be allocated ports dynamically.
|
||||
|
||||
Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
|
||||
|
||||
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html).
|
||||
|
||||
## Exposing pods to the cluster
|
||||
|
||||
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
|
||||
|
||||
{% include code.html language="yaml" file="run-my-nginx.yaml" ghlink="/docs/concepts/services-networking/run-my-nginx.yaml" %}
|
||||
|
||||
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./run-my-nginx.yaml
|
||||
$ kubectl get pods -l run=my-nginx -o wide
|
||||
NAME READY STATUS RESTARTS AGE NODE
|
||||
my-nginx-3800858182-jr4a2 1/1 Running 0 13s kubernetes-minion-905m
|
||||
my-nginx-3800858182-kna2y 1/1 Running 0 13s kubernetes-minion-ljyd
|
||||
```
|
||||
|
||||
Check your pods' IPs:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods -l run=my-nginx -o yaml | grep podIP
|
||||
podIP: 10.244.3.4
|
||||
podIP: 10.244.2.5
|
||||
```
|
||||
|
||||
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interfaces, but the need for this is radically diminished because of the networking model.
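For example, from a shell on any node (the pod IPs are the ones reported above; yours will differ):

```shell
node $ curl http://10.244.3.4:80
...
<title>Welcome to nginx!</title>
node $ curl http://10.244.2.5:80
...
<title>Welcome to nginx!</title>
```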
|
||||
|
||||
You can read more about [how we achieve this](/docs/admin/networking/#how-to-achieve-this) if you're curious.
|
||||
|
||||
## Creating a Service
|
||||
|
||||
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.
|
||||
|
||||
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
|
||||
|
||||
You can create a Service for your 2 nginx replicas with `kubectl expose`:
|
||||
|
||||
```shell
|
||||
$ kubectl expose deployment/my-nginx
|
||||
service "my-nginx" exposed
|
||||
```
|
||||
|
||||
This is equivalent to `kubectl create -f` the following yaml:
|
||||
|
||||
{% include code.html language="yaml" file="nginx-svc.yaml" ghlink="/docs/concepts/services-networking/nginx-svc.yaml" %}
|
||||
|
||||
This specification will create a Service which targets TCP port 80 on any Pod with the `run: my-nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](/docs/api-reference/v1/definitions/#_v1_service) to see the list of supported fields in service definition.
|
||||
Check your Service:
|
||||
|
||||
```shell
|
||||
$ kubectl get svc my-nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
my-nginx 10.0.162.149 <none> 80/TCP 21s
|
||||
```
|
||||
|
||||
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `my-nginx`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
|
||||
|
||||
```shell
|
||||
$ kubectl describe svc my-nginx
|
||||
Name: my-nginx
|
||||
Namespace: default
|
||||
Labels: run=my-nginx
|
||||
Selector: run=my-nginx
|
||||
Type: ClusterIP
|
||||
IP: 10.0.162.149
|
||||
Port: <unset> 80/TCP
|
||||
Endpoints: 10.244.2.5:80,10.244.3.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
|
||||
$ kubectl get ep my-nginx
|
||||
NAME ENDPOINTS AGE
|
||||
my-nginx 10.244.2.5:80,10.244.3.4:80 1m
|
||||
```
|
||||
|
||||
You should now be able to curl the nginx Service on `<CLUSTER-IP>:<PORT>` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](/docs/user-guide/services/#virtual-ips-and-service-proxies).
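For example, using the cluster IP shown above (substitute your own):

```shell
node $ curl http://10.0.162.149:80
...
<title>Welcome to nginx!</title>
```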
|
||||
|
||||
## Accessing the Service
|
||||
|
||||
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
|
||||
|
||||
### Environment Variables
|
||||
|
||||
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods (your pod name will be different):
|
||||
|
||||
```shell
|
||||
$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
|
||||
KUBERNETES_SERVICE_HOST=10.0.0.1
|
||||
KUBERNETES_SERVICE_PORT=443
|
||||
KUBERNETES_SERVICE_PORT_HTTPS=443
|
||||
```
|
||||
|
||||
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if that machine dies. We can do this the right way by killing the 2 pods and waiting for the Deployment to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
|
||||
|
||||
```shell
|
||||
$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
|
||||
|
||||
$ kubectl get pods -l run=my-nginx -o wide
|
||||
NAME READY STATUS RESTARTS AGE NODE
|
||||
my-nginx-3800858182-e9ihh 1/1 Running 0 5s kubernetes-minion-ljyd
|
||||
my-nginx-3800858182-j4rm4 1/1 Running 0 5s kubernetes-minion-905m
|
||||
```
|
||||
|
||||
You may notice that the pods have different names, since they are killed and recreated.
|
||||
|
||||
```shell
|
||||
$ kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
|
||||
KUBERNETES_SERVICE_PORT=443
|
||||
MY_NGINX_SERVICE_HOST=10.0.162.149
|
||||
KUBERNETES_SERVICE_HOST=10.0.0.1
|
||||
MY_NGINX_SERVICE_PORT=80
|
||||
KUBERNETES_SERVICE_PORT_HTTPS=443
|
||||
```
|
||||
|
||||
### DNS
|
||||
|
||||
Kubernetes offers a DNS cluster addon Service that uses SkyDNS to automatically assign DNS names to other Services. You can check if it's running on your cluster:
|
||||
|
||||
```shell
|
||||
$ kubectl get services kube-dns --namespace=kube-system
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 8m
|
||||
```
|
||||
|
||||
If it isn't running, you can [enable it](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (my-nginx), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's run another curl application to test this:
|
||||
|
||||
```shell
|
||||
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
|
||||
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
|
||||
Hit enter for command prompt
|
||||
```
|
||||
|
||||
Then, hit enter and run `nslookup my-nginx`:
|
||||
|
||||
```shell
|
||||
[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10
|
||||
|
||||
Name: my-nginx
|
||||
Address 1: 10.0.162.149
|
||||
```
|
||||
|
||||
## Securing the Service
|
||||
|
||||
Until now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
|
||||
|
||||
* Self-signed certificates for HTTPS (unless you already have an identity certificate)
|
||||
* An nginx server configured to use the certificates
|
||||
* A [secret](/docs/user-guide/secrets) that makes the certificates accessible to pods
|
||||
|
||||
You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/), in short:
|
||||
|
||||
```shell
|
||||
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
|
||||
$ kubectl create -f /tmp/secret.json
|
||||
secret "nginxsecret" created
|
||||
$ kubectl get secrets
|
||||
NAME TYPE DATA
|
||||
default-token-il9rc kubernetes.io/service-account-token 1
|
||||
nginxsecret Opaque 2
|
||||
```
|
||||
|
||||
Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
|
||||
|
||||
{% include code.html language="yaml" file="nginx-secure-app.yaml" ghlink="/docs/concepts/services-networking/nginx-secure-app.yaml" %}
|
||||
|
||||
Noteworthy points about the nginx-secure-app manifest:
|
||||
|
||||
- It contains both Deployment and Service specifications in the same file
|
||||
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
|
||||
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
|
||||
|
||||
```shell
|
||||
$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
|
||||
```
|
||||
|
||||
At this point you can reach the nginx server from any node.
|
||||
|
||||
```shell
|
||||
$ kubectl get pods -o yaml | grep -i podip
|
||||
podIP: 10.244.3.5
|
||||
node $ curl -k https://10.244.3.5
|
||||
...
|
||||
<h1>Welcome to nginx!</h1>
|
||||
```
|
||||
|
||||
Note how we supplied the `-k` parameter to curl in the last step. This is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):
|
||||
|
||||
{% include code.html language="yaml" file="curlpod.yaml" ghlink="/docs/concepts/services-networking/curlpod.yaml" %}
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./curlpod.yaml
|
||||
$ kubectl get pods -l app=curlpod
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
curl-deployment-1515033274-1410r 1/1 Running 0 1m
|
||||
$ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt
|
||||
...
|
||||
<title>Welcome to nginx!</title>
|
||||
...
|
||||
```
|
||||
|
||||
## Exposing the Service
|
||||
|
||||
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
|
||||
|
||||
```shell
|
||||
$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5
|
||||
uid: 07191fb3-f61a-11e5-8ae5-42010af00002
|
||||
spec:
|
||||
clusterIP: 10.0.162.149
|
||||
ports:
|
||||
- name: http
|
||||
nodePort: 31704
|
||||
port: 8080
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
- name: https
|
||||
nodePort: 32453
|
||||
port: 443
|
||||
protocol: TCP
|
||||
targetPort: 443
|
||||
selector:
|
||||
run: my-nginx
|
||||
|
||||
$ kubectl get nodes -o yaml | grep ExternalIP -C 1
|
||||
- address: 104.197.41.11
|
||||
type: ExternalIP
|
||||
allocatable:
|
||||
--
|
||||
- address: 23.251.152.56
|
||||
type: ExternalIP
|
||||
allocatable:
|
||||
...
|
||||
|
||||
$ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
|
||||
...
|
||||
<h1>Welcome to nginx!</h1>
|
||||
```
|
||||
|
||||
Let's now recreate the Service to use a cloud load balancer. Just change the `Type` of the `my-nginx` Service from `NodePort` to `LoadBalancer`:
|
||||
|
||||
```shell
|
||||
$ kubectl edit svc my-nginx
|
||||
$ kubectl get svc my-nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
my-nginx 10.0.162.149 162.222.184.144 80/TCP,81/TCP,82/TCP 21s
|
||||
|
||||
$ curl https://<EXTERNAL-IP> -k
|
||||
...
|
||||
<title>Welcome to nginx!</title>
|
||||
```
|
||||
|
||||
The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. The `CLUSTER-IP` is only available inside your
|
||||
cluster/private cloud network.
|
||||
|
||||
Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
|
||||
hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
|
||||
output, in fact, so you'll need to do `kubectl describe service my-nginx` to
|
||||
see it. You'll see something like this:
|
||||
|
||||
```shell
|
||||
$ kubectl describe service my-nginx
|
||||
...
|
||||
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
|
||||
...
|
||||
```
|
||||
|
||||
## Further reading
|
||||
|
||||
Kubernetes also supports Federated Services, which can span multiple
|
||||
clusters and cloud providers, to provide increased availability,
|
||||
better fault tolerance and greater scalability for your services. See
|
||||
the [Federated Services User Guide](/docs/user-guide/federation/federated-services/)
|
||||
for further information.
|
||||
|
||||
## What's next?
|
||||
|
||||
[Learn about more Kubernetes features that will help you run containers reliably in production.](/docs/user-guide/production-pods)
|
|
@ -0,0 +1,25 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: curl-deployment
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: curlpod
|
||||
spec:
|
||||
volumes:
|
||||
- name: secret-volume
|
||||
secret:
|
||||
secretName: nginxsecret
|
||||
containers:
|
||||
- name: curlpod
|
||||
command:
|
||||
- sh
|
||||
- -c
|
||||
- while true; do sleep 1; done
|
||||
image: radial/busyboxplus:curl
|
||||
volumeMounts:
|
||||
- mountPath: /etc/nginx/ssl
|
||||
name: secret-volume
|
|
@ -0,0 +1,43 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: my-nginx
|
||||
labels:
|
||||
run: my-nginx
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 8080
|
||||
targetPort: 80
|
||||
protocol: TCP
|
||||
name: http
|
||||
- port: 443
|
||||
protocol: TCP
|
||||
name: https
|
||||
selector:
|
||||
run: my-nginx
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: my-nginx
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
run: my-nginx
|
||||
spec:
|
||||
volumes:
|
||||
- name: secret-volume
|
||||
secret:
|
||||
secretName: nginxsecret
|
||||
containers:
|
||||
- name: nginxhttps
|
||||
image: bprashanth/nginxhttps:1.0
|
||||
ports:
|
||||
- containerPort: 443
|
||||
- containerPort: 80
|
||||
volumeMounts:
|
||||
- mountPath: /etc/nginx/ssl
|
||||
name: secret-volume
|
|
@ -0,0 +1,12 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: my-nginx
|
||||
labels:
|
||||
run: my-nginx
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
protocol: TCP
|
||||
selector:
|
||||
run: my-nginx
|
|
@ -0,0 +1,17 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: my-nginx
|
||||
spec:
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
run: my-nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: my-nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
|
|
@ -1,5 +1,8 @@
|
|||
---
|
||||
title: Garbage Collection
|
||||
redirect_from:
|
||||
- "/docs/concepts/abstractions/controllers/garbage-collection/"
|
||||
- "/docs/concepts/abstractions/controllers/garbage-collection.html"
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
@ -30,7 +33,7 @@ relationships between owners and dependents by manually setting the
|
|||
|
||||
Here's a configuration file for a ReplicaSet that has three Pods:
|
||||
|
||||
{% include code.html language="yaml" file="my-repset.yaml" ghlink="/docs/concepts/abstractions/controllers/my-repset.yaml" %}
|
||||
{% include code.html language="yaml" file="my-repset.yaml" ghlink="/docs/concepts/workloads/controllers/my-repset.yaml" %}
|
||||
|
||||
If you create the ReplicaSet and then view the Pod metadata, you can see
|
||||
the OwnerReferences field:
|
|
@ -0,0 +1,17 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: ReplicaSet
|
||||
metadata:
|
||||
name: my-repset
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
matchLabels:
|
||||
pod-is-for: garbage-collection-example
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
pod-is-for: garbage-collection-example
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
|
@ -8,6 +8,9 @@ assignees:
|
|||
- kow3ns
|
||||
- smarterclayton
|
||||
title: PetSets
|
||||
redirect_from:
|
||||
- "/docs/concepts/abstractions/controllers/petsets/"
|
||||
- "/docs/concepts/abstractions/controllers/petsets.html"
|
||||
---
|
||||
|
||||
__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tutorials/stateful-application/run-replicated-stateful-application).
|
|
@ -8,6 +8,9 @@ assignees:
|
|||
- kow3ns
|
||||
- smarterclayton
|
||||
title: StatefulSets
|
||||
redirect_from:
|
||||
- "/docs/concepts/abstractions/controllers/statefulsets/"
|
||||
- "/docs/concepts/abstractions/controllers/statefulsets.html"
|
||||
---
|
||||
|
||||
{% capture overview %}
|
|
@ -1,4 +1,6 @@
|
|||
---
|
||||
assignees:
|
||||
- erictune
|
||||
title: Pod Overview
|
||||
redirect_from:
|
||||
- "/docs/concepts/abstractions/pod/"
|
||||
|
@ -32,7 +34,7 @@ The [Kubernetes Blog](http://blog.kubernetes.io) has some additional information
|
|||
|
||||
Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.
|
||||
|
||||
### How Pods Manage Multiple Containers
|
||||
### How Pods manage multiple Containers
|
||||
|
||||
Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.
|
||||
|
||||
|
@ -70,6 +72,14 @@ Some examples of Controllers that contain one or more pods include:
|
|||
|
||||
In general, Controllers use a Pod Template that you provide to create the Pods for which it is responsible.
|
||||
|
||||
## Pod Templates
|
||||
|
||||
Pod templates are pod specifications which are included in other objects, such as
|
||||
[Replication Controllers](/docs/user-guide/replication-controller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
|
||||
[DaemonSets](/docs/admin/daemons/). Controllers use Pod Templates to make actual pods.
|
||||
|
||||
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive.
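As a minimal sketch (the names and image are illustrative), here is a Job whose `.spec.template` is exactly such a pod template; the Job stamps Pods out of it, and later edits to the template do not touch Pods that already exist:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:                # the pod template: a pod spec embedded in another object
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never # a Job's pods must not restart automatically on success
```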
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
|
|
@ -133,7 +133,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
|||
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/kubectl)
|
||||
|
||||
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
|
||||
For more information, please read [kubeconfig files](/docs/user-guide/kubeconfig-file)
|
||||
For more information, please read [kubeconfig files](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
|
||||
### Examples
|
||||
|
||||
|
|
|
@ -30,7 +30,7 @@ steps that existing cluster setup scripts are making.
|
|||
This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/kubectl)) and concepts ([pods](/docs/user-guide/pods), [services](/docs/user-guide/services), etc.) first.
|
||||
1. You should have `kubectl` installed on your desktop. This will happen as a side
|
||||
effect of completing one of the other Getting Started Guides. If not, follow the instructions
|
||||
[here](/docs/user-guide/prereqs).
|
||||
[here](/docs/tasks/kubectl/install/).
|
||||
|
||||
### Cloud Provider
|
||||
|
||||
|
@ -264,7 +264,7 @@ to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
|
|||
The format for this file is described in the [authentication documentation](/docs/admin/authentication).
|
||||
|
||||
For distributing credentials to clients, the convention in Kubernetes is to put the credentials
|
||||
into a [kubeconfig file](/docs/user-guide/kubeconfig-file).
|
||||
into a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/).
|
||||
|
||||
The kubeconfig file for the administrator can be created as follows:
|
||||
|
||||
|
|
|
@ -0,0 +1,99 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- davidopp
|
||||
title: Configuring Your Cloud Provider's Firewalls
|
||||
---
|
||||
|
||||
Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent
|
||||
exposure to the internet. When exposing a service to the external world, you may need to open up
|
||||
one or more ports in these firewalls to serve traffic. This document describes this process, as
|
||||
well as any provider specific details that may be necessary.
|
||||
|
||||
### Restrict Access For LoadBalancer Service
|
||||
|
||||
When using a Service with `spec.type: LoadBalancer`, you can specify the IP ranges that are allowed to access the load balancer
|
||||
by using `spec.loadBalancerSourceRanges`. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions.
|
||||
This feature is currently supported on Google Compute Engine, Google Container Engine and AWS. This field will be ignored if the cloud provider does not support the feature.
|
||||
|
||||
Assume 10.0.0.0/8 is the internal subnet. In the following example, a load balancer will be created that is only accessible to cluster-internal IPs.
|
||||
This will not allow clients from outside of your Kubernetes cluster to access the load balancer.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: myapp
|
||||
spec:
|
||||
ports:
|
||||
- port: 8765
|
||||
targetPort: 9376
|
||||
selector:
|
||||
app: example
|
||||
type: LoadBalancer
|
||||
loadBalancerSourceRanges:
|
||||
- 10.0.0.0/8
|
||||
```
|
||||
|
||||
In the following example, a load balancer will be created that is only accessible to clients with the IP addresses 130.211.204.1 and 130.211.204.2.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: myapp
|
||||
spec:
|
||||
ports:
|
||||
- port: 8765
|
||||
targetPort: 9376
|
||||
selector:
|
||||
app: example
|
||||
type: LoadBalancer
|
||||
loadBalancerSourceRanges:
|
||||
- 130.211.204.1/32
|
||||
- 130.211.204.2/32
|
||||
```
|
||||
|
||||
### Google Compute Engine
|
||||
|
||||
When using a Service with `spec.type: LoadBalancer`, the firewall will be
|
||||
opened automatically. When using `spec.type: NodePort`, however, the firewall
|
||||
is *not* opened by default.
|
||||
|
||||
Google Compute Engine firewalls are documented [elsewhere](https://cloud.google.com/compute/docs/networking#firewalls_1).
|
||||
|
||||
You can add a firewall with the `gcloud` command line tool:
|
||||
|
||||
```shell
|
||||
$ gcloud compute firewall-rules create my-rule --allow=tcp:<port>
|
||||
```
|
||||
|
||||
**Note**
|
||||
There is one important security note when using firewalls on Google Compute Engine:
|
||||
|
||||
as of Kubernetes v1.0.0, GCE firewalls are defined per-VM, rather than per-IP
|
||||
address. This means that when you open a firewall for a service's ports,
|
||||
anything that serves on that port on that VM's host IP address may potentially
|
||||
serve traffic. Note that this is not a problem for other Kubernetes services,
|
||||
as they listen on IP addresses that are different than the host node's external
|
||||
IP address.
|
||||
|
||||
Consider:
|
||||
|
||||
* You create a Service with an external load balancer (IP Address 1.2.3.4)
|
||||
and port 80
|
||||
* You open the firewall for port 80 for all nodes in your cluster, so that
|
||||
the external Service actually can deliver packets to your Service
|
||||
* You start an nginx server, running on port 80 on the host virtual machine
|
||||
(IP Address 2.3.4.5). This nginx is **also** exposed to the internet on
|
||||
the VM's external IP address.
|
||||
|
||||
Consequently, please be careful when opening firewalls in Google Compute Engine
|
||||
or Google Container Engine. You may accidentally be exposing other services to
|
||||
the wilds of the internet.
|
||||
|
||||
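One way to limit the exposure (a sketch, assuming your cluster nodes carry a node tag such as `my-cluster-node` and that only port 80 needs to be reachable from a known client range) is to scope the rule with `--target-tags` and `--source-ranges` so it does not apply to every VM in the project:

```shell
# Hypothetical example: restrict the rule to tagged cluster nodes
# and a known client CIDR instead of opening the port project-wide.
gcloud compute firewall-rules create my-rule \
  --allow=tcp:80 \
  --target-tags=my-cluster-node \
  --source-ranges=203.0.113.0/24
```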
This will be fixed in an upcoming release of Kubernetes.
|
||||
|
||||
### Other cloud providers
|
||||
|
||||
Coming soon.
|
|
@ -0,0 +1,141 @@
|
|||
---
|
||||
title: Creating an External Load Balancer
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Overview
|
||||
|
||||
When creating a service, you have the option of automatically creating a
|
||||
cloud network load balancer. This provides an
|
||||
externally-accessible IP address that sends traffic to the correct port on your
|
||||
cluster nodes _provided your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package_.
|
||||
|
||||
## External Load Balancer Providers
|
||||
|
||||
It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.
|
||||
|
||||
When the service type is set to `LoadBalancer`, Kubernetes provides functionality equivalent to type=`ClusterIP` to pods within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the Kubernetes VMs. The Kubernetes service controller automates the creation of the external load balancer, health checks (if needed), firewall rules (if needed) and retrieves the external IP allocated by the cloud provider and populates it in the service object.
|
||||
|
||||
## Configuration file
|
||||
|
||||
To create an external load balancer, add the following line to your
|
||||
[service configuration file](/docs/user-guide/services/operations/#service-configuration-file):
|
||||
|
||||
```json
|
||||
"type": "LoadBalancer"
|
||||
```
|
||||
|
||||
Your configuration file might look like:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "example-service"
|
||||
},
|
||||
"spec": {
|
||||
"ports": [{
|
||||
"port": 8765,
|
||||
"targetPort": 9376
|
||||
}],
|
||||
"selector": {
|
||||
"app": "example"
|
||||
},
|
||||
"type": "LoadBalancer"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Using kubectl
|
||||
|
||||
You can alternatively create the service with the `kubectl expose` command and
|
||||
its `--type=LoadBalancer` flag:
|
||||
|
||||
```bash
|
||||
$ kubectl expose rc example --port=8765 --target-port=9376 \
|
||||
--name=example-service --type=LoadBalancer
|
||||
```
|
||||
|
||||
This command creates a new service using the same selectors as the referenced
|
||||
resource (in the case of the example above, a replication controller named
|
||||
`example`).
|
||||
|
||||
For more information, including optional flags, refer to the
|
||||
[`kubectl expose` reference](/docs/user-guide/kubectl/kubectl_expose/).
|
||||
|
||||
## Finding your IP address
|
||||
|
||||
You can find the IP address created for your service by getting the service
|
||||
information through `kubectl`:
|
||||
|
||||
```bash
|
||||
$ kubectl describe services example-service
|
||||
Name: example-service
|
||||
Selector: app=example
|
||||
Type: LoadBalancer
|
||||
IP: 10.67.252.103
|
||||
LoadBalancer Ingress: 123.45.678.9
|
||||
Port: <unnamed> 80/TCP
|
||||
NodePort: <unnamed> 32445/TCP
|
||||
Endpoints: 10.64.0.4:80,10.64.1.5:80,10.64.2.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
The IP address is listed next to `LoadBalancer Ingress`.
|
||||
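If you prefer a script-friendly form, a `-o jsonpath` query can pull out just that address (a sketch, reusing the `example-service` name from above; the `.ip` field assumes a provider that reports an IP rather than a hostname, such as GCE):

```shell
# prints the load balancer ingress IP, e.g. the value shown above
$ kubectl get service example-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```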
|
||||
## Loss of client source IP for external traffic
|
||||
|
||||
Due to the implementation of this feature, the source IP for sessions as seen in the target container will *not be the original source IP* of the client. This is the default behavior as of Kubernetes v1.5. However, starting in v1.5, an optional beta feature has been added
|
||||
that will preserve the client source IP for GCE/GKE environments. This feature will be phased in for other cloud providers in subsequent releases.
|
||||
|
||||
## Annotation to modify the LoadBalancer behavior for preservation of Source IP
|
||||
In Kubernetes 1.5, a beta feature has been added that changes the behavior of the external LoadBalancer feature.
|
||||
|
||||
This feature can be activated by adding the beta annotation below to the metadata section of the Service Configuration file.
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "example-service",
|
||||
"annotations": {
|
||||
"service.beta.kubernetes.io/external-traffic": "OnlyLocal"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"ports": [{
|
||||
"port": 8765,
|
||||
"targetPort": 9376
|
||||
}],
|
||||
"selector": {
|
||||
"app": "example"
|
||||
},
|
||||
"type": "LoadBalancer"
|
||||
}
|
||||
}
|
||||
```
|
||||
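Instead of editing the configuration file, you can also apply the same annotation to an existing Service with `kubectl annotate` (a sketch, reusing the `example-service` name from above):

```shell
$ kubectl annotate service example-service \
    "service.beta.kubernetes.io/external-traffic=OnlyLocal"
```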
|
||||
**Note that this feature is not currently implemented for all cloud providers/environments.**
|
||||
|
||||
### Caveats and Limitations when preserving source IPs
|
||||
|
||||
GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB
|
||||
kube-proxy rules which would correctly balance across all endpoints.
|
||||
|
||||
With the new functionality, the external traffic will not be equally load balanced across pods, but rather
|
||||
equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability
|
||||
to specify per-node weights, they balance equally across all target nodes, disregarding the number of
|
||||
pods on each node).
|
||||
|
||||
We can, however, state that for NumServicePods << NumNodes or NumServicePods >> NumNodes, a fairly close-to-equal
|
||||
distribution will be seen, even without weights.
|
||||
|
||||
Once the external load balancers provide weights, this functionality can be added to the LB programming path.
|
||||
*Future Work: No support for weights is provided for the 1.4 release, but may be added at a future date*
|
||||
|
||||
Internal pod-to-pod traffic should behave similarly to ClusterIP services, with equal probability across all pods.
|
|
@ -0,0 +1,92 @@
|
|||
---
|
||||
assignees:
|
||||
- davidopp
|
||||
- lavalamp
|
||||
title: Cluster Administration Overview
|
||||
---
|
||||
|
||||
The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
|
||||
It assumes some familiarity with concepts in the [User Guide](/docs/user-guide/).
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Planning a cluster
|
||||
|
||||
There are many different examples of how to setup a Kubernetes cluster. Many of them are listed in this
|
||||
[matrix](/docs/getting-started-guides/). We call each of the combinations in this matrix a *distro*.
|
||||
|
||||
Before choosing a particular guide, here are some things to consider:
|
||||
|
||||
- Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
|
||||
models are supported, but some distros are better for one case or the other.
|
||||
- Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting
|
||||
one up yourself?
|
||||
- Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We
|
||||
recommend setting up multiple clusters rather than spanning distant locations.
|
||||
- Will you be running Kubernetes on "bare metal" or virtual machines? Kubernetes supports both, via different distros.
|
||||
- Do you just want to run a cluster, or do you expect to do active development of Kubernetes project code? If the
|
||||
latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but
|
||||
offer a greater variety of choices.
|
||||
- Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of
|
||||
Kubernetes.
|
||||
- If you are configuring Kubernetes on-premises, you will need to consider what [networking
|
||||
model](/docs/admin/networking) fits best.
|
||||
- If you are designing for very high-availability, you may want [clusters in multiple zones](/docs/admin/multi-cluster).
|
||||
- You may want to familiarize yourself with the various
|
||||
[components](/docs/admin/cluster-components) needed to run a cluster.
|
||||
|
||||
## Setting up a cluster
|
||||
|
||||
Pick one of the Getting Started Guides from the [matrix](/docs/getting-started-guides/) and follow it.
|
||||
If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.
|
||||
|
||||
One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](/docs/admin/ovs-networking)), which
|
||||
uses OpenVSwitch to set up networking between pods across
|
||||
Kubernetes nodes.
|
||||
|
||||
If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
|
||||
project](/docs/admin/salt).
|
||||
|
||||
## Managing a cluster, including upgrades
|
||||
|
||||
[Managing a cluster](/docs/admin/cluster-management).
|
||||
|
||||
## Managing nodes
|
||||
|
||||
[Managing nodes](/docs/admin/node).
|
||||
|
||||
## Optional Cluster Services
|
||||
|
||||
* **DNS Integration with SkyDNS** ([dns.md](/docs/admin/dns)):
|
||||
Resolving a DNS name directly to a Kubernetes service.
|
||||
|
||||
* [**Cluster-level logging**](/docs/user-guide/logging/overview):
|
||||
Saving container logs to a central log store with search/browsing interface.
|
||||
|
||||
## Multi-tenant support
|
||||
|
||||
* **Resource Quota** ([resourcequota/](/docs/admin/resourcequota/))
|
||||
|
||||
## Security
|
||||
|
||||
* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](/docs/user-guide/container-environment)):
|
||||
Describes the environment for Kubelet managed containers on a Kubernetes
|
||||
node.
|
||||
|
||||
* **Securing access to the API Server** [accessing the api](/docs/admin/accessing-the-api)
|
||||
|
||||
* **Authentication** [authentication](/docs/admin/authentication)
|
||||
|
||||
* **Authorization** [authorization](/docs/admin/authorization)
|
||||
|
||||
* **Admission Controllers** [admission controllers](/docs/admin/admission-controllers)
|
||||
|
||||
* **Sysctls** [sysctls](/docs/admin/sysctls.md)
|
||||
|
||||
* **Audit** [audit](/docs/admin/audit)
|
||||
|
||||
* **Securing the kubelet**
|
||||
* [Master-Node communication](/docs/admin/master-node-communication/)
|
||||
* [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/)
|
||||
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
|
|
@ -0,0 +1,125 @@
|
|||
---
|
||||
assignees:
|
||||
- mikedanese
|
||||
- thockin
|
||||
title: Sharing Cluster Access with kubeconfig
|
||||
---
|
||||
|
||||
Client access to a running Kubernetes cluster can be shared by copying
|
||||
the `kubectl` client config bundle ([kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)).
|
||||
This config bundle lives in `$HOME/.kube/config`, and is generated
|
||||
by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` are below.
|
||||
|
||||
**1. Create a cluster**
|
||||
|
||||
```shell
|
||||
$ cluster/kube-up.sh
|
||||
```
|
||||
|
||||
**2. Copy `kubeconfig` to new host**
|
||||
|
||||
```shell
|
||||
$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
|
||||
```
|
||||
|
||||
**3. On new host, make copied `config` available to `kubectl`**
|
||||
|
||||
* Option A: copy to default location
|
||||
|
||||
```shell
|
||||
$ mv /path/to/.kube/config $HOME/.kube/config
|
||||
```
|
||||
|
||||
* Option B: copy to working directory (from which kubectl is run)
|
||||
|
||||
```shell
|
||||
$ mv /path/to/.kube/config $PWD
|
||||
```
|
||||
|
||||
* Option C: manually pass `kubeconfig` location to `kubectl`
|
||||
|
||||
```shell
|
||||
# via environment variable
|
||||
$ export KUBECONFIG=/path/to/.kube/config
|
||||
|
||||
# via commandline flag
|
||||
$ kubectl ... --kubeconfig=/path/to/.kube/config
|
||||
```
|
||||
|
||||
## Manually Generating `kubeconfig`
|
||||
|
||||
`kubeconfig` is generated by `kube-up` but you can generate your own
|
||||
using (any desired subset of) the following commands.
|
||||
|
||||
```shell
|
||||
# create kubeconfig entry
|
||||
$ kubectl config set-cluster $CLUSTER_NICK \
|
||||
--server=https://1.1.1.1 \
|
||||
--certificate-authority=/path/to/apiserver/ca_file \
|
||||
--embed-certs=true \
|
||||
# Or if tls not needed, replace --certificate-authority and --embed-certs with
|
||||
--insecure-skip-tls-verify=true \
|
||||
--kubeconfig=/path/to/standalone/.kube/config
|
||||
|
||||
# create user entry
|
||||
$ kubectl config set-credentials $USER_NICK \
|
||||
# bearer token credentials, generated on kube master
|
||||
--token=$token \
|
||||
# use either username|password or token, not both
|
||||
--username=$username \
|
||||
--password=$password \
|
||||
--client-certificate=/path/to/crt_file \
|
||||
--client-key=/path/to/key_file \
|
||||
--embed-certs=true \
|
||||
--kubeconfig=/path/to/standalone/.kube/config
|
||||
|
||||
# create context entry
|
||||
$ kubectl config set-context $CONTEXT_NAME \
|
||||
--cluster=$CLUSTER_NICK \
|
||||
--user=$USER_NICK \
|
||||
--kubeconfig=/path/to/standalone/.kube/config
|
||||
```
|
||||
|
||||
Notes:
|
||||
|
||||
* The `--embed-certs` flag is needed to generate a standalone
|
||||
`kubeconfig`, that will work as-is on another host.
|
||||
* `--kubeconfig` is both the preferred file to load config from and the file to
|
||||
save config to. In the above commands, the `--kubeconfig` file could be
|
||||
omitted if you first run
|
||||
|
||||
```shell
|
||||
$ export KUBECONFIG=/path/to/standalone/.kube/config
|
||||
```
|
||||
|
||||
* The ca_file, key_file, and cert_file referenced above are generated on the
|
||||
kube master at cluster turnup. They can be found on the master under
|
||||
`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.
|
||||
|
||||
For more details on `kubeconfig` see [Authenticating Across Clusters with kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/),
|
||||
and/or run `kubectl config -h`.
|
||||
|
||||
## Merging `kubeconfig` Example
|
||||
|
||||
`kubectl` loads and merges config from the following locations (in order)
|
||||
|
||||
1. `--kubeconfig=/path/to/.kube/config` command line flag
|
||||
2. `KUBECONFIG=/path/to/.kube/config` env variable
|
||||
3. `$HOME/.kube/config`
|
||||
|
||||
If you create clusters A, B on host1, and clusters C, D on host2, you can
|
||||
make all four clusters available on both hosts by running
|
||||
|
||||
```shell
|
||||
# on host2, copy host1's default kubeconfig, and merge it from env
|
||||
$ scp host1:/path/to/home1/.kube/config /path/to/other/.kube/config
|
||||
|
||||
$ export KUBECONFIG=/path/to/other/.kube/config
|
||||
|
||||
# on host1, copy host2's default kubeconfig and merge it from env
|
||||
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
|
||||
|
||||
$ export KUBECONFIG=/path/to/other/.kube/config
|
||||
```
|
||||
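`KUBECONFIG` may also hold a colon-delimited list of files, which kubectl merges in order; a sketch of keeping the local default config and the copied one side by side (paths as above):

```shell
# merge the local default config with the copied one for this shell session
$ export KUBECONFIG=$HOME/.kube/config:/path/to/other/.kube/config

# inspect the merged result
$ kubectl config view
```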
|
||||
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file](/docs/user-guide/kubeconfig-file).
|
|
@ -167,7 +167,7 @@ We will use the `amqp-consume` utility to read the message
|
|||
from the queue and run our actual program. Here is a very simple
|
||||
example program:
|
||||
|
||||
{% include code.html language="python" file="worker.py" ghlink="/docs/tasks/job/work-queue-1/worker.py" %}
|
||||
{% include code.html language="python" file="worker.py" ghlink="/docs/tasks/job/coarse-parallel-processing-work-queue/worker.py" %}
|
||||
|
||||
Now, build an image. If you are working in the source
|
||||
tree, then change directory to `examples/job/work-queue-1`.
|
||||
|
@ -205,7 +205,7 @@ Here is a job definition. You'll need to make a copy of the Job and edit the
|
|||
image to match the name you used, and call it `./job.yaml`.
|
||||
|
||||
|
||||
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/tasks/job/work-queue-1/job.yaml" %}
|
||||
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/tasks/job/coarse-parallel-processing-work-queue/job.yaml" %}
|
||||
|
||||
In this example, each pod works on one item from the queue and then exits.
|
||||
So, the completion count of the Job corresponds to the number of work items
|
|
@ -0,0 +1,165 @@
|
|||
---
|
||||
assignees:
|
||||
- bgrant0607
|
||||
- mikedanese
|
||||
title: Installing and Setting Up kubectl
|
||||
---
|
||||
|
||||
To deploy and manage applications on Kubernetes, you'll use the
|
||||
Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It
|
||||
lets you inspect your cluster resources, create, delete, and update
|
||||
components, and much more. You will use it to look at your new cluster
|
||||
and bring up example apps.
|
||||
|
||||
You should use a version of kubectl that is at least as new as your
|
||||
server. `kubectl version` will print the server and client versions.
|
||||
Using the same version of kubectl as your server naturally works;
|
||||
using a newer kubectl than your server also works; but if you use an
|
||||
older kubectl with a newer server you may see odd validation errors.
|
||||
|
||||
Here are a few methods to install kubectl.
|
||||
|
||||
## Install kubectl Binary Via curl
|
||||
|
||||
Download the latest release with the command:
|
||||
|
||||
```shell
|
||||
# OS X
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
|
||||
|
||||
# Linux
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
|
||||
|
||||
# Windows
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
If you want to download a specific version of kubectl, replace the nested curl command above (which resolves the latest stable release) with the version you want (e.g. v1.4.6, v1.5.0-beta.2).
|
||||
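For example, pinned to one of the versions mentioned above (a sketch for Linux; swap the version string and platform as needed):

```shell
# Linux, pinned to v1.4.6
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.4.6/bin/linux/amd64/kubectl
```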
|
||||
Make the kubectl binary executable and move it to your PATH (e.g. `/usr/local/bin`):
|
||||
|
||||
```shell
|
||||
chmod +x ./kubectl
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
## Extract kubectl from Release .tar.gz or Compiled Source
|
||||
|
||||
If you downloaded a pre-compiled [release](https://github.com/kubernetes/kubernetes/releases), kubectl will be under `platforms/<os>/<arch>` from the tar bundle.
|
||||
|
||||
If you compiled Kubernetes from source, kubectl should be either under `_output/local/bin/<os>/<arch>` or `_output/dockerized/bin/<os>/<arch>`.
|
||||
|
||||
Copy or move kubectl into a directory already in your PATH (e.g. `/usr/local/bin`). For example:
|
||||
|
||||
```shell
|
||||
# OS X
|
||||
sudo cp platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
|
||||
|
||||
# Linux
|
||||
sudo cp platforms/linux/amd64/kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
Next make it executable with the following command:
|
||||
|
||||
```shell
|
||||
sudo chmod +x /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
The kubectl binary doesn't have to be installed to be executable, but the rest of the walkthrough will assume that it's in your PATH.
|
||||
|
||||
If you prefer not to copy kubectl, you need to ensure it is in your PATH:
|
||||
|
||||
```shell
|
||||
# OS X
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
||||
# Linux
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
```
|
||||
|
||||
## Download as part of the Google Cloud SDK
|
||||
|
||||
kubectl can be installed as part of the Google Cloud SDK:
|
||||
|
||||
First install the [Google Cloud SDK](https://cloud.google.com/sdk/).
|
||||
|
||||
After Google Cloud SDK installs, run the following command to install `kubectl`:
|
||||
|
||||
```shell
|
||||
gcloud components install kubectl
|
||||
```
|
||||
|
||||
Do check that the version is sufficiently up-to-date using `kubectl version`.
|
||||
|
||||
## Install with brew
|
||||
|
||||
If you are on MacOS and using brew, you can install with:
|
||||
|
||||
```shell
|
||||
brew install kubectl
|
||||
```
|
||||
|
||||
The Homebrew project is independent of Kubernetes, so do check that the version is
|
||||
sufficiently up-to-date using `kubectl version`.
|
||||
|
||||
## Configuring kubectl
|
||||
|
||||
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/administer-cluster/share-configuration/).
|
||||
By default, kubectl configuration lives at `~/.kube/config`.
|
||||
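To see which cluster and credentials kubectl has picked up from that file, you can inspect it (a minimal check; output varies with your setup):

```shell
# show the merged configuration kubectl is using
$ kubectl config view

# show which context (cluster/user pair) is currently active
$ kubectl config current-context
```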
|
||||
#### Making sure you're ready
|
||||
|
||||
Check that kubectl is properly configured by getting the cluster state:
|
||||
|
||||
```shell
|
||||
$ kubectl cluster-info
|
||||
```
|
||||
|
||||
If you see a URL response, you are ready to go.
|
||||
|
||||
## Enabling shell autocompletion
|
||||
|
||||
kubectl includes autocompletion support, which can save a lot of typing!
|
||||
|
||||
The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile.
|
||||
|
||||
Common examples are provided here, but for more details please consult `kubectl completion -h`.
|
||||
|
||||
### On Linux, using bash
|
||||
|
||||
To add it to your current shell: `source <(kubectl completion bash)`
|
||||
|
||||
To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
|
||||
|
||||
```shell
|
||||
echo "source <(kubectl completion bash)" >> ~/.bashrc
|
||||
```
|
||||
|
||||
### On MacOS, using bash
|
||||
|
||||
On MacOS, you will need to install the bash-completion support first:
|
||||
|
||||
```shell
|
||||
brew install bash-completion
|
||||
```
|
||||
|
||||
To add it to your current shell:
|
||||
|
||||
```shell
|
||||
source $(brew --prefix)/etc/bash_completion
|
||||
source <(kubectl completion bash)
|
||||
```
|
||||
|
||||
To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
|
||||
|
||||
```shell
|
||||
echo "source $(brew --prefix)/etc/bash_completion" >> ~/.bash_profile
|
||||
echo "source <(kubectl completion bash)" >> ~/.bash_profile
|
||||
```
|
||||
|
||||
Please note that this currently appears to work only if you install using `brew install kubectl`,
|
||||
and not if you downloaded kubectl directly.
|
||||
|
||||
## What's next?
|
||||
|
||||
[Learn how to launch and expose your application.](/docs/user-guide/quick-start)
|
|
@ -112,7 +112,7 @@ Note that **you should NOT upgrade Nodes at this time**, because the Pods
|
|||
### Upgrade kubectl to Kubernetes version 1.5 or later
|
||||
|
||||
Upgrade `kubectl` to Kubernetes version 1.5 or later, following [the steps for installing and setting up
|
||||
kubectl](/docs/user-guide/prereqs/).
|
||||
kubectl](/docs/tasks/kubectl/install/).
|
||||
|
||||
### Create StatefulSets
|
||||
|
||||
|
|
|
@ -12,7 +12,7 @@ external IP address.
|
|||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs).
|
||||
* Install [kubectl](http://kubernetes.io/docs/tasks/kubectl/install/).
|
||||
|
||||
* Use a cloud provider like Google Container Engine or Amazon Web Services to
|
||||
create a Kubernetes cluster. This tutorial creates an
|
||||
|
|
|
@ -5,324 +5,6 @@ assignees:
|
|||
title: Accessing Clusters
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
## Accessing the cluster API
|
||||
|
||||
### Accessing for the first time with kubectl
|
||||
|
||||
When accessing the Kubernetes API for the first time, we suggest using the
|
||||
Kubernetes CLI, `kubectl`.
|
||||
|
||||
To access a cluster, you need to know the location of the cluster and have credentials
|
||||
to access it. Typically, this is automatically set up when you work through
|
||||
a [Getting started guide](/docs/getting-started-guides/),
|
||||
or someone else set up the cluster and provided you with credentials and a location.
|
||||
|
||||
Check the location and credentials that kubectl knows about with this command:
|
||||
|
||||
```shell
|
||||
$ kubectl config view
|
||||
```
|
||||
|
||||
Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
|
||||
kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
|
||||
|
||||
### Directly accessing the REST API
|
||||
|
||||
Kubectl handles locating and authenticating to the apiserver.
|
||||
If you want to directly access the REST API with an http client like
|
||||
curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
|
||||
#### Using kubectl proxy
|
||||
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
|
||||
```shell
|
||||
$ kubectl proxy --port=8080 &
|
||||
```
|
||||
|
||||
See [kubectl proxy](/docs/user-guide/kubectl/kubectl_proxy) for more details.
|
||||
|
||||
Then you can explore the API with curl, wget, or a browser, like so:
|
||||
|
||||
```shell
|
||||
$ curl http://localhost:8080/api/
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (before v1.3.x)
|
||||
|
||||
It is possible to avoid using kubectl proxy by passing an authentication token
|
||||
directly to the apiserver, like this:
|
||||
|
||||
```shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (post v1.3.x)
|
||||
|
||||
In Kubernetes version 1.3 or later, `kubectl config view` no longer displays the token. Use `kubectl describe secret...` to get the token for the default service account, like this:
|
||||
|
||||
``` shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"kind": "APIVersions",
|
||||
"versions": [
|
||||
"v1"
|
||||
],
|
||||
"serverAddressByClientCIDRs": [
|
||||
{
|
||||
"clientCIDR": "0.0.0.0/0",
|
||||
"serverAddress": "10.0.1.149:443"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The above examples use the `--insecure` flag. This leaves it subject to MITM
|
||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||
and client certificates to access the server. (These are installed in the
|
||||
`~/.kube` directory). Since cluster certificates are typically self-signed, it
|
||||
may take special configuration to get your http client to use the root
|
||||
certificate.
|
||||
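To avoid `--insecure`, you can point your client at the cluster's root certificate instead (a sketch; the certificate path is hypothetical and depends on how your credentials were distributed):

```shell
# APISERVER and TOKEN set as in the examples above; the ca.crt path is hypothetical
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" \
    --cacert /path/to/ca.crt
```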
|
||||
On some clusters, the apiserver does not require authentication; it may serve
|
||||
on localhost, or be protected by a firewall. There is not a standard
|
||||
for this. [Configuring Access to the API](/docs/admin/accessing-the-api)
|
||||
describes how a cluster admin can configure this. Such approaches may conflict
|
||||
with future high-availability support.
|
||||
|
||||
### Programmatic access to the API
|
||||
|
||||
The Kubernetes project-supported Go client library is at [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go).
|
||||
|
||||
To use it,
|
||||
|
||||
* To get the library, run the following command: `go get k8s.io/client-go/<version number>/kubernetes`. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) to find out which versions are supported.
|
||||
* Write an application on top of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.
|
||||
|
||||
The Go client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster/main.go):
|
||||
|
||||
```golang
|
||||
import (
|
||||
"fmt"
|
||||
"k8s.io/client-go/1.4/kubernetes"
|
||||
"k8s.io/client-go/1.4/pkg/api/v1"
|
||||
"k8s.io/client-go/1.4/tools/clientcmd"
|
||||
)
|
||||
...
|
||||
// uses the current context in kubeconfig
|
||||
config, _ := clientcmd.BuildConfigFromFlags("", "path to kubeconfig")
|
||||
// creates the clientset
|
||||
clientset, _:= kubernetes.NewForConfig(config)
|
||||
// access the API to list pods
|
||||
pods, _:= clientset.Core().Pods("").List(v1.ListOptions{})
|
||||
fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
|
||||
...
|
||||
```
|
||||
|
||||
If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
|
||||
|
||||
There are [client libraries](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.
|
||||
|
||||
### Accessing the API from a Pod
|
||||
|
||||
When accessing the API from a pod, locating and authenticating
|
||||
to the api server are somewhat different.
|
||||
|
||||
The recommended way to locate the apiserver within the pod is with
|
||||
the `kubernetes` DNS name, which resolves to a Service IP which in turn
|
||||
will be routed to an apiserver.
|
||||
|
||||
The recommended way to authenticate to the apiserver is with a
|
||||
[service account](/docs/user-guide/service-accounts) credential. By default, a pod
|
||||
is associated with a service account, and a credential (token) for that
|
||||
service account is placed into the filesystem tree of each container in that pod,
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
|
||||
|
||||
If available, a certificate bundle is placed into the filesystem tree of each
|
||||
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
|
||||
used to verify the serving certificate of the apiserver.
|
||||
|
||||
Finally, the default namespace to be used for namespaced API operations is placed in a file
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
|
||||
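Putting those pieces together, a minimal sketch of calling the API directly from inside a container, using only the mounted credentials and the `kubernetes` DNS name described above:

```shell
# run from inside a pod's container; paths are the standard service account mount points
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     --header "Authorization: Bearer $TOKEN" \
     https://kubernetes/api
```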
|
||||
From within a pod the recommended ways to connect to API are:
|
||||
|
||||
- run a kubectl proxy as one of the containers in the pod, or as a background
|
||||
process within a container. This proxies the
|
||||
Kubernetes API to the localhost interface of the pod, so that other processes
|
||||
in any container of the pod can access it. See this [example of using kubectl proxy
|
||||
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
|
||||
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
|
||||
They handle locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/blob/master/examples/in-cluster/main.go)
|
||||
|
||||
In each case, the credentials of the pod are used to communicate securely with the apiserver.
|
||||
|
||||
|
||||
## Accessing services running on the cluster
|
||||
|
||||
The previous section was about connecting the Kubernetes API server. This section is about
|
||||
connecting to other services running on Kubernetes cluster. In Kubernetes, the
|
||||
[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have
|
||||
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
|
||||
routable, so they will not be reachable from a machine outside the cluster,
|
||||
such as your desktop machine.
|
||||
|
||||
### Ways to connect
|
||||
|
||||
You have several options for connecting to nodes, pods and services from outside the cluster:
|
||||
|
||||
- Access services through public IPs.
|
||||
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
|
||||
the cluster. See the [services](/docs/user-guide/services) and
|
||||
[kubectl expose](/docs/user-guide/kubectl/kubectl_expose) documentation.
|
||||
- Depending on your cluster environment, this may just expose the service to your corporate network,
|
||||
or it may expose it to the internet. Think about whether the service being exposed is secure.
|
||||
Does it do its own authentication?
|
||||
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
|
||||
place a unique label on the pod and create a new service which selects this label.
|
||||
- In most cases, it should not be necessary for an application developer to directly access
|
||||
nodes via their nodeIPs.
|
||||
- Access services, nodes, or pods using the Proxy Verb.
|
||||
- Does apiserver authentication and authorization prior to accessing the remote service.
|
||||
Use this if the services are not secure enough to expose to the internet, or to gain
|
||||
access to ports on the node IP, or for debugging.
|
||||
- Proxies may cause problems for some web applications.
|
||||
- Only works for HTTP/HTTPS.
|
||||
- Described [here](#manually-constructing-apiserver-proxy-urls).
|
||||
- Access from a node or pod in the cluster.
|
||||
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/user-guide/kubectl/kubectl_exec).
|
||||
Connect to other nodes, pods, and services from that shell.
|
||||
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
|
||||
access cluster services. This is a non-standard method, and will work on some clusters but
|
||||
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
|
||||
|
||||
### Discovering builtin services
|
||||
|
||||
Typically, there are several services which are started on a cluster by kube-system. Get a list of these
|
||||
with the `kubectl cluster-info` command:
|
||||
|
||||
```shell
|
||||
$ kubectl cluster-info
|
||||
|
||||
Kubernetes master is running at https://104.197.5.247
|
||||
elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
|
||||
kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kibana-logging
|
||||
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
|
||||
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
|
||||
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
|
||||
```
|
||||
|
||||
This shows the proxy-verb URL for accessing each service.
|
||||
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
|
||||
at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
|
||||
`http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/`.
|
||||
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
|
||||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/proxy/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
|
||||
|
||||
##### Examples
|
||||
|
||||
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy`
|
||||
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`
|
||||
|
||||
```json
|
||||
{
|
||||
"cluster_name" : "kubernetes_logging",
|
||||
"status" : "yellow",
|
||||
"timed_out" : false,
|
||||
"number_of_nodes" : 1,
|
||||
"number_of_data_nodes" : 1,
|
||||
"active_primary_shards" : 5,
|
||||
"active_shards" : 5,
|
||||
"relocating_shards" : 0,
|
||||
"initializing_shards" : 0,
|
||||
"unassigned_shards" : 5
|
||||
}
|
||||
```
|
||||
|
||||
#### Using web browsers to access services running on the cluster
|
||||
|
||||
You may be able to put an apiserver proxy URL into the address bar of a browser. However:
|
||||
|
||||
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
|
||||
but your cluster may not be configured to accept basic auth.
|
||||
- Some web apps may not work, particularly those with client-side JavaScript that construct URLs in a
|
||||
way that is unaware of the proxy path prefix.
|
||||
|
||||
## Requesting redirects
|
||||
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
|
||||
## So Many Proxies
|
||||
|
||||
There are several different proxies you may encounter when using Kubernetes:
|
||||
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
1. The [apiserver proxy](#discovering-builtin-services):
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
1. The [kube proxy](/docs/user-guide/services/#ips-and-vips):
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is just used to reach services
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
1. Cloud Load Balancers on external services:
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
|
||||
will typically ensure that the latter types are set up correctly.
|
||||
[Accessing Clusters](/docs/concepts/cluster-administration/access-cluster/)
|
|
@ -6,296 +6,6 @@ assignees:
|
|||
title: Connecting Applications with Services
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
## The Kubernetes model for connecting containers
|
||||
|
||||
Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker.
|
||||
|
||||
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, they must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or else be allocated ports dynamically.
|
||||
|
||||
Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
|
||||
|
||||
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html).
|
||||
|
||||
## Exposing pods to the cluster
|
||||
|
||||
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
|
||||
|
||||
{% include code.html language="yaml" file="run-my-nginx.yaml" ghlink="/docs/user-guide/run-my-nginx.yaml" %}
|
||||
|
||||
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./run-my-nginx.yaml
|
||||
$ kubectl get pods -l run=my-nginx -o wide
|
||||
NAME READY STATUS RESTARTS AGE NODE
|
||||
my-nginx-3800858182-jr4a2 1/1 Running 0 13s kubernetes-minion-905m
|
||||
my-nginx-3800858182-kna2y 1/1 Running 0 13s kubernetes-minion-ljyd
|
||||
```
|
||||
|
||||
Check your pods' IPs:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods -l run=my-nginx -o yaml | grep podIP
|
||||
podIP: 10.244.3.4
|
||||
podIP: 10.244.2.5
|
||||
```
|
||||
|
||||
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interfaces, but the need for this is radically diminished because of the networking model.
|
||||
|
||||
You can read more about [how we achieve this](/docs/admin/networking/#how-to-achieve-this) if you're curious.
|
||||
|
||||
## Creating a Service
|
||||
|
||||
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.
|
||||
|
||||
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
|
||||
|
||||
You can create a Service for your 2 nginx replicas with `kubectl expose`:
|
||||
|
||||
```shell
|
||||
$ kubectl expose deployment/my-nginx
|
||||
service "my-nginx" exposed
|
||||
```
|
||||
|
||||
This is equivalent to `kubectl create -f` the following yaml:
|
||||
|
||||
{% include code.html language="yaml" file="nginx-svc.yaml" ghlink="/docs/user-guide/nginx-svc.yaml" %}
|
||||
|
||||
This specification will create a Service which targets TCP port 80 on any Pod with the `run: my-nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](/docs/api-reference/v1/definitions/#_v1_service) to see the list of supported fields in service definition.
|
||||
Check your Service:
|
||||
|
||||
```shell
|
||||
$ kubectl get svc my-nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
my-nginx 10.0.162.149 <none> 80/TCP 21s
|
||||
```
|
||||
|
||||
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `my-nginx`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
|
||||
|
||||
```shell
|
||||
$ kubectl describe svc my-nginx
|
||||
Name: my-nginx
|
||||
Namespace: default
|
||||
Labels: run=my-nginx
|
||||
Selector: run=my-nginx
|
||||
Type: ClusterIP
|
||||
IP: 10.0.162.149
|
||||
Port: <unset> 80/TCP
|
||||
Endpoints: 10.244.2.5:80,10.244.3.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
|
||||
$ kubectl get ep my-nginx
|
||||
NAME ENDPOINTS AGE
|
||||
my-nginx 10.244.2.5:80,10.244.3.4:80 1m
|
||||
```
|
||||
|
||||
You should now be able to curl the nginx Service on `<CLUSTER-IP>:<PORT>` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](/docs/user-guide/services/#virtual-ips-and-service-proxies).
|
||||
|
||||
## Accessing the Service
|
||||
|
||||
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
|
||||
|
||||
### Environment Variables
|
||||
|
||||
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods (your pod name will be different):
|
||||
|
||||
```shell
|
||||
$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
|
||||
KUBERNETES_SERVICE_HOST=10.0.0.1
|
||||
KUBERNETES_SERVICE_PORT=443
|
||||
KUBERNETES_SERVICE_PORT_HTTPS=443
|
||||
```
|
||||
|
||||
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the Deployment to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
|
||||
|
||||
```shell
|
||||
$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
|
||||
|
||||
$ kubectl get pods -l run=my-nginx -o wide
|
||||
NAME READY STATUS RESTARTS AGE NODE
|
||||
my-nginx-3800858182-e9ihh 1/1 Running 0 5s kubernetes-minion-ljyd
|
||||
my-nginx-3800858182-j4rm4 1/1 Running 0 5s kubernetes-minion-905m
|
||||
```
|
||||
|
||||
You may notice that the pods have different names, since they are killed and recreated.
|
||||
|
||||
```shell
|
||||
$ kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
|
||||
KUBERNETES_SERVICE_PORT=443
|
||||
MY_NGINX_SERVICE_HOST=10.0.162.149
|
||||
KUBERNETES_SERVICE_HOST=10.0.0.1
|
||||
MY_NGINX_SERVICE_PORT=80
|
||||
KUBERNETES_SERVICE_PORT_HTTPS=443
|
||||
```
|
||||
|
||||
### DNS
|
||||
|
||||
Kubernetes offers a DNS cluster addon Service that uses SkyDNS to automatically assign DNS names to other Services. You can check if it's running on your cluster:
|
||||
|
||||
```shell
|
||||
$ kubectl get services kube-dns --namespace=kube-system
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 8m
|
||||
```
|
||||
|
||||
If it isn't running, you can [enable it](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (my-nginx), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's run another curl application to test this:
|
||||
|
||||
```shell
|
||||
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
|
||||
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
|
||||
Hit enter for command prompt
|
||||
```
|
||||
|
||||
Then, hit enter and run `nslookup my-nginx`:
|
||||
|
||||
```shell
|
||||
[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10
|
||||
|
||||
Name: my-nginx
|
||||
Address 1: 10.0.162.149
|
||||
```
|
||||
|
||||
## Securing the Service
|
||||
|
||||
Until now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
|
||||
|
||||
* Self signed certificates for https (unless you already have an identity certificate)
|
||||
* An nginx server configured to use the certificates
|
||||
* A [secret](/docs/user-guide/secrets) that makes the certificates accessible to pods
|
||||
|
||||
You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/), in short:
|
||||
|
||||
```shell
|
||||
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
|
||||
$ kubectl create -f /tmp/secret.json
|
||||
secret "nginxsecret" created
|
||||
$ kubectl get secrets
|
||||
NAME TYPE DATA
|
||||
default-token-il9rc kubernetes.io/service-account-token 1
|
||||
nginxsecret Opaque 2
|
||||
```
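
If you prefer not to use the example's Makefile, a roughly equivalent sequence is sketched below. It assumes `openssl` is available and that the certificate's CN should match the Service name used later (`my-nginx`); the exact flags are illustrative rather than prescriptive.

```shell
# Generate a self-signed key pair whose CN matches the Service name,
# then store both files in an Opaque secret named nginxsecret.
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=my-nginx"
$ kubectl create secret generic nginxsecret \
    --from-file=nginx.key=/tmp/nginx.key --from-file=nginx.crt=/tmp/nginx.crt
```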
|
||||
|
||||
Now modify your nginx replicas to start an HTTPS server using the certificate in the Secret, and modify the Service to expose both ports (80 and 443):
|
||||
|
||||
{% include code.html language="yaml" file="nginx-secure-app.yaml" ghlink="/docs/user-guide/nginx-secure-app.yaml" %}
|
||||
|
||||
Noteworthy points about the nginx-secure-app manifest:
|
||||
|
||||
- It contains both the Deployment and the Service specification in the same file
|
||||
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves HTTP traffic on port 80 and HTTPS traffic on port 443, and the nginx Service exposes both ports.
|
||||
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
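
The included manifest is not reproduced here, but a minimal sketch of such a combined Deployment-plus-Service file, consistent with the ports and selector shown later on this page, looks roughly like this (the container image and API group are assumptions based on the https-nginx example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1beta1   # Deployment API group assumed for this release
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0   # image assumed from the https-nginx example
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        # The certificate and key are mounted before nginx starts.
        - mountPath: /etc/nginx/ssl
          name: secret-volume
```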
|
||||
|
||||
```shell
|
||||
$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
|
||||
```
|
||||
|
||||
At this point you can reach the nginx server from any node.
|
||||
|
||||
```shell
|
||||
$ kubectl get pods -o yaml | grep -i podip
|
||||
podIP: 10.244.3.5
|
||||
node $ curl -k https://10.244.3.5
|
||||
...
|
||||
<h1>Welcome to nginx!</h1>
|
||||
```
|
||||
|
||||
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
|
||||
so we have to tell curl to ignore the CN (Common Name) mismatch. By creating a Service we linked the CN used in the certificate with the actual DNS name used by pods during Service lookup.
|
||||
Let's test this from a pod (the same secret is reused for simplicity; the pod only needs nginx.crt to access the Service):
|
||||
|
||||
{% include code.html language="yaml" file="curlpod.yaml" ghlink="/docs/user-guide/curlpod.yaml" %}
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ./curlpod.yaml
|
||||
$ kubectl get pods -l app=curlpod
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
curl-deployment-1515033274-1410r 1/1 Running 0 1m
|
||||
$ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt
|
||||
...
|
||||
<title>Welcome to nginx!</title>
|
||||
...
|
||||
```
|
||||
|
||||
## Exposing the Service
|
||||
|
||||
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx HTTPS replica is ready to serve traffic on the internet if your node has a public IP.
|
||||
|
||||
```shell
|
||||
$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5
|
||||
uid: 07191fb3-f61a-11e5-8ae5-42010af00002
|
||||
spec:
|
||||
clusterIP: 10.0.162.149
|
||||
ports:
|
||||
- name: http
|
||||
nodePort: 31704
|
||||
port: 8080
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
- name: https
|
||||
nodePort: 32453
|
||||
port: 443
|
||||
protocol: TCP
|
||||
targetPort: 443
|
||||
selector:
|
||||
run: my-nginx
|
||||
|
||||
$ kubectl get nodes -o yaml | grep ExternalIP -C 1
|
||||
- address: 104.197.41.11
|
||||
type: ExternalIP
|
||||
allocatable:
|
||||
--
|
||||
- address: 23.251.152.56
|
||||
type: ExternalIP
|
||||
allocatable:
|
||||
...
|
||||
|
||||
$ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
|
||||
...
|
||||
<h1>Welcome to nginx!</h1>
|
||||
```
|
||||
|
||||
Let's now recreate the Service to use a cloud load balancer; just change the `Type` of the `my-nginx` Service from `NodePort` to `LoadBalancer`:
|
||||
|
||||
```shell
|
||||
$ kubectl edit svc my-nginx
|
||||
$ kubectl get svc my-nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
my-nginx 10.0.162.149 162.222.184.144 80/TCP,81/TCP,82/TCP 21s
|
||||
|
||||
$ curl https://<EXTERNAL-IP> -k
|
||||
...
|
||||
<title>Welcome to nginx!</title>
|
||||
```
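
If you prefer a non-interactive change over `kubectl edit`, the same switch can be sketched with `kubectl patch`:

```shell
$ kubectl patch svc my-nginx -p '{"spec": {"type": "LoadBalancer"}}'
```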
|
||||
|
||||
The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. The `CLUSTER-IP` is only available inside your
|
||||
cluster/private cloud network.
|
||||
|
||||
Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
|
||||
hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
|
||||
output, in fact, so you'll need to do `kubectl describe service my-nginx` to
|
||||
see it. You'll see something like this:
|
||||
|
||||
```shell
|
||||
$ kubectl describe service my-nginx
|
||||
...
|
||||
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
|
||||
...
|
||||
```
|
||||
|
||||
## Further reading
|
||||
|
||||
Kubernetes also supports Federated Services, which can span multiple
|
||||
clusters and cloud providers, to provide increased availability,
|
||||
better fault tolerance and greater scalability for your services. See
|
||||
the [Federated Services User Guide](/docs/user-guide/federation/federated-services/)
|
||||
for further information.
|
||||
|
||||
## What's next?
|
||||
|
||||
[Learn about more Kubernetes features that will help you run containers reliably in production.](/docs/user-guide/production-pods)
|
||||
[Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
|
||||
|
|
|
@ -5,378 +5,6 @@ assignees:
|
|||
title: Cross-cluster Service Discovery using Federated Services
|
||||
---
|
||||
|
||||
This guide explains how to use Kubernetes Federated Services to deploy
|
||||
a common Service across multiple Kubernetes clusters. This makes it
|
||||
easy to achieve cross-cluster service discovery and availability zone
|
||||
fault tolerance for your Kubernetes applications.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes Cluster
|
||||
Federation installation. If not, then head over to the
|
||||
[federation admin guide](/docs/admin/federation/) to learn how to
|
||||
bring up a cluster federation (or have your cluster administrator do
|
||||
this for you). Other tutorials, for example
|
||||
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
|
||||
by Kelsey Hightower, are also available to help you.
|
||||
|
||||
You are also expected to have a basic
|
||||
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
|
||||
general, and [Services](/docs/user-guide/services/) in particular.
|
||||
|
||||
## Overview
|
||||
|
||||
Federated Services are created in much the same way as traditional
|
||||
[Kubernetes Services](/docs/user-guide/services/) by making an API
|
||||
call which specifies the desired properties of your service. In the
|
||||
case of Federated Services, this API call is directed to the
|
||||
Federation API endpoint, rather than a Kubernetes cluster API
|
||||
endpoint. The API for Federated Services is 100% compatible with the
|
||||
API for traditional Kubernetes Services.
|
||||
|
||||
Once created, the Federated Service automatically:
|
||||
|
||||
1. Creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
|
||||
2. Monitors the health of those service "shards" (and the clusters in which they reside), and
|
||||
3. Manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
|
||||
of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster,
|
||||
availability zone or regional outages.
|
||||
|
||||
Clients inside your federated Kubernetes clusters (i.e. Pods) will
|
||||
automatically find the local shard of the Federated Service in their
|
||||
cluster if it exists and is healthy, or the closest healthy shard in a
|
||||
different cluster if it does not.
|
||||
|
||||
## Hybrid cloud capabilities
|
||||
|
||||
Federations of Kubernetes Clusters can include clusters running in
|
||||
different cloud providers (e.g. Google Cloud, AWS), and on-premises
|
||||
(e.g. on OpenStack). Simply create all of the clusters that you
|
||||
require, in the appropriate cloud providers and/or locations, and
|
||||
register each cluster's API endpoint and credentials with your
|
||||
Federation API Server (See the
|
||||
[federation admin guide](/docs/admin/federation/) for details).
|
||||
|
||||
Thereafter, your applications and services can span different clusters
|
||||
and cloud providers as described in more detail below.
|
||||
|
||||
## Creating a federated service
|
||||
|
||||
This is done in the usual way, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster create -f services/nginx.yaml
|
||||
```
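
The contents of `services/nginx.yaml` are not shown in this guide; a minimal sketch consistent with the `run=nginx` selector, `LoadBalancer` type, and http port 80 seen in the output below would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    run: nginx
```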
|
||||
|
||||
The '--context=federation-cluster' flag tells kubectl to submit the
|
||||
request to the Federation API endpoint, with the appropriate
|
||||
credentials. If you have not yet configured such a context, visit the
|
||||
[federation admin guide](/docs/admin/federation/) or one of the
|
||||
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
|
||||
to find out how to do so.
|
||||
|
||||
As described above, the Federated Service will automatically create
|
||||
and maintain matching Kubernetes services in all of the clusters
|
||||
underlying your federation.
|
||||
|
||||
You can verify this by checking in each of the underlying clusters, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=gce-asia-east1a get services nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
nginx 10.63.250.98 104.199.136.89 80/TCP 9m
|
||||
```
|
||||
|
||||
The above assumes that you have a context named 'gce-asia-east1a'
|
||||
configured in your client for your cluster in that zone. The name and
|
||||
namespace of the underlying services will automatically match those of
|
||||
the Federated Service that you created above (and if you happen to
|
||||
have had services of the same name and namespace already existing in
|
||||
any of those clusters, they will be automatically adopted by the
|
||||
Federation and updated to conform with the specification of your
|
||||
Federated Service - either way, the end result will be the same).
|
||||
|
||||
The status of your Federated Service will automatically reflect the
|
||||
real-time status of the underlying Kubernetes services, for example:
|
||||
|
||||
``` shell
|
||||
$ kubectl --context=federation-cluster describe services nginx
|
||||
|
||||
Name: nginx
|
||||
Namespace: default
|
||||
Labels: run=nginx
|
||||
Selector: run=nginx
|
||||
Type: LoadBalancer
|
||||
IP:
|
||||
LoadBalancer Ingress: 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89, ...
|
||||
Port: http 80/TCP
|
||||
Endpoints: <none>
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
Note the 'LoadBalancer Ingress' addresses of your Federated Service
|
||||
correspond with the 'LoadBalancer Ingress' addresses of all of the
|
||||
underlying Kubernetes services (once these have been allocated - this
|
||||
may take a few seconds). For inter-cluster and inter-cloud-provider
|
||||
networking between service shards to work correctly, your services
|
||||
need to have an externally visible IP address. [Service Type:
|
||||
Loadbalancer](/docs/user-guide/services/#type-loadbalancer)
|
||||
is typically used for this, although other options
|
||||
(e.g. [External IP's](/docs/user-guide/services/#external-ips)) exist.
|
||||
|
||||
Note also that we have not yet provisioned any backend Pods to receive
|
||||
the network traffic directed to these addresses (i.e. 'Service
|
||||
Endpoints'), so the Federated Service does not yet consider these to
|
||||
be healthy service shards, and has accordingly not yet added their
|
||||
addresses to the DNS records for this Federated Service (more on this
|
||||
aspect later).
|
||||
|
||||
## Adding backend pods
|
||||
|
||||
To render the underlying service shards healthy, we need to add
|
||||
backend Pods behind them. This is currently done directly against the
|
||||
API endpoints of the underlying clusters (although in future the
|
||||
Federation server will be able to do all this for you with a single
|
||||
command, to save you the trouble). For example, to create backend Pods
|
||||
in 13 underlying clusters:
|
||||
|
||||
``` shell
|
||||
for CLUSTER in asia-east1-c asia-east1-a asia-east1-b \
|
||||
europe-west1-d europe-west1-c europe-west1-b \
|
||||
us-central1-f us-central1-a us-central1-b us-central1-c \
|
||||
us-east1-d us-east1-c us-east1-b
|
||||
do
|
||||
kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
|
||||
done
|
||||
```
|
||||
|
||||
Note that `kubectl run` automatically adds the `run=nginx` labels required to associate the backend pods with their services.
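
If you want to confirm that association in any one cluster, a quick optional check might look like this sketch, reusing a context name from the loop above:

``` shell
kubectl --context=us-central1-a get pods -l run=nginx
```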
|
||||
|
||||
## Verifying public DNS records
|
||||
|
||||
Once the above Pods have successfully started and have begun listening
|
||||
for connections, Kubernetes will report them as healthy endpoints of
|
||||
the service in that cluster (via automatic health checks). The Cluster
|
||||
Federation will in turn consider each of these
|
||||
service 'shards' to be healthy, and place them into service by
|
||||
automatically configuring corresponding public DNS records. You can
|
||||
use your preferred interface to your configured DNS provider to verify
|
||||
this. For example, if your Federation is configured to use Google
|
||||
Cloud DNS, and a managed DNS domain 'example.com':
|
||||
|
||||
``` shell
|
||||
$ gcloud dns managed-zones describe example-dot-com
|
||||
creationTime: '2016-06-26T18:18:39.229Z'
|
||||
description: Example domain for Kubernetes Cluster Federation
|
||||
dnsName: example.com.
|
||||
id: '3229332181334243121'
|
||||
kind: dns#managedZone
|
||||
name: example-dot-com
|
||||
nameServers:
|
||||
- ns-cloud-a1.googledomains.com.
|
||||
- ns-cloud-a2.googledomains.com.
|
||||
- ns-cloud-a3.googledomains.com.
|
||||
- ns-cloud-a4.googledomains.com.
|
||||
```
|
||||
|
||||
``` shell
|
||||
$ gcloud dns record-sets list --zone example-dot-com
|
||||
NAME TYPE TTL DATA
|
||||
example.com. NS 21600 ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
|
||||
example.com. SOA 21600 ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
|
||||
nginx.mynamespace.myfederation.svc.example.com. A 180 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89,...
|
||||
nginx.mynamespace.myfederation.svc.us-central1-a.example.com. A 180 104.197.247.191
|
||||
nginx.mynamespace.myfederation.svc.us-central1-b.example.com. A 180 104.197.244.180
|
||||
nginx.mynamespace.myfederation.svc.us-central1-c.example.com. A 180 104.197.245.170
|
||||
nginx.mynamespace.myfederation.svc.us-central1-f.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.us-central1.example.com.
|
||||
nginx.mynamespace.myfederation.svc.us-central1.example.com. A 180 104.197.247.191, 104.197.244.180, 104.197.245.170
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-a.example.com. A 180 130.211.57.243
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-b.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.asia-east1.example.com.
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-c.example.com. A 180 130.211.56.221
|
||||
nginx.mynamespace.myfederation.svc.asia-east1.example.com. A 180 130.211.57.243, 130.211.56.221
|
||||
nginx.mynamespace.myfederation.svc.europe-west1.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.example.com.
|
||||
nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.europe-west1.example.com.
|
||||
... etc.
|
||||
```
|
||||
|
||||
Note: If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example:
|
||||
|
||||
``` shell
|
||||
$ aws route53 list-hosted-zones
|
||||
```
|
||||
and
|
||||
``` shell
|
||||
$ aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
|
||||
```
|
||||
|
||||
Whatever DNS provider you use, any DNS query tool (for example 'dig'
|
||||
or 'nslookup') will of course also allow you to see the records
|
||||
created by the Federation for you. Note that you should either point
|
||||
these tools directly at your DNS provider (e.g. `dig
|
||||
@ns-cloud-e1.googledomains.com...`) or expect delays on the order of
|
||||
your configured TTL (180 seconds, by default) before seeing updates,
|
||||
due to caching by intermediate DNS servers.
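
For example, a direct query against the provider's nameservers (record and nameserver names taken from the Google Cloud DNS output above) is sketched below:

``` shell
dig @ns-cloud-e1.googledomains.com nginx.mynamespace.myfederation.svc.example.com. A
```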
|
||||
|
||||
### Some notes about the above example
|
||||
|
||||
1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example, in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
|
||||
2. Similarly, there are regional 'A' records which include all healthy shards in that region. For example, 'us-central1'. These regional records are useful for clients which do not have a particular zone preference, and as a building block for the automated locality and failover mechanism described below.
|
||||
3. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (i.e. Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
|
||||
4. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses ('nginx.mynamespace.myfederation.svc.example.com.').
|
||||
|
||||
The above set of DNS records is automatically kept in sync with the
|
||||
current state of health of all service shards globally by the
|
||||
Federated Service system. DNS resolver libraries (which are invoked by
|
||||
all clients) automatically traverse the hierarchy of 'CNAME' and 'A'
|
||||
records to return the correct set of healthy IP addresses. Clients can
|
||||
then select any one of the returned addresses to initiate a network
|
||||
connection (and fail over automatically to one of the other equivalent
|
||||
addresses if required).
|
||||
|
||||
## Discovering a federated service
|
||||
|
||||
### From pods inside your federated clusters
|
||||
|
||||
By default, Kubernetes clusters come pre-configured with a
|
||||
cluster-local DNS server ('KubeDNS'), as well as an intelligently
|
||||
constructed DNS search path which together ensure that DNS queries
|
||||
like "myservice", "myservice.mynamespace",
|
||||
"bobsservice.othernamespace" etc issued by your software running
|
||||
inside Pods are automatically expanded and resolved correctly to the
|
||||
appropriate service IP of services running in the local cluster.
|
||||
|
||||
With the introduction of Federated Services and Cross-Cluster Service
|
||||
Discovery, this concept is extended to cover Kubernetes services
|
||||
running in any other cluster across your Cluster Federation, globally.
|
||||
To take advantage of this extended range, you use a slightly different
|
||||
DNS name (of the form "<servicename>.<namespace>.<federationname>",
|
||||
e.g. myservice.mynamespace.myfederation) to resolve Federated
|
||||
Services. Using a different DNS name also avoids having your existing
|
||||
applications accidentally traverse cross-zone or cross-region
|
||||
networks and incur possibly unwanted network charges or
|
||||
latency, without explicitly opting in to this behavior.
|
||||
|
||||
So, using our NGINX example service above, and the Federated Service
|
||||
DNS name form just described, let's consider an example: A Pod in a
|
||||
cluster in the `us-central1-f` availability zone needs to contact our
|
||||
NGINX service. Rather than use the service's traditional cluster-local
|
||||
DNS name (```"nginx.mynamespace"```, which is automatically expanded
|
||||
to ```"nginx.mynamespace.svc.cluster.local"```) it can now use the
|
||||
service's Federated DNS name, which is
|
||||
```"nginx.mynamespace.myfederation"```. This will be automatically
|
||||
expanded and resolved to the closest healthy shard of the NGINX
|
||||
service, wherever in the world that may be. If a healthy shard exists
|
||||
in the local cluster, that service's cluster-local (typically
|
||||
10.x.y.z) IP address will be returned (by the cluster-local KubeDNS).
|
||||
This is almost exactly equivalent to non-federated service resolution
|
||||
(almost because KubeDNS actually returns both a CNAME and an A record
|
||||
for local federated services, but applications will be oblivious
|
||||
to this minor technical difference).
|
||||
|
||||
But if the service does not exist in the local cluster (or it exists
|
||||
but has no healthy backend pods), the DNS query is automatically
|
||||
expanded to
|
||||
```"nginx.mynamespace.myfederation.svc.us-central1-f.example.com"```
|
||||
(i.e. logically "find the external IP of one of the shards closest to
|
||||
my availability zone"). This expansion is performed automatically by
|
||||
KubeDNS, which returns the associated CNAME record. This results in
|
||||
automatic traversal of the hierarchy of DNS records in the above
|
||||
example, and ends up at one of the external IP's of the Federated
|
||||
Service in the local us-central1 region (i.e. 104.197.247.191,
|
||||
104.197.244.180 or 104.197.245.170).
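
To make the two resolution paths concrete, here is a sketch of the lookups such a pod might issue (output omitted; names follow the example above):

``` shell
# Cluster-local form: resolves to the local shard's cluster IP, if one exists and is healthy.
nslookup nginx.mynamespace
# Federated form: resolves to the closest healthy shard, possibly in another cluster or region.
nslookup nginx.mynamespace.myfederation
```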
|
||||
|
||||
It is of course possible to explicitly target service shards in
|
||||
availability zones and regions other than the ones local to a Pod by
|
||||
specifying the appropriate DNS names explicitly, and not relying on
|
||||
automatic DNS expansion. For example,
|
||||
"nginx.mynamespace.myfederation.svc.europe-west1.example.com" will
|
||||
resolve to all of the currently healthy service shards in Europe, even
|
||||
if the Pod issuing the lookup is located in the U.S., and irrespective
|
||||
of whether or not there are healthy shards of the service in the U.S.
|
||||
This is useful for remote monitoring and other similar applications.
|
||||
|
||||
### From other clients outside your federated clusters
|
||||
|
||||
Much of the above discussion applies equally to external clients,
|
||||
except that the automatic DNS expansion described is no longer
|
||||
possible. So external clients need to specify one of the fully
|
||||
qualified DNS names of the Federated Service, be that a zonal,
|
||||
regional, or global name. For convenience, it is often a good
|
||||
idea to manually configure additional static CNAME records in your
|
||||
service, for example:
|
||||
|
||||
``` shell
|
||||
eu.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.europe-west1.example.com.
|
||||
us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.example.com.
|
||||
nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com.
|
||||
```
|
||||
That way your clients can always use the short form on the left, and
|
||||
always be automatically routed to the closest healthy shard on their
|
||||
home continent. All of the required failover is handled for you
|
||||
automatically by Kubernetes Cluster Federation. Future releases will
|
||||
improve upon this even further.
|
||||
|
||||
## Handling failures of backend pods and whole clusters
|
||||
|
||||
Standard Kubernetes service cluster-IP's already ensure that
|
||||
non-responsive individual Pod endpoints are automatically taken out of
|
||||
service with low latency (a few seconds). In addition, as alluded to
|
||||
above, the Kubernetes Cluster Federation system automatically monitors
|
||||
the health of clusters and the endpoints behind all of the shards of
|
||||
your Federated Service, taking shards in and out of service as
|
||||
required (e.g. when all of the endpoints behind a service, or perhaps
|
||||
the entire cluster or availability zone go down, or conversely recover
|
||||
from an outage). Due to the latency inherent in DNS caching (the cache
|
||||
timeout, or TTL, for Federated Service DNS records is configured to 3
|
||||
minutes, by default, but can be adjusted), it may take up to that long
|
||||
for all clients to completely fail over to an alternative cluster in
|
||||
the case of catastrophic failure. However, given the number of
|
||||
discrete IP addresses which can be returned for each regional service
|
||||
endpoint (see e.g. us-central1 above, which has three alternatives)
|
||||
many clients will fail over automatically to one of the alternative
|
||||
IPs in less time than that, given appropriate configuration.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
#### I cannot connect to my cluster federation API
|
||||
Check that your
|
||||
|
||||
1. Client (typically kubectl) is correctly configured (including API endpoints and login credentials), and
|
||||
2. Cluster Federation API server is running and network-reachable.
|
||||
|
||||
See the [federation admin guide](/docs/admin/federation/) to learn
|
||||
how to bring up a cluster federation correctly (or have your cluster administrator do this for you), and how to correctly configure your client.
|
||||
|
||||
#### I can create a federated service successfully against the cluster federation API, but no matching services are created in my underlying clusters
|
||||
Check that:
|
||||
|
||||
1. Your clusters are correctly registered in the Cluster Federation API (`kubectl describe clusters`)
|
||||
2. Your clusters are all 'Active'. This means that the cluster Federation system was able to connect and authenticate against the clusters' endpoints. If not, consult the logs of the federation-controller-manager pod to ascertain what the failure might be. (`kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -oname)`)
|
||||
3. The login credentials provided to the Cluster Federation API for the clusters have the correct authorization and quota to create services in the relevant namespace in the clusters. Again, you should see associated error messages providing more detail in the above log output if this is not the case.
|
||||
4. Whether any other error is preventing the service creation operation from succeeding (look for `service-controller` errors in the output of `kubectl logs federation-controller-manager --namespace federation`).
|
||||
|
||||
#### I can create a federated service successfully, but no matching DNS records are created in my DNS provider
|
||||
Check that:
|
||||
|
||||
1. Your federation name, DNS provider, and DNS domain name are configured correctly. Consult the [federation admin guide](/docs/admin/federation/) or [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) to learn
|
||||
how to configure your Cluster Federation system's DNS provider (or have your cluster administrator do this for you).
|
||||
2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`)
|
||||
3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in its logs explaining in more detail what's failing).
|
||||
|
||||
#### Matching DNS records are created in my DNS provider, but clients are unable to resolve against those names
|
||||
Check that:
|
||||
|
||||
1. The DNS registrar that manages your federation DNS domain has been correctly configured to point to your configured DNS provider's nameservers. See for example [Google Domains Documentation](https://support.google.com/domains/answer/3290309?hl=en&ref_topic=3251230) and [Google Cloud DNS Documentation](https://cloud.google.com/dns/update-name-servers), or equivalent guidance from your domain registrar and DNS provider.
|
||||
|
||||
#### This troubleshooting guide did not help me solve my problem
|
||||
|
||||
1. Please use one of our [support channels](http://kubernetes.io/docs/troubleshooting/) to seek assistance.
|
||||
|
||||
## For more information
|
||||
|
||||
* [Federation proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md) details use cases that motivated this work.
|
||||
[Cross-cluster Service Discovery Using Federation](/docs/concepts/cluster-administration/federation-service-discovery/)
|
||||
|
|
|
@ -30,7 +30,7 @@ The following topics in the Kubernetes User Guide can help you run applications
|
|||
1. [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy/)
|
||||
1. [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward/)
|
||||
|
||||
Before running examples in the user guides, please ensure you have completed the [prerequisites](/docs/user-guide/prereqs/).
|
||||
Before running examples in the user guides, please ensure you have completed [installing kubectl](/docs/tasks/kubectl/install/).
|
||||
|
||||
## Kubernetes Concepts
|
||||
|
||||
|
|
|
@ -2,4 +2,6 @@
|
|||
title: Jobs
|
||||
---
|
||||
|
||||
{% include user-guide-content-moved.md %}
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
[Run to Completion Finite Workloads](/docs/concepts/jobs/run-to-completion-finite-workloads/)
|
|
@ -4,4 +4,4 @@ title: Coarse Parallel Processing using a Work Queue
|
|||
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
[Coarse Parallel Processing Using a Work Queue](/docs/tasks/job/work-queue-1/)
|
||||
[Coarse Parallel Processing Using a Work Queue](/docs/tasks/job/coarse-parallel-processing-work-queue/)
|
||||
|
|
|
@ -1,314 +1,7 @@
|
|||
---
|
||||
assignees:
|
||||
- mikedanese
|
||||
- thockin
|
||||
title: Authenticating Across Clusters with kubeconfig
|
||||
---
|
||||
|
||||
Authentication in Kubernetes can differ for different individuals.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
- A running kubelet might have one way of authenticating (e.g. certificates).
|
||||
- Users might have a different way of authenticating (e.g. tokens).
|
||||
- Administrators might have a list of certificates which they provide individual users.
|
||||
- There may be multiple clusters, and we may want to define them all in one place, giving users the ability to use their own certificates and reuse the same global configuration.
|
||||
|
||||
So in order to easily switch between multiple clusters, for multiple users, a kubeconfig file was defined.
|
||||
|
||||
This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
|
||||
|
||||
Multiple kubeconfig files are allowed, if specified explicitly. At runtime they are loaded and merged along with override options specified from the command line (see [rules](#loading-and-merging-rules) below).
|
||||
|
||||
## Related discussion
|
||||
|
||||
http://issue.k8s.io/1755
|
||||
|
||||
## Components of a kubeconfig file
|
||||
|
||||
### Example kubeconfig file
|
||||
|
||||
```yaml
|
||||
current-context: federal-context
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
api-version: v1
|
||||
server: http://cow.org:8080
|
||||
name: cow-cluster
|
||||
- cluster:
|
||||
certificate-authority: path/to/my/cafile
|
||||
server: https://horse.org:4443
|
||||
name: horse-cluster
|
||||
- cluster:
|
||||
insecure-skip-tls-verify: true
|
||||
server: https://pig.org:443
|
||||
name: pig-cluster
|
||||
contexts:
|
||||
- context:
|
||||
cluster: horse-cluster
|
||||
namespace: chisel-ns
|
||||
user: green-user
|
||||
name: federal-context
|
||||
- context:
|
||||
cluster: pig-cluster
|
||||
namespace: saw-ns
|
||||
user: black-user
|
||||
name: queen-anne-context
|
||||
kind: Config
|
||||
preferences:
|
||||
colors: true
|
||||
users:
|
||||
- name: blue-user
|
||||
user:
|
||||
token: blue-token
|
||||
- name: green-user
|
||||
user:
|
||||
client-certificate: path/to/my/client/cert
|
||||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
### Breakdown/explanation of components
|
||||
|
||||
#### cluster
|
||||
|
||||
```yaml
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority: path/to/my/cafile
|
||||
server: https://horse.org:4443
|
||||
name: horse-cluster
|
||||
- cluster:
|
||||
insecure-skip-tls-verify: true
|
||||
server: https://pig.org:443
|
||||
name: pig-cluster
|
||||
```
|
||||
|
||||
A `cluster` contains endpoint data for a Kubernetes cluster. This includes the fully
|
||||
qualified URL for the Kubernetes apiserver, as well as the cluster's certificate
|
||||
authority or `insecure-skip-tls-verify: true`, if the cluster's serving
|
||||
certificate is not signed by a system trusted certificate authority.
|
||||
A `cluster` has a name (nickname) which acts as a dictionary key for the cluster
|
||||
within this kubeconfig file. You can add or modify `cluster` entries using
|
||||
[`kubectl config set-cluster`](/docs/user-guide/kubectl/kubectl_config_set-cluster/).
|
||||
|
||||
#### user
|
||||
|
||||
```yaml
|
||||
users:
|
||||
- name: blue-user
|
||||
user:
|
||||
token: blue-token
|
||||
- name: green-user
|
||||
user:
|
||||
client-certificate: path/to/my/client/cert
|
||||
client-key: path/to/my/client/key
|
||||
```
|
||||
|
||||
A `user` defines client credentials for authenticating to a Kubernetes cluster. A
|
||||
`user` has a name (nickname) which acts as its key within the list of user entries
|
||||
after kubeconfig is loaded/merged. Available credentials are `client-certificate`,
|
||||
`client-key`, `token`, and `username/password`. `username/password` and `token`
|
||||
are mutually exclusive, but client certs and keys can be combined with them.
|
||||
You can add or modify `user` entries using
|
||||
[`kubectl config set-credentials`](/docs/user-guide/kubectl/kubectl_config_set-credentials).
|
||||
|
||||
#### context
|
||||
|
||||
```yaml
|
||||
contexts:
|
||||
- context:
|
||||
cluster: horse-cluster
|
||||
namespace: chisel-ns
|
||||
user: green-user
|
||||
name: federal-context
|
||||
```
|
||||
|
||||
A `context` defines a named [`cluster`](#cluster), [`user`](#user), [`namespace`](/docs/user-guide/namespaces) tuple
|
||||
which is used to send requests to the specified cluster using the provided authentication info and
|
||||
namespace. Each of the three is optional; it is valid to specify a context with only one of `cluster`,
|
||||
`user`, `namespace`, or to specify none. Unspecified values, or named values that don't have corresponding
|
||||
entries in the loaded kubeconfig (e.g. if the context specified a `pink-user` for the above kubeconfig file)
|
||||
will be replaced with the default. See [Loading and merging rules](#loading-and-merging) below for override/merge behavior.
|
||||
You can add or modify `context` entries with [`kubectl config set-context`](/docs/user-guide/kubectl/kubectl_config_set-context).
|
||||
|
||||
#### current-context
|
||||
|
||||
```yaml
|
||||
current-context: federal-context
|
||||
```
|
||||
|
||||
`current-context` is the nickname or 'key' for the cluster, user, namespace tuple that kubectl
|
||||
will use by default when loading config from this file. You can override any of the values in kubectl
|
||||
from the command line by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
|
||||
You can change the `current-context` with [`kubectl config use-context`](/docs/user-guide/kubectl/kubectl_config_use-context).
|
||||
|
||||
#### miscellaneous
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
preferences:
|
||||
colors: true
|
||||
```
|
||||
|
||||
`apiVersion` and `kind` identify the version and schema for the client parser and should not
|
||||
be edited manually.
|
||||
|
||||
`preferences` specify optional (and currently unused) kubectl preferences.
|
||||
|
||||
## Viewing kubeconfig files
|
||||
|
||||
`kubectl config view` will display the current kubeconfig settings. By default
|
||||
it will show you all loaded kubeconfig settings; you can filter the view to just
|
||||
the settings relevant to the `current-context` by passing `--minify`. See
|
||||
[`kubectl config view`](/docs/user-guide/kubectl/kubectl_config_view) for other options.
|
||||
|
||||
## Building your own kubeconfig file
|
||||
|
||||
Note that if you are deploying Kubernetes via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
|
||||
|
||||
In any case, you can easily use this file as a template to create your own kubeconfig files.
|
||||
|
||||
So, let's do a quick walkthrough of the basics of the above file so you can easily modify it as needed...
|
||||
|
||||
The above file would likely correspond to an api-server which was launched using the `--token-auth-file=tokens.csv` option, where the tokens.csv file looked something like this:
|
||||
|
||||
```conf
|
||||
blue-user,blue-user,1
|
||||
mister-red,mister-red,2
|
||||
```
|
||||
|
||||
Also, since we have other users who validate using **other** mechanisms, the api-server would probably have been launched with other authentication options (there are many such options; make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
|
||||
|
||||
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
|
||||
- Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
|
||||
|
||||
In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All this information would be handled for us by the kubeconfig file.
|
||||
|
||||
## Loading and merging rules
|
||||
|
||||
The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order:
|
||||
|
||||
1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules:
|
||||
|
||||
|
||||
If the `CommandLineLocation` (the value of the `kubeconfig` command line option) is set, use this file only. No merging. Only one instance of this flag is allowed.
|
||||
|
||||
|
||||
Else, if `EnvVarLocation` (the value of `$KUBECONFIG`) is available, use it as a list of files that should be merged.
|
||||
Merge files together based on the following rules.
|
||||
Empty filenames are ignored. Files with non-deserializable content produce errors.
|
||||
The first file to set a particular value or map key wins and the value or map key is never changed.
|
||||
This means that the first file to set `CurrentContext` will have its context preserved. It also means that if two files specify a "red-user", only values from the first file's red-user are used. Even non-conflicting entries from the second file's "red-user" are discarded.
|
||||
|
||||
|
||||
Otherwise, use HomeDirectoryLocation (`~/.kube/config`) with no merging.
|
||||
1. Determine the context to use based on the first hit in this chain
|
||||
1. command line argument - the value of the `context` command line option
|
||||
1. `current-context` from the merged kubeconfig file
|
||||
1. Empty is allowed at this stage
|
||||
1. Determine the cluster info and user to use. At this point, we may or may not have a context. They are built based on the first hit in this chain. (run it twice, once for user, once for cluster)
|
||||
1. command line argument - `user` for user name and `cluster` for cluster name
|
||||
1. If context is present, then use the context's value
|
||||
1. Empty is allowed
|
||||
1. Determine the actual cluster info to use. At this point, we may or may not have a cluster info. Build each piece of the cluster info based on the chain (first hit wins):
|
||||
1. command line arguments - `server`, `api-version`, `certificate-authority`, and `insecure-skip-tls-verify`
|
||||
1. If cluster info is present and a value for the attribute is present, use it.
|
||||
1. If you don't have a server location, error.
|
||||
1. Determine the actual user info to use. User is built using the same rules as cluster info, EXCEPT that you can only have one authentication technique per user.
|
||||
1. Load precedence is 1) command line flag, 2) user fields from kubeconfig
|
||||
1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`.
|
||||
1. If there are two conflicting techniques, fail.
|
||||
1. For any information still missing, use default values and potentially prompt for authentication information
|
||||
1. All file references inside a kubeconfig file are resolved relative to the location of the kubeconfig file itself. When file references are presented on the command line
|
||||
they are resolved relative to the current working directory. When paths are saved in the ~/.kube/config, relative paths are stored relatively while absolute paths are stored absolutely.
|
||||
|
||||
Any path in a kubeconfig file is resolved relative to the location of the kubeconfig file itself.
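
As a quick sketch of the merge behavior (file names here are illustrative), you can point `$KUBECONFIG` at a colon-separated list and inspect the result; the first file to set a given value or map key wins, per the rules above:

```shell
$ KUBECONFIG=$HOME/.kube/config:$HOME/.kube/other-config kubectl config view
```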
|
||||
|
||||
|
||||
## Manipulation of kubeconfig via `kubectl config <subcommand>`
|
||||
|
||||
To make it easier to manipulate kubeconfig files, there is a series of subcommands to `kubectl config` to help.
|
||||
See [kubectl/kubectl_config.md](/docs/user-guide/kubectl/kubectl_config) for help.
|
||||
|
||||
### Example
|
||||
|
||||
```shell
|
||||
$ kubectl config set-credentials myself --username=admin --password=secret
|
||||
$ kubectl config set-cluster local-server --server=http://localhost:8080
|
||||
$ kubectl config set-context default-context --cluster=local-server --user=myself
|
||||
$ kubectl config use-context default-context
|
||||
$ kubectl config set contexts.default-context.namespace the-right-prefix
|
||||
$ kubectl config view
|
||||
```
|
||||
|
||||
produces this output:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
server: http://localhost:8080
|
||||
name: local-server
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local-server
|
||||
namespace: the-right-prefix
|
||||
user: myself
|
||||
name: default-context
|
||||
current-context: default-context
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: myself
|
||||
user:
|
||||
password: secret
|
||||
username: admin
|
||||
```
|
||||
|
||||
and a kubeconfig file that looks like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
server: http://localhost:8080
|
||||
name: local-server
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local-server
|
||||
namespace: the-right-prefix
|
||||
user: myself
|
||||
name: default-context
|
||||
current-context: default-context
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: myself
|
||||
user:
|
||||
password: secret
|
||||
username: admin
|
||||
```
|
||||
|
||||
#### Commands for the example file
|
||||
|
||||
```shell
|
||||
$ kubectl config set preferences.colors true
|
||||
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
|
||||
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
|
||||
$ kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true
|
||||
$ kubectl config set-credentials blue-user --token=blue-token
|
||||
$ kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key
|
||||
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
|
||||
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
|
||||
$ kubectl config use-context federal-context
|
||||
```
|
||||
|
||||
### Final notes for tying it all together
|
||||
|
||||
So, tying this all together, a quick start to create your own kubeconfig file:
|
||||
|
||||
- Take a good look and understand how your api-server is being launched: You need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.
|
||||
|
||||
- Replace the snippet above with information for your cluster's api-server endpoint.
|
||||
|
||||
- Make sure your api-server is launched in such a way that at least one user (i.e. green-user) credentials are provided to it. You will of course have to look at api-server documentation in order to determine the current state-of-the-art in terms of providing authentication details.
|
||||
[Authenticating Across Clusters with kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
|
|
|
@ -19,7 +19,7 @@ $ source <(kubectl completion zsh) # setup autocomplete in zsh
|
|||
## Kubectl Context and Configuration
|
||||
|
||||
Set which Kubernetes cluster `kubectl` communicates with and modify configuration
|
||||
information. See [kubeconfig file](/docs/user-guide/kubeconfig-file/) documentation for
|
||||
information. See [Authenticating Across Clusters with kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) documentation for
|
||||
detailed config file information.
|
||||
|
||||
```console
|
||||
|
|
|
@ -5,7 +5,7 @@ assignees:
|
|||
title: kubectl Overview
|
||||
---
|
||||
|
||||
`kubectl` is a command line interface for running commands against Kubernetes clusters. This overview covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/user-guide/kubectl) reference documentation. For installation instructions see [prerequisites](/docs/user-guide/prereqs).
|
||||
`kubectl` is a command line interface for running commands against Kubernetes clusters. This overview covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/user-guide/kubectl) reference documentation. For installation instructions see [installing kubectl](/docs/tasks/kubectl/install/).
|
||||
|
||||
## Syntax
|
||||
|
||||
|
|
|
@ -2,140 +2,6 @@
|
|||
title: Creating an External Load Balancer
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
## Overview
|
||||
|
||||
When creating a service, you have the option of automatically creating a
|
||||
cloud network load balancer. This provides an
|
||||
externally-accessible IP address that sends traffic to the correct port on your
|
||||
cluster nodes _provided your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package_.
|
||||
|
||||
## External Load Balancer Providers
|
||||
|
||||
It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.
|
||||
|
||||
When the service type is set to `LoadBalancer`, Kubernetes provides functionality equivalent to type=`ClusterIP` to pods within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the Kubernetes VMs. The Kubernetes service controller automates the creation of the external load balancer, health checks (if needed), firewall rules (if needed) and retrieves the external IP allocated by the cloud provider and populates it in the service object.
|
||||
|
||||
## Configuration file
|
||||
|
||||
To create an external load balancer, add the following line to your
|
||||
[service configuration file](/docs/user-guide/services/operations/#service-configuration-file):
|
||||
|
||||
```json
|
||||
"type": "LoadBalancer"
|
||||
```
|
||||
|
||||
Your configuration file might look like:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "example-service"
|
||||
},
|
||||
"spec": {
|
||||
"ports": [{
|
||||
"port": 8765,
|
||||
"targetPort": 9376
|
||||
}],
|
||||
"selector": {
|
||||
"app": "example"
|
||||
},
|
||||
"type": "LoadBalancer"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Using kubectl
|
||||
|
||||
You can alternatively create the service with the `kubectl expose` command and
|
||||
its `--type=LoadBalancer` flag:
|
||||
|
||||
```bash
|
||||
$ kubectl expose rc example --port=8765 --target-port=9376 \
|
||||
--name=example-service --type=LoadBalancer
|
||||
```
|
||||
|
||||
This command creates a new service using the same selectors as the referenced
|
||||
resource (in the case of the example above, a replication controller named
|
||||
`example`).
|
||||
|
||||
For more information, including optional flags, refer to the
|
||||
[`kubectl expose` reference](/docs/user-guide/kubectl/kubectl_expose/).
|
||||
|
||||
## Finding your IP address
|
||||
|
||||
You can find the IP address created for your service by getting the service
|
||||
information through `kubectl`:
|
||||
|
||||
```bash
|
||||
$ kubectl describe services example-service
|
||||
Name: example-service
|
||||
Selector: app=example
|
||||
Type: LoadBalancer
|
||||
IP: 10.67.252.103
|
||||
LoadBalancer Ingress: 123.45.678.9
|
||||
Port: <unnamed> 80/TCP
|
||||
NodePort: <unnamed> 32445/TCP
|
||||
Endpoints: 10.64.0.4:80,10.64.1.5:80,10.64.2.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
The IP address is listed next to `LoadBalancer Ingress`.
|
||||
|
||||
## Loss of client source IP for external traffic
|
||||
|
||||
Due to the implementation of this feature, the source IP for sessions as seen in the target container will *not be the original source IP* of the client. This is the default behavior as of Kubernetes v1.5. However, starting in v1.5, an optional beta feature has been added
|
||||
that will preserve the client Source IP for GCE/GKE environments. This feature will be phased in for other cloud providers in subsequent releases.
|
||||
|
||||
## Annotation to modify the LoadBalancer behavior for preservation of Source IP
|
||||
In 1.5, a Beta feature has been added that changes the behavior of the external LoadBalancer feature.
|
||||
|
||||
This feature can be activated by adding the beta annotation below to the metadata section of the Service Configuration file.
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "example-service",
|
||||
"annotations": {
|
||||
"service.beta.kubernetes.io/external-traffic": "OnlyLocal"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"ports": [{
|
||||
"port": 8765,
|
||||
"targetPort": 9376
|
||||
}],
|
||||
"selector": {
|
||||
"app": "example"
|
||||
},
|
||||
"type": "LoadBalancer"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Note that this feature is not currently implemented for all cloud providers/environments.**
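
For an existing Service, the same annotation can also be applied imperatively; a minimal sketch:

```bash
$ kubectl annotate service example-service "service.beta.kubernetes.io/external-traffic=OnlyLocal"
```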
|
||||
|
||||
### Caveats and Limitations when preserving source IPs
|
||||
|
||||
GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB
|
||||
kube-proxy rules which would correctly balance across all endpoints.
|
||||
|
||||
With the new functionality, the external traffic will not be equally load balanced across pods, but rather
|
||||
equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability
|
||||
for specifying the weight per node, they balance equally across all target nodes, disregarding the number of
|
||||
pods on each node).
|
||||
|
||||
We can, however, state that for NumServicePods << NumNodes or NumServicePods >> NumNodes, a fairly close-to-equal
|
||||
distribution will be seen, even without weights.
|
||||
|
||||
Once the external load balancers provide weights, this functionality can be added to the LB programming path.
|
||||
*Future Work: No support for weights is provided for the 1.4 release, but may be added at a future date*
|
||||
|
||||
Internal pod-to-pod traffic should behave similarly to ClusterIP services, with equal probability across all pods.
|
||||
[Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
|
||||
|
|
|
@ -4,60 +4,6 @@ assignees:
|
|||
title: Resource Usage Monitoring
|
||||
---
|
||||
|
||||
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
## Overview
|
||||
|
||||
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](https://releases.k8s.io/{{page.githubbranch}}/DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization), [Google Cloud Monitoring](https://cloud.google.com/monitoring/) and many others described in more details [here](https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md). The overall architecture of the service can be seen below:
|
||||
|
||||

|
||||
|
||||
Let's look at some of the other components in more detail.
|
||||
|
||||
### cAdvisor
|
||||
|
||||
cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers in the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the 'root' container on the machine.
|
||||
|
||||
On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor's UI that shows the overall machine usage:
|
||||
|
||||

|
||||
|
||||
### Kubelet
|
||||
|
||||
The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API.
|
||||
|
||||
## Storage Backends
|
||||
|
||||
### InfluxDB and Grafana
|
||||
|
||||
A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy-to-use API to write and fetch time series data. Heapster is set up to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in Pods. The pod exposes itself as a Kubernetes service, which is how Heapster discovers it.
|
||||
|
||||
The Grafana container serves Grafana's UI, which provides an easy-to-configure dashboard interface. The default installation for Kubernetes includes an example dashboard that monitors resource usage of the cluster and the pods inside it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/storage-schema.md#metrics).
|
||||
|
||||
Here is a video showing how to monitor a Kubernetes cluster using Heapster, InfluxDB, and Grafana:
|
||||
|
||||
[](http://www.youtube.com/watch?v=SZgqjMrxo3g)
|
||||
|
||||
Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
|
||||
|
||||

|
||||
|
||||
### Google Cloud Monitoring
|
||||
|
||||
Google Cloud Monitoring is a hosted monitoring service that allows you to visualize and alert on important metrics in your application. Heapster can be set up to automatically push all collected metrics to Google Cloud Monitoring. These metrics are then available in the [Cloud Monitoring Console](https://app.google.stackdriver.com/). This storage backend is the easiest to set up and maintain. The monitoring console allows you to easily create and customize dashboards using the exported data.
|
||||
|
||||
Here is a video showing how to set up and run a Google Cloud Monitoring-backed Heapster:
|
||||
|
||||
[](http://www.youtube.com/watch?v=xSMNR2fcoLs)
|
||||
|
||||
Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide resource usage:
|
||||
|
||||

|
||||
|
||||
## Try it out!
|
||||
|
||||
Now that you've learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/kubernetes/heapster) is available on GitHub. It contains detailed instructions to set up Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues via the troubleshooting [channels](/docs/troubleshooting/).
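A quick way to see whether your cluster already has Heapster and a monitoring backend deployed is `kubectl cluster-info`, which typically lists their proxy URLs alongside the master when the add-ons are installed:

```shell
# Lists the master and any cluster add-on services, such as Heapster,
# monitoring-influxdb, and monitoring-grafana, if they are deployed
kubectl cluster-info
```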
|
||||
|
||||
***
|
||||
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
|
||||
*This article was originally posted in [Kubernetes blog](http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html).*
|
||||
[Resource Usage Monitoring](/docs/concepts/cluster-administration/resource-usage-monitoring/)
|
||||
|
|
|
@ -2,6 +2,7 @@
|
|||
assignees:
|
||||
- davidopp
|
||||
- kevin-wangzefeng
|
||||
- bsalamat
|
||||
title: Assigning Pods to Nodes
|
||||
---
|
||||
|
||||
|
@ -198,3 +199,216 @@ must be satisfied for the pod to schedule onto a node.
|
|||
|
||||
For more information on inter-pod affinity/anti-affinity, see the design doc
|
||||
[here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/podaffinity.md).
|
||||
|
||||
## Taints and tolerations (beta feature)
|
||||
|
||||
Node affinity, described earlier, is a property of *pods* that *attracts* them to a set
|
||||
of nodes (either as a preference or a hard requirement). Taints are the opposite --
|
||||
they allow a *node* to *repel* a set of pods.
|
||||
|
||||
Taints and tolerations work together to ensure that pods are not scheduled
|
||||
onto inappropriate nodes. One or more taints are applied to a node; this
|
||||
marks that the node should not accept any pods that do not tolerate the taints.
|
||||
Tolerations are applied to pods, and allow (but do not require) the pods to schedule
|
||||
onto nodes with matching taints.
|
||||
|
||||
You add a taint to a node using [kubectl taint](https://kubernetes.io/docs/user-guide/kubectl/kubectl_taint/).
|
||||
For example,
|
||||
|
||||
```shell
|
||||
kubectl taint nodes node1 key=value:NoSchedule
|
||||
```
|
||||
|
||||
places a taint on node `node1`. The taint has key `key`, value `value`, and taint effect `NoSchedule`.
|
||||
This means that no pod will be able to schedule onto `node1` unless it has a matching toleration.
|
||||
You specify a toleration for a pod in the PodSpec. Both of the following tolerations "match" the
|
||||
taint created by the `kubectl taint` line above, and thus a pod with either toleration would be able
|
||||
to schedule onto `node1`:
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
- key: "key"
|
||||
operator: "Equal"
|
||||
value: "value"
|
||||
effect: "NoSchedule"
|
||||
```
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
- key: "key"
|
||||
operator: "Exists"
|
||||
effect: "NoSchedule"
|
||||
```
|
||||
|
||||
A toleration "matches" a taint if the `key`s are the same and the `effect`s are the same, and:
|
||||
|
||||
* the `operator` is `Exists` (in which case no `value` should be specified), or
|
||||
* the `operator` is `Equal` and the `value`s are equal
|
||||
|
||||
(`operator` defaults to `Equal` if not specified.)
|
||||
As a special case, an empty `key` with operator `Exists` matches all keys and all values.
|
||||
Also as a special case, empty `effect` matches all effects.
|
||||
|
||||
The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
|
||||
This is a "preference" or "soft" version of `NoSchedule` -- the system will *try* to avoid placing a
|
||||
pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is
|
||||
`NoExecute`, described later.
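As a small illustration, reusing the hypothetical `node1` and `key=value` from above, a soft taint is added the same way, and an existing taint can be removed again by repeating the key and effect with a trailing `-`:

```shell
# A "soft" taint: the scheduler tries to avoid the node but may still use it
kubectl taint nodes node1 key=value:PreferNoSchedule

# Remove the NoSchedule taint added earlier
kubectl taint nodes node1 key=value:NoSchedule-
```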
|
||||
|
||||
You can put multiple taints on the same node and multiple tolerations on the same pod.
|
||||
The way Kubernetes processes multiple taints and tolerations is like a filter: start
|
||||
with all of a node's taints, then ignore the ones for which the pod has a matching toleration; the
|
||||
remaining un-ignored taints have the indicated effects on the pod. In particular,
|
||||
|
||||
* if there is at least one un-ignored taint with effect `NoSchedule` then Kubernetes will not schedule
|
||||
the pod onto that node
|
||||
* if there is no un-ignored taint with effect `NoSchedule` but there is at least one un-ignored taint with
|
||||
effect `PreferNoSchedule` then Kubernetes will *try* to not schedule the pod onto the node
|
||||
* if there is at least one un-ignored taint with effect `NoExecute` then the pod will be evicted from
|
||||
the node (if it is already running on the node), and will not be
|
||||
scheduled onto the node (if it is not yet running on the node).
|
||||
|
||||
For example, imagine you taint a node like this
|
||||
|
||||
```shell
|
||||
kubectl taint nodes node1 key1=value1:NoSchedule
|
||||
kubectl taint nodes node1 key1=value1:NoExecute
|
||||
kubectl taint nodes node1 key2=value2:NoSchedule
|
||||
```
|
||||
|
||||
And a pod has two tolerations:
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
- key: "key1"
|
||||
operator: "Equal"
|
||||
value: "value1"
|
||||
effect: "NoSchedule"
|
||||
- key: "key1"
|
||||
operator: "Equal"
|
||||
value: "value1"
|
||||
effect: "NoExecute"
|
||||
```
|
||||
|
||||
In this case, the pod will not be able to schedule onto the node, because there is no
|
||||
toleration matching the third taint. But it will be able to continue running if it is
|
||||
already running on the node when the taint is added, because the third taint is the only
|
||||
one of the three that is not tolerated by the pod.
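If you want to double-check which taints are currently applied to a node, one simple (if slightly crude) way is to grep the `kubectl describe` output; the exact formatting may vary between kubectl versions:

```shell
kubectl describe node node1 | grep Taints
```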
|
||||
|
||||
Normally, if a taint with effect `NoExecute` is added to a node, then any pods that do
|
||||
not tolerate the taint will be evicted immediately, and any pods that do tolerate the
|
||||
taint will never be evicted. However, a toleration with `NoExecute` effect can specify
|
||||
an optional `tolerationSeconds` field that dictates how long the pod will stay bound
|
||||
to the node after the taint is added. For example,
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
- key: "key1"
|
||||
operator: "Equal"
|
||||
value: "value1"
|
||||
effect: "NoExecute"
|
||||
tolerationSeconds: 3600
|
||||
```
|
||||
|
||||
means that if this pod is running and a matching taint is added to the node, then
|
||||
the pod will stay bound to the node for 3600 seconds, and then be evicted. If the
|
||||
taint is removed before that time, the pod will not be evicted.
|
||||
|
||||
### Example use cases
|
||||
|
||||
Taints and tolerations are a flexible way to steer pods away from nodes or evict
|
||||
pods that shouldn't be running. A few of the use cases are
|
||||
|
||||
* **dedicated nodes**: If you want to dedicate a set of nodes for exclusive use by
|
||||
a particular set of users, you can add a taint to those nodes (say,
|
||||
`kubectl taint nodes nodename dedicated=groupName:NoSchedule`) and then add a corresponding
|
||||
toleration to their pods (this would be done most easily by writing a custom
|
||||
[admission controller](https://kubernetes.io/docs/admin/admission-controllers/)).
|
||||
The pods with the tolerations will then be allowed to use the tainted (dedicated) nodes as
|
||||
well as any other nodes in the cluster. If you want to dedicate the nodes to them *and*
|
||||
ensure they *only* use the dedicated nodes, then you should additionally add a label similar
|
||||
to the taint to the same set of nodes (e.g. `dedicated=groupName`), and the admission
|
||||
controller should additionally add a node affinity to require that the pods can only schedule
|
||||
onto nodes labeled with `dedicated=groupName` (a sketch of these commands follows this list).
|
||||
|
||||
* **nodes with special hardware**: In a cluster where a small subset of nodes have specialized
|
||||
hardware (for example GPUs), it is desirable to keep pods that don't need the specialized
|
||||
hardware off of those nodes, thus leaving room for later-arriving pods that do need the
|
||||
specialized hardware. This can be done by tainting the nodes that have the specialized
|
||||
hardware (e.g. `kubectl taint nodes nodename special=true:NoSchedule` or
|
||||
`kubectl taint nodes nodename special=true:PreferNoSchedule`) and adding a corresponding
|
||||
toleration to pods that use the special hardware. As in the dedicated nodes use case,
|
||||
it is probably easiest to apply the tolerations using a custom
|
||||
[admission controller](https://kubernetes.io/docs/admin/admission-controllers/).
|
||||
For example, the admission controller could use
|
||||
some characteristic(s) of the pod to determine that the pod should be allowed to use
|
||||
the special nodes and hence the admission controller should add the toleration.
|
||||
To ensure that the pods that need
|
||||
the special hardware *only* schedule onto the nodes that have the special hardware, you will need some
|
||||
additional mechanism, e.g. you could represent the special resource using
|
||||
[opaque integer resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#opaque-integer-resources-alpha-feature)
|
||||
and request it as a resource in the PodSpec, or you could label the nodes that have
|
||||
the special hardware and use node affinity on the pods that need the hardware.
|
||||
|
||||
* **per-pod-configurable eviction behavior when there are node problems (alpha feature)**,
|
||||
which is described in the next section.
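Here is the sketch referred to in the dedicated-nodes item above. `node1` and `groupName` are placeholders, and the matching toleration (and node affinity) would still need to be added to the group's pods, for example by the admission controller mentioned earlier:

```shell
# Repel pods that do not tolerate the dedicated taint
kubectl taint nodes node1 dedicated=groupName:NoSchedule

# Label the node so the group's pods can also be required, via node
# affinity, to schedule only onto the dedicated nodes
kubectl label nodes node1 dedicated=groupName
```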
|
||||
|
||||
### Per-pod-configurable eviction behavior when there are node problems (alpha feature)
|
||||
|
||||
Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already
|
||||
running on the node as follows
|
||||
|
||||
* pods that do not tolerate the taint are evicted immediately
|
||||
* pods that tolerate the taint without specifying `tolerationSeconds` in
|
||||
their toleration specification remain bound forever
|
||||
* pods that tolerate the taint with a specified `tolerationSeconds` remain
|
||||
bound for the specified amount of time
|
||||
|
||||
The above behavior is a beta feature. In addition, Kubernetes 1.6 has alpha
|
||||
support for representing node problems (currently only "node unreachable" and
|
||||
"node not ready", corresponding to the NodeCondition "Ready" being "Unknown" or
|
||||
"False" respectively) as taints. When the `TaintBasedEvictions` alpha feature
|
||||
is enabled (you can do this by including `TaintBasedEvictions=true` in `--feature-gates`, such as
|
||||
`--feature-gates=FooBar=true,TaintBasedEvictions=true`), the taints are automatically
|
||||
added by the NodeController and the normal logic for evicting pods from nodes
|
||||
based on the Ready NodeCondition is disabled.
|
||||
(Note: To maintain the existing [rate limiting](https://kubernetes.io/docs/admin/node/#node-controller)
|
||||
behavior of pod evictions due to node problems, the system actually adds the taints
|
||||
in a rate-limited way. This prevents massive pod evictions in scenarios such
|
||||
as the master becoming partitioned from the nodes.)
|
||||
This alpha feature, in combination with `tolerationSeconds`, allows a pod
|
||||
to specify how long it should stay bound to a node that has one or both of these problems.
|
||||
|
||||
For example, an application with a lot of local state might want to stay
|
||||
bound to the node for a long time in the event of a network partition, in the hope
|
||||
that the partition will recover and thus the pod eviction can be avoided.
|
||||
The toleration the pod would use in that case would look like
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
- key: "node.alpha.kubernetes.io/unreachable"
|
||||
operator: "Exists"
|
||||
effect: "NoExecute"
|
||||
tolerationSeconds: 6000
|
||||
```
|
||||
|
||||
(For the node not ready case, change the key to `node.alpha.kubernetes.io/notReady`.)
|
||||
|
||||
Note that Kubernetes automatically adds a toleration for
|
||||
`node.alpha.kubernetes.io/notReady` with `tolerationSeconds=300`
|
||||
unless the pod configuration provided
|
||||
by the user already has a toleration for `node.alpha.kubernetes.io/notReady`.
|
||||
Likewise it adds a toleration for
|
||||
`node.alpha.kubernetes.io/unreachable` with `tolerationSeconds=300`
|
||||
unless the pod configuration provided
|
||||
by the user already has a toleration for `node.alpha.kubernetes.io/unreachable`.
|
||||
|
||||
These automatically-added tolerations ensure that
|
||||
the default pod behavior of remaining bound for 5 minutes after one of these
|
||||
problems is detected is maintained.
|
||||
The two default tolerations are added by the [DefaultTolerationSeconds
|
||||
admission controller](https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/admission/defaulttolerationseconds).
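Roughly speaking, the automatically-added tolerations look like the following in the pod spec; the `NoExecute` effect shown here is an assumption based on the eviction behavior described above:

```yaml
tolerations:
- key: "node.alpha.kubernetes.io/notReady"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.alpha.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```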
|
||||
|
||||
[DaemonSet](https://kubernetes.io/docs/admin/daemons/) pods are created with
|
||||
`NoExecute` tolerations for `node.alpha.kubernetes.io/unreachable` and `node.alpha.kubernetes.io/notReady`
|
||||
with no `tolerationSeconds`. This ensures that DaemonSet pods are never evicted due
|
||||
to these problems, which matches the behavior when this feature is disabled.
|
||||
|
|
|
@ -121,6 +121,7 @@ However, the particular path specified in the custom recycler pod template in th
|
|||
* Quobyte Volumes
|
||||
* HostPath (single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
|
||||
* VMware Photon
|
||||
* Portworx Volumes
|
||||
* ScaleIO Volumes
|
||||
|
||||
## Persistent Volumes
|
||||
|
@ -188,6 +189,7 @@ In the CLI, the access modes are abbreviated to:
|
|||
| NFS | ✓ | ✓ | ✓ |
|
||||
| RBD | ✓ | ✓ | - |
|
||||
| VsphereVolume | ✓ | - | - |
|
||||
| PortworxVolume | ✓ | - | ✓ |
|
||||
| ScaleIO | ✓ | ✓ | - |
|
||||
|
||||
### Class
|
||||
|
@ -356,6 +358,16 @@ parameters:
|
|||
Storage classes have a provisioner that determines what volume plugin is used
|
||||
for provisioning PVs. This field must be specified.
|
||||
|
||||
You are not restricted to specifying the "internal" provisioners
|
||||
listed here (whose names are prefixed with "kubernetes.io" and shipped
|
||||
alongside Kubernetes). You can also run and specify external provisioners,
|
||||
which are independent programs that follow a [specification](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-provisioning.md)
|
||||
defined by Kubernetes. Authors of external provisioners have full discretion
|
||||
over where their code lives, how the provisioner is shipped, how it needs to be
|
||||
run, what volume plugin it uses (including Flex), etc. The repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage)
|
||||
houses a library for writing external provisioners that implements the bulk of
|
||||
the specification plus various community-maintained external provisioners.
|
||||
|
||||
### Parameters
|
||||
Storage classes have parameters that describe volumes belonging to the storage
|
||||
class. Different parameters may be accepted depending on the `provisioner`. For
|
||||
|
@ -534,6 +546,29 @@ parameters:
|
|||
* `location`: Azure storage account location. Default is empty.
|
||||
* `storageAccount`: Azure storage account name. If storage account is not provided, all storage accounts associated with the resource group are searched to find one that matches `skuName` and `location`. If storage account is provided, it must reside in the same resource group as the cluster, and `skuName` and `location` are ignored.
|
||||
|
||||
#### Portworx Volume
|
||||
|
||||
```yaml
|
||||
kind: StorageClass
|
||||
apiVersion: storage.k8s.io/v1
|
||||
metadata:
|
||||
name: portworx-io-priority-high
|
||||
provisioner: kubernetes.io/portworx-volume
|
||||
parameters:
|
||||
repl: "1"
|
||||
snap_interval: "70"
|
||||
io_priority: "high"
|
||||
|
||||
```
|
||||
|
||||
* `fs`: filesystem to be laid out: [none/xfs/ext4] (default: `ext4`).
|
||||
* `block_size`: block size in Kbytes (default: `32`).
|
||||
* `repl`: number of synchronous replicas to be provided, in the form of a replication factor [1..3] (default: `1`). A string is expected here, i.e. `"1"` and not `1`.
|
||||
* `io_priority`: determines whether the volume will be created from higher-performance or lower-priority storage [high/medium/low] (default: `low`).
|
||||
* `snap_interval`: clock/time interval in minutes at which to trigger snapshots. Snapshots are incremental, based on the difference from the prior snapshot; 0 disables snapshots (default: `0`). A string is expected here, i.e. `"70"` and not `70`.
|
||||
* `aggregation_level`: specifies the number of chunks the volume will be distributed into; 0 indicates a non-aggregated volume (default: `0`). A string is expected here, i.e. `"0"` and not `0`.
|
||||
* `ephemeral`: specifies whether the volume should be cleaned up after unmount or should be persistent. Set this to true for `emptyDir`-style use cases, and to false for persistent-volume use cases such as databases like Cassandra [true/false] (default: `false`). A string is expected here, i.e. `"true"` and not `true`.
|
||||
|
||||
#### ScaleIO
|
||||
```yaml
|
||||
kind: StorageClass
|
||||
|
|
|
@ -106,6 +106,7 @@ to the volume sources that are defined when creating a volume:
|
|||
1. quobyte
|
||||
1. azureDisk
|
||||
1. photonPersistentDisk
|
||||
1. portworxVolume
|
||||
1. \* (allow all volumes)
|
||||
|
||||
The recommended minimum set of allowed volumes for new PSPs are
|
||||
|
|
|
@ -1,16 +1,7 @@
|
|||
---
|
||||
assignees:
|
||||
- erictune
|
||||
title: Pod Templates
|
||||
---
|
||||
|
||||
Pod templates are [pod](/docs/user-guide/pods/) specifications which are included in other objects, such as
|
||||
[Replication Controllers](/docs/user-guide/replication-controller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
|
||||
[DaemonSets](/docs/admin/daemons/). Controllers use Pod Templates to make actual pods.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive.
|
||||
|
||||
|
||||
## Future Work
|
||||
|
||||
A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](http://issue.k8s.io/170).
|
||||
[Pod Templates](/docs/concepts/workloads/pods/pod-overview/#pod-templates)
|
||||
|
|
|
@ -8,161 +8,6 @@ redirect_from:
|
|||
- "/docs/getting-started-guides/kubectl.html"
|
||||
---
|
||||
|
||||
To deploy and manage applications on Kubernetes, you'll use the
|
||||
Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It
|
||||
lets you inspect your cluster resources, create, delete, and update
|
||||
components, and much more. You will use it to look at your new cluster
|
||||
and bring up example apps.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
You should use a version of kubectl that is at least as new as your
|
||||
server. `kubectl version` will print the server and client versions.
|
||||
Using the same version of kubectl as your server naturally works;
|
||||
using a newer kubectl than your server also works; but if you use an
|
||||
older kubectl with a newer server you may see odd validation errors.
|
||||
|
||||
Here are a few methods to install kubectl.
|
||||
|
||||
## Install kubectl Binary Via curl
|
||||
|
||||
Download the latest release with the command:
|
||||
|
||||
```shell
|
||||
# OS X
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
|
||||
|
||||
# Linux
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
|
||||
|
||||
# Windows
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
To download a specific version of kubectl, replace the nested curl command above with the version you want (e.g. v1.4.6, v1.5.0-beta.2).
|
||||
|
||||
Make the kubectl binary executable and move it to your PATH (e.g. `/usr/local/bin`):
|
||||
|
||||
```shell
|
||||
chmod +x ./kubectl
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
## Extract kubectl from Release .tar.gz or Compiled Source
|
||||
|
||||
If you downloaded a pre-compiled [release](https://github.com/kubernetes/kubernetes/releases), kubectl will be under `platforms/<os>/<arch>` from the tar bundle.
|
||||
|
||||
If you compiled Kubernetes from source, kubectl should be either under `_output/local/bin/<os>/<arch>` or `_output/dockerized/bin/<os>/<arch>`.
|
||||
|
||||
Copy or move kubectl into a directory already in your PATH (e.g. `/usr/local/bin`). For example:
|
||||
|
||||
```shell
|
||||
# OS X
|
||||
sudo cp platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
|
||||
|
||||
# Linux
|
||||
sudo cp platforms/linux/amd64/kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
Next make it executable with the following command:
|
||||
|
||||
```shell
|
||||
sudo chmod +x /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
The kubectl binary doesn't have to be placed anywhere special to be executable, but the rest of the walkthrough assumes that it's in your PATH.
|
||||
|
||||
If you prefer not to copy kubectl, you need to ensure it is in your path:
|
||||
|
||||
```shell
|
||||
# OS X
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
||||
# Linux
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
```
|
||||
|
||||
## Download as part of the Google Cloud SDK
|
||||
|
||||
kubectl can be installed as part of the Google Cloud SDK:
|
||||
|
||||
First install the [Google Cloud SDK](https://cloud.google.com/sdk/).
|
||||
|
||||
After Google Cloud SDK installs, run the following command to install `kubectl`:
|
||||
|
||||
```shell
|
||||
gcloud components install kubectl
|
||||
```
|
||||
|
||||
Do check that the version is sufficiently up-to-date using `kubectl version`.
|
||||
|
||||
## Install with brew
|
||||
|
||||
If you are on MacOS and using brew, you can install with:
|
||||
|
||||
```shell
|
||||
brew install kubectl
|
||||
```
|
||||
|
||||
The Homebrew project is independent of Kubernetes, so do check that the version is
|
||||
sufficiently up-to-date using `kubectl version`.
|
||||
|
||||
## Configuring kubectl
|
||||
|
||||
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](/docs/user-guide/kubeconfig-file), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/user-guide/sharing-clusters).
|
||||
By default, kubectl configuration lives at `~/.kube/config`.
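To inspect the configuration kubectl is currently using, you can run the following; sensitive fields such as certificate data are normally redacted in the output:

```shell
# Print the current kubectl configuration
kubectl config view
```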
|
||||
|
||||
#### Making sure you're ready
|
||||
|
||||
Check that kubectl is properly configured by getting the cluster state:
|
||||
|
||||
```shell
|
||||
$ kubectl cluster-info
|
||||
```
|
||||
|
||||
If you see a URL response, you are ready to go.
|
||||
|
||||
## Enabling shell autocompletion
|
||||
|
||||
kubectl includes autocompletion support, which can save a lot of typing!
|
||||
|
||||
The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile.
|
||||
|
||||
Common examples are provided here; for more details, please consult `kubectl completion -h`.
|
||||
|
||||
### On Linux, using bash
|
||||
|
||||
To add it to your current shell: `source <(kubectl completion bash)`
|
||||
|
||||
To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
|
||||
|
||||
```shell
|
||||
echo "source <(kubectl completion bash)" >> ~/.bashrc
|
||||
```
|
||||
|
||||
### On MacOS, using bash
|
||||
|
||||
On MacOS, you will need to install the bash-completion support first:
|
||||
|
||||
```shell
|
||||
brew install bash-completion
|
||||
```
|
||||
|
||||
To add it to your current shell:
|
||||
|
||||
```shell
|
||||
source $(brew --prefix)/etc/bash_completion
|
||||
source <(kubectl completion bash)
|
||||
```
|
||||
|
||||
To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
|
||||
|
||||
```shell
|
||||
echo "source $(brew --prefix)/etc/bash_completion" >> ~/.bash_profile
|
||||
echo "source <(kubectl completion bash)" >> ~/.bash_profile
|
||||
```
|
||||
|
||||
Please note that this currently only appears to work if you installed kubectl using `brew install kubectl`,
|
||||
and not if you downloaded kubectl directly.
|
||||
|
||||
## What's next?
|
||||
|
||||
[Learn how to launch and expose your application.](/docs/user-guide/quick-start)
|
||||
[Installing and Setting Up kubectl](/docs/tasks/kubectl/install/)
|
||||
|
|
|
@ -5,95 +5,6 @@ assignees:
|
|||
title: Configuring Your Cloud Provider's Firewalls
|
||||
---
|
||||
|
||||
Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent
|
||||
exposure to the internet. When exposing a service to the external world, you may need to open up
|
||||
one or more ports in these firewalls to serve traffic. This document describes this process, as
|
||||
well as any provider specific details that may be necessary.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
### Restrict Access For LoadBalancer Service
|
||||
|
||||
When using a Service with `spec.type: LoadBalancer`, you can specify the IP ranges that are allowed to access the load balancer
|
||||
by using `spec.loadBalancerSourceRanges`. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions.
|
||||
This feature is currently supported on Google Compute Engine, Google Container Engine and AWS. This field will be ignored if the cloud provider does not support the feature.
|
||||
|
||||
Assuming 10.0.0.0/8 is the internal subnet, the following example creates a load balancer that is only accessible to cluster-internal IPs.
|
||||
Clients outside of your Kubernetes cluster will not be able to access the load balancer.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: myapp
|
||||
spec:
|
||||
ports:
|
||||
- port: 8765
|
||||
targetPort: 9376
|
||||
selector:
|
||||
app: example
|
||||
type: LoadBalancer
|
||||
loadBalancerSourceRanges:
|
||||
- 10.0.0.0/8
|
||||
```
|
||||
|
||||
In the following example, a load balancer is created that is only accessible to clients with the IP addresses 130.211.204.1 and 130.211.204.2.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: myapp
|
||||
spec:
|
||||
ports:
|
||||
- port: 8765
|
||||
targetPort: 9376
|
||||
selector:
|
||||
app: example
|
||||
type: LoadBalancer
|
||||
loadBalancerSourceRanges:
|
||||
- 130.211.204.1/32
|
||||
- 130.211.204.2/32
|
||||
```
|
||||
|
||||
### Google Compute Engine
|
||||
|
||||
When using a Service with `spec.type: LoadBalancer`, the firewall will be
|
||||
opened automatically. When using `spec.type: NodePort`, however, the firewall
|
||||
is *not* opened by default.
|
||||
|
||||
Google Compute Engine firewalls are documented [elsewhere](https://cloud.google.com/compute/docs/networking#firewalls_1).
|
||||
|
||||
You can add a firewall with the `gcloud` command line tool:
|
||||
|
||||
```shell
|
||||
$ gcloud compute firewall-rules create my-rule --allow=tcp:<port>
|
||||
```
|
||||
|
||||
**Note**
|
||||
There is one important security note when using firewalls on Google Compute Engine:
|
||||
|
||||
As of Kubernetes v1.0.0, GCE firewalls are defined per-VM, rather than per-IP
|
||||
address. This means that when you open a firewall for a service's ports,
|
||||
anything that serves on that port on that VM's host IP address may potentially
|
||||
serve traffic. Note that this is not a problem for other Kubernetes services,
|
||||
as they listen on IP addresses that are different than the host node's external
|
||||
IP address.
|
||||
|
||||
Consider:
|
||||
|
||||
* You create a Service with an external load balancer (IP Address 1.2.3.4)
|
||||
and port 80
|
||||
* You open the firewall for port 80 for all nodes in your cluster, so that
|
||||
the external Service actually can deliver packets to your Service
|
||||
* You start an nginx server, running on port 80 on the host virtual machine
|
||||
(IP Address 2.3.4.5). This nginx is **also** exposed to the internet on
|
||||
the VM's external IP address.
|
||||
|
||||
Consequently, please be careful when opening firewalls in Google Compute Engine
|
||||
or Google Container Engine. You may accidentally be exposing other services to
|
||||
the wilds of the internet.
|
||||
|
||||
This will be fixed in an upcoming release of Kubernetes.
|
||||
|
||||
### Other cloud providers
|
||||
|
||||
Coming soon.
|
||||
[Configuring Your Cloud Provider's Firewalls](/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/)
|
||||
|
|
|
@ -5,170 +5,6 @@ assignees:
|
|||
title: Service Operations
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
Services map a port on each cluster node to ports on one or more pods.
|
||||
|
||||
The mapping uses a `selector` key:value pair in the service, and the
|
||||
`labels` property of pods. Any pods whose labels match the service selector
|
||||
are made accessible through the service's port.
|
||||
|
||||
For more information, see the
|
||||
[Services Overview](/docs/user-guide/services/).
|
||||
|
||||
## Create a service
|
||||
|
||||
Services are created by passing a configuration file to the `kubectl create`
|
||||
command:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f FILE
|
||||
```
|
||||
|
||||
Where:
|
||||
|
||||
* `-f FILE` or `--filename FILE` is a relative path to a
|
||||
[service configuration file](#service-configuration-file) in either JSON
|
||||
or YAML format.
|
||||
|
||||
A successful service create request returns the service name. You can use
|
||||
a [sample file](#sample_files) below to try a create request.
|
||||
|
||||
### Service configuration file
|
||||
|
||||
When creating a service, you must point to a service configuration file as the
|
||||
value of the `-f` flag. The configuration file can be formatted as
|
||||
YAML or as JSON, and supports the following fields:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Service",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": string
|
||||
},
|
||||
"spec": {
|
||||
"ports": [{
|
||||
"port": int,
|
||||
"targetPort": int
|
||||
}],
|
||||
"selector": {
|
||||
string: string
|
||||
},
|
||||
"type": "LoadBalancer",
|
||||
"loadBalancerSourceRanges": [
|
||||
"10.180.0.0/16",
|
||||
"10.245.0.0/24"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Required fields are:
|
||||
|
||||
* `kind`: Always `Service`.
|
||||
* `apiVersion`: Currently `v1`.
|
||||
* `metadata`: Contains:
|
||||
* `name`: The name to give to this service.
|
||||
* `spec`: Contains:
|
||||
* `ports`: The ports to map. `port` is the service port to expose on the
|
||||
cluster IP. `targetPort` is the port to target on the pods that are part
|
||||
of this service.
|
||||
* `selector`: The label key:value pair that defines the pods to
|
||||
target.
|
||||
* `type`: Optional. If the type is `LoadBalancer`, sets up a [network load balancer](/docs/user-guide/load-balancer/)
|
||||
for your service. This provides an externally-accessible IP address that
|
||||
sends traffic to the correct port on your cluster nodes.
|
||||
* `loadBalancerSourceRanges`: Optional. Must be used with the `LoadBalancer` type.
|
||||
If specified and supported by the cloud provider, this will restrict traffic
|
||||
such that the load balancer will be accessible only to clients from the specified IP ranges.
|
||||
This field will be ignored if the cloud-provider does not support the feature.
|
||||
|
||||
For the full `service` schema see the
|
||||
[Kubernetes api reference](/docs/api-reference/v1/definitions/#_v1_service).
|
||||
|
||||
### Sample files
|
||||
|
||||
The following service configuration files assume that you have a set of pods
|
||||
that expose port 9376 and carry the label `app=example`.
|
||||
|
||||
Both files create a new service named `myapp` which resolves to TCP port 9376
|
||||
on any pod with the `app=example` label.
|
||||
|
||||
The difference in the files is in how the service is accessed. The first file
|
||||
does not create an external load balancer; the service can be accessed through
|
||||
port 8765 on any of the nodes' IP addresses.
|
||||
|
||||
{% capture tabspec %}servicesample
|
||||
JSON,json,service-sample.json,/docs/user-guide/services/service-sample.json
|
||||
YAML,yaml,service-sample.yaml,/docs/user-guide/services/service-sample.yaml{% endcapture %}
|
||||
{% include tabs.html %}
|
||||
|
||||
The second file uses
|
||||
[network load balancing](/docs/user-guide/load-balancer/) to create a
|
||||
single IP address that spreads traffic to all of the nodes in
|
||||
your cluster. This option is specified with the
|
||||
`"type": "LoadBalancer"` property.
|
||||
|
||||
{% capture tabspec %}loadbalancesample
|
||||
JSON,json,load-balancer-sample.json,/docs/user-guide/services/load-balancer-sample.json
|
||||
YAML,yaml,load-balancer-sample.yaml,/docs/user-guide/services/load-balancer-sample.yaml{% endcapture %}
|
||||
{% include tabs.html %}
|
||||
|
||||
To access the service, a client connects to the external IP address, which
|
||||
forwards to port 8765 on a node in the cluster, which in turn accesses
|
||||
port 9376 on the pod. See the
|
||||
[Service configuration file](#service-configuration-file) section of this doc
|
||||
for directions on finding the external IP address.
|
||||
|
||||
## View a service
|
||||
|
||||
To list all services on a cluster, use the
|
||||
`kubectl get` command:
|
||||
|
||||
```shell
|
||||
$ kubectl get services
|
||||
```
|
||||
|
||||
A successful get request returns all services that exist on the specified
|
||||
cluster:
|
||||
|
||||
```shell
|
||||
NAME LABELS SELECTOR IP PORT
|
||||
myapp <none> app=MyApp 10.123.255.83 8765/TCP
|
||||
```
|
||||
|
||||
To return information about a specific service, use the
|
||||
`kubectl describe` command:
|
||||
|
||||
```shell
|
||||
$ kubectl describe service NAME
|
||||
```
|
||||
|
||||
Details about the specific service are returned:
|
||||
|
||||
```conf
|
||||
Name: myapp
|
||||
Labels: <none>
|
||||
Selector: app=MyApp
|
||||
IP: 10.123.255.83
|
||||
Port: <unnamed> 8765/TCP
|
||||
NodePort: <unnamed> 31474/TCP
|
||||
Endpoints: <none>
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
To return information about a service when event information is not required,
|
||||
substitute `get` for `describe`.
|
||||
|
||||
## Delete a service
|
||||
|
||||
To delete a service, use the `kubectl delete` command:
|
||||
|
||||
```shell
|
||||
$ kubectl delete service NAME
|
||||
```
|
||||
|
||||
A successful delete request returns the deleted service's name.
|
||||
[Connecting a Front End to a Back End Using a Service](/docs/tutorials/connecting-apps/connecting-frontend-backend/)
|
||||
|
|
|
@ -1,125 +1,7 @@
|
|||
---
|
||||
assignees:
|
||||
- mikedanese
|
||||
- thockin
|
||||
title: Sharing Cluster Access with kubeconfig
|
||||
---
|
||||
|
||||
Client access to a running Kubernetes cluster can be shared by copying
|
||||
the `kubectl` client config bundle ([kubeconfig](/docs/user-guide/kubeconfig-file)).
|
||||
This config bundle lives in `$HOME/.kube/config`, and is generated
|
||||
by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
|
||||
{% include user-guide-content-moved.md %}
|
||||
|
||||
**1. Create a cluster**
|
||||
|
||||
```shell
|
||||
$ cluster/kube-up.sh
|
||||
```
|
||||
|
||||
**2. Copy `kubeconfig` to new host**
|
||||
|
||||
```shell
|
||||
$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
|
||||
```
|
||||
|
||||
**3. On new host, make copied `config` available to `kubectl`**
|
||||
|
||||
* Option A: copy to default location
|
||||
|
||||
```shell
|
||||
$ mv /path/to/.kube/config $HOME/.kube/config
|
||||
```
|
||||
|
||||
* Option B: copy to working directory (from which kubectl is run)
|
||||
|
||||
```shell
|
||||
$ mv /path/to/.kube/config $PWD
|
||||
```
|
||||
|
||||
* Option C: manually pass `kubeconfig` location to `kubectl`
|
||||
|
||||
```shell
|
||||
# via environment variable
|
||||
$ export KUBECONFIG=/path/to/.kube/config
|
||||
|
||||
# via commandline flag
|
||||
$ kubectl ... --kubeconfig=/path/to/.kube/config
|
||||
```
|
||||
|
||||
## Manually Generating `kubeconfig`
|
||||
|
||||
`kubeconfig` is generated by `kube-up` but you can generate your own
|
||||
using (any desired subset of) the following commands.
|
||||
|
||||
```shell
|
||||
# create kubeconfig entry
|
||||
$ kubectl config set-cluster $CLUSTER_NICK \
|
||||
--server=https://1.1.1.1 \
|
||||
--certificate-authority=/path/to/apiserver/ca_file \
|
||||
--embed-certs=true \
|
||||
# Or if tls not needed, replace --certificate-authority and --embed-certs with
|
||||
--insecure-skip-tls-verify=true \
|
||||
--kubeconfig=/path/to/standalone/.kube/config
|
||||
|
||||
# create user entry
|
||||
$ kubectl config set-credentials $USER_NICK \
|
||||
# bearer token credentials, generated on kube master
|
||||
--token=$token \
|
||||
# use either username|password or token, not both
|
||||
--username=$username \
|
||||
--password=$password \
|
||||
--client-certificate=/path/to/crt_file \
|
||||
--client-key=/path/to/key_file \
|
||||
--embed-certs=true \
|
||||
--kubeconfig=/path/to/standalone/.kube/config
|
||||
|
||||
# create context entry
|
||||
$ kubectl config set-context $CONTEXT_NAME \
|
||||
--cluster=$CLUSTER_NICK \
|
||||
--user=$USER_NICK \
|
||||
--kubeconfig=/path/to/standalone/.kube/config
|
||||
```
|
||||
|
||||
Notes:
|
||||
|
||||
* The `--embed-certs` flag is needed to generate a standalone
|
||||
`kubeconfig`, that will work as-is on another host.
|
||||
* `--kubeconfig` is both the preferred file to load config from and the file to
|
||||
save config to. In the above commands, the `--kubeconfig` file could be
|
||||
omitted if you first run
|
||||
|
||||
```shell
|
||||
$ export KUBECONFIG=/path/to/standalone/.kube/config
|
||||
```
|
||||
|
||||
* The ca_file, key_file, and cert_file referenced above are generated on the
|
||||
kube master at cluster turnup. They can be found on the master under
|
||||
`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.
|
||||
|
||||
For more details on `kubeconfig` see [kubeconfig-file.md](/docs/user-guide/kubeconfig-file),
|
||||
and/or run `kubectl config -h`.
|
||||
|
||||
## Merging `kubeconfig` Example
|
||||
|
||||
`kubectl` loads and merges config from the following locations (in order)
|
||||
|
||||
1. `--kubeconfig=/path/to/.kube/config` command line flag
|
||||
2. `KUBECONFIG=/path/to/.kube/config` env variable
|
||||
3. `$HOME/.kube/config`
|
||||
|
||||
If you create clusters A, B on host1, and clusters C, D on host2, you can
|
||||
make all four clusters available on both hosts by running
|
||||
|
||||
```shell
|
||||
# on host2, copy host1's default kubeconfig, and merge it from env
|
||||
$ scp host1:/path/to/home1/.kube/config /path/to/other/.kube/config
|
||||
|
||||
$ export KUBECONFIG=/path/to/other/.kube/config
|
||||
|
||||
# on host1, copy host2's default kubeconfig and merge it from env
|
||||
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
|
||||
|
||||
$ export KUBECONFIG=/path/to/other/.kube/config
|
||||
```
|
||||
|
||||
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file](/docs/user-guide/kubeconfig-file).
|
||||
[Sharing Cluster Access with kubeconfig](/docs/tasks/administer-cluster/share-configuration/)
|
||||
|
|
|
@ -79,6 +79,7 @@ Kubernetes supports several types of Volumes:
|
|||
* `azureDisk`
|
||||
* `vsphereVolume`
|
||||
* `Quobyte`
|
||||
* `portworxVolume`
|
||||
|
||||
We welcome additional contributions.
|
||||
|
||||
|
@ -531,6 +532,15 @@ before you can use it__
|
|||
|
||||
See the [Quobyte example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/quobyte) for more details.
|
||||
|
||||
### Portworx Volume
|
||||
|
||||
A `PortworxVolume` is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a server, tiers
|
||||
based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare metal Linux nodes.
|
||||
|
||||
A `PortworxVolume` can be dynamically created through Kubernetes or it can also be pre-provisioned and referenced inside a Kubernetes pod.
|
||||
|
||||
More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/portworx/README.md).
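As an illustrative sketch (not taken from the example linked above), a pod that mounts a pre-provisioned Portworx volume might look roughly like this; the volume ID `pxvol` and the container image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-portworx-volume-pod
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-portworx-volume
      name: test-volume
  volumes:
  - name: test-volume
    # This Portworx volume must already exist
    portworxVolume:
      volumeID: "pxvol"
      fsType: "ext4"
```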
|
||||
|
||||
## Using subPath
|
||||
|
||||
Sometimes, it is useful to share one volume for multiple uses in a single pod. The `volumeMounts.subPath`
|
||||
|
|
|
@ -21,7 +21,7 @@ The easiest way to interact with Kubernetes is via the [kubectl](/docs/user-guid
|
|||
|
||||
For more info about kubectl, including its usage, commands, and parameters, see the [kubectl CLI reference](/docs/user-guide/kubectl-overview/).
|
||||
|
||||
If you haven't installed and configured kubectl, finish the [prerequisites](/docs/user-guide/prereqs/) before continuing.
|
||||
If you haven't installed and configured kubectl, finish [installing kubectl](/docs/tasks/kubectl/install/) before continuing.
|
||||
|
||||
## Pods
|
||||
|
||||
|
|
|
@ -247,7 +247,7 @@ func TestExampleObjectSchemas(t *testing.T) {
|
|||
"http-liveness": {&api.Pod{}},
|
||||
"http-liveness-named-port": {&api.Pod{}},
|
||||
},
|
||||
"../docs/tasks/job/work-queue-1": {
|
||||
"../docs/tasks/job/coarse-parallel-processing-work-queue": {
|
||||
"job": {&batch.Job{}},
|
||||
},
|
||||
"../docs/tasks/job/fine-parallel-processing-work-queue": {
|
||||
|
|