Merge branch 'master' into oidc-username-prefixing
commit
6792ba4aa2

@@ -14,6 +14,11 @@ toc:
  - docs/concepts/overview/working-with-objects/annotations.md
  - docs/concepts/overview/kubernetes-api.md

- title: Containers
  section:
  - docs/concepts/containers/images.md
  - docs/concepts/containers/container-lifecycle-hooks.md

- title: Workloads
  section:
  - title: Pods

@@ -54,16 +54,12 @@ toc:
- title: Containers and Pods
  section:
  - docs/user-guide/pods/multi-container.md
  - docs/user-guide/pods/init-container.md
  - docs/user-guide/pod-templates.md
  - docs/user-guide/environment-guide/index.md
  - docs/user-guide/compute-resources.md
  - docs/user-guide/pod-states.md
  - docs/user-guide/liveness/index.md
  - docs/user-guide/container-environment.md
  - docs/user-guide/node-selection/index.md
  - docs/user-guide/downward-api/index.md
  - docs/user-guide/downward-api/volume/index.md
  - docs/user-guide/petset/bootstrapping/index.md

- title: Monitoring, Logging, and Debugging Containers

@@ -74,7 +70,6 @@ toc:
  - docs/user-guide/logging/overview.md
  - docs/user-guide/logging/stackdriver.md
  - docs/user-guide/logging/elasticsearch.md
  - docs/user-guide/getting-into-containers.md
  - docs/user-guide/connecting-to-applications-proxy.md
  - docs/user-guide/connecting-to-applications-port-forward.md
  - title: Using Explorer to Examine the Runtime Environment

@@ -59,6 +59,7 @@ toc:

- title: Administering a Cluster
  section:
  - docs/tasks/administer-cluster/overview.md
  - docs/tasks/administer-cluster/assign-pods-nodes.md
  - docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
  - docs/tasks/administer-cluster/safely-drain-node.md

@@ -1,92 +1,7 @@
---
assignees:
- davidopp
- lavalamp
title: Admin Guide
---

The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](/docs/user-guide/).

{% include user-guide-content-moved.md %}

* TOC
{:toc}

## Planning a cluster

There are many different examples of how to set up a Kubernetes cluster. Many of them are listed in this
[matrix](/docs/getting-started-guides/). We call each of the combinations in this matrix a *distro*.

Before choosing a particular guide, here are some things to consider:

- Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
  models are supported, but some distros are better for one case or the other.
- Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting
  one up yourself?
- Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We
  recommend setting up multiple clusters rather than spanning distant locations.
- Will you be running Kubernetes on "bare metal" or on virtual machines? Kubernetes supports both, via different distros.
- Do you just want to run a cluster, or do you expect to do active development of Kubernetes project code? If the
  latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but
  others offer a greater variety of choices.
- Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of
  Kubernetes.
- If you are configuring Kubernetes on-premises, you will need to consider what [networking
  model](/docs/admin/networking) fits best.
- If you are designing for very high availability, you may want [clusters in multiple zones](/docs/admin/multi-cluster).
- You may want to familiarize yourself with the various
  [components](/docs/admin/cluster-components) needed to run a cluster.

## Setting up a cluster

Pick one of the Getting Started Guides from the [matrix](/docs/getting-started-guides/) and follow it.
If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.

One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](/docs/admin/ovs-networking)), which
uses OpenVSwitch to set up networking between pods across Kubernetes nodes.

If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
project](/docs/admin/salt).

## Managing a cluster, including upgrades

[Managing a cluster](/docs/admin/cluster-management).

## Managing nodes

[Managing nodes](/docs/admin/node).

## Optional Cluster Services

* **DNS Integration with SkyDNS** ([dns.md](/docs/admin/dns)):
  Resolving a DNS name directly to a Kubernetes service.

* [**Cluster-level logging**](/docs/user-guide/logging/overview):
  Saving container logs to a central log store with a search/browsing interface.

## Multi-tenant support

* **Resource Quota** ([resourcequota/](/docs/admin/resourcequota/))

## Security

* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](/docs/user-guide/container-environment)):
  Describes the environment for Kubelet-managed containers on a Kubernetes node.

* **Securing access to the API Server:** [accessing the API](/docs/admin/accessing-the-api)

* **Authentication:** [authentication](/docs/admin/authentication)

* **Authorization:** [authorization](/docs/admin/authorization)

* **Admission Controllers:** [admission controllers](/docs/admin/admission-controllers)

* **Sysctls:** [sysctls](/docs/admin/sysctls.md)

* **Audit:** [audit](/docs/admin/audit)

* **Securing the kubelet**
  * [Master-Node communication](/docs/admin/master-node-communication/)
  * [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/)
  * [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)

[Cluster Administration Overview](/docs/tasks/administer-cluster/overview/)

@@ -106,7 +106,7 @@ To configure hard eviction thresholds, the following flag is supported:

* `eviction-hard` describes a set of eviction thresholds (e.g. `memory.available<1Gi`) that if met
  would trigger a pod eviction.

-The `kubelet` has the following default hard eviction thresholds:
+The `kubelet` has the following default hard eviction threshold:

* `--eviction-hard=memory.available<100Mi`
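
As an aside, several hard thresholds can be combined in one comma-separated flag value. A sketch of such an invocation (the `nodefs.available` signal and the percentage are illustrative, not part of this change):

```shell
# Hypothetical kubelet invocation with two hard eviction thresholds.
kubelet --eviction-hard=memory.available<100Mi,nodefs.available<10%
```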

@@ -395,7 +395,7 @@ spec:

Kubernetes version 1.5 only allows resource quantities to be specified on a
Container. It is planned to improve accounting for resources that are shared by
all Containers in a Pod, such as
-[emptyDir volumes](/docs/user-guide/volumes/#emptydir).
+[emptyDir volumes](/docs/concepts/storage/volumes/#emptydir).

Kubernetes version 1.5 only supports Container requests and limits for CPU and
memory. It is planned to add new resource types, including a node disk space
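
For context, per-Container CPU and memory quantities are declared like this (a minimal sketch of a container spec fragment; the name, image, and values are illustrative):

```yaml
spec:
  containers:
  - name: app                # illustrative name
    image: example/app:1.0   # illustrative image
    resources:
      requests:
        cpu: 250m            # a quarter of a CPU core
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
```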

@@ -93,7 +93,7 @@ This document is meant to highlight and consolidate in one place configuration b

## Container Images

-- The [default container image pull policy](/docs/user-guide/images/) is `IfNotPresent`, which causes the
+- The [default container image pull policy](/docs/concepts/containers/images/) is `IfNotPresent`, which causes the
  [Kubelet](/docs/admin/kubelet/) to not pull an image if it already exists. If you would like to
  always force a pull, you must specify a pull image policy of `Always` in your .yaml file
  (`imagePullPolicy: Always`) or specify a `:latest` tag on your image.

@@ -0,0 +1,105 @@
---
assignees:
- mikedanese
- thockin
title: Container Lifecycle Hooks
---

This document describes the environment for Kubelet-managed containers on a Kubernetes node. In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container with access to information about what else is going on in the cluster.

This cluster information makes it possible to build applications that are *cluster aware*.
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However, these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.

Another important part of the container environment is the filesystem that is available to the container. In Kubernetes, the filesystem is a combination of an [image](/docs/concepts/containers/images/) and one or more [volumes](/docs/concepts/storage/volumes/).

The following sections describe both the cluster information provided to containers and the hooks and lifecycle that allow containers to interact with the management system.

* TOC
{:toc}

## Cluster Information

There are two types of information available within the container environment: information about the container itself, and information about other objects in the system.

### Container Information

Currently, the Pod name of the pod in which the container is running is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the `hostname` command, or the [gethostname][1] function call in libc). However, this is planned to change in the future and should not be relied upon.

The Pod name and namespace are also available as environment variables via the [downward API](/docs/user-guide/downward-api). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
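
The downward API exposure mentioned above looks roughly like this in a pod definition (a sketch; the variable names are illustrative):

```yaml
env:
- name: MY_POD_NAME          # illustrative name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE     # illustrative name
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
```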

In the future, we anticipate expanding this information with richer information about the container. Examples include available memory, number of restarts, and in general any state that you could get from a call to `GET /pods` on the API server.

### Cluster Information

Currently, a list of all services that were running when the container was created, via the Kubernetes Cluster API, is available to the container as environment variables. The set of environment variables matches the syntax of Docker links.

For a service named **foo** that maps to a container port named **bar**, the following variables are defined:

```shell
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```

Services have dedicated IP addresses and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
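
The naming convention is mechanical: uppercase the service name and replace dashes with underscores. A small shell sketch (the `svc_env_prefix` helper is hypothetical, for illustration only):

```shell
# Derive the environment-variable prefix Kubernetes would use for a
# service name: uppercase letters, dashes become underscores.
svc_env_prefix() {
  printf '%s' "$1" | tr 'a-z-' 'A-Z_'
}

echo "$(svc_env_prefix my-db)_SERVICE_HOST"   # MY_DB_SERVICE_HOST
```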

## Container Hooks

Container hooks provide information to the container about events in its management lifecycle. For example, immediately after a container is started, it receives a *PostStart* hook. These hooks are broadcast *into* the container with information about the lifecycle of the container. They are different from the events provided by Docker and other systems, which are *output* from the container. Output events provide a log of what has already happened. Input hooks provide real-time notification about things that are happening, but no historical log.

### Hook Details

There are currently two container hooks that are surfaced to containers:

*PostStart*

This hook is sent immediately after a container is created. It notifies the container that it has been created. No parameters are passed to the handler. It is NOT guaranteed that the hook will execute before the container entrypoint.

*PreStop*

This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](/docs/user-guide/pods/#termination-of-pods).

### Hook Handler Execution

When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. This means that for a `PostStart` hook, the container entrypoint and hook fire asynchronously. However, if the hook takes a while to run or hangs, the container will never reach a "running" state. The behavior is similar for a `PreStop` hook. If the hook hangs during execution, the Pod phase will stay in a "running" state and never reach "failed". If a `PostStart` or `PreStop` hook fails, it kills the container.

Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long-running commands make sense (e.g. saving state prior to container stop).

### Hook delivery guarantees

Hook delivery is intended to be "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop"), and it is up to the hook implementer to handle this correctly.

We expect double delivery to be rare, but in some cases, if the Kubelet restarts in the middle of sending a hook, the hook may be resent after the Kubelet comes back up.

Likewise, we only make a single delivery attempt. If (for example) an HTTP hook receiver is down and unable to take traffic, we do not make any attempts to resend.

Currently, there are (hopefully rare) scenarios where PostStart hooks may not be delivered.

### Hook Handler Implementations

Hook handlers are the way that hooks are surfaced to containers. Containers can select the type of hook handler they would like to implement. Kubernetes currently supports two different hook handler types:

* Exec - Executes a specific command (e.g. `pre-stop.sh`) inside the cgroups and namespaces of the container. Resources consumed by the command are counted against the container.

* HTTP - Executes an HTTP request against a specific endpoint on the container.
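
In the pod API, handlers for both hook types are declared under the container's `lifecycle` field. A minimal sketch (the image, command, path, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo        # illustrative name
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        exec:                 # Exec handler: runs inside the container
          command: ["/bin/sh", "-c", "echo started > /tmp/poststart"]
      preStop:
        httpGet:              # HTTP handler: request against the container
          path: /shutdown
          port: 8080
```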

[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html

### Debugging Hook Handlers

Currently, the logs for a hook handler are not exposed in the pod events. If your handler fails for some reason, it will emit an event. For `PostStart`, this is the `FailedPostStartHook` event; for `PreStop`, this is the `FailedPreStopHook` event. You can see these events by running `kubectl describe pod <pod_name>`. Example output of events from running this command is below:

```
Events:
  FirstSeen LastSeen Count From SubobjectPath Type Reason Message
  --------- -------- ----- ---- ------------- -------- ------ -------
  1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
  1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
  1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
  1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
  1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
  38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
  37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
  38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
  1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
```

@@ -0,0 +1,320 @@
---
assignees:
- erictune
- thockin
title: Images
---

Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/engine/tutorials/dockerimages/).

You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.

The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
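
Such a reference breaks down into registry, repository, and tag, which plain shell parameter expansion can separate (a sketch; the reference is illustrative):

```shell
# Split a hypothetical image reference into its parts.
ref="gcr.io/my_project/image:tag"
registry=${ref%%/*}   # text before the first slash  -> gcr.io
rest=${ref#*/}        # everything after it          -> my_project/image:tag
tag=${rest##*:}       # text after the last colon    -> tag
repo=${rest%:*}       # repository path              -> my_project/image
echo "$registry $repo $tag"
```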

* TOC
{:toc}

## Updating Images

The default pull policy is `IfNotPresent`, which causes the Kubelet not to
pull an image if it already exists. If you would like to always force a pull,
you must set a pull image policy of `Always` or specify a `:latest` tag on
your image.

If you did not specify a tag for your image, it is assumed to be `:latest`, with
a pull image policy of `Always` correspondingly.

Note that you should avoid using the `:latest` tag; see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images) for more information.
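
In a pod definition, the policy sits on the container (a sketch; the names and image are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: example.com/myrepo/myapp:1.0   # explicit tag rather than :latest
    imagePullPolicy: Always               # force a pull on every start
```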

## Using a Private Registry

Private registries may require keys to read images from them.
Credentials can be provided in several ways:

  - Using Google Container Registry
    - per-cluster
    - automatically configured on Google Compute Engine or Google Container Engine
    - all pods can read the project's private registry
  - Using AWS EC2 Container Registry (ECR)
    - use IAM roles and policies to control access to ECR repositories
    - automatically refreshes ECR login credentials
  - Using Azure Container Registry (ACR)
  - Configuring Nodes to Authenticate to a Private Registry
    - all pods can read any configured private registries
    - requires node configuration by cluster administrator
  - Pre-pulling Images
    - all pods can use any images cached on a node
    - requires root access to all nodes to set up
  - Specifying ImagePullSecrets on a Pod
    - only pods which provide their own keys can access the private registry

Each option is described in more detail below.

### Using Google Container Registry

Kubernetes has native support for the [Google Container
Registry (GCR)](https://cloud.google.com/tools/container-registry/) when running on Google Compute
Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply
use the full image name (e.g. `gcr.io/my_project/image:tag`).

All pods in a cluster will have read access to images in this registry.

The kubelet will authenticate to GCR using the instance's
Google service account. The service account on the instance
will have the `https://www.googleapis.com/auth/devstorage.read_only` scope,
so it can pull from the project's GCR, but not push.

### Using AWS EC2 Container Registry

Kubernetes has native support for the [AWS EC2 Container
Registry](https://aws.amazon.com/ecr/) when nodes are AWS EC2 instances.

Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)
in the Pod definition.

All users of the cluster who can create pods will be able to run pods that use any of the
images in the ECR registry.

The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:

- `ecr:GetAuthorizationToken`
- `ecr:BatchCheckLayerAvailability`
- `ecr:GetDownloadUrlForLayer`
- `ecr:GetRepositoryPolicy`
- `ecr:DescribeRepositories`
- `ecr:ListImages`
- `ecr:BatchGetImage`

Requirements:

- You must be using kubelet version `v1.2.0` or newer (e.g. run `/usr/bin/kubelet --version=true`).
- If your nodes are in region A and your registry is in a different region B, you need version `v1.3.0` or newer.
- ECR must be offered in your region.

Troubleshooting:

- Verify all requirements above.
- Get $REGION (e.g. `us-west-2`) credentials on your workstation. SSH into the host and run Docker manually with those creds. Does it work?
- Verify the kubelet is running with `--cloud-provider=aws`.
- Check kubelet logs (e.g. `journalctl -t kubelet`) for log lines like:
  - `plugins.go:56] Registering credential provider: aws-ecr-key`
  - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`

### Using Azure Container Registry (ACR)

When using [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)
you can authenticate using either an admin user or a service principal.
In either case, authentication is done via standard Docker authentication. These instructions assume the
[azure-cli](https://github.com/azure/azure-cli) command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in
the [Azure container registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).

Once you have created your container registry, you will use the following credentials to login:

* `DOCKER_USER`: service principal or admin username
* `DOCKER_PASSWORD`: service principal password or admin user password
* `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
* `DOCKER_EMAIL`: `${some-email-address}`

Once you have those variables filled in, you can [configure a Kubernetes Secret and use it to deploy a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).

### Configuring Nodes to Authenticate to a Private Repository

**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.

**Note:** if you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will
manage and update the ECR login credentials. You cannot use this approach.

**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.

Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put this
in the `$HOME` of user `root` on a kubelet, then docker will use it.

Here are the recommended steps to configure your nodes to use a private registry. In this
example, run these on your desktop/laptop:

1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
1. Get a list of your nodes, for example:
   - if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
   - if you want to get the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
1. Copy your local `.docker/config.json` to the home directory of root on each node.
   - for example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`

Verify by creating a pod that uses a private image, e.g.:

```shell
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
```

If everything is working, then, after a few moments, you should see:

```shell
$ kubectl logs private-image-test-1
SUCCESS
```

If it failed, then you will see:

```shell
$ kubectl describe pods/private-image-test-1 | grep "Failed"
  Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```

You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.docker/config.json` or mount a drive that contains it.

All pods will have read access to images in any private registry once private
registry keys are added to the `.docker/config.json`.

**This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.
It should also work for a private registry such as quay.io, but that has not been tested.**

### Pre-pulling Images

**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.

**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.

By default, the kubelet will try to pull each image from the specified registry.
However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`,
then a local image is used (preferentially or exclusively, respectively).

If you want to rely on pre-pulled images as a substitute for registry authentication,
you must ensure all nodes in the cluster have the same pre-pulled images.

This can be used to preload certain images for speed, or as an alternative to authenticating to a private registry.

All pods will have read access to any pre-pulled images.

### Specifying ImagePullSecrets on a Pod

**Note:** This approach is currently the recommended approach for GKE, GCE, and any cloud providers
where node creation is automated.

Kubernetes supports specifying registry keys on a pod.

#### Creating a Secret with a Docker Config

Run the following command, substituting the appropriate uppercase values:

```shell
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
```

If you need access to multiple registries, you can create one secret for each registry.
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
when pulling images for your Pods.

Pods can only reference image pull secrets in their own namespace,
so this process needs to be done one time per namespace.

##### Bypassing kubectl create secrets

If for some reason you need multiple items in a single `.docker/config.json`, or need
control not given by the above command, then you can [create a secret using
json or yaml](/docs/user-guide/secrets/#creating-a-secret-manually).

Be sure to:

- set the name of the data item to `.dockerconfigjson`
- base64 encode the docker file and paste that string, unbroken,
  as the value for field `data[".dockerconfigjson"]`
- set `type` to `kubernetes.io/dockerconfigjson`

Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
```

If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, it means
the data was successfully un-base64 encoded, but could not be parsed as a `.docker/config.json` file.
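
The unbroken base64 string can be produced directly from an existing config file, for example with GNU coreutils `base64` (a sketch; the file content here is a stand-in, and `-w 0` disables line wrapping so the string stays unbroken):

```shell
# Encode a Docker config file as one unbroken base64 line, suitable for
# pasting as the value of data[".dockerconfigjson"].
cfg=$(mktemp)
echo '{"auths":{"example.com":{"auth":"dXNlcjpwYXNz"}}}' > "$cfg"
encoded=$(base64 -w 0 "$cfg")
echo "$encoded"
```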
#### Referring to an imagePullSecrets on a Pod
|
||||
|
||||
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
|
||||
section to a pod definition.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: foo
|
||||
namespace: awesomeapps
|
||||
spec:
|
||||
containers:
|
||||
- name: foo
|
||||
image: janedoe/awesomeapp:v1
|
||||
imagePullSecrets:
|
||||
- name: myregistrykey
|
||||
```
|
||||
|
||||
This needs to be done for each pod that is using a private registry.

However, setting of this field can be automated by setting the `imagePullSecrets` in a [serviceAccount](/docs/user-guide/service-accounts) resource.

You can use this in conjunction with a per-node `.docker/config.json`. The credentials will be merged. This approach will work on Google Container Engine (GKE).
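For example, attaching the pull secret to a namespace's `default` service account lets every pod that uses that service account inherit it. A sketch, reusing the `myregistrykey` secret and `awesomeapps` namespace from the examples above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: awesomeapps
imagePullSecrets:
- name: myregistrykey
```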

### Use Cases

There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions.

1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
   - Use public images on the Docker Hub.
     - no configuration required
     - on GCE/GKE, a local mirror is automatically used for improved speed and availability
1. Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users.
   - Use a hosted private [Docker registry](https://docs.docker.com/registry/).
     - may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere
     - manually configure `.docker/config.json` on each node as described above
   - Or, run an internal private registry behind your firewall with open read access.
     - no Kubernetes configuration required
   - Or, when on GCE/GKE, use the project's Google Container Registry.
     - will work better with cluster autoscaling than manual node configuration
   - Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
1. Cluster with proprietary images, a few of which require stricter access control.
   - Ensure the [AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages) is active; otherwise, all Pods potentially have access to all images.
   - Move sensitive data into a "Secret" resource, instead of packaging it in an image.
1. A multi-tenant cluster where each tenant needs its own private registry.
   - Ensure the [AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages) is active; otherwise, all Pods of all tenants potentially have access to all images.
   - Run a private registry with authorization required.
   - Generate registry credentials for each tenant, put them into a secret, and populate the secret to each tenant namespace.
   - The tenant adds that secret to the `imagePullSecrets` of each pod.

@ -8,7 +8,7 @@ The Concepts section helps you learn about the parts of the Kubernetes system an

To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.

Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:

* The **Kubernetes Master** is a collection of four processes that run on a single node in your cluster, which is designated as the master node.
* Each individual non-master node in your cluster runs two processes:
@ -67,7 +67,7 @@ At a minimum, Kubernetes can schedule and run application containers on clusters

Kubernetes satisfies a number of common needs of applications running in production, such as:

* [co-locating helper processes](/docs/user-guide/pods/), facilitating composite applications and preserving the one-application-per-container model,
* [mounting storage systems](/docs/concepts/storage/volumes/),
* [distributing secrets](/docs/user-guide/secrets/),
* [application health checking](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks),
* [replicating application instances](/docs/user-guide/replication-controller/),

@ -0,0 +1,578 @@

---
assignees:
- jsafrane
- mikedanese
- saad-ali
- thockin
title: Volumes
---

On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost - the container starts with a clean state. Second, when running containers together in a `Pod` it is often necessary to share files between those containers. The Kubernetes `Volume` abstraction solves both of these problems.

Familiarity with [pods](/docs/user-guide/pods) is suggested.

* TOC
{:toc}

## Background

Docker also has a concept of [volumes](https://docs.docker.com/userguide/dockervolumes/), though it is somewhat looser and less managed. In Docker, a volume is simply a directory on disk or in another container. Lifetimes are not managed and until very recently there were only local-disk-backed volumes. Docker now provides volume drivers, but the functionality is very limited for now (e.g. as of Docker 1.7 only one volume driver is allowed per container and there is no way to pass parameters to volumes).

A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the pod that encloses it. Consequently, a volume outlives any containers that run within the Pod, and data is preserved across container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.

At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.

To use a volume, a pod specifies what volumes to provide for the pod (the [`spec.volumes`](http://kubernetes.io/kubernetes/third_party/swagger-ui/#!/v1/createPod) field) and where to mount those into containers (the [`spec.containers.volumeMounts`](http://kubernetes.io/kubernetes/third_party/swagger-ui/#!/v1/createPod) field).

A process in a container sees a filesystem view composed of its Docker image and volumes. The [Docker image](https://docs.docker.com/userguide/dockerimages/) is at the root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image. Volumes can not mount onto other volumes or have hard links to other volumes. Each container in the Pod must independently specify where to mount each volume.

## Types of Volumes

Kubernetes supports several types of Volumes:

* `emptyDir`
* `hostPath`
* `gcePersistentDisk`
* `awsElasticBlockStore`
* `nfs`
* `iscsi`
* `flocker`
* `glusterfs`
* `rbd`
* `cephfs`
* `gitRepo`
* `secret`
* `persistentVolumeClaim`
* `downwardAPI`
* `azureFileVolume`
* `azureDisk`
* `vsphereVolume`
* `Quobyte`

We welcome additional contributions.

### emptyDir

An `emptyDir` volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the pod can all read and write the same files in the `emptyDir` volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the `emptyDir` is deleted forever. NOTE: a container crashing does *NOT* remove a pod from a node, so the data in an `emptyDir` volume is safe across container crashes.

Some uses for an `emptyDir` are:

* scratch space, such as for a disk-based merge sort
* checkpointing a long computation for recovery from crashes
* holding files that a content-manager container fetches while a webserver container serves the data

By default, `emptyDir` volumes are stored on whatever medium is backing the machine - that might be disk or SSD or network storage, depending on your environment. However, you can set the `emptyDir.medium` field to `"Memory"` to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on machine reboot and any files you write will count against your container's memory limit.

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```
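If the cached data should live in RAM instead, the same pod can request a tmpfs-backed volume by setting the medium. A sketch based on the example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      # Mount a tmpfs; contents count against the container's memory limit.
      medium: Memory
```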

### hostPath

A `hostPath` volume mounts a file or directory from the host node's filesystem into your pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

For example, some uses for a `hostPath` are:

* running a container that needs access to Docker internals; use a `hostPath` of `/var/lib/docker`
* running cAdvisor in a container; use a `hostPath` of `/dev/cgroups`

Watch out when using this type of volume, because:

* pods with identical configuration (such as created from a podTemplate) may behave differently on different nodes due to different files on the nodes
* when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a `hostPath`
* the directories created on the underlying hosts are only writable by root. You either need to run your process as root in a [privileged container](/docs/user-guide/security-context) or modify the file permissions on the host to be able to write to a `hostPath` volume

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
```

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath
spec:
  containers:
  - image: myimage
    name: test-container
    volumeMounts:
    - mountPath: /test-hostpath
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /path/to/my/dir
```

### gcePersistentDisk

A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be "handed off" between pods.

__Important: You must create a PD using `gcloud` or the GCE API or UI before you can use it__

There are some restrictions when using a `gcePersistentDisk`:

* the nodes on which pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the PD

A feature of PDs is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.

Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.

#### Creating a PD

Before you can use a GCE PD with a pod, you need to create it.

```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```

### awsElasticBlockStore

An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS Volume](http://aws.amazon.com/ebs/) into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an EBS volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be "handed off" between pods.

__Important: You must create an EBS volume using `aws ec2 create-volume` or the AWS API before you can use it__

There are some restrictions when using an `awsElasticBlockStore` volume:

* the nodes on which pods are running must be AWS EC2 instances
* those instances need to be in the same region and availability-zone as the EBS volume
* EBS only supports a single EC2 instance mounting a volume

#### Creating an EBS volume

Before you can use an EBS volume with a pod, you need to create it.

```shell
aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
```

Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume type are suitable for your use!)

#### AWS EBS Example configuration

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
```

### nfs

An `nfs` volume allows an existing NFS (Network File System) share to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `nfs` volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be "handed off" between pods. NFS can be mounted by multiple writers simultaneously.

__Important: You must have your own NFS server running with the share exported before you can use it__

See the [NFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/nfs) for more details.
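A minimal sketch of a pod mounting an NFS share; the server name and exported path here are placeholders for your own export:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      # Placeholder server and export path; point these at your own NFS server.
      server: nfs-server.example.com
      path: /exported/path
```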

### iscsi

An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `iscsi` volume are preserved and the volume is merely unmounted. This means that an iSCSI volume can be pre-populated with data, and that data can be "handed off" between pods.

__Important: You must have your own iSCSI server running with the volume created before you can use it__

A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.

See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/iscsi) for more details.

### flocker

[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.

A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the dataset does not already exist in Flocker, it needs to be created first with the Flocker CLI or by using the Flocker API. If the dataset already exists, it will be reattached by Flocker to the node that the pod is scheduled to. This means data can be "handed off" between pods as required.

__Important: You must have your own Flocker installation running before you can use it__

See the [Flocker example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flocker) for more details.

### glusterfs

A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open source networked filesystem) volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `glusterfs` volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be "handed off" between pods. GlusterFS can be mounted by multiple writers simultaneously.

__Important: You must have your own GlusterFS installation running before you can use it__

See the [GlusterFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/glusterfs) for more details.

### rbd

An `rbd` volume allows a [Rados Block Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `rbd` volume are preserved and the volume is merely unmounted. This means that an RBD volume can be pre-populated with data, and that data can be "handed off" between pods.

__Important: You must have your own Ceph installation running before you can use RBD__

A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.

See the [RBD example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/rbd) for more details.

### cephfs

A `cephfs` volume allows an existing CephFS volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `cephfs` volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be "handed off" between pods. CephFS can be mounted by multiple writers simultaneously.

__Important: You must have your own Ceph server running with the share exported before you can use it__

See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/cephfs/) for more details.

### gitRepo

A `gitRepo` volume is an example of what can be done as a volume plugin. It mounts an empty directory and clones a git repository into it for your pod to use. In the future, such volumes may be moved to an even more decoupled model, rather than extending the Kubernetes API for every such use case.

Here is an example of a gitRepo volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```

### secret

A `secret` volume is used to pass sensitive information, such as passwords, to pods. You can store secrets in the Kubernetes API and mount them as files for use by pods without coupling to Kubernetes directly. `secret` volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.

__Important: You must create a secret in the Kubernetes API before you can use it__

Secrets are described in more detail [here](/docs/user-guide/secrets).
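A minimal sketch of a pod mounting a secret as files; the secret name `mysecret` is a placeholder and must already exist in the same namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /etc/secret-volume
      name: secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      # Placeholder; this secret must already exist.
      secretName: mysecret
```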

### persistentVolumeClaim

A `persistentVolumeClaim` volume is used to mount a [PersistentVolume](/docs/user-guide/persistent-volumes) into a pod. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

See the [PersistentVolumes example](/docs/user-guide/persistent-volumes/) for more details.
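A minimal sketch of a pod referencing a claim by name; the claim `myclaim` is a placeholder and must already exist in the same namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /var/www/html
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      # Placeholder; this PersistentVolumeClaim must already exist.
      claimName: myclaim
```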

### downwardAPI

A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files.

See the [`downwardAPI` volume example](/docs/user-guide/downward-api/volume/) for more details.
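A minimal sketch that exposes the pod's labels as a file at `/etc/podinfo/labels`; the pod and volume names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-test
  labels:
    app: downward-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /etc/podinfo
      name: podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # Write the pod's labels to a file named "labels".
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
```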

### FlexVolume

A `FlexVolume` enables users to mount vendor volumes into a pod. It requires that vendor drivers be installed in the volume plugin path on each kubelet node. This is an alpha feature and may change in the future.

More details are [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flexvolume/README.md).

### AzureFileVolume

An `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0) into a Pod.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_file/README.md).

### AzureDiskVolume

An `AzureDiskVolume` is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_disk/README.md).

### vsphereVolume

__Prerequisite: Kubernetes with vSphere Cloud Provider configured. For cloudprovider configuration please refer to the [vSphere getting started guide](http://kubernetes.io/docs/getting-started-guides/vsphere/).__

A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastores.

__Important: You must create a VMDK using one of the following methods before using it with a Pod.__

#### Creating a VMDK volume

* Create using vmkfstools.

  First ssh into ESX, then use the following command to create a VMDK:

  ```shell
  vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
  ```

* Create using vmware-vdiskmanager.

  ```shell
  vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
  ```

#### vSphere VMDK Example configuration

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: "[DatastoreName] volumes/myDisk"
      fsType: ext4
```

More examples can be found [here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere).

### Quobyte

A `Quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to be mounted into your pod.

__Important: You must have your own Quobyte setup running with the volumes created before you can use it__

See the [Quobyte example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/quobyte) for more details.

## Using subPath

Sometimes, it is useful to share one volume for multiple uses in a single pod. The `volumeMounts.subPath` property can be used to specify a sub-path inside the referenced volume instead of its root.

Here is an example of a pod with a LAMP stack (Linux, Apache, MySQL, PHP) using a single, shared volume. The HTML contents are mapped to its `html` folder, and the databases will be stored in its `mysql` folder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql
  - name: php
    image: php
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: html
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data
```

## Resources

The storage media (Disk, SSD, etc.) of an `emptyDir` volume is determined by the medium of the filesystem holding the kubelet root dir (typically `/var/lib/kubelet`). There is no limit on how much space an `emptyDir` or `hostPath` volume can consume, and no isolation between containers or between pods.

In the future, we expect that `emptyDir` and `hostPath` volumes will be able to request a certain amount of space using a [resource](/docs/user-guide/compute-resources) specification, and to select the type of media to use, for clusters that have several media types.

@ -55,7 +55,7 @@ The example below demonstrates the components of a StatefulSet.

* A Headless Service, named nginx, is used to control the network domain.
* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
* The volumeClaimTemplates will provide stable storage using [PersistentVolumes](/docs/concepts/storage/volumes/) provisioned by a PersistentVolume Provisioner.

@ -144,7 +144,7 @@ Note that Cluster Domain will be set to `cluster.local` unless

### Stable Storage

Kubernetes creates one [PersistentVolume](/docs/concepts/storage/volumes/) for each VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume with a storage class of `anything` and 1 GiB of provisioned storage. When a Pod is (re)scheduled onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
@ -227,7 +227,7 @@ validation error is thrown for any Container sharing a name with another.

### Resources

Given the ordering and execution for Init Containers, the following rules for resource usage apply:

* The highest of any particular resource request or limit defined on all Init Containers is the *effective init request/limit*

@ -274,7 +274,7 @@ spec:

* Get hands-on experience [configuring liveness and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).

* [Container Lifecycle Hooks](/docs/concepts/containers/container-lifecycle-hooks/)

{% endcapture %}

@ -173,10 +173,9 @@ The key is used for mutual authentication between the master and the joining node

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:

```shell
# kubectl taint nodes --all dedicated-
node "test-01" tainted
taint key="dedicated" and effect="" not found.
# MASTER_NODE="actual_master_node_name"
# TAINT_KEY=$(kubectl get no ${MASTER_NODE} --template="{{(index .spec.taints 0).key}}")
# kubectl taint nodes ${MASTER_NODE} ${TAINT_KEY}-
```

This will remove the "dedicated" taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.

@ -266,7 +266,7 @@ Some drivers will mount a host folder within the VM so that you can easily share

## Private Container Registries

To access a private container registry, follow the steps on [this page](http://kubernetes.io/docs/user-guide/images/).
To access a private container registry, follow the steps on [this page](/docs/concepts/containers/images/).

We recommend you use ImagePullSecrets, but if you would like to configure access on the minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory.
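The recommended ImagePullSecrets route can be sketched as follows; the secret name `regsecret` and the image path are hypothetical, and the secret must be created first (e.g. with `kubectl create secret docker-registry`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:1.0   # hypothetical private image
  imagePullSecrets:
  - name: regsecret                         # hypothetical pre-created secret
```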
@ -34,7 +34,7 @@ Documentation for how to use vSphere managed storage can be found in the

[persistent volumes user
guide](http://kubernetes.io/docs/user-guide/persistent-volumes/#vsphere) and the
[volumes user
guide](http://kubernetes.io/docs/user-guide/volumes/#vspherevolume)
guide](/docs/concepts/storage/volumes/#vspherevolume)

Examples can be found
[here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere)

@ -0,0 +1,92 @@

---
assignees:
- davidopp
- lavalamp
title: Cluster Administration Overview
---

The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](/docs/user-guide/).

* TOC
{:toc}

## Planning a cluster

There are many different examples of how to set up a Kubernetes cluster. Many of them are listed in this
[matrix](/docs/getting-started-guides/). We call each of the combinations in this matrix a *distro*.

Before choosing a particular guide, here are some things to consider:

- Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
  models are supported, but some distros are better for one case or the other.
- Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting
  one up yourself?
- Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We
  recommend setting up multiple clusters rather than spanning distant locations.
- Will you be running Kubernetes on "bare metal" or virtual machines? Kubernetes supports both, via different distros.
- Do you just want to run a cluster, or do you expect to do active development of Kubernetes project code? If the
  latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but
  others offer a greater variety of choices.
- Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of
  Kubernetes.
- If you are configuring Kubernetes on-premises, you will need to consider what [networking
  model](/docs/admin/networking) fits best.
- If you are designing for very high availability, you may want [clusters in multiple zones](/docs/admin/multi-cluster).
- You may want to familiarize yourself with the various
  [components](/docs/admin/cluster-components) needed to run a cluster.

## Setting up a cluster

Pick one of the Getting Started Guides from the [matrix](/docs/getting-started-guides/) and follow it.
If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.

One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](/docs/admin/ovs-networking)), which
uses OpenVSwitch to set up networking between pods across
Kubernetes nodes.

If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
project](/docs/admin/salt).

## Managing a cluster, including upgrades

[Managing a cluster](/docs/admin/cluster-management).

## Managing nodes

[Managing nodes](/docs/admin/node).

## Optional Cluster Services

* **DNS Integration with SkyDNS** ([dns.md](/docs/admin/dns)):
  Resolving a DNS name directly to a Kubernetes service.

* [**Cluster-level logging**](/docs/user-guide/logging/overview):
  Saving container logs to a central log store with search/browsing interface.

## Multi-tenant support

* **Resource Quota** ([resourcequota/](/docs/admin/resourcequota/))

## Security

* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](/docs/concepts/containers/container-lifecycle-hooks/)):
  Describes the environment for Kubelet managed containers on a Kubernetes
  node.

* **Securing access to the API Server** [accessing the api](/docs/admin/accessing-the-api)

* **Authentication** [authentication](/docs/admin/authentication)

* **Authorization** [authorization](/docs/admin/authorization)

* **Admission Controllers** [admission controllers](/docs/admin/admission-controllers)

* **Sysctls** [sysctls](/docs/admin/sysctls.md)

* **Audit** [audit](/docs/admin/audit)

* **Securing the kubelet**
  * [Master-Node communication](/docs/admin/master-node-communication/)
  * [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/)
  * [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)

@ -79,8 +79,8 @@ unless the Pod's grace period expires. For more details, see

{% capture whatsnext %}

* Learn more about [Container lifecycle hooks](/docs/user-guide/container-environment/).
* Learn more about the [lifecycle of a Pod](https://kubernetes.io/docs/user-guide/pod-states/).
* Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
* Learn more about the [lifecycle of a Pod](/docs/user-guide/pod-states/).

### Reference

@ -84,7 +84,7 @@ The output shows that nginx is serving the web page that was written by the init

* Learn more about
  [communicating between Containers running in the same Pod](/docs/tasks/configure-pod-container/communicate-containers-same-pod/).
* Learn more about [init Containers](/docs/user-guide/pods/init-container/).
* Learn more about [Volumes](/docs/user-guide/volumes/).
* Learn more about [Volumes](/docs/concepts/storage/volumes/).

{% endcapture %}

@ -9,7 +9,7 @@ This page shows how to configure a Pod to use a Volume for storage.

A Container's file system lives only as long as the Container does, so when a
Container terminates and restarts, changes to the filesystem are lost. For more
consistent storage that is independent of the Container, you can use a
[Volume](/docs/user-guide/volumes). This is especially important for stateful
[Volume](/docs/concepts/storage/volumes/). This is especially important for stateful
applications, such as key-value stores and databases. For example, Redis is a
key-value cache and store.

@ -27,7 +27,7 @@ key-value cache and store.

In this exercise, you create a Pod that runs one Container. This Pod has a
Volume of type
[emptyDir](/docs/user-guide/volumes/#emptydir)
[emptyDir](/docs/concepts/storage/volumes/#emptydir)
that lasts for the life of the Pod, even if the Container terminates and
restarts. Here is the configuration file for the Pod:
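The configuration file itself falls outside this excerpt; a minimal sketch of such a Pod with an `emptyDir` Volume is below. The names and mount path are illustrative, not necessarily the tutorial's actual file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis   # survives Container restarts within the Pod
  volumes:
  - name: redis-storage
    emptyDir: {}               # created when the Pod starts, deleted with the Pod
```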
@ -103,7 +103,7 @@ of `Always`.

supports many different network-attached storage solutions, including PD on
GCE and EBS on EC2, which are preferred for critical data, and will handle
details such as mounting and unmounting the devices on the nodes. See
[Volumes](/docs/user-guide/volumes) for more details.
[Volumes](/docs/concepts/storage/volumes/) for more details.

{% endcapture %}

@ -161,7 +161,7 @@ Here is a configuration file you can use to create a Pod:

{% capture whatsnext %}

* Learn more about [Secrets](/docs/user-guide/secrets/).
* Learn about [Volumes](/docs/user-guide/volumes/).
* Learn about [Volumes](/docs/concepts/storage/volumes/).

### Reference

@ -122,7 +122,7 @@ Create a Pod that uses your Secret, and verify that the Pod is running:

* Learn more about [Secrets](/docs/user-guide/secrets/).
* Learn more about
  [using a private registry](/docs/user-guide/images/#using-a-private-registry).
  [using a private registry](/docs/concepts/containers/images/#using-a-private-registry).
* See [kubectl create secret docker-registry](/docs/user-guide/kubectl/kubectl_create_secret_docker-registry/).
* See [Secret](/docs/api-reference/v1/definitions/#_v1_secret)
* See the `imagePullSecrets` field of

@ -75,7 +75,7 @@ Elasticsearch, and is part of a service named `kibana-logging`.

The Elasticsearch and Kibana services are both in the `kube-system` namespace
and are not directly exposed via a publicly reachable IP address. To reach them,
follow the instructions for [Accessing services running in a cluster](/docs/user-guide/accessing-the-cluster/#accessing-services-running-on-the-cluster).
follow the instructions for [Accessing services running in a cluster](/docs/concepts/cluster-administration/access-cluster/#accessing-services-running-on-the-cluster).

If you try accessing the `elasticsearch-logging` service in your browser, you'll
see a status page that looks something like this:

@ -36,7 +36,7 @@ it to [support other log format](/docs/admin/node-problem/#support-other-log-for

## Enable/Disable in GCE cluster

Node problem detector is [running as a cluster addon](cluster-large.md/#addon-resources) enabled by default in the
Node problem detector is [running as a cluster addon](/docs/admin/cluster-large/#addon-resources) enabled by default in the
GCE cluster.

You can enable/disable it by setting the environment variable
@ -194,8 +194,8 @@ and detects known kernel issues following predefined rules.

The Kernel Monitor matches kernel issues according to a predefined rule list in
[`config/kernel-monitor.json`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/config/kernel-monitor.json).
The rule list is extensible, and you can always extend it by [overwriting the
configuration](/docs/admin/node-problem/#overwrite-the-configuration).
The rule list is extensible, and you can always extend it by overwriting the
configuration.

### Add New NodeConditions

@ -134,7 +134,7 @@ docker push <username>/job-wq-2
```

You need to push to a public repository or [configure your cluster to be able to access
your private repository](/docs/user-guide/images).
your private repository](/docs/concepts/containers/images/).

If you are using [Google Container
Registry](https://cloud.google.com/tools/container-registry/), tag

@ -24,7 +24,7 @@ following Kubernetes concepts.

* [Pods](/docs/user-guide/pods/single-container/)
* [Cluster DNS](/docs/admin/dns/)
* [Headless Services](/docs/user-guide/services/#headless-services)
* [PersistentVolumes](/docs/user-guide/volumes/)
* [PersistentVolumes](/docs/concepts/storage/volumes/)
* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/)
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
* [kubectl CLI](/docs/user-guide/kubectl)
@ -265,7 +265,7 @@ www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO

www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s
```
The StatefulSet controller created two PersistentVolumeClaims that are
bound to two [PersistentVolumes](/docs/user-guide/volumes/). As the cluster used
bound to two [PersistentVolumes](/docs/concepts/storage/volumes/). As the cluster used
in this tutorial is configured to dynamically provision PersistentVolumes, the
PersistentVolumes were created and bound automatically.

@ -214,7 +214,7 @@ gcloud compute disks delete mysql-disk

* [kubectl run documentation](/docs/user-guide/kubectl/kubectl_run/)

* [Volumes](/docs/user-guide/volumes/) and [Persistent Volumes](/docs/user-guide/persistent-volumes/)
* [Volumes](/docs/concepts/storage/volumes/) and [Persistent Volumes](/docs/user-guide/persistent-volumes/)

{% endcapture %}

|
@ -25,7 +25,7 @@ Kubernetes concepts.
|
|||
* [Pods](/docs/user-guide/pods/single-container/)
|
||||
* [Cluster DNS](/docs/admin/dns/)
|
||||
* [Headless Services](/docs/user-guide/services/#headless-services)
|
||||
* [PersistentVolumes](/docs/user-guide/volumes/)
|
||||
* [PersistentVolumes](/docs/concepts/storage/volumes/)
|
||||
* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/)
|
||||
* [ConfigMaps](/docs/user-guide/configmap/)
|
||||
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
|
||||
|
|
|
@ -46,7 +46,7 @@ of configuration files.

Configuration data can be consumed in pods in a variety of ways. ConfigMaps can be used to:

1. Populate the value of environment variables
1. Populate the values of environment variables
2. Set command-line arguments in a container
3. Populate config files in a volume
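The first use can be sketched as follows; the ConfigMap name, key, and container command are illustrative assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # hypothetical ConfigMap
data:
  log.level: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $LOG_LEVEL"]
    env:
    - name: LOG_LEVEL        # populated from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log.level
```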
@ -1,105 +1,7 @@

---
assignees:
- mikedanese
- thockin
title: Container Lifecycle Hooks
---

This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode). In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.
{% include user-guide-content-moved.md %}

This cluster information makes it possible to build applications that are *cluster aware*.
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.

Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](/docs/user-guide/images) and one or more [volumes](/docs/user-guide/volumes).

The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.

* TOC
{:toc}

## Cluster Information

There are two types of information that are available within the container environment: information about the container itself, and information about other objects in the system.

### Container Information

Currently, the Pod name for the pod in which the container is running is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the `hostname` command, or the [gethostname][1] function call in libc), but this is planned to change in the future and should not be used.

The Pod name and namespace are also available as environment variables via the [downward API](/docs/user-guide/downward-api). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.

In the future, we anticipate expanding this information with richer information about the container. Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.

### Cluster Information

Currently, the list of all services that were running at the time the container was created via the Kubernetes Cluster API is available to the container as environment variables. The set of environment variables matches the syntax of Docker links.

For a service named **foo** that maps to a container port named **bar**, the following variables are defined:

```shell
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```

Services have dedicated IP addresses, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.

## Container Hooks

Container hooks provide information to the container about events in its management lifecycle. For example, immediately after a container is started, it receives a *PostStart* hook. These hooks are broadcast *into* the container with information about the life-cycle of the container. They are different from the events provided by Docker and other systems, which are *output* from the container. Output events provide a log of what has already happened. Input hooks provide real-time notification about things that are happening, but no historical log.

### Hook Details

There are currently two container hooks that are surfaced to containers:

*PostStart*

This hook is sent immediately after a container is created. It notifies the container that it has been created. No parameters are passed to the handler. It is NOT guaranteed that the hook will execute before the container entrypoint.

*PreStop*

This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](/docs/user-guide/pods/#termination-of-pods).

### Hook Handler Execution

When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. This means that for a `PostStart` hook, the container entrypoint and hook will fire asynchronously. However, if the hook takes a while to run or hangs, the container will never reach a "running" state. The behavior is similar for a `PreStop` hook. If the hook hangs during execution, the Pod phase will stay in a "running" state and never reach "failed". If a `PostStart` or `PreStop` hook fails, it will kill the container.

Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).

### Hook delivery guarantees

Hook delivery is intended to be "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this correctly.

We expect double delivery to be rare, but in some cases if the Kubelet restarts in the middle of sending a hook, the hook may be resent after the Kubelet comes back up.

Likewise, we only make a single delivery attempt. If (for example) an HTTP hook receiver is down and unable to take traffic, we do not make any attempts to resend.

Currently, there are (hopefully rare) scenarios where PostStart hooks may not be delivered.

### Hook Handler Implementations

Hook handlers are the way that hooks are surfaced to containers. Containers can select the type of hook handler they would like to implement. Kubernetes currently supports two different hook handler types:

* Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroups and namespaces of the container. Resources consumed by the command are counted against the container.

* HTTP - Executes an HTTP request against a specific endpoint on the container.
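Both handler types can be sketched in a pod spec. The commands and the HTTP endpoint below are illustrative assumptions, not from the source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        exec:                  # Exec handler: runs inside the container
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:               # HTTP handler: request against the container
          path: /shutdown      # hypothetical endpoint served by the app
          port: 80
```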

[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html

### Debugging Hook Handlers

Currently, the logs for a hook handler are not exposed in the pod events. If your handler fails for some reason, it will emit an event. For `PostStart`, this is the `FailedPostStartHook` event. For `PreStop` this is the `FailedPreStopHook` event. You can see these events by running `kubectl describe pod <pod_name>`. An example output of events from running this command is below:

```
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
```
[Container Lifecycle Hooks](/docs/concepts/containers/container-lifecycle-hooks/)

@ -80,61 +80,6 @@ The created Replica Set will ensure that there are three nginx Pods at all times

**Note:** You must specify appropriate selector and pod template labels of a Deployment (in this case, `app = nginx`), i.e. don't overlap with other controllers (including Deployments, Replica Sets, Replication Controllers, etc.). Kubernetes won't stop you from doing that, and if you end up with multiple controllers that have overlapping selectors, those controllers will fight with each other and won't behave correctly.

## The Status of a Deployment

After creating or updating a Deployment, you would want to confirm whether it succeeded or not. The simplest way to do this is through `kubectl rollout status`.

```shell
$ kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
```

This verifies the Deployment's `.status.observedGeneration` >= `.metadata.generation`, and that its up-to-date replicas
(`.status.updatedReplicas`) match the desired replicas (`.spec.replicas`), to determine if the rollout succeeded.
It also expects that the available replicas running (`.status.availableReplicas`) will be at least the minimum required
based on the Deployment strategy. If the rollout is still in progress, it watches for Deployment status changes and
prints related messages.

```shell
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 9 of 10 updated replicas are available...
deployment "nginx-deployment" successfully rolled out
```

For more information about the status of a Deployment [read more here](#deployment-status).

## Updating a Deployment

@ -209,7 +209,7 @@ Notice that we don't delete the pod directly. With kubectl we want to delete the

#### docker login

There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/docs/user-guide/images/#using-a-private-registry).
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/docs/concepts/containers/images/#using-a-private-registry).

#### docker version

@ -1,320 +1,7 @@

---
assignees:
- erictune
- thockin
title: Images
---

Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/engine/tutorials/dockerimages/).
{% include user-guide-content-moved.md %}

You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.

The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.

* TOC
{:toc}

## Updating Images

The default pull policy is `IfNotPresent`, which causes the Kubelet to not
pull an image if it already exists. If you would like to always force a pull,
you must set a pull image policy of `Always` or specify a `:latest` tag on
your image.

If you do not specify a tag for your image, it is assumed to be `:latest`, with
a pull image policy of `Always` accordingly.

Note that you should avoid using the `:latest` tag; see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images) for more information.
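Forcing a pull on every container start, as described above, is a one-line fragment; the image name is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-demo
spec:
  containers:
  - name: app
    image: myrepo/myapp:1.0   # hypothetical image; pinned tag, not :latest
    imagePullPolicy: Always   # pull the image on every container start
```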
## Using a Private Registry
|
||||
|
||||
Private registries may require keys to read images from them.
|
||||
Credentials can be provided in several ways:
|
||||
|
||||
- Using Google Container Registry
|
||||
- Per-cluster
|
||||
- automatically configured on Google Compute Engine or Google Container Engine
|
||||
- all pods can read the project's private registry
|
||||
- Using AWS EC2 Container Registry (ECR)
|
||||
- use IAM roles and policies to control access to ECR repositories
|
||||
- automatically refreshes ECR login credentials
|
||||
- Using Azure Container Registry (ACR)
|
||||
- Configuring Nodes to Authenticate to a Private Registry
|
||||
- all pods can read any configured private registries
|
||||
- requires node configuration by cluster administrator
|
||||
- Pre-pulling Images
|
||||
- all pods can use any images cached on a node
|
||||
- requires root access to all nodes to setup
|
||||
- Specifying ImagePullSecrets on a Pod
|
||||
- only pods which provide own keys can access the private registry
|
||||
Each option is described in more detail below.
|
||||
|
||||
|
||||
### Using Google Container Registry

Kubernetes has native support for the [Google Container
Registry (GCR)](https://cloud.google.com/tools/container-registry/) when running on Google Compute
Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply
use the full image name (e.g. `gcr.io/my_project/image:tag`).

All pods in a cluster will have read access to images in this registry.

The kubelet will authenticate to GCR using the instance's
Google service account. The service account on the instance
will have the `https://www.googleapis.com/auth/devstorage.read_only` scope,
so it can pull from the project's GCR, but not push.

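As a sketch (the project ID `my_project` and the image `hello:1.0` are assumptions; the image must already be pushed to the project's GCR), a pod can reference the image directly:

```shell
# Full GCR image reference: registry host / project ID / image:tag.
PROJECT="my_project"                   # assumption: your GCE/GKE project ID
IMAGE="gcr.io/${PROJECT}/hello:1.0"    # assumption: an image already pushed to GCR

# No imagePullSecrets are needed on GCE/GKE; the node's service account
# grants read-only access to the project's registry.
kubectl run gcr-test --image="${IMAGE}"
```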
### Using AWS EC2 Container Registry

Kubernetes has native support for the [AWS EC2 Container
Registry](https://aws.amazon.com/ecr/) when nodes are AWS EC2 instances.

Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)
in the Pod definition.

All users of the cluster who can create pods will be able to run pods that use any of the
images in the ECR registry.

The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:

- `ecr:GetAuthorizationToken`
- `ecr:BatchCheckLayerAvailability`
- `ecr:GetDownloadUrlForLayer`
- `ecr:GetRepositoryPolicy`
- `ecr:DescribeRepositories`
- `ecr:ListImages`
- `ecr:BatchGetImage`

Requirements:

- You must be using kubelet version `v1.2.0` or newer. (e.g. run `/usr/bin/kubelet --version=true`).
- If your nodes are in region A and your registry is in a different region B, you need version `v1.3.0` or newer.
- ECR must be offered in your region.

Troubleshooting:

- Verify all requirements above.
- Get $REGION (e.g. `us-west-2`) credentials on your workstation. SSH into the host and run Docker manually with those credentials. Does it work?
- Verify the kubelet is running with `--cloud-provider=aws`.
- Check kubelet logs (e.g. `journalctl -t kubelet`) for log lines like:
  - `plugins.go:56] Registering credential provider: aws-ecr-key`
  - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`

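For the manual Docker check in the troubleshooting steps above, a sketch (account ID, region, and repository name are placeholders; `aws ecr get-login` prints a temporary `docker login` command):

```shell
REGION="us-west-2"        # assumption: your registry's region
ACCOUNT="123456789012"    # assumption: your AWS account ID

# Obtain and run a temporary docker login command from ECR.
$(aws ecr get-login --region "${REGION}")

# Then try pulling an image the same way the kubelet would.
docker pull "${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/my-app:v1"
```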
### Using Azure Container Registry (ACR)

When using [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)
you can authenticate using either an admin user or a service principal.
In either case, authentication is done via standard Docker authentication. These instructions assume the
[azure-cli](https://github.com/azure/azure-cli) command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in
the [Azure container registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).

Once you have created your container registry, you will use the following credentials to log in:

   * `DOCKER_USER` : service principal, or admin username
   * `DOCKER_PASSWORD`: service principal password, or admin user password
   * `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
   * `DOCKER_EMAIL`: `${some-email-address}`

Once you have those variables filled in you can [configure a Kubernetes Secret and use it to deploy a Pod](http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod).

### Configuring Nodes to Authenticate to a Private Repository

**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.

**Note:** if you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will
manage and update the ECR login credentials. You cannot use this approach.

**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.

Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put this
in the `$HOME` of user `root` on a kubelet, then Docker will use it.

Here are the recommended steps to configure your nodes to use a private registry. In this
example, run these on your desktop/laptop:

   1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
   1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
   1. Get a list of your nodes; for example:
      - if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}')`
      - if you want to get the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
   1. Copy your local `.docker/config.json` to the home directory of root on each node.
      - for example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`

Verify by creating a pod that uses a private image, e.g.:

```shell
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
```

If everything is working, then, after a few moments, you should see:

```shell
$ kubectl logs private-image-test-1
SUCCESS
```

If it failed, then you will see:

```shell
$ kubectl describe pods/private-image-test-1 | grep "Failed"
  Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed        Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```

You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.docker/config.json` or mount a drive that contains it.

All pods will have read access to images in any private registry once private
registry keys are added to the `.docker/config.json`.

**This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.
It should also work for a private registry such as quay.io, but that has not been tested.**

### Pre-pulling Images

**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
with credentials for Google Container Registry. You cannot use this approach.

**Note:** this approach is suitable if you can control node configuration. It
will not work reliably on GCE, or any other cloud provider that does automatic
node replacement.

By default, the kubelet will try to pull each image from the specified registry.
However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`,
then a local image is used (preferentially or exclusively, respectively).

If you want to rely on pre-pulled images as a substitute for registry authentication,
you must ensure all nodes in the cluster have the same pre-pulled images.

This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.

All pods will have read access to any pre-pulled images.

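A sketch of pre-pulling an image onto every node, reusing the `$nodes` list built in the previous section (the image name is a placeholder, and SSH access as root is assumed):

```shell
IMAGE="myregistry.example.com/my-app:v1"   # assumption: your private image

# Pull the image on each node so pods using imagePullPolicy: IfNotPresent
# (or Never) can start without registry credentials on the node.
for n in $nodes; do
  ssh "root@$n" docker pull "$IMAGE"
done
```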
### Specifying ImagePullSecrets on a Pod

**Note:** This approach is currently the recommended approach for GKE, GCE, and any cloud providers
where node creation is automated.

Kubernetes supports specifying registry keys on a pod.

#### Creating a Secret with a Docker Config

Run the following command, substituting the appropriate uppercase values:

```shell
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
```

If you need access to multiple registries, you can create one secret for each registry.
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
when pulling images for your Pods.

Pods can only reference image pull secrets in their own namespace,
so this process needs to be done one time per namespace.

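Because the secret is namespaced, one way to repeat the step for every namespace that needs it is a simple loop (the namespace names and the uppercase credential values are placeholders):

```shell
# Create the same registry secret in each namespace that pulls private images.
for ns in dev staging prod; do    # assumption: your namespaces
  kubectl --namespace="$ns" create secret docker-registry myregistrykey \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL
done
```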
##### Bypassing kubectl create secrets

If for some reason you need multiple items in a single `.docker/config.json`, or need
control not given by the above command, then you can [create a secret using
json or yaml](/docs/user-guide/secrets/#creating-a-secret-manually).

Be sure to:

- set the name of the data item to `.dockerconfigjson`
- base64 encode the Docker config file and paste that string, unbroken,
  as the value for field `data[".dockerconfigjson"]`
- set `type` to `kubernetes.io/dockerconfigjson`
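For the base64 step, a minimal sketch (`tr` removes line wraps so the string stays unbroken; the path is the standard Docker config location):

```shell
# Emit the Docker config file as a single unbroken base64 line.
base64 ~/.docker/config.json | tr -d '\n'
```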

Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
```

If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, it means
the data was successfully base64-decoded, but could not be parsed as a `.docker/config.json` file.

#### Referring to imagePullSecrets on a Pod

Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
```

This needs to be done for each pod that is using a private registry.

However, setting of this field can be automated by setting the `imagePullSecrets`
field in a [serviceAccount](/docs/user-guide/service-accounts) resource.

You can use this in conjunction with a per-node `.docker/config.json`. The credentials
will be merged. This approach will work on Google Container Engine (GKE).

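A sketch of attaching the secret to the `default` service account in a namespace, so pods using that service account get the field automatically (the namespace matches the earlier example; the patch shape is an assumption about your cluster version):

```shell
# Add the registry secret to the namespace's default service account.
kubectl --namespace=awesomeapps patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```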
### Use Cases

There are a number of solutions for configuring private registries. Here are some
common use cases and suggested solutions.

   1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
      - Use public images on the Docker hub.
        - no configuration required
        - on GCE/GKE, a local mirror is automatically used for improved speed and availability
   1. Cluster running some proprietary images which should be hidden to those outside the company, but
      visible to all cluster users.
      - Use a hosted private [Docker registry](https://docs.docker.com/registry/).
        - may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere.
        - manually configure `.docker/config.json` on each node as described above
      - Or, run an internal private registry behind your firewall with open read access.
        - no Kubernetes configuration required
      - Or, when on GCE/GKE, use the project's Google Container Registry.
        - will work better with cluster autoscaling than manual node configuration
      - Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
   1. Cluster with proprietary images, a few of which require stricter access control.
      - Ensure the [AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages) is active; otherwise, all Pods potentially have access to all images.
      - Move sensitive data into a "Secret" resource, instead of packaging it in an image.
   1. A multi-tenant cluster where each tenant needs its own private registry.
      - Ensure the [AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages) is active; otherwise, all Pods of all tenants potentially have access to all images.
      - Run a private registry with authorization required.
      - Generate registry credentials for each tenant, put them into a secret, and populate the secret to each tenant namespace.
      - The tenant adds that secret to the imagePullSecrets of each namespace.

[Images](/docs/concepts/containers/images/)

@@ -55,7 +55,7 @@ Before running examples in the user guides, please ensure you have completed [in
 [**Service**](/docs/user-guide/services/)
 : A service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name.
 
-[**Volume**](/docs/user-guide/volumes/)
+[**Volume**](/docs/concepts/storage/volumes/)
 : A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/), adding provisioning of the volume directory and/or device.
 
 [**Secret**](/docs/user-guide/secrets/)

@@ -79,11 +79,11 @@ API resources
 Pods and containers
 
 * [Pod lifecycle and restart policies](/docs/user-guide/pod-states/)
-* [Lifecycle hooks](/docs/user-guide/container-environment/)
+* [Lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/)
 * [Compute resources, such as cpu and memory](/docs/user-guide/compute-resources/)
 * [Specifying commands and requesting capabilities](/docs/user-guide/containers/)
 * [Downward API: accessing system configuration from a pod](/docs/user-guide/downward-api/)
-* [Images and registries](/docs/user-guide/images/)
+* [Images and registries](/docs/concepts/containers/images/)
 * [Migrating from docker-cli to kubectl](/docs/user-guide/docker-cli-to-kubectl/)
 * [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/)
 * [Assign pods to selected nodes](/docs/user-guide/node-selection/)

@@ -11,7 +11,7 @@ kubectl controls the Kubernetes cluster manager
 
 kubectl controls the Kubernetes cluster manager.
 
-Find more information at https://github.com/kubernetes/kubernetes.
+Find more information at [https://github.com/kubernetes/kubernetes](https://github.com/kubernetes/kubernetes).
 
 ```
 kubectl

@@ -64,16 +64,8 @@ $ kubectl --namespace=<insert-namespace-name-here> get pods
 You can permanently save the namespace for all subsequent kubectl commands in that
 context.
 
-First get your current context:
-
-```shell
-$ export CONTEXT=$(kubectl config view | awk '/current-context/ {print $2}')
-```
-
-Then update the default namespace:
-
 ```shell
-$ kubectl config set-context $CONTEXT --namespace=<insert-namespace-name-here>
+$ kubectl config set-context $(kubectl config current-context) --namespace=<insert-namespace-name-here>
+# Validate it
+$ kubectl config view | grep namespace:
 ```

@@ -7,7 +7,7 @@ assignees:
 title: Persistent Volumes
 ---
 
-This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/user-guide/volumes/) is suggested.
+This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
 
 * TOC
 {:toc}

@@ -41,7 +41,7 @@ This doc assumes familiarity with the following Kubernetes concepts:
 * [Pods](/docs/user-guide/pods/single-container/)
 * [Cluster DNS](/docs/admin/dns/)
 * [Headless Services](/docs/user-guide/services/#headless-services)
-* [Persistent Volumes](/docs/user-guide/volumes/)
+* [Persistent Volumes](/docs/concepts/storage/volumes/)
 * [Persistent Volume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md)
 
 You need a working Kubernetes cluster at version >= 1.3, with a healthy DNS [cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md) at version >= 15. You cannot use PetSet on a hosted Kubernetes provider that has disabled `alpha` resources.

@@ -163,5 +163,6 @@ following
 
 ## Working With RBAC
 
-In Kubernetes 1.5 and newer, you can use PodSecurityPolicy to control access to privileged containers based on user role and groups.
-(see [more details](https://github.com/kubernetes/kubernetes/blob/master/examples/podsecuritypolicy/rbac/README.md)).
+In Kubernetes 1.5 and newer, you can use PodSecurityPolicy to control access to privileged containers based on user role and groups. Access to different PodSecurityPolicy objects can be controlled via authorization. To limit access to PodSecurityPolicy objects for pods created via a Deployment, ReplicaSet, etc, the [Controller Manager](/docs/admin/kube-controller-manager/) must be run against the secured API port, and must not have superuser permissions.
+
+PodSecurityPolicy authorization uses the union of all policies available to the user creating the pod and the service account specified on the pod. When pods are created via a Deployment, ReplicaSet, etc, it is Controller Manager that creates the pod, so if it is running against the unsecured API port, all PodSecurityPolicy objects would be allowed, and you could not effectively subdivide access. Access to given PSP policies for a user will be effective only when deploying Pods directly. For more details, see the [PodSecurityPolicy RBAC example](https://github.com/kubernetes/kubernetes/blob/master/examples/podsecuritypolicy/rbac/README.md) of applying PodSecurityPolicy to control access to privileged containers based on role and groups when deploying Pods directly.

@@ -40,7 +40,7 @@ filesystem.
 
 In terms of [Docker](https://www.docker.com/) constructs, a pod is modelled as
 a group of Docker containers with shared namespaces and shared
-[volumes](/docs/user-guide/volumes/). PID namespace sharing is not yet implemented in Docker.
+[volumes](/docs/concepts/storage/volumes/). PID namespace sharing is not yet implemented in Docker.
 
 Like individual application containers, pods are considered to be relatively
 ephemeral (rather than durable) entities. As discussed in [life of a

@@ -162,7 +162,7 @@ An example flow:
 2. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period.
 3. Pod shows up as "Terminating" when listed in client commands
 4. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.
-  1. If the pod has defined a [preStop hook](/docs/user-guide/container-environment/#hook-details), it is invoked inside of the pod. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
+  1. If the pod has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the pod. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
   2. The processes in the Pod are sent the TERM signal.
 5. (simultaneous with 3), Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
 6. When the grace period expires, any processes still running in the Pod are killed with SIGKILL.

@@ -22,7 +22,7 @@ more control over how it is used, and reduces the risk of accidental exposure.
 Users can create secrets, and the system also creates some secrets.
 
 To use a secret, a pod needs to reference the secret.
-A secret can be used with a pod in two ways: as files in a [volume](/docs/user-guide/volumes) mounted on one or more of
+A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of
 its containers, or used by kubelet when pulling images for the pod.
 
 ### Built-in Secrets

@@ -428,7 +428,7 @@ password to the Kubelet so it can pull a private image on behalf of your Pod.
 
 **Manually specifying an imagePullSecret**
 
-Use of imagePullSecrets is described in the [images documentation](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod)
+Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
 
 ### Arranging for imagePullSecrets to be Automatically Attached

@@ -138,7 +138,7 @@ namespace: 7 bytes
 
 ## Adding ImagePullSecrets to a service account
 
-First, create an imagePullSecret, as described [here](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod)
+First, create an imagePullSecret, as described [here](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
 Next, verify it has been created. For example:
 
 ```shell

@@ -68,7 +68,7 @@ The deploy wizard expects that you provide the following information:
 
 The application name must be unique within the selected Kubernetes [namespace](/docs/admin/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored.
 
-- **Container image** (mandatory): The URL of a public Docker [container image](/docs/user-guide/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub). The container image specification must end with a colon.
+- **Container image** (mandatory): The URL of a public Docker [container image](/docs/concepts/containers/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub). The container image specification must end with a colon.
 
 - **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer.
 
@@ -104,7 +104,7 @@ track=stable
 
 - **Image Pull Secret**: In case the specified Docker container image is private, it may require [pull secret](/docs/user-guide/secrets/) credentials.
 
-  Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) file. The secret name may consist of a maximum of 253 characters.
+  Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) file. The secret name may consist of a maximum of 253 characters.
 
 In case the creation of the image pull secret is successful, it is selected by default. If the creation fails, no secret is applied.

@@ -1,578 +1,7 @@
---
assignees:
- jsafrane
- mikedanese
- saad-ali
- thockin
title: Volumes
---

On-disk files in a container are ephemeral, which presents some problems for
non-trivial applications when running in containers. First, when a container
crashes, kubelet will restart it, but the files will be lost - the
container starts with a clean state. Second, when running containers together
in a `Pod` it is often necessary to share files between those containers. The
Kubernetes `Volume` abstraction solves both of these problems.

{% include user-guide-content-moved.md %}

Familiarity with [pods](/docs/user-guide/pods) is suggested.

* TOC
{:toc}

## Background

Docker also has a concept of
[volumes](https://docs.docker.com/userguide/dockervolumes/), though it is
somewhat looser and less managed. In Docker, a volume is simply a directory on
disk or in another container. Lifetimes are not managed and until very
recently there were only local-disk-backed volumes. Docker now provides volume
drivers, but the functionality is very limited for now (e.g. as of Docker 1.7
only one volume driver is allowed per container and there is no way to pass
parameters to volumes).

A Kubernetes volume, on the other hand, has an explicit lifetime - the same as
the pod that encloses it. Consequently, a volume outlives any containers that run
within the Pod, and data is preserved across Container restarts. Of course, when a
Pod ceases to exist, the volume will cease to exist, too. Perhaps more
importantly than this, Kubernetes supports many types of volumes, and a Pod can
use any number of them simultaneously.

At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular
volume type used.

To use a volume, a pod specifies what volumes to provide for the pod (the
[`spec.volumes`](http://kubernetes.io/kubernetes/third_party/swagger-ui/#!/v1/createPod)
field) and where to mount those into containers (the
[`spec.containers.volumeMounts`](http://kubernetes.io/kubernetes/third_party/swagger-ui/#!/v1/createPod)
field).

A process in a container sees a filesystem view composed from its Docker
image and volumes. The [Docker
image](https://docs.docker.com/userguide/dockerimages/) is at the root of the
filesystem hierarchy, and any volumes are mounted at the specified paths within
the image. Volumes cannot mount onto other volumes or have hard links to
other volumes. Each container in the Pod must independently specify where to
mount each volume.

## Types of Volumes

Kubernetes supports several types of Volumes:

* `emptyDir`
* `hostPath`
* `gcePersistentDisk`
* `awsElasticBlockStore`
* `nfs`
* `iscsi`
* `flocker`
* `glusterfs`
* `rbd`
* `cephfs`
* `gitRepo`
* `secret`
* `persistentVolumeClaim`
* `downwardAPI`
* `azureFileVolume`
* `azureDisk`
* `vsphereVolume`
* `Quobyte`

We welcome additional contributions.

### emptyDir

An `emptyDir` volume is first created when a Pod is assigned to a Node, and
exists as long as that Pod is running on that node. As the name says, it is
initially empty. Containers in the pod can all read and write the same
files in the `emptyDir` volume, though that volume can be mounted at the same
or different paths in each container. When a Pod is removed from a node for
any reason, the data in the `emptyDir` is deleted forever. NOTE: a container
crashing does *NOT* remove a pod from a node, so the data in an `emptyDir`
volume is safe across container crashes.

Some uses for an `emptyDir` are:

* scratch space, such as for a disk-based merge sort
* checkpointing a long computation for recovery from crashes
* holding files that a content-manager container fetches while a webserver
  container serves the data

By default, `emptyDir` volumes are stored on whatever medium is backing the
machine - that might be disk or SSD or network storage, depending on your
environment. However, you can set the `emptyDir.medium` field to `"Memory"`
to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead.
While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
machine reboot and any files you write will count against your container's
memory limit.

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```

### hostPath

A `hostPath` volume mounts a file or directory from the host node's filesystem
into your pod. This is not something that most Pods will need, but it offers a
powerful escape hatch for some applications.

For example, some uses for a `hostPath` are:

* running a container that needs access to Docker internals; use a `hostPath`
  of `/var/lib/docker`
* running cAdvisor in a container; use a `hostPath` of `/dev/cgroups`

Watch out when using this type of volume, because:

* pods with identical configuration (such as created from a podTemplate) may
  behave differently on different nodes due to different files on the nodes
* when Kubernetes adds resource-aware scheduling, as is planned, it will not be
  able to account for resources used by a `hostPath`
* the directories created on the underlying hosts are only writable by root. You
  either need to run your process as root in a
  [privileged container](/docs/user-guide/security-context) or modify the file
  permissions on the host to be able to write to a `hostPath` volume

#### Example pod
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pd
|
||||
spec:
|
||||
containers:
|
||||
- image: gcr.io/google_containers/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-pd
|
||||
name: test-volume
|
||||
volumes:
|
||||
- name: test-volume
|
||||
hostPath:
|
||||
# directory location on host
|
||||
path: /data
|
||||
```
|
||||
### gcePersistentDisk

A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent
Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be "handed off" between pods.

__Important: You must create a PD using `gcloud` or the GCE API or UI
before you can use it__

There are some restrictions when using a `gcePersistentDisk`:

* the nodes on which pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the PD

A feature of PDs is that they can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.

Using a PD on a pod controlled by a ReplicationController will fail unless
the PD is read-only or the replica count is 0 or 1.

#### Creating a PD

Before you can use a GCE PD with a pod, you need to create it.

```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```

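Because PDs support multiple read-only consumers, pods that only read the pre-populated data can all mount the same disk with `readOnly` set. A minimal sketch, reusing the disk name from the example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-reader
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
      readOnly: true
  volumes:
  - name: test-volume
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
      # readOnly lets this PD be mounted by many pods simultaneously.
      readOnly: true
```
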
### awsElasticBlockStore

An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS
Volume](http://aws.amazon.com/ebs/) into your pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of an EBS
volume are preserved and the volume is merely unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be "handed off"
between pods.

__Important: You must create an EBS volume using `aws ec2 create-volume` or
the AWS API before you can use it__

There are some restrictions when using an `awsElasticBlockStore` volume:

* the nodes on which pods are running must be AWS EC2 instances
* those instances need to be in the same region and availability zone as the EBS volume
* EBS only supports a single EC2 instance mounting a volume

#### Creating an EBS volume

Before you can use an EBS volume with a pod, you need to create it.

```shell
aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
```

Make sure the zone matches the zone you brought up your cluster in. (And also
check that the size and EBS volume type are suitable for your use!)

#### AWS EBS Example configuration

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
```

### nfs

An `nfs` volume allows an existing NFS (Network File System) share to be
mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
removed, the contents of an `nfs` volume are preserved and the volume is merely
unmounted. This means that an NFS volume can be pre-populated with data, and
that data can be "handed off" between pods. NFS can be mounted by multiple
writers simultaneously.

__Important: You must have your own NFS server running with the share exported
before you can use it__

See the [NFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/nfs) for more details.

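As a minimal sketch, an `nfs` volume names the server and exported path; the address and path below are placeholders for your own NFS setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-nfs
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      # This NFS server must already export this path.
      server: nfs-server.example.com
      path: /exported/path
```
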
### iscsi

An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted
into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the
contents of an `iscsi` volume are preserved and the volume is merely
unmounted. This means that an iSCSI volume can be pre-populated with data, and
that data can be "handed off" between pods.

__Important: You must have your own iSCSI server running with the volume
created before you can use it__

A feature of iSCSI is that it can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a volume with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
iSCSI volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.

See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/iscsi) for more details.

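For reference, a read-only `iscsi` mount might look like the following sketch; the portal address, IQN, and LUN are placeholders for your own target:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-iscsi
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-iscsi
      name: iscsi-volume
  volumes:
  - name: iscsi-volume
    iscsi:
      # This target and LUN must already exist on your iSCSI server.
      targetPortal: 10.0.2.15:3260
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
      lun: 0
      fsType: ext4
      readOnly: true
```
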
### flocker
|
||||
|
||||
[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management
|
||||
and orchestration of data volumes backed by a variety of storage backends.
|
||||
|
||||
A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the
|
||||
dataset does not already exist in Flocker, it needs to be first created with the Flocker
|
||||
CLI or by using the Flocker API. If the dataset already exists it will be
|
||||
reattached by Flocker to the node that the pod is scheduled. This means data
|
||||
can be "handed off" between pods as required.
|
||||
|
||||
__Important: You must have your own Flocker installation running before you can use it__
|
||||
|
||||
See the [Flocker example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flocker) for more details.
|
||||
|
||||
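A minimal sketch of referencing a Flocker dataset by name (the dataset name below is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-flocker
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-flocker
      name: flocker-volume
  volumes:
  - name: flocker-volume
    flocker:
      # This dataset must already exist in Flocker.
      datasetName: my-flocker-dataset
```
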
### glusterfs

A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open
source networked filesystem) volume to be mounted into your pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a
`glusterfs` volume are preserved and the volume is merely unmounted. This
means that a glusterfs volume can be pre-populated with data, and that data can
be "handed off" between pods. GlusterFS can be mounted by multiple writers
simultaneously.

__Important: You must have your own GlusterFS installation running before you
can use it__

See the [GlusterFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/glusterfs) for more details.

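As a sketch, a `glusterfs` volume references a Kubernetes Endpoints object that lists the Gluster servers, plus the Gluster volume name (both names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-glusterfs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-glusterfs
      name: glusterfs-volume
  volumes:
  - name: glusterfs-volume
    glusterfs:
      # "endpoints" names an Endpoints object listing the GlusterFS
      # servers; the Gluster volume must already exist on them.
      endpoints: glusterfs-cluster
      path: kube_vol
      readOnly: true
```
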
### rbd

An `rbd` volume allows a [Rados Block
Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of
an `rbd` volume are preserved and the volume is merely unmounted. This
means that an RBD volume can be pre-populated with data, and that data can
be "handed off" between pods.

__Important: You must have your own Ceph installation running before you
can use RBD__

A feature of RBD is that it can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a volume with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
RBD volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.

See the [RBD example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/rbd) for more details.

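A sketch of a read-only `rbd` mount; the monitor address, pool, image, and keyring path are placeholders for your own Ceph cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-rbd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-rbd
      name: rbd-volume
  volumes:
  - name: rbd-volume
    rbd:
      # This image must already exist in the Ceph pool.
      monitors:
      - 10.16.154.78:6789
      pool: kube
      image: foo
      user: admin
      keyring: /etc/ceph/keyring
      fsType: ext4
      readOnly: true
```
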
### cephfs

A `cephfs` volume allows an existing CephFS volume to be
mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
removed, the contents of a `cephfs` volume are preserved and the volume is merely
unmounted. This means that a CephFS volume can be pre-populated with data, and
that data can be "handed off" between pods. CephFS can be mounted by multiple
writers simultaneously.

__Important: You must have your own Ceph server running with the share exported
before you can use it__

See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/cephfs/) for more details.

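A minimal sketch; the monitor address and secret file path are placeholders for your own Ceph setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-cephfs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-cephfs
      name: cephfs-volume
  volumes:
  - name: cephfs-volume
    cephfs:
      # The Ceph monitors and exported share must already exist.
      monitors:
      - 10.16.154.78:6789
      user: admin
      secretFile: /etc/ceph/admin.secret
      readOnly: true
```
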
### gitRepo

A `gitRepo` volume is an example of what can be done as a volume plugin. It
mounts an empty directory and clones a git repository into it for your pod to
use. In the future, such volumes may be moved to an even more decoupled model,
rather than extending the Kubernetes API for every such use case.

Here is an example of a `gitRepo` volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```

### secret

A `secret` volume is used to pass sensitive information, such as passwords, to
pods. You can store secrets in the Kubernetes API and mount them as files for
use by pods without coupling to Kubernetes directly. `secret` volumes are
backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.

__Important: You must create a secret in the Kubernetes API before you can use
it__

Secrets are described in more detail [here](/docs/user-guide/secrets).

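A minimal sketch of mounting a secret as files; `mysecret` is a placeholder for a Secret that already exists in the pod's namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-secret
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /etc/secret-volume
      name: secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      # Each key in the secret becomes a file under the mount path.
      secretName: mysecret
```
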
### persistentVolumeClaim

A `persistentVolumeClaim` volume is used to mount a
[PersistentVolume](/docs/user-guide/persistent-volumes) into a pod. PersistentVolumes are a
way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.

See the [PersistentVolumes example](/docs/user-guide/persistent-volumes/) for more
details.

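A minimal sketch; `myclaim` is a placeholder for a PersistentVolumeClaim that already exists in the pod's namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pvc
      name: pvc-volume
  volumes:
  - name: pvc-volume
    persistentVolumeClaim:
      # The claim must already exist and be bound to a PersistentVolume.
      claimName: myclaim
```
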
### downwardAPI

A `downwardAPI` volume is used to make downward API data available to applications.
It mounts a directory and writes the requested data in plain text files.

See the [`downwardAPI` volume example](/docs/user-guide/downward-api/volume/) for more details.

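A sketch of exposing the pod's labels as a plain text file; the mount path and file name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-downwardapi
  labels:
    zone: us-east-coast
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /etc/podinfo
      name: podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # Writes the pod's labels as plain text to /etc/podinfo/labels.
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
```
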
### FlexVolume

A `FlexVolume` enables users to mount vendor volumes into a pod. It expects vendor
drivers to be installed in the volume plugin path on each kubelet node. This is
an alpha feature and may change in the future.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flexvolume/README.md)

### AzureFileVolume

An `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
into a Pod.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_file/README.md)

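A minimal sketch; the Secret is assumed to hold the Azure storage account name and key, and the file share must already exist (the names below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-azurefile
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-azurefile
      name: azure-volume
  volumes:
  - name: azure-volume
    azureFile:
      # secretName references a Secret containing the Azure storage
      # account name and key; shareName is the existing file share.
      secretName: azure-secret
      shareName: k8stest
      readOnly: false
```
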
### AzureDiskVolume

An `AzureDiskVolume` is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod.

More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_disk/README.md)

### vsphereVolume

__Prerequisite: Kubernetes with vSphere Cloud Provider configured.
For cloud provider configuration, please refer to the [vSphere getting started guide](http://kubernetes.io/docs/getting-started-guides/vsphere/).__

A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents
of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastores.

__Important: You must create a VMDK using one of the following methods before using it with a pod.__

#### Creating a VMDK volume

* Create using `vmkfstools`.

First ssh into ESX, then use the following command to create the vmdk:

```shell
vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
```

* Create using `vmware-vdiskmanager`.

```shell
vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
```

#### vSphere VMDK Example configuration

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: "[DatastoreName] volumes/myDisk"
      fsType: ext4
```

More examples can be found [here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere).

### Quobyte

A `Quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to be mounted into your pod.

__Important: You must have your own Quobyte setup running with the volumes created
before you can use it__

See the [Quobyte example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/quobyte) for more details.

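A minimal sketch; the registry address and volume name are placeholders for your own Quobyte deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-quobyte
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-quobyte
      name: quobyte-volume
  volumes:
  - name: quobyte-volume
    quobyte:
      # "registry" is the Quobyte registry in host:port form;
      # the volume must already exist.
      registry: registry.example.com:7861
      volume: testVolume
      readOnly: true
```
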
## Using subPath
|
||||
|
||||
Sometimes, it is useful to share one volume for multiple uses in a single pod. The `volumeMounts.subPath`
|
||||
property can be used to specify a sub-path inside the referenced volume instead of its root.
|
||||
|
||||
Here is an example of a pod with a LAMP stack (Linux Apache Mysql PHP) using a single, shared volume.
|
||||
The HTML contents are mapped to its `html` folder, and the databases will be stored in its `mysql` folder:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: my-lamp-site
|
||||
spec:
|
||||
containers:
|
||||
- name: mysql
|
||||
image: mysql
|
||||
volumeMounts:
|
||||
- mountPath: /var/lib/mysql
|
||||
name: site-data
|
||||
subPath: mysql
|
||||
- name: php
|
||||
image: php
|
||||
volumeMounts:
|
||||
- mountPath: /var/www/html
|
||||
name: site-data
|
||||
subPath: html
|
||||
volumes:
|
||||
- name: site-data
|
||||
persistentVolumeClaim:
|
||||
claimName: my-lamp-site-data
|
||||
```
|
||||
|
||||
## Resources
|
||||
|
||||
The storage media (Disk, SSD, etc.) of an `emptyDir` volume is determined by the
|
||||
medium of the filesystem holding the kubelet root dir (typically
|
||||
`/var/lib/kubelet`). There is no limit on how much space an `emptyDir` or
|
||||
`hostPath` volume can consume, and no isolation between containers or between
|
||||
pods.
|
||||
|
||||
In the future, we expect that `emptyDir` and `hostPath` volumes will be able to
|
||||
request a certain amount of space using a [resource](/docs/user-guide/compute-resources)
|
||||
specification, and to select the type of media to use, for clusters that have
|
||||
several media types.