Merge branch 'master' into master

commit cbc897a320
@@ -11,7 +11,6 @@
 {% if item.path %}
 {% assign path = item.path %}
 {% assign title = item.title %}
-{% assign target = " target='_blank'" %}
 {% else %}
 {% assign page = site.pages | where: "path", item | first %}
 {% assign title = page.title %}
@@ -20,7 +19,7 @@
 {% endcapture %}

 {% if path %}
-<a class="item" data-title="{{ title }}" href="{{ path }}"{{ target }}></a>
+<a class="item" data-title="{{ title }}" href="{{ path }}"></a>
 {% endif %}
 {% endif %}
 {% endfor %}
@@ -24,7 +24,7 @@ following diagram:
 In a typical Kubernetes cluster, the API served on port 443. A TLS connection is
 established. The API server presents a certificate. This certificate is
 often self-signed, so `$USER/.kube/config` on the user's machine typically
-contains the root certficate for the API server's certificate, which when specified
+contains the root certificate for the API server's certificate, which when specified
 is used in place of the system default root certificates. This certificate is typically
 automatically written into your `$USER/.kube/config` when you create a cluster yourself
 using `kube-up.sh`. If the cluster has multiple users, then the creator needs to share
@@ -444,7 +444,7 @@ The script will generate three files: `ca.crt`, `server.crt`, and `server.key`.
 Finally, add the following parameters into API server start parameters:

 - `--client-ca-file=/srv/kubernetes/ca.crt`
-- `--tls-cert-file=/srv/kubernetes/server.cert`
+- `--tls-cert-file=/srv/kubernetes/server.crt`
 - `--tls-private-key-file=/srv/kubernetes/server.key`

 #### easyrsa
@@ -468,7 +468,7 @@ Finally, add the following parameters into API server start parameters:
 1. Fill in and add the following parameters into the API server start parameters:

 --client-ca-file=/yourdirectory/ca.crt
---tls-cert-file=/yourdirectory/server.cert
+--tls-cert-file=/yourdirectory/server.crt
 --tls-private-key-file=/yourdirectory/server.key

 #### openssl
@@ -330,7 +330,7 @@ roleRef:

 Finally a `ClusterRoleBinding` may be used to grant permissions in all
 namespaces. The following `ClusterRoleBinding` allows any user in the group
-"manager" to read secrets in any namepsace.
+"manager" to read secrets in any namespace.

 ```yaml
 # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
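The YAML that this hunk introduces is cut off by the diff context. A binding of this kind generally has the shape below; this is a sketch written against the stable RBAC schema, with an assumed `secret-reader` ClusterRole and illustrative names rather than the actual file contents (the release this diff targets served RBAC from an alpha/beta API group):

```yaml
# Sketch: let anyone in the "manager" group read secrets in any namespace,
# via an existing ClusterRole (here assumed to be named "secret-reader").
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```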
@@ -92,7 +92,7 @@ an extended period of time (10min but it may change in the future).
 Cluster autoscaler is configured per instance group (GCE) or node pool (GKE).

 If you are using GCE then you can either enable it while creating a cluster with kube-up.sh script.
-To configure cluser autoscaler you have to set 3 environment variables:
+To configure cluster autoscaler you have to set 3 environment variables:

 * `KUBE_ENABLE_CLUSTER_AUTOSCALER` - it enables cluster autoscaler if set to true.
 * `KUBE_AUTOSCALER_MIN_NODES` - minimum number of nodes in the cluster.
@@ -89,7 +89,7 @@ Mitigations:
 - Mitigates: Apiserver VM shutdown or apiserver crashing
 - Mitigates: Supporting services VM shutdown or crashes

-- Action use IaaS providers reliable storage (e.g GCE PD or AWS EBS volume) for VMs with apiserver+etcd
+- Action use IaaS providers reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
 - Mitigates: Apiserver backing storage lost

 - Action: Use (experimental) [high-availability](/docs/admin/high-availability) configuration
@@ -7,20 +7,20 @@ title: Daemon Sets
 * TOC
 {:toc}

-## What is a Daemon Set?
+## What is a DaemonSet?

-A _Daemon Set_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the
+A _DaemonSet_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the
 cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage
-collected. Deleting a Daemon Set will clean up the pods it created.
+collected. Deleting a DaemonSet will clean up the pods it created.

-Some typical uses of a Daemon Set are:
+Some typical uses of a DaemonSet are:

 - running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
 - running a logs collection daemon on every node, such as `fluentd` or `logstash`.
 - running a node monitoring daemon on every node, such as [Prometheus Node Exporter](
 https://github.com/prometheus/node_exporter), `collectd`, New Relic agent, or Ganglia `gmond`.

-In a simple case, one Daemon Set, covering all nodes, would be used for each type of daemon.
+In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon.
 A more complex setup might use multiple DaemonSets would be used for a single type of daemon,
 but with different flags and/or different memory and cpu requests for different hardware types.

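For readers skimming this diff, a minimal DaemonSet manifest of the kind discussed above might look like the following sketch; the names and image are illustrative assumptions, and `extensions/v1beta1` is the API group DaemonSets were served from around this release:

```yaml
# Sketch: run one log-collection pod on every node (or on a labelled subset).
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-logging
spec:
  template:
    metadata:
      labels:
        app: fluentd-logging
    spec:
      # Optional: limit the daemon to nodes carrying a particular label.
      nodeSelector:
        role: logging
      containers:
      - name: fluentd
        image: fluentd:latest   # illustrative image
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
```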
@@ -74,7 +74,7 @@ a node for testing.

 If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
 create pods on nodes which match that [node
-selector](/docs/user-guide/node-selection/).
+selector](/docs/user-guide/node-selection/).
 If you specify a `scheduler.alpha.kubernetes.io/affinity` annotation in `.spec.template.metadata.annotations`,
 then DaemonSet controller will create pods on nodes which match that [node affinity](../../user-guide/node-selection/#alpha-feature-in-kubernetes-v12-node-affinity).

@@ -88,18 +88,17 @@ created by the Daemon controller have the machine already selected (`.spec.nodeN
 when the pod is created, so it is ignored by the scheduler). Therefore:

 - the [`unschedulable`](/docs/admin/node/#manual-node-administration) field of a node is not respected
-by the daemon set controller.
-- daemon set controller can make pods even when the scheduler has not been started, which can help cluster
+by the DaemonSet controller.
+- DaemonSet controller can make pods even when the scheduler has not been started, which can help cluster
 bootstrap.

 ## Communicating with DaemonSet Pods

 Some possible patterns for communicating with pods in a DaemonSet are:

-- **Push**: Pods in the Daemon Set are configured to send updates to another service, such
+- **Push**: Pods in the DaemonSet are configured to send updates to another service, such
 as a stats database. They do not have clients.
-- **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable
-via the node IPs. Clients knows the list of nodes ips somehow, and know the port by convention.
+- **NodeIP and Known Port**: Pods in the DaemonSet use a `hostPort`, so that the pods are reachable via the node IPs. Clients know the list of nodes ips somehow, and know the port by convention.
 - **DNS**: Create a [headless service](/docs/user-guide/services/#headless-services) with the same pod selector,
 and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
 DNS.
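For the **DNS** pattern in the excerpt above, the headless service is an ordinary Service with `clusterIP: None` and the same selector as the DaemonSet's pods; a sketch with assumed names and port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fluentd-logging
spec:
  clusterIP: None          # headless: DNS resolves directly to the pod IPs
  selector:
    app: fluentd-logging   # must match the DaemonSet's pod labels
  ports:
  - port: 24224            # illustrative port
```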
@@ -126,11 +125,11 @@ You cannot update a DaemonSet.

 Support for updating DaemonSets and controlled updating of nodes is planned.

-## Alternatives to Daemon Set
+## Alternatives to DaemonSet

 ### Init Scripts

-It is certainly possible to run daemon processes by directly starting them on a node (e.g using
+It is certainly possible to run daemon processes by directly starting them on a node (e.g. using
 `init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to
 running such processes via a DaemonSet:

@@ -145,9 +144,9 @@ running such processes via a DaemonSet:
 ### Bare Pods

 It is possible to create pods directly which specify a particular node to run on. However,
-a Daemon Set replaces pods that are deleted or terminated for any reason, such as in the case of
+a DaemonSet replaces pods that are deleted or terminated for any reason, such as in the case of
 node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, you should
-use a Daemon Set rather than creating individual pods.
+use a DaemonSet rather than creating individual pods.

 ### Static Pods

@@ -159,7 +158,7 @@ in cluster bootstrapping cases. Also, static pods may be deprecated in the futu

 ### Replication Controller

-Daemon Set are similar to [Replication Controllers](/docs/user-guide/replication-controller) in that
+DaemonSet are similar to [Replication Controllers](/docs/user-guide/replication-controller) in that
 they both create pods, and those pods have processes which are not expected to terminate (e.g. web servers,
 storage servers).

@@ -77,7 +77,7 @@ For example, a pod with ip `1.2.3.4` in the namespace `default` with a DNS name
 Currently when a pod is created, its hostname is the Pod's `metadata.name` value.

 With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be.
-The Pod annotation, if specified, takes precendence over the Pod's name, to be the hostname of the pod.
+The Pod annotation, if specified, takes precedence over the Pod's name, to be the hostname of the pod.
 For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name".

 With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the
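As a concrete illustration of the annotation mechanism described above, a pod requesting the hostname `my-pod-name` carries the annotation in its metadata; a sketch with an assumed pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  annotations:
    # Takes precedence over metadata.name when setting the pod's hostname.
    pod.beta.kubernetes.io/hostname: my-pod-name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```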
@@ -26,7 +26,7 @@ federation-apiserver
 --admission-control-config-file string File with admission control configuration.
 --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
 --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
---apiserver-count int The number of apiservers running in the cluster. (default 1)
+--apiserver-count int The number of apiservers running in the cluster. Must be a positive number. (default 1)
 --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
 --audit-log-maxbackup int The maximum number of old audit log files to retain.
 --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated. Defaults to 100MB.
@@ -27,7 +27,7 @@ kube-apiserver
 --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
 --allow-privileged If true, allow privileged containers.
 --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true)
---apiserver-count int The number of apiservers running in the cluster. (default 1)
+--apiserver-count int The number of apiservers running in the cluster. Must be a positive number. (default 1)
 --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
 --audit-log-maxbackup int The maximum number of old audit log files to retain.
 --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated. Defaults to 100MB.
@@ -242,7 +242,7 @@ Once the cluster is up, you can grab the admin credentials from the master node
 ## Environment variables

 There are some environment variables that modify the way that `kubeadm` works. Most users will have no need to set these.
-These enviroment variables are a short-term solution, eventually they will be integrated in the kubeadm configuration file.
+These environment variables are a short-term solution, eventually they will be integrated in the kubeadm configuration file.

 | Variable | Default | Description |
 | --- | --- | --- |
@@ -9,7 +9,7 @@ title: TLS bootstrapping

 ## Overview

-This document describes how to set up TLS client certificate boostrapping for kubelets.
+This document describes how to set up TLS client certificate bootstrapping for kubelets.
 Kubernetes 1.4 introduces an experimental API for requesting certificates from a cluster-level
 Certificate Authority (CA). The first supported use of this API is the provisioning of TLS client
 certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439)
@@ -17,7 +17,7 @@ and progress on the feature is being tracked as [feature #43](https://github.com

 ## apiserver configuration

-You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet boostrap-specific group.
+You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet bootstrap-specific group.
 This group will later be used in the controller-manager configuration to scope approvals in the default approval
 controller. As this feature matures, you should ensure tokens are bound to an RBAC policy which limits requests
 using the bootstrap token to only be able to make requests related to certificate provisioning. When RBAC policy
@@ -78,9 +78,9 @@ kubelet
 --experimental-allowed-unsafe-sysctls stringSlice Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk.
 --experimental-bootstrap-kubeconfig string <Warning: Experimental feature> Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by --kubeconfig. The certificate and key file will be stored in the directory pointed by --cert-dir.
 --experimental-cgroups-per-qos Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created.
---experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required componenets (binaries, etc.) before performing the mount
---experimental-cri [Experimental] Enable the Container Runtime Interface (CRI) integration. If --container-runtime is set to "remote", Kubelet will communicate with the runtime/image CRI server listening on the endpoint specified by --remote-runtime-endpoint/--remote-image-endpoint. If --container-runtime is set to "docker", Kubelet will launch an in-process CRI server on behalf of docker, and communicate over a default endpoint.
---experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary opton to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6.
+--experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount
+--experimental-cri [Experimental] Enable the Container Runtime Interface (CRI) integration. If --container-runtime is set to "remote", Kubelet will communicate with the runtime/image CRI server listening on the endpoint specified by --remote-runtime-endpoint/--remote-image-endpoint. If --container-runtime is set to "docker", Kubelet will launch a in-process CRI server on behalf of docker, and communicate over a default endpoint.
+--experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary option to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6.
 --experimental-kernel-memcg-notification If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling.
 --experimental-mounter-path string [Experimental] Path of mounter binary. Leave empty to use the default mount.
 --experimental-nvidia-gpus int32 Number of NVIDIA GPU devices on this node. Only 0 (default) and 1 are currently supported.
@@ -184,7 +184,7 @@ Note that this pod specifies explicit resource *limits* and *requests* so it did
 default values.

 Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node
-that runs the container unless the administrator deploys the kubelet with the folllowing flag:
+that runs the container unless the administrator deploys the kubelet with the following flag:

 ```shell
 $ kubelet --help
@@ -52,7 +52,7 @@ Second, decide how many clusters should be able to be unavailable at the same ti
 the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.

 If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
-you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g you want to ensure low latency for all
+you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g. you want to ensure low latency for all
 users in the event of a cluster failure), then you need to have `R * (U + 1)` clusters
 (`U + 1` in each of `R` regions). In any case, try to put each cluster in a different zone.

@@ -49,7 +49,7 @@ either `kubectl` or addon pod.

 ### Kubectl

-This is the recommanded way to start node problem detector outside of GCE. It
+This is the recommended way to start node problem detector outside of GCE. It
 provides more flexible management, such as overwriting the default
 configuration to fit it into your environment or detect
 customized node problems.
@@ -238,7 +238,7 @@ implement a new translator for a new log format.

 ## Caveats

-It is recommanded to run the node problem detector in your cluster to monitor
+It is recommended to run the node problem detector in your cluster to monitor
 the node health. However, you should be aware that this will introduce extra
 resource overhead on each node. Usually this is fine, because:

@@ -20,7 +20,15 @@ architecture design doc for more details.

 ## Node Status

-A node's status is comprised of the following information.
+A node's status contains the following information:

+* [Addresses](#Addresses)
+* ~~[Phase](#Phase)~~ **deprecated**
+* [Condition](#Condition)
+* [Capacity](#Capacity)
+* [Info](#Info)
+
+Each section is described in detail below.
+
 ### Addresses

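For orientation, the items in the new list above correspond to fields under a node's `status`; trimmed to its rough shape, it reads like the sketch below (values are invented for illustration, not real output):

```yaml
# Abbreviated shape of a Node object's status as returned by the API.
status:
  addresses:
  - type: InternalIP
    address: 10.240.0.2
  - type: Hostname
    address: node-1
  conditions:
  - type: Ready
    status: "True"
  capacity:
    cpu: "2"
    memory: 7679792Ki
    pods: "110"
  nodeInfo:
    kubeletVersion: v1.5.1
    osImage: Ubuntu 16.04.1 LTS
```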
@@ -330,7 +330,7 @@ for eviction. Instead `DaemonSet` should ideally launch `Guaranteed` pods.
 `kubelet` has been freeing up disk space on demand to keep the node stable.

 As disk based eviction matures, the following `kubelet` flags will be marked for deprecation
-in favor of the simpler configuation supported around eviction.
+in favor of the simpler configuration supported around eviction.

 | Existing Flag | New Flag |
 | ------------- | -------- |
@@ -50,7 +50,7 @@ It's enabled by default. It can be disabled:

 ### Marking add-on as critical

-To be critical an add-on has to run in `kube-system` namespace (cofigurable via flag)
+To be critical an add-on has to run in `kube-system` namespace (configurable via flag)
 and have the following annotations specified:

 * `scheduler.alpha.kubernetes.io/critical-pod` set to empty string
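In manifest form, the first required annotation shown in this excerpt (the original list continues beyond the hunk) sits in the pod's metadata; a sketch with assumed names and a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-addon             # illustrative name
  namespace: kube-system     # critical add-ons run here (configurable via flag)
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
  containers:
  - name: addon
    image: gcr.io/google_containers/pause   # placeholder image
```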
@@ -9,7 +9,7 @@ assignees:

 This document describes how sysctls are used within a Kubernetes cluster.

-## What is a _Sysctl_?
+## What is a Sysctl?

 In Linux, the sysctl interface allows an administrator to modify kernel
 parameters at runtime. Parameters are available via the `/proc/sys/` virtual
@@ -207,7 +207,7 @@ Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VM

 ## Cluster Features and Architecture

-We configue the Kubernetes cluster with the following features:
+We configure the Kubernetes cluster with the following features:

 * KubeDNS: DNS resolution and service discovery
 * Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling.
@@ -218,7 +218,7 @@ We configue the Kubernetes cluster with the following features:
 We use the following to create the kubernetes cluster:

 * Kubernetes 1.1.7
-* Unbuntu 14.04
+* Ubuntu 14.04
 * Flannel 0.5.4
 * Docker 1.9.1-0~trusty
 * Etcd 2.2.2
@@ -57,7 +57,7 @@ kops uses DNS for discovery, both inside the cluster and so that you can reach t
 from clients.

 kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will
-no longer get your clusters confused, you can share clusters with your colleagues unambigiously,
+no longer get your clusters confused, you can share clusters with your colleagues unambiguously,
 and you can reach them without relying on remembering an IP address.

 You can, and probably should, use subdomains to divide your clusters. As our example we will use
@@ -31,4 +31,4 @@ There are two main components to be aware of:
 - One `calico-node` Pod runs on each node in your cluster, and enforces network policy on the traffic to/from Pods on that machine by configuring iptables.
 - The `calico-policy-controller` Pod reads policy and label information from the Kubernetes API and configures Calico appropriately.

-Once your cluster is running, you can follow the [NetworkPolicy gettting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
+Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
@@ -8,4 +8,4 @@ The [Weave Net Addon](https://www.weave.works/docs/net/latest/kube-addon/) for K

 This component automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces, and configures `iptables` rules to allow or block traffic as directed by the policies.

-Once you have installed the Weave Net Addon you can follow the [NetworkPolicy gettting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
+Once you have installed the Weave Net Addon you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
@@ -163,7 +163,7 @@ balancer. Specifically:
 Configure your service with the NodePort option. For example, this
 service uses the NodePort option. All Kubernetes nodes will listen on
 a port and forward network traffic to any pods in the service. In this
-case, Kubernets will choose a random port, but it will be the same
+case, Kubernetes will choose a random port, but it will be the same
 port on all nodes.

 ```yaml
@@ -69,7 +69,7 @@ accomplished in two ways:

 - **Using an overlay network**
 - An overlay network obscures the underlying network architecture from the
-pod network through traffic encapsulation (e.g vxlan).
+pod network through traffic encapsulation (e.g. vxlan).
 - Encapsulation reduces performance, though exactly how much depends on your solution.
 - **Without an overlay network**
 - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
@@ -180,7 +180,7 @@ we recommend that you run these as containers, so you need an image to be built.
 You have several choices for Kubernetes images:

 - Use images hosted on Google Container Registry (GCR):
-- e.g `gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest
+- e.g. `gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest
 release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
 - Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy.
 - The [hyperkube](https://releases.k8s.io/{{page.githubbranch}}/cmd/hyperkube) binary is an all in one binary
@@ -93,7 +93,7 @@ Note that each controller can host multiple Kubernetes clusters in a given cloud

 ## Launch a Kubernetes cluster

-The following command will deploy the intial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but
+The following command will deploy the initial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but

 ```shell
 juju deploy canonical-kubernetes
@@ -206,7 +206,7 @@ Congratulations, you've now set up a Kubernetes cluster!
 Want larger Kubernetes nodes? It is easy to request different sizes of cloud
 resources from Juju by using **constraints**. You can increase the amount of
 CPU or memory (RAM) in any of the systems requested by Juju. This allows you
-to fine tune th Kubernetes cluster to fit your workload. Use flags on the
+to fine tune the Kubernetes cluster to fit your workload. Use flags on the
 bootstrap command or as a separate `juju constraints` command. Look to the
 [Juju documentation for machine](https://jujucharms.com/docs/2.0/charms-constraints)
 details.
@@ -122,7 +122,7 @@ through `FLANNEL_BACKEND` and `FLANNEL_OTHER_NET_CONFIG`, as explained in `clust
 The default setting for `ADMISSION_CONTROL` is right for the latest
 release of Kubernetes, but if you choose an earlier release then you
 might want a different setting. See
-[the admisson control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use)
+[the admission control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use)
 for the recommended settings for various releases.

 **Note:** When deploying, master needs to be connected to the Internet to download the necessary files.
@@ -5,24 +5,4 @@ assignees:
 title: Report a Security Vulnerability
 ---

-If you believe you have discovered a vulnerability or a have a security incident to report, please follow the steps below. This applies to Kubernetes releases v1.0 or later.
-
-To watch for security and major API announcements, please join our [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group.
-
-## Reporting a security issue
-
-To report an issue, please:
-
-- Submit a bug report [here](http://goo.gl/vulnz).
-- Select 'I want to report a technical security bug in a Google product (SQLi, XSS, etc.).'?
-- Select 'Other'? as the Application Type.
-- Under reproduction steps, please additionally include
-- the words "Kubernetes Security issue"
-- Description of the issue
-- Kubernetes release (e.g. output of `kubectl version` command, which includes server version.)
-- Environment setup (e.g. which "Getting Started Guide" you followed, if any; what node operating system used; what service or software creates your virtual machines, if any)
-
-An online submission will have the fastest response; however, if you prefer email, please send mail to security@google.com. If you feel the need, please use the [PGP public key](https://services.google.com/corporate/publickey.txt) to encrypt communications.
-
-
-
+This document has moved to [http://kubernetes.io/security](http://kubernetes.io/security).
@@ -52,7 +52,7 @@ load-balanced access to an application running in a cluster.
 NAME DESIRED CURRENT AGE
 hello-world-2189936611 2 2 12m

-1. Create a Serivice object that exposes the replica set:
+1. Create a Service object that exposes the replica set:

 kubectl expose rs <your-replica-set-name> --type="LoadBalancer" --name="example-service"

@@ -32,7 +32,7 @@ title: Using a Service to Expose Your App

 <p>This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:</p>
 <ul>
-<li>LoadBalancer - provides a public IP address (what you would typically use when you run Kubernetes on GKE or AWS)</li>
+<li>LoadBalancer - provides a public IP address (what you would typically use when you run Kubernetes on GCP or AWS)</li>
 <li>NodePort - exposes the Service on the same port on each Node of the cluster using NAT (available on all Kubernetes clusters, and in Minikube)</li>
 </ul>
 </div>
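To make the two options in that excerpt concrete, the exposure mode of a Service is selected by its `type` field; a sketch with assumed labels and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer   # change to NodePort to expose a port on every node instead
  selector:
    app: example
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8080   # port the pods listen on
```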
@@ -122,7 +122,7 @@ launching `web-1`. In fact, `web-1` is not launched until `web-0` is
 [Running and Ready](/docs/user-guide/pod-states).

 ### Pods in a StatefulSet
-Unlike Pods in other controllers, the Pods in a StatefulSet have a unqiue
+Unlike Pods in other controllers, the Pods in a StatefulSet have a unique
 ordinal index and a stable network identity.

 #### Examining the Pod's Ordinal Index
@@ -177,7 +177,7 @@ Name: web-1.nginx
 Address 1: 10.244.2.6
 ```

-The CNAME of the headless serivce points to SRV records (one for each Pod that
+The CNAME of the headless service points to SRV records (one for each Pod that
 is Running and Ready). The SRV records point to A record entries that
 contain the Pods' IP addresses.

@@ -180,7 +180,7 @@ replicating.
 In general, when a new Pod joins the set as a slave, it must assume the MySQL
 master might already have data on it. It also must assume that the replication
 logs might not go all the way back to the beginning of time.
-These conservative assumptions are the key to allowing a running StatefulSet
+These conservative assumptions are the key to allow a running StatefulSet
 to scale up and down over time, rather than being fixed at its initial size.

 The second Init Container, named `clone-mysql`, performs a clone operation on
@@ -4,7 +4,7 @@ title: Exposing an External IP Address to Access an Application in a Cluster

 {% capture overview %}

-This page shows how to create a Kubernetes Service object that exposees an
+This page shows how to create a Kubernetes Service object that exposes an
 external IP address.

 {% endcapture %}
@@ -295,7 +295,7 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el

 Kubernetes also supports Federated Services, which can span multiple
 clusters and cloud providers, to provide increased availability,
-bettern fault tolerance and greater scalability for your services. See
+better fault tolerance and greater scalability for your services. See
 the [Federated Services User Guide](/docs/user-guide/federation/federated-services/)
 for further information.

@@ -60,7 +60,9 @@ This hook is called immediately before a container is terminated. No parameters

 ### Hook Handler Execution

-When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).
+When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. This means that for a `PostStart` hook, the container entrypoint and hook will fire asynchronously. However, if the hook takes a while to run or hangs, the container will never reach a "running" state. The behavior is similar for a `PreStop` hook. If the hook hangs during execution, the Pod phase will stay in a "running" state and never reach "failed." If a `PostStart` or `PreStop` hook fails, it will kill the container.
+
+Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).

 ### Hook delivery guarantees

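For reference, hook handlers of the kind discussed above are declared on a container's `lifecycle` field; a sketch using exec handlers, with an assumed image and commands:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Runs right after the container is created; a failure here
          # kills the container, as described above.
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          # Runs before the container is terminated.
          command: ["/bin/sh", "-c", "nginx -s quit"]
```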
@@ -81,4 +83,23 @@ Hook handlers are the way that hooks are surfaced to containers. Containers ca

 * HTTP - Executes an HTTP request against a specific endpoint on the container.

-[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
+[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
+
+### Debugging Hook Handlers
+
+Currently, the logs for a hook handler are not exposed in the pod events. If your handler fails for some reason, it will emit an event. For `PostStart`, this is the `FailedPostStartHook` event. For `PreStop` this is the `FailedPreStopHook` event. You can see these events by running `kubectl describe pod <pod_name>`. An example output of events from runing this command is below:
+
+```
+Events:
+FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+--------- -------- ----- ---- ------------- -------- ------ -------
+1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
+1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
+1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
+1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
+1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
+38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
+37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
+38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
+1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
+```
@@ -9,7 +9,7 @@ title: Cron Jobs
 * TOC
 {:toc}

-## What is a Cron Job?
+## What is a cron job?

 A _Cron Job_ manages time based [Jobs](/docs/user-guide/jobs/), namely:

@@ -86,24 +86,56 @@ After creating or updating a Deployment, you would want to confirm whether it su

 ```shell
 $ kubectl rollout status deployment/nginx-deployment
-deployment nginx-deployment successfully rolled out
+deployment "nginx-deployment" successfully rolled out
 ```

 This verifies the Deployment's `.status.observedGeneration` >= `.metadata.generation`, and its up-to-date replicas
-(`.status.updatedReplicas`) matches the desired replicas (`.spec.replicas`) to determine if the rollout succeeded.
-If the rollout is still in progress, it watches for Deployment status changes and prints related messages.
-
-Note that it's impossible to know whether a Deployment will ever succeed, so if the above command doesn't return success,
-you'll need to timeout and give up at some point.
-
-Additionally, if you set `.spec.minReadySeconds`, you would also want to check if the available replicas (`.status.availableReplicas`) matches the desired replicas too.
+(`.status.updatedReplicas`) matches the desired replicas (`.spec.replicas`) to determine if the rollout succeeded.
+It also expects that the available replicas running (`.spec.availableReplicas`) will be at least the minimum required
+based on the Deployment strategy. If the rollout is still in progress, it watches for Deployment status changes and
+prints related messages.
+
+```shell
+$ kubectl get deployments
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+nginx-deployment 3 3 3 3 20s
+$ kubectl rollout status deployment/nginx-deployment
+Waiting for rollout to finish: 2 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 2 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 2 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 3 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 3 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 5 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 7 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 9 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 9 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 9 out of 10 new replicas have been updated...
+Waiting for rollout to finish: 1 old replicas are pending termination...
+Waiting for rollout to finish: 1 old replicas are pending termination...
+Waiting for rollout to finish: 1 old replicas are pending termination...
+Waiting for rollout to finish: 9 of 10 updated replicas are available...
+deployment "nginx-deployment" successfully rolled out
+```
+
+For more information about the status of a Deployment [read more here](#deployment-status).


 ## Updating a Deployment

 **Note:** a Deployment's rollout is triggered if and only if the Deployment's pod template (i.e. `.spec.template`) is changed,
@@ -129,7 +161,7 @@ To see its rollout status, simply run:
 ```shell
 $ kubectl rollout status deployment/nginx-deployment
 Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
-deployment nginx-deployment successfully rolled out
+deployment "nginx-deployment" successfully rolled out
 ```

 After the rollout succeeds, you may want to `get` the Deployment:
@@ -244,12 +276,12 @@ deployment "nginx-deployment" image updated

 The rollout will be stuck.

-```
+```shell
 $ kubectl rollout status deployments nginx-deployment
 Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
 ```

-Press Ctrl-C to stop the above rollout status watch.
+Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, [read more here](#deployment-status).

 You will also see that both the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2.

@@ -413,7 +445,7 @@ $ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=
 deployment "nginx-deployment" autoscaled
 ```

-RollingUpdate Deployments support running multitple versions of an application at the same time. When you
+RollingUpdate Deployments support running multiple versions of an application at the same time. When you
 or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
 or paused), then the Deployment controller will balance the additional replicas in the existing active
 ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.
@@ -549,7 +581,7 @@ updates you've requested have been completed.

 You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed successfully, `kubectl rollout status` returns a zero exit code.

-```
+```shell
 $ kubectl rollout status deploy/nginx
 Waiting for rollout to finish: 2 of 3 updated replicas are available...
 deployment "nginx" successfully rolled out
@@ -568,7 +600,7 @@ Your Deployment may get stuck trying to deploy its newest ReplicaSet without eve
 * Limit ranges
 * Application runtime misconfiguration

-One way you can detect this condition is to specify specify a deadline parameter in your Deployment spec: ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled.
+One way you can detect this condition is to specify a deadline parameter in your Deployment spec: ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled.

 The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report lack of progress for a Deployment after 10 minutes:

@@ -594,7 +626,7 @@ You may experience transient errors with your Deployments, either due to a low t
 of error that can be treated as transient. For example, let's suppose you have insufficient quota. If you describe the Deployment
 you will notice the following section:

-```
+```shell
 $ kubectl describe deployment nginx-deployment
 <...>
 Conditions:
@@ -667,7 +699,7 @@ required new replicas are available (see the Reason of the condition for the par

 You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status` returns a non-zero exit code if the Deployment has exceeded the progression deadline.

-```
+```shell
 $ kubectl rollout status deploy/nginx
 Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
 error: deployment "nginx" exceeded its progress deadline
@@ -24,7 +24,7 @@ general.

 ## Overview

-Events in federation control plane (refered to as "federation events" in
+Events in federation control plane (referred to as "federation events" in
 this guide) are very similar to the traditional Kubernetes
 Events providing the same functionality.
 Federation Events are stored only in federation control plane and are not passed on to the underlying kubernetes clusters.
@@ -232,7 +232,7 @@ due to caching by intermediate DNS servers.
 The above set of DNS records is automatically kept in sync with the
 current state of health of all service shards globally by the
 Federated Service system. DNS resolver libraries (which are invoked by
-all clients) automatically traverse the hiearchy of 'CNAME' and 'A'
+all clients) automatically traverse the hierarchy of 'CNAME' and 'A'
 records to return the correct set of healthy IP addresses. Clients can
 then select any one of the returned addresses to initiate a network
 connection (and fail over automatically to one of the other equivalent
@@ -295,7 +295,7 @@ availability zones and regions other than the ones local to a Pod by
 specifying the appropriate DNS names explicitly, and not relying on
 automatic DNS expansion. For example,
 "nginx.mynamespace.myfederation.svc.europe-west1.example.com" will
-resolve to all of the currently healthy service shards in europe, even
+resolve to all of the currently healthy service shards in Europe, even
 if the Pod issuing the lookup is located in the U.S., and irrespective
 of whether or not there are healthy shards of the service in the U.S.
 This is useful for remote monitoring and other similar applications.
@@ -316,7 +316,7 @@ us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.ex
 nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com.
 ```
 That way your clients can always use the short form on the left, and
-always be automatcally routed to the closest healthy shard on their
+always be automatically routed to the closest healthy shard on their
 home continent. All of the required failover is handled for you
 automatically by Kubernetes Cluster Federation. Future releases will
 improve upon this even further.
@@ -8,7 +8,7 @@ title: Jobs
 * TOC
 {:toc}

-## What is a job?
+## What is a Job?

 A _job_ creates one or more pods and ensures that a specified number of them successfully terminate.
 As pods successfully complete, the _job_ tracks the successful completions. When a specified number
@@ -166,7 +166,7 @@ parallelism, for a variety or reasons:
 - If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.),
 then there may be fewer pods than requested.
 - The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
-- When a pod is gracefully shutdown, it make take time to stop.
+- When a pod is gracefully shutdown, it takes time to stop.

 ## Handling Pod and Container Failures

@@ -11,7 +11,7 @@ Drain node in preparation for maintenance

 Drain node in preparation for maintenance.

-The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviciton (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force.
+The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviction (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force.

 'drain' waits for graceful termination. You should not operate on the machine until the command completes.

@@ -497,7 +497,7 @@ parameters:
 ```

 * `quobyteAPIServer`: API Server of Quobyte in the format `http(s)://api-server:7860`
-* `registry`: Quobyte registry to use to mount the volume. You can specifiy the registry as ``<host>:<port>`` pair or if you want to specify multiple registries you just have to put a comma between them e.q. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``. The host can be an IP address or if you have a working DNS you can also provide the DNS names.
+* `registry`: Quobyte registry to use to mount the volume. You can specify the registry as ``<host>:<port>`` pair or if you want to specify multiple registries you just have to put a comma between them e.q. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``. The host can be an IP address or if you have a working DNS you can also provide the DNS names.
 * `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
 * `adminSecretName`: secret that holds information about the Quobyte user and the password to authenticate agains the API server. The provided secret must have type "kubernetes.io/quobyte", e.g. created in this way:
 ```
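Pulling the parameters in this excerpt together, a Quobyte storage class would be shaped roughly as follows; the provisioner name, API version and all values here are assumptions for illustration rather than content from the diff:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/quobyte
parameters:
  quobyteAPIServer: "http://quobyte-api.example.com:7860"
  registry: "registry.example.com:7861"
  adminSecretNamespace: "kube-system"
  adminSecretName: "quobyte-admin-secret"
```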
@@ -6,7 +6,7 @@ title: Pod Security Policies

 Objects of type `podsecuritypolicy` govern the ability
 to make requests on a pod that affect the `SecurityContext` that will be
-applied to a pod and container.
+applied to a pod and container.

 See [PodSecurityPolicy proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/security-context-constraints.md) for more information.

@@ -10,7 +10,7 @@ title: Pods
 _pods_ are the smallest deployable units of computing that can be created and
 managed in Kubernetes.

-## What is a pod?
+## What is a Pod?

 A _pod_ (as in a pod of whales or pea pod) is a group of one or more containers
 (such as Docker containers), the shared storage for those containers, and
@@ -159,7 +159,7 @@ reasons:
 * This is uncommon and would have to be done by someone with root access to nodes.
 * All containers in a pod are terminated, requiring a restart (RestartPolicyAlways) AND the record of init container completion has been lost due to garbage collection.

-## Support and compatibilty
+## Support and compatibility

 A cluster with Kubelet and Apiserver version 1.4.0 or greater supports init
 containers with the beta annotations. Support varies for other combinations of
@@ -9,17 +9,17 @@ title: Replica Sets
 * TOC
 {:toc}

-## What is a Replica Set?
+## What is a ReplicaSet?

-Replica Set is the next-generation Replication Controller. The only difference
-between a _Replica Set_ and a
+ReplicaSet is the next-generation Replication Controller. The only difference
+between a _ReplicaSet_ and a
 [_Replication Controller_](/docs/user-guide/replication-controller/) right now is
-the selector support. Replica Set supports the new set-based selector requirements
+the selector support. ReplicaSet supports the new set-based selector requirements
 as described in the [labels user guide](/docs/user-guide/labels/#label-selectors)
 whereas a Replication Controller only supports equality-based selector requirements.

 Most [`kubectl`](/docs/user-guide/kubectl/) commands that support
-Replication Controllers also support Replica Sets. One exception is the
+Replication Controllers also support ReplicaSets. One exception is the
 [`rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update/) command. If
 you want the rolling update functionality please consider using Deployments
 instead. Also, the
@@ -27,21 +27,22 @@ instead. Also, the
 imperative whereas Deployments are declarative, so we recommend using Deployments
 through the [`rollout`](/docs/user-guide/kubectl/kubectl_rollout/) command.

-While Replica Sets can be used independently, today it's mainly used by
+While ReplicaSets can be used independently, today it's mainly used by
 [Deployments](/docs/user-guide/deployments/) as a mechanism to orchestrate pod
 creation, deletion and updates. When you use Deployments you don't have to worry
-about managing the Replica Sets that they create. Deployments own and manage
-their Replica Sets.
+about managing the ReplicaSets that they create. Deployments own and manage
+their ReplicaSets.

-## When to use a Replica Set?
+## When to use a ReplicaSet?

+A ReplicaSet ensures that a specified number of pod “replicas” are running at any given
+time. However, a Deployment is a higher-level concept that manages ReplicaSets and

-A Replica Set ensures that a specified number of pod "replicas" are running at any given
-time. However, a Deployment is a higher-level concept that manages Replica Sets and
 provides declarative updates to pods along with a lot of other useful features.
-Therefore, we recommend using Deployments instead of directly using Replica Sets, unless
+Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless
 you require custom update orchestration or don't require updates at all.

-This actually means that you may never need to manipulate Replica Set objects:
+This actually means that you may never need to manipulate ReplicaSet objects:
 use directly a Deployment and define your application in the spec section.

 ## Example
@ -49,7 +50,7 @@ use directly a Deployment and define your application in the spec section.
|
|||
{% include code.html language="yaml" file="replicasets/frontend.yaml" ghlink="/docs/user-guide/replicasets/frontend.yaml" %}

Saving this config into `frontend.yaml` and submitting it to a Kubernetes cluster should
create the defined Replica Set and the pods that it manages.
create the defined ReplicaSet and the pods that it manages.

```shell
$ kubectl create -f frontend.yaml

@ -76,18 +77,18 @@ frontend-dnjpy 1/1 Running 0 1m

frontend-qhloh 1/1 Running 0 1m
```

## Replica Set as an Horizontal Pod Autoscaler target
## ReplicaSet as an Horizontal Pod Autoscaler target

A Replica Set can also be a target for
A ReplicaSet can also be a target for
[Horizontal Pod Autoscalers (HPA)](/docs/user-guide/horizontal-pod-autoscaling/),
i.e. a Replica Set can be auto-scaled by an HPA. Here is an example HPA targeting
the Replica Set we created in the previous example.
i.e. a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
the ReplicaSet we created in the previous example.

{% include code.html language="yaml" file="replicasets/hpa-rs.yaml" ghlink="/docs/user-guide/replicasets/hpa-rs.yaml" %}

Saving this config into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
create the defined HPA that autoscales the target Replica Set depending on the CPU usage
create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
of the replicated pods.
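
The included `hpa-rs.yaml` is not rendered in this diff view; an HPA of the shape described here might look roughly like the following sketch (the name and thresholds are assumptions):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    # Point the autoscaler at the ReplicaSet created from frontend.yaml.
    apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  # Scale up when average CPU usage of the replicated pods exceeds 50%.
  targetCPUUtilizationPercentage: 50
```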

```shell

@ -8,30 +8,30 @@ title: Replication Controller

* TOC
{:toc}

## What is a replication controller?
## What is a ReplicationController?

A _replication controller_ ensures that a specified number of pod "replicas" are running at any one
time. In other words, a replication controller makes sure that a pod or homogeneous set of pods are
A _ReplicationController_ ensures that a specified number of pod "replicas" are running at any one
time. In other words, a ReplicationController makes sure that a pod or homogeneous set of pods are
always up and available.
If there are too many pods, it will kill some. If there are too few, the
replication controller will start more. Unlike manually created pods, the pods maintained by a
replication controller are automatically replaced if they fail, get deleted, or are terminated.
ReplicationController will start more. Unlike manually created pods, the pods maintained by a
ReplicationController are automatically replaced if they fail, get deleted, or are terminated.
For example, your pods get re-created on a node after disruptive maintenance such as a kernel upgrade.
For this reason, we recommend that you use a replication controller even if your application requires
only a single pod. You can think of a replication controller as something similar to a process supervisor,
but rather than individual processes on a single node, the replication controller supervises multiple pods
For this reason, we recommend that you use a ReplicationController even if your application requires
only a single pod. You can think of a ReplicationController as something similar to a process supervisor,
but rather than individual processes on a single node, the ReplicationController supervises multiple pods
across multiple nodes.

Replication Controller is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in
ReplicationController is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in
kubectl commands.

A simple case is to create 1 Replication Controller object in order to reliably run one instance of
A simple case is to create 1 ReplicationController object in order to reliably run one instance of
a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated
service, such as web servers.

## Running an example Replication Controller
## Running an example ReplicationController

Here is an example Replication Controller config. It runs 3 copies of the nginx web server.
Here is an example ReplicationController config. It runs 3 copies of the nginx web server.

{% include code.html language="yaml" file="replication.yaml" ghlink="/docs/user-guide/replication.yaml" %}
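
The included `replication.yaml` is not rendered inline in this diff view; a ReplicationController matching the description above might look roughly like this sketch (the labels and port are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  # Keep three copies of the nginx web server running at all times.
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx   # matches .spec.selector, as required
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```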

@ -42,7 +42,7 @@ $ kubectl create -f ./replication.yaml

replicationcontrollers/nginx
```

Check on the status of the replication controller using this command:
Check on the status of the ReplicationController using this command:

```shell
$ kubectl describe replicationcontrollers/nginx

@ -79,18 +79,18 @@ echo $pods

nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```

Here, the selector is the same as the selector for the replication controller (seen in the
Here, the selector is the same as the selector for the ReplicationController (seen in the
`kubectl describe` output, and in a different form in `replication.yaml`). The `--output=jsonpath` option
specifies an expression that just gets the name from each pod in the returned list.

## Writing a Replication Controller Spec
## Writing a ReplicationController Spec

As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](/docs/user-guide/simple-yaml/),
[here](/docs/user-guide/configuring-containers/), and [here](/docs/user-guide/working-with-resources/).

A Replication Controller also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
A ReplicationController also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).

### Pod Template

@ -100,28 +100,28 @@ The `.spec.template` is a [pod template](#pod-template). It has exactly

the same schema as a [pod](/docs/user-guide/pods/), except it is nested and does not have an `apiVersion` or
`kind`.

In addition to required fields for a Pod, a pod template in a Replication Controller must specify appropriate
In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
labels (i.e. don't overlap with other controllers, see [pod selector](#pod-selector)) and an appropriate restart policy.

Only a [`.spec.template.spec.restartPolicy`](/docs/user-guide/pod-states/) equal to `Always` is allowed, which is the default
if not specified.

For local container restarts, replication controllers delegate to an agent on the node,
For local container restarts, ReplicationControllers delegate to an agent on the node,
for example the [Kubelet](/docs/admin/kubelet/) or Docker.

### Labels on the Replication Controller
### Labels on the ReplicationController

The replication controller can itself have labels (`.metadata.labels`). Typically, you
The ReplicationController can itself have labels (`.metadata.labels`). Typically, you
would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
then it is defaulted to `.spec.template.metadata.labels`. However, they are allowed to be
different, and the `.metadata.labels` do not affect the behavior of the replication controller.
different, and the `.metadata.labels` do not affect the behavior of the ReplicationController.

### Pod Selector

The `.spec.selector` field is a [label selector](/docs/user-guide/labels/#label-selectors). A replication
controller manages all the pods with labels which match the selector. It does not distinguish
between pods which it created or deleted versus pods which some other person or process created or
deleted. This allows the replication controller to be replaced without affecting the running pods.
deleted. This allows the ReplicationController to be replaced without affecting the running pods.
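
As a minimal spec fragment (the label is an illustrative assumption), the selector and the template labels are normally written to match exactly; as noted below, a template whose labels do not equal `.spec.selector` is rejected by the API:

```yaml
spec:
  selector:
    app: nginx          # pods carrying this label are managed by the controller
  template:
    metadata:
      labels:
        app: nginx      # must be equal to .spec.selector, or the API rejects the object
```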

If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to

@ -144,54 +144,54 @@ shutdown, and a replacement starts early.

If you do not specify `.spec.replicas`, then it defaults to 1.

## Working with Replication Controllers
## Working with ReplicationControllers

### Deleting a Replication Controller and its Pods
### Deleting a ReplicationController and its Pods

To delete a replication controller and all its pods, use [`kubectl
delete`](/docs/user-guide/kubectl/kubectl_delete/). Kubectl will scale the replication controller to zero and wait
for it to delete each pod before deleting the replication controller itself. If this kubectl
To delete a ReplicationController and all its pods, use [`kubectl
delete`](/docs/user-guide/kubectl/kubectl_delete/). Kubectl will scale the ReplicationController to zero and wait
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.

When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the replication controller).
0, wait for pod deletions, then delete the ReplicationController).

### Deleting just a Replication Controller
### Deleting just a ReplicationController

You can delete a replication controller without affecting any of its pods.
You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/).

When using the REST API or go client library, simply delete the replication controller object.
When using the REST API or go client library, simply delete the ReplicationController object.

Once the original is deleted, you can create a new replication controller to replace it. As long
Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
However, it will not make any effort to make existing pods match a new, different pod template.
To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).

### Isolating pods from a Replication Controller
### Isolating pods from a ReplicationController

Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).

## Common usage patterns

### Rescheduling

As mentioned above, whether you have 1 pod you want to keep running, or 1000, a replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent).
As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent).

### Scaling

The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.

### Rolling updates

The replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one.
The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.

As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.

Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.

The two replication controllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.

Rolling update is implemented in the client tool
[`kubectl rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update). Visit [`kubectl rolling-update` tutorial](/docs/user-guide/rolling-updates/) for more concrete examples.

@ -200,26 +200,26 @@ Rolling update is implemented in the client tool

In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a replication controller with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another replication controller with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the replication controllers separately to test things out, monitor the results, etc.
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
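
A sketch of what the canary controller in that setup might look like (the names and the image tag are assumptions used only for illustration):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  # One canary pod alongside the nine pods of the "stable" controller.
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: frontend:canary   # hypothetical image tag for the new version
```

The "stable" controller would look the same except for `track: stable`, `replicas: 9`, and the current image tag.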

### Using Replication Controllers with Services
### Using ReplicationControllers with Services

Multiple replication controllers can sit behind a single service, so that, for example, some traffic
Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic
goes to the old version, and some goes to the new version.

A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.

## Writing programs for Replication

Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself.
Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.

## Responsibilities of the replication controller
## Responsibilities of the ReplicationController

The replication controller simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).

The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.

## API Object

@ -228,11 +228,11 @@ Replication controller is a top-level resource in the kubernetes REST API. More

API object can be found at: [ReplicationController API
object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller).

## Alternatives to Replication Controller
## Alternatives to ReplicationController

### ReplicaSet

[`ReplicaSet`](/docs/user-guide/replicasets/) is the next-generation Replication Controller that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement).
[`ReplicaSet`](/docs/user-guide/replicasets/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement).
It’s mainly used by [`Deployment`](/docs/user-guide/deployments/) as a mechanism to orchestrate pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.

@ -244,20 +244,20 @@ because unlike `kubectl rolling-update`, they are declarative, server-side, and

### Bare Pods

Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).

### Job

Use a [`Job`](/docs/user-guide/jobs/) instead of a replication controller for pods that are expected to terminate on their own
Use a [`Job`](/docs/user-guide/jobs/) instead of a ReplicationController for pods that are expected to terminate on their own
(i.e. batch jobs).
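
A minimal sketch of such a Job (the name and command are assumptions chosen for illustration):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        # Compute pi to 2000 digits, print it, and exit.
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Unlike a ReplicationController's pods, a Job's pods are expected to terminate.
      restartPolicy: Never
```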

### DaemonSet

Use a [`DaemonSet`](/docs/admin/daemons/) instead of a replication controller for pods that provide a
Use a [`DaemonSet`](/docs/admin/daemons/) instead of a ReplicationController for pods that provide a
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
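
A minimal sketch of a DaemonSet for such a node-level agent (the name and image are assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
      - name: agent
        # Hypothetical logging agent; the DaemonSet runs one copy on every node.
        image: fluentd
```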

## For more information

Read [Replication Controller Operations](/docs/user-guide/replication-controller/operations/).
Read [ReplicationController Operations](/docs/user-guide/replication-controller/operations/).

@ -666,7 +666,7 @@ one called, say, `prod-user` with the `prod-db-secret`, and one called, say,

### Use-case: Dotfiles in secret volume

In order to make piece of data 'hidden' (ie, in a file whose name begins with a dot character), simply
In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply
make that key begin with a dot. For example, when the following secret is mounted into a volume:

```json

@ -500,7 +500,7 @@ within AWS Certificate Manager.

},
```

The second annotation specificies which protocol a pod speaks. For HTTPS and
The second annotation specifies which protocol a pod speaks. For HTTPS and
SSL, the ELB will expect the pod to authenticate itself over the encrypted
connection.
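
As a hedged sketch of how the two annotations discussed here fit together on a Service (the names and the certificate ARN are placeholders, not real values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: https-frontend
  annotations:
    # Certificate used to terminate TLS at the ELB (placeholder ARN).
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/example
    # Protocol the pods speak behind the ELB: http, https, ssl, or tcp.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 443
    targetPort: 443
```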

@ -34,9 +34,9 @@ $ kubectl explain thirdpartyresource

## Creating a ThirdPartyResource

When you user create a new `ThirdPartyResource`, the Kubernetes API Server reacts by creating a new, namespaced RESTful resource path. For now, non-namespaced objects are not supported. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. `ThirdPartyResources` themselves are non-namespaced and are available to all namespaces.
When you create a new `ThirdPartyResource`, the Kubernetes API Server reacts by creating a new, namespaced RESTful resource path. For now, non-namespaced objects are not supported. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. `ThirdPartyResources` themselves are non-namespaced and are available to all namespaces.

For example, if a save the following `ThirdPartyResource` to `resource.yaml`:
For example, if you save the following `ThirdPartyResource` to `resource.yaml`:

```yaml
apiVersion: extensions/v1beta1

@ -15,6 +15,14 @@ Dashboard also provides information on the state of Kubernetes resources in your

* TOC
{:toc}

## Deploying the Dashboard UI

The Dashboard UI is not deployed by default. To deploy it, run the following command:

```
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```

## Accessing the Dashboard UI

There are multiple ways you can access the Dashboard UI; either by using the kubectl command-line interface, or by accessing the Kubernetes master apiserver using your web browser.

@ -52,7 +52,7 @@ Summary of container benefits:

* **Cloud and OS distribution portability**:
  Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Container Engine, and anywhere else.
* **Application-centric management**:
  Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  Raises the level of abstraction from running an OS on virtual hardware to run an application on an OS using logical resources.
* **Loosely coupled, distributed, elastic, liberated [micro-services](http://martinfowler.com/articles/microservices.html)**:
  Applications are broken into smaller, independent pieces and can be deployed and managed dynamically -- not a fat monolithic stack running on one big single-purpose machine.
* **Resource isolation**:

@ -106,7 +106,7 @@ Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) syst

* Kubernetes does not provide nor mandate a comprehensive application configuration language/system (e.g., [jsonnet](https://github.com/google/jsonnet)).
* Kubernetes does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.

On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Gondor](https://gondor.io/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes.
On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Eldarion Cloud](http://eldarion.cloud/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes.

Since Kubernetes operates at the application level rather than at just the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, monitoring, etc. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable.

@ -0,0 +1,46 @@

---
layout: docwithnav
title: Kubernetes Security and Disclosure Information
permalink: /security/
assignees:
- eparis
- erictune
- philips
- jessfraz
---

## Security Announcements

Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group for emails about security and major API announcements.

## Report a Vulnerability

We’re extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.

To make a report, please email the private [kubernetes-security@googlegroups.com](mailto:kubernetes-security@googlegroups.com) list with the security details and the details expected for [all Kubernetes bug reports](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE.md).

You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure.

### When Should I Report a Vulnerability?

- You think you discovered a potential security vulnerability in Kubernetes
- You are unsure how a vulnerability affects Kubernetes
- You think you discovered a vulnerability in another project that Kubernetes depends on (e.g. docker, rkt, etcd)

### When Should I NOT Report a Vulnerability?

- You need help tuning Kubernetes components for security
- You need help applying security related updates
- Your issue is not security related

## Security Vulnerability Response

Each report is acknowledged and analyzed by Product Security Team members within 3 working days. This will set off the [Security Release Process](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst).

Any vulnerability information shared with Product Security Team stays within Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed.

As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated.

## Public Disclosure Timing

A public disclosure date is negotiated by the Kubernetes product security team and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. As a basic default, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes product security team holds the final say when setting a disclosure date.