Merge branch 'master' into chrismarino-patch
commit ce2e9950fc
@ -7,18 +7,20 @@ Add-ons extend the functionality of Kubernetes.
This page lists some of the available add-ons and links to their respective installation instructions.

Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.

## Networking and Network Policy

* [Calico](http://docs.projectcalico.org/v1.6/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://github.com/weaveworks/weave-kube) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.

## Visualization & Control

* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.

## Legacy Add-ons
@ -124,7 +124,7 @@ With v1.3, the following annotations are deprecated: `pod.beta.kubernetes.io/hos
## How do I test if it is working?

### Create a simple Pod to use as a test environment

Create a file named busybox.yaml with the following contents:

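The manifest body itself falls outside this hunk; a minimal busybox pod that works for this test looks roughly like the following (written here as a shell heredoc so it can be pasted directly):

```shell
cat <<EOF > busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
  restartPolicy: Always
EOF
```
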
@ -152,7 +152,7 @@ Then create a pod using this file:
kubectl create -f busybox.yaml
```

### Wait for this pod to go into the running state

You can get its status with:
```
@ -165,7 +165,7 @@ NAME READY STATUS RESTARTS AGE
busybox          1/1       Running   0          <some-time>
```

### Validate that DNS is working

Once that pod is running, you can exec nslookup in that environment:

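The exec command itself is elided by this hunk; it is along the lines of the one used in the Quick diagnosis section below:

```shell
kubectl exec busybox -- nslookup kubernetes.default
```
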
@ -185,6 +185,115 @@ Address 1: 10.0.0.1
If you see that, DNS is working correctly.

### Troubleshooting Tips

If the nslookup command fails, check the following:

#### Check the local DNS configuration first
Take a look inside the resolv.conf file. (See "Inheriting DNS from the node" and "Known issues" below for more information.)

```
cat /etc/resolv.conf
```

Verify that the search path and name server are set up like the following (note that the search path may vary for different cloud providers):

```
search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
nameserver 10.0.0.10
options ndots:5
```

#### Quick diagnosis

Errors such as the following indicate a problem with the kube-dns add-on or associated Services:

```
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
```

or

```
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
```

#### Check if the DNS pod is running

Use the kubectl get pods command to verify that the DNS pod is running.

```
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```

You should see something like:

```
NAME                 READY     STATUS    RESTARTS   AGE
...
kube-dns-v19-ezo1y   3/3       Running   0          1h
...
```

If you see that no pod is running, or that the pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.

#### Check for Errors in the DNS pod

Use the `kubectl logs` command to see logs for the DNS daemons.

```
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c healthz
```

Look for any suspicious entries in the logs. A W, E, or F at the beginning of a line indicates a Warning, Error, or Failure, respectively. Search for entries with those logging levels and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors.

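One quick way to scan for such entries (a sketch; swap the `-c` flag for the other containers as needed) is to filter the log output on its leading severity letter:

```shell
kubectl logs --namespace=kube-system \
  $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) \
  -c kubedns | grep -E '^[WEF]'
```
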
#### Is the DNS service up?

Verify that the DNS service is up by using the `kubectl get service` command.

```
kubectl get svc --namespace=kube-system
```

You should see:

```
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
...
kube-dns   10.0.0.10    <none>        53/UDP,53/TCP   1h
...
```

If you have created the service, or if it should have been created by default but does not appear, see the [debugging services page](http://kubernetes.io/docs/user-guide/debugging-services/) for more information.

#### Are DNS endpoints exposed?

You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` command.

```
kubectl get ep kube-dns --namespace=kube-system
```

You should see something like:
```
NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53   1h
```

If you do not see the endpoints, see the endpoints section in the [debugging services documentation](http://kubernetes.io/docs/user-guide/debugging-services/).

For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/kubernetes/tree/master/examples/cluster-dns) in the Kubernetes GitHub repository.

## Kubernetes Federation (Multiple Zone support)

Release 1.3 introduced Cluster Federation support for multi-site
@ -213,6 +322,34 @@ the flag `--cluster-domain=<default local domain>`
The Kubernetes cluster DNS server (based on the [SkyDNS](https://github.com/skynetservices/skydns) library)
supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records).

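For illustration, each record type can be queried from inside a pod. busybox's nslookup is limited, so the SRV and PTR examples below assume a hypothetical `dns-test` pod whose image ships `dig`; the service and namespace names are placeholders:

```shell
# A record (forward lookup) for a service
kubectl exec busybox -- nslookup my-service.my-ns.svc.cluster.local
# SRV record for a named port
kubectl exec dns-test -- dig +short _http._tcp.my-service.my-ns.svc.cluster.local SRV
# PTR record (reverse lookup of a cluster IP such as 10.0.0.10)
kubectl exec dns-test -- dig +short -x 10.0.0.10
```
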
## Inheriting DNS from the node
When running a pod, kubelet will prepend the cluster DNS server and search
paths to the node's own DNS settings. If the node is able to resolve DNS names
specific to the larger environment, pods should be able to as well. See "Known
issues" below for a caveat.

If you don't want this, or if you want a different DNS config for pods, you can
use the kubelet's `--resolv-conf` flag. Setting it to "" means that pods will
not inherit DNS. Setting it to a valid file path means that kubelet will use
this file instead of `/etc/resolv.conf` for DNS inheritance.

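A sketch of the two settings described above (the custom file path is just an example, and all other kubelet flags are omitted):

```shell
# pods do not inherit any DNS configuration from the node
kubelet --resolv-conf=""
# pods inherit DNS configuration from a custom file instead of /etc/resolv.conf
kubelet --resolv-conf=/etc/kubernetes/pod-resolv.conf
```
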
## Known issues
Kubernetes installs do not configure the nodes' resolv.conf files to use the
cluster DNS by default, because that process is inherently distro-specific.
This should probably be implemented eventually.

Linux's libc is impossibly stuck ([see this bug from
2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just
3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to
consume 1 `nameserver` record and 3 `search` records. This means that if a
local installation already uses 3 `nameserver`s or uses more than 3 `search`es,
some of those settings will be lost. As a partial workaround, the node can run
`dnsmasq`, which will provide more `nameserver` entries but not more `search`
entries. You can also use kubelet's `--resolv-conf` flag.

If you are using Alpine version 3.3 or earlier as your base image, DNS may not
work properly due to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
for more information.

## References
@ -44,7 +44,11 @@ In addition to the CNI plugin specified by the configuration file, Kubernetes re
### kubenet

Kubenet is a very basic, simple network plugin, on Linux only. It does not, of itself, implement more advanced features like cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.

Kubenet creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node either through configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host.

The kubenet plugin is mutually exclusive with the --configure-cbr0 option.

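As a rough sketch of selecting this plugin on the kubelet command line (the pod CIDR is illustrative and all other required kubelet flags are omitted):

```shell
kubelet --network-plugin=kubenet --pod-cidr=10.244.1.0/24
```
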
The plugin requires a few things:
@ -100,8 +100,19 @@ existence or non-existence of host ports.
There are a number of ways that this network model can be implemented. This
document is not an exhaustive study of the various methods, but hopefully serves
as an introduction to the various technologies and as a jumping-off point.
If some techniques become vastly preferable to others, we might detail them more
here.

The following networking options are sorted alphabetically - the order does not
imply any preferential status.

### Contiv

[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is fully open sourced.

### Flannel

[Flannel](https://github.com/coreos/flannel#flannel) is a very simple overlay
network that satisfies the Kubernetes requirements. Many
people have reported success with Flannel and Kubernetes.

### Google Compute Engine (GCE)

@ -158,29 +169,12 @@ Follow the "With Linux Bridge devices" section of [this very nice
tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
Lars Kellogg-Stedman.

### OpenVSwitch

[OpenVSwitch](/docs/admin/ovs-networking) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.

### Project Calico

[Project Calico](https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/README.md) is an open source container networking provider and network policy engine.

@ -193,9 +187,13 @@ Calico can also be run in policy enforcement mode in conjunction with other netw
[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/user-guide/networkpolicies/) to provide isolation across network namespaces.

### Weave Net from Weaveworks

[Weave Net](https://www.weave.works/products/weave-net/) is a
resilient and simple to use network for Kubernetes and its hosted applications.
Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/)
or stand-alone. In either version, it doesn't require any configuration or extra code
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.

## Other reading
@ -82,6 +82,8 @@ curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```

For Windows, download [kubectl.exe](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe) and save it to a location on your PATH.

The generic download path is:
```
https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}
@ -59,7 +59,7 @@ Under rktnetes, `kubectl get logs` currently cannot get logs from applications t
## Init containers

The beta [init container](/docs/user-guide/pods/init-containers.md) feature is currently not supported.

## Container restart back-off

@ -81,12 +81,12 @@ to implement one of the above options:
- **Use a network plugin which is called by Kubernetes**
  - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
  - There are a number of solutions which provide plugins for Kubernetes (listed alphabetically):
    - [Calico](http://docs.projectcalico.org/)
    - [Flannel](https://github.com/coreos/flannel)
    - [Open vSwitch (OVS)](http://openvswitch.org/)
    - [Romana](http://romana.io/)
    - [Weave](http://weave.works/)
    - [More found here](/docs/admin/networking#how-to-achieve-this)
  - You can also write your own.
- **Compile support directly into Kubernetes**
@ -5,9 +5,7 @@ assignees:
---

<p>Kubernetes documentation can help you set up Kubernetes, learn about the system, or get your applications and workloads running on Kubernetes. To learn the basics of what Kubernetes is and how it works, read "<a href="/docs/whatisk8s/">What is Kubernetes</a>".</p>

<h2>Interactive Tutorial</h2>

@ -40,4 +38,4 @@ assignees:
<h2>Tools</h2>

<p>The <a href="/docs/tools/">tools</a> page contains a list of native and third-party tools for Kubernetes.</p>
@ -64,12 +64,12 @@ healthy backend service endpoint at all times, even in the event of
pod, cluster,
availability zone or regional outages.

Note that in the case of Google Cloud, the logical L7 load balancer is
not a single physical device (which would present both a single point
of failure, and a single global network routing choke point), but
rather a
[truly global, highly available load balancing managed service](https://cloud.google.com/load-balancing/),
globally reachable via a single, static IP address.

Clients inside your federated Kubernetes clusters (i.e. Pods) will be
automatically routed to the cluster-local shard of the Federated Service
@ -86,13 +86,13 @@ You can create a federated ingress in any of the usual ways, for example using k
``` shell
kubectl --context=federation-cluster create -f myingress.yaml
```

For example ingress YAML configurations, see the [Ingress User Guide](/docs/user-guide/ingress/).

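As a concrete sketch, the `myingress.yaml` referenced above could be as simple as the following (the backend service name and port are placeholders):

```shell
cat <<EOF > myingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  backend:
    serviceName: nginx
    servicePort: 80
EOF
```
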
The '--context=federation-cluster' flag tells kubectl to submit the
request to the Federation API endpoint, with the appropriate
credentials. If you have not yet configured such a context, visit the
[federation admin guide](/docs/admin/federation/) or one of the
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
to find out how to do so.

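You can check which contexts are configured (and which one is current) with a standard kubectl command; the context names shown will depend on your own setup:

```shell
kubectl config get-contexts
```
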
As described above, the Federated Ingress will automatically create
and maintain matching Kubernetes ingresses in all of the clusters
@ -147,17 +147,28 @@ Events:
2m 2m 1 {loadbalancer-controller } Normal CREATE ip: 130.211.5.194
```

Note that:

1. the address of your Federated Ingress
   corresponds with the address of all of the
   underlying Kubernetes ingresses (once these have been allocated - this
   may take up to a few minutes).
2. we have not yet provisioned any backend Pods to receive
   the network traffic directed to this ingress (i.e. 'Service
   Endpoints' behind the service backing the Ingress), so the Federated Ingress does not yet consider these to
   be healthy shards and will not direct traffic to any of these clusters.
3. the federation control system will
   automatically reconfigure the load balancer controllers in all of the
   clusters in your federation to make them consistent, and allow
   them to share global load balancers. But this reconfiguration can
   only complete successfully if there are no pre-existing Ingresses in
   those clusters (this is a safety feature to prevent accidental
   breakage of existing ingresses). So to ensure that your federated
   ingresses function correctly, either start with new, empty clusters, or make
   sure that you delete (and recreate if necessary) all pre-existing
   Ingresses in the clusters comprising your federation.

## Adding backend services and pods

To render the underlying ingress shards healthy, we need to add
backend Pods behind the service upon which the Ingress is based. There are several ways to achieve this, but
@ -175,6 +186,16 @@ kubectl --context=federation-cluster create -f services/nginx.yaml
kubectl --context=federation-cluster create -f myreplicaset.yaml
```

Note that in order for your federated ingress to work correctly on
Google Cloud, the node ports of all of the underlying cluster-local
services need to be identical. If you're using a federated service
this is easy to do. Simply pick a node port that is not already
being used in any of your clusters, and add that to the spec of your
federated service. If you do not specify a node port for your
federated service, each cluster will choose its own node port for
its cluster-local shard of the service, and these will probably end
up being different, which is not what you want.

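A sketch of what that looks like in the spec of the federated service (the file name, port numbers, and selector are placeholders):

```shell
cat <<EOF > nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # pick a node port that is free in every cluster
  selector:
    app: nginx
EOF
```
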
You can verify this by checking in each of the underlying clusters, for example:

``` shell
@ -258,6 +279,35 @@ Check that:
`service-controller` or `replicaset-controller`,
errors in the output of `kubectl logs federation-controller-manager --namespace federation`).

#### I can create a federated ingress successfully, but request load is not correctly distributed across the underlying clusters

Check that:

1. the services underlying your federated ingress in each cluster have
   identical node ports. See [above](#creating_a_federated_ingress) for further explanation.
2. the load balancer controllers in each of your clusters are of the
   correct type ("GLBC") and have been correctly reconfigured by the
   federation control plane to share a global GCE load balancer (this
   should happen automatically). If they are of the correct type, and
   have been correctly reconfigured, the UID data item in the GLBC
   configmap in each cluster will be identical across all clusters.
   See
   [the GLBC docs](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#changing-the-cluster-uid)
   for further details.
   If this is not the case, check the logs of your federation
   controller manager to determine why this automated reconfiguration
   might be failing.
3. no ingresses have been manually created in any of your clusters before the above
   reconfiguration of the load balancer controller completed
   successfully. Ingresses created before the reconfiguration of
   your GLBC will interfere with the behavior of your federated
   ingresses created after the reconfiguration (see
   [the GLBC docs](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#changing-the-cluster-uid)
   for further information). To remedy this,
   delete any ingresses created before the cluster joined the
   federation (and had its GLBC reconfigured), and recreate them if
   necessary.

#### This troubleshooting guide did not help me solve my problem

Please use one of our [support channels](http://kubernetes.io/docs/troubleshooting/) to seek assistance.
@ -88,7 +88,7 @@ vm-1 # printf "GET / HTTP/1.0\r\n\r\n" | netcat vm-0.ub 80
It's worth exploring what just happened. Init containers run sequentially *before* the application container. In this example we used the init container to copy shared libraries from the rootfs, while preserving user installed packages across container restart.

```yaml
pod.beta.kubernetes.io/init-containers: '[
  {
    "name": "rootfs",
    "image": "ubuntu:15.10",
@ -29,7 +29,7 @@ spec:
    app: nginx
  annotations:
    pod.alpha.kubernetes.io/initialized: "true"
    pod.beta.kubernetes.io/init-containers: '[
      {
        "name": "peerfinder",
        "image": "gcr.io/google_containers/peer-finder:0.1",
@ -0,0 +1,169 @@
---
assignees:
- erictune

---

* TOC
{:toc}

In addition to having one or more main containers (or **app containers**), a
pod can also have one or more **init containers** which run before the app
containers. Init containers allow you to reduce and reorganize setup scripts
and "glue code".

## Overview

An init container is exactly like a regular container, except that it always
runs to completion and each init container must complete successfully before
the next one is started. If the init container fails, Kubernetes will restart
the pod until the init container succeeds. If a pod is marked as `RestartNever`,
the pod will fail if the init container fails.

You specify a container as an init container by adding an annotation.
The annotation key is `pod.beta.kubernetes.io/init-containers`. The annotation
value is a JSON array of [objects of type `v1.Container`
](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_container).

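As a sketch of that format, a pod carrying a single init container in the annotation could look like the following (the pod name, images, and commands are illustrative):

```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
      {
        "name": "wait-for-service",
        "image": "busybox",
        "command": ["sh", "-c", "until nslookup myservice; do sleep 1; done"]
      }
      ]'
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
```
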
Once the feature exits beta, the init containers will be specified on the Pod
Spec alongside the app `containers` array.
The status of the init containers is returned as another annotation,
`pod.beta.kubernetes.io/init-container-statuses`, as an array of the
container statuses (similar to the `status.containerStatuses` field).

Init containers support all of the same features as normal containers,
including resource limits, volumes, and security settings. The resource
requests and limits for an init container are [handled slightly differently](#resources).
Init containers do not support readiness probes since they will
run to completion before the pod can be ready.
An init container has all of the fields of an app container.

If you specify multiple init containers for a pod, those containers run one at
a time in sequential order. Each must succeed before the next can run. Once all
init containers have run to completion, Kubernetes initializes the pod and runs
the application containers as usual.

## What are Init Containers Good For?

Because init containers have separate images from application containers, they
have some advantages for start-up related code. These include:

* they can contain utilities that are not desirable to include in the app container
  image for security reasons,
* they can contain utilities or custom code for setup that is not present in an app
  image. (No need to make an image `FROM` another image just to use a tool like
  `sed`, `awk`, `python`, `dig`, etc. during setup).
* the application image builder and the deployer roles can work independently without
  the need to jointly build a single app image.

Because init containers have a different filesystem view (Linux namespaces) from
app containers, they can be given access to Secrets that the app containers are
not able to access.

Since init containers run to completion before any app containers start, and
since app containers run in parallel, they provide an easier way to block or
delay the startup of application containers until some precondition is met.

Because init containers run in sequence and there can be multiple init containers,
they can be composed easily.

Here are some ideas for how to use init containers:
- Wait for a service to be created with a shell command like:
  `for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1`
- Register this pod with a remote server with a command like:
  `curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(POD_NAME)&ip=$(POD_IP)'`
  using `POD_NAME` and `POD_IP` from the downward API.
- Wait for some time before starting the app container with a command like `sleep 60`.
- Clone a git repository into a volume.
- Place values like a POD_IP into a configuration file, and run a template tool (e.g. jinja)
  to generate a configuration file to be consumed by the main app container.

Complete usage examples can be found in the [PetSets
guide](/docs/user-guide/petset/bootstrapping/index.md) and the [Production Pods
guide](/docs/user-guide/production-pods.md#handling-initialization).

## Detailed Behavior

Each pod may have 0..N init containers defined along with the existing
1..M app containers.

On startup of the pod, after the network and volumes are initialized, the init
containers are started in order. Each container must exit successfully before
the next is invoked. If a container fails to start (due to the runtime) or
exits with failure, it is retried according to the pod RestartPolicy, except
when the pod restart policy is RestartPolicyAlways, in which case just the init
containers use RestartPolicyOnFailure.

A pod cannot be ready until all init containers have succeeded. The ports on an
init container are not aggregated under a service. A pod that is being
initialized is in the `Pending` phase but should have a condition `Initializing`
set to `true`.

If the pod is [restarted](#pod-restart-reasons), all init containers must
execute again.

Changes to the init container spec are limited to the container image field.
Altering an init container image field is equivalent to restarting the pod.

Because init containers can be restarted, retried, or reexecuted, init container
code should be idempotent. In particular, code that writes to files on EmptyDirs
should be prepared for the possibility that an output file already exists.

An init container has all of the fields of an app container. The following
fields are prohibited from being used on init containers by validation:

* `readinessProbe` - init containers must exit for pod startup to continue,
  are not included in rotation, and so cannot define readiness distinct from
  completion.

Init container authors may use `activeDeadlineSeconds` on the pod and
`livenessProbe` on the container to prevent init containers from failing
forever. The active deadline includes init containers.

The name of each app and init container in a pod must be unique - it is a
validation error for any container to share a name.

### Resources

Given the ordering and execution for init containers, the following rules
for resource usage apply (a worked example follows this list):

* The highest of any particular resource request or limit defined on all init
  containers is the **effective init request/limit**.
* The pod's **effective request/limit** for a resource is the higher of:
  * the sum of all app containers' request/limit for a resource
  * the effective init request/limit for a resource
* Scheduling is done based on effective requests/limits, which means
  init containers can reserve resources for initialization that are not used
  during the life of the pod.
* The pod's **effective QoS tier** is the QoS tier for init containers
  and app containers alike.

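For instance, applying these rules to hypothetical numbers (a sketch, not output from any tool):

```shell
# init container A requests cpu: 200m; init container B requests cpu: 100m
#   -> effective init request = 200m (the highest, since they run one at a time)
# app containers request cpu: 50m and cpu: 100m -> sum = 150m
#   -> pod effective request = max(200m, 150m) = 200m
```
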
Quota and limits are applied based on the effective pod request and
limit.

Pod-level cgroups are based on the effective pod request and limit, the
same as the scheduler.


## Pod Restart Reasons

A Pod may "restart", causing reexecution of init containers, for the following
reasons:

* An init container image is changed by a user updating the Pod Spec.
  * App container image changes only restart the app container.
* The pod infrastructure container is restarted.
  * This is uncommon and would have to be done by someone with root access to nodes.
* All containers in a pod are terminated, requiring a restart (RestartPolicyAlways) AND the record of init container completion has been lost due to garbage collection.

## Support and compatibility

A cluster with Kubelet and Apiserver version 1.4.0 or greater supports init
containers with the beta annotations. Support varies for other combinations of
Kubelet and Apiserver version; see the [release notes
](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md) for details.
@ -204,6 +204,8 @@ The status of the init containers is returned as another annotation - `pod.beta.
Init containers support all of the same features as normal containers, including resource limits, volumes, and security settings. The resource requests and limits for an init container are handled slightly differently from normal containers, since init containers are run one at a time instead of all at once - any limits or quotas will be applied based on the largest init container resource quantity, rather than as the sum of quantities. Init containers do not support readiness probes since they will run to completion before the pod can be ready.

[Complete Init Container Documentation](/docs/user-guide/pods/init-containers.md)

## Lifecycle hooks and termination notice

@ -345,7 +345,7 @@ can do a DNS SRV query for `"_http._tcp.my-service.my-ns"` to discover the port
number for `"http"`.

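As a sketch, such an SRV query can be issued from any pod whose image includes `dig` (the `dns-test` pod name is a placeholder):

```shell
kubectl exec dns-test -- dig +short _http._tcp.my-service.my-ns.svc.cluster.local SRV
```
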
The Kubernetes DNS server is the only way to access services of type
`ExternalName`. More information is available in the [DNS Admin Guide](http://kubernetes.io/docs/admin/dns/).

## Headless services