Merge branch 'master' into concepts-root

reviewable/pr1611/r3
Jared 2016-11-21 17:25:58 -08:00 committed by GitHub
commit 80bd66bef4
57 changed files with 1166 additions and 893 deletions

CONTRIBUTING.md (new file, 36 lines added)
View File

@ -0,0 +1,36 @@
# Contributing to Kubernetes Documentation
**First off, thanks for taking the time to contribute!**
The following is a set of guidelines for contributing to Kubernetes documentation, hosted at [Kubernetes.io](http://kubernetes.io/).
These are just guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.
## Before you get started
### Code of Conduct
Kubernetes follows the [Cloud Native Computing Foundation (CNCF) Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to Sarah Novotny [sarahnovotny@google.com](mailto:sarahnovotny@google.com) and/or Dan Kohn [dan@linuxfoundation.org](mailto:dan@linuxfoundation.org).
### Documentation and Site Decisions
The [Kubernetes SIG Docs Discussion Group](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) is the discussion group for doc releases, suggested site improvements, and improving the doc contribution experience. If you are planning to be a regular contributor, join this group to stay informed and involved.
### Style Guides and Templates
Before submitting a pull request to create new content, please review the [Kubernetes.io style guide](http://kubernetes.io/docs/contribute/style-guide/) and follow the [instructions for using page templates](http://kubernetes.io/docs/contribute/page-templates/).
## Contributing to Documentation
### Reporting Documentation Issues
Kubernetes.io uses GitHub issues to track documentation issues and requests. If you see a documentation issue, submit an issue using the following steps:
1. Check the [kubernetes.io issues list](https://github.com/kubernetes/kubernetes.github.io/issues) first, as your issue might be a duplicate.
2. Use the [included template for every new issue](https://github.com/kubernetes/kubernetes.github.io/issues/new). When you create a bug report, include as many details as possible, along with any suggested fixes.
Note that code issues should be filed against the main kubernetes repository, while documentation issues should go in the kubernetes.io repository.
### Submitting Documentation Pull Requests
If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/contribute/create-pull-request/).

View File

@ -14,6 +14,8 @@ toc:
path: /docs/getting-started-guides/kops/
- title: Hello World on Google Container Engine
path: /docs/hellonode/
- title: Installing kubectl
path: /docs/getting-started-guides/kubectl/
- title: Downloading or Building Kubernetes
path: /docs/getting-started-guides/binary_release/
- title: Online Training Course
@ -207,8 +209,6 @@ toc:
path: /docs/getting-started-guides/ovirt/
- title: OpenStack Heat
path: /docs/getting-started-guides/openstack-heat/
- title: CoreOS on Multinode Cluster
path: /docs/getting-started-guides/coreos/coreos_multinode_cluster/
- title: rkt
section:
- title: Running Kubernetes with rkt
@ -233,12 +233,8 @@ toc:
path: /docs/getting-started-guides/centos/centos_manual_config/
- title: CoreOS
path: /docs/getting-started-guides/coreos
- title: CoreOS with Calico
path: /docs/getting-started-guides/coreos/bare_metal_calico/
- title: Ubuntu
path: /docs/getting-started-guides/ubuntu/
- title: Ubuntu Nodes with Calico
path: /docs/getting-started-guides/ubuntu-calico/
- title: Validate Node Setup
path: /docs/admin/node-conformance
- title: Portable Multi-Node Cluster

View File

@ -219,7 +219,7 @@ toc:
- title: Replication Controller
path: /docs/user-guide/replication-controller/
- title: Resource Quotas
path: /docs/admin/resource-quota/
path: /docs/admin/resourcequota/
- title: Scheduled Jobs
path: /docs/user-guide/scheduled-jobs/
- title: Secrets
@ -269,6 +269,6 @@ toc:
- title: Federation Components
section:
- title: federation-apiserver
path: /docs/admin/federation-apiserver.md
path: /docs/admin/federation-apiserver
- title : federation-controller-mananger
path: /docs/admin/federation-controller-manager.md
path: /docs/admin/federation-controller-manager

View File

@ -15,6 +15,14 @@ toc:
section:
- title: Using Port Forwarding to Access Applications in a Cluster
path: /docs/tasks/access-application-cluster/port-forward-access-application-cluster/
- title: Debugging Applications in a Cluster
section:
- title: Determining the Reason for Pod Failure
path: /docs/tasks/debug-application-cluster/determine-reason-pod-failure/
- title: Accessing the Kubernetes API
section:
- title: Using an HTTP Proxy to Access the Kubernetes API

View File

@ -20,6 +20,8 @@
<a href="https://calendar.google.com/calendar/embed?src=nt2tcnbtbied3l6gi2h29slvc0%40group.calendar.google.com" class="calendar"><span>Events Calendar</span></a>
</div>
<div>
<a href="//get.k8s.io" class="button">Download K8s</a>
<a href="https://github.com/kubernetes/kubernetes" class="button">Contribute to the K8s codebase</a>
</div>
</div>
<div id="miceType" class="center">&copy; {{ 'now' | date: "%Y" }} Kubernetes</div>

View File

@ -14,6 +14,13 @@
link: 'https://deis.com',
blurb: 'Deis the creators of Helm, Workflow, and Steward, helps developers and operators build, deploy, manage and scale their applications on top of Kubernetes.'
},
{
type: 0,
name: 'StackPointCloud',
logo: 'stackpoint',
link: 'https://stackpoint.io',
blurb: 'StackPointCloud builds Stackpoint.io, the universal control plane for Kubernetes Anywhere -- compose and build your own infrastructure as easily as a DigitalOcean droplet at any public cloud provider.'
},
{
type: 0,
name: 'Sysdig Cloud',
@ -217,6 +224,13 @@
link: 'https://deis.com/services/',
blurb: 'Deis provides professional services and 24x7 operational support for any Kubernetes cluster managed by our global cluster operations team.'
},
{
type: 1,
name: 'StackPointCloud',
logo: 'stackpoint',
link: 'https://stackpoint.io',
blurb: 'StackPointCloud offers a wide range of support plans for managed Kubernetes clusters built through its universal control plane for Kubernetes Anywhere.'
},
{
type: 1,
name: 'Samsung SDS',

View File

@ -41,7 +41,7 @@
{% if notitle != "true" %}<h1>{{ title }}</h1>{% endif %}
{{ content }}
<p><a href=""><img src="https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/{{ page.path }}?pixel" alt="Analytics" /></a>
{% if page.url != "/404.html" and page.url != "/docs/search/" %}<div id="pd_rating_holder_8345992"></div>
{% if page.url != "/404.html" and page.url != "/docs/search/" %}
<script type="text/javascript">
PDRTJS_settings_8345992 = {
"id" : "8345992",
@ -80,6 +80,34 @@
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-36037335-10', 'auto');
ga('send', 'pageview');
// hide docs nav area if no nav is present, or if nav only contains a link to the current page
(function () {
window.addEventListener('DOMContentLoaded', init)
// play nice with our neighbors
function init() {
window.removeEventListener('DOMContentLoaded', init)
hideNav()
}
function hideNav(toc){
if (!toc) toc = document.querySelector('#docsToc')
var container = toc.querySelector('.container')
// container is built dynamically, so it may not be present on the first runloop
if (container) {
if (container.childElementCount === 0 || toc.querySelectorAll('a.item').length === 1) {
toc.style.display = 'none'
document.getElementById('docsContent').style.width = '100%'
}
} else {
requestAnimationFrame(function () {
hideNav(toc)
})
}
}
})();
</script>
<!-- Commenting out AnswerDash for now; we need to work on our list of questions/answers/design first
<!-- Start of AnswerDash script <script>var AnswerDash;!function(e,t,n,s,a){if(!t.getElementById(s)){var i,r=t.createElement(n),c=t.getElementsByTagName(n)[0];e[a]||(i=e[a]=function(){i.__oninit.push(arguments)},i.__oninit=[]),r.type="text/javascript",r.async=!0,r.src="https://p1.answerdash.com/answerdash.min.js?siteid=756",r.setAttribute("id",s),c.parentNode.insertBefore(r,c)}}(window,document,"script","answerdash-script","AnswerDash");</script> <!-- End of AnswerDash script -->

View File

@ -390,6 +390,14 @@ footer
height: 0
overflow: hidden
&.button
background-image: none
width: auto
height: auto
&:hover
color: $blue
a.twitter
background-position: 0 0
@ -874,11 +882,22 @@ dd
img
max-width: 100%
a
//font-weight: 700
text-decoration: underline
a:visited
color: blueviolet
a.button
border-radius: 2px
text-decoration: none
&:visited
color: white
a.issue
margin-left: 20px
margin-left: 0px
.fixed footer
position: fixed

View File

@ -7,18 +7,20 @@ Add-ons extend the functionality of Kubernetes.
This page lists some of the available add-ons and links to their respective installation instructions.
Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
## Networking and Network Policy
* [Weave Net](https://github.com/weaveworks/weave-kube) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
* [Calico](http://docs.projectcalico.org/v1.6/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
* [Calico](http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is a overlay network provider that can be used with Kubernetes.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is a overlay network provider that can be used with Kubernetes.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
## Visualization &amp; Control
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
## Legacy Add-ons

View File

@ -6,7 +6,8 @@ assignees:
- deads2k
---
* TOC
{:toc}
## Users in Kubernetes
@ -33,7 +34,7 @@ or be treated as an anonymous user.
## Authentication strategies
Kubernetes uses client certificates, bearer tokens, or HTTP basic auth to
Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to
authenticate API requests through authentication plugins. As HTTP requests are
made to the API server, plugins attempt to associate the following attributes
with the request:
@ -360,6 +361,20 @@ An unsuccessful request would return:
HTTP status codes can be used to supply additional error context.
### Authenticating Proxy
The API server can be configured to identify users from request header values, such as `X-Remote-User`.
It is designed for use in combination with an authenticating proxy, which sets the request header value.
In order to prevent header spoofing, the authenticating proxy is required to present a valid client
certificate to the API server for validation against the specified CA before the request headers are
checked.
* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order, for the user identity. The first header containing a value is used as the identity.
* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names.
* `--requestheader-allowed-names` Optional. List of common names (cn). If set, a valid client certificate with a Common Name (cn) in the specified list must be presented before the request headers are checked for user names. If empty, any Common Name is allowed.
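For illustration only, an API server sitting behind an authenticating proxy might be started with flags along the following lines (the certificate path and the `front-proxy-client` name are hypothetical; substitute the values from your own proxy setup):

```shell
# Accept the identity asserted in the X-Remote-User header, but only from
# clients presenting a certificate signed by the proxy CA with the allowed
# Common Name. Other required kube-apiserver flags are omitted here.
kube-apiserver \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/proxy-ca.pem \
  --requestheader-allowed-names=front-proxy-client
```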
### Keystone Password
Keystone authentication is enabled by passing the `--experimental-keystone-url=<AuthURL>`

View File

@ -207,6 +207,29 @@ and [enable the API version](
/docs/admin/cluster-management/#turn-on-or-off-an-api-version-for-your-cluster),
with a `--runtime-config=` that includes `rbac.authorization.k8s.io/v1alpha1`.
### Privilege Escalation Prevention and Bootstrapping
The `rbac.authorization.k8s.io` API group inherently attempts to prevent users
from escalating privileges. Simply put, __a user can't grant permissions they
don't already have, even when the RBAC authorizer is disabled__. If "user-1"
does not have the ability to read secrets in "namespace-a", they cannot create
a binding that would grant that permission to themselves or any other user.
For bootstrapping the first roles, it becomes necessary for someone to get
around these limitations. For the alpha release of RBAC, an API Server flag was
added to allow one user to step around all RBAC authorization and privilege
escalation checks. NOTE: _This is subject to change with future releases._
```
--authorization-rbac-super-user=admin
```
Once set the specified super user, in this case "admin", can be used to create
the roles and role bindings to initialize the system.
This flag is optional and once the initial bootstrapping is performed can be
unset.
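As a minimal sketch (the namespace, role name, and rules below are purely illustrative), the first role could be bootstrapped like this, using credentials that authenticate as the configured super user:

```shell
# Create an initial namespaced role; run this with credentials that
# authenticate as the super user named in --authorization-rbac-super-user.
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: Role
metadata:
  namespace: namespace-a
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
EOF
```

A matching `RoleBinding` created the same way can then grant the role to "user-1", after which the super-user flag can be unset.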
### Roles, RolesBindings, ClusterRoles, and ClusterRoleBindings
The RBAC API Group declares four top level types which will be covered in this
@ -417,29 +440,6 @@ subjects:
name: system:serviceaccounts
```
### Privilege Escalation Prevention and Bootstrapping
The `rbac.authorization.k8s.io` API group inherently attempts to prevent users
from escalating privileges. Simply put, __a user can't grant permissions they
don't already have, even when the RBAC authorizer is disabled__. If "user-1"
does not have the ability to read secrets in "namespace-a", they cannot create
a binding that would grant that permission to themselves or any other user.
For bootstrapping the first roles, it becomes necessary for someone to get
around these limitations. For the alpha release of RBAC, an API Server flag was
added to allow one user to step around all RBAC authorization and privilege
escalation checks. NOTE: _This is subject to change with future releases._
```
--authorization-rbac-super-user=admin
```
Once set the specified super user, in this case "admin", can be used to create
the roles and role bindings to initialize the system.
This flag is optional and once the initial bootstrapping is performed can be
unset.
## Webhook Mode
When specified, mode `Webhook` causes Kubernetes to query an outside REST

View File

@ -95,13 +95,13 @@ If you are using GCE then you can either enable it while creating a cluster with
To configure cluster autoscaler you have to set 3 environment variables:
* `KUBE_ENABLE_CLUSTER_AUTOSCALER` - it enables cluster autoscaler if set to true.
* `KUBE_AUTOSCALING_MIN_NODES` - minimum number of nodes in the cluster.
* `KUBE_AUTOSCALER_MIN_NODES` - minimum number of nodes in the cluster.
* `KUBE_AUTOSCALING_MAX_NODES` - maximum number of nodes in the cluster.
* `KUBE_AUTOSCALER_MAX_NODES` - maximum number of nodes in the cluster.
Example:
```shell
KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALING_MIN_NODES=3 KUBE_AUTOSCALING_MAX_NODES=10 NUM_NODES=5 ./cluster/kube-up.sh
KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALER_MIN_NODES=3 KUBE_AUTOSCALER_MAX_NODES=10 NUM_NODES=5 ./cluster/kube-up.sh
```
On GKE you configure cluster autoscaler either on cluster creation or update or when creating a particular node pool

View File

@ -124,7 +124,7 @@ With v1.3, the following annotations are deprecated: `pod.beta.kubernetes.io/hos
## How do I test if it is working?
### Create a simple Pod to use as a test environment.
### Create a simple Pod to use as a test environment
Create a file named busybox.yaml with the
following contents:
@ -152,7 +152,7 @@ Then create a pod using this file:
kubectl create -f busybox.yaml
```
### Wait for this pod to go into the running state.
### Wait for this pod to go into the running state
You can get its status with:
```
@ -160,12 +160,13 @@ kubectl get pods busybox
```
You should see:
```
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>
```
### Validate DNS works
### Validate that DNS is working
Once that pod is running, you can exec nslookup in that environment:
@ -185,6 +186,115 @@ Address 1: 10.0.0.1
If you see that, DNS is working correctly.
### Troubleshooting Tips
If the nslookup command fails, check the following:
#### Check the local DNS configuration first
Take a look inside the resolv.conf file. (See "Inheriting DNS from the node" and "Known issues" below for more information)
```
cat /etc/resolv.conf
```
Verify that the search path and name server are set up like the following (note that the search path may vary for different cloud providers):
```
search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
nameserver 10.0.0.10
options ndots:5
```
#### Quick diagnosis
Errors such as the following indicate a problem with the kube-dns add-on or associated Services:
```
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'kubernetes.default'
```
or
```
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes.default'
```
#### Check if the DNS pod is running
Use the kubectl get pods command to verify that the DNS pod is running.
```
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```
You should see something like:
```
NAME READY STATUS RESTARTS AGE
...
kube-dns-v19-ezo1y 3/3 Running 0 1h
...
```
If you see that no pod is running or that the pod has failed/completed, the dns add-on may not be deployed by default in your current environment and you will have to deploy it manually.
#### Check for Errors in the DNS pod
Use `kubectl logs` command to see logs for the DNS daemons.
```
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c healthz
```
Check for any suspicious log entries. A W, E, or F at the beginning of a line indicates Warning, Error, or Failure, respectively. Please search for entries with these logging levels and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors.
#### Is the DNS service up?
Verify that the DNS service is up by using the `kubectl get service` command.
```
kubectl get svc --namespace=kube-system
```
You should see:
```
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
...
```
If you have created the service, or if it should have been created by default but does not appear, see this [debugging services page](http://kubernetes.io/docs/user-guide/debugging-services/) for more information.
#### Are DNS endpoints exposed?
You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` command.
```
kubectl get ep kube-dns --namespace=kube-system
```
You should see something like:
```
NAME ENDPOINTS AGE
kube-dns 10.180.3.17:53,10.180.3.17:53 1h
```
If you do not see the endpoints, see the endpoints section in the [debugging services documentation](http://kubernetes.io/docs/user-guide/debugging-services/).
For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/kubernetes/tree/master/examples/cluster-dns) in the Kubernetes GitHub repository.
## Kubernetes Federation (Multiple Zone support)
Release 1.3 introduced Cluster Federation support for multi-site
@ -213,8 +323,36 @@ the flag `--cluster-domain=<default local domain>`
The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library)
supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records).
## Inheriting DNS from the node
When running a pod, kubelet will prepend the cluster DNS server and search
paths to the node's own DNS settings. If the node is able to resolve DNS names
specific to the larger environment, pods should be able to, also. See "Known
issues" below for a caveat.
If you don't want this, or if you want a different DNS config for pods, you can
use the kubelet's `--resolv-conf` flag. Setting it to "" means that pods will
not inherit DNS. Setting it to a valid file path means that kubelet will use
this file instead of `/etc/resolv.conf` for DNS inheritance.
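For example (a hedged sketch; the file path is hypothetical, and the many other kubelet flags a real deployment needs are omitted), the flag could be used like this:

```shell
# Use a custom file instead of the node's /etc/resolv.conf for pod DNS inheritance
kubelet --resolv-conf=/etc/kubernetes/pod-resolv.conf

# Or disable inheriting DNS settings from the node entirely
kubelet --resolv-conf=""
```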
## Known issues
Kubernetes installs do not configure the nodes' resolv.conf files to use the
cluster DNS by default, because that process is inherently distro-specific.
This should probably be implemented eventually.
Linux's libc is impossibly stuck ([see this bug from
2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just
3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to
consume 1 `nameserver` record and 3 `search` records. This means that if a
local installation already uses 3 `nameserver`s or uses more than 3 `search`es,
some of those settings will be lost. As a partial workaround, the node can run
`dnsmasq` which will provide more `nameserver` entries, but not more `search`
entries. You can also use kubelet's `--resolv-conf` flag.
If you are using Alpine version 3.3 or earlier as your base image, dns may not
work properly owing to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
for more information.
## References
- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md)
- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/build-tools/kube-dns/README.md)

View File

@ -100,16 +100,15 @@ for `${NODE_IP}` on each machine.
#### Validating your cluster
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate on master with
```shell
etcdctl member list
kubectl exec < pod_name > etcdctl member list
```
and
```shell
etcdctl cluster-health
kubectl exec < pod_name > etcdctl cluster-health
```
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`

View File

@ -36,15 +36,21 @@ Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e if you have a
### CNI
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads a file from `--cni-conf-dir` (default `/etc/cni/net.d`) and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md), and any required CNI plugins referenced by the configuration must be present in `--cni-bin-dir` (default `/opt/cni/bin`).
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads a file from `--cni-conf-dir` (default `/etc/cni/net.d`) and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), and any required CNI plugins referenced by the configuration must be present in `--cni-bin-dir` (default `/opt/cni/bin`).
If there are multiple CNI configuration files in the directory, the first one in lexicographic order of file name is used.
In addition to the CNI plugin specified by the configuration file, Kubernetes requires the standard CNI `lo` plugin, at minimum version 0.2.0
In addition to the CNI plugin specified by the configuration file, Kubernetes requires the standard CNI [`lo`](https://github.com/containernetworking/cni/blob/master/plugins/main/loopback/loopback.go) plugin, at minimum version 0.2.0
Limitation: Due to [#31307](https://github.com/kubernetes/kubernetes/issues/31307), `HostPort` won't work with CNI networking plugin at the moment. That means all `hostPort` attribute in pod would be simply ignored.
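As a rough sketch (the file name, network name, subnet, and bridge name below are made up; consult your chosen plugin's documentation for the authoritative configuration), a minimal CNI configuration placed in the default `--cni-conf-dir` might look like this:

```shell
# Hypothetical example: define a simple bridge network using the standard
# "bridge" and "host-local" CNI plugins (which must exist in --cni-bin-dir).
cat <<EOF > /etc/cni/net.d/10-mynet.conf
{
  "cniVersion": "0.2.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
EOF
```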
### kubenet
The Linux-only kubenet plugin provides functionality similar to the `--configure-cbr0` kubelet command-line option. It creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node either through configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host. The kubenet plugin is currently mutually exclusive with, and will eventually replace, the --configure-cbr0 option. It is also currently incompatible with the flannel experimental overlay.
Kubenet is a very basic, simple network plugin, on Linux only. It does not, of itself, implement more advanced features like cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.
Kubenet creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node either through configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host.
The kubenet plugin is mutually exclusive with the --configure-cbr0 option.
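A hedged example of selecting the plugin (the CIDR value is illustrative, and the other kubelet flags a real deployment needs are omitted):

```shell
# Run the kubelet with the kubenet plugin; the pod IP range is typically
# allocated by the controller-manager rather than configured by hand.
kubelet --network-plugin=kubenet --non-masquerade-cidr=10.0.0.0/8
```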
The plugin requires a few things:

View File

@ -1,4 +1,4 @@
---
assignees:
- lavalamp - lavalamp
- thockin
@ -100,8 +100,19 @@ existence or non-existence of host ports.
There are a number of ways that this network model can be implemented. This
document is not an exhaustive study of the various methods, but hopefully serves
as an introduction to various technologies and serves as a jumping-off point.
If some techniques become vastly preferable to others, we might detail them more
here.
The following networking options are sorted alphabetically - the order does not
imply any preferential status.
### Contiv
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced.
### Flannel
[Flannel](https://github.com/coreos/flannel#flannel) is a very simple overlay
network that satisfies the Kubernetes requirements. Many
people have reported success with Flannel and Kubernetes.
### Google Compute Engine (GCE)
@ -158,32 +169,15 @@ Follow the "With Linux Bridge devices" section of [this very nice
tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
Lars Kellogg-Stedman.
### Weave Net from Weaveworks
[Weave Net](https://www.weave.works/products/weave-net/) is a
resilient and simple to use network for Kubernetes and its hosted applications.
Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/)
or stand-alone. In either version, it doesn't require any configuration or extra code
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
### Flannel
[Flannel](https://github.com/coreos/flannel#flannel) is a very simple overlay
network that satisfies the Kubernetes requirements. It installs in minutes and
should get you up and running if the above techniques are not working. Many
people have reported success with Flannel and Kubernetes.
### OpenVSwitch
[OpenVSwitch](/docs/admin/ovs-networking) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.
### Project Calico
[Project Calico](https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/README.md) is an open source container networking provider and network policy engine.
[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
@ -193,9 +187,13 @@ Calico can also be run in policy enforcement mode in conjunction with other netw
[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/user-guide/networkpolicies/) to provide isolation across network namespaces.
### Contiv
### Weave Net from Weaveworks
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced.
[Weave Net](https://www.weave.works/products/weave-net/) is a
resilient and simple to use network for Kubernetes and its hosted applications.
Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/)
or stand-alone. In either version, it doesn't require any configuration or extra code
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
## Other reading

View File

@ -27,7 +27,7 @@ pieces of information:
The usage of these fields varies depending on your cloud provider or bare metal configuration.
* HostName: Generally not used
* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet `--hostname-override` parameter.
* ExternalIP: Generally the IP address of the node that is externally routable (available from outside the cluster)
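As a quick, hedged check (the node name below is made up), you can see which address types a node reports with:

```shell
# Show the addresses recorded on the node object (Hostname, InternalIP, ExternalIP)
kubectl describe node my-node-1 | grep -A 3 Addresses
```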

View File

@ -33,7 +33,7 @@ cd kubernetes
make release
```
For more details on the release process see the [`build/` directory](http://releases.k8s.io/{{page.githubbranch}}/build/)
For more details on the release process see the [`build/`](http://releases.k8s.io/{{page.githubbranch}}/build/) directory
### Download Kubernetes and automatically set up a default cluster
@ -57,4 +57,4 @@ Possible values for `YOUR_PROVIDER` include:
* `vsphere` - VMWare VSphere
* `rackspace` - Rackspace
For the complete, up-to-date list of providers supported by this script, see [the `/cluster` folder in the main Kubernetes repo](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster), where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/getting-started-guides); there's a good chance we have docs for them.
For the complete, up-to-date list of providers supported by this script, see the [`/cluster`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster) folder in the main Kubernetes repo, where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/getting-started-guides); there's a good chance we have docs for them.

View File

@ -10,7 +10,7 @@ CloudStack is a software to build public and private clouds based on hardware vi
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
This is a completely automated, a single playbook deploys Kubernetes based on the coreOS [instructions](/docs/getting-started-guides/coreos/coreos_multinode_cluster).
This is completely automated, a single playbook deploys Kubernetes.
This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init.

View File

@ -1,209 +0,0 @@
---
---
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
## Prerequisites and Assumptions
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
- 1 Kubernetes Master
- 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
## Cloud-config
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
We'll use two cloud-config files:
- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node
## Download CoreOS
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
3. Replace the following variables in the `master-config.yaml` file.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
### Configure TLS
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem` and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
2. Send the three files to your master host (using `scp` for example).
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set Permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
4. Restart the kubelet to pick up the changes:
```shell
sudo systemctl restart kubelet
```
## Configure the compute nodes
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
4. Replace the following placeholders with the contents of their respective files.
- `<CA_CERT>`: Complete contents of `ca.pem`
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> <CA_CERT>
> ```
>
> should look like this once the certificate is in place:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> -----BEGIN CERTIFICATE-----
> MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
> ...<snip>...
> QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
> -----END CERTIFICATE-----
> ```
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
## Configure Kubeconfig
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
```shell
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
Check your work with `kubectl get nodes`.
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -1,197 +0,0 @@
---
assignees:
- dchen1107
---
Use the [master.yaml](/docs/getting-started-guides/coreos/cloud-configs/master.yaml) and [node.yaml](/docs/getting-started-guides/coreos/cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster.
> **Attention**: This requires at least CoreOS version **[695.0.0][coreos695]**, which includes `etcd2`.
[coreos695]: https://coreos.com/releases/#695.0.0
* TOC
{:toc}
### AWS
*Attention:* Replace `<ami_image_id>` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
#### Provision the Master
```shell
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
```shell
aws ec2 run-instances \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://master.yaml
```
#### Capture the private IP address
```shell
aws ec2 describe-instances --instance-id <master-instance-id>
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
#### Provision worker nodes
```shell
aws ec2 run-instances \
--count 1 \
--image-id <ami_image_id> \
--key-name <keypair> \
--region us-west-2 \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://node.yaml
```
### Google Compute Engine (GCE)
*Attention:* Replace `<gce_image_id>` below for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
#### Provision the Master
```shell
gcloud compute instances create master \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=master.yaml
```
#### Capture the private IP address
```shell
gcloud compute instances list
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
#### Provision worker nodes
```shell
gcloud compute instances create node1 \
--image-project coreos-cloud \
--image <gce_image_id> \
--boot-disk-size 200GB \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=node.yaml
```
#### Establish network connectivity
Next, setup an ssh tunnel to the master so you can run kubectl from your local host.
In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second
run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
### OpenStack
These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard.
These instructions were tested on the Ice House release on a Metacloud distribution of OpenStack but should be similar if not the same across other versions/distributions of OpenStack.
#### Make sure you can connect with OpenStack
Make sure the environment variables are set for OpenStack such as:
```shell
OS_TENANT_ID
OS_PASSWORD
OS_AUTH_URL
OS_USERNAME
OS_TENANT_NAME
```
Test this works with something like:
```shell
nova list
```
#### Get a Suitable CoreOS Image
You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html)
Once you download that, upload it to glance. An example is shown below:
```shell
glance image-create --name CoreOS723 \
--container-format bare --disk-format qcow2 \
--file coreos_production_openstack_image.img \
--is-public True
```
#### Create security group
```shell
nova secgroup-create kubernetes "Kubernetes Security Group"
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
```
#### Provision the Master
```shell
nova boot \
--image <image_name> \
--key-name <my_key> \
--flavor <flavor_id> \
--security-group kubernetes \
--user-data files/master.yaml \
kube-master
```
`<image_name>` is the CoreOS image name. In this example we use the image created in the previous step, 'CoreOS723'.
`<my_key>` is the name of the keypair you already generated to access the instance.
`<flavor_id>` is the flavor ID you use to size the instance. Run `nova flavor-list` to get the IDs. On the system this was tested with, ID 3 gives the m1.large size.
The important part is to ensure you pass `files/master.yaml`, as this is what does all of the post-boot configuration. The path is relative, so this example assumes you run the `nova` command from a directory containing a `files` subdirectory that holds `master.yaml`. Absolute paths also work.
Next, assign it a public IP address:
```shell
nova floating-ip-list
```
Get an IP address that's free and run:
```shell
nova floating-ip-associate kube-master <ip address>
```
where `<ip address>` is the IP address that was available from the `nova floating-ip-list` command.
#### Provision Worker Nodes
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node. You can get this by running `nova show kube-master`, assuming you named your instance kube-master. This is not the floating IP address you just assigned to it.
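For example, the private address usually appears in the network rows of `nova show` output (field names vary by deployment), and the same `sed` substitution shown in the AWS section applies here:

```shell
nova show kube-master | grep -i network
```

Then boot the worker node: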
```shell
nova boot \
--image <image_name> \
--key-name <my_key> \
--flavor <flavor_id> \
--security-group kubernetes \
--user-data files/node.yaml \
minion01
```
This is essentially the same as provisioning the master, but uses the `node.yaml` cloud-config for post-boot configuration instead of `master.yaml`.


@ -41,12 +41,6 @@ A generic guide to setting up an HA cluster on any cloud or bare metal, with ful
These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS. These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS.
[**Multi-node Cluster**](/docs/getting-started-guides/coreos/coreos_multinode_cluster)
Set up a single master, multi-worker cluster on your choice of platform: AWS, GCE, or VMware Fusion.
<hr/>
[**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md) [**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
Scripted installation of a single master, multi-worker cluster on GCE. Kubernetes components are managed by [fleet](https://github.com/coreos/fleet). Scripted installation of a single master, multi-worker cluster on GCE. Kubernetes components are managed by [fleet](https://github.com/coreos/fleet).


@ -43,6 +43,8 @@ clusters.
[KCluster.io](https://kcluster.io) provides highly available and scalable managed Kubernetes clusters for AWS. [KCluster.io](https://kcluster.io) provides highly available and scalable managed Kubernetes clusters for AWS.
[Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or any public cloud, and provides 24/7 health monitoring and alerting.
### Turn-key Cloud Solutions ### Turn-key Cloud Solutions
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
@ -74,8 +76,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [AWS + CoreOS](/docs/getting-started-guides/coreos) - [AWS + CoreOS](/docs/getting-started-guides/coreos)
- [GCE + CoreOS](/docs/getting-started-guides/coreos) - [GCE + CoreOS](/docs/getting-started-guides/coreos)
- [AWS + Ubuntu](/docs/getting-started-guides/juju) - [AWS/GCE/Rackspace/Joyent + Ubuntu](/docs/getting-started-guides/ubuntu/automated)
- [Joyent + Ubuntu](/docs/getting-started-guides/juju)
- [Rackspace + CoreOS](/docs/getting-started-guides/rackspace) - [Rackspace + CoreOS](/docs/getting-started-guides/rackspace)
#### On-Premises VMs #### On-Premises VMs
@ -84,7 +85,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [CloudStack](/docs/getting-started-guides/cloudstack) (uses Ansible, CoreOS and flannel) - [CloudStack](/docs/getting-started-guides/cloudstack) (uses Ansible, CoreOS and flannel)
- [Vmware vSphere](/docs/getting-started-guides/vsphere) (uses Debian) - [Vmware vSphere](/docs/getting-started-guides/vsphere) (uses Debian)
- [Vmware Photon Controller](/docs/getting-started-guides/photon-controller) (uses Debian) - [Vmware Photon Controller](/docs/getting-started-guides/photon-controller) (uses Debian)
- [juju.md](/docs/getting-started-guides/juju) (uses Juju, Ubuntu and flannel) - [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/automated) (uses Juju, Ubuntu and flannel)
- [Vmware](/docs/getting-started-guides/coreos) (uses CoreOS and flannel) - [Vmware](/docs/getting-started-guides/coreos) (uses CoreOS and flannel)
- [libvirt-coreos.md](/docs/getting-started-guides/libvirt-coreos) (uses CoreOS) - [libvirt-coreos.md](/docs/getting-started-guides/libvirt-coreos) (uses CoreOS)
- [oVirt](/docs/getting-started-guides/ovirt) - [oVirt](/docs/getting-started-guides/ovirt)
@ -99,7 +100,8 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [Fedora single node](/docs/getting-started-guides/fedora/fedora_manual_config) - [Fedora single node](/docs/getting-started-guides/fedora/fedora_manual_config)
- [Fedora multi node](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) - [Fedora multi node](/docs/getting-started-guides/fedora/flannel_multi_node_cluster)
- [Centos](/docs/getting-started-guides/centos/centos_manual_config) - [Centos](/docs/getting-started-guides/centos/centos_manual_config)
- [Ubuntu](/docs/getting-started-guides/ubuntu) - [Bare Metal with Ubuntu](/docs/getting-started-guides/ubuntu/automated)
- [Ubuntu Manual](/docs/getting-started-guides/ubuntu/manual)
- [Docker Multi Node](/docs/getting-started-guides/docker-multinode) - [Docker Multi Node](/docs/getting-started-guides/docker-multinode)
- [CoreOS](/docs/getting-started-guides/coreos) - [CoreOS](/docs/getting-started-guides/coreos)
@ -123,6 +125,7 @@ GKE | | | GCE | [docs](https://clou
Stackpoint.io | | multi-support | multi-support | [docs](http://www.stackpointcloud.com) | | Commercial Stackpoint.io | | multi-support | multi-support | [docs](http://www.stackpointcloud.com) | | Commercial
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | | Commercial AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | | Commercial
KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/products/kubernetes/) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | ['œ“][1] | Project GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | ['œ“][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens)) Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens))
@ -140,17 +143,17 @@ AWS | CoreOS | CoreOS | flannel | [docs](/docs/gettin
GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires)) GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires))
Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | | Community ([@jeffbean](https://github.com/jeffbean)) Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | | Community ([@jeffbean](https://github.com/jeffbean))
Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | | Community ([@runseb](https://github.com/runseb)) CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | | Community ([@runseb](https://github.com/runseb))
Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin)) Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin))
Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | | Community ([@alainroy](https://github.com/alainroy)) Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | | Community ([@alainroy](https://github.com/alainroy))
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws) | | Community ([@justinsb](https://github.com/justinsb)) AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws) | | Community ([@justinsb](https://github.com/justinsb))
AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb)) AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
Bare-metal | custom | Ubuntu | Calico | [docs](/docs/getting-started-guides/ubuntu-calico) | | Community ([@djosborne](https://github.com/djosborne))
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A)) libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A))
oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z)) oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z))
@ -161,7 +164,7 @@ any | any | any | any | [docs](/docs/gettin
*Note*: The above table is ordered by version test/used in notes followed by support level. *Note*: The above table is ordered by version test/used in notes followed by support level.
Definition of columns: Definition of columns
- **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on. - **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on.
- **OS** is the base operating system of the nodes. - **OS** is the base operating system of the nodes.


@ -1,325 +0,0 @@
---
assignees:
- caesarxuchao
- erictune
---
[Juju](https://jujucharms.com/docs/2.0/about-juju) encapsulates the
operational knowledge of provisioning, installing, and securing a Kubernetes
cluster into one step. Juju allows you to deploy a Kubernetes cluster on
different cloud providers with a consistent, repeatable user experience.
Once deployed, the cluster can easily scale up with one command.
The Juju Kubernetes work is curated by a dedicated team of community members;
let us know how we are doing. If you find any problems, please open an
[issue on the kubernetes project](https://github.com/kubernetes/kubernetes/issues)
and tag the issue with "juju" so we can find them.
* TOC
{:toc}
## Prerequisites
> Note: If you're running kube-up on Ubuntu, all of the dependencies
> will be handled for you. You may safely skip to the section:
> [Launch a Kubernetes Cluster](#launch-a-kubernetes-cluster)
### On Ubuntu
[Install the Juju client](https://jujucharms.com/docs/2.0/getting-started-general)
> This documentation focuses on the Juju 2.0 release which will be
> promoted to stable during the April 2016 release cycle.
To paraphrase, on your local Ubuntu system:
```shell
sudo add-apt-repository ppa:juju/devel
sudo apt-get update
sudo apt-get install juju
```
If you are using another distro or platform, please consult the
[getting started guide](https://jujucharms.com/docs/2.0/getting-started-general)
to install the Juju dependencies for your platform.
### With Docker
If you prefer the isolation of Docker, you can run the Juju client in a
container. Create a local directory to store the Juju configuration, then
volume mount the container:
```shell
mkdir -p $HOME/.local/share/juju
docker run --rm -ti \
-v $HOME/.local/share/juju:/home/ubuntu/.local/share/juju \
jujusolutions/charmbox:devel
```
> While this is a common target, the charmbox flavors of images are
> unofficial, and should be treated as experimental. If you encounter any issues
> turning up the Kubernetes cluster with charmbox, please file a bug on the
> [charmbox issue tracker](https://github.com/juju-solutions/charmbox/issues).
### Configure Juju to your favorite cloud provider
At this point you have access to the Juju client. Before you can deploy a
cluster you have to configure Juju with the
[cloud credentials](https://jujucharms.com/docs/2.0/credentials) for each
cloud provider you would like to use.
Juju [supports a wide variety of public clouds](#cloud-compatibility). To set
up the credentials for your chosen cloud, see the
[cloud setup page](https://jujucharms.com/docs/devel/getting-started-general#2.-choose-a-cloud).
After configuration is complete, test your setup with a `juju bootstrap`
command: `juju bootstrap $controllername $cloudtype`. You are then ready to launch
the Kubernetes cluster.
## Launch a Kubernetes cluster
You can deploy a Kubernetes cluster with Juju from the `kubernetes` directory of
the [kubernetes github project](https://github.com/kubernetes/kubernetes.git).
Clone the repository on your local system. Export the `KUBERNETES_PROVIDER`
environment variable before bringing up the cluster.
```shell
cd kubernetes
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
```
If this is your first time running the `kube-up.sh` script, it will attempt to
install the required dependencies to get started with Juju.
The script will deploy two nodes of Kubernetes, 1 unit of etcd, and network
the units so containers on different hosts can communicate with each other.
## Exploring the cluster
The `juju status` command provides information about each unit in the cluster:
```shell
$ juju status
MODEL CONTROLLER CLOUD/REGION VERSION
default windows azure/centralus 2.0-beta13
APP VERSION STATUS EXPOSED ORIGIN CHARM REV OS
etcd active false jujucharms etcd 3 ubuntu
kubernetes active true jujucharms kubernetes 5 ubuntu
RELATION PROVIDES CONSUMES TYPE
cluster etcd etcd peer
etcd etcd kubernetes regular
certificates kubernetes kubernetes peer
UNIT WORKLOAD AGENT MACHINE PORTS PUBLIC-ADDRESS MESSAGE
etcd/0 active idle 0 2379/tcp 13.67.217.11 (leader) cluster is healthy
kubernetes/0 active idle 1 8088/tcp 13.67.219.76 Kubernetes running.
kubernetes/1 active idle 2 6443/tcp 13.67.219.182 (master) Kubernetes running.
MACHINE STATE DNS INS-ID SERIES AZ
0 started 13.67.217.11 machine-0 trusty
1 started 13.67.219.76 machine-1 trusty
2 started 13.67.219.182 machine-2 trusty
```
## Run some containers!
The `kubectl` binary, the TLS certificates, and the configuration are
all available on the Kubernetes master unit. Fetch the kubectl package so you
can run commands against the new Kubernetes cluster.
Use the `juju status` command to figure out which unit is the master. In the
example above the "kubernetes/1" unit is the master. Use the `juju scp`
command to copy the file from the unit:
```shell
juju scp kubernetes/1:kubectl_package.tar.gz .
tar xvfz kubectl_package.tar.gz
./kubectl --kubeconfig kubeconfig get pods
```
If you are not on a Linux amd64 host system, you will need to find or build a
kubectl binary package for your architecture.
Copy the `kubeconfig` file to the home directory so you don't have to specify
it on the command line each time. The default location is
`${HOME}/.kube/config`.
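A minimal sketch, assuming the kubeconfig file extracted above is named `kubeconfig` in the current directory:

```shell
mkdir -p ~/.kube
cp kubeconfig ~/.kube/config
```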
No pods will be available before starting a container:
```shell
kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
kubectl get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
```json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "hello",
"labels": {
"name": "hello",
"environment": "testing"
}
},
"spec": {
"containers": [{
"name": "hello",
"image": "quay.io/kelseyhightower/hello",
"ports": [{
"containerPort": 80,
"hostPort": 80
}]
}]
}
}
```
Create the pod with kubectl:
```shell
kubectl create -f pod.json
```
Get info on the pod:
```shell
kubectl get pods
```
To test the hello app, we need to locate which node is hosting
the container. We can use `juju run` and `juju status` commands to find
our hello app.
Exit out of our ssh session and run:
```shell
juju run --unit kubernetes/0 "docker ps -n=1"
...
juju run --unit kubernetes/1 "docker ps -n=1"
CONTAINER ID   IMAGE                                  COMMAND   CREATED             STATUS             PORTS   NAMES
02beb61339d8   quay.io/kelseyhightower/hello:latest   /hello    About an hour ago   Up About an hour           k8s_hello....
```
We see "kubernetes/1" has our container, expose the kubernetes charm and open
port 80:
```shell
juju run --unit kubernetes/1 "open-port 80"
juju expose kubernetes
sudo apt-get install curl
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
```
Finally delete the pod:
```shell
juju ssh kubernetes/0
kubectl delete pods hello
```
## Scale up cluster
Want larger Kubernetes nodes? It is easy to request different sizes of cloud
resources from Juju by using **constraints**. You can increase the amount of
CPU or memory (RAM) in any of the systems requested by Juju. This allows you
to fine-tune the Kubernetes cluster to fit your workload. Use flags on the
bootstrap command or a separate `juju constraints` command. See the
[Juju documentation on machine constraints](https://jujucharms.com/docs/2.0/charms-constraints)
for details.
## Scale out cluster
Need more workers? Juju makes it easy to add units of a charm:
```shell
juju add-unit kubernetes
```
Or multiple units at one time:
```shell
juju add-unit -n3 kubernetes
```
You can also scale the etcd charm for more fault tolerant key/value storage:
```shell
juju add-unit -n2 etcd
```
## Tear down cluster
We recommend that you use the `kube-down.sh` script when you are done using
the cluster, as it properly brings down the cloud and removes some of the
build directories.
```shell
./cluster/kube-down.sh
```
Alternatively, if you want to stop the servers, you can destroy the Juju model or the
controller. Use the `juju switch` command to get the current controller name:
```shell
juju switch
juju destroy-controller $controllername --destroy-all-models
```
## More Info
Juju works with charms and bundles to deploy solutions. The code that stands up
a Kubernetes cluster lives in the charm code. The charm is built using
a layered approach to keep the code smaller and more focused on the operations
of Kubernetes.
The Kubernetes layer and bundles can be found in the `kubernetes`
project on github.com:
- [Bundle location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/bundles)
- [Kubernetes charm layer location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes)
- [More about Juju](https://jujucharms.com)
### Cloud compatibility
Juju is cloud agnostic and gives you a consistent experience across different
cloud providers. Juju supports a variety of public cloud providers: [Amazon Web Service](https://jujucharms.com/docs/2.0/help-aws),
[Microsoft Azure](https://jujucharms.com/docs/2.0/help-azure),
[Google Compute Engine](https://jujucharms.com/docs/2.0/help-google),
[Joyent](https://jujucharms.com/docs/2.0/help-joyent),
[Rackspace](https://jujucharms.com/docs/2.0/help-rackspace), any
[OpenStack cloud](https://jujucharms.com/docs/2.0/clouds#specifying-additional-clouds),
and
[Vmware vSphere](https://jujucharms.com/docs/2.0/config-vmware).
If you do not see your favorite cloud provider listed, many clouds with SSH
access can be configured for
[manual provisioning](https://jujucharms.com/docs/2.0/clouds-manual).
To change to a different cloud, use the `juju switch` command, set
up the credentials for that cloud provider, and continue to use the `kube-up.sh`
script.
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Amazon Web Services (AWS) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
OpenStack | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Google Compute Engine (GCE) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -26,6 +26,12 @@ a building block. kops builds on the kubeadm work.
### (1/5) Install kops ### (1/5) Install kops
#### Requirements
You must have [kubectl](http://kubernetes.io/docs/getting-started-guides/kubectl/) installed in order for kops to work.
#### Installation
Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source): Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source):
On MacOS: On MacOS:


@ -0,0 +1,110 @@
---
---
<style>
li>.highlighter-rouge {position:relative; top:3px;}
</style>
## Overview
kubectl is the command line tool you use to interact with Kubernetes clusters.
You should use a version of kubectl that is at least as new as your server.
`kubectl version` will print the server and client versions. Using the same version of kubectl
as your server naturally works; using a newer kubectl than your server also works; but if you use
an older kubectl with a newer server, you may see odd validation errors.
## Download a release
Download kubectl from the [official Kubernetes releases](https://console.cloud.google.com/storage/browser/kubernetes-release/release/):
On MacOS:
```shell
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/darwin/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
```
On Linux:
```shell
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
```
You may need `sudo` for the `mv`. You can put the binary anywhere on your `PATH`; some people prefer to install to `~/bin`.
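A minimal sketch of the no-`sudo` alternative, assuming you want the binary in `~/bin`:

```shell
mkdir -p ~/bin
mv kubectl ~/bin/kubectl
export PATH="$HOME/bin:$PATH"   # add this to your shell profile to make it permanent
```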
## Alternatives
### Download as part of the Google Cloud SDK
kubectl can be installed as part of the Google Cloud SDK:
First install the [Google Cloud SDK](https://cloud.google.com/sdk/).
After Google Cloud SDK installs, run the following command to install `kubectl`:
```shell
gcloud components install kubectl
```
Do check that the version is sufficiently up-to-date using `kubectl version`.
### Install with brew
If you are on MacOS and using brew, you can install with:
```shell
brew install kubectl
```
The Homebrew project is independent of Kubernetes, so do check that the version is
sufficiently up-to-date using `kubectl version`.
## Enabling shell autocompletion
kubectl includes autocompletion support, which can save a lot of typing!
The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile.
Common examples are provided here; for more details, please consult `kubectl completion -h`.
### On Linux, using bash
To add it to your current shell: `source <(kubectl completion bash)`
To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
```shell
echo "source <(kubectl completion bash)" >> ~/.bashrc
```
### On MacOS, using bash
On MacOS, you will need to install the bash-completion support first:
```shell
brew install bash-completion
```
To add it to your current shell:
```shell
source $(brew --prefix)/etc/bash_completion
source <(kubectl completion bash)
```
To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
```shell
echo "source $(brew --prefix)/etc/bash_completion" >> ~/.bash_profile
echo "source <(kubectl completion bash)" >> ~/.bash_profile
```
Please note that this only appears to work currently if you install using `brew install kubectl`,
and not if you downloaded kubectl directly.


@ -121,7 +121,7 @@ setfacl -m g:kvm:--x ~
By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation. By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
There is both an automated way and a manual, customizable way of setting up libvert Kubernetes clusters on CoreOS. There is both an automated way and a manual, customizable way of setting up libvirt Kubernetes clusters on CoreOS.
#### Automated setup #### Automated setup


@ -67,21 +67,23 @@ to run commands against the cluster.
```shell ```shell
# linux/amd64 # linux/amd64
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/386 # linux/386
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/arm # linux/arm
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/arm64 # linux/arm64
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
#linux/ppc64le #linux/ppc64le
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# OS X/amd64 # OS X/amd64
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# OS X/386 # OS X/386
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
``` ```
For Windows, download [kubectl.exe](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe) and save it to a location on your PATH.
The generic download path is: The generic download path is:
``` ```
https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY} https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}


@ -12,7 +12,7 @@ export KUBE_NODE_OS_DISTRIBUTION=debian
curl -sS https://get.k8s.io | bash curl -sS https://get.k8s.io | bash
``` ```
See the [Calico documentation](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes#getting-started) for more options to deploy Calico with Kubernetes. See the [Calico documentation](http://docs.projectcalico.org/) for more options to deploy Calico with Kubernetes.
Once your cluster using Calico is running, you should see a collection of pods running in the `kube-system` Namespace that support Kubernetes NetworkPolicy. Once your cluster using Calico is running, you should see a collection of pods running in the `kube-system` Namespace that support Kubernetes NetworkPolicy.


@ -59,7 +59,7 @@ Under rktnetes, `kubectl get logs` currently cannot get logs from applications t
## Init containers ## Init containers
The alpha [init container](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/container-init.md) feature is currently not supported. The beta [init container](/docs/user-guide/pods/init-containers.md) feature is currently not supported.
## Container restart back-off ## Container restart back-off


@ -81,12 +81,12 @@ to implement one of the above options:
- **Use a network plugin which is called by Kubernetes** - **Use a network plugin which is called by Kubernetes**
- Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface. - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
- There are a number of solutions which provide plugins for Kubernetes: - There are a number of solutions which provide plugins for Kubernetes (listed alphabetically):
- [Calico](http://docs.projectcalico.org/)
- [Flannel](https://github.com/coreos/flannel) - [Flannel](https://github.com/coreos/flannel)
- [Calico](https://github.com/projectcalico/calico-containers)
- [Weave](https://weave.works/)
- [Romana](http://romana.io/)
- [Open vSwitch (OVS)](http://openvswitch.org/) - [Open vSwitch (OVS)](http://openvswitch.org/)
- [Romana](http://romana.io/)
- [Weave](http://weave.works/)
- [More found here](/docs/admin/networking#how-to-achieve-this) - [More found here](/docs/admin/networking#how-to-achieve-this)
- You can also write your own. - You can also write your own.
- **Compile support directly into Kubernetes** - **Compile support directly into Kubernetes**


@ -0,0 +1,288 @@
---
assignees:
- caesarxuchao
- erictune
---
Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/), a pure upstream distribution of Kubernetes designed for production usage. Out of the box it comes with the following components on 12 machines:
- Kubernetes (automated deployment, operations, and scaling)
- Three node Kubernetes cluster with one master and two worker nodes.
- TLS used for communication between units for security.
- Flannel Software Defined Network (SDN) plugin
- A load balancer for HA kubernetes-master (Experimental)
- Optional Ingress Controller (on worker)
- Optional Dashboard addon (on master) including Heapster for cluster monitoring
- EasyRSA
- Performs the role of a certificate authority serving self signed certificates
to the requesting units of the cluster.
- Etcd (distributed key value store)
- Three unit cluster for reliability.
- Elastic stack
- Two units for ElasticSearch
- One unit for a Kibana dashboard
- Beats on every Kubernetes and etcd unit:
- Filebeat for forwarding logs to ElasticSearch
- Topbeat for inserting server monitoring data to ElasticSearch
The Juju Kubernetes work is curated by a dedicated team of community members;
let us know how we are doing. If you find any problems, please open an
[issue on our tracker](https://github.com/juju-solutions/bundle-canonical-kubernetes)
so we can find them.
* TOC
{:toc}
## Prerequisites
- A working [Juju client](https://jujucharms.com/docs/2.0/getting-started-general); this does not have to be a Linux machine, it can also be Windows or OSX.
- A [supported cloud](#cloud-compatibility).
### On Ubuntu
On your local Ubuntu system:
```shell
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju
```
If you are using another distro or platform, please consult the
[getting started guide](https://jujucharms.com/docs/2.0/getting-started-general)
to install the Juju dependencies for your platform.
### Configure Juju to your favorite cloud provider
Deployment of the cluster is [supported on a wide variety of public clouds](#cloud-compatibility), private OpenStack clouds, or raw bare metal clusters.
After deciding which cloud to deploy to, follow the [cloud setup page](https://jujucharms.com/docs/devel/getting-started-general#2.-choose-a-cloud) to configure deploying to that cloud.
Load your [cloud credentials](https://jujucharms.com/docs/2.0/credentials) for each
cloud provider you would like to use.
In this example:
```shell
juju add-credential aws
credential name: my_credentials
select auth-type [userpass, oauth, etc]: userpass
enter username: jorge
enter password: *******
```
You can also just auto load credentials for popular clouds with the `juju autoload-credentials` command, which will auto import your credentials from the default files and environment variables for each cloud.
Next we need to bootstrap a controller to manage the cluster. Specify the cloud you want to bootstrap on, the region, and optionally a name for your controller:
```shell
juju update-clouds # This command ensures all the latest regions are up to date on your client
juju bootstrap aws/us-east-2
```
or, another example, this time on Azure:
```shell
juju bootstrap azure/centralus
```
You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.0/controllers) for more information.
Note that each controller can host multiple Kubernetes clusters in a given cloud or region.
## Launch a Kubernetes cluster
The following command will deploy the initial 12-node starter cluster. The speed of execution depends heavily on the performance of the cloud you're deploying to:
```shell
juju deploy canonical-kubernetes
```
After this command executes, we need to wait for the cloud to provision the instances and for all the automated deployment tasks to complete.
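Each controller can host more than one cluster; additional clusters simply go into their own Juju model. A sketch, where `k8s-staging` is an arbitrary model name:

```shell
juju add-model k8s-staging
juju deploy canonical-kubernetes
```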
## Monitor deployment
The `juju status` command provides information about each unit in the cluster. We recommend using the `watch -c juju status --color` command to get a real-time view of the cluster as it deploys. When all the states are green and "Idle", the cluster is ready to go.
```shell
$ juju status
Model Controller Cloud/Region Version
default aws-us-east-2 aws/us-east-2 2.0.1
App Version Status Scale Charm Store Rev OS Notes
easyrsa 3.0.1 active 1 easyrsa jujucharms 3 ubuntu
elasticsearch active 2 elasticsearch jujucharms 19 ubuntu
etcd 2.2.5 active 3 etcd jujucharms 14 ubuntu
filebeat active 4 filebeat jujucharms 5 ubuntu
flannel 0.6.1 maintenance 4 flannel jujucharms 5 ubuntu
kibana active 1 kibana jujucharms 15 ubuntu
kubeapi-load-balancer 1.10.0 active 1 kubeapi-load-balancer jujucharms 3 ubuntu exposed
kubernetes-master 1.4.5 active 1 kubernetes-master jujucharms 6 ubuntu
kubernetes-worker 1.4.5 active 3 kubernetes-worker jujucharms 8 ubuntu exposed
topbeat active 3 topbeat jujucharms 5 ubuntu
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 52.15.95.92 Certificate Authority connected.
elasticsearch/0* active idle 1 52.15.67.111 9200/tcp Ready
elasticsearch/1 active idle 2 52.15.109.132 9200/tcp Ready
etcd/0 active idle 3 52.15.79.127 2379/tcp Healthy with 3 known peers.
etcd/1* active idle 4 52.15.111.66 2379/tcp Healthy with 3 known peers. (leader)
etcd/2 active idle 5 52.15.144.25 2379/tcp Healthy with 3 known peers.
kibana/0* active idle 6 52.15.57.157 80/tcp,9200/tcp ready
kubeapi-load-balancer/0* active idle 7 52.15.84.179 443/tcp Loadbalancer ready.
kubernetes-master/0* active idle 8 52.15.106.225 6443/tcp Kubernetes master services ready.
filebeat/3 active idle 52.15.106.225 Filebeat ready.
flannel/3 maintenance idle 52.15.106.225 Installing flannel.
kubernetes-worker/0* active idle 9 52.15.153.246 Kubernetes worker running.
filebeat/2 active idle 52.15.153.246 Filebeat ready.
flannel/2 active idle 52.15.153.246 Flannel subnet 10.1.53.1/24
topbeat/2 active idle 52.15.153.246 Topbeat ready.
kubernetes-worker/1 active idle 10 52.15.52.103 Kubernetes worker running.
filebeat/0* active idle 52.15.52.103 Filebeat ready.
flannel/0* active idle 52.15.52.103 Flannel subnet 10.1.31.1/24
topbeat/0* active idle 52.15.52.103 Topbeat ready.
kubernetes-worker/2 active idle 11 52.15.104.181 Kubernetes worker running.
filebeat/1 active idle 52.15.104.181 Filebeat ready.
flannel/1 active idle 52.15.104.181 Flannel subnet 10.1.83.1/24
topbeat/1 active idle 52.15.104.181 Topbeat ready.
Machine State DNS Inst id Series AZ
0 started 52.15.95.92 i-06e66414008eca61c xenial us-east-2c
1 started 52.15.67.111 i-050cbd7eb35fa0fe6 trusty us-east-2a
2 started 52.15.109.132 i-069196660db07c2f6 trusty us-east-2b
3 started 52.15.79.127 i-0038186d2c5103739 xenial us-east-2b
4 started 52.15.111.66 i-0ac66c86a8ec93b18 xenial us-east-2a
5 started 52.15.144.25 i-078cfe79313d598c9 xenial us-east-2c
6 started 52.15.57.157 i-09fd16d9328105ec0 trusty us-east-2a
7 started 52.15.84.179 i-00fd70321a51b658b xenial us-east-2c
8 started 52.15.106.225 i-0109a5fc942c53ed7 xenial us-east-2b
9 started 52.15.153.246 i-0ab63e34959cace8d xenial us-east-2b
10 started 52.15.52.103 i-0108a8cc0978954b5 xenial us-east-2a
11 started 52.15.104.181 i-0f5562571c649f0f2 xenial us-east-2c
```
## Interacting with the cluster
After the cluster is deployed, you can control it from any kubernetes-master or kubernetes-worker node.
First we need to download the credentials and client application to your local workstation:
Create the kubectl config directory.
```shell
mkdir -p ~/.kube
```
Copy the kubeconfig file to the default location.
```shell
juju scp kubernetes-master/0:config ~/.kube/config
```
Fetch a binary for the architecture you have deployed. If your client is a
different architecture you will need to get the appropriate `kubectl` binary
through other means.
```shell
juju scp kubernetes-master/0:kubectl ./kubectl
```
Query the cluster.
```shell
./kubectl cluster-info
Kubernetes master is running at https://52.15.104.227:443
Heapster is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
Congratulations, you've now set up a Kubernetes cluster!
## Scale up cluster
Want larger Kubernetes nodes? It is easy to request different sizes of cloud
resources from Juju by using **constraints**. You can increase the amount of
CPU or memory (RAM) in any of the systems requested by Juju. This allows you
to fine-tune the Kubernetes cluster to fit your workload. Use flags on the
bootstrap command or a separate `juju constraints` command. See the
[Juju documentation on machine constraints](https://jujucharms.com/docs/2.0/charms-constraints)
for details.
## Scale out cluster
Need more workers? We just add more units:
```shell
juju add-unit kubernetes-worker
```
Or multiple units at one time:
```shell
juju add-unit -n3 kubernetes-worker
```
You can also ask for specific instance types or other machine-specific constraints. See the [constraints documentation](https://jujucharms.com/docs/stable/reference-constraints) for more information. Here are some examples, note that generic constraints such as `cores` and `mem` are more portable between clouds. In this case we'll ask for a specific instance type from AWS:
```shell
juju set-constraints kubernetes-worker instance-type=c4.large
juju add-unit kubernetes-worker
```
You can also scale the etcd charm for more fault tolerant key/value storage:
```shell
juju add-unit -n3 etcd
```
It is strongly recommended to run an odd number of units for quorum.
## Tear down cluster
If you want to stop the servers, you can destroy the Juju model or the
controller. Use the `juju switch` command to get the current controller name:
```shell
juju switch
juju destroy-controller $controllername --destroy-all-models
```
This will shut down and terminate all running instances on that cloud.
## More Info
We stand up Kubernetes with open-source operations, or operations as code, known as charms. These charms are assembled from layers, which keeps the code smaller and more focused on the operations of just Kubernetes and its components.
The Kubernetes layer and bundles can be found in the `kubernetes`
project on github.com:
- [Bundle location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/bundles)
- [Kubernetes charm layer location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes)
- [Canonical Kubernetes home](https://jujucharms.com/canonical-kubernetes/)
Feature requests, bug reports, pull requests or any feedback would be much appreciated.
### Cloud compatibility
This deployment methodology is continually tested on the following clouds:
[Amazon Web Service](https://jujucharms.com/docs/2.0/help-aws),
[Microsoft Azure](https://jujucharms.com/docs/2.0/help-azure),
[Google Compute Engine](https://jujucharms.com/docs/2.0/help-google),
[Joyent](https://jujucharms.com/docs/2.0/help-joyent),
[Rackspace](https://jujucharms.com/docs/2.0/help-rackspace), any
[OpenStack cloud](https://jujucharms.com/docs/2.0/clouds#specifying-additional-clouds),
and
[Vmware vSphere](https://jujucharms.com/docs/2.0/config-vmware).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Amazon Web Services (AWS) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
OpenStack | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Google Compute Engine (GCE) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -145,7 +145,7 @@ docker stop hello_tutorial
Now that the image works as intended and is all tagged with your `$PROJECT_ID`, we can push it to the [Google Container Registry](https://cloud.google.com/tools/container-registry/), a private repository for your Docker images accessible from every Google Cloud project (but also from outside Google Cloud Platform) : Now that the image works as intended and is all tagged with your `$PROJECT_ID`, we can push it to the [Google Container Registry](https://cloud.google.com/tools/container-registry/), a private repository for your Docker images accessible from every Google Cloud project (but also from outside Google Cloud Platform) :
```shell ```shell
gcloud docker push gcr.io/$PROJECT_ID/hello-node:v1 gcloud docker -- push gcr.io/$PROJECT_ID/hello-node:v1
``` ```
If all goes well, you should be able to see the container image listed in the console: *Compute > Container Engine > Container Registry*. We now have a project-wide Docker image available which Kubernetes can access and orchestrate. If all goes well, you should be able to see the container image listed in the console: *Compute > Container Engine > Container Registry*. We now have a project-wide Docker image available which Kubernetes can access and orchestrate.


@ -5,9 +5,7 @@ assignees:
--- ---
<p>The Kubernetes documentation can help you set up Kubernetes, learn about the system, or get your applications and workloads running on Kubernetes.</p> <p>Kubernetes documentation can help you set up Kubernetes, learn about the system, or get your applications and workloads running on Kubernetes. To learn the basics of what Kubernetes is and how it works, read "<a href="/docs/whatisk8s/">What is Kubernetes</a>". </p>
<p><a href="/docs/whatisk8s/" class="button">Read the Kubernetes Overview</a></p>
<h2>Interactive Tutorial</h2> <h2>Interactive Tutorial</h2>


@ -0,0 +1,110 @@
---
---
{% capture overview %}
This page shows how to write and read a Container
termination message.
Termination messages provide a way for containers to write
information about fatal events to a location where it can
be easily retrieved and surfaced by tools like dashboards
and monitoring software. In most cases, information that you
put in a termination message should also be written to
the general
[Kubernetes logs](/docs/user-guide/logging/).
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
### Writing and reading a termination message
In this exercise, you create a Pod that runs one container.
The configuration file specifies a command that runs when
the container starts.
{% include code.html language="yaml" file="termination.yaml" ghlink="/docs/tasks/debug-pod-container/termination.yaml" %}
1. Create a Pod based on the YAML configuration file:
export REPO=https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master
kubectl create -f $REPO/docs/tasks/debug-pod-container/termination.yaml
In the YAML file, in the `command` and `args` fields, you can see that the
container sleeps for 10 seconds and then writes "Sleep expired" to
the `/dev/termination-log` file. After the container writes
the "Sleep expired" message, it terminates.
1. Display information about the Pod:
kubectl get pod termination-demo
Repeat the preceding command until the Pod is no longer running.
1. Display detailed information about the Pod:
kubectl get pod termination-demo --output=yaml
The output includes the "Sleep expired" message:
apiVersion: v1
kind: Pod
...
lastState:
terminated:
containerID: ...
exitCode: 0
finishedAt: ...
message: |
Sleep expired
...
1. Use a Go template to filter the output so that it includes
only the termination message:
```
{% raw %} kubectl get pod termination-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"{% endraw %}
```
### Setting the termination log file
By default Kubernetes retrieves termination messages from
`/dev/termination-log`. To change this to a different file,
specify a `terminationMessagePath` field for your Container.
For example, suppose your Container writes termination messages to
`/tmp/my-log`, and you want Kubernetes to retrieve those messages.
Set `terminationMessagePath` as shown here:
apiVersion: v1
kind: Pod
metadata:
name: msg-path-demo
spec:
containers:
- name: msg-path-demo-container
image: debian
terminationMessagePath: "/tmp/my-log"
{% endcapture %}
{% capture whatsnext %}
* See the `terminationMessagePath` field in
[Container](/docs/api-reference/v1/definitions#_v1_container).
* Learn about [retrieving logs](/docs/user-guide/logging/).
* Learn about [Go templates](https://golang.org/pkg/text/template/).
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
name: termination-demo
spec:
containers:
- name: termination-demo-container
image: debian
command: ["/bin/sh"]
args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]


@ -129,7 +129,7 @@ To use it,
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct. * Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.
The Go client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file) The Go client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/examples/out-of-cluster.go): as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster/main.go):
```golang ```golang
import ( import (
@ -183,7 +183,8 @@ From within a pod the recommended ways to connect to API are:
in any container of the pod can access it. See this [example of using kubectl proxy in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/). in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory. - use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/examples/in-cluster.go) This handles locating and authenticating to the apiserver. See this [example of using Go client
library in a pod](https://github.com/kubernetes/client-go/blob/master/examples/in-cluster/main.go).
In each case, the credentials of the pod are used to communicate securely with the apiserver. In each case, the credentials of the pod are used to communicate securely with the apiserver.


@ -12,7 +12,7 @@ assignees:
In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](/docs/user-guide/quick-start), Kubernetes supports declarative configuration. Oftentimes, configuration files are preferable to imperative commands, since they can be checked into version control and changes to the files can be code reviewed, which is especially important for more complex configurations, producing a more robust, reliable and archival system.

In the declarative style, all configuration is stored in YAML or JSON configuration files using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently `v1`), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.

## Launching a container using a configuration file

View File

@ -64,12 +64,12 @@ healthy backend service endpoint at all times, even in the event of
pod, cluster,
availability zone or regional outages.

Note that in the case of Google Cloud, the logical L7 load balancer is
not a single physical device (which would present both a single point
of failure, and a single global network routing choke point), but
rather a
[truly global, highly available load balancing managed service](https://cloud.google.com/load-balancing/),
globally reachable via a single, static IP address.

Clients inside your federated Kubernetes clusters (i.e. Pods) will be
automatically routed to the cluster-local shard of the Federated Service
@ -86,13 +86,13 @@ You can create a federated ingress in any of the usual ways, for example using k
``` shell
kubectl --context=federation-cluster create -f myingress.yaml
```
For example ingress YAML configurations, see the [Ingress User Guide](/docs/user-guide/ingress/).
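For illustration, a `myingress.yaml` along these lines might look like the following sketch (the `nginx` service name and port are placeholder values, not requirements of the federation):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  # Created once against the Federation API endpoint; matching ingresses
  # are then propagated to every cluster in the federation.
  name: nginx
spec:
  backend:
    # Hypothetical federated service fronted by this ingress.
    serviceName: nginx
    servicePort: 80
```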
The '--context=federation-cluster' flag tells kubectl to submit the
request to the Federation API endpoint, with the appropriate
credentials. If you have not yet configured such a context, visit the
[federation admin guide](/docs/admin/federation/) or one of the
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
to find out how to do so.
As described above, the Federated Ingress will automatically create
and maintain matching Kubernetes ingresses in all of the clusters
@ -147,17 +147,28 @@ Events:
2m  2m  1  {loadbalancer-controller }  Normal  CREATE  ip: 130.211.5.194
```
Note that:

1. the address of your Federated Ingress
corresponds with the address of all of the
underlying Kubernetes ingresses (once these have been allocated - this
may take up to a few minutes).
2. we have not yet provisioned any backend Pods to receive
the network traffic directed to this ingress (i.e. 'Service
Endpoints' behind the service backing the Ingress), so the Federated Ingress does not yet consider these to
be healthy shards and will not direct traffic to any of these clusters.
3. the federation control system will
automatically reconfigure the load balancer controllers in all of the
clusters in your federation to make them consistent, and allow
them to share global load balancers. But this reconfiguration can
only complete successfully if there are no pre-existing Ingresses in
those clusters (this is a safety feature to prevent accidental
breakage of existing ingresses). So to ensure that your federated
ingresses function correctly, either start with new, empty clusters, or make
sure that you delete (and recreate if necessary) all pre-existing
Ingresses in the clusters comprising your federation.
## Adding backend services and pods
To render the underlying ingress shards healthy, we need to add
backend Pods behind the service upon which the Ingress is based. There are several ways to achieve this, but
@ -175,6 +186,16 @@ kubectl --context=federation-cluster create -f services/nginx.yaml
kubectl --context=federation-cluster create -f myreplicaset.yaml
```
Note that in order for your federated ingress to work correctly on
Google Cloud, the node ports of all of the underlying cluster-local
services need to be identical. If you're using a federated service
this is easy to do. Simply pick a node port that is not already
being used in any of your clusters, and add that to the spec of your
federated service. If you do not specify a node port for your
federated service, each cluster will choose its own node port for
its cluster-local shard of the service, and these will probably end
up being different, which is not what you want.
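A rough sketch of a federated service spec with a pinned node port follows (the service name, selector, and port values are hypothetical; choose a `nodePort` from your clusters' node port range, typically 30000-32767):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    # Pin the node port so every cluster-local shard of the federated
    # service uses the same one.
    nodePort: 30080
```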
You can verify this by checking in each of the underlying clusters, for example:

``` shell
@ -258,6 +279,35 @@ Check that:
`service-controller` or `replicaset-controller`,
errors in the output of `kubectl logs federation-controller-manager --namespace federation`).
#### I can create a federated ingress successfully, but request load is not correctly distributed across the underlying clusters
Check that:
1. the services underlying your federated ingress in each cluster have
identical node ports. See [above](#creating_a_federated_ingress) for further explanation.
2. the load balancer controllers in each of your clusters are of the
correct type ("GLBC") and have been correctly reconfigured by the
federation control plane to share a global GCE load balancer (this
should happen automatically). If they are of the correct type, and
have been correctly reconfigured, the UID data item in the GLBC
configmap in each cluster will be identical across all clusters.
See
[the GLBC docs](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#changing-the-cluster-uid)
for further details.
If this is not the case, check the logs of your federation
controller manager to determine why this automated reconfiguration
might be failing.
3. no ingresses have been manually created in any of your clusters before the above
reconfiguration of the load balancer controller completed
successfully. Ingresses created before the reconfiguration of
your GLBC will interfere with the behavior of your federated
ingresses created after the reconfiguration (see
[the GLBC docs](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#changing-the-cluster-uid)
for further information. To remedy this,
delete any ingresses created before the cluster joined the
federation (and had it's GLBC reconfigured), and recreate them if
necessary.
#### This troubleshooting guide did not help me solve my problem

Please use one of our [support channels](http://kubernetes.io/docs/troubleshooting/) to seek assistance.

View File

@ -53,18 +53,18 @@ Make sure you review the [beta limitations](https://github.com/kubernetes/contri
A minimal Ingress might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*

View File

@ -10,4 +10,4 @@ spec:
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"

View File

@ -27,7 +27,7 @@ for ease of development and testing. You'll create a local `HostPath` for this
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.

```shell
# This will be nginx's webroot; execute this on the node where your pod will run.
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
```

View File

@ -88,7 +88,7 @@ vm-1 # printf "GET / HTTP/1.0\r\n\r\n" | netcat vm-0.ub 80
It's worth exploring what just happened. Init containers run sequentially *before* the application container. In this example we used the init container to copy shared libraries from the rootfs, while preserving user installed packages across container restart.

```yaml
pod.beta.kubernetes.io/init-containers: '[
{
"name": "rootfs",
"image": "ubuntu:15.10",

View File

@ -29,7 +29,7 @@ spec:
app: nginx
annotations:
pod.alpha.kubernetes.io/initialized: "true"
pod.beta.kubernetes.io/init-containers: '[
{
"name": "peerfinder",
"image": "gcr.io/google_containers/peer-finder:0.1",

View File

@ -27,7 +27,7 @@ spec:
app: ub
annotations:
pod.alpha.kubernetes.io/initialized: "true"
pod.beta.kubernetes.io/init-containers: '[
{
"name": "rootfs",
"image": "ubuntu:15.10",

View File

@ -0,0 +1,169 @@
---
assignees:
- erictune
---
* TOC
{:toc}
In addition to having one or more main containers (or **app containers**), a
pod can also have one or more **init containers** which run before the app
containers. Init containers allow you to reduce and reorganize setup scripts
and "glue code".
## Overview
An init container is exactly like a regular container, except that it always
runs to completion and each init container must complete successfully before
the next one is started. If the init container fails, Kubernetes will restart
the pod until the init container succeeds. However, if the pod's restart policy
is `Never`, the pod will fail if the init container fails.
You specify a container as an init container by adding an annotation to the pod.
The annotation key is `pod.beta.kubernetes.io/init-containers`. The annotation
value is a JSON array of [objects of type `v1.Container`
](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_container).

Once the feature exits beta, the init containers will be specified on the Pod
Spec alongside the app `containers` array.
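For example, a minimal sketch of a pod using the beta annotation might look like this (the pod name, images, and the `myservice` name the init container waits for are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
  annotations:
    # JSON array of v1.Container objects; runs before the app containers start.
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "wait-for-myservice",
            "image": "busybox",
            "command": ["sh", "-c", "until nslookup myservice; do echo waiting; sleep 2; done"]
        }
    ]'
spec:
  containers:
  - name: app
    image: nginx
```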
The status of the init containers is returned as another annotation -
`pod.beta.kubernetes.io/init-container-statuses` - as an array of the
container statuses (similar to the `status.containerStatuses` field).
Init containers support all of the same features as normal containers,
including resource limits, volumes, and security settings. The resource
requests and limits for an init container are [handled slightly differently](
#resources). Init containers do not support readiness probes since they will
run to completion before the pod can be ready.
An init container has all of the fields of an app container.
If you specify multiple init containers for a pod, those containers run one at
a time in sequential order. Each must succeed before the next can run. Once all
init containers have run to completion, Kubernetes initializes the pod and runs
the application containers as usual.
## What are Init Containers Good For?
Because init containers have separate images from application containers, they
have some advantages for start-up related code. These include:
* they can contain utilities that are not desirable to include in the app container
image for security reasons,
* they can contain utilities or custom code for setup that is not present in an app
image. (No need to make an image `FROM` another image just to use a tool like
`sed`, `awk`, `python`, `dig`, etc during setup).
* the application image builder and the deployer roles can work independently without
the need to jointly build a single app image.
Because init containers have a different filesystem view (Linux namespaces) from
app containers, they can be given access to Secrets that the app containers are
not able to access.
Since init containers run to completion before any app containers start, and
since app containers run in parallel, they provide an easier way to block or
delay the startup of application containers until some precondition is met.
Because init containers run in sequence and there can be multiple init containers,
they can be composed easily.
Here are some ideas for how to use init containers:
- Wait for a service to be created with a shell command like:
  `for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1`
- Register this pod with a remote server with a command like:
`curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(POD_NAME)&ip=$(POD_IP)'`
using `POD_NAME` and `POD_IP` from the downward API.
- Wait for some time before starting the app container with a command like `sleep 60`.
- Clone a git repository into a volume.
- Place values like a POD_IP into a configuration file, and run a template tool (e.g. jinja)
to generate a configuration file to be consumed by the main app container.
Complete usage examples can be found in the [PetSets
guide](/docs/user-guide/petset/bootstrapping/) and the [Production Pods
guide](/docs/user-guide/production-pods/#handling-initialization).
## Detailed Behavior
Each pod may have 0..N init containers defined along with the existing
1..M app containers.
On startup of the pod, after the network and volumes are initialized, the init
containers are started in order. Each container must exit successfully before
the next is invoked. If a container fails to start (due to the runtime) or
exits with failure, it is retried according to the pod RestartPolicy, except
when the pod restart policy is RestartPolicyAlways, in which case just the init
containers use RestartPolicyOnFailure.
A pod cannot be ready until all init containers have succeeded. The ports on an
init container are not aggregated under a service. A pod that is being
initialized is in the `Pending` phase but should have a condition `Initializing`
set to `true`.
If the pod is [restarted](#pod-restart-reasons) all init containers must
execute again.
Changes to the init container spec are limited to the container image field.
Altering an init container image field is equivalent to restarting the pod.
Because init containers can be restarted, retried, or reexecuted, init container
code should be idempotent. In particular, code that writes to files on EmptyDirs
should be prepared for the possibility that an output file already exists.
An init container has all of the fields of an app container. The following
fields are prohibited from being used on init containers by validation:
* `readinessProbe` - init containers must exit for pod startup to continue,
are not included in rotation, and so cannot define readiness distinct from
completion.
Init container authors may use `activeDeadlineSeconds` on the pod and
`livenessProbe` on the container to prevent init containers from failing
forever. The active deadline includes init containers.
The name of each app and init container in a pod must be unique - it is a
validation error for any container to share a name.
### Resources
Given the ordering and execution for init containers, the following rules
for resource usage apply:
* The highest of any particular resource request or limit defined on all init
containers is the **effective init request/limit**
* The pod's **effective request/limit** for a resource is the higher of:
* sum of all app containers request/limit for a resource
* effective init request/limit for a resource
* Scheduling is done based on effective requests/limits, which means
init containers can reserve resources for initialization that are not used
during the life of the pod.
* The pod's **effective QoS tier** is the QoS tier for init containers
and app containers alike.
Quota and limits are applied based on the effective pod request and
limit.
Pod level cGroups are based on the effective pod request and limit, the
same as the scheduler.
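As a hypothetical illustration of these rules (all names and quantities below are made up), a pod with two init containers requesting 100m and 500m of CPU, and two app containers requesting 200m and 150m, has an effective CPU request of 500m:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-demo
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {"name": "init-a", "image": "busybox", "command": ["true"],
         "resources": {"requests": {"cpu": "100m"}}},
        {"name": "init-b", "image": "busybox", "command": ["true"],
         "resources": {"requests": {"cpu": "500m"}}}
    ]'
spec:
  containers:
  - name: app-1
    image: nginx
    resources:
      requests:
        cpu: 200m
  - name: app-2
    image: nginx
    resources:
      requests:
        cpu: 150m
# effective init request = max(100m, 500m) = 500m
# sum of app requests    = 200m + 150m     = 350m
# effective pod request  = max(500m, 350m) = 500m (used for scheduling, quota, and cgroups)
```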
## Pod Restart Reasons
A Pod may "restart", causing reexecution of init containers, for the following
reasons:
* An init container image is changed by a user updating the Pod Spec.
* App container image changes only restart the app container.
* The pod infrastructure container is restarted
* This is uncommon and would have to be done by someone with root access to nodes.
* All containers in a pod are terminated, requiring a restart (RestartPolicyAlways) AND the record of init container completion has been lost due to garbage collection.
## Support and compatibility
A cluster with Kubelet and Apiserver version 1.4.0 or greater supports init
containers with the beta annotations. Support varies for other combinations of
Kubelet and Apiserver version; see the [release notes
](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md) for details.

View File

@ -204,6 +204,8 @@ The status of the init containers is returned as another annotation - `pod.beta.
Init containers support all of the same features as normal containers, including resource limits, volumes, and security settings. The resource requests and limits for an init container are handled slightly differently than normal containers since init containers are run one at a time instead of all at once - any limits or quotas will be applied based on the largest init container resource quantity, rather than as the sum of quantities. Init containers do not support readiness probes since they will run to completion before the pod can be ready.
[Complete Init Container Documentation](/docs/user-guide/pods/init-containers/)
## Lifecycle hooks and termination notice
@ -218,7 +220,7 @@ The specification of a pre-stop hook is similar to that of probes, but without t
## Termination message

In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](/docs/user-guide/kubectl/) or the [UI](/docs/user-guide/ui), in addition to general [log collection](/docs/user-guide/logging). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle', such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.

Here is a toy example:

View File

@ -265,7 +265,7 @@ All listed keys must exist in the corresponding secret. Otherwise, the volume is
**Secret files permissions**

You can also specify the permission mode bits that files within a secret will have.
If you don't specify any, `0644` is used by default. You can specify a default
mode for the whole secret volume and override per key if needed.

For example, you can specify a default mode like this:

View File

@ -31,8 +31,8 @@ you get the raw json or yaml for a pod you have created (e.g. `kubectl get
pods/podname -o yaml`), you can see the `spec.serviceAccount` field has been
[automatically set](/docs/user-guide/working-with-resources/#resources-are-automatically-modified).

With service accounts, you can access the API inside the pod using a proxy or with a client library,
as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod).

## Using Multiple Service Accounts.

View File

@ -182,7 +182,7 @@ In Kubernetes v1.0 the proxy was purely in userspace. In Kubernetes v1.1 an
iptables proxy was added, but was not the default operating mode. Since
Kubernetes v1.2, the iptables proxy is the default.

As of Kubernetes v1.0, `Services` are a "layer 4" (TCP/UDP over IP) construct.
In Kubernetes v1.1 the `Ingress` API was added (beta) to represent "layer 7"
(HTTP) services.
@ -345,7 +345,7 @@ can do a DNS SRV query for `"_http._tcp.my-service.my-ns"` to discover the port
number for `"http"`. number for `"http"`.
The Kubernetes DNS server is the only way to access services of type The Kubernetes DNS server is the only way to access services of type
`ExternalName`. `ExternalName`. More information is available in the [DNS Admin Guide](http://kubernetes.io/docs/admin/dns/).
## Headless services

View File

@ -115,12 +115,12 @@ make all four clusters available on both hosts by running
# on host2, copy host1's default kubeconfig, and merge it from env
$ scp host1:/path/to/home1/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config

# on host1, copy host2's default kubeconfig and merge it from env
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config
```

Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file](/docs/user-guide/kubeconfig-file).

View File

@ -22,7 +22,7 @@ Each `ThirdPartyResource` has the following:
* `description` - A free text description of the resource.
* `versions` - A list of the versions of the resource.

The `kind` for a `ThirdPartyResource` takes the form `<kind name>.<domain>`. You are expected to provide a unique kind and domain name in order to avoid conflicts with other `ThirdPartyResource` objects. Kind names will be converted to CamelCase when creating instances of the `ThirdPartyResource`. Hyphens in the `kind` are assumed to be word breaks. For instance the kind `camel-case` would be converted to `CamelCase` but `camelcase` would be converted to `Camelcase`.

Other fields on the `ThirdPartyResource` are treated as custom data fields. These fields can hold arbitrary JSON data and have any structure.
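As a rough sketch (the `cron-tab` kind and `stable.example.com` domain are placeholders), a `ThirdPartyResource` whose instances will have kind `CronTab` could be declared like this:

```yaml
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  # <kind name>.<domain>: instances will be created with kind `CronTab`.
  name: cron-tab.stable.example.com
description: "A specification of a cron job run inside the cluster"
versions:
- name: v1
```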

Binary file not shown (image added, 5.1 KiB).