Merge branch 'master' into release-1.3

# Conflicts:
#	docs/admin/resourcequota/index.md
#	docs/admin/resourcequota/object-counts.yaml
#	docs/admin/resourcequota/walkthrough.md
#	docs/user-guide/downward-api/index.md
pull/788/head
johndmulhausen 2016-07-07 02:00:32 -07:00
commit cef1accd9d
119 changed files with 3269 additions and 792 deletions


@ -6,44 +6,48 @@ Welcome! We are very pleased you want to contribute to the documentation and/or
You can click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.
## Staging the site on GitHub Pages
If you want to see your changes staged without having to install anything locally, remove the CNAME file in this directory and
change the name of the fork to be:
YOUR_GITHUB_USERNAME.github.io
Then, visit: [http://YOUR_GITHUB_USERNAME.github.io](http://YOUR_GITHUB_USERNAME.github.io)
Then make your changes.
You should see a special-to-you version of the site.
When you visit [http://YOUR_GITHUB_USERNAME.github.io](http://YOUR_GITHUB_USERNAME.github.io) you should see a special-to-you version of the site that contains the changes you just made.
## Editing/staging the site locally
## Staging the site locally (using Docker)
If you have files to upload, or just want to work offline, run the below commands to set up
your environment for running GitHub Pages locally. Then, any edits you make will be viewable
Don't like installing stuff? Download and run a local staging server with a single `docker run` command.
git clone https://github.com/kubernetes/kubernetes.github.io.git
cd kubernetes.github.io
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 johndmulhausen/k8sdocs
Then visit [http://localhost:4000](http://localhost:4000) to see our site. Any changes you make on your local machine will be automatically staged.
If you're interested you can view [the Dockerfile for this image](https://gist.github.com/johndmulhausen/f8f0ab8d82d2c755af3a4709729e1859).
## Staging the site locally (from scratch setup)
Run the below commands to set up your environment for running GitHub Pages locally. Then, any edits you make will be viewable
on a lightweight webserver that runs on your local machine.
First install rvm
This will typically be the fastest way (by far) to iterate on docs changes and see them staged once you get this set up, but it does involve several install steps that take a while to complete and makes system-wide modifications.
curl -sSL https://get.rvm.io | bash -s stable
Install Ruby 2.2 or higher. If you're on a Mac, follow [these instructions](https://gorails.com/setup/osx/). If you're on Linux, run these commands:
Then load it into your environment
apt-get install software-properties-common
apt-add-repository ppa:brightbox/ruby-ng
apt-get install ruby2.2
apt-get install ruby2.2-dev
source ${HOME}/.rvm/scripts/rvm (or whatever is prompted by the installer)
Then install Ruby 2.2 or higher
rvm install ruby-2.2.4
rvm use ruby-2.2.4 --default
Verify that this new version is running (optional)
which ruby
ruby -v
Install the GitHub Pages package, which includes Jekyll
Install the GitHub Pages package, which includes Jekyll:
gem install github-pages
Clone our site
Clone our site:
git clone https://github.com/kubernetes/kubernetes.github.io.git
@ -53,20 +57,21 @@ Make any changes you want. Then, to see your changes locally:
jekyll serve
Your copy of the site will then be viewable at: [http://localhost:4000](http://localhost:4000)
(or wherever Ruby tells you).
(or wherever Jekyll tells you).
The above instructions work on Mac and Linux.
[These instructions](https://martinbuberl.com/blog/setup-jekyll-on-windows-and-host-it-on-github-pages/) are for Windows users.
## GitHub help
If you're a bit rusty with git/GitHub, you might want to read
[this](http://readwrite.com/2013/10/02/github-for-beginners-part-2) for a refresher.
The above instructions work on Mac and Linux.
[These instructions](https://martinbuberl.com/blog/setup-jekyll-on-windows-and-host-it-on-github-pages/)
might help for Windows users.
## Common Tasks
### Edit Page Titles or Change the Left Navigation
Edit the yaml files in `/_data/` for the Guides, Reference, Samples, or Support areas.
Edit the yaml files in `/_data/` for the Guides, Reference, Samples, or Support areas.
You may have to exit and run `jekyll clean` before restarting `jekyll serve` to
get changes to files in `/_data/` to show up.
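For example, from the repository root:

```shell
# Stop the running server (Ctrl+C), clear the generated site, then rebuild and serve again.
jekyll clean
jekyll serve
```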
@ -107,11 +112,11 @@ In English, this would read: "Create a set of tabs with the alias `servicesample
and have tabs visually labeled "JSON" and "YAML" that use `json` and `yaml` Rouge syntax highlighting, which display the contents of
`service-sample.{extension}` on the page, and link to the file in GitHub at (full path)."
Example file: [Pods: Multi-Container](/docs/user-guide/pods/multi-container/).
Example file: [Pods: Multi-Container](http://kubernetes.io/docs/user-guide/pods/multi-container/).
## Use a global variable
The `/_config.yml` file defines some useful variables you can use when editing docs.
The `/_config.yml` file defines some useful variables you can use when editing docs.
* `page.githubbranch`: The name of the GitHub branch on the Kubernetes repo that is associated with this branch of the docs. e.g. `release-1.2`
* `page.version` The version of Kubernetes associated with this branch of the docs. e.g. `v1.2`
@ -133,17 +138,27 @@ The current version of the website is served out of the `master` branch.
All versions of the site that relate to past and future versions will be named after their Kubernetes release number. For example, [the old branch for the 1.1 docs is called `release-1.1`](https://github.com/kubernetes/kubernetes.github.io/tree/release-1.1).
Changes in the "docsv2" branch (where we are testing a revamp of the docs) are automatically staged here:
Changes in the "docsv2" branch (where we are testing a revamp of the docs) are automatically staged here:
http://k8sdocs.github.io/docs/tutorials/
Changes in the "release-1.1" branch (for k8s v1.1 docs) are automatically staged here:
http://kubernetes-v1-1.github.io/
Changes in the "release-1.3" branch (for k8s v1.3 docs) are automatically staged here:
Changes in the "release-1.3" branch (for k8s v1.3 docs) are automatically staged here:
http://kubernetes-v1-3.github.io/
Editing of these branches will kick off a build using Travis CI that auto-updates these URLs; you can monitor the build progress at [https://travis-ci.org/kubernetes/kubernetes.github.io](https://travis-ci.org/kubernetes/kubernetes.github.io).
## Partners
Partners can get their logos added to the partner section of the [community page](http://k8s.io/community) by following the steps below and meeting the logo specifications. Partners will also need to have ready a URL specific to their Kubernetes integration; this URL will be the destination when the logo is clicked.
* The partner product logo should be a transparent png image centered in a 215x125 px frame. (look at the existing logos for reference)
* The logo must link to a URL that is specific to integrating with Kubernetes, hosted on the partner's site.
* The logo should be named *product-name*_logo.png and placed in the `/images/community_logos` folder.
* The image reference (including the link to the partner URL) should be added in `community.html` under `<div class="partner-logos" > ...</div>`.
* Please do not change the order of the existing partner images. Append your logo to the end of the list.
* Once you have completed these steps and checked the look and feel, submit the pull request (a rough sketch of the workflow follows this list).
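A rough sketch of that workflow (the product name and partner URL are placeholders):

```shell
# Copy the 215x125 px transparent PNG into the community logos folder.
cp product-name_logo.png images/community_logos/

# Append an <a href="https://partner.example.com/kubernetes"><img src="/images/community_logos/product-name_logo.png"></a>
# entry to the end of the partner-logos <div> in community.html, then commit and open a pull request.
git add images/community_logos/product-name_logo.png community.html
git commit -m "Add product-name partner logo"
```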
## Thank you!
Kubernetes thrives on community participation and we really appreciate your


@ -61,6 +61,17 @@ toc:
- title: Garbage collection
path: /docs/user-guide/garbage-collector/
- title: Batch Jobs
section:
- title: Jobs
path: /docs/user-guide/jobs/
- title: Parallel Processing using Expansions
path: /docs/user-guide/jobs/expansions/
- title: Coarse Parallel Processing using a Work Queue
path: /docs/user-guide/jobs/work-queue-1/
- title: Fine Parallel Processing using a Work Queue
path: /docs/user-guide/jobs/work-queue-2/
- title: Service Discovery and Load Balancing
section:
- title: Connecting Applications with Services
@ -132,6 +143,8 @@ toc:
path: /docs/getting-started-guides/
- title: Running Kubernetes on Your Local Machine
section:
- title: Running Kubernetes Locally via Minikube
path: /docs/getting-started-guides/minikube/
- title: Running Kubernetes Locally via Docker
path: /docs/getting-started-guides/docker/
- title: Running Kubernetes Locally with No VM
@ -146,8 +159,10 @@ toc:
path: /docs/getting-started-guides/gce/
- title: Running Kubernetes on AWS EC2
path: /docs/getting-started-guides/aws/
- title: Running Kubernetes on Azure
- title: Running Kubernetes on Azure (Weave-based)
path: /docs/getting-started-guides/coreos/azure/
- title: Running Kubernetes on Azure (Flannel-based)
path: /docs/getting-started-guides/azure/
- title: Running Kubernetes on CenturyLink Cloud
path: /docs/getting-started-guides/clc/
- title: Portable Multi-Node Clusters


@ -76,4 +76,4 @@ toc:
- title: Nodejs + Mongo
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/nodesjs-mongodb
- title: Petstore
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/k8spetstore/
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/k8petstore/


@ -40,3 +40,5 @@ toc:
path: https://github.com/kubernetes/kubernetes/milestones/
- title: Contributing to Kubernetes Documentation
path: /editdocs/
- title: New Template Instructions
path: /docs/templatedemos/


@ -0,0 +1,16 @@
### ERROR: You must define a <span style="font-family: monospace">`{{ include.missing_block }}`</span> block
{: style="color:red" }
This template requires that you provide text that {{ include.purpose }}. The text in this block will
be displayed under the heading **{{ include.heading }}**.
To get rid of this message and take advantage of this template, define the `{{ include.missing_block }}`
variable and populate it with content.
```liquid
{% raw %}{%{% endraw %} capture {{ include.missing_block }} {% raw %}%}{% endraw %}
Text that {{ include.purpose }}.
{% raw %}{%{% endraw %} endcapture {% raw %}%}{% endraw %}
```
<!-- TEMPLATE_ERROR -->


@ -10,20 +10,8 @@
{% else %}
### ERROR: You must define a "what_is" block
{: style="color:red" }
{% include templates/_errorthrower.md missing_block='what_is' heading='What is a (Concept)?' purpose='explains what this concept is and its purpose.' %}
This template requires that you explain what this concept is. This explanation will
be displayed under the heading, **What is a {{ concept }}?**
To get rid of this message and take advantage of this template, define the `what_is`
variable and populate it with content.
```liquid
{% raw %}{% capture what_is %}{% endraw %}
A {{ concept }} does x and y and z...(etc, etc, text goes on)
{% raw %}{% endcapture %}{% endraw %}
```
{% endif %}
@ -35,45 +23,21 @@ A {{ concept }} does x and y and z...(etc, etc, text goes on)
{% else %}
### ERROR: You must define a "when_to_use" block
{: style="color:red" }
{% include templates/_errorthrower.md missing_block='when_to_use' heading='When to use (Concept)' purpose='explains when to use this object.' %}
This template requires that you explain when to use this object. This explanation will
be displayed under the heading, **When to use {{ concept }}s**
To get rid of this message and take advantage of this template, define the `when_to_use`
variable and populate it with content.
```liquid
{% raw %}{% capture when_to_use %}{% endraw %}
You should use {{ concept }} when...
{% raw %}{% endcapture %}{% endraw %}
```
{% endif %}
{% if when_not_to_use %}
### When not to use {{ concept }}s (alternatives)
### When not to use {{ concept }}s
{{ when_not_to_use }}
{% else %}
### ERROR: You must define a "when_not_to_use" block
{: style="color:red" }
{% include templates/_errorthrower.md missing_block='when_not_to_use' heading='When not to use (Concept)' purpose='explains when not to use this object.' %}
This template requires that you explain when not to use this object. This explanation will
be displayed under the heading, **When not to use {{ concept }}s (alternatives)**
To get rid of this message and take advantage of this template, define the `when_not_to_use`
block and populate it with content.
```liquid
{% raw %}{% capture when_not_to_use %}{% endraw %}
You should not use {{ concept }} if...
{% raw %}{% endcapture %}{% endraw %}
```
{% endif %}
@ -85,69 +49,23 @@ You should not use {{ concept }} if...
{% else %}
### ERROR: You must define a "status" block
{: style="color:red" }
{% include templates/_errorthrower.md missing_block='status' heading='Retrieving status for a (Concept)' purpose='explains how to retrieve a status description for this object.' %}
This template requires that you explain the current status of support for this object.
This explanation will be displayed under the heading, **{{ concept }} status**.
To get rid of this message and take advantage of this template, define the `status`
block and populate it with content.
```liquid
{% raw %}{% capture status %}{% endraw %}
The current status of {{ concept }}s is...
{% raw %}{% endcapture %}{% endraw %}
```
{% endif %}
{% if required_fields %}
{% if usage %}
### {{ concept }} spec
#### Usage
#### Required Fields
{{ required_fields }}
{{ usage }}
{% else %}
### ERROR: You must define a "required_fields" block
{: style="color:red" }
This template requires that you provide a Markdown list of required fields for this
object. This list will be displayed under the heading **Required Fields**.
To get rid of this message and take advantage of this template, define the `required_fields`
block and populate it with content.
```liquid
{% raw %}{% capture required_fields %}
* `kind`: Always `Pod`.
* `apiVersion`: Currently `v1`.
* `metadata`: An object containing:
* `name`: Required if `generateName` is not specified. The name of this pod.
It must be an
[RFC1035](https://www.ietf.org/rfc/rfc1035.txt) compatible value and be
unique within the namespace.
{% endcapture %}{% endraw %}
```
**Note**: You can also define a `common_fields` block that will go under a heading
directly underneath **Required Fields** called **Common Fields**, but it is
not required.
{% endif %}
{% if common_fields %}
#### Common Fields
{{ common_fields }}
{% include templates/_errorthrower.md missing_block='usage' heading='Usage' purpose='shows the most basic, common use case for this object, in the form of a code sample, command, etc, using tabs to show multiple approaches' %}
{% endif %}
<!-- continuing the "if concept" if/then: -->
{% else %}


@ -0,0 +1,37 @@
{% if command %}
# {% if site.data.kubectl[command].name != "kubectl" %}kubectl {% endif %}{{ site.data.kubectl[command].name }}
{{ site.data.kubectl[command].synopsis }}
## Description
{{ site.data.kubectl[command].description }}
{% if site.data.kubectl[command].options %}
## Options
| Option | Shorthand | Default Value | Usage |
|--------------------|---------------|-------|{% for option in site.data.kubectl[command].options %}
| `{{option.name | strip}}` | {% if option.shorthand %}`{{ option.shorthand | strip }}`{% endif %} | {% if option.default_value %}`{{option.default_value| strip}}`{% endif %} | {% if option.usage %}{{option.usage| strip | replace:'|',', '}}{% endif %} |{% endfor %}
{% endif %}
{% if site.data.kubectl[command].inherited_options %}
## Inherited Options
| Option | Shorthand | Default Value | Usage |
|--------------------|---------------|-------|{% for option in site.data.kubectl[command].inherited_options %}
| `{{option.name | strip}}` | {% if option.shorthand %}`{{ option.shorthand | strip }}`{% endif %} | {% if option.default_value %}`{{option.default_value| strip}}`{% endif %} | {% if option.usage %}{{option.usage| strip | replace:'|',', '}}{% endif %} |{% endfor %}
{% endif %}
## See also
{% for seealso in site.data.kubectl[command].see_also %}
- [`{{ seealso }}`](/docs/kubectl/{% if seealso != "kubectl" %}kubectl_{{seealso}}{% endif %})
{% endfor %}
{% else %}
{% include templates/_errorthrower.md missing_block='command' heading='kubectl (command)' purpose='names the kubectl command, so that the appropriate YAML file (from _data/kubectl) can be transformed into a page.' %}
{% endif %}


@ -0,0 +1,36 @@
{% if purpose %}
### Purpose
{{ purpose }}
{% else %}
{% include templates/_errorthrower.md missing_block='purpose' heading='Purpose' purpose='states, in one sentence, what the purpose of this document is, so that the user will know what they are able to achieve if they follow the provided steps.' %}
{% endif %}
{% if recommended_background %}
### Recommended background
{{ recommended_background }}
{% else %}
{% include templates/_errorthrower.md missing_block='recommended_background' heading='Recommended background' purpose='lists assumptions of baseline knowledge that you expect the user to have before reading ahead.' %}
{% endif %}
{% if step_by_step %}
### Step by step
{{ step_by_step }}
{% else %}
{% include templates/_errorthrower.md missing_block='step_by_step' heading='Step by step' purpose='lists a series of linear, numbered steps that accomplish the described task.' %}
{% endif %}


@ -834,7 +834,7 @@ dd
td
font-size: 0.85em
#editPageButton
position: absolute
top: -25px
@ -1165,6 +1165,17 @@ $feature-box-div-margin-bottom: 40px
margin: 10px
background-color: $light-grey
.partner-logos
text-align: center
max-width: 1200px
margin: 0 auto
img
width: auto
margin: 10px
background-color: $white
box-shadow: 0 5px 5px rgba(0,0,0,.24),0 0 5px rgba(0,0,0,.12)
#calendarWrapper
position: relative
width: 80vw
@ -1274,4 +1285,4 @@ $feature-box-div-margin-bottom: 40px
background-image: url(/images/community_logos/ebay_logo.png)
div:nth-child(3)
background-image: url(/images/community_logos/wikimedia_foundation_logo.png)
background-image: url(/images/community_logos/wikimedia_foundation_logo.png)


@ -2,6 +2,7 @@ $blue: #3371e3
$light-grey: #f7f7f7
$dark-grey: #303030
$medium-grey: #4c4c4c
$white: #ffffff
$base-font: 'Roboto', sans-serif
$mono-font: 'Roboto Mono', monospace

_travis.yml Normal file

@ -0,0 +1,22 @@
language: ruby
rvm:
- 2.1
branches:
only:
- master
script:
- cd $HOME
- git config --global user.email ${GIT_EMAIL}
- git config --global user.name "${GIT_NAME}"
- git clone https://${GIT_USERNAME}:${GH_TOKEN}@github.com/kubernetes-v1-2/kubernetes-v1-2.github.io.git
- cd kubernetes-v1-2.github.io
- git remote add --fetch --track master homebase "https://${GIT_USERNAME}:${GH_TOKEN}@github.com/kubernetes/kubernetes.github.io.git"
- git merge -s recursive -X theirs homebase/master -m "Sync from homebase"
- git push
env:
global:
- secure: Fd6wlE2mjPb1fAACxklQcJumpJWycYkaJQBfKRcjGCFlmw1XWVFGhpUC7Ni/MOyzTolqOvtb2rXnYpaujMlJP1UXqVFJ+zPbwur2lc8unQF8PcqPezl8DbPsr6HdceOjRdut/dN8zTw6+hZRDzw/mG4Rf8IVaozlYycAOnWZdAZsLXdbBpAvBp0WYHP9+8wn9xiet1L/QpSiawa/Q35Q9UM1tciPmfFBL4Fkq6Mm8/w6ECaxHyA2EX+eH5ea9EtykrzB5cMA/odJLptjnmfzsPeGS5F1MxuTqrH2Z09emZcqjfXW+fNNzioiZdD4BQe+rqOA1Ktpw645szJVJJenIMzM4P8dDiW+9cJyddGdTC9A7APgLlLVdG/tuoYcjiY1F068SWx8sUWW/xc7YenCzGj2nXIeHjUuWsjqjMorpBKe6Y1QSGfi7HsLq1DpJDR1xQZbPgM/FOaovVIPIozgAIBpS1ukdNNwadmrCw55tUKfr4s2SNv3SUmeprL37QEtIxtBpPDjL1z+qTLjc6JzRr7J57guPBdNbj6Ukg/uxW+z1CwDYb3uAIIa5e6eDxfxGQsUEj8NmkIHBLLGTnKb1WAyPXAIf9ZvZHUwQQzHvOWXGZRd32enRh00uvuwHEy3OuGEqTAOIs/31q5WAamZ1uqzW7KBTMU/nq4QA9W2yfM=
- secure: Lvj1MT/I2zaJp9mQi46us64O4JfeT9cz1m+nqKY/nuC828ULfqMYLlpw08jvrc25JhAaqHNawAwdtaxHTQqHztsmq1ixHFYA6G/1oll+YPhym29toFT9oIkXThf/L77FTJfM1gVFMyiHFbz11Ob8R15tRB1WF1+Uu1RBmtudJD7HSy0u6Uc6lPxpqtycWVRCyPdvZqF3e0KIZaTDRkRzJpcVMHa/5ZJDDcMtuJyjJYXZqS1WR3QHC1z44LlnqyB5ZM2PU4H3LyWnY8wHF2mutF0QtDDVdEuBqILBHiFuKHMxpLY92UgHm2n51RR63MxFjjEzE+iu5f9ComEm5JC0N/cc8sunIiol+d8SRC30/00Vs0tvmeAjRX9IMCExiP3mv7Tz6mqEVk6PyrVlg675hxRg0eVdaNNv92gWzCSIecZ5TuCaRG4JaWO0P8lQC4NreONt6gbnwMD350hMQZLpUUE0QoSEdafpudaD+agl4ZzTFVTOOcSOz2Sa/+RT96Msazq2YlddYXaEKZeyYzqHVkk60PiQzQcAuwCMFrAagqMe3bNI2aCFEWbc9CoR13K/wwRDwAeSzBq/UkylZ8AayJLnIpewr/iYBOQasrXLrorW969Rfr8d/nwhDN5VpRgItU0arDzngaJRoZEAtHx4zaCvZF/H/nvbORcD2gsFP9Q=
- secure: WGmgvSnT1OeHPYtsm/JzcRDnZYp+SeelgbSVNbtwOXGImkeqDo/sDIRH80HxVC3zkUu5wvRO/NkMA7N3LXN2BqENacWBFQ+j+Hf3lnDw9yETBaBpLQPO/MElKGd48GoSiukCipfaITUfwtjAQ3wx0W/z06dYC9BdUYvUM73OtsGtlrY1MEQnBfff/neDGZbzfvTdgDwg1n6d69WP+WSUJt5Aysz0ZoAbYO2rzBrKNwcL7KEhoI/ketXXQ4xEW0nnxU/qfxqhVfJbck1HpSz+HMsVGXF4tT3zdRXmU+P47KGbfYn0GmlviGrRSWCu9/elKTlX9fIqRnR1/UvdTkflwLsk4i3hWnEUflKpIT7soJrKjNQseyd06KI8qvVRXRcll92vCtsYDuOO1AlemxIqp2dIFduCf+5FWH06PbS8oHU8NBe5aVYaoBY8nK14EB3A98saJ8Un7ziB8moRaWZX9bMjfJWfxDF526NdHIusqLQrxvUthtBjf5aRULE2PeIyqYVY8/IKmBEfsy9IUENmlM5KItvgZZt4xdk8o//kceLrKabudHGVC1M1RLZo9ejkdLySz2i8nlJyJ03Hn30H9DfDjW/OMQTn/b1HU4CYSbd/yGR+6+5+tdPeKy/Z6gI6+EjHRYaaQrBLo0RXm2620cb7uDTBIsKVs2TetyhTq54=
- secure: lqboLQKFpfa6F+UkSCs9+NPv027EnSWZA4/cmAXj0hZvxc4W+qMFIlPKytcq+bbZrp7kx3cZgRd5NrgmOTqlJ9merVK1cgMsDegMv/YImHjhGVBH+3ATvqMDb7I7dLg4x/DjhKt/ogyZpoh8vsS77T9MQXaIJM8JMM3ISxYI6MMloCh0z/hbT2qv3IrPMFjpHohHdF1yorxKvSefNgPx6Y4wC+t1t+u8wS9RBhvrmOkVynlU2NgbSiMPyz3syJmnszHDTXXJlqciGT5Lc8lzpzt2OGOnZmAupEYVGok7Jx4aAxU9O8/bnoDwOcGGVrd/pGHTI5+rvT4TPy+WG1Aec55dgva4XZfrJQ88snHLvq0VrvdiI/fHJEPF18QnrBwKSni1Vd1jOzsuPIF6HL0Wwv8tklO8OX3D5wqVsXpeJd4Cj4HPPAtzxOVsV6Fz3On9vuLdV7StC/ZafQL40koaytqkNO0gX8zeUiaaVQtMCf/2MVTbX9x5m51gkGpwT0JBvqQpSMOlIM5S1fiN4X9DLBLqmsARYeZw1Jeiq7Zm39Sy1QeCgfUXS8+6t62BjWx70iftYGIkoljXtD7x/3pjKymdpwjcyUcS8KQ5W6vbMXFxmZe2phtgGTCxOIDuASPspD8zwm/ckIGArHj4qkrg94/mUpgDDIbVsaUpWXjYvSc=
- secure: bPZMNH79Lx0Wb2SCxTwyZek28w/keKxEAlfad8RDMwJrct6Bi2B0o4KjkrwFS5DyCUU7Ndk192145XnUKOWX2YVs6cQ8ge+5LvtgYhWLgX8g5Ycro+JyzOBskn+o1gQjvi8+3b42X31efcUmTEhfRdKVrrUpONEIcjG1NpLGk/mQJ6AM6hGWO2xdNfAezeWq5ISnpK0b6VUWZyTEDg3NivrTfCL2juWWCnjYm1BWSHUblXwRQ/Rl7Tcldl6cMMUVsalUQ1iG0h8YZDrxNz0cm3XTZZZJKuSHYeTCLd57RBeHD5/iMxjCmfzfq6ETNLONWLmtvA8yhWMQZ0DPFtLZzbqFIfOR6P1feBZFqP7/X5KZeFKBufN15JbcXIqHE8homLY9mS0LhyNffOs5G/P/x8ChE1DJaYiZIWCZ60umvpqibScZB3z5uFPTxLk9rJSOtT7hCWjcmg3EdJ+R4ExOiBDd62ZS5jH72WU4uysPXORRofUCL+zHycJoJxsFWQSW49GAGrohllrW45jnTgpalErxvjJFSKy8JW9w634eetz9ct2fObep7m8bfVMl8U2H3ITIoXHm+f8ooHUiNzHFLSl4wtcoAOtrCAvGtJPNfv8T2eTkznzj7Tk+XzTvFg03u+J99TmnC58Bs516Oc7E40NarmsZceOD2sN4BA3X9tE=


@ -23,7 +23,7 @@ title: Community
to participate.</p>
</div>
<div class="content">
<h3>Sigs</h3>
<h3>SIGs</h3>
<p>Have a special interest in how Kubernetes works with another technology? See our ever-growing
<a href="https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)">lists of SIGs</a>,
from AWS and OpenStack to Big Data and Scalability, there's a place for you to contribute and instructions
@ -64,6 +64,18 @@ title: Community
<img src="/images/community_logos/deis_logo.png">
</div>
</div>
<div class="content">
<h3>Partners</h3>
<p>We are working with a broad group of partners to help grow the Kubernetes ecosystem, supporting
a spectrum of complementing platforms, from open source solutions to market-leading technologies.</p>
<div class="partner-logos">
<a href="https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/"><img src="/images/community_logos/sysdig_cloud_logo.png"></a>
<a href="http://wercker.com/workflows/partners/kubernetes/"><img src="/images/community_logos/wercker_logo.png"></a>
<a href="http://rancher.com/kubernetes/"><img src="/images/community_logos/rancher_logo.png"></a>
<a href="https://elasticbox.com/kubernetes/"><img src="/images/community_logos/elastickube_logo.png"></a>
<a href="http://docs.datadoghq.com/integrations/kubernetes/"><img src="/images/community_logos/datadog_logo.png"></a>
</div>
</div>
</main>
</section>
@ -95,4 +107,4 @@ title: Community
{% include footer.html %}
</body>
</html>
</html>


@ -158,7 +158,7 @@ kube-system full privilege to the API, you would add this line to your policy
file:
```json
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}}
```
The apiserver will need to be restarted to pick up the new policy lines.
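How the apiserver gets restarted depends on how it was deployed; as a hedged sketch, on a master where the apiserver runs as a systemd unit (the unit name is an assumption, and on many deployments the apiserver runs as a static pod managed by the kubelet instead), this could be:

```shell
# Restart the apiserver so it re-reads the policy file (unit name is an assumption).
sudo systemctl restart kube-apiserver
```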


@ -1,8 +1,10 @@
---
---
As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
## Introduction
As of Kubernetes 1.3, DNS is a built-in service launched automatically using the addon manager [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md).
A DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
Every Service defined in the cluster (including the DNS server itself) will be
@ -15,25 +17,29 @@ in namespace `bar` can look up this service by simply doing a DNS query for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
The cluster DNS server ([SkyDNS](https://github.com/skynetservices/skydns))
supports forward lookups (A records) and service lookups (SRV records).
The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library)
supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records).
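For example, assuming a pod named `busybox` is running somewhere in the cluster (the pod name is purely illustrative), you can check resolution of the `foo` Service from another namespace with:

```shell
# Resolve the Service "foo" in namespace "bar" from inside a pod in a different namespace.
kubectl exec busybox -- nslookup foo.bar
```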
## How it Works
The running DNS pod holds 4 containers - skydns, etcd (a private instance which skydns uses),
a Kubernetes-to-skydns bridge called kube2sky, and a health check called healthz. The kube2sky process
watches the Kubernetes master for changes in Services, and then writes the
information to etcd, which skydns reads. This etcd instance is not linked to
any other etcd clusters that might exist, including the Kubernetes master.
The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz.
The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains
in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve
performance. The healthz container provides a single health check endpoint while performing dual healthchecks
(for dnsmasq and kubedns).
## Issues
## Kubernetes Federation (Multiple Zone support)
The skydns service is reachable directly from Kubernetes nodes (outside
of any container) and DNS resolution works if the skydns service is targeted
explicitly. However, nodes are not configured to use the cluster DNS service or
to search the cluster's DNS domain by default. This may be resolved at a later
time.
Release 1.3 introduced Cluster Federation support for multi-site
Kubernetes installations. This required some minor
(backward-compatible) changes to the way
the Kubernetes cluster DNS server processes DNS queries, to facilitate
the lookup of federated services (which span multiple Kubernetes clusters).
See the [Cluster Federation Administrators' Guide](/docs/admin/federation/index.md) for more
details on Cluster Federation and multi-site support.
## For more information
## References
- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md)
See [the docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).


@ -5,14 +5,14 @@
Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
This is a lightweight version of a broader effort for federating multiple
Kubernetes clusters together (sometimes referred to by the affectionate
This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
nickname ["Ubernetes"](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md).
Full federation will allow combining separate
Kubernetes clusters running in different regions or clouds. However, many
Full Cluster Federation allows combining separate
Kubernetes clusters running in different regions or cloud providers
(or on-premise data centers). However, many
users simply want to run a more available Kubernetes cluster in multiple zones
of their cloud provider, and this is what the multizone support in 1.2 allows
(we nickname this "Ubernetes Lite").
of their single cloud provider, and this is what the multizone support in 1.2 allows
(this previously went by the nickname "Ubernetes Lite").
Multizone support is deliberately limited: a single Kubernetes cluster can run
in multiple zones, but only within the same region (and cloud provider). Only
@ -73,7 +73,7 @@ plane should follow the [high availability](/docs/admin/high-availability) instr
We're now going to walk through setting up and using a multi-zone
cluster on both GCE & AWS. To do so, you bring up a full cluster
(specifying `MULTIZONE=1`), and then you add nodes in additional zones
(specifying `MULTIZONE=true`), and then you add nodes in additional zones
by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).
### Bringing up your cluster
@ -83,17 +83,17 @@ Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage m
GCE:
```shell
curl -sS https://get.k8s.io | MULTIZONE=1 KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
```
AWS:
```shell
curl -sS https://get.k8s.io | MULTIZONE=1 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
```
This step brings up a cluster as normal, still running in a single zone
(but `MULTIZONE=1` has enabled multi-zone capabilities).
(but `MULTIZONE=true` has enabled multi-zone capabilities).
### Nodes are labeled
@ -124,14 +124,14 @@ created instead.
GCE:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
```
On AWS we also need to specify the network CIDR for the additional
subnet, along with the master internal IP address:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
```
@ -235,13 +235,13 @@ across zones. First, let's launch more nodes in a third zone:
GCE:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
```
AWS:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
```
Verify that you now have nodes in 3 zones:
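One way to check is to list the nodes with their labels and look for the zone label (a sketch; `failure-domain.beta.kubernetes.io/zone` is the standard zone label key at this release):

```shell
# Show each node's labels; the zone label indicates which zone the node landed in.
kubectl get nodes --show-labels | grep failure-domain.beta.kubernetes.io/zone
```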


@ -46,6 +46,7 @@ kube-system <none> Active
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system


@ -1,6 +1,9 @@
---
---
* TOC
{:toc}
__Disclaimer__: Network plugins are in alpha, and the contents of this document will change rapidly.
Network plugins in Kubernetes come in a few flavors:
@ -18,24 +21,25 @@ The kubelet has a single default network plugin, and a default network common to
## Network Plugin Requirements
Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like docker with a bridge) work correctly with the iptables proxy.
### Exec
Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e. if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/pkg/kubelet/network/exec/exec.go) for more details.
Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e. if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/pkg/kubelet/network/exec/exec.go) for more details.
### CNI
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads the first CNI configuration file from `--network-plugin-dir` and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/appc/cni/blob/master/SPEC.md), and any required CNI plugins referenced by the configuration must be present in `/opt/cni/bin`.
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads the first CNI configuration file from `--network-plugin-dir` and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md), and any required CNI plugins referenced by the configuration must be present in `/opt/cni/bin`.
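As a sketch, enabling CNI on the kubelet might look like the following (the configuration directory is an illustrative assumption):

```shell
# Select the CNI plugin; the kubelet reads the first CNI config file found in
# --network-plugin-dir, and the CNI binaries it references must be in /opt/cni/bin.
kubelet --network-plugin=cni --network-plugin-dir=/etc/cni/net.d
```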
### kubenet
The Linux-only kubenet plugin provides functionality similar to the `--configure-cbr0` kubelet command-line option. It creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node through either configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host. The kubenet plugin is currently mutually exclusive with, and will eventually replace, the --configure-cbr0 option. It is also currently incompatible with the flannel experimental overlay.
The Linux-only kubenet plugin provides functionality similar to the `--configure-cbr0` kubelet command-line option. It creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node either through configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host. The kubenet plugin is currently mutually exclusive with, and will eventually replace, the --configure-cbr0 option. It is also currently incompatible with the flannel experimental overlay.
The plugin requires a few things (an example invocation follows this list):
* The standard CNI `bridge` and `host-local` plugins to be placed in `/opt/cni/bin`.
* The standard CNI `bridge` and `host-local` plugins are required. Kubenet will first search for them in `/opt/cni/bin`. Specify `network-plugin-dir` to supply an additional search path; the first match found takes effect.
* Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
* Kubelet must also be run with the `--reconcile-cidr` argument to ensure the IP subnet assigned to the node by configuration or the controller-manager is propagated to the plugin
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
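Putting the flags from the list above together, a hedged sketch (the CIDR values are placeholders) looks like:

```shell
# kubelet side: enable kubenet and propagate the node's pod CIDR to the plugin.
kubelet --network-plugin=kubenet --reconcile-cidr --pod-cidr=10.123.45.0/24

# controller-manager side (alternative to setting --pod-cidr on each kubelet):
# allocate per-node CIDRs from a cluster-wide range.
kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.123.0.0/16
```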


@ -6,4 +6,4 @@ spec:
hard:
persistentvolumeclaims: "2"
services.loadbalancers: "2"
services.nodeports: "0"
services.nodeports: "0"


@ -74,6 +74,7 @@ services.nodeports 0 0
The quota system will now prevent users from creating more than the specified amount for each resource.
## Step 3: Apply a compute-resource quota to the namespace
To limit the amount of compute resource that can be consumed in this namespace,
@ -124,6 +125,7 @@ $ kubectl get pods --namespace=quota-example
```
What happened? I have no pods! Let's describe the deployment to get a view of what is happening.
```shell
$ kubectl describe deployment nginx --namespace=quota-example


@ -136,7 +136,20 @@ Make sure the environment variables you used to provision your cluster are still
cluster/kube-down.sh
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
AWS | Saltstack | Ubuntu | OVS | [docs](/docs/getting-started-guides/aws) | | Community ([@justinsb](https://github.com/justinsb))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.
and using a Kubernetes cluster.


@ -0,0 +1,174 @@
---
---
* TOC
{:toc}
## Prerequisites
1. An Azure subscription. If you don't already have one, you may create one on [azure.microsoft.com](https://azure.microsoft.com).
2. An account with Owner access to the subscription.
3. Both `docker` and `jq` need to be installed and available on `$PATH`.
## Cluster operations
### Cluster bring-up
```shell
export KUBERNETES_PROVIDER=azure; curl -sS https://get.k8s.io | bash
```
Note: if you receive an error "the input device is not a TTY", then you need to start the deployment manually.
```shell
cd ~/kubernetes
./cluster/kube-up.sh
```
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/kube-up.sh)
which in turn calls [cluster/azure/util.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/azure/util.sh)
using [cluster/azure/config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/azure/config-default.sh).
You must set `AZURE_SUBSCRIPTION_ID` or you will receive errors. Prior to Kubernetes 1.3.0, you must also set `AZURE_TENANT_ID`.
These may be set in `cluster/azure/config-default.sh` or set as environment variables:
```shell
export AZURE_SUBSCRIPTION_ID="<subscription-guid>"
export AZURE_TENANT_ID="<tenant-guid>" # only needed for Kubernetes < v1.3.0.
```
These values can be overridden by setting them in `cluster/azure/config-default.sh` or as environment variables. They are shown here with their default values:
```shell
export AZURE_DEPLOY_ID="" # autogenerated if blank
export AZURE_LOCATION="westus"
export AZURE_RESOURCE_GROUP="" # generated from AZURE_DEPLOY_ID if unset
export AZURE_MASTER_SIZE="Standard_A1"
export AZURE_NODE_SIZE="Standard_A1"
export AZURE_USERNAME="kube"
export NUM_NODES=3
export AZURE_AUTH_METHOD="device"
```
By default, this will deploy a cluster with 4 `Standard_A1`-sized VMs: one master node, three worker nodes. This process takes about 5 to 10 minutes. Once the cluster is up, connection information to the cluster will be displayed. Additionally, your `kubectl` configuration will be updated to know about this cluster and this new cluster will be set as the active context.
The Azure deployment process produces an output directory `cluster/azure/_deployments/${AZURE_DEPLOY_ID}`. In this directory you will find the PKI and SSH assets created for the cluster, as well as a script named `util.sh`. Here are some examples of its usage:
```shell
$ cd cluster/azure/_deployments/kube-20160316-001122/
# This uses the client cert with curl to make an http call to the apiserver.
$ ./util.sh curl api/v1/nodes
# This uses the client cert with kubectl to target this deployment's apiserver.
$ ./util.sh kubectl get nodes
# This alters the current kubectl configuration to point at this cluster.
$ ./util.sh configure-kubectl
# This will deploy the kube-system namespace, the SkyDNS addon, and the kube-dashboard addon.
$ ./util.sh deploy-addons
# This uses the ssh private key to copy the private key itself to the master node.
$ ./util.sh copykey
# This uses the ssh private key to open an ssh connection to the master.
$ ./util.sh ssh
```
### Cluster deployment examples
#### Deploy the `kube-system` namespace
The cluster addons are created in the `kube-system` namespace.
For versions of Kubernetes before 1.3.0, this must be done manually. Starting with 1.3.0, the
namespace is created automatically as part of the Azure bring-up. For versions prior to 1.3.0, you may
execute this to create the `kube-system` namespace:
```shell
kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/kube-system.yaml
```
#### Using `kubectl proxy`
`kubectl proxy` is currently used to access deployed services.
```shell
kubectl proxy --port=8001
```
Deployed services are available at: `http://localhost:8001/api/v1/proxy/namespaces/<namespace>/services/<service_name>`.
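For example, once the Guestbook example described below is deployed, its frontend can be fetched through the proxy:

```shell
# Fetch the Guestbook frontend service through the local kubectl proxy.
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/frontend
```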
#### Addon: SkyDNS
You can deploy the [SkyDNS addon](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster/addons/dns):
```shell
kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/skydns.yaml
```
#### Addon: Kube-Dashboard
This will deploy the [`kube-dashboard`](https://github.com/kubernetes/dashboard) addon:
```shell
kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/kube-dashboard.yaml
```
The dashboard is then available at: `http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/dashboard-canary`.
#### Example: Guestbook
This will deploy the [`guestbook example`](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/examples/guestbook/README.md) (the all-in-one variant):
```shell
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.2/examples/guestbook/all-in-one/guestbook-all-in-one.yaml
```
The guestbook is then available at: `http://localhost:8001/api/v1/proxy/namespaces/default/services/frontend`.
### Cluster scaling
The `azkube` tool used internally during `kube-up` can also be used to scale your cluster.
Here's an example of scaling a default deployment of 3 nodes to 10 nodes:
```shell
export AZURE_DEPLOY_ID="kube-20160316-001122"
$ docker run -it -v "$HOME/.azkube:/.azkube" -v "/tmp:/tmp" \
colemickens/azkube:v0.0.5 /opt/azkube/azkube scale \
--deployment-name="${AZURE_DEPLOY_ID}" \
--node-size="Standard_A1" \
--node-count=10
```
### Cluster tear-down
You can tear-down a cluster using `kube-down.sh`:
```shell
export AZURE_DEPLOY_ID="kube-20160316-001122"
$ ./cluster/kube-down.sh
```
Prior to Kubernetes 1.3, the cluster must be deleted manually with the Azure CLI or via the Azure Portal.
### Notes
1. The user account used for these operations must have Owner access to the subscription.
2. You can find your subscription ID in the [Azure Portal](https://portal.microsoft.com). (All Resources → Subscriptions)
3. The `AZURE_AUTH_METHOD` environment variable controls what authentication mechanism is used when bringing up the cluster. By default it is set to `device`. This allows the user to log in via a web browser. This interactive step can be automated by creating a Service Principal, setting `AZURE_AUTH_METHOD=client_secret` and setting `AZURE_CLIENT_ID` + `AZURE_CLIENT_SECRET` as appropriate for your Service Principal.
4. The `--node-size` used in the `scale` command must be the same size deployed initially or it will not have the desired effect.
5. Cluster tear-down requires manual intervention, because it deletes the entire resource group and someone else may have deployed other resources since the initial deployment. For this reason you must confirm the list of resources that are to be deleted. If you wish to skip it, you may set `AZURE_DOWN_SKIP_CONFIRM` to `true` (see the sketch after this list). This will delete everything in the resource group that was deployed to.
6. If you are deploying from a checkout of `kubernetes`, then you will need to take an additional step to ensure that a `hyperkube` image is available. You can set `AZURE_DOCKER_REGISTRY` and `AZURE_DOCKER_REPO` and the deployment will ensure that a hyperkube container is built and available in the specified Docker registry. That `hyperkube` image will then be used throughout the cluster for running the Kubernetes services. Alternatively, you may set `AZURE_HYPERKUBE_SPEC` to point to a custom `hyperkube` image.
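For example, a non-interactive tear-down as described in note 5 might look like:

```shell
# Skip the interactive confirmation; this deletes everything in the deployment's resource group.
export AZURE_DOWN_SKIP_CONFIRM=true
./cluster/kube-down.sh
```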
## Further reading
* Please see the [azkube](https://github.com/colemickens/azkube) repository for more information about the deployment tool that manages the deployment.


@ -3,6 +3,9 @@
You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release.
If you just want to run Kubernetes locally for development, we recommend using Minikube. You can download Minikube [here](https://github.com/kubernetes/minikube/releases/latest).
Minikube sets up a local VM that runs a Kubernetes cluster securely, and makes it easy to work with that cluster.
* TOC
{:toc}


@ -1,7 +1,7 @@
---
---
---
---
* TOC
* TOC
{:toc}
## Prerequisites
@ -20,38 +20,38 @@ The Kubernetes package provides a few services: kube-apiserver, kube-scheduler,
Hosts:
```conf
```conf
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
```
**Prepare the hosts:**
* Create a virt7-docker-common-release repo on all hosts - centos-{master,minion} with the following information.
```conf
```conf
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```
```
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
```shell
```shell
yum -y install --enablerepo=virt7-docker-common-release kubernetes
```
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
```shell
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
```
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
```shell
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
@ -63,20 +63,20 @@ KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
```shell
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such:
```shell
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
@ -94,25 +94,25 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
```
* Start the appropriate services on master:
```shell
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
```
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet, then start the kubelet and kube-proxy.***
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
```shell
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
@ -127,28 +127,38 @@ KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
```
```
* Start the appropriate services on node (centos-minion).
```shell
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
```shell
```shell
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
```
```
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -332,3 +332,10 @@ These are the known items that don't work on CenturyLink cloud but do work on ot
If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md)
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.


@ -78,3 +78,12 @@ SSH to it using the key that was created and using the _core_ user and you can l
a017c422... <node #1 IP> role=node
ad13bf84... <master IP> role=master
e9af8293... <node #2 IP> role=node
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | | Community ([@runseb](https://github.com/runseb))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -1,11 +1,11 @@
---
---
* TOC
{:toc}
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
---
---
* TOC
{:toc}
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
### Prerequisites
@ -15,83 +15,83 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
To get started, you need to checkout the code:
```shell
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.
First, you need to install some of the dependencies with
```shell
```shell
npm install
```
```
Now, all you need to do is:
```shell
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads should only be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads should only be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
If you need to pass Azure specific options for the creation script you can do this via additional environment variables e.g.
```shell
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
# or
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
```
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
# or
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
```
![VMs in Azure](/images/docs/initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:
```shell
```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
```
Let's login to the master node like so:
```shell
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
```shell
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
```
## Deploying the workload
Let's follow the Guestbook example now:
```shell
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Pending` to `Running`.
```shell
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
```shell
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
frontend-4wahe 1/1 Running 0 4m
frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
You will need to open another terminal window on your machine and go to the same directory.
First, let's set the size of the new VMs:
```shell
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```shell
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You will now have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
```shell
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
data: Protcol : tcp
data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
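For example, you can sanity-check it from your own machine with `curl` (the IP below is the example address from the output above; yours will differ):
```shell
curl -I http://137.117.156.164/
```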
## Next steps
You should probably try to deploy other [example apps](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
If you don't wish to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.
"author": "Ilya Dmitrichenko <errordeveloper@gmail.com>",
"license": "Apache 2.0",
"dependencies": {
"azure-cli": "^0.9.9",
"azure-cli": "^0.10.1",
"colors": "^1.0.3",
"js-yaml": "^3.2.5",
"openssl-wrapper": "^0.2.1",
---
---
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
## Prerequisites and Assumptions
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
- 1 Kubernetes Master
- 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
## Cloud-config
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
We'll use two cloud-config files:
- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node
## Download CoreOS
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
3. Replace the following variables in the `master-config.yaml` file.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
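For example (a sketch; the IP address is hypothetical and depends on what your DHCP server assigned to the master):
```shell
ssh core@10.0.0.10
```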
### Configure TLS
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem` and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
2. Send the three files to your master host (using `scp` for example).
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set Permissions
# (a sketch: restrict the private key so only root can read it)
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
4. Restart the kubelet to pick up the changes:
```shell
sudo systemctl restart kubelet
```
## Configure the compute nodes
The following steps will set up a single Kubernetes node for use as a compute host.
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
4. Replace the following placeholders with the contents of their respective files.
- `<CA_CERT>`: Complete contents of `ca.pem`
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> <CA_CERT>
> ```
>
> should look like this once the certificate is in place:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> -----BEGIN CERTIFICATE-----
> MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
> ...<snip>...
> QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
> -----END CERTIFICATE-----
> ```
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
## Configure Kubeconfig
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
```shell
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
Check your work with `kubectl get nodes`.
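For example, run the following from the host where you just configured the `calico` context (node names and readiness timing will differ in your environment):
```shell
kubectl get nodes
```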
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
Kill all pods:
```shell
for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | | Community ([@jeffbean](https://github.com/jeffbean))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
Guide to running an HA etcd cluster with a single master on Azure.
[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
Configure a single master, single worker cluster on VMware ESXi.
<hr/>
[**Single/Multi-node cluster using cloud-config, CoreOS and Foreman**](https://github.com/johscheuer/theforeman-coreos-kubernetes)
Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https://theforeman.org).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires))
Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
```shell
$ dcos package uninstall kubernetes
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
See [here](/docs/getting-started-guides/docker-multinode/deployDNS) for instructions.
Once your cluster has been created you can [test it out](/docs/getting-started-guides/docker-multinode/testing).
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
If the status of any node is `Unknown` or `NotReady`, your cluster is broken.
### Run an application
```shell
kubectl run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
And list the pods:
```shell
kubectl get pods
```
You should see pods landing on the newly added machine.
---
---
**Stop. This guide has been superseded by [Minikube](../minikube/) which is the recommended method of running Kubernetes on your local machine.**
The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
Here's a diagram of what the final result will look like:
## Prerequisites
**Note: These steps have not been tested with the [Docker For Mac or Docker For Windows beta programs](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/).**
1. You need to have docker installed on one machine.
2. Decide what Kubernetes version to use. Set the `${K8S_VERSION}` variable to
a released version of Kubernetes >= "v1.2.0". If you'd like to use the current stable version of Kubernetes, run the following:
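One way to do this (a sketch, assuming the standard `kubernetes-release` bucket) is:
```shell
# Pick up the latest stable release number
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
echo "Using Kubernetes ${K8S_VERSION}"
```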
Example output of `/proc/cmdline`:
```shell
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory=1
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.
---
---
Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
* TOC
{:toc}
## Prerequisites
The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you.
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```shell
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
```
**Make sure your local machine has**
- ansible (must be 1.9.0+)
If not, install them:
```shell
yum install -y ansible git python-netaddr
```
**Now clone down the Kubernetes repository**
```shell
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
```shell
[masters]
kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
```
## Setting up ansible access to your nodes
If you are already running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step...
edit: ~/contrib/ansible/group_vars/all.yml
```yaml
ansible_ssh_user: root
```
**Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
Make sure your local machine (root) has an ssh key pair. If not, generate one:
```shell
ssh-keygen
```
Copy the ssh public key to **all** nodes in the cluster
```shell
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
```
## Setting up the cluster
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough; if not, change them as needed.
```conf
edit: ~/contrib/ansible/group_vars/all.yml
```
**Configure access to kubernetes packages**
Modify `source_type` as below to access kubernetes packages through the package manager.
```yaml
source_type: packageManager
```
**Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
```
**Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml
cluster_logging: true
```
Turn `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
```yaml
cluster_monitoring: true
```
Turn `dns_setup` to true (recommended) or false to enable or disable whole DNS configuration.
```yaml
dns_setup: true
```
**Tell ansible to get to work!**
This will finally set up your whole Kubernetes cluster for you.
```shell
cd ~/contrib/ansible/
./setup.sh
```
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
Run the following on the kube-master:
```shell
kubectl get nodes
```
**Show services running on masters and nodes**
```shell
systemctl | grep -i kube
```
**Show firewall rules on the masters and nodes**
```shell
iptables -nvL
```
**Create /tmp/apache.json on the master with the following contents and deploy pod**
```json
{
"kind": "Pod",
"apiVersion": "v1",
]
}
}
```
```shell
kubectl create -f /tmp/apache.json
```
**Check where the pod was created**
```shell
kubectl get pods
```
**Check Docker status on nodes**
```shell
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node**
```shell
curl http://localhost
```
That's it!
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
---
---
* TOC
{:toc}
## Prerequisites
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy.
Hosts:
```conf
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
```shell
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```shell
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping (a quick check is sketched after the snippet below).
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
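For example, a quick connectivity check from fed-master (hostnames as defined above):
```shell
ping -c 3 fed-node
```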
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of 127.0.0.1; otherwise, you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
* Create /var/run/kubernetes on master:
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Addition of nodes:
* Create the following node.json file on the Kubernetes master node:
```json
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"name": "fed-node",
"labels": { "name": "fed-node-label" }
},
"spec": {
"externalID": "fed-node"
}
}
```
Now create a node object internally in your Kubernetes cluster by running:
```shell
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is reachable from the Kubernetes master node. This guide will discuss how to provision
a Kubernetes node (fed-node) below.
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
###
# Kubernetes kubelet (node) config
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_.
```shell
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (please do not run it now; it is shown just for information):
```shell
kubectl delete -f ./node.json
```
*You should be finished!*
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
```shell
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
```
The Kubernetes multi-node cluster is now set up, with overlay networking provided by flannel.
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
being used in `cluster/config-default.sh` create a new rule with the following
field values:
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
clusters.
[Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.
### Turn-key Cloud Solutions
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
few commands, and have active community support.
- [GCE](/docs/getting-started-guides/gce)
- [AWS](/docs/getting-started-guides/aws)
- [Azure](/docs/getting-started-guides/coreos/azure/) (Weave-based, contributed by WeaveWorks employees)
- [Azure](/docs/getting-started-guides/azure/) (Flannel-based, contributed by Microsoft employee)
- [CenturyLink Cloud](/docs/getting-started-guides/clc)
### Custom Solutions
Here are all the solutions mentioned above in table form.
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | [✓][3] | Commercial
Stackpoint.io | | multi-support | multi-support | [docs](http://www.stackpointcloud.com) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/azure) | | Community ([@colemickens](https://github.com/colemickens))
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires))
Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | | Community ([@jeffbean](https://github.com/jeffbean))
If you do not see your favorite cloud provider listed, many clouds with ssh
access can be configured for
[manual provisioning](https://jujucharms.com/docs/stable/config-manual).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
```shell
usermod -a -G libvirtd $USER
```
#### error: Out of memory initializing network (virsh net-create...)
Ensure libvirtd has been restarted since ebtables was installed.
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
---
---
**Stop. This guide has been superseded by [Minikube](../minikube/) which is the recommended method of running Kubernetes on your local machine.**
* TOC
{:toc}
Your cluster is running, and you want to start running containers!
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
```shell
export KUBERNETES_PROVIDER=local
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get deployments
```
**By: Sandeep Dinesh** - _July 29, 2015_
![image](/images/docs/meanstack/image_0.png)
In [a recent post](http://blog.sandeepdinesh.com/2015/07/running-mean-web-application-in-docker.html), I talked about running a MEAN stack with [Docker Containers.](http://docker.com/)
Thankfully, there is a system we can use to manage our containers in a cluster environment called Kubernetes.
* TOC
{:toc}
## The Basics of Using Kubernetes
Before we jump in and start kubeing it up, it's important to understand some of the fundamentals of Kubernetes.
Instead, you have to build a custom container that has the code already inside it.
To do this, you need to use more Docker. Make sure you have the latest version installed for the rest of this tutorial.
Getting the code:
Before starting, lets get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial.
Then, it creates a folder to store the code, `cd`s into that directory, and copies the code in.
Finally, it specifies the command Docker should run when the container starts, which is to start the app.
## Step 2: Building our Container
Right now, the directory should look like this:
## Step 9: Accessing the App
At this point, everything is up and running. The architecture looks something like this:
![image](/images/docs/meanstack/MEANstack_architecture.svg){: style="max-width:25%"}
By default, port 80 should be open on the load balancer. In order to find the IP address of our app, run this command:
```shell
$ gcloud compute forwarding-rules list
```
And the Database works!
By using Container Engine and Kubernetes, we have a very robust, container based MEAN stack running in production.
[In another post](https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474#.e93x7kuq5), I cover how to setup a MongoDB replica set. This is very important for running in production.
Hopefully I can do some more posts about advanced Kubernetes topics such as changing the cluster size and number of Node.js web server replicas, using different environments (dev, staging, prod) on the same cluster, and doing rolling updates.
Breakdown:
- `hack/build-go.sh` - builds the Go binaries for the current architecture (linux/amd64 when in a docker container)
- `make` - delegates to `hack/build-go.sh`
- `build/run.sh` - executes a command in the build container
- `build/release.sh` - cross compiles Kubernetes for all supported architectures and operating systems (slow)
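For instance, a typical combination of these (a sketch; run from the root of the Kubernetes source tree) is to build the Go binaries inside the build container:
```shell
build/run.sh hack/build-go.sh
```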
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
```shell
Address 1: 10.10.10.10
Name: kubernetes
Address 1: 10.10.10.1
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
## What next?
---
---
* TOC
{:toc}
Minikube starts a single-node Kubernetes cluster locally for purposes of development and testing.
Minikube packages and configures a Linux VM, Docker and all Kubernetes components, optimized for local development.
Minikube supports Kubernetes features such as:
* DNS
* NodePorts
* ConfigMaps and Secrets
* Dashboards
Minikube does not yet support Cloud Provider specific features such as:
* LoadBalancers
* PersistentVolumes
* Ingress
### Requirements
Minikube requires that VT-x/AMD-v virtualization is enabled in BIOS on all platforms.
To check that this is enabled on Linux, run:
```shell
cat /proc/cpuinfo | grep 'vmx\|svm'
```
This command should output something if the setting is enabled.
To check that this is enabled on OSX (most newer Macs have this enabled by default), run:
```shell
sysctl -a | grep machdep.cpu.features | grep VMX
```
This command should output something if the setting is enabled.
#### Linux
Minikube requires the latest [Virtualbox](https://www.virtualbox.org/wiki/Downloads) to be installed on your system.
#### OSX
Minikube requires one of the following:
* The latest [Virtualbox](https://www.virtualbox.org/wiki/Downloads).
* The latest version of [VMWare Fusion](https://www.vmware.com/products/fusion).
### Installation
See the [latest Minikube release](https://github.com/kubernetes/minikube/releases) for installation instructions.
### Download `kubectl`
You will need to download the kubectl client binary for `${K8S_VERSION}` (in this example: `{{page.version}}.0`)
to run commands against the cluster.
Downloads:
- `linux/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl
- `linux/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl
- `linux/arm`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl
- `linux/arm64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl
- `linux/ppc64le`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl
- `OS X/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl
- `OS X/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl
- `windows/amd64`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe
- `windows/386`: http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/386/kubectl.exe
The generic download path is:
```
http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}
```
An example install with `linux/amd64`:
```
curl -sSL "http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl" > /usr/bin/kubectl
chmod +x /usr/bin/kubectl
```
### Starting the cluster
To start a cluster, run the command:
```shell
minikube start
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.99.100:443.
```
This will build and start a lightweight local cluster, consisting of a master, etcd, Docker and a single node.
Minikube will also create a "minikube" context, and set it to default in kubectl.
To switch back to this context later, run this command: `kubectl config use-context minikube`.
Type `minikube stop` to shut the cluster down.
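For example:
```shell
# Switch kubectl back to the minikube context
kubectl config use-context minikube
# Shut the local cluster down when you are finished
minikube stop
```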
Minikube also includes the [Kubernetes dashboard](http://kubernetes.io/docs/user-guide/ui/). Run this command to see the included kube-system pods:
```shell
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-127.0.0.1 1/1 Running 0 35s
kube-system kubernetes-dashboard-9brhv 1/1 Running 0 20s
```
Run this command to open the Kubernetes dashboard:
```shell
minikube dashboard
```
### Test it out
List the nodes in your cluster by running:
```shell
kubectl get nodes
```
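To go one step further, you can try running a small test workload; the deployment name and image below are only for illustration (any small HTTP server image will do):
```shell
# Create a deployment and expose it on a NodePort
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
# Find the allocated NodePort, then curl http://$(minikube ip):<nodeport> from your host
kubectl get service hello-minikube
```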
Minikube contains a built-in Docker daemon for running containers.
If you use another Docker daemon for building your containers, you will have to publish them to a registry before Minikube can pull them.
You can use Minikube's built-in Docker daemon to avoid this extra step of pushing your images.
Use the built-in Docker daemon with:
```shell
eval $(minikube docker-env)
```
This command sets up the Docker environment variables so a Docker client can communicate with the minikube Docker daemon:
```shell
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42c643fea98b gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1 "/dashboard --port=90" 3 minutes ago Up 3 minutes k8s_kubernetes-dashboard.1d0d880_kubernetes-dashboard-9brhv_kube-system_5062dd0b-370b-11e6-84b6-5eab1f51187f_134cba4c
475db7659edf gcr.io/google_containers/pause-amd64:3.0 "/pause" 3 minutes ago Up 3 minutes k8s_POD.2225036b_kubernetes-dashboard-9brhv_kube-system_5062dd0b-370b-11e6-84b6-5eab1f51187f_e76d8136
e9096501addf gcr.io/google-containers/kube-addon-manager-amd64:v2 "/opt/kube-addons.sh" 3 minutes ago Up 3 minutes k8s_kube-addon-manager.a1c58ca2_kube-addon-manager-127.0.0.1_kube-system_48abed82af93bb0b941173334110923f_82655b7d
64748893cf7c gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 minutes ago Up 4 minutes k8s_POD.d8dbe16c_kube-addon-manager-127.0.0.1_kube-system_48abed82af93bb0b941173334110923f_c67701c3
```
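As a concrete sketch (the image name and tag below are assumptions for illustration), building an image against this daemon makes it immediately available to the cluster without a push:
```shell
# Point the Docker client at the Minikube daemon, then build as usual
eval $(minikube docker-env)
docker build -t my-local-image:v1 .
# Reference my-local-image:v1 in your pod or deployment spec; because the image
# already exists on the node, no registry push is needed (just make sure the
# imagePullPolicy is not Always, or the kubelet will try to pull it anyway).
```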

View File

@ -237,3 +237,12 @@ To bring down your cluster, issue the following command:
KUBERNETES_PROVIDER=openstack-heat ./cluster/kube-down.sh
```
If you have changed the default `$STACK_NAME`, you must specify the name. Note that this will not remove any Cinder volumes created by Kubernetes.
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat) | | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -47,4 +47,13 @@ The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8)
[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8)
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -60,4 +60,14 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
- eth0 - Public Interface used for servers/containers to reach the internet
- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Rackspace | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/rackspace) | | Community ([@doublerr](https://github.com/doublerr))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -837,3 +837,13 @@ pinging or SSH-ing from one node to another.
If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](/docs/troubleshooting#slack).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
any | any | any | any | [docs](/docs/getting-started-guides/scratch) | | Community ([@erictune](https://github.com/erictune))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -436,7 +436,7 @@ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/examples/) to set up other services on your cluster.
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
@ -463,3 +463,13 @@ ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Ubuntu | Calico | [docs](/docs/getting-started-guides/ubuntu-calico) | | Community ([@djosborne](https://github.com/djosborne))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -280,4 +280,14 @@ You can use the `kubectl` command to check if the newly upgraded kubernetes clus
To make sure the version of the upgraded cluster is what you expect, you will find these commands helpful.
* upgrade all components or master: `$ kubectl version`. Check the *Server Version*.
* upgrade node `vcap@10.10.102.223`: `$ ssh -t vcap@10.10.102.223 'cd /opt/bin && sudo ./kubelet --version'`
* upgrade node `vcap@10.10.102.223`: `$ ssh -t vcap@10.10.102.223 'cd /opt/bin && sudo ./kubelet --version'`*
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -1,6 +1,8 @@
---
---
**Stop. This guide has been superseded by [Minikube](../minikube/) which is the recommended method of running Kubernetes on your local machine.**
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
* TOC

View File

@ -94,3 +94,13 @@ The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You
can log into any VM as the `kube` user to poke around and figure out what is
going on (find yourself authorized with your SSH key, or use the password
`kube` otherwise).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Vmware | | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@pietern](https://github.com/pietern))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -139,11 +139,11 @@ Its now time to deploy your own containerized application to the Kubernetes c
$ gcloud container clusters get-credentials hello-world
```
**The rest of this document requires both the kubernetes client and server version to be 1.2. Run `kubectl version` to see your current versions.** For 1.1 see [this document](https://github.com/kubernetes/kubernetes.github.io/blob/release-1.1/docs/hellonode.md).
**The rest of this document requires both the Kubernetes client and server version to be 1.2. Run `kubectl version` to see your current versions.** For 1.1 see [this document](https://github.com/kubernetes/kubernetes.github.io/blob/release-1.1/docs/hellonode.md).
## Create your pod
A kubernetes **[pod](/docs/user-guide/pods/)** is a group of containers, tied together for the purposes of administration and networking. It can contain a single container or multiple.
A Kubernetes **[pod](/docs/user-guide/pods/)** is a group of containers, tied together for the purposes of administration and networking. It can contain a single container or multiple.
Create a pod with the `kubectl run` command:
@ -200,7 +200,7 @@ At this point you should have our container running under the control of Kuberne
## Allow external traffic
By default, the pod is only accessible by its internal IP within the Kubernetes cluster. In order to make the `hello-node` container accessible from outside the kubernetes virtual network, you have to expose the pod as a kubernetes **[service](/docs/user-guide/services/)**.
By default, the pod is only accessible by its internal IP within the Kubernetes cluster. In order to make the `hello-node` container accessible from outside the Kubernetes virtual network, you have to expose the pod as a Kubernetes **[service](/docs/user-guide/services/)**.
From our development machine we can expose the pod to the public internet using the `kubectl expose` command combined with the `--type="LoadBalancer"` flag. The flag is needed for the creation of an externally accessible ip:
@ -287,10 +287,10 @@ gcloud docker push gcr.io/PROJECT_ID/hello-node:v2
Building and pushing this updated image should be much quicker as we take full advantage of the Docker cache.
Were now ready for kubernetes to smoothly update our deployment to the new version of the application. In order to change
Were now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change
the image label for our running container, we will need to edit the existing *hello-node deployment* and change the image from
`gcr.io/PROJECT_ID/hello-node:v1` to `gcr.io/PROJECT_ID/hello-node:v2`. To do this, we will use the `kubectl edit` command.
This will open up a text editor displaying the full deployment yaml configuration. It isn't necessary to understand the full yaml config
This will open up a text editor displaying the full deployment yaml [configuration](/docs/user-guide/configuring-containers/). It isn't necessary to understand the full yaml config
right now, instead just understand that by updating the `spec.template.spec.containers.image` field in the config we are telling
the deployment to update the pods to use the new image.
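In this tutorial, that edit might look like the following (the deployment name comes from the earlier `kubectl run hello-node` step):
```shell
# Opens the deployment's YAML in your editor; change the image tag from v1 to v2 and save
kubectl edit deployment hello-node
```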
@ -373,7 +373,7 @@ This user interface allows you to get started quickly and enables some of the fu
Enjoy the Kubernetes graphical dashboard and use it for deploying containerized applications, as well as for monitoring and managing your clusters!
![image](/images/docs/ui-dashboard-cards-menu.png)
![image](/images/docs/ui-dashboard-workloadview.png)
Learn more about the web interface by taking the [Dashboard tour](/docs/user-guide/ui/).

View File

@ -23,4 +23,4 @@ Explore the glossary of essential Kubernetes concepts. Some good starting points
## Design Docs
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/architecture.md) and [Kubernetes Design Overview](https://github.com/kubernetes/kubernetes/tree/release-1.1/docs/design).
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/architecture.md) and [Kubernetes Design Overview](https://github.com/kubernetes/kubernetes/tree/release-1.1/docs/design).

114
docs/sitemap.md Normal file
View File

@ -0,0 +1,114 @@
---
---
<script language="JavaScript">
var dropDownsPopulated = false;
$( document ).ready(function() {
// When the document loads, get the metadata JSON, and kick off tbl render
$.get("/metadata.txt", function(data, status) {
metadata = $.parseJSON(data);
metadata.pages.sort(dynamicSort("t"));
mainLogic()
$(window).bind( 'hashchange', function(e) {
mainLogic();
});
});
});
function mainLogic()
{
// If there's a tag filter, change the table/drop down output
if (!dropDownsPopulated) populateDropdowns();
var tag=window.location.hash.replace("#","");
if(tag) {
tag = $.trim(tag);
for (i=0;i<tagName.length;i++) {
querystringTag = tagName[i] + "=";
if (tag.indexOf(querystringTag) > -1)
{
console.log("in mainLog: querystringTag of " + querystringTag + " matches tag of " + tag);
tag = tag.replace(querystringTag,"");
selectDropDown(tagName[i],tag);
topicsFilter(tagName[i],tag,"output");
}
}
} else {
currentTopics = metadata.pages;
}
renderTable(currentTopics,"output");
}
function populateDropdowns()
{
// Keeping mainLogic() brief by functionizing the initialization of the
// drop-down filter boxes
for(i=0;i<metadata.pages.length;i++)
{
var metadataArrays = [metadata.pages[i].cr,metadata.pages[i].or,metadata.pages[i].mr];
for(j=0;j<metadataArrays.length;j++)
{
if (metadataArrays[j]) {
for (k=0;k<metadataArrays[j].length;k++) {
if (typeof storedTagsArrays[j] == 'undefined') storedTagsArrays[j] = new Array();
storedTagsArrays[j][metadataArrays[j][k][tagName[j]]] = true;
// ^ conceptList[metadata.pages[i].cr[k].concept] = true; (if rolling through concepts)
// ^ conceptList['container'] = true; (ultimate result)
// ^ objectList[metadata.pages[i].or[k].object] = true; (if rolling through objects)
// ^ objectList['restartPolicy'] = true; (ultimate result)
}
}
}
}
var output = new Array();
for(i=0;i<tagName.length;i++)
{
// Phew! All tags in conceptList, objectList, and commandList!
// Loop through them and populate those drop-downs through html() injection
output = [];
output.push("<select id='" + tagName[i] + "' onchange='dropFilter(this)'>");
output.push("<option>---</option>");
Object.keys(storedTagsArrays[i]).sort().forEach(function (key) {
output.push("<option>" + key + "</option>");
});
output.push("</select>")
$(dropDowns[i]).html(output.join(""));
}
dropDownsPopulated = true;
}
function dropFilter(srcobj)
{
// process the change of a drop-down value
// the ID of the drop down is either command, object, or concept
// these exact values are what topicsFilter() expects, plus a filter val
// which we get from .text() of :selected
console.log("dropFilter:" + $(srcobj).attr('id') + ":" + $(srcobj).find(":selected").text());
topicsFilter($(srcobj).attr('id').replace("#",""),$(srcobj).find(":selected").text(),"output");
for(i=0;i<tagName.length;i++)
{
if($(srcobj).attr('id')!=tagName[i]) selectDropDown(tagName[i],"---");
}
}
function selectDropDown(type,tag)
{
// change drop-down selection w/o filtering
$("#" + type).val(tag);
}
</script>
<style>
#filters select{
font-size: 14px;
border: 1px #000 solid;
}
#filters {
padding-top: 20px;
}
</style>
Click tags or use the drop-downs to filter. Click table headers to sort or reverse-sort.
<p id="filters">
Filter by Concept: <span id="conceptFilter" /><br/>
Filter by Object: <span id="objectFilter" /><br/>
Filter by Command: <span id="commandFilter" />
</p>
<div id="output" />

View File

@ -0,0 +1,3 @@
---
---
{% include templates/kubectl.md %}

View File

@ -0,0 +1,3 @@
---
---
{% include templates/task.md %}

View File

@ -1,32 +1,80 @@
---
---
{% assign concept="Replication Controller" %}
{% assign concept="Pod" %}
{% capture what_is %}
A Replication Controller does x and y and z...(etc, etc, text goes on)
A pod is the vehicle for running containers in Kubernetes. A pod consists of:
- One or more containers
- An IP address that is unique within the cluster
- Optionally: Environment variables, storage volumes, and enterprise features (such as health checking)
Resources are shared amongst containers in the pod. Containers within a pod also share an IP address and port space, and can find each other via localhost, or interprocess communications (such as semaphores).
![Pod diagram](/images/docs/pod-overview.svg){: style="max-width: 25%" }
{% comment %}https://drive.google.com/open?id=1pQe4-s76fqyrzB8f3xoJo4MPLNVoBlsE1tT9MyLNINg{% endcomment %}
{% endcapture %}
{% capture when_to_use %}
You should use Replication Controller when...
Pods are used any time you need a container to be run. However, they are rarely created by a user, and are instead automatically created by controllers such as jobs, replication controllers, deployments, or daemon sets. The following table describes the strategy each controller uses to create pods.
| Controller | Usage Strategy |
|------------|----------------|
| Deployment | For running pods as a continuous and healthy application |
| Replication Controller | Used for the same purpose as Deployments (which have superseded Replication Controllers) |
| Jobs | For running pods "to completion" (which are then shut down) |
| Daemon Set | Mainly for performing operations on any nodes that match given parameters |
{% endcapture %}
{% capture when_not_to_use %}
You should not use Replication Controller if...
Do not use pods directly. Pods should always be managed by a controller.
{% endcapture %}
{% capture status %}
The current status of Replication Controllers is...
To retrieve the status of a pod, run the following command:
```shell
kubectl get pod <name>
```
| Return Value | Description |
|--------------|-------------|
| `READY` | Describes the number of containers that are ready to receive traffic. |
| `STATUS` | A value from the `PodPhase` enum describing the current status of the pod. Can be `Running`, `Pending`, `Succeeded`, `Failed`, and `Unknown`. |
TODO: Link to refpage for `kubectl get pod`
To get a full description of a pod, including past events, run the following command:
```shell
kubectl describe pod <name>
```
TODO: Link to refpage for `kubectl describe pod`
#### Possible status results
| Value | Description |
|------------|----------------|
| `Pending` | The pod has been accepted by the cluster, but one or more of its containers has not been created yet. |
| `Running` | The pod has been bound to a node and all of its containers have been created. |
| `Succeeded` | All containers in the pod have terminated successfully. |
| `Failed` | All containers in the pod have terminated, and at least one container terminated in failure. |
| `Unknown` | The state of the pod could not be obtained. |
{% endcapture %}
{% capture required_fields %}
* `kind`: Always `Pod`.
* `apiVersion`: Currently `v1`.
* `metadata`: An object containing:
* `name`: Required if `generateName` is not specified. The name of this pod.
It must be an
[RFC1035](https://www.ietf.org/rfc/rfc1035.txt) compatible value and be
unique within the namespace.
{% capture usage %}
Pods are defined when configuring the controller of your choice. In controller specifications,
the parts that define the contents of the pod are inside the `template:` section.
```yaml
# Illustrative example (mirrors the nginx Deployment used elsewhere in these demos):
# everything under template: describes the pod the controller will create.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
{% endcapture %}
{% include templates/concept-overview.md %}

View File

@ -1,19 +1,246 @@
---
---
# Template Demos
* TOC
{:toc}
This page demonstrates new doc templates being worked on.
## Before you Begin: Get the docs code checked out locally
Click the headings to see the source of the template in GitHub.
Check out the kubernetes/kubernetes.github.io repo and the docsv2 branch.
## [Concept Overviews](https://github.com/kubernetes/kubernetes.github.io/blob/master/_includes/templates/concept-overview.md)
### Step 1: Fork and Clone the repo
- [Blank page that is trying to use template](blank/)
- [Partially filled out page](partial/)
- [Completely filled out page](filledout/)
- Fork [kubernetes/kubernetes.github.io](https://github.com/kubernetes/kubernetes.github.io)
- [Setup your GitHub authentication using ssh](https://help.github.com/articles/generating-an-ssh-key/)
- Clone the repo under ~/go/src/k8s.io
## [Landing Pages](https://github.com/kubernetes/kubernetes.github.io/blob/master/_includes/templates/landing-page.md)
```shell
cd ~/go/src/k8s.io
git clone git@github.com:<your-github-username>/kubernetes.github.io
cd kubernetes.github.io
git remote add upstream https://github.com/kubernetes/kubernetes.github.io.git
```
- [Blank](blanklanding/)
- [Filled Out](landingpage/)
### Step 2: Switch to the docsv2 branch
Docs v2 development is being performed in the `docsv2` branch. This is the branch
you want to be working from.
From ~/go/src/k8s.io/kubernetes.github.io:
```shell
git checkout -b docsv2
git fetch upstream
git reset --hard upstream/docsv2
```
### Step 3: Make sure you can serve rendered docs
One option is to simply rename your fork's repo on GitHub.com to `yourusername.github.io`, which will auto-stage your commits at that URL.
Or, just use Docker! Run this from within your local `kubernetes.github.io` directory and you should be good:
```shell
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 johndmulhausen/k8sdocs
```
The site will then be viewable at [http://localhost:4000/](http://localhost:4000/).
Or, you can [follow the instructions](/editdocs/) for running a from-scratch staging server, which is both the most performant option and the biggest pain to get set up.
## Writing Docs Using Templates
### Types of Templates
- Concept Template
- Introduce Kubernetes API objects, e.g. Pod
- Task Template
- Step-by-step guide for "Doing X".
- Useful for breaking down various ways of configuring Concepts into sub-topics
- Landing Pages Template
- Collection of clickable cards on a grid
- Useful for directing users to actual content from a visual Table of Contents
## Concept Overview Template Details
A concept overview covers the most essential, important information about core
Kubernetes concepts and features. Examples of Concepts include `Pod`,
`Deployment`, `Service`, etc.
### Reference Examples
- [Link to Example Template: Source](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/docsv2/docs/pod/index.md)
- [Link to Example Template: Rendered](http://k8sdocs.github.io/docs/pod/)
### Usage
### Creating the file
To create a new concept overview page, create a new directory with the concept
name under the docs directory and an index.md file.
e.g. `docs/your-concept-name/index.md`.
### Adding the page to navigation
Once your page is saved, somewhere in the `/docs/` directory, add a reference to the `concepts.yml` file under `/_data/` so that it will appear in the left-hand navigation of the site. This is also where you add a title to the page.
### Adding the Template sections
- concept: the concept name e.g. Pod
- what_is: a one-sentence description of the function/role of the concept. Diagrams are helpful.
- when_to_use: disambiguate when to use this vs alternatives
- when_not_to_use: highlight common anti-patterns
- status: how to get the status for this object using kubectl
- usage: example yaml
- template: include the template at the end
### Tags structure
- `glossary:` a brief (~140 character) definition of what this concept is.
- `object_rankings:` associates the page with API objects/functions.
- `concept_rankings:` associates the page with Kubernetes concepts.
- `command_rankings:` associates the page with CLI commands
In each case, the association is ranked. If ranked "1," the topic will surface as a "Core Topic" (of high importance) on various associated pages. If ranked "2," the topic will be grouped under "Advanced Topics," which are deemed less essential.
Only ranks 1 and 2 are supported.
Tags are mandatory and should be thorough; they are the connective tissue of the site. To see them in action, [visit our sitemap](http://k8sdocs.github.io/docs/sitemap/).
```liquid{% raw %}
---
glossary: A pod is the vehicle for running containers in Kubernetes.
object_rankings:
- object: pod
rank: 1
concept_rankings:
- concept: pod
rank: 1
command_rankings:
- command: kubectl describe
rank: 1
- command: kubectl get
rank: 1
---
{% capture concept %} concept-name-here {% endcapture %}
{% capture what_is %} description-of-concept-here {% endcapture %}
{% capture when_to_use %} when-to-use-here {% endcapture %}
{% capture when_not_to_use %} anti-patterns-here {% endcapture %}
{% capture status %} how-to-get-with-kubectl-here {% endcapture %}
{% capture usage %} yaml-config-usage-here {% endcapture %}
{% include templates/concept-overview.md %}
{% endraw %}```
## Task Template Details
A task page offers step-by-step instructions for completing a task with Kubernetes. **A task page should be narrowly focused on task completion and not delve into concepts or reference information.**
### Example
- [Link to Example Template: Source](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/docsv2/docs/tasks/explicitly-scheduling-pod-node.md)
- [Link to Example Template: Rendered](http://k8sdocs.github.io/docs/tasks/explicitly-scheduling-pod-node/)
### Usage
### Creating the file
To create a new task page, create a file under docs/tasks/task-name.
e.g. `docs/tasks/your-task-name`.
Task filenames should match the title, chaining words with dashes in all lowercase, omitting articles and prepositions. For example, the topic "Explicitly Scheduling a Pod on a Node" is stored in file `/docs/tasks/explicitly-scheduling-pod-node.md`.
### Adding the page to navigation
Add a reference to the `tasks.yml` file under `/_data/` so that it will appear in the left-hand navigation of the site. This is also where you add a title to the page.
### Adding the Template sections
- metadata: structured description of the doc content
- purpose: one sentence description of the task and motivation
- recommended_background: List of Concepts referenced or other Tasks, Tutorials that provide needed context
- step_by_step: Add multiple sections, one per step in the task.
- template: include the template at the end
### Tags structure
- `object_rankings:` associates the page with API objects/functions.
- `concept_rankings:` associates the page with Kubernetes concepts.
- `command_rankings:` associates the page with CLI commands
In each case, the association is ranked. If ranked "1," the topic will surface as a "Core Topic" (of high importance) on various associated pages. If ranked "2," the topic will be grouped under "Advanced Topics," which are deemed less essential.
Only ranks 1 and 2 are supported.
Tags are mandatory and should be thorough; they are the connective tissue of the site. To see them in action, [visit our sitemap](http://k8sdocs.github.io/docs/sitemap/).
```liquid{% raw %}
---
object_rankings:
- object: nodeAffinity
rank: 1
- object: nodeSelector
rank: 2
concept_rankings:
- concept: node
rank: 1
- concept: pod
rank: 1
command_rankings:
- command: kubectl label
rank: 1
- command: kubectl get
rank: 2
---
{% capture purpose %} task-description-here {% endcapture %}
{% capture recommended_background %} prereq-reading-here {% endcapture %}
{% capture step_by_step %} single-step-here {% endcapture %}
{% include templates/task.md %}
{% endraw %}```
## Landing Pages
Landing pages are a set of clickable "cards" arranged in a grid. Each card has a heading and description, and optionally, a thumbnail image. They are meant to be index pages that quickly forward users on to deeper content.
### Demos
- [Link to Example Landing Page](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/templatedemos/landingpage.md)
- [Link to Rendered Landing Page](landingpage/)
### Usage
To use this template, create a new file with these contents. Essentially, you declare the cards you want by inserting the following YAML structure in the front-matter YAML section at the top of the page, and the body of the page just has the include statement.
```yaml
---
cards:
- progression: no #"yes" = display cards as linearly progressing
- card:
title: Mean Stack
image: /images/docs/meanstack/image_0.png
description: Lorem ipsum dolor it verberum.
# repeat -card: items as necessary
---
{% raw %}{% include templates/landing-page.md %}{% endraw %}
```
### Adding page to navigation
Once your page is saved, somewhere in the `/docs/` directory, add a reference to the appropriate .yml file under `/_data/` so that it will appear in the left-hand navigation of the site. This is also where you add a title to the page.
## kubectl yaml
You probably shouldn't be using this, but we also have templates which consume YAML files that are generated by the Kubernetes authors. These are turned into pages which display the reference information for the various CLI tools.
### Demos
- [Link to Example Template](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/docsv2/docs/kubectl/kubectl_api-versions.md)
- [Link to Rendered Template](http://k8sdocs.github.io/docs/kubectl/kubectl_api-versions/)
### Adding page to navigation
Once your page is saved, somewhere in the `/docs/` directory, add a reference to the `reference.yml` file under `/_data/` so that it will appear in the left-hand navigation of the site. This is also where you add a title to the page.

View File

@ -0,0 +1,4 @@
---
---
{% capture command %}kubectl_annotate{% endcapture %}
{% include templates/kubectl.md %}

View File

@ -0,0 +1,62 @@
---
---
# Doing a thing with a thing
{% capture purpose %}
This document teaches you how to do a thing.
{% endcapture %}
{% capture recommended_background %}
In order to do a thing, you must be familiar with the following:
- [Thing 1](/foo/)
- [Thing 2](/bar/)
{% endcapture %}
{% capture step_by_step %}
Here's how to do a thing with a thing.
#### 1. Prepare the thing
Lorem ipsum dolor it verberum.
#### 2. Run the thing command
Lorem ipsum dolor it verberum.
#### 3. Create the thing.yaml file
Lorem ipsum dolor it verberum.
```yaml
# Creates three nginx replicas
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
```
#### 4. ???
Lorem ipsum dolor it verberum.
#### 5. Profit!
Lorem ipsum dolor it verberum.
{% endcapture %}
{% include templates/task.md %}

View File

@ -3,20 +3,24 @@
## Troubleshooting
Sometimes things go wrong. This guide is aimed at making them right. It has two sections:
Sometimes things go wrong. This guide is aimed at making them right. It has
two sections:
* [Troubleshooting your application](/docs/user-guide/application-troubleshooting) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your cluster](/docs/admin/cluster-troubleshooting) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.
You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md) you're using.
You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md)
you're using.
### Getting help
If your problem isn't answered by any of the guides above, there are variety of ways for you to get help from the Kubernetes team.
If your problem isn't answered by any of the guides above, there are a variety of
ways for you to get help from the Kubernetes team.
### Questions
If you aren't familiar with it, many of your questions may be answered by the [user guide](/docs/user-guide/).
If you aren't familiar with it, many of your questions may be answered by the
[user guide](/docs/user-guide/).
We also have a number of FAQ pages:
@ -33,11 +37,23 @@ You may also find the Stack Overflow topics relevant:
### Stack Overflow
Someone else from the community may have already asked a similar question or may be able to help with your problem. The Kubernetes team will also monitor [posts tagged kubernetes](http://stackoverflow.com/questions/tagged/kubernetes). If there aren't any existing questions that help, please [ask a new one](http://stackoverflow.com/questions/ask?tags=kubernetes)!
Someone else from the community may have already asked a similar question or may
be able to help with your problem. The Kubernetes team will also monitor
[posts tagged kubernetes](http://stackoverflow.com/questions/tagged/kubernetes).
If there aren't any existing questions that help, please [ask a new one](http://stackoverflow.com/questions/ask?tags=kubernetes)!
### Slack
The Kubernetes team hangs out on Slack in the `#kubernetes-users` channel. You can participate in the Kubernetes team [here](https://kubernetes.slack.com). Slack requires registration, but the Kubernetes team is open invitation to anyone to register [here](http://slack.kubernetes.io). Feel free to come and ask any and all questions.
The Kubernetes team hangs out on Slack in the `#kubernetes-users` channel. You
can participate in discussion with the Kubernetes team [here](https://kubernetes.slack.com).
Slack requires registration, but the Kubernetes team extends an open invitation to
anyone to register [here](http://slack.kubernetes.io). Feel free to come and ask
any and all questions.
Once registered, browse the growing list of channels for various subjects of
interest. For example, people new to Kubernetes may also want to join the
`#kubernetes-novice` channel. As another example, developers should join the
`#kubernetes-dev` channel.
### Mailing List
@ -45,14 +61,15 @@ The Google Container Engine mailing list is [google-containers@googlegroups.com]
### Bugs and Feature requests
If you have what looks like a bug, or you would like to make a feature request, please use the [Github issue tracking system](https://github.com/kubernetes/kubernetes/issues).
If you have what looks like a bug, or you would like to make a feature request,
please use the [Github issue tracking system](https://github.com/kubernetes/kubernetes/issues).
Before you file an issue, please search existing issues to see if your issue is already covered.
Before you file an issue, please search existing issues to see if your issue is
already covered.
If filing a bug, please include detailed information about how to reproduce the problem, such as:
If filing a bug, please include detailed information about how to reproduce the
problem, such as:
* Kubernetes version: `kubectl version`
* Cloud provider, OS distro, network configuration, and Docker version
* Steps to reproduce the problem

View File

@ -91,10 +91,16 @@ runner (Docker or rkt).
When using Docker:
- The `spec.container[].resources.limits.cpu` is multiplied by 1024, converted to an integer, and
used as the value of the [`--cpu-shares`](
- The `spec.container[].resources.requests.cpu` is converted to its core value (potentially fractional),
multiplied by 1024, and used as the value of the [`--cpu-shares`](
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
command.
- The `spec.container[].resources.limits.cpu` is converted to its millicore value,
multiplied by 100000, and then divided by 1000, and used as the value of the [`--cpu-quota`](
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
command. The [`--cpu-period`] flag is set to 100000 which represents the default 100ms period
for measuring quota usage. The kubelet enforces cpu limits if it was started with the
[`--cpu-cfs-quota`] flag set to true. As of version 1.2, this flag will now default to true.
- The `spec.container[].resources.limits.memory` is converted to an integer, and used as the value
of the [`--memory`](https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag
to the `docker run` command.
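As a rough worked example (the resource values are illustrative, not taken from any particular manifest), a container that requests `250m` of CPU, with limits of `500m` CPU and `128Mi` of memory, would map to Docker flags roughly as follows:
```shell
# requests.cpu 250m    -> 0.25 cores * 1024     -> --cpu-shares=256
# limits.cpu   500m    -> 500 * 100000 / 1000   -> --cpu-quota=50000 (with --cpu-period=100000)
# limits.memory 128Mi  -> 134217728 bytes       -> --memory=134217728
docker run --cpu-shares=256 --cpu-quota=50000 --cpu-period=100000 --memory=134217728 ...
```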

View File

@ -83,7 +83,7 @@ spec: # specification of the pods contents
args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/expansion):
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/expansion.md):
```yaml
command: ["/bin/echo"]

View File

@ -105,7 +105,7 @@ KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the Deployment to recreate them. This time around the Service exists *before* the replicas. This will given you scheduler level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the Deployment to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
```shell
$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;

View File

@ -555,6 +555,63 @@ There are three things to check:
Engine doesn't do port remapping, so if your application serves on 8080,
the `containerPort` field needs to be 8080.
### A Pod cannot reach itself via Service IP
This mostly happens when `kube-proxy` is running in `iptables` mode and Pods
are connected with a bridge network. The `Kubelet` exposes a `hairpin-mode`
[flag](http://kubernetes.io/docs/admin/kubelet/) that allows endpoints of a Service to load-balance back to themselves
if they try to access their own Service VIP. The `hairpin-mode` flag must either be
set to `hairpin-veth` or `promiscuous-bridge`.
The common steps to troubleshoot this are as follows:
* Confirm `hairpin-mode` is set to `hairpin-veth` or `promiscuous-bridge`.
You should see something like the below. `hairpin-mode` is set to
`promiscuous-bridge` in the following example.
```shell
u@node$ ps auxw|grep kubelet
root 3392 1.1 0.8 186804 65208 ? Sl 00:51 11:11 /usr/local/bin/kubelet --enable-debugging-handlers=true --config=/etc/kubernetes/manifests --allow-privileged=True --v=4 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --hairpin-mode=promiscuous-bridge --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --babysit-daemons=true --max-pods=110 --serialize-image-pulls=false --outofdisk-transition-frequency=0
```
* Confirm the effective `hairpin-mode`. To do this, you'll have to look at the
kubelet log. Accessing the logs depends on your Node OS. On some OSes it
is a file, such as /var/log/kubelet.log, while other OSes use `journalctl`
to access logs. Note that the effective hairpin mode may not
match the `--hairpin-mode` flag due to compatibility. Check whether there are any
log lines with the keyword `hairpin` in kubelet.log. There should be log lines
indicating the effective hairpin mode, like the one below.
```shell
I0629 00:51:43.648698 3252 kubelet.go:380] Hairpin mode set to "promiscuous-bridge"
```
* If the effective hairpin mode is `hairpin-veth`, ensure the `Kubelet` has
the permission to operate in `/sys` on the node. If everything works properly,
you should see something like:
```shell
u@node$ for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done
1
1
1
1
```
* If the effective hairpin mode is `promiscuous-bridge`, ensure `Kubelet`
has the permission to manipulate the Linux bridge on the node. If the `cbr0` bridge is
used and configured properly, you should see:
```shell
u@node$ ifconfig cbr0 |grep PROMISC
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
```
* Seek help if none of the above works out.
## Seek help
If you get this far, something very strange is happening. Your `Service` is

View File

@ -122,6 +122,7 @@ This is an example of a pod that consumes its container's resources via the down
{% include code.html language="yaml" file="volume/dapi-volume-resources.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml" %}
Some more thorough examples:
* [environment variables](/docs/user-guide/environment-guide/)
* [downward API](/docs/user-guide/downward-api/)

View File

@ -1,20 +1,20 @@
---
---
This document describes the current state of Horizontal Pod Autoscaler in Kubernetes.
This document describes the current state of Horizontal Pod Autoscaling in Kubernetes.
## What is Horizontal Pod Autoscaler?
## What is Horizontal Pod Autoscaling?
Horizontal pod autoscaling allows to automatically scale the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization.
With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization
(or, with alpha support, on some other, application-provided metrics).
The autoscaler is implemented as a Kubernetes API resource and a controller.
The resource describes behavior of the controller.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by user.
## How does Horizontal Pod Autoscaler work?
## How does the Horizontal Pod Autoscaler work?
![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg)
@ -29,34 +29,34 @@ Please note that if some of the pod's containers do not have CPU request set,
CPU utilization for the pod will not be defined and the autoscaler will not take any action.
Further details of the autoscaling algorithm are given [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm).
Autoscaler uses heapster to collect CPU utilization.
The autoscaler uses heapster to collect CPU utilization.
Therefore, it is required to deploy heapster monitoring in your cluster for autoscaling to work.
Autoscaler accesses corresponding replication controller, deployment or replica set by scale sub-resource.
The autoscaler accesses corresponding replication controller, deployment or replica set by scale sub-resource.
Scale is an interface that allows you to dynamically set the number of replicas and to learn their current state.
More details on the scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#scale-subresource).
## API Object
Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API.
Horizontal Pod Autoscaler is a top-level resource in the Kubernetes REST API.
In Kubernetes 1.2 HPA was graduated from beta to stable (more details about [api versioning](/docs/api/#api-versioning)) with compatibility between versions.
The stable version is available in `autoscaling/v1` api group whereas the beta vesion is available in `extensions/v1beta1` api group as before.
The transition plan is to depracate beta version of HPA in Kubernetes 1.3 and get it rid off completely in Kubernetes 1.4.
The stable version is available in the `autoscaling/v1` api group whereas the beta version is available in the `extensions/v1beta1` api group as before.
The transition plan is to deprecate the beta version of HPA in Kubernetes 1.3, and get rid of it completely in Kubernetes 1.4.
**Warning!** Please have in mind that all Kubernetes components still use HPA in version `extensions/v1beta1` in Kubernetes 1.2.
**Warning!** Please have in mind that all Kubernetes components still use HPA in `extensions/v1beta1` in Kubernetes 1.2.
More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for horizontal pod autoscaler in kubectl
## Support for Horizontal Pod Autoscaler in kubectl
Horizontal pod autoscaler, like every API resource, is supported in a standard way by `kubectl`.
Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`.
We can create a new autoscaler using the `kubectl create` command.
We can list autoscalers with `kubectl get hpa` and get a detailed description with `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command that allows for easy creation of horizontal pod autoscaler.
In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
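Putting those commands together, a typical lifecycle might look like this (the `foo` replication controller, and the resulting autoscaler name, are assumptions for illustration):
```shell
kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80   # create the autoscaler
kubectl get hpa                                             # list autoscalers
kubectl describe hpa foo                                    # get a detailed description
kubectl delete hpa foo                                      # delete it when no longer needed
```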
@ -67,17 +67,65 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/user
Currently in Kubernetes, it is possible to perform a rolling update by managing replication controllers directly,
or by using the deployment object, which manages the underlying replication controllers for you.
Horizontal pod autoscaler only supports the latter approach: the horizontal pod autoscaler is bound to the deployment object,
Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replication controllers.
Horizontal pod autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a horizontal pod autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
The reason this doesn't work is that when rolling update creates a new replication controller,
the horizontal pod autoscaler will not be bound to the new replication controller.
the Horizontal Pod Autoscaler will not be bound to the new replication controller.
## Support for custom metrics
Kubernetes 1.2 adds alpha support for scaling based on application-specific metrics like QPS (queries per second) or average request latency.
### Prerequisites
The cluster has to be started with `ENABLE_CUSTOM_METRICS` environment variable set to `true`.
### Pod configuration
The pods to be scaled must have a cAdvisor-specific custom (aka application) metrics endpoint configured. The configuration format is described [here](https://github.com/google/cadvisor/blob/master/docs/application_metrics.md). Kubernetes expects the configuration to
be placed in `definition.json` mounted via a [config map](/docs/user-guide/horizontal-pod-autoscaling/configmap/) in `/etc/custom-metrics`. A sample config map may look like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-config
data:
definition.json: "{\"endpoint\" : \"http://localhost:8080/metrics\"}"
```
**Warning**
Due to the way cAdvisor currently works, `localhost` refers to the node itself, not to the running pod. Thus the appropriate container in the pod must ask for a host port. Example:
```yaml
ports:
- hostPort: 8080
containerPort: 8080
```
### Specifying target
HPA for custom metrics is configured via an annotation. The value in the annotation is interpreted as a target metric value averaged over
all running pods. Example:
```yaml
annotations:
alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "10"}]}'
```
In this case, if there are 4 pods running and each of them reports the qps metric to be 15, HPA will start 2 additional pods, so there will be 6 pods in total. If there are multiple metrics passed in the annotation, or CPU is configured as well, then HPA will use the biggest
number of replicas that comes from the calculations.
At the moment, even if the target CPU utilization is not specified, a default of 80% will be used.
To calculate the number of desired replicas based only on custom metrics, the CPU utilization
target should be set to a very large value (e.g. 100000%). Then the CPU-related logic
will want only 1 replica, leaving the decision about a higher replica count to custom metrics (and min/max limits).
## Further reading
* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
* Manual of autoscale command in kubectl: [kubectl autoscale](/docs/user-guide/kubectl/kubectl_autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/user-guide/horizontal-pod-autoscaling/).
* kubectl autoscale command: [kubectl autoscale](/docs/user-guide/kubectl/kubectl_autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/user-guide/horizontal-pod-autoscaling/walkthrough/).

View File

@ -1,25 +1,25 @@
---
---
Horizontal pod autoscaling allows to automatically scale the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization.
In the future also other metrics will be supported.
Horizontal Pod Autoscaling automatically scales the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization
(or, with alpha support, on some other, application-provided metrics).
In this document we explain how this feature works by walking you through an example of enabling horizontal pod autoscaling for the php-apache server.
In this document we explain how this feature works by walking you through an example of enabling Horizontal Pod Autoscaling for the php-apache server.
## Prerequisites
This example requires a running Kubernetes cluster and kubectl in the version at least 1.2.
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later.
[Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster
as horizontal pod autoscaler uses it to collect metrics
as Horizontal Pod Autoscaler uses it to collect metrics
(if you followed [getting started on GCE guide](/docs/getting-started-guides/gce),
heapster monitoring will be turned-on by default).
## Step One: Run & expose php-apache server
To demonstrate horizontal pod autoscaler we will use a custom docker image based on php-apache server.
To demonstrate Horizontal Pod Autoscaler we will use a custom docker image based on the php-apache image.
The image can be found [here](/docs/user-guide/horizontal-pod-autoscaling/image).
It defines [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU intensive computations.
It defines an [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU intensive computations.
First, we will start a deployment running the image and expose it as a service:
@ -29,15 +29,15 @@ service "php-apache" created
deployment "php-apache" created
```
## Step Two: Create horizontal pod autoscaler
## Step Two: Create Horizontal Pod Autoscaler
Now that the server is running, we will create the autoscaler using
[kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_autoscale.md).
The following command will create a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache deployment we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
Roughly speaking, HPA will increase and decrease the number of replicas
(via the deployment) to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU usage of 100 milli-cores).
(since each pod requests 200 milli-cores by [kubectl run](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_run.md), this means average CPU usage of 100 milli-cores).
See [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
```shell
@ -59,8 +59,8 @@ Please note that the current CPU consumption is 0% as we are not sending any req
## Step Three: Increase load
Now, we will see how the autoscaler reacts on the increased load on the server.
We will start a container with `busybox` image and an infinite loop of queries to our server inside (please run it in a different terminal):
Now, we will see how the autoscaler reacts to increased load.
We will start a container, and send an infinite loop of queries to the php-apache service (please run it in a different terminal):
```shell
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
@ -70,7 +70,7 @@ Hit enter for command prompt
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
```
We may examine, how CPU load was increased by executing (it usually takes 1 minute):
Within a minute or so, we should see the higher CPU load by executing:
```shell
$ kubectl get hpa
@ -79,7 +79,7 @@ php-apache Deployment/php-apache/scale 50% 305% 1 10
```
In the case presented here, it bumped CPU consumption to 305% of the request.
Here, CPU consumption has increased to 305% of the request.
As a result, the deployment was resized to 7 replicas:
```shell
@ -88,7 +88,7 @@ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 7 7 7 7 19m
```
**Warning!** Sometimes it may take few steps to stabilize the number of replicas.
**Note** Sometimes it may take a few minutes to stabilize the number of replicas.
Since the amount of load is not controlled in any way it may happen that the final number of replicas will
differ from this example.
@ -96,11 +96,10 @@ differ from this example.
We will finish our example by stopping the user load.
In the terminal where we created container with `busybox` image we will terminate
infinite ``while`` loop by sending `SIGINT` signal,
which can be done using `<Ctrl> + C` combination.
In the terminal where we created the container with `busybox` image, terminate
the load generation by typing `<Ctrl> + C`.
Then we will verify the result state:
Then we will verify the result state (after a minute or so):
```shell
$ kubectl get hpa
@ -112,9 +111,9 @@ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 1 1 1 1 27m
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.
Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas back down to 1.
**Warning!** Sometimes dropping number of replicas may take few steps.
**Note** Autoscaling the replicas back down may take a few minutes.
## Appendix: Other possible scenarios

View File

@ -7,8 +7,8 @@ For non-unique user-provided attributes, Kubernetes provides [labels](/docs/user
## Names
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are the used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md) for the precise syntax rules for names.
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md) for the precise syntax rules for names.
## UIDs
UID are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).

View File

@ -98,7 +98,7 @@ There are existing Kubernetes concepts that allow you to expose a single service
{% include code.html language="yaml" file="ingress.yaml" ghlink="/docs/user-guide/ingress.yaml" %}
If you create it using `kubectl -f` you should see:
If you create it using `kubectl create -f` you should see:
```shell
$ kubectl get ing

View File

@ -249,12 +249,12 @@ The tradeoffs are:
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
The pattern names are also links to examples and more detailed description.
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
| -------------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
| [Job Template Expansion](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/expansions/README.md) | | | ✓ | ✓ |
| [Queue with Pod Per Work Item](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-1/README.md) | ✓ | | sometimes | ✓ |
| [Queue with Variable Pod Count](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-2/README.md) | ✓ | ✓ | | ✓ |
| Single Job with Static Work Assignment | ✓ | | ✓ | |
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
| [Job Template Expansion](/docs/user-guide/job/expansions) | | | ✓ | ✓ |
| [Queue with Pod Per Work Item](/docs/user-guide/job/work-queue-1/) | ✓ | | sometimes | ✓ |
| [Queue with Variable Pod Count](/docs/user-guide/job/work-queue-2/) | ✓ | ✓ | | ✓ |
| Single Job with Static Work Assignment | ✓ | | ✓ | |
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status). This means that
@ -265,12 +265,12 @@ are different ways to arrange for pods to work on different things.
This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
Here, `W` is the number of work items.
| Pattern | `.spec.completions` | `.spec.parallelism` |
| -------------------------------------------------------------------------- |:-------------------:|:--------------------:|
| [Job Template Expansion](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/expansions/README.md) | 1 | should be 1 |
| [Queue with Pod Per Work Item](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-1/README.md) | W | any |
| [Queue with Variable Pod Count](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/job/work-queue-2/README.md) | 1 | any |
| Single Job with Static Work Assignment | W | any |
| Pattern | `.spec.completions` | `.spec.parallelism` |
| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
| [Job Template Expansion](/docs/user-guide/job/expansions/)           | 1                   | should be 1          |
| [Queue with Pod Per Work Item](/docs/user-guide/job/work-queue-1/)   | W                   | any                  |
| [Queue with Variable Pod Count](/docs/user-guide/job/work-queue-2/)  | 1                   | any                  |
| Single Job with Static Work Assignment | W | any |
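For example, once a Job exists you can confirm which of these settings it uses; `myjob` below is only a placeholder name:

```shell
# Print .spec.parallelism and .spec.completions for a Job named "myjob" (placeholder).
$ kubectl get job myjob -o jsonpath='{.spec.parallelism} {.spec.completions}{"\n"}'
```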
## Advanced Usage

View File

@ -0,0 +1,193 @@
---
---
* TOC
{:toc}
# Example: Multiple Job Objects from Template Expansion
In this example, we will run multiple Kubernetes Jobs created from
a common template. You may want to be familiar with the basic,
non-parallel, use of [Jobs](/docs/user-guide/jobs) first.
## Basic Template Expansion
First, download the following template of a job to a file called `job.yaml.txt`
{% include code.html language="yaml" file="job.yaml.txt" ghlink="/docs/user-guide/job/expansions/job.yaml.txt" %}
Unlike a *pod template*, our *job template* is not a Kubernetes API type. It is just
a yaml representation of a Job object that has some placeholders that need to be filled
in before it can be used. The `$ITEM` syntax is not meaningful to Kubernetes.
In this example, the only processing the container does is to `echo` a string and sleep for a bit.
In a real use case, the processing would be some substantial computation, such as rendering a frame
of a movie, or processing a range of rows in a database. The "$ITEM" parameter would specify for
example, the frame number or the row range.
This Job and its Pod template have a label: `jobgroup=jobexample`. There is nothing special
to the system about this label. This label
makes it convenient to operate on all the jobs in this group at once.
We also put the same label on the pod template so that we can check on all Pods of these Jobs
with a single command.
After the job is created, the system will add more labels that distinguish one Job's pods
from another Job's pods.
Note that the label key `jobgroup` is not special to Kubernetes. You can pick your own labeling scheme.
Next, expand the template into multiple files, one for each item to be processed.
```shell
# Expand files into a temporary directory
mkdir ./jobs
for i in apple banana cherry
do
cat job.yaml.txt | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
done
```
Check if it worked:
```shell
$ ls jobs/
job-apple.yaml
job-banana.yaml
job-cherry.yaml
```
Here, we used `sed` to replace the string `$ITEM` with the loop variable.
You could use any type of template language (jinja2, erb) or write a program
to generate the Job objects.
Next, create all the jobs with one kubectl command:
```shell
$ kubectl create -f ./jobs
job "process-item-apple" created
job "process-item-banana" created
job "process-item-cherry" created
```
Now, check on the jobs:
```shell
$ kubectl get jobs -l jobgroup=jobexample
JOB CONTAINER(S) IMAGE(S) SELECTOR SUCCESSFUL
process-item-apple c busybox app in (jobexample),item in (apple) 1
process-item-banana c busybox app in (jobexample),item in (banana) 1
process-item-cherry c busybox app in (jobexample),item in (cherry) 1
```
Here we use the `-l` option to select all jobs that are part of this
group of jobs. (There might be other unrelated jobs in the system that we
do not care to see.)
We can check on the pods as well using the same label selector:
```shell
$ kubectl get pods -l jobgroup=jobexample
NAME READY STATUS RESTARTS AGE
process-item-apple-kixwv 0/1 Completed 0 4m
process-item-banana-wrsf7 0/1 Completed 0 4m
process-item-cherry-dnfu9 0/1 Completed 0 4m
```
There is not a single command to check on the output of all jobs at once,
but looping over all the pods is pretty easy:
```shell
$ for p in $(kubectl get pods -l jobgroup=jobexample -o name)
do
kubectl logs $p
done
Processing item apple
Processing item banana
Processing item cherry
```
## Multiple Template Parameters
In the first example, each instance of the template had one parameter, and that parameter was also
used as a label. However, label keys are limited in [what characters they can
contain](/docs/user-guide/labels/#syntax-and-character-set).
This slightly more complex example uses the jinja2 template language to generate our objects.
We will use a one-line python script to convert the template to a file.
First, copy and paste the following template of a Job object, into a file called `job.yaml.jinja2`:
```liquid{% raw %}
{%- set params = [{ "name": "apple", "url": "http://www.orangepippin.com/apples", },
                  { "name": "banana", "url": "https://en.wikipedia.org/wiki/Banana", },
                  { "name": "raspberry", "url": "https://www.raspberrypi.org/" }]
%}
{%- for p in params %}
{%- set name = p["name"] %}
{%- set url = p["url"] %}
apiVersion: batch/v1
kind: Job
metadata:
  name: jobexample-{{ name }}
  labels:
    jobgroup: jobexample
spec:
  template:
    metadata:
      name: jobexample
      labels:
        jobgroup: jobexample
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"]
      restartPolicy: Never
---
{%- endfor %}
{% endraw %}
```
The above template defines parameters for each job object using a list of
python dicts (lines 1-4). Then a for loop emits one job yaml object
for each set of parameters (remaining lines).
We take advantage of the fact that multiple yaml documents can be concatenated
with the `---` separator (second to last line). We can pipe the output directly to kubectl to
create the objects.
You will need the jinja2 package if you do not already have it: `pip install --user jinja2`.
Now, use this one-line python program to expand the template:
```shell
alias render_template='python -c "from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());"'
```
The output can be saved to a file, like this:
```shell
cat job.yaml.jinja2 | render_template > jobs.yaml
```
or sent directly to kubectl, like this:
```shell
cat job.yaml.jinja2 | render_template | kubectl create -f -
```
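Because every generated Job carries the `jobgroup=jobexample` label from the template, you can then check on the whole batch with a label selector, for example:

```shell
# List only the Jobs generated from this template.
$ kubectl get jobs -l jobgroup=jobexample
```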
## Alternatives
If you have a large number of job objects, you may find that:
- even using labels, managing so many Job objects is cumbersome.
- You exceed resource quota when creating all the Jobs at once,
and do not want to wait to create them incrementally.
- You need a way to easily scale the number of pods running
concurrently. One reason would be to avoid using too many
compute resources. Another would be to limit the number of
concurrent requests to a shared resource, such as a database,
used by all the pods in the job.
- very large numbers of jobs created at once overload the
kubernetes apiserver, controller, or scheduler.
In this case, you can consider one of the
other [job patterns](/docs/user-guide/jobs/#job-patterns).

View File

@ -0,0 +1,18 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
  labels:
    jobgroup: jobexample
spec:
  template:
    metadata:
      name: jobexample
      labels:
        jobgroup: jobexample
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"]
      restartPolicy: Never

View File

@ -0,0 +1,10 @@
# Specify BROKER_URL and QUEUE when running
FROM ubuntu:14.04
RUN apt-get update && \
apt-get install -y curl ca-certificates amqp-tools python \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
COPY ./worker.py /worker.py
CMD /usr/bin/amqp-consume --url=$BROKER_URL -q $QUEUE -c 1 /worker.py

View File

@ -0,0 +1,284 @@
---
---
* TOC
{:toc}
# Example: Job with Work Queue with Pod Per Work Item
In this example, we will run a Kubernetes Job with multiple parallel
worker processes. You may want to be familiar with the basic,
non-parallel, use of [Job](/docs/user-guide/jobs) first.
In this example, as each pod is created, it picks up one unit of work
from a task queue, completes it, deletes it from the queue, and exits.
Here is an overview of the steps in this example:
1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another
one. In practice you would set up a message queue service once and reuse it for many jobs.
1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In
this example, a message is just an integer that we will do a lengthy computation on.
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
one task from the message queue, processes it, and repeats until the end of the queue is reached.
## Starting a message queue service
This example uses RabbitMQ, but it should be easy to adapt to another AMQP-type message service.
In practice you could set up a message queue service once in a
cluster and reuse it for many jobs, as well as for long-running services.
Start RabbitMQ as follows:
```shell
$ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml
service "rabbitmq-service" created
$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml
replicationController "rabbitmq-controller" created
```
We will only use the rabbitmq part from the celery-rabbitmq example.
## Testing the message queue service
Now, we can experiment with accessing the message queue. We will
create a temporary interactive pod, install some tools on it,
and experiment with queues.
First create a temporary interactive Pod.
```shell
# Create a temporary interactive container
$ kubectl run -i --tty temp --image ubuntu:14.04
Waiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false
... [ previous line repeats several times .. hit return when it stops ] ...
```
Note that your pod name and command prompt will be different.
Next install the `amqp-tools` so we can work with message queues.
```shell
# Install some tools
root@temp-loe07:/# apt-get update
.... [ lots of output ] ....
root@temp-loe07:/# apt-get install -y curl ca-certificates amqp-tools python dnsutils
.... [ lots of output ] ....
```
Later, we will make a docker image that includes these packages.
Next, we will check that we can discover the rabbitmq service:
```
# Note the rabbitmq-service has a DNS name, provided by Kubernetes:
root@temp-loe07:/# nslookup rabbitmq-service
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: rabbitmq-service.default.svc.cluster.local
Address: 10.0.147.152
# Your address will vary.
```
If Kube-DNS is not set up correctly, the previous step may not work for you.
You can also find the service IP in an env var:
```
# env | grep RABBIT | grep HOST
RABBITMQ_SERVICE_SERVICE_HOST=10.0.147.152
# Your address will vary.
```
Next we will verify we can create a queue, and publish and consume messages.
```shell
# In the next line, rabbitmq-service is the hostname where the rabbitmq-service
# can be reached. 5672 is the standard port for rabbitmq.
root@temp-loe07:/# BROKER_URL=amqp://guest:guest@rabbitmq-service:5672
# If you could not resolve "rabbitmq-service" in the previous step,
# then use this command instead:
# root@temp-loe07:/# BROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672
# Now create a queue:
root@temp-loe07:/# /usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d
foo
# Publish one message to it:
root@temp-loe07:/# /usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello
# And get it back.
root@temp-loe07:/# /usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo
Hello
root@temp-loe07:/#
```
In the last command, the `amqp-consume` tool takes one message (`-c 1`)
from the queue, and passes that message to the standard input of an
arbitrary command. In this case, the program `cat` is just printing
out what it gets on the standard input, and the echo is just to add a carriage
return so the example is readable.
## Filling the Queue with tasks
Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be
printed.
In practice, the content of the messages might be:
- names of files that need to be processed
- extra flags to the program
- ranges of keys in a database table
- configuration parameters to a simulation
- frame numbers of a scene to be rendered
In practice, if there is large data that is needed in a read-only mode by all pods
of the Job, you will typically put that in a shared file system like NFS and mount
that readonly on all the pods, or the program in the pod will natively read data from
a cluster file system like HDFS.
For our example, we will create the queue and fill it using the amqp command line tools.
In practice, you might write a program to fill the queue using an amqp client library.
```shell
$ /usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d
job1
$ for f in apple banana cherry date fig grape lemon melon
do
/usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f
done
```
So, we filled the queue with 8 messages.
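If you want to double-check the queue depth, one option (a sketch; the pod name below is a placeholder for whatever name `kubectl get pods` shows for your RabbitMQ pod) is to ask RabbitMQ itself:

```shell
# List queues and their message counts inside the RabbitMQ pod.
$ kubectl exec <rabbitmq-pod-name> -- rabbitmqctl list_queues
```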
## Create an Image
Now we are ready to create an image that we will run as a job.
We will use the `amqp-consume` utility to read the message
from the queue and run our actual program. Here is a very simple
example program:
{% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/job/work-queue-1/worker.py" %}
Now, build an image. If you are working in the source
tree, then change directory to `examples/job/work-queue-1`.
Otherwise, make a temporary directory, change to it,
download the [Dockerfile](Dockerfile?raw=true),
and [worker.py](worker.py?raw=true). In either case,
build the image with this command:
```shell
$ docker build -t job-wq-1 .
```
For the [Docker Hub](https://hub.docker.com/), tag your app image with
your username and push to the Hub with the below commands. Replace
`<username>` with your Hub username.
```shell
docker tag job-wq-1 <username>/job-wq-1
docker push <username>/job-wq-1
```
If you are using [Google Container
Registry](https://cloud.google.com/tools/container-registry/), tag
your app image with your project ID, and push to GCR. Replace
`<project>` with your project ID.
```shell
docker tag job-wq-1 gcr.io/<project>/job-wq-1
gcloud docker push gcr.io/<project>/job-wq-1
```
## Defining a Job
Here is a job definition. You'll need to make a copy of the Job and edit the
image to match the name you used, and call it `./job.yaml`.
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/user-guide/job/work-queue-1/job.yaml" %}
In this example, each pod works on one item from the queue and then exits.
So, the completion count of the Job corresponds to the number of work items
done. So we set `.spec.completions: 8` for this example, since we put 8 items in the queue.
## Running the Job
So, now run the Job:
```shell
kubectl create -f ./job.yaml
```
Now wait a bit, then check on the job.
```shell
$ kubectl describe jobs/job-wq-1
Name: job-wq-1
Namespace: default
Image(s): gcr.io/causal-jigsaw-637/job-wq-1
Selector: app in (job-wq-1)
Parallelism: 4
Completions: 8
Labels: app=job-wq-1
Pods Statuses: 0 Running / 8 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-hcobb
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-weytj
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-qaam5
27s 27s 1 {job } SuccessfulCreate Created pod: job-wq-1-b67sr
26s 26s 1 {job } SuccessfulCreate Created pod: job-wq-1-xe5hj
15s 15s 1 {job } SuccessfulCreate Created pod: job-wq-1-w2zqe
14s 14s 1 {job } SuccessfulCreate Created pod: job-wq-1-d6ppa
14s 14s 1 {job } SuccessfulCreate Created pod: job-wq-1-p17e0
```
All our pods succeeded. Yay.
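As in the template expansion example, you can loop over the pods to see which item each one processed; this sketch assumes the `app=job-wq-1` label shown in the `kubectl describe` output above:

```shell
# Print the log of every pod created by this Job.
$ for p in $(kubectl get pods -l app=job-wq-1 -o name)
  do
    kubectl logs $p
  done
```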
## Alternatives
This approach has the advantage that you
do not need to modify your "worker" program to be aware that there is a work queue.
It does require that you run a message queue service.
If running a queue service is inconvenient, you may
want to consider one of the other [job patterns](/docs/user-guide/jobs/#job-patterns).
This approach creates a pod for every work item. If your work items only take a few seconds,
though, creating a Pod for every work item may add a lot of overhead. Consider another
[example](/docs/user-guide/job/work-queue-2), that executes multiple work items per Pod.
In this example, we used the `amqp-consume` utility to read the message
from the queue and run our actual program. This has the advantage that you
do not need to modify your program to be aware of the queue.
A [different example](/docs/user-guide/job/work-queue-2), shows how to
communicate with the work queue using a client library.
## Caveats
If the number of completions is set to less than the number of items in the queue, then
not all items will be processed.
If the number of completions is set to more than the number of items in the queue,
then the Job will not appear to be completed, even though all items in the queue
have been processed. It will start additional pods which will block waiting
for a message.
There is an unlikely race with this pattern. If the container is killed in between the time
that the message is acknowledged by the amqp-consume command and the time that the container
exits with success, or if the node crashes before the kubelet is able to post the success of the pod
back to the api-server, then the Job will not appear to be complete, even though all items
in the queue have been processed.

View File

@ -0,0 +1,15 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-1
spec:
  completions: 8
  parallelism: 2
  template:
    metadata:
      name: job-wq-1
    spec:
      containers:
      - name: c
        image: gcr.io/<project>/job-wq-1
      restartPolicy: OnFailure

View File

@ -0,0 +1,7 @@
#!/usr/bin/env python
# Reads the message from standard input, prints it, and sleeps for 10 seconds.
import sys
import time
print("Processing " + sys.stdin.read())
time.sleep(10)

View File

@ -0,0 +1,6 @@
FROM python
RUN pip install redis
COPY ./worker.py /worker.py
COPY ./rediswq.py /rediswq.py
CMD python worker.py

View File

@ -0,0 +1,210 @@
---
---
* TOC
{:toc}
# Example: Job with Work Queue with Variable Pod Count
In this example, we will run a Kubernetes Job with multiple parallel
worker processes. You may want to be familiar with the basic,
non-parallel, use of [Job](/docs/user-guide/jobs) first.
In this example, as each pod is created, it picks up one unit of work
from a task queue, processes it, removes it from the queue, and repeats until the queue is empty.
Here is an overview of the steps in this example:
1. **Start a storage service to hold the work queue.** In this example, we use Redis to store
our work items. In the previous example, we used RabbitMQ. In this example, we use Redis and
a custom work-queue client library because AMQP does not provide a good way for clients to
detect when a finite-length work queue is empty. In practice you would set up a store such
as Redis once and reuse it for the work queues of many jobs, and other things.
1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In
this example, a message is just an integer that we will do a lengthy computation on.
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
one task from the message queue, processes it, and repeats until the end of the queue is reached.
## Starting Redis
For this example, for simplicity, we will start a single instance of Redis.
See the [Redis Example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/redis/README.md) for an example
of deploying Redis scalably and redundantly.
Start a temporary Pod running Redis and a service so we can find it.
```shell
$ kubectl create -f examples/job/work-queue-2/redis-pod.yaml
pod "redis-master" created
$ kubectl create -f examples/job/work-queue-2/redis-service.yaml
service "redis" created
```
## Filling the Queue with tasks
Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be
printed.
Start a temporary interactive pod for running the Redis CLI
```shell
$ kubectl run -i --tty temp --image redis --command "/bin/sh"
Waiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false
Hit enter for command prompt
```
Now hit enter, start the redis CLI, and create a list with some work items in it.
```
# redis-cli -h redis
redis:6379> rpush job2 "apple"
(integer) 1
redis:6379> rpush job2 "banana"
(integer) 2
redis:6379> rpush job2 "cherry"
(integer) 3
redis:6379> rpush job2 "date"
(integer) 4
redis:6379> rpush job2 "fig"
(integer) 5
redis:6379> rpush job2 "grape"
(integer) 6
redis:6379> rpush job2 "lemon"
(integer) 7
redis:6379> rpush job2 "melon"
(integer) 8
redis:6379> rpush job2 "orange"
(integer) 9
redis:6379> lrange job2 0 -1
1) "apple"
2) "banana"
3) "cherry"
4) "date"
5) "fig"
6) "grape"
7) "lemon"
8) "melon"
9) "orange"
```
So, the list with key `job2` will be our work queue.
Note: if you do not have Kube DNS set up correctly, you may need to change
the first step of the above block to `redis-cli -h $REDIS_SERVICE_HOST`.
## Create an Image
Now we are ready to create an image that we will run.
We will use a python worker program with a redis client to read
the messages from the message queue.
A simple Redis work queue client library is provided,
called rediswq.py ([Download](rediswq.py?raw=true)).
The "worker" program in each Pod of the Job uses the work queue
client library to get work. Here it is:
{% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/job/work-queue-2/worker.py" %}
If you are working from the source tree,
change directory to the `examples/job/work-queue-2` directory.
Otherwise, download [`worker.py`](worker.py?raw=true), [`rediswq.py`](rediswq.py?raw=true), and [`Dockerfile`](Dockerfile?raw=true)
using above links. Then build the image:
```shell
docker build -t job-wq-2 .
```
### Push the image
For the [Docker Hub](https://hub.docker.com/), tag your app image with
your username and push to the Hub with the below commands. Replace
`<username>` with your Hub username.
```shell
docker tag job-wq-2 <username>/job-wq-2
docker push <username>/job-wq-2
```
You need to push to a public repository or [configure your cluster to be able to access
your private repository](/docs/user-guide/images).
If you are using [Google Container
Registry](https://cloud.google.com/tools/container-registry/), tag
your app image with your project ID, and push to GCR. Replace
`<project>` with your project ID.
```shell
docker tag job-wq-2 gcr.io/<project>/job-wq-2
gcloud docker push gcr.io/<project>/job-wq-2
```
## Defining a Job
Here is the job definition:
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/user-guide/job/work-queue-2/job.yaml" %}
Be sure to edit the job template to
change `gcr.io/myproject` to your own path.
In this example, each pod works on several items from the queue and then exits when there are no more items.
Since the workers themselves detect when the workqueue is empty, and the Job controller does not
know about the workqueue, it relies on the workers to signal when they are done working.
The workers signal that the queue is empty by exiting with success. So, as soon as any worker
exits with success, the controller knows the work is done, and the Pods will exit soon.
So, we leave the completion count of the Job unset: the Job is considered complete as soon as any Pod succeeds. The job controller will wait for the other running pods to finish
too.
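One way to see this behaviour (a sketch) is to check the Job's status once a worker has drained the queue:

```shell
# .status.succeeded counts pods that exited successfully; the Job is complete
# once at least one worker has reported that the queue is empty.
$ kubectl get job job-wq-2 -o jsonpath='{.status.succeeded}{"\n"}'
```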
## Running the Job
So, now run the Job:
```shell
kubectl create -f ./job.yaml
```
Now wait a bit, then check on the job.
```shell
$ kubectl describe jobs/job-wq-2
Name: job-wq-2
Namespace: default
Image(s): gcr.io/exampleproject/job-wq-2
Selector: app in (job-wq-2)
Parallelism: 2
Completions: Unset
Start Time: Mon, 11 Jan 2016 17:07:59 -0800
Labels: app=job-wq-2
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8
$ kubectl logs pods/job-wq-2-7r7b2
Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f
Initial queue state: empty=False
Working on banana
Working on date
Working on lemon
```
As you can see, one of our pods worked on several work units.
## Alternatives
If running a queue service or modifying your containers to use a work queue is inconvenient, you may
want to consider one of the other [job patterns](/docs/user-guide/jobs/#job-patterns).
If you have a continuous stream of background processing work to run, then
consider running your background workers with a `replicationController` instead,
and consider running a background processing library such as
https://github.com/resque/resque.

View File

@ -0,0 +1,14 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
      - name: c
        image: gcr.io/myproject/job-wq-2
      restartPolicy: OnFailure

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  containers:
  - name: master
    image: redis
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis

View File

@ -0,0 +1,130 @@
#!/usr/bin/env python
# Based on http://peter-hoffmann.com/2012/python-simple-queue-redis-queue.html
# and the suggestion in the redis documentation for RPOPLPUSH, at
# http://redis.io/commands/rpoplpush, which suggests how to implement a work-queue.
import redis
import uuid
import hashlib
class RedisWQ(object):
"""Simple Finite Work Queue with Redis Backend
This work queue is finite: as long as no more work is added
after workers start, the workers can detect when the queue
is completely empty.
The items in the work queue are assumed to have unique values.
This object is not intended to be used by multiple threads
concurrently.
"""
def __init__(self, name, **redis_kwargs):
"""The default connection parameters are: host='localhost', port=6379, db=0
The work queue is identified by "name". The library may create other
keys with "name" as a prefix.
"""
self._db = redis.StrictRedis(**redis_kwargs)
# The session ID will uniquely identify this "worker".
self._session = str(uuid.uuid4())
# Work queue is implemented as two queues: main, and processing.
# Work is initially in main, and moved to processing when a client picks it up.
self._main_q_key = name
self._processing_q_key = name + ":processing"
self._lease_key_prefix = name + ":leased_by_session:"
def sessionID(self):
"""Return the ID for this session."""
return self._session
def _main_qsize(self):
"""Return the size of the main queue."""
return self._db.llen(self._main_q_key)
def _processing_qsize(self):
"""Return the size of the main queue."""
return self._db.llen(self._processing_q_key)
def empty(self):
"""Return True if the queue is empty, including work being done, False otherwise.
False does not necessarily mean that there is work available to work on right now,
"""
return self._main_qsize() == 0 and self._processing_qsize() == 0
# TODO: implement this
# def check_expired_leases(self):
# """Return to the work queueReturn True if the queue is empty, False otherwise."""
# # Processing list should not be _too_ long since it is approximately as long
# # as the number of active and recently active workers.
# processing = self._db.lrange(self._processing_q_key, 0, -1)
# for item in processing:
# # If the lease key is not present for an item (it expired or was
# # never created because the client crashed before creating it)
# # then move the item back to the main queue so others can work on it.
# if not self._lease_exists(item):
# TODO: transactionally move the key from processing queue to
# to main queue, while detecting if a new lease is created
# or if either queue is modified.
def _itemkey(self, item):
"""Returns a string that uniquely identifies an item (bytes)."""
return hashlib.sha224(item).hexdigest()
def _lease_exists(self, item):
"""True if a lease on 'item' exists."""
return self._db.exists(self._lease_key_prefix + self._itemkey(item))
def lease(self, lease_secs=60, block=True, timeout=None):
"""Begin working on an item the work queue.
Lease the item for lease_secs. After that time, other
workers may consider this client to have crashed or stalled
and pick up the item instead.
If optional args block is true and timeout is None (the default), block
if necessary until an item is available."""
if block:
item = self._db.brpoplpush(self._main_q_key, self._processing_q_key, timeout=timeout)
else:
item = self._db.rpoplpush(self._main_q_key, self._processing_q_key)
if item:
# Record that we (this session id) are working on a key. Expire that
# note after the lease timeout.
# Note: if we crash at this line of the program, then GC will see no lease
# for this item an later return it to the main queue.
itemkey = self._itemkey(item)
self._db.setex(self._lease_key_prefix + itemkey, lease_secs, self._session)
return item
def complete(self, value):
"""Complete working on the item with 'value'.
If the lease expired, the item may not have completed, and some
other worker may have picked it up. There is no indication
of what happened.
"""
self._db.lrem(self._processing_q_key, 0, value)
# If we crash here, then the GC code will try to move the value, but it will
# not be here, which is fine. So this does not need to be a transaction.
itemkey = self._itemkey(value)
self._db.delete(self._lease_key_prefix + itemkey, self._session)
# TODO: add functions to clean up all keys associated with "name" when
# processing is complete.
# TODO: add a function to add an item to the queue. Atomically
# check if the queue is empty and if so fail to add the item
# since other workers might think work is done and be in the process
# of exiting.
# TODO(etune): move to my own github for hosting, e.g. github.com/erictune/rediswq-py and
# make it so it can be pip installed by anyone (see
# http://stackoverflow.com/questions/8247605/configuring-so-that-pip-install-can-work-from-github)
# TODO(etune): finish code to GC expired leases, and call periodically
# e.g. each time lease times out.

View File

@ -0,0 +1,23 @@
#!/usr/bin/env python
import time
import rediswq

host = "redis"
# Uncomment next two lines if you do not have Kube-DNS working.
# import os
# host = os.getenv("REDIS_SERVICE_HOST")

q = rediswq.RedisWQ(name="job2", host=host)
print("Worker with sessionID: " + q.sessionID())
print("Initial queue state: empty=" + str(q.empty()))
while not q.empty():
  item = q.lease(lease_secs=10, block=True, timeout=2)
  if item is not None:
    itemstr = item.decode("utf-8")
    print("Working on " + itemstr)
    time.sleep(10)  # Put your actual work here instead of sleep.
    q.complete(item)
  else:
    print("Waiting for work")
print("Queue empty, exiting")

View File

@ -86,7 +86,7 @@ $ kubectl get pods --sort-by=.status.containerStatuses[0].restartCount
$ kubectl get pods --selector=app=cassandra rc -o 'jsonpath={.items[*].metadata.labels.version}'
# Get ExternalIPs of all nodes
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=ExternalIP)].address}'
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath

View File

@ -14,7 +14,7 @@ A `PersistentVolume` (PV) is a piece of networked storage in the cluster that ha
A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g, can be mounted once read/write or many times read-only).
Please see the [detailed walkthrough with working examples](/docs/user-guide/persistent-volumes/).
Please see the [detailed walkthrough with working examples](/docs/user-guide/persistent-volumes/walkthrough/).
## Lifecycle of a volume and claim
@ -169,4 +169,4 @@ spec:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
```
```

View File

@ -15,7 +15,7 @@ replication controller are automatically replaced if they fail, get deleted, or
For example, your pods get re-created on a node after disruptive maintenance such as a kernel upgrade.
For this reason, we recommend that you use a replication controller even if your application requires
only a single pod. You can think of a replication controller as something similar to a process supervisor,
but rather then individual processes on a single node, the replication controller supervises multiple pods
but rather than individual processes on a single node, the replication controller supervises multiple pods
across multiple nodes.
Replication Controller is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in

View File

@ -94,10 +94,10 @@ in json or yaml format, and then create that object.
Each item must be base64 encoded:
```shell
$ echo "admin" | base64
YWRtaW4K
$ echo "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2RmCg==
$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
```
Now write a secret object that looks like this:
@ -109,8 +109,8 @@ metadata:
name: mysecret
type: Opaque
data:
password: MWYyZDFlMmU2N2RmCg==
username: YWRtaW4K
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
```
The data field is a map. Its keys must match
@ -138,8 +138,8 @@ Get back the secret created in the previous section:
$ kubectl get secret mysecret -o yaml
apiVersion: v1
data:
password: MWYyZDFlMmU2N2RmCg==
username: YWRtaW4K
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
kind: Secret
metadata:
creationTimestamp: 2016-01-22T18:41:56Z
@ -154,7 +154,7 @@ type: Opaque
Decode the password field:
```shell
$ echo "MWYyZDFlMmU2N2RmCg==" | base64 -D
$ echo "MWYyZDFlMmU2N2Rm" | base64 -D
1f2d1e2e67df
```
@ -214,7 +214,53 @@ You can package many files into one secret, or use many secrets, whichever is co
See another example of creating a secret and a pod that consumes that secret in a volume [here](/docs/user-guide/secrets/).
##### Consuming Secret Values from Volumes
**Projection of secret keys to specific paths**
We can also control the paths within the volume where Secret keys are projected.
You can use `spec.volumes[].secret.items` field to change target path of each key:
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "myns"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "items": [{
          "key": "username",
          "path": "my-group/my-username"
        }]
      }
    }]
  }
}
```
What will happen:
* `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`.
* `password` secret is not projected
If `spec.volumes[].secret.items` is used, only keys specified in `items` are projected.
To consume all keys from the secret, all of them must be listed in the `items` field.
All listed keys must exist in the corresponding secret. Otherwise, the volume is not created.
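For the pod above, the effect inside the container would look roughly like this (a sketch, reusing the `admin` username value from earlier):

```shell
# The username key is projected only at the custom path.
$ cat /etc/foo/my-group/my-username
admin
```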
**Consuming Secret Values from Volumes**
Inside the container that mounts a secret volume, the secret keys appear as
files and the secret values are base-64 decoded and stored inside these files.
@ -234,6 +280,11 @@ $ cat /etc/foo/password
The program in a container is responsible for reading the secret(s) from the
files.
**Mounted Secrets are updated automatically**
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well.
The update time depends on the kubelet syncing period.
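For example, a sketch of updating the `mysecret` object from earlier (the new password value is arbitrary) and letting the kubelet propagate it:

```shell
# Base64-encode a new password and swap it into the existing secret.
$ echo -n "newpassword" | base64
bmV3cGFzc3dvcmQ=
$ kubectl get secret mysecret -o yaml \
    | sed 's/MWYyZDFlMmU2N2Rm/bmV3cGFzc3dvcmQ=/' \
    | kubectl replace -f -
# After the next kubelet sync, /etc/foo/password in running pods reflects the new value.
```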
#### Using Secrets as Environment Variables
To use a secret in an environment variable in a pod:
@ -267,7 +318,7 @@ spec:
restartPolicy: Never
```
##### Consuming Secret Values from Environment Variables
**Consuming Secret Values from Environment Variables**
Inside a container that consumes a secret in an environment variables, the secret keys appear as
normal environment variables containing the base-64 decoded values of the secret data.
@ -285,7 +336,7 @@ $ echo $SECRET_PASSWORD
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
password to the Kubelet so it can pull a private image on behalf of your Pod.
##### Manually specifying an imagePullSecret
**Manually specifying an imagePullSecret**
Use of imagePullSecrets is described in the [images documentation](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod)
@ -338,23 +389,6 @@ reason it is not started yet. Once the secret is fetched, the kubelet will
create and mount a volume containing it. None of the pod's containers will
start until all the pod's volumes are mounted.
Once the kubelet has started a pod's containers, its secret volumes will not
change, even if the secret resource is modified. To change the secret used,
the original pod must be deleted, and a new pod (perhaps with an identical
`PodSpec`) must be created. Therefore, updating a secret follows the same
workflow as deploying a new container image. The `kubectl rolling-update`
command can be used ([man page](/docs/user-guide/kubectl/kubectl_rolling-update)).
The [`resourceVersion`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#concurrency-control-and-consistency)
of the secret is not specified when it is referenced.
Therefore, if a secret is updated at about the same time as pods are starting,
then it is not defined which version of the secret will be used for the pod. It
is not possible currently to check what resource version of a secret object was
used when a pod was created. It is planned that pods will report this
information, so that a replication controller restarts ones using an old
`resourceVersion`. In the interim, if this is a concern, it is recommended to not
update the data of existing secrets, but to create new ones with distinct names.
## Use cases
### Use-Case: Pod with ssh keys

View File

@ -111,6 +111,8 @@ secrets/build-robot-secret
Now you can confirm that the newly built secret is populated with an API token for the "build-robot" service account.
Any tokens for non-existent service accounts will be cleaned up by the token controller.
```shell
$ kubectl describe secrets/build-robot-secret
Name: build-robot-secret

View File

@ -6,6 +6,50 @@ exposure to the internet. When exposing a service to the external world, you ma
one or more ports in these firewalls to serve traffic. This document describes this process, as
well as any provider specific details that may be necessary.
### Restrict Access For LoadBalancer Service
When using a Service with `spec.type: LoadBalancer`, you can specify the IP ranges that are allowed to access the load balancer
by using `spec.loadBalancerSourceRanges`. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions.
This feature is currently supported on Google Compute Engine, Google Container Engine and AWS. This field will be ignored if the cloud provider does not support the feature.
Assuming 10.0.0.0/8 is the internal subnet, in the following example a load balancer will be created that is only accessible to cluster-internal IPs.
This will not allow clients from outside of your Kubernetes cluster to access the load balancer.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.0.0/8
```
In the following example, a load balancer will be created that is only accessible to clients with the IP addresses 130.211.204.1 and 130.211.204.2.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 130.211.204.1/32
  - 130.211.204.2/32
```
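After creating either Service, you can confirm that the ranges were recorded on the object, for example:

```shell
# Print the configured source ranges for the "myapp" Service.
$ kubectl get service myapp -o jsonpath='{.spec.loadBalancerSourceRanges}{"\n"}'
```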
### Google Compute Engine
When using a Service with `spec.type: LoadBalancer`, the firewall will be
@ -48,4 +92,4 @@ This will be fixed in an upcoming release of Kubernetes.
### Other cloud providers
Coming soon.
Coming soon.

View File

@ -165,7 +165,7 @@ will be proxied to one of the `Service`'s backend `Pods` (as reported in
`Endpoints`). Which backend `Pod` to use is decided based on the
`SessionAffinity` of the `Service`. Lastly, it installs iptables rules which
capture traffic to the `Service`'s `clusterIP` (which is virtual) and `Port`
and redirects that traffic to the proxy port which proxies the a backend `Pod`.
and redirects that traffic to the proxy port which proxies the backend `Pod`.
The net result is that any traffic bound for the `Service`'s IP:Port is proxied
to an appropriate backend without the clients knowing anything about Kubernetes
@ -193,7 +193,10 @@ default is `"None"`).
As with the userspace proxy, the net result is that any traffic bound for the
`Service`'s IP:Port is proxied to an appropriate backend without the clients
knowing anything about Kubernetes or `Services` or `Pods`. This should be
faster and more reliable than the userspace proxy.
faster and more reliable than the userspace proxy. However, unlike the
userspace proxier, the iptables proxier cannot automatically retry another
`Pod` if the one it initially selects does not respond, so it depends on
having working [readiness probes](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks).
![Services overview diagram for iptables proxy](/images/docs/services-iptables-overview.svg)
@ -423,6 +426,44 @@ with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not s
an ephemeral IP will be assigned to the loadBalancer. If the `loadBalancerIP` is specified, but the
cloud provider does not support the feature, the field will be ignored.
#### SSL support on AWS
For partial SSL support on clusters running on AWS, starting with 1.3 two
annotations can be added to a `LoadBalancer` service:
```
"metadata": {
"name": "my-service",
"annotations": {
"service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
}
},
```
The first specifies which certificate to use. It can be either a
certificate from a third party issuer that was uploaded to IAM or one created
within AWS Certificate Manager.
```
"metadata": {
"name": "my-service",
"annotations": {
"service.beta.kubernetes.io/aws-load-balancer-backend-protocol=": "(https|http|ssl|tcp)"
}
},
```
The second annotation specifies which protocol a pod speaks. For HTTPS and
SSL, the ELB will expect the pod to authenticate itself over the encrypted
connection.
HTTP and HTTPS will select layer 7 proxying: the ELB will terminate
the connection with the user, parse headers and inject the `X-Forwarded-For`
header with the user's IP address (pods will only see the IP address of the
ELB at the other end of its connection) when forwarding requests.
TCP and SSL will select layer 4 proxying: the ELB will forward traffic without
modifying the headers.
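If you prefer not to edit the Service manifest by hand, the same annotations can be applied with `kubectl annotate` (a sketch; the certificate ARN is the placeholder used above):

```shell
# Attach the certificate and declare that the backend pods speak plain HTTP.
$ kubectl annotate service my-service \
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012 \
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol=http
```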
### External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes services can be exposed on those

View File

@ -12,6 +12,10 @@
"selector": {
"app": "example"
},
"type": "LoadBalancer"
"type": "LoadBalancer",
"loadBalancerSourceRanges": [
"10.180.0.0/16",
"10.245.0.0/24"
]
}
}

View File

@ -4,9 +4,11 @@ metadata:
name: myapp
spec:
ports:
-
port: 8765
- port: 8765
targetPort: 9376
selector:
app: example
type: LoadBalancer
loadBalancerSourceRanges:
- 10.180.0.0/16
- 10.245.0.0/24

View File

@ -51,7 +51,11 @@ YAML or as JSON, and supports the following fields:
"selector": {
string: string
},
"type": "LoadBalancer"
"type": "LoadBalancer",
"loadBalancerSourceRanges": [
"10.180.0.0/16",
"10.245.0.0/24"
]
}
}
```
@ -71,6 +75,10 @@ Required fields are:
* `type`: Optional. If the type is `LoadBalancer`, sets up a [network load balancer](/docs/user-guide/load-balancer/)
for your service. This provides an externally-accessible IP address that
sends traffic to the correct port on your cluster nodes.
* `loadBalancerSourceRanges`: Optional. Must be used with the `LoadBalancer` type.
If specified and supported by the cloud provider, this will restrict traffic
such that the load balancer will be accessible only to clients from the specified IP ranges.
This field will be ignored if the cloud-provider does not support the feature.
For the full `service` schema see the
[Kubernetes api reference](/docs/api-reference/v1/definitions/#_v1_service).

View File

@ -4,8 +4,7 @@ metadata:
name: myapp
spec:
ports:
-
port: 8765
- port: 8765
targetPort: 9376
selector:
app: example

View File

@ -1,32 +0,0 @@
---
---
By default, the Kubernetes Dashboard is deployed as a cluster addon. For 1.2 clusters, it is enabled by default.
If you want to manually install it, visit
`https://<kubernetes-master>/ui`, which redirects to
`https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard`.
If you find that you're not able to access the Dashboard, it may be because the
`kubernetes-dashboard` service has not been started on your cluster. In that case,
you can start it manually as follows:
```shell
kubectl create -f cluster/addons/dashboard/dashboard-controller.yaml --namespace=kube-system
kubectl create -f cluster/addons/dashboard/dashboard-service.yaml --namespace=kube-system
```
Normally, this should be taken care of automatically by the
[`kube-addons.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kube-addons/kube-addons.sh)
script that runs on the master. Release notes and development versions of the Dashboard can be
found at https://github.com/kubernetes/dashboard/releases.
## Walkthrough
For information on how to use the Dashboard, take the [Dashboard tour](/docs/user-guide/ui/).
## More Information
For more information, see the
[Kubernetes Dashboard repository](https://github.com/kubernetes/dashboard).

View File

@ -2,48 +2,82 @@
---
Kubernetes has a web-based user interface that allows you to deploy containerized
applications to a Kubernetes cluster, troubleshoot them, and manage the cluster itself.
Dashboard (the web-based user interface of Kubernetes) allows you to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage the cluster itself and its resources. You can use it for getting an overview of applications running on the cluster, as well as for creating or modifying individual Kubernetes resources and workloads, such as Daemon sets, Pet sets, Replica sets, Jobs, Replication controllers and corresponding Services, or Pods.
By default, the Kubernetes Dashboard is deployed as a cluster addon. It is enabled by default in Kubernetes 1.2 clusters. Click [here](/docs/user-guide/ui-access/) to learn more about the Dashboard access.
## Using the Dashboard
The Dashboard can be used to get an overview of applications running on the cluster, and to provide information on any errors that have occurred. You can also inspect your replication controllers and corresponding services, change the number of replicated Pods, and deploy new applications using a deploy wizard.
Dashboard also provides information on the state of Pods, Replication controllers, etc. and on any errors that might have occurred. You can inspect and manage the Kubernetes resources, as well as your deployed containerized applications. You can also change the number of replicated Pods, delete Pods, and deploy new applications using a deploy wizard.
When accessing the Dashboard on an empty cluster for the first time, the Welcome page is displayed. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by default in the `kube-system` [namespace](/docs/admin/namespaces/) of your cluster, for example monitoring applications such as Heapster.
By default, Dashboard is installed as a cluster addon. It is enabled by default as of Kubernetes 1.2 clusters.
* TOC
{:toc}
## Dashboard access
Navigate in your Browser to the following URL:
```
https://<kubernetes-master>/ui
```
This redirects to the following URL:
```
https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
```
The Dashboard UI lives in the `kube-system` [namespace](/docs/admin/namespaces/), but shows all resources from all namespaces in your environment.
If you find that you are not able to access Dashboard, you can install and open the latest stable release by running the following command:
```
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```
Then, navigate to
```
https://<kubernetes-master>/ui
```
If you are prompted for a password, use the following command to find it:
```
kubectl config view
```
## Welcome page
When accessing Dashboard on an empty cluster for the first time, the Welcome page is displayed. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by **default** in the `kube-system` [namespace](/docs/admin/namespaces/) of your cluster, for example monitoring applications such as Heapster.
![Kubernetes Dashboard welcome page](/images/docs/ui-dashboard-zerostate.png)
### Deploying applications
## Deploying containerized applications
The Dashboard lets you create and deploy a containerized application as a Replication Controller with a simple wizard:
Dashboard lets you create and deploy a containerized application as a Replication Controller and corresponding Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing the required information.
To access the deploy wizard from the Welcome page, click the respective button. To access the wizard at a later point in time, click the **DEPLOY APP** or **UPLOAD YAML** link in the upper right corner of any page listing workloads.
![Deploy wizard](/images/docs/ui-dashboard-deploy-simple.png)
### Specifying application details
![Kubernetes Dashboard deploy form](/images/docs/ui-dashboard-deploy-simple.png)
#### Specifying application details
The wizard expects that you provide the following information:
The deploy wizard expects that you provide the following information:
- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Replication Controller and Service, if any, that will be deployed.
The application name must be unique within the selected Kubernetes [namespace](/docs/admin/namespaces/). It must start with a lowercase character, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters.
The application name must be unique within the selected Kubernetes [namespace](/docs/admin/namespaces/). It must start and end with a lowercase character, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored.
- **Container image** (mandatory): The URL of a public Docker [container image](/docs/user-guide/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub). The container image specification must end with a colon.
- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer.
A [Replication Controller](/docs/user-guide/replication-controller/) will be created to maintain the desired number of Pods across your cluster.
- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a [Service](http://kubernetes.io/docs/user-guide/services/) onto an external, maybe public IP address outside of your cluster (external Service). For external Services, you may need to open up one or more ports to do so. Find more details [here](/docs/user-guide/services-firewalls/).
Other Services that are only visible from inside the cluster are called internal Services.
Irrespective of the Service type, if you choose to create a Service and your container listens on a port (incoming), you need to specify two ports. The Service will be created mapping the port (incoming) to the target port seen by the container, and will route traffic to your deployed Pods. Supported protocols are TCP and UDP. The internal DNS name for this Service will be the value you specified as application name above. A sketch of such a Service is shown after this list.
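For illustration only, the Service resulting from the port mapping described above is roughly equivalent to a manifest like the following sketch; the name, selector, and port numbers are placeholders rather than values Dashboard generates verbatim:

```
# Sketch of the kind of Service created for the port mapping; placeholder values.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # the application name doubles as the internal DNS name
spec:
  type: LoadBalancer      # assumption for an external Service; omit for an internal one
  selector:
    app: my-app           # routes traffic to the deployed Pods
  ports:
  - protocol: TCP         # TCP or UDP
    port: 80              # port (incoming)
    targetPort: 8080      # target port seen by the container
```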
If needed, you can expand the **Advanced options** section where you can specify more settings:
![Deploy wizard advanced options](/images/docs/ui-dashboard-deploy-more.png)
- **Description**: The text you enter here will be added as an [annotation](/docs/user-guide/annotations/) to the Replication Controller and displayed in the application's details.
- **Labels**: You can specify additional [labels](/docs/user-guide/labels/) to be applied to the Replication Controller, Service (if any), and Pods, for example:
```
environment=pod
track=stable
```
- **Namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](/docs/admin/namespaces/). They let you partition resources into logically named groups.
Dashboard offers all available namespaces in a dropdown list, and allows you to create a new namespace. The namespace name may contain a maximum of 63 alphanumeric characters and dashes (-).
If the namespace is created successfully, it is selected by default. If the creation fails, the first namespace is selected.
- **Image Pull Secret**: If the specified Docker container image is private, it may require [pull secret](/docs/user-guide/secrets/) credentials.
Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) file. The secret name may consist of a maximum of 253 characters.
If the image pull secret is created successfully, it is selected by default. If the creation fails, no secret is applied.
- **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/admin/limitrange/) for the container. By default, Pods run with unbounded CPU and memory limits.
- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/user-guide/containers/#containers-and-commands). You can use the command options and arguments to override the default.
- **Run as privileged**: This setting determines whether processes in [privileged containers](/docs/user-guide/pods/#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.
- **Environment variables**: Kubernetes exposes Services through [environment variables](http://kubernetes.io/docs/user-guide/environment-guide/). You can compose environment variables or pass arguments to your commands using the values of environment variables. They can be used in applications to find a Service. Values can reference other variables using the `$(VAR_NAME)` syntax, as shown in the sketch after this list.
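Taken together, the advanced options correspond to fields in the Pod's container specification. The fragment below is a hypothetical sketch showing where each option ends up; the image, names, and values are placeholders, not output generated by Dashboard:

```
# Hypothetical Pod template fragment; image, names, and values are placeholders.
spec:
  containers:
  - name: my-app
    image: example/my-app:1.0
    command: ["/bin/my-app"]               # Run command
    args: ["--listen=$(APP_PORT)"]         # Run command arguments; $(VAR_NAME) is expanded
    env:
    - name: APP_PORT
      value: "8080"
    - name: APP_URL
      value: "http://localhost:$(APP_PORT)"   # references another variable
    resources:
      limits:
        cpu: "0.5"                         # CPU requirement (cores)
        memory: 512Mi                      # Memory requirement (MiB)
    securityContext:
      privileged: true                     # Run as privileged
  imagePullSecrets:
  - name: my-pull-secret                   # Image Pull Secret
```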
### Uploading a YAML or JSON file
Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files, using the Kubernetes [API](http://kubernetes.io/docs/api/) resource schemas as the configuration schemas.
As an alternative to specifying application details in the deploy wizard, you can define your Replication Controllers and Services in YAML or JSON files, and upload them through the wizard:
![Deploy wizard file upload](/images/docs/ui-dashboard-deploy-file.png)
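For example, a file you upload might define a Replication Controller along the following lines. This is a hypothetical nginx example for illustration, not a file produced by Dashboard:

```
# Hypothetical uploadable manifest: a Replication Controller running three nginx Pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
```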
## Managing resources
### List view
As soon as applications are running on your cluster, Dashboard's initial view defaults to showing all resources available in all namespaces in a list view, for example:
![Workloads view](/images/docs/ui-dashboard-workloadview.png)
For every resource, the list view shows the following information:
* Name of the resource
* All labels assigned to the resource
* Number of pods assigned to the resource
* Age, i.e. the amount of time that has passed since the resource was created
* Docker container image
To filter the resources and only show those of a specific namespace, select it from the dropdown list in the right corner of the title bar:
![Namespace selector](/images/docs/ui-dashboard-namespace.png)
### Details view
When you click a resource, the details view opens, for example:
![Details view](/images/docs/ui-dashboard-detailsview.png)
The **OVERVIEW** tab shows the resource details as well as the Pods that belong to the resource.
The **EVENTS** tab can be useful for debugging applications.
To go back to the workloads overview, click the Kubernetes logo.
### Workload categories
Workloads are categorized as follows:
* [Daemon Sets](http://kubernetes.io/docs/admin/daemons/) which ensure that all or some of the nodes in your cluster run a copy of a Pod.
* [Deployments](http://kubernetes.io/docs/user-guide/deployments/) which provide declarative updates for Pods and Replica Sets (the next-generation [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/)).
The Details page for a Deployment lists resource details, as well as new and old Replica Sets. The resource details also include information on the [RollingUpdate](http://kubernetes.io/docs/user-guide/rolling-updates/) strategy, if any.
* [Pet Sets](http://kubernetes.io/docs/user-guide/load-balancer/) (nominal Services, also known as load-balanced Services) for legacy application support.
* [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/) for using label selectors.
* [Jobs](http://kubernetes.io/docs/user-guide/jobs/) for creating one or more Pods, ensuring that a specified number of them successfully terminate, and tracking the completions.
* [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/)
* [Pods](http://kubernetes.io/docs/user-guide/pods/)
You can display the resources of a specific category in two ways:
* Click the category name, e.g. **Deployments**
* Edit the Dashboard URL and add the name of a desired category. For example, to display the list of Replication Controllers, specify the following URL:
```
http://<your_host>:9090/#/replicationcontroller
```
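Each category corresponds to a regular Kubernetes resource that you could also create by uploading a manifest. As a rough sketch, a minimal Deployment might look like the following; the API version shown was current around Kubernetes 1.3 and may differ on your cluster, and the name and image are placeholders:

```
# Minimal, hypothetical Deployment manifest; adjust apiVersion, name, and image as needed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate     # shown on the Deployment's details page, if set
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
```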
### Actions
Every list view offers an action menu to the right of the listed resources. The related details view provides the same actions as buttons in the upper right corner of the page.
* **Edit**
Opens a text editor so that you can instantly view or update the JSON or YAML file of the respective resource.
* **Delete**
After confirmation, deletes the respective resource.
When deleting a Replication Controller, the Pods managed by it are also deleted. You have the option to also delete Services related to the Replication Controller.
* **View details**
For Replication Controllers only. Takes you to the details page where you can view more information about the Pods that make up your application.
* **Scale**
For Replication Controllers only. Changes the number of Pods your application runs in. The respective Replication Controller will be updated to reflect the newly specified number (see the sketch below). Be aware that setting a high number of Pods may decrease the performance of the cluster or of Dashboard itself.
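When you use **Edit** or **Scale** on a Replication Controller, the field that ultimately changes is the replica count in its specification. A sketch of the relevant fragment, with a placeholder name:

```
# Fragment of a Replication Controller as shown by the Edit action;
# Scale effectively updates spec.replicas. The name is a placeholder.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  replicas: 5   # the value changed by the Scale action
```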
## More information
For more information, see the
[Kubernetes Dashboard repository](https://github.com/kubernetes/dashboard).