Conflicts:
	README.md
	_config.yml
reviewable/pr1478/r1
steveperry-53 2016-10-15 13:31:14 -07:00
commit 81cb7792c1
206 changed files with 2991 additions and 2599 deletions

404.md
View File

@ -2,67 +2,9 @@
layout: docwithnav
title: 404 Error!
permalink: /404.html
no_canonical: true
---
<script language="JavaScript">
$( document ).ready(function() {
var oldURLs=["/README.md","/README.html",".html",".md","/v1.1/","/v1.0/"];
var fwdDirs=["examples/","cluster/","docs/devel","docs/design"];
var doRedirect = false;
var notHere = false;
var forwardingURL=window.location.href;
if (forwardingURL.indexOf("third_party/swagger-ui") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/kubernetes/third_party/swagger-ui/");
}
if (forwardingURL.indexOf("resource-quota") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/admin/resourcequota/");
}
if (forwardingURL.indexOf("horizontal-pod-autoscaler") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/");
}
if (forwardingURL.indexOf("docs/roadmap") > -1)
{
notHere = true;
window.location.replace("https://github.com/kubernetes/kubernetes/milestones/");
}
if (forwardingURL.indexOf("api-ref/") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/api/");
}
if (forwardingURL.indexOf("docs/user-guide/overview") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/whatisk8s/");
}
for (i=0;i<fwdDirs.length;i++) {
if (forwardingURL.indexOf(fwdDirs[i]) > -1)
{
var urlPieces = forwardingURL.split(fwdDirs[i]);
var newURL = "https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/" + fwdDirs[i] + urlPieces[1];
notHere = true;
window.location.replace(newURL);
}
}
if (!notHere) {
for (i=0;i<oldURLs.length;i++) {
if (forwardingURL.indexOf(oldURLs[i]) > -1)
{
doRedirect=true;
forwardingURL=forwardingURL.replace(oldURLs[i],"/");
}
}
if (doRedirect)
{
window.location.replace(forwardingURL);
};
}
});
</script>
<script src="/js/redirects.js"></script>
Sorry, this page was not found. :(

View File

@ -19,6 +19,72 @@ The staging site for the next upcoming Kubernetes release is here: [http://kuber
The staging site reflects the current state of what's been merged in the release branch, or in other words, what the docs will look like for the next upcoming release. It's automatically updated as new PRs get merged.
## Automatic Staging for Pull Requests
When you create a pull request (either against master or the upcoming release), your changes are staged on a custom subdomain on Netlify so that you can see them in rendered form and verify that everything is correct before the PR is merged. To view your changes:
- Scroll down to the PR's list of Automated Checks
- Click "Show All Checks"
- Look for "deploy/netlify"; you'll see "Deploy Preview Ready!" if staging was successful
- Click "Details" to bring up the staged site and navigate to your changes
## Release Branch Staging
The Kubernetes site maintains staged versions at a subdomain provided by Netlify. Every PR for the Kubernetes site, either against the master branch or the upcoming release branch, is staged automatically.
The staging site for the next upcoming Kubernetes release is here: [http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/)
The staging site reflects the current state of what's been merged in the release branch, or in other words, what the docs will look like for the next upcoming release. It's automatically updated as new PRs get merged.
## Staging the site locally (using Docker)
Don't like installing stuff? Download and run a local staging server with a single `docker run` command.
git clone https://github.com/kubernetes/kubernetes.github.io.git
cd kubernetes.github.io
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 gcr.io/google-samples/k8sdocs:1.0
Then visit [http://localhost:4000](http://localhost:4000) to see our site. Any changes you make on your local machine will be automatically staged.
If you're interested you can view [the Dockerfile for this image](https://github.com/kubernetes/kubernetes.github.io/blob/master/staging-container/Dockerfile).
## Staging the site locally (from scratch setup)
The commands below set up your environment for running GitHub Pages locally, so that any edits you make
are viewable on a lightweight web server running on your local machine.
Once set up, this is typically the fastest way (by far) to iterate on docs changes and see them staged, but it does involve several install steps that take a while to complete, and it makes system-wide modifications.
Install Ruby 2.2 or higher. If you're on Linux, run these commands:
apt-get install software-properties-common
apt-add-repository ppa:brightbox/ruby-ng
apt-get install ruby2.2
apt-get install ruby2.2-dev
* If you're on a Mac, follow [these instructions](https://gorails.com/setup/osx/).
* If you're on a Windows machine you can use the [Ruby Installer](http://rubyinstaller.org/downloads/). During the installation make sure to check the option for *Add Ruby executables to your PATH*.
The remainder of the steps should work the same across operating systems.
To confirm you've installed Ruby correctly, run `gem --version` at the command prompt; you should get a response with your version number. Likewise, you can confirm you have Git installed properly by running `git --version`, which will respond with your version of Git.
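For example, a quick sanity check looks something like this (the version numbers you see will differ):

```shell
# Both commands should print a version number if the tools are on your PATH.
gem --version
git --version
```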
Install the GitHub Pages package, which includes Jekyll:
gem install github-pages
Clone our site:
git clone https://github.com/kubernetes/kubernetes.github.io.git
Make any changes you want. Then, to see your changes locally:
cd kubernetes.github.io
jekyll serve
Your copy of the site will then be viewable at: [http://localhost:4000](http://localhost:4000)
(or wherever Jekyll tells you).
## GitHub help
If you're a bit rusty with git/GitHub, you might want to read

View File

@ -18,7 +18,7 @@ defaults:
values:
version: "v1.3"
githubbranch: "master"
docsbranch: "release-1.3"
docsbranch: "master"
-
scope:
path: "docs"

View File

@ -4,5 +4,6 @@ tocs:
- tasks
- concepts
- reference
- tools
- samples
- support

View File

@ -163,10 +163,10 @@ toc:
path: /docs/getting-started-guides/gce/
- title: Running Kubernetes on AWS EC2
path: /docs/getting-started-guides/aws/
- title: Running Kubernetes on Azure
path: /docs/getting-started-guides/azure/
- title: Running Kubernetes on Azure (Weave-based)
path: /docs/getting-started-guides/coreos/azure/
- title: Running Kubernetes on Azure (Flannel-based)
path: /docs/getting-started-guides/azure/
- title: Running Kubernetes on CenturyLink Cloud
path: /docs/getting-started-guides/clc/
- title: Running Kubernetes on IBM SoftLayer
@ -252,6 +252,8 @@ toc:
path: /docs/admin/
- title: Cluster Management Guide
path: /docs/admin/cluster-management/
- title: kubeadm reference
path: /docs/admin/kubeadm/
- title: Installing Addons
path: /docs/admin/addons/
- title: Sharing a Cluster with Namespaces

View File

@ -63,7 +63,7 @@ toc:
- title: kubectl Commands
section:
- title: kubectl
path: /docs/user-guide/kubectl/kubectl/
path: /docs/user-guide/kubectl/
- title: kubectl annotate
path: /docs/user-guide/kubectl/kubectl_annotate/
- title: kubectl api-versions
@ -230,6 +230,8 @@ toc:
path: /docs/user-guide/services/
- title: Service Accounts
path: /docs/user-guide/service-accounts/
- title: Third Party Resources
path: /docs/user-guide/thirdpartyresources/
- title: Volumes
path: /docs/user-guide/volumes/

View File

@ -10,4 +10,8 @@ toc:
section:
- title: Using an HTTP Proxy to Access the Kubernetes API
path: /docs/tasks/access-kubernetes-api/http-proxy-access-api/
- title: Administering a Cluster
section:
- title: Assigning Pods to Nodes
path: /docs/tasks/administer-cluster/assign-pods-nodes/

_data/tools.yml Normal file
View File

@ -0,0 +1,4 @@
bigheader: "Tools"
toc:
- title: Tools
path: /docs/tools/

View File

@ -2,59 +2,49 @@ bigheader: "Tutorials"
toc:
- title: Tutorials
path: /docs/tutorials/
- title: Getting Started
- title: Kubernetes Basics
section:
- title: Overview
path: /docs/tutorials/kubernetes-basics/
- title: 1. Create a Cluster
section:
- title: Creating a Cluster
path: /docs/tutorials/getting-started/create-cluster/
- title: Using Minikube to Create a Cluster
path: /docs/tutorials/getting-started/cluster-intro/
path: /docs/tutorials/kubernetes-basics/cluster-intro/
- title: Interactive Tutorial - Creating a Cluster
path: /docs/tutorials/getting-started/cluster-interactive/
path: /docs/tutorials/kubernetes-basics/cluster-interactive/
- title: 2. Deploy an App
section:
- title: Deploying an App
path: /docs/tutorials/getting-started/deploy-app/
- title: Using kubectl to Create a Deployment
path: /docs/tutorials/getting-started/deploy-intro/
path: /docs/tutorials/kubernetes-basics/deploy-intro/
- title: Interactive Tutorial - Deploying an App
path: /docs/tutorials/getting-started/deploy-interactive/
path: /docs/tutorials/kubernetes-basics/deploy-interactive/
- title: 3. Explore Your App
section:
- title: Exploring Your App
path: /docs/tutorials/getting-started/explore-app/
- title: Viewing Pods and Nodes
path: /docs/tutorials/getting-started/explore-intro/
path: /docs/tutorials/kubernetes-basics/explore-intro/
- title: Interactive Tutorial - Exploring Your App
path: /docs/tutorials/getting-started/explore-interactive/
path: /docs/tutorials/kubernetes-basics/explore-interactive/
- title: 4. Expose Your App Publicly
section:
- title: Exposing Your App Publicly
path: /docs/tutorials/getting-started/expose-app/
- title: Using a Service to Expose Your App
path: /docs/tutorials/getting-started/expose-intro/
path: /docs/tutorials/kubernetes-basics/expose-intro/
- title: Interactive Tutorial - Exposing Your App
path: /docs/tutorials/getting-started/expose-interactive/
path: /docs/tutorials/kubernetes-basics/expose-interactive/
- title: 5. Scale Your App
section:
- title: Scaling Your App
path: /docs/tutorials/getting-started/scale-app/
- title: Running Multiple Instances of Your App
path: /docs/tutorials/getting-started/scale-intro/
path: /docs/tutorials/kubernetes-basics/scale-intro/
- title: Interactive Tutorial - Scaling Your App
path: /docs/tutorials/getting-started/scale-interactive/
path: /docs/tutorials/kubernetes-basics/scale-interactive/
- title: 6. Update Your App
section:
- title: Updating Your App
path: /docs/tutorials/getting-started/update-app/
- title: Performing a Rolling Update
path: /docs/tutorials/getting-started/update-intro/
path: /docs/tutorials/kubernetes-basics/update-intro/
- title: Interactive Tutorial - Updating Your App
path: /docs/tutorials/getting-started/update-interactive/
path: /docs/tutorials/kubernetes-basics/update-interactive/
- title: Stateless Applications
section:
- title: Running a Stateless Application Using a Deployment
path: /docs/tutorials/stateless-application/run-stateless-application-deployment/
- title: Exposing an External IP Address Using a Service
- title: Using a Service to Access an Application in a Cluster
path: /docs/tutorials/stateless-application/expose-external-ip-address-service/

View File

@ -3,40 +3,40 @@
margin-top: 1em !important;
}
#caseStudies p {
.gridPage p {
color: rgb(26,26,26) !important;
margin-left: 0 !important;
padding-left: 0 !important;
font-weight: 300 !important;
}
#caseStudies #mainContent {
.gridPage #mainContent {
padding: 0;
}
#caseStudies #mainContent .content {
.gridPage #mainContent .content {
padding-top: 0;
}
#caseStudies main {
.gridPage main {
max-width: 1100px !important;
}
#caseStudies .content {
.gridPage .content {
position: relative;
margin: 0 auto 50px;
max-width: 90%;
}
#caseStudies .content p {
.gridPage .content p {
line-height: 24px !important;
}
#caseStudies .content h3 {
.gridPage .content h3 {
padding: 0 !important;
}
#caseStudies #hero h5 {
.gridPage #hero h5 {
padding-left: 20px;
margin: 0;
}
@ -67,7 +67,7 @@
left: 0;
}
#caseStudies #mainContent .content .case-study p {
.gridPage #mainContent .content .case-study p {
font-family: "Roboto", sans-serif;
font-size: 16px;
padding: 0;
@ -77,13 +77,13 @@
font-style: italic;
}
#caseStudies #video {
.gridPage #video {
background: #f9f9f9;
height: auto;
/*height: 340px;*/
}
#caseStudies #video main {
.gridPage #video main {
position: relative;
max-width: 900px !important;
height: 100%;
@ -93,19 +93,19 @@
padding: 50px 20px;
}
#caseStudies #video main > div {
.gridPage #video main > div {
width: 50%;
}
#caseStudies #video main #zulilyLogo {
.gridPage #video main #zulilyLogo {
width: 100px;
}
#caseStudies #video main img {
.gridPage #video main img {
max-width: 100%;
}
#caseStudies #video h3 {
.gridPage #video h3 {
font-size: 32px;
font-weight: 300;
line-height: 38px;
@ -113,57 +113,57 @@
margin: 0 0 1em 0;
}
#caseStudies #video p {
.gridPage #video p {
margin: 0;
}
#caseStudies #video p.attrib {
.gridPage #video p.attrib {
margin-bottom: 20px;
}
#caseStudies #video button > h6 {
.gridPage #video button > h6 {
font-size: 18px;
font-weight: 500;
margin: 1em 0;
color: #326de6;
}
#caseStudies #users {
.gridPage #users {
padding: 50px;
}
#caseStudies #users main {
.gridPage #users main {
max-width: 1150px !important;
}
#caseStudies #users main h3 {
.gridPage #users main h3 {
padding-left: 20px;
margin-bottom: 20px;
}
#caseStudies #usersGrid {
.gridPage #usersGrid {
position: relative;
display: flex;
flex-wrap: wrap;
justify-content: center;
}
#caseStudies #usersGrid a {
.gridPage #usersGrid a {
display: inline-block;
margin: 5px;
}
#caseStudies #usersGrid a img {
.gridPage #usersGrid a img {
box-shadow: 1px 1px 2px transparent;
transition: box-shadow 0.25s;
}
#caseStudies #usersGrid a img:hover {
.gridPage #usersGrid a img:hover {
box-shadow: 1px 1px 2px #cccccc;
}
#caseStudies #usersGrid a:last-child img,
#caseStudies #usersGrid a:last-child img:hover {
.gridPage #usersGrid a:last-child img,
.gridPage #usersGrid a:last-child img:hover {
box-shadow: 1px 1px 2px transparent;
}
@ -173,12 +173,12 @@
box-shadow: 1px 2px 2px #dddddd;
}
#caseStudies .feature {
.gridPage .feature {
position: relative;
padding: 20px 0 20px 242px;
}
#caseStudies .feature img {
.gridPage .feature img {
position: absolute;
top: 20px;
left: 0;
@ -225,7 +225,7 @@
margin-bottom: 0.5em;
}
#caseStudies .feature p.quote {
.gridPage .feature p.quote {
font-size: 20px;
line-height: 28px !important;
}
@ -250,20 +250,20 @@
}
@media screen and (max-width: 900px){
#caseStudies #video main {
.gridPage #video main {
flex-direction: column;
align-items: center;
}
#caseStudies #video main > div {
.gridPage #video main > div {
width: 400px;
}
#caseStudies #video main > div + div {
.gridPage #video main > div + div {
margin-top: 30px;
}
#caseStudies #video h3 {
.gridPage #video h3 {
max-width: 100%;
}
}
@ -282,12 +282,12 @@
transform: translateX(-50%);
}
#caseStudies .feature {
.gridPage .feature {
margin-top: 50px;
padding: 180px 0 0;
}
#caseStudies .feature img {
.gridPage .feature img {
top: 0;
left: 50%;
transform: translateX(-50%);
@ -295,12 +295,12 @@
}
@media screen and (max-width: 480px){
#caseStudies #hero {
.gridPage #hero {
padding-right: 20px;
padding-left: 20px;
}
#caseStudies #video main > div {
.gridPage #video main > div {
width: 80%;
min-width: 280px;
}

View File

@ -2,7 +2,7 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="canonical" href="http://kubernetes.io{{page.url}}" />
{% if !page.no_canonical %}<link rel="canonical" href="http://kubernetes.io{{page.url}}" />{% endif %}
<link rel="shortcut icon" type="image/png" href="/images/favicon.png">
<link href='https://fonts.googleapis.com/css?family=Roboto:400,100,100italic,300,300italic,400italic,500,500italic,700,700italic,900,900italic' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href='https://fonts.googleapis.com/css?family=Roboto+Mono' type='text/css'>

View File

@ -16,6 +16,7 @@
<li><a href="/docs/tasks/" {% if site.data[foundTOC].bigheader == "Tasks" %}class="YAH"{% endif %}>TASKS</a></li>
<li><a href="/docs/concepts/" {% if site.data[foundTOC].bigheader == "Concepts" %}class="YAH"{% endif %}>CONCEPTS</a></li>
<li><a href="/docs/reference" {% if site.data[foundTOC].bigheader == "Reference Documentation" %}class="YAH"{% endif %}>REFERENCE</a></li>
<li><a href="/docs/tools" {% if site.data[foundTOC].bigheader == "Tools" %}class="YAH"{% endif %}>TOOLS</a></li>
<li><a href="/docs/samples" {% if site.data[foundTOC].bigheader == "Samples" %}class="YAH"{% endif %}>SAMPLES</a></li>
<li><a href="/docs/troubleshooting/" {% if site.data[foundTOC].bigheader == "Support" %}class="YAH"{% endif %}>SUPPORT</a></li>
</ul>
@ -48,7 +49,6 @@
(function(d,c,j){if(!document.getElementById(j)){var pd=d.createElement(c),s;pd.id=j;pd.src=('https:'==document.location.protocol)?'https://polldaddy.com/js/rating/rating.js':'http://i0.poll.fm/js/rating/rating.js';s=document.getElementsByTagName(c)[0];s.parentNode.insertBefore(pd,s);}}(document,'script','pd-rating-js'));
</script>
<a href="" onclick="window.open('https://github.com/kubernetes/kubernetes.github.io/issues/new?title=Issue%20with%20' +
window.location.pathname + '&body=Issue%20with%20' +
window.location.pathname)" class="button issue">Create Issue</a>
<a href="/editdocs#{{ page.path }}" class="button issue">Edit This Page</a>
{% endif %}

View File

@ -1131,7 +1131,7 @@ $feature-box-div-margin-bottom: 40px
// Community
#community, #caseStudies
#community, .gridPage
&.open-nav, &.flip-nav
.logo
background-image: url(/images/nav_logo2.svg)
@ -1340,3 +1340,4 @@ $feature-box-div-margin-bottom: 40px
//
//
//
//

View File

@ -270,7 +270,7 @@ $video-section-height: 550px
#community, #caseStudies
#community, .gridPage
#hero
text-align: left

View File

@ -3,7 +3,7 @@ title: Case Studies
---
<!Doctype html>
<html id="caseStudies">
<html id="caseStudies" class="gridPage">
{% include head-header.html %}
<section id="hero" class="light-text">

View File

@ -3,7 +3,7 @@ title: Pearson Case Study
---
<!Doctype html>
<html id="caseStudies">
<html id="caseStudies" class="gridPage">
{% include head-header.html %}
<section id="hero" class="light-text">

View File

@ -3,7 +3,7 @@ title: Wikimedia Case Study
---
<!Doctype html>
<html id="caseStudies">
<html id="caseStudies" class="gridPage">
{% include head-header.html %}
<section id="hero" class="light-text">

View File

@ -52,8 +52,8 @@ On GCE, Client Certificates, Password, Plain Tokens, and JWT Tokens are all enab
If the request cannot be authenticated, it is rejected with HTTP status code 401.
Otherwise, the user is authenticated as a specific `username`, and the user name
is available to subsequent steps to use in their decisions. Some authenticators
may also provide the group memberships of the user, while other authenticators
do not (and expect the authorizer to determine these).
also provide the group memberships of the user, while other authenticators
do not.
While Kubernetes uses "usernames" for access control decisions and in request logging,
it does not have a `user` object nor does it store usernames or other information about

View File

@ -349,8 +349,8 @@ logs or through `journalctl`. More information is provided in
Additional resources:
- http://wiki.apparmor.net/index.php/QuickProfileLanguage
- http://wiki.apparmor.net/index.php/ProfileLanguage
- [Quick guide to the AppArmor profile language](http://wiki.apparmor.net/index.php/QuickProfileLanguage)
- [AppArmor core policy reference](http://wiki.apparmor.net/index.php/ProfileLanguage)
## API Reference

View File

@ -25,10 +25,11 @@ manually through API calls. Service accounts are tied to a set of credentials
stored as `Secrets`, which are mounted into pods allowing in cluster processes
to talk to the Kubernetes API.
All API requests are tied to either a normal user or a service account. This
means every process inside or outside the cluster, from a human user typing
`kubectl` on a workstation, to `kubelets` on nodes, to members of the control
plane, must authenticate when making requests to the API server.
API requests are tied to either a normal user or a service account, or are treated
as anonymous requests. This means every process inside or outside the cluster, from
a human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members
of the control plane, must authenticate when making requests to the API server,
or be treated as an anonymous user.
## Authentication strategies
@ -54,20 +55,31 @@ When multiple are enabled, the first authenticator module
to successfully authenticate the request short-circuits evaluation.
The API server does not guarantee the order authenticators run in.
The `system:authenticated` group is included in the list of groups for all authenticated users.
### X509 Client Certs
Client certificate authentication is enabled by passing the `--client-ca-file=SOMEFILE`
option to API server. The referenced file must contain one or more certificates authorities
to use to validate client certificates presented to the API server. If a client certificate
is presented and verified, the common name of the subject is used as the user name for the
request.
request. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships
using the certificate's organization fields. To include multiple group memberships for a user,
include multiple organization fields in the certificate.
For example, using the `openssl` command line tool to generate a certificate signing request:
``` bash
openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app2"
```
This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2".
See [APPENDIX](#appendix) for how to generate a client cert.
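As a minimal sketch of the remaining steps (the CA file locations and output file names below are assumptions for illustration, not something this guide prescribes), the CSR above could be signed by the cluster CA and handed to `kubectl` like so:

```shell
# Sign the CSR with the cluster CA; all paths here are illustrative.
openssl x509 -req -in jbeda-csr.pem \
  -CA /path/to/ca.crt -CAkey /path/to/ca.key -CAcreateserial \
  -out jbeda.crt -days 365

# Configure kubectl to present the resulting client certificate and key.
kubectl config set-credentials jbeda \
  --client-certificate=jbeda.crt --client-key=jbeda.pem
```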
### Static Token File
Token file is enabled by passing the `--token-auth-file=SOMEFILE` option to the
API server. Currently, tokens last indefinitely, and the token list cannot be
The API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option on the command line. Currently, tokens last indefinitely, and the token list cannot be
changed without restarting the API server.
The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...`
@ -78,8 +90,19 @@ optional group names. Note, if you have more than one group the column must be d
token,user,uid,"group1,group2,group3"
```
When using token authentication from an http client the API server expects an `Authorization`
header with a value of `Bearer SOMETOKEN`.
#### Putting a Bearer Token in a Request
When using bearer token authentication from an http client, the API
server expects an `Authorization` header with a value of `Bearer
THETOKEN`. The bearer token must be a character sequence that can be
put in an HTTP header value using no more than the encoding and
quoting facilities of HTTP. For example: if the bearer token is
`31ada4fd-adec-460c-809a-9e56ceb75269` then it would appear in an HTTP
header as shown below.
```http
Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
```
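For instance, a hedged sketch of sending that header from the command line (the API server address is a placeholder, and `-k` only skips TLS verification for the sake of the example):

```shell
# Present the bearer token in the Authorization header.
curl -k -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
  https://<apiserver-address>/api
```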
### Static Password File
@ -171,7 +194,8 @@ type: kubernetes.io/service-account-token
Note: values are base64 encoded because secrets are always base64 encoded.
The signed JWT can be used as a bearer token to authenticate as the given service
account. Normally these secrets are mounted into pods for in-cluster access to
account. See [above](#putting-a-bearer-token-in-a-request) for how the token is included
in a request. Normally these secrets are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.
Service accounts authenticate with the username `system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT)`,
@ -192,11 +216,8 @@ email, signed by the server.
To identify the user, the authenticator uses the `id_token` (not the `access_token`)
from the OAuth2 [token response](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse)
as a bearer token.
```
Authentication: Bearer (id_token)
```
as a bearer token. See [above](#putting-a-bearer-token-in-a-request) for how the token
is included in a request.
To enable the plugin, pass the following required flags:
@ -272,10 +293,11 @@ contexts:
name: webhook
```
When a client attempts to authenticate with the API server using a bearer token,
using the `Authorization: Bearer (TOKEN)` HTTP header the authentication webhook
When a client attempts to authenticate with the API server using a bearer token
as discussed [above](#putting-a-bearer-token-in-a-request),
the authentication webhook
queries the remote service with a review object containing the token. Kubernetes
will not challenge request that lack such a header.
will not challenge a request that lacks such a header.
Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/api/)
as other Kubernetes API objects. Implementers should be aware of looser
@ -354,6 +376,22 @@ Please refer to the [discussion](https://github.com/kubernetes/kubernetes/pull/1
[blueprint](https://github.com/kubernetes/kubernetes/issues/11626) and [proposed
changes](https://github.com/kubernetes/kubernetes/pull/25536) for more details.
## Anonymous requests
Anonymous access is enabled by default, and can be disabled by passing `--anonymous-auth=false`
option to the API server during startup.
When enabled, requests that are not rejected by other configured authentication methods are
treated as anonymous requests, and given a username of `system:anonymous` and a group of
`system:unauthenticated`.
For example, on a server with token authentication configured, and anonymous access enabled,
a request providing an invalid bearer token would receive a `401 Unauthorized` error.
A request providing no bearer token would be treated as an anonymous request.
If you rely on authentication alone to authorize access, either change to use an
authorization mode other than `AlwaysAllow`, or set `--anonymous-auth=false`.
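As a rough sketch of both sides of this behaviour (the server address is a placeholder; the flag is the one described above):

```shell
# Disable anonymous requests when starting the API server.
kube-apiserver --anonymous-auth=false ...

# With anonymous auth left enabled, a credential-less request like this is
# treated as user system:anonymous rather than being rejected outright.
curl -k https://<apiserver-address>/api
```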
## Plugin Development
We plan for the Kubernetes API server to issue tokens after the user has been

View File

@ -53,7 +53,7 @@ A request has the following attributes that can be considered for authorization:
- what resource is being accessed (for resource requests only)
- what subresource is being accessed (for resource requests only)
- the namespace of the object being accessed (for namespaced resource requests only)
- the API group being accessed (for resource requests only)
- the API group being accessed (for resource requests only); an empty string designates the [core API group](../api.md#api-groups)
The request verb for a resource API endpoint can be determined by the HTTP verb used and whether or not the request acts on an individual resource or a collection of resources:
@ -231,7 +231,7 @@ metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""] # The API group "" indicates the default API Group.
- apiGroups: [""] # The API group "" indicates the core API Group.
resources: ["pods"]
verbs: ["get", "watch", "list"]
nonResourceURLs: []
@ -323,6 +323,32 @@ roleRef:
apiVersion: rbac.authorization.k8s.io/v1alpha1
```
### Referring to Resources
Most resources are represented by the string form of their name, such as "pods", just as it
appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a
"subresource", such as the logs for a pod. The URL for the pod logs endpoint is:
```
GET /api/v1/namespaces/{namespace}/pods/{name}/log
```
In this case, "pods" is the namespaced resource, and "log" is a subresource of pods. To represent
this in an RBAC role, use a slash to delimit the resource and subresource names. To allow a subject
to read both pods and pod logs, you would write:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
```
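For example, once that role is bound to a subject, its two rules map onto everyday `kubectl` operations like these (the pod name is hypothetical):

```shell
# "pods" allows reading pod objects...
kubectl get pods --namespace=default
# ...and "pods/log" allows reading their logs.
kubectl logs <pod-name> --namespace=default
```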
### Referring to Subjects
RoleBindings and ClusterRoleBindings bind "subjects" to "roles".
@ -351,6 +377,7 @@ to groups with the `system:` prefix.
Only the `subjects` section of a RoleBinding object is shown in the following examples.
For a user called `alice@example.com`, specify
```yaml
subjects:
- kind: User
@ -358,6 +385,7 @@ subjects:
```
For a group called `frontend-admins`, specify:
```yaml
subjects:
- kind: Group
@ -365,6 +393,7 @@ subjects:
```
For the default service account in the kube-system namespace:
```yaml
subjects:
- kind: ServiceAccount
@ -373,6 +402,7 @@ subjects:
```
For all service accounts in the `qa` namespace:
```yaml
subjects:
- kind: Group
@ -380,6 +410,7 @@ subjects:
```
For all service accounts everywhere:
```yaml
subjects:
- kind: Group
@ -601,4 +632,4 @@ subjectaccessreview "" created
```
This is useful for debugging access problems, in that you can use this resource
to determine what access an authorizer is granting.

View File

@ -9,10 +9,14 @@ assignees:
## Introduction
As of Kubernetes 1.3, DNS is a built-in service launched automatically using the addon manager [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md).
A DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
Every Service defined in the cluster (including the DNS server itself) will be
Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures
the kubelets to tell individual containers to use the DNS Service's IP to
resolve DNS names.
## What things get DNS names?
Every Service defined in the cluster (including the DNS server itself) is
assigned a DNS name. By default, a client Pod's DNS search list will
include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:
@ -22,17 +26,161 @@ in namespace `bar` can look up this service by simply doing a DNS query for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
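As a quick sketch of those lookups (the client pod names are hypothetical):

```shell
# From a pod in namespace "bar", the short name is enough...
kubectl exec --namespace=bar <client-pod> -- nslookup foo
# ...while a pod in namespace "quux" must include the namespace.
kubectl exec --namespace=quux <client-pod> -- nslookup foo.bar
```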
The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library)
supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records).
## Supported DNS schema
The following sections detail the supported DNS record types and layout. Any other
layouts, names, or queries that happen to work are considered implementation details
and are subject to change without warning.
## How it Works
### Services
The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz.
The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains
in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve
performance. The healthz container provides a single health check endpoint while performing dual healthchecks
(for dnsmasq and kubedns).
#### A records
"Normal" (not headless) Services are assigned a DNS A record for a name of the
form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP
of the Service.
"Headless" (without a cluster IP) Services are also assigned a DNS A record for
a name of the form `my-svc.my-namespace.svc.cluster.local`. Unlike normal
Services, this resolves to the set of IPs of the pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin
selection from the set.
### SRV records
SRV Records are created for named ports that are part of normal or Headless
Services.
For each named port, the SRV record would have the form
`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`.
For a regular service, this resolves to the port number and the CNAME:
`my-svc.my-namespace.svc.cluster.local`.
For a headless service, this resolves to multiple answers, one for each pod
that is backing the service, and contains the port number and a CNAME of the pod
of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`.
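For example, assuming a named port called `http` served over TCP (an assumption for illustration), the SRV record could be queried from a pod that has `dig` installed:

```shell
# Look up the SRV record for the named port.
dig _http._tcp.my-svc.my-namespace.svc.cluster.local SRV +short
```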
### Backwards compatibility
Previous versions of kube-dns made names of the form
`my-svc.my-namespace.cluster.local` (the 'svc' level was added later). This
is no longer supported.
### Pods
#### A Records
When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local`.
For example, a pod with IP `1.2.3.4` in the namespace `default` with a DNS name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`.
#### A Records and hostname based on Pod's hostname and subdomain fields
Currently when a pod is created, its hostname is the Pod's `metadata.name` value.
With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be.
If specified, the annotation takes precedence over the Pod's name as the hostname of the pod.
For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name".
With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the
`pod.beta.kubernetes.io/hostname` annotation value.
v1.2 introduces a beta feature where the user can specify a Pod annotation, `pod.beta.kubernetes.io/subdomain`, to specify the Pod's subdomain.
The final domain will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>".
For example, a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", will have the FQDN "foo.bar.my-namespace.svc.cluster.local"
With v1.3, the PodSpec has a `subdomain` field, which can be used to specify the Pod's subdomain. This field value takes precedence over the
`pod.beta.kubernetes.io/subdomain` annotation value.
Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  hostname: busybox-1
  subdomain: default
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
```
If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS Server also returns an A record for the Pod's fully qualified hostname.
Given a Pod with the hostname set to "foo" and the subdomain set to "bar", and a headless Service named "bar" in the same namespace, the pod will see its own FQDN as "foo.bar.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP.
With v1.2, the Endpoints object also has a new annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'.
If the Endpoints are for a headless service, an A record is created with the format `<hostname>.<service name>.<pod namespace>.svc.<cluster domain>`.
For the example JSON, if the endpoints are for a headless service named "bar", and one of the endpoints has IP "10.245.1.6", an A record is created with the name "my-webserver.bar.my-namespace.svc.cluster.local", and the A record lookup would return "10.245.1.6".
This endpoints annotation generally does not need to be specified by end users, but can be used by the internal service controller to deliver the aforementioned feature.
With v1.3, The Endpoints object can specify the `hostname` for any endpoint, along with its IP. The hostname field takes precedence over the hostname value
that might have been specified via the `endpoints.beta.kubernetes.io/hostnames-map` annotation.
With v1.3, the following annotations are deprecated: `pod.beta.kubernetes.io/hostname`, `pod.beta.kubernetes.io/subdomain`, `endpoints.beta.kubernetes.io/hostnames-map`
## How do I test if it is working?
### Create a simple Pod to use as a test environment.
Create a file named busybox.yaml with the
following contents:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
```
Then create a pod using this file:
```
kubectl create -f busybox.yaml
```
### Wait for this pod to go into the running state.
You can get its status with:
```
kubectl get pods busybox
```
You should see:
```
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 <some-time>
```
### Validate DNS works
Once that pod is running, you can exec nslookup in that environment:
```
kubectl exec busybox -- nslookup kubernetes.default
```
You should see something like:
```
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: kubernetes.default
Address 1: 10.0.0.1
```
If you see that, DNS is working correctly.
## Kubernetes Federation (Multiple Zone support)
@ -44,6 +192,25 @@ the lookup of federated services (which span multiple Kubernetes clusters).
See the [Cluster Federation Administrators' Guide](/docs/admin/federation) for more
details on Cluster Federation and multi-site support.
## How it Works
The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz.
The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains
in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve
performance. The healthz container provides a single health check endpoint while performing dual healthchecks
(for dnsmasq and kubedns).
The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned the
kubelet passes DNS configured using the `--cluster-dns=10.0.0.10` flag to each
container.
DNS names also need domains. The local domain is configurable in the kubelet using
the `--cluster-domain=<default local domain>` flag.
The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library)
supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records).
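Putting those two flags together, a kubelet invocation might look like this sketch (using the default values mentioned above):

```shell
# Hand every container the cluster DNS server IP and the local domain.
kubelet --cluster-dns=10.0.0.10 --cluster-domain=cluster.local ...
```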
## References
- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md)

docs/admin/kubeadm.md Normal file
View File

@ -0,0 +1,150 @@
---
assignees:
- mikedanese
- luxas
- errordeveloper
---
This document provides information on how to use kubeadm's advanced options.
Running `kubeadm init` bootstraps a Kubernetes cluster. This consists of the
following steps (a minimal command sketch follows the list):
1. kubeadm generates a token that additional nodes can use to register themselves
with the master in future.
1. kubeadm generates a self-signed CA using openssl to provision identities
for each node in the cluster, and for the API server to secure communication
with clients.
1. kubeadm writes a kubeconfig file for the kubelet to use to connect to the API server,
as well as an additional kubeconfig file for administration.
1. kubeadm generates Kubernetes resource manifests for the API server, controller manager
and scheduler, and places them in `/etc/kubernetes/manifests`. The kubelet watches
this directory for static resources to create on startup. These are the core
components of Kubernetes, and once they are up and running we can use `kubectl`
to set up and manage any additional components.
1. kubeadm installs any add-on components, such as DNS or discovery, via the API server.
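In practice the whole flow reduces to a couple of commands, sketched here with placeholder values:

```shell
# On the master: bootstrap the control plane and print a join token.
kubeadm init

# On each additional node: register with the master using that token.
kubeadm join --token=<token> <master-ip>
```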
## Usage
Fields that support multiple values do so either with comma separation, or by specifying
the flag multiple times.
### `kubeadm init`
It is usually sufficient to run `kubeadm init` without any flags,
but in some cases you might like to override the default behaviour.
Here we specify all the flags that can be used to customise the Kubernetes
installation; a combined example invocation follows the list below.
- `--api-advertise-addresses` (multiple values are allowed)
- `--api-external-dns-names` (multiple values are allowed)
By default, `kubeadm init` automatically detects IP addresses and uses
these to generate certificates for the API server. This uses the IP address
of the default network interface. If you would like to access the API server
through a different IP address, or through a hostname, you can override these
defaults with `--api-advertise-addresses` and `--api-external-dns-names`.
For example, to generate certificates that verify the API server at addresses
`10.100.245.1` and `100.123.121.1`, you could use
`--api-advertise-addresses=10.100.245.1,100.123.121.1`. To allow it to be accessed
with a hostname, `--api-external-dns-names=kubernetes.example.com,kube.example.com`
Specifying `--api-advertise-addresses` disables auto detection of IP addresses.
- `--cloud-provider`
Currently, `kubeadm init` does not provide autodetection of cloud provider.
This means that load balancing and persistent volumes are not supported out
of the box. You can specify a cloud provider using `--cloud-provider`.
Valid values are the ones supported by `controller-manager`, namely `"aws"`,
`"azure"`, `"cloudstack"`, `"gce"`, `"mesos"`, `"openstack"`, `"ovirt"`,
`"rackspace"`, `"vsphere"`. In order to provide additional configuration for
the cloud provider, you should create a `/etc/kubernetes/cloud-config.json`
file manually, before running `kubeadm init`. `kubeadm` automatically
picks those settings up and ensures other nodes are configured correctly.
You must also set the `--cloud-provider` and `--cloud-config` parameters
yourself by editing the `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
file appropriately.
- `--external-etcd-cafile` etcd certificate authority file
- `--external-etcd-endpoints` (multiple values are allowed)
- `--external-etcd-certfile` etcd client certificate file
- `--external-etcd-keyfile` etcd client key file
By default, `kubeadm` deploys a single node etcd cluster on the master
to store Kubernetes state. This means that any failure on the master node
requires you to rebuild your cluster from scratch. Currently `kubeadm init`
does not support automatic deployment of a highly available etcd cluster.
If you would like to use your own etcd cluster, you can override this
behaviour with `--external-etcd-endpoints`. `kubeadm` supports etcd client
authentication using the `--external-etcd-cafile`, `--external-etcd-certfile`
and `--external-etcd-keyfile` flags.
- `--pod-network-cidr`
By default, `kubeadm init` does not set node CIDRs for pods and allows you to
bring your own networking configuration through a CNI compatible network
controller addon such as [Weave Net](https://github.com/weaveworks/weave-kube),
[Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm)
or [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm).
If you are using a compatible cloud provider or flannel, you can specify a
subnet to use for each pod on the cluster with the `--pod-network-cidr` flag.
This should be a minimum of a /16 so that kubeadm is able to assign /24 subnets
to each node in the cluster.
- `--service-cidr` (default '10.12.0.0/12')
You can use the `--service-cidr` flag to override the subnet Kubernetes uses to
assign service IP addresses. If you do, you will also need to update the
`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` file to reflect this change
else DNS will not function correctly.
- `--service-dns-domain` (default 'cluster.local')
By default, `kubeadm init` deploys a cluster that assigns services with DNS names
`<service_name>.<namespace>.svc.cluster.local`. You can use the `--service-dns-domain`
to change the DNS name suffix. Again, you will need to update the
`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` file accordingly else DNS will
not function correctly.
- `--token`
By default, `kubeadm init` automatically generates the token used to initialise
each new node. If you would like to manually specify this token, you can use the
`--token` flag. The token must be of the format '<6 character string>.<16 character string>'.
- `--use-kubernetes-version` (default 'v1.4.1') the kubernetes version to initialise
`kubeadm` was originally built for Kubernetes version **v1.4.0**; older versions are not
supported. With this flag you can try any future version, e.g. **v1.5.0-beta.1**
whenever it comes out (check [releases page](https://github.com/kubernetes/kubernetes/releases)
for a full list of available versions).
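Pulling several of these flags together, one possible invocation looks like the following; every value is illustrative only:

```shell
# Illustrative values only: advertise a specific address, choose a /16 pod
# network, and pin the Kubernetes version.
kubeadm init \
  --api-advertise-addresses=10.100.245.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --use-kubernetes-version=v1.4.1
```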
### `kubeadm join`
`kubeadm join` has one mandatory flag, the token used to secure cluster bootstrap,
and one mandatory argument, the master IP address.
Here's an example of how to use it:
`kubeadm join --token=the_secret_token 192.168.1.1`
- `--token=<token>`
By default, when `kubeadm init` runs, a token is generated and revealed in the output.
That's the token you should use here.
## Troubleshooting
* Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, e.g.:
```
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
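One way to apply that setting without rebooting (as root) is:

```shell
# Reload all sysctl settings, including /etc/sysctl.d/k8s.conf.
sysctl --system
```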

View File

@ -1,214 +1,214 @@
---
assignees:
- derekwaynecarr
- janetkuo
---
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
for a variety of reasons.
For example:
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
of memory as part of admission control.
2. A cluster is shared by two communities in an organization that runs production and development workloads
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
each namespace.
3. Users may create a pod which consumes resources just below the capacity of a machine. The leftover space
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
the cluster operator may want to require that a pod consume at least 20% of the memory and CPU of the
average node size in order to provide for more uniform scheduling and to limit waste.
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/user-guide/compute-resources/)
## Step 0: Prerequisites
This example requires a running Kubernetes cluster. See the [Getting Started guides](/docs/getting-started-guides/) for how to get started.
Change to the `<kubernetes>` directory if you're not already there.
## Step 1: Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called limit-example:
```shell
$ kubectl create namespace limit-example
namespace "limit-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
NAME STATUS AGE
default Active 51s
limit-example Active 45s
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that we have imposed in our namespace.
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
Namespace: limit-example
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Pod cpu 200m 2 - - -
Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
```
In this scenario, we have said the following:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
must be specified for that resource across all containers. Failure to specify a limit will result in
a validation error when attempting to create the pod. Note that a default value of limit is set by
*default* in file `limits.yaml` (300m CPU and 200Mi memory).
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
request must be specified for that resource across all containers. Failure to specify a request will
result in a validation error when attempting to create the pod. Note that a default value of request is
set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory).
3. For any pod, the sum of all containers memory requests must be >= 6Mi and the sum of all containers
memory limits must be <= 1Gi; the sum of all containers CPU requests must be >= 200m and the sum of all
containers CPU limits must be <= 2.
## Step 3: Enforcing limits at point of creation
The limits enumerated in a namespace are only enforced when a pod is created or updated in
the cluster. If you change the limits to a different value range, it does not affect pods that
were previously created in a namespace.
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
of creation explaining why.
Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
deployment "nginx" created
```
Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
The Deployment manages 1 replica of single container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod:
```shell
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-2040093540-s8vzu 1/1 Running 0 11s
```
Let's print this Pod with yaml output format (using `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different.
```shell
$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
  resourceVersion: "57"
  selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu
  uid: 67b20741-f53b-11e5-b066-64510658e388
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        cpu: 300m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 CPU cores.
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
```
Now look at the Pod's resources field:
```shell
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
  uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
spec:
  containers:
  - image: gcr.io/google_containers/serve_hostname
    imagePullPolicy: Always
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the folllowing flag:
```shell
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=false ...
```
## Step 4: Cleanup
To remove the resources used by this example, you can just delete the limit-example namespace.
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME STATUS AGE
default Active 12m
```
## Summary
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.
---
assignees:
- derekwaynecarr
- janetkuo
---
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system will be able to consume as much CPU and memory on the node that executes the pod.
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
for a variety of reasons.
For example:
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
of memory as part of admission control.
2. A cluster is shared by two communities in an organization that runs production and development workloads
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
each namespace.
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their
average node size in order to provide for more uniform scheduling and to limit waste.
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/user-guide/compute-resources/)
## Step 0: Prerequisites
This example requires a running Kubernetes cluster. See the [Getting Started guides](/docs/getting-started-guides/) for how to get started.
Change to the `<kubernetes>` directory if you're not already there.
## Step 1: Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called limit-example:
```shell
$ kubectl create namespace limit-example
namespace "limit-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
NAME STATUS AGE
default Active 51s
limit-example Active 45s
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that we have imposed in our namespace.
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
Namespace: limit-example
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Pod cpu 200m 2 - - -
Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
```
In this scenario, we have said the following:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
must be specified for that resource across all containers. Failure to specify a limit results in
a validation error when attempting to create the pod. Note that a default limit value is set by the
*default* field in `limits.yaml` (300m CPU and 200Mi memory); a sketch of that file appears after this list.
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
request must be specified for that resource across all containers. Failure to specify a request
results in a validation error when attempting to create the pod. Note that a default request value is
set by the *defaultRequest* field in `limits.yaml` (200m CPU and 100Mi memory).
3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
containers' CPU limits must be <= 2.
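The `limits.yaml` file itself is not reproduced in this walkthrough, but a LimitRange manifest consistent with the values reported by `kubectl describe` above would look roughly like the following sketch (reconstructed from that output rather than copied from the repository):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Pod
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  - type: Container
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "2"
      memory: 1Gi
    # Default *limit* applied when a container specifies none
    default:
      cpu: 300m
      memory: 200Mi
    # Default *request* applied when a container specifies none
    defaultRequest:
      cpu: 200m
      memory: 100Mi
```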
## Step 3: Enforcing limits at point of creation
The limits enumerated in a namespace are only enforced when a pod is created or updated in
the cluster. If you change the limits to a different value range, it does not affect pods that
were previously created in a namespace.
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
of creation explaining why.
Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
deployment "nginx" created
```
Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
The Deployment manages 1 replica of a single-container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod:
```shell
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-2040093540-s8vzu 1/1 Running 0 11s
```
Let's print this Pod in YAML output format (using the `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different.
```shell
$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
resourceVersion: "57"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu
uid: 67b20741-f53b-11e5-b066-64510658e388
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
resources:
limits:
cpu: 300m
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits with a container that requests 3 CPU cores.
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
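The manifest is not shown in this walkthrough, but judging from the error above, `invalid-pod.yaml` presumably declares a single container whose CPU limit exceeds the namespace maximum. A minimal sketch that would trigger the same rejection (image and names are illustrative, not taken from the repository):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod
spec:
  containers:
  - name: invalid-container
    image: nginx
    resources:
      limits:
        cpu: "3"       # exceeds the 2-CPU max for both Pod and Container
        memory: 100Mi  # within the allowed 3Mi-1Gi range
```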
Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
```
Now look at the Pod's resources field:
```shell
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
spec:
containers:
- image: gcr.io/google_containers/serve_hostname
imagePullPolicy: Always
name: kubernetes-serve-hostname
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: "1"
memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
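Again as a sketch (the file is not reproduced here), `valid-pod.yaml` presumably sets explicit limits matching the output above; when only limits are given, the requests default to the same values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: valid-pod
spec:
  containers:
  - name: kubernetes-serve-hostname
    image: gcr.io/google_containers/serve_hostname
    resources:
      limits:
        cpu: "1"        # within the 100m-2 container range
        memory: 512Mi   # within the 3Mi-1Gi container range
```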
Note: In the default Kubernetes setup, CPU *limits* are enforced on the physical node
that runs the container, unless the administrator deploys the kubelet with the following flag:
```shell
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=false ...
```
## Step 4: Cleanup
To remove the resources used by this example, you can just delete the limit-example namespace.
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME STATUS AGE
default Active 12m
```
## Summary
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.
@@ -29,7 +29,7 @@ table below. The value of each signal is described in the description column based on the
summary API.
| Eviction Signal | Description |
|------------------|---------------------------------------------------------------------------------|
|----------------------------|-----------------------------------------------------------------------|
| `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` |
| `nodefs.available` | `nodefs.available` := `node.stats.fs.available` |
| `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` |
@@ -128,7 +128,7 @@ reflects the node is under pressure.
The following node conditions are defined that correspond to the specified eviction signal.
| Node Condition | Eviction Signal | Description |
|----------------|------------------|------------------------------------------------------------------|
|-------------------------|-------------------------------|--------------------------------------------|
| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold |
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesytem or image filesystem has satisfied an eviction threshold |
@@ -270,7 +270,7 @@ the node depends on the [oom_killer](https://lwn.net/Articles/391222/) to respond
The `kubelet` sets a `oom_score_adj` value for each container based on the quality of service for the pod.
| Quality of Service | oom_score_adj |
| ----------------- | ------------- |
|----------------------------|-----------------------------------------------------------------------|
| `Guaranteed` | -998 |
| `BestEffort` | 1000 |
| `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) |
@@ -58,7 +58,7 @@ that can be requested in a given namespace.
The following resource types are supported:
| Resource Name | Description |
| ------------ | ----------- |
| --------------------- | ----------------------------------------------------------- |
| `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
@@ -73,7 +73,7 @@ The number of objects of a given type can be restricted. The following types
are supported:
| Resource Name | Description |
| ------------ | ----------- |
| ------------------------------- | ------------------------------------------------- |
| `configmaps` | The total number of config maps that can exist in the namespace. |
| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `pods` | The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `status.phase in (Failed, Succeeded)` is true. |
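As an aside (not part of the diff above), a ResourceQuota manifest that constrains a handful of these resource types might look like this sketch, with illustrative values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-example
spec:
  hard:
    pods: "10"                    # object count
    configmaps: "20"              # object count
    persistentvolumeclaims: "4"   # object count
    limits.cpu: "4"               # sum of CPU limits across all pods
    limits.memory: 8Gi            # sum of memory limits across all pods
```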
@@ -1,106 +1,106 @@
---
assignees:
- davidopp
- lavalamp
---
The Kubernetes cluster can be configured using Salt.
The Salt scripts are shared across multiple hosting providers, and depending on where you host your Kubernetes cluster, you may be using different operating systems and different networking configurations. As a result, it's important to understand some background information before making Salt changes, in order to minimize introducing failures for other hosting providers.
## Salt cluster setup
The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```
Each salt-minion contacts the salt-master and, depending upon the machine information presented, the salt-master provisions the machine as either a kubernetes-master or a kubernetes-node with all the capabilities needed to run Kubernetes.
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
## Standalone Salt Configuration on GCE
On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
## Salt security
*(Not applicable on default GCE setup.)*
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
```shell
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
open_mode: True
auto_accept: True
```
## Salt minion configuration
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
An example file is presented below using the Vagrant based environment.
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
grains:
etcd_servers: $MASTER_IP
cloud: vagrant
roles:
- kubernetes-master
```
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
Key | Value
-----------------------------------|----------------------------------------------------------------
`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
`hostnamef` | (Optional) The full host name of the machine, i.e. the output of `uname -n`
`node_ip` | (Optional) The IP address to use to address this node
`hostname_override` | (Optional) Mapped to the kubelet hostname-override
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine.
These keys may be leveraged by the Salt sls files to branch behavior.
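The grains.conf shown earlier is for the master; a node in the same Vagrant environment would typically carry the `kubernetes-pool` role instead. A sketch (values are illustrative, not taken from a real deployment):

```yaml
grains:
  etcd_servers: $MASTER_IP
  api_servers: $MASTER_IP
  cloud: vagrant
  roles:
    - kubernetes-pool
```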
In addition, a cluster may be running a Debian-based operating system or a Red Hat-based operating system (CentOS, Fedora, RHEL, etc.). As a result, it's sometimes important to distinguish behavior based on operating system, using `if` branches like the following.
```liquid
{% raw %}
{% if grains['os_family'] == 'RedHat' %}
// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
{% else %}
// something specific to Debian environment (apt-get, initd)
{% endif %}
{% endraw %}
```
## Best Practices
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
## Future enhancements (Networking)
Per-pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox them, as not all providers use the same mechanisms (iptables, openvswitch, etc.).
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
## Further reading
The [cluster/saltbase](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/) tree has more details on the current SaltStack configuration.
@@ -12,7 +12,7 @@
<li><a href="#concept_template">Concept</a></li>
</ul>
<p>The page templates are in the <a href="https://github.com/kubernetes/kubernetes.github.io/tree/master/_includes/templates">_includes/templates</a> directory of the <a href="https://github.com/kubernetes/kubernetes.github.io">kubernetes.github.io</a> repository.
<p>The page templates are in the <a href="https://github.com/kubernetes/kubernetes.github.io/tree/master/_includes/templates" target="_blank">_includes/templates</a> directory of the <a href="https://github.com/kubernetes/kubernetes.github.io">kubernetes.github.io</a> repository.
<h3 id="task_template">Task template</h3>
@@ -5,12 +5,8 @@ assignees:
---
* TOC
{:toc}
## Overview
The recommended approach for deploying a Kubernetes 1.4 cluster on Azure is the
[`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere) project. You will want to take a look at the
[Azure Getting Started Guide](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/azure/README.md).
[`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere) project.
You will want to take a look at the
[Azure Getting Started Guide](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/azure/README.md).
@@ -1,182 +1,182 @@
---
assignees:
- lavalamp
- thockin
---
* TOC
{:toc}
## Prerequisites
You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages, services, ports, etc.
This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](/docs/admin/networking) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the node and run kubelet, proxy, cadvisor, and docker.
**System Information:**
Hosts:
Please replace the host IPs with those from your environment.
```conf
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
**Prepare the hosts:**
* Create a /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion} with the following information.
```conf
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```
* Install Kubernetes and etcd on all hosts - centos-{master,minion}. This will also pull in docker and cadvisor.
```shell
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
* Edit /etc/kubernetes/config, which will be the same on all hosts, to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/etcd/etcd.conf to appear as such:
```shell
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
```
* Edit /etc/kubernetes/apiserver to appear as such:
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet and start the kubelet and proxy***
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=centos-minion"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
```
* Start the appropriate services on node (centos-minion).
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
```shell
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
```
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
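The walkthrough linked above covers this in detail, but as a minimal smoke test (not part of the original guide), you could describe a throwaway pod like the sketch below and create it on centos-master with `kubectl create -f`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```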
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
@@ -1,209 +1,209 @@
---
---
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
## Prerequisites and Assumptions
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
- 1 Kubernetes Master
- 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
## Cloud-config
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
We'll use two cloud-config files:
- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node
## Download CoreOS
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
3. Replace the following variables in the `master-config.yaml` file.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
### Configure TLS
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
2. Send the three files to your master host (using `scp` for example).
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set Permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
4. Restart the kubelet to pick up the changes:
```shell
sudo systemctl restart kubelet
```
## Configure the compute nodes
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
4. Replace the following placeholders with the contents of their respective files.
- `<CA_CERT>`: Complete contents of `ca.pem`
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> <CA_CERT>
> ```
>
> should look like this once the certificate is in place:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
> owner: core
> permissions: 0644
> content: |
> -----BEGIN CERTIFICATE-----
> MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
> ...<snip>...
> QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
> -----END CERTIFICATE-----
> ```
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
## Configure Kubeconfig
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
```shell
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
Check your work with `kubectl get nodes`.
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
@@ -16,7 +16,7 @@ and a _worker_ node which receives work from the master. You can repeat the process multiple
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes on Docker](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/k8s-docker.png)
![Kubernetes on Docker](/images/docs/k8s-docker.png)
### Bootstrap Docker
@@ -86,7 +86,7 @@ Clone the `kube-deploy` repo, and run `worker.sh` on the worker machine _with root_:
```shell
$ git clone https://github.com/kubernetes/kube-deploy
$ cd docker-multinode
$ cd kube-deploy/docker-multinode
$ export MASTER_IP=${SOME_IP}
$ ./worker.sh
```
@@ -1,241 +1,241 @@
---
assignees:
- aveshagarwal
- erictune
---
Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
* TOC
{:toc}
## Prerequisites
1. A host able to run Ansible and to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git)
2. A Fedora 21+ host to act as cluster master
3. As many Fedora 21+ hosts as you would like, that act as cluster nodes
The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes.
## Architecture of the cluster
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```shell
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
```
**Make sure your local machine has**
- ansible (must be 1.9.0+)
- git
- python-netaddr
If they are not installed, install them:
```shell
yum install -y ansible git python-netaddr
```
**Now clone down the Kubernetes repository**
```shell
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
```shell
[masters]
kube-master.example.com
[etcd]
kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
```
## Setting up ansible access to your nodes
If you are already running on a machine that has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username you use to ssh to the nodes (for example, `fedora`), and proceed to the next step.
*Otherwise*, set up ssh on the machines like so (you will need to know the root password for all machines in the cluster).
edit: ~/contrib/ansible/group_vars/all.yml
```yaml
ansible_ssh_user: root
```
**Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
Make sure your local machine (root) has an ssh key pair. If it does not, generate one:
```shell
ssh-keygen
```
Copy the ssh public key to **all** nodes in the cluster
```shell
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
```
## Setting up the cluster
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough for most deployments; if not, change them as needed.
```conf
edit: ~/contrib/ansible/group_vars/all.yml
```
**Configure access to kubernetes packages**
Modify `source_type` as below to access kubernetes packages through the package manager.
```yaml
source_type: packageManager
```
**Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
```
**Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
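For reference, these variables live alongside the others in `all.yml` and look roughly like this (the values shown here are illustrative placeholders, not authoritative defaults):

```yaml
flannel_subnet: 172.16.0.0
flannel_prefix: 12
flannel_host_prefix: 24
```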
**Managing add on services in your cluster**
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml
cluster_logging: true
```
Set `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
```yaml
cluster_monitoring: true
```
Set `dns_setup` to true (recommended) or false to enable or disable the whole DNS configuration.
```yaml
dns_setup: true
```
**Tell ansible to get to work!**
This will finally set up your whole Kubernetes cluster for you.
```shell
cd ~/contrib/ansible/
./setup.sh
```
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
**Show kubernetes nodes**
Run the following on the kube-master:
```shell
kubectl get nodes
```
**Show services running on masters and nodes**
```shell
systemctl | grep -i kube
```
**Show firewall rules on the masters and nodes**
```shell
iptables -nvL
```
**Create /tmp/apache.json on the master with the following contents and deploy pod**
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "fedoraapache",
"labels": {
"name": "fedoraapache"
}
},
"spec": {
"containers": [
{
"name": "fedoraapache",
"image": "fedora/apache",
"ports": [
{
"hostPort": 80,
"containerPort": 80
}
]
}
]
}
}
```
```shell
kubectl create -f /tmp/apache.json
```
**Check where the pod was created**
```shell
kubectl get pods
```
**Check Docker status on nodes**
```shell
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node**
```shell
curl http://localhost
```
That's it!
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -1,219 +1,219 @@
---
assignees:
- aveshagarwal
- eparis
- thockin
---
* TOC
{:toc}
## Prerequisites
1. You need 2 or more machines with Fedora installed.
## Instructions
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking/) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
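To summarize the split described above, and as a quick sanity check once the services have been configured later in this guide, you can inspect the relevant units on each host (a sketch; the unit names are the ones installed by the kubernetes and etcd packages):

```shell
# On fed-master: etcd plus the control plane services
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler

# On fed-node: the node-side services
systemctl status kubelet kube-proxy docker
```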
**System Information:**
Hosts:
```conf
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
* If you are running on AWS EC2 with RHEL 7.2, you need to enable the "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing `enable=0` to `enable=1` for extras.
```shell
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```shell
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on the default Fedora Server install.
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses instead of only 127.0.0.1; otherwise you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0, which now listens on ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
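If your Fedora release ships etcd 2.0 (which, per the note above, listens on 2379/2380), a sketch of the equivalent configuration is to use that port consistently in both files (ports are taken from the note above; adjust them to whatever your etcd version actually uses):

```shell
# /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

# /etc/kubernetes/apiserver
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
```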
* Create /var/run/kubernetes on master:
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Addition of nodes:
* Create following node.json file on Kubernetes master node:
```json
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"name": "fed-node",
"labels":{ "name": "fed-node-label"}
},
"spec": {
"externalID": "fed-node"
}
}
```
Now create a node object internally in your Kubernetes cluster by running:
```shell
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
reachable from Kubernetes master node. This guide will discuss how to provision
a Kubernetes node (fed-node) below.
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet on the node.***
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
###
# Kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```shell
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (do not run it now; it is shown for reference only):
```shell
kubectl delete -f ./node.json
```
*You should be finished!*
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -1,191 +1,191 @@
---
assignees:
- dchen1107
- erictune
- thockin
---
* TOC
{:toc}
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the Fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on the Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to set up a unique class-C container network.
## Prerequisites
You need 2 or more machines with Fedora installed.
## Master Setup
**Perform the following commands on the Kubernetes master**
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the json are:
```json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
Add the configuration to the etcd server on fed-master.
```shell
etcdctl set /coreos.com/network/config < flannel-config.json
```
* Verify the key exists in the etcd server on fed-master.
```shell
etcdctl get /coreos.com/network/config
```
## Node Setup
**Perform the following commands on all Kubernetes nodes**
Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```shell
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://fed-master:4001"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```
**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you can add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on the command line.
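For example, to make flanneld use a hypothetical second interface `eth1` instead of the default-route interface, you could set:

```shell
# /etc/sysconfig/flanneld (illustrative; eth1 is a hypothetical interface name)
FLANNEL_OPTIONS="-iface=eth1"
```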
Enable the flannel service.
```shell
systemctl enable flanneld
```
If docker is not running, starting the flannel service is enough; you can skip the next step.
```shell
systemctl start flanneld
```
If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
```shell
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
```
## **Test the cluster and flannel configuration**
Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```shell
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0
```
From any node in the cluster, check the cluster members by issuing a query to the etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a cluster with 1 master and 3 nodes, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) listed in the output.
```shell
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
```
```json
{
"node": {
"key": "/coreos.com/network/subnets",
{
"key": "/coreos.com/network/subnets/18.16.29.0-24",
"value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}"
},
{
"key": "/coreos.com/network/subnets/18.16.83.0-24",
"value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}"
},
{
"key": "/coreos.com/network/subnets/18.16.90.0-24",
"value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
}
}
}
```
From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```shell
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
Issue the following commands on any 2 nodes:
```shell
# docker run -it fedora:latest bash
bash-4.3#
```
This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you need to modify the capabilities of the ping binary to work around the "Operation not permitted" error.
```shell
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```
Now note the IP address on the first node:
```shell
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
```
And also note the IP address on the other node:
```shell
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
```
Now ping from the first node to the other node:
```shell
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```
The Kubernetes multi-node cluster is now set up with overlay networking provided by flannel.
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -48,8 +48,8 @@ few commands, and have active community support.
- [GCE](/docs/getting-started-guides/gce)
- [AWS](/docs/getting-started-guides/aws)
- [Azure](/docs/getting-started-guides/azure/)
- [Azure](/docs/getting-started-guides/coreos/azure/) (Weave-based, contributed by WeaveWorks employees)
- [Azure](/docs/getting-started-guides/azure/) (Flannel-based, contributed by Microsoft employee)
- [CenturyLink Cloud](/docs/getting-started-guides/clc)
- [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer)
@ -70,7 +70,7 @@ writing a new solution](https://github.com/kubernetes/kubernetes/tree/{{page.git
These solutions are combinations of cloud provider and OS not covered by the above solutions.
- [AWS + coreos](/docs/getting-started-guides/coreos)
- [AWS + CoreOS](/docs/getting-started-guides/coreos)
- [GCE + CoreOS](/docs/getting-started-guides/coreos)
- [AWS + Ubuntu](/docs/getting-started-guides/juju)
- [Joyent + Ubuntu](/docs/getting-started-guides/juju)
@ -122,7 +122,7 @@ Stackpoint.io | | multi-support | multi-support | [d
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/azure) | | Community ([@colemickens](https://github.com/colemickens))
Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens))
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project

View File

@ -45,7 +45,7 @@ For each host in turn:
* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu 16.04, run:
# curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
@ -178,13 +178,13 @@ As an example, install a sample microservices application, a socks shop, to put
To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo).
# git clone https://github.com/microservices-demo/microservices-demo
# kubectl apply -f microservices-demo/deploy/kubernetes/manifests
# kubectl apply -f microservices-demo/deploy/kubernetes/manifests/sock-shop-ns.yml -f microservices-demo/deploy/kubernetes/manifests
You can then find out the port that the [NodePort feature of services](/docs/user-guide/services/) allocated for the front-end service by running:
# kubectl describe svc front-end
# kubectl describe svc front-end -n sock-shop
Name: front-end
Namespace: default
Namespace: sock-shop
Labels: name=front-end
Selector: name=front-end
Type: NodePort
@ -194,7 +194,7 @@ You can then find out the port that the [NodePort feature of services](/docs/use
Endpoints: <none>
Session Affinity: None
It takes several minutes to download and start all the containers, watch the output of `kubectl get pods` to see when they're all up and running.
It takes several minutes to download and start all the containers, watch the output of `kubectl get pods -n sock-shop` to see when they're all up and running.
Then go to the IP address of your cluster's master node in your browser, and specify the given port.
So for example, `http://<master_ip>:<port>`.
@ -211,21 +211,24 @@ See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, includi
* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
* Install Kubernetes with [a cloud provider configurations](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support.
* Learn about `kubeadm`'s advanced usage on the [advanced reference doc](/docs/admin/kubeadm/)
## Cleanup
* To uninstall the socks shop, run `kubectl delete -f microservices-demo/deploy/kubernetes/manifests` on the master.
* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then uninstall the packages.
<details>
<pre><code>systemctl stop kubelet;
docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null;
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni;
ip link set cbr0 down; ip link del cbr0;
ip link set cni0 down; ip link del cni0;
systemctl start kubelet</code></pre>
</details> <!-- *syntax-highlighting-hack -->
* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then start over or uninstall the packages.
<br>
Reset local state:
<pre><code>systemctl stop kubelet;
docker rm -f -v $(docker ps -q);
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
</code></pre>
If you wish to start over, run `systemctl start kubelet` followed by `kubeadm init` or `kubeadm join`.
<!-- *syntax-highlighting-hack -->
## Feedback
@ -253,3 +256,9 @@ Please note: `kubeadm` is a work in progress and these limitations will be addre
1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` on, for example, your workstation.
Workaround: copy the kubelet's `kubeconfig` from the master: use `scp root@<master>:/etc/kubernetes/admin.conf .` and then e.g. `kubectl --kubeconfig ./admin.conf get nodes` from your workstation.
1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).
By default, it doesn't do this and the kubelet ends up using the first non-loopback network interface, which is usually NATed.
Workaround: Modify `/etc/hosts`; take a look at this [`Vagrantfile`][ubuntu-vagrantfile] for how this can be achieved.
[ubuntu-vagrantfile]: https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11

View File

@ -134,7 +134,7 @@ export KUBERNETES_PROVIDER=libvirt-coreos; wget -q -O - https://get.k8s.io | bas
Here is the curl version of this command:
```shell
export KUBERNETES_PROVIDER=libvirt-coreos; curl -sS https://get.k8s.io | bash`
export KUBERNETES_PROVIDER=libvirt-coreos; curl -sS https://get.k8s.io | bash
```
This script downloads and unpacks the tarball, then spawns a Kubernetes cluster on CoreOS instances with the following characteristics:

View File

@ -77,9 +77,9 @@ h2, h3, h4 {
<a href="/docs/whatisk8s/" class="button">Read the Overview</a>
</div>
<div class="col3rd">
<h3>Hello World on Google Container Engine</h3>
<p>In this quickstart, we'll be creating a Kubernetes instance that stands up a simple “Hello World” app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine (GKE), a hosted service from Google.</p>
<a href="/docs/hellonode/" class="button">Get Started on GKE</a>
<h3>Kubernetes Basics Interactive Tutorial</h3>
<p>The Kubernetes Basics interactive tutorials let you try out Kubernetes features using Minikube right out of your web browser in a virtual terminal. Learn about the Kubernetes system and deploy, expose, scale, and upgrade a containerized application in just a few minutes.</p>
<a href="/docs/tutorials/kubernetes-basics/" class="button">Try the Interactive Tutorials</a>
</div>
<div class="col3rd">
<h3>Installing Kubernetes on Linux with kubeadm</h3>

View File

@ -0,0 +1,88 @@
---
---
{% capture overview %}
This page shows how to assign a Kubernetes Pod to a particular node in a
Kubernetes cluster.
{% endcapture %}
{% capture prerequisites %}
* Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs).
* Create a Kubernetes cluster, including a running Kubernetes
API server. One way to create a new cluster is to use
[Minikube](/docs/getting-started-guides/minikube).
* Configure `kubectl` to communicate with your Kubernetes API server. This
configuration is done automatically if you use Minikube.
{% endcapture %}
{% capture steps %}
### Adding a label to a node
1. List the nodes in your cluster:
kubectl get nodes
The output is similar to this:
NAME STATUS AGE
worker0 Ready 1d
worker1 Ready 1d
worker2 Ready 1d
1. Choose one of your nodes, and add a label to it:
kubectl label nodes <your-node-name> disktype=ssd
where `<your-node-name>` is the name of your chosen node.
1. Verify that your chosen node has a `disktype=ssd` label:
kubectl get nodes --show-labels
The output is similar to this:
NAME STATUS AGE LABELS
worker0 Ready 1d ...,disktype=ssd,kubernetes.io/hostname=worker0
worker1 Ready 1d ...,kubernetes.io/hostname=worker1
worker2 Ready 1d ...,kubernetes.io/hostname=worker2
In the preceding output, you can see that the `worker0` node has a
`disktype=ssd` label.
### Creating a pod that gets scheduled to your chosen node
This pod configuration file describes a pod that has a node selector,
`disktype: ssd`. This means that the pod will get scheduled on a node that has
a `disktype=ssd` label.
{% include code.html language="yaml" file="pod.yaml" ghlink="/docs/tasks/administer-cluster/pod.yaml" %}
1. Use the configuration file to create a pod that will get scheduled on your
chosen node:
export REPO=https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master
kubectl create -f $REPO/docs/tasks/administer-cluster/pod.yaml
1. Verify that the pod is running on your chosen node:
kubectl get pods --output=wide
The output is similar to this:
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 13s 10.200.0.4 worker0
{% endcapture %}
{% capture whatsnext %}
Learn more about
[labels and selectors](/docs/user-guide/labels/).
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
disktype: ssd

View File

@ -11,6 +11,9 @@ The Tasks section of the Kubernetes documentation is a work in progress
* [Using an HTTP Proxy to Access the Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api)
#### Administering a Cluster
* [Assigning Pods to Nodes](/docs/tasks/administer-cluster/assign-pods-nodes/)
### What's next

40
docs/tools/index.md Normal file
View File

@ -0,0 +1,40 @@
---
assignees:
- janetkuo
---
* TOC
{:toc}
## Native Tools
### Kubectl
[`kubectl`](/docs/user-guide/kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager.
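A few representative commands, shown only as a quick illustration of standard `kubectl` usage (the manifest and pod names are hypothetical):

```shell
kubectl get nodes                 # list the nodes in the cluster
kubectl create -f ./my-app.yaml   # create resources from a manifest file
kubectl describe pod my-app       # inspect the state and events of a pod
kubectl logs my-app               # view a pod's logs
```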
### Dashboard
[Dashboard](/docs/user-guide/ui/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
to a Kubernetes cluster, troubleshoot them, and manage the cluster itself along with its resources.
## Third-Party Tools
### Helm
[Kubernetes Helm](https://github.com/kubernetes/helm) is a tool for managing packages of pre-configured
Kubernetes resources, aka Kubernetes charts.
Use Helm to:
* Find and use popular software packaged as Kubernetes charts
* Share your own applications as Kubernetes charts
* Create reproducible builds of your Kubernetes applications
* Intelligently manage your Kubernetes manifest files
* Manage releases of Helm packages
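Putting that list into practice, a minimal sketch of a typical workflow might look like this (assuming Helm's classic CLI and a hypothetical chart; command output and chart names may differ in your installation):

```shell
helm init                   # set up Helm's server-side component in the cluster
helm search mysql           # find charts matching a keyword
helm install stable/mysql   # deploy a chart as a release
helm list                   # list the releases Helm is managing
```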
### Kompose
[`kompose`](https://github.com/skippbox/kompose) is a tool to help users familiar with `docker-compose`
move to Kubernetes. It takes a Docker Compose file and translates it into Kubernetes objects. `kompose`
is a convenient tool to go from local Docker development to managing your application with Kubernetes.
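For instance, a rough workflow might be (assuming a `docker-compose.yml` in the current directory; the generated file name is hypothetical):

```shell
kompose convert -f docker-compose.yml        # write Kubernetes manifests for each Compose service
kubectl create -f frontend-deployment.yaml   # create one of the generated resources after reviewing it
```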

View File

@ -1,47 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-8">
<h3>Module overview</h3>
<ul style="color: #3771e3;">
<li><i>learn what a Kubernetes cluster is</i></li>
<li><i>learn what <a href="https://github.com/kubernetes/minikube">minikube</a> is</i></li>
<li><i>start a Kubernetes cluster using an online terminal</i></li>
</ul>
<p><img src="/docs/tutorials/getting-started/public/images/module_01.svg?v=1469803628347"></p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>What you need to know first</h3>
<p>
Before you do this tutorial, you should be familiar with Linux containers.
</p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/cluster-intro.html" role="button">Start Module 1 <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -1,52 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/cluster-interactive.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<div class="row">
<div class="col-md-8">
<h3>Module overview</h3>
<ul style="color: #3771e3;">
<li><i>Learn about application Deployments</i></li>
<li><i>Deploy your first app on Kubernetes with Kubectl</i></li>
</ul>
<p><img src="/docs/tutorials/getting-started/public/images/module_02.svg?v=1469803628347"></p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>What you need to know first</h3>
<p>
How to <a href="/docs/tutorials/getting-started/create-cluster.html">start a Kubernetes cluster</a> with minikube <br>
</p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/deploy-intro.html" role="button">Start Module 2 <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -1,53 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/deploy-interactive.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<div class="row">
<div class="col-md-8">
<h3>Module overview</h3>
<ul style="color: #3771e3;">
<li><i>Learn about Kubernetes <a href="http://kubernetes.io/docs/user-guide/pods/">Pods</a></i></li>
<li><i>Learn about Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/admin/node.md">Nodes</a></i></li>
<li><i>Troubleshoot deployed applications</i></li>
</ul>
<p><img src="/docs/tutorials/getting-started/public/images/module_03.svg?v=1469803628347"></p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>What you need to know first</h3>
<p>
What are <a href="/docs/tutorials/getting-started/deploy-app.html">Deployments</a> <br>
How to deploy applications on Kubernetes
</p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/explore-intro.html" role="button">Start Module 3 <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -1,54 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/explore-interactive.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<div class="row">
<div class="col-md-8">
<h3>Module overview</h3>
<ul style="color: #3771e3;">
<li><i><a href="http://kubernetes.io/docs/user-guide/services">Services</a></i></li>
<li><i>Learn about Kubernetes <a href="http://kubernetes.io/docs/user-guide/labels">Labels</a></i></li>
<li><i>Exposing applications outside Kubernetes</i></li>
</ul>
<p><img src="/docs/tutorials/getting-started/public/images/module_04.svg?v=1469803628347"></p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>What you need to know first</h3>
<p>
How to <a href="/docs/tutorials/getting-started/deploy-app.html">deploy apps</a> on Kubernetes<br>
How to <a href="/docs/tutorials/getting-started/explore-app.html"> troubleshoot </a> applications with Kubectl
</p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/expose-intro.html" role="button">Start Module 4 <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -1,97 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="./public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-9">
<h2>Getting Started with Kubernetes</h2>
<p><i style="color: #3771e3;">By the end of this tutorial you will understand what Kubernetes does. You will also learn how to deploy, scale, update and debug containerized applications on a Kubernetes cluster using an interactive online terminal.</i></p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-9">
<h2>Why Kubernetes?</h2>
<p>Today users expect applications to be available 24/7, while developers expect to deploy new versions of those applications several times a day. The way we build software is moving in this direction, enabling applications to be released and updated in an easy and fast way without downtime. We also need to be able to scale application in line with the user demand and we expect them to make intelligent use of the available resources. <a href="http://kubernetes.io/docs/whatisk8s/">Kubernetes</a> is a platform designed to meet those requirements, using the experience accumulated by Google in this area, combined with best-of-breed ideas from the community.</p>
</div>
</div>
<div class="content__modules">
<h2>Getting Started Modules</h2>
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/getting-started/create-cluster.html"><img src="./public/images/module_01.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="1-0.html"><h5>1. Create a Kubernetes cluster</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/getting-started/deploy-app.html"><img src="./public/images/module_02.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="2-0.html"><h5>2. Deploy an app</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/getting-started/explore-app.html"><img src="./public/images/module_03.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="3-0.html"><h5>3. Explore your app</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/getting-started/expose-app.html"><img src="./public/images/module_04.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="4-0.html"><h5>4. Expose your app publicly</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/getting-started/scale-app.html"><img src="./public/images/module_05.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="5-0.html"><h5>5. Scale up your app</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/getting-started/update-app.html"><img src="./public/images/module_06.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="6-0.html"><h5>6. Update your app</h5></a>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/create-cluster.html" role="button">Start the tutorial<span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -1,52 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/explore-interactive.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<div class="row">
<div class="col-md-8">
<h3>Module overview</h3>
<ul style="color: #3771e3;">
<li><i>Scaling an app with Kubectl</i></li>
</ul>
<p><img src="/docs/tutorials/getting-started/public/images/module_05.svg?v=1469803628347"></p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>What you need to know first</h3>
<p>
What are <a href="/docs/tutorials/getting-started/deploy-app.html">Deployments</a> <br>
What are <a href="/docs/tutorials/getting-started/expose-app.html">Services</a>
</p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/scale-intro.html" role="button">Start Module 5 <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -1,54 +0,0 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/scale-interactive.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<div class="row">
<div class="col-md-8">
<h3>Module overview</h3>
<ul style="color: #3771e3;">
<li><i>Performing Rolling Updates with Kubectl</i></li>
</ul>
<p><img src="/docs/tutorials/getting-started/public/images/module_06.svg?v=1469803628347"></p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>What you need to know first</h3>
<p>
What are <a href="/docs/tutorials/getting-started/deploy-app.html">Deployments</a> <br>
What is <a href="/docs/tutorials/getting-started/scale-app.html">Scaling</a>
</p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/update-intro.html" role="button">Start Module 6 <span class="btn__next"></span></a>
</div>
</div>
</main>
<a class="scrolltop" href="#top"></a>
</div>
</body>
</html>

View File

@ -3,11 +3,15 @@
The Tutorials section of the Kubernetes documentation is a work in progress.
#### Kubernetes Basics
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.
#### Stateless Applications
* [Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/)
* [Exposing an External IP Address Using a Service](/docs/tutorials/stateless-application/expose-external-ip-address-service/)
* [Using a Service to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address-service/)
### What's next

View File

@ -7,19 +7,13 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/cluster-intro.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<br>
<div class="katacoda">
<div class="katacoda__alert">
To interact with the Terminal, please use the desktop/tablet version
@ -28,7 +22,7 @@
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/deploy-app.html" role="button">Continue to Module 2<span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-intro.html" role="button">Continue to Module 2<span class="btn__next"></span></a>
</div>
</div>

View File

@ -1,4 +1,7 @@
---
redirect_from:
- /docs/tutorials/getting-started/create-cluster/
- /docs/tutorials/getting-started/create-cluster.html
---
<!DOCTYPE html>
@ -7,23 +10,25 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/create-cluster.html" role="button"><span class="btn__prev"></span> Back</a>
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Learn what a Kubernetes cluster is.</li>
<li>Learn what Minikube is.</li>
<li>Start a Kubernetes cluster using an online terminal.</li>
</ul>
</div>
</div>
<br>
<br>
<div class="row">
<div class="col-md-8">
<h3>Kubernetes Clusters</h3>
<p>
<b>Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.</b> The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. <b>Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.</b> Kubernetes is an <a href="https://github.com/kubernetes/kubernetes">open-source</a> platform and is production-ready.
</p>
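            <p>For readers who want to reproduce this step outside the interactive online terminal, the sketch below shows one way to bring up and inspect a local cluster. It assumes Minikube and kubectl are already installed on your machine, which the hosted terminal otherwise takes care of for you:</p>
            <pre><code># Start a single-node Kubernetes cluster inside a local VM (assumes Minikube is installed).
minikube start

# Confirm that kubectl can reach the cluster's API server.
kubectl cluster-info

# List the nodes in the cluster; a Minikube cluster reports a single node.
kubectl get nodes
</code></pre>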
@ -60,7 +65,7 @@
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/getting-started/public/images/module_01_cluster.svg"></p>
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg"></p>
</div>
</div>
<br>
@ -92,7 +97,7 @@
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/cluster-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/cluster-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
</div>
</div>

View File

@ -7,18 +7,13 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/deploy-intro.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<br>
<div class="katacoda">
<div class="katacoda__alert">
@ -31,7 +26,7 @@
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/explore-app.html" role="button">Continue to Module 3<span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore-intro.html" role="button">Continue to Module 3<span class="btn__next"></span></a>
</div>
</div>

View File

@ -7,23 +7,24 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/deploy-app.html" role="button"><span class="btn__prev"></span> Back</a>
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Learn about application Deployments.</li>
<li>Deploy your first app on Kubernetes with kubectl.</li>
</ul>
</div>
</div>
<br>
<br>
<div class="row">
<div class="col-md-8">
<h3>Kubernetes Deployments</h3>
<p>
Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes <b>Deployment</b>. The Deployment is responsible for creating and updating instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances that the Deployment creates onto individual Nodes in the cluster.
</p>
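            <p>As a rough illustration of that flow, the sketch below creates a Deployment from the command line and checks what the master scheduled. The name and image are placeholders rather than the ones used in the interactive tutorial, and note that on kubectl releases contemporary with this page <code>kubectl run</code> creates a Deployment, while newer releases use <code>kubectl create deployment</code> for the same purpose:</p>
            <pre><code># Create a Deployment managing one instance of a containerized app (placeholder name and image).
kubectl run hello-app --image=nginx --port=80

# The Deployment object, with its desired and current replica counts.
kubectl get deployments

# The Pod that the Deployment created and the master scheduled onto a node.
kubectl get pods
</code></pre>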
@ -59,7 +60,7 @@
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/getting-started/public/images/module_02_first_app.svg"></p>
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg"></p>
</div>
</div>
<br>
@ -94,7 +95,7 @@
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/deploy-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
</div>
</div>

View File

@ -7,18 +7,13 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/explore-intro.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<br>
<div class="katacoda">
@ -31,7 +26,7 @@
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/expose-app.html" role="button">Continue to Module 4<span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/expose-intro.html" role="button">Continue to Module 4<span class="btn__next"></span></a>
</div>
</div>

View File

@ -7,7 +7,7 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
@ -15,18 +15,19 @@
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/explore-app.html" role="button"><span class="btn__prev"></span> Back</a>
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Learn about Kubernetes Pods.</li>
<li>Learn about Kubernetes Nodes.</li>
<li>Troubleshoot deployed applications.</li>
</ul>
</div>
</div>
<br>
<br>
<div class="row">
<div class="col-md-8">
<h2>Pods</h2>
<p>When you created a Deployment in Module <a href="/docs/tutorials/getting-started/deploy-app.html">2</a>, Kubernetes created a <b>Pod</b> to host your application instance. A Pod is Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:</p>
<h2>Kubernetes Pods</h2>
            <p>When you created a Deployment in Module <a href="/docs/tutorials/kubernetes-basics/deploy-app.html">2</a>, Kubernetes created a <b>Pod</b> to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:</p>
<ul>
<li>Shared storage, as Volumes</li>
<li>Networking, as a unique cluster IP address</li>
@ -63,7 +64,7 @@
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/getting-started/public/images/module_03_pods.svg"></p>
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_pods.svg"></p>
</div>
</div>
<br>
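            <p>A minimal sketch of the kind of kubectl commands used to explore and troubleshoot the Pod described above is shown here; POD_NAME stands for whatever name <code>kubectl get pods</code> reports, and the last command assumes the container image ships bash:</p>
            <pre><code># List Pods and their status (Running, Pending, CrashLoopBackOff, ...).
kubectl get pods

# Show a Pod's containers, IP address and recent events.
kubectl describe pod POD_NAME

# Print what the application wrote to stdout/stderr.
kubectl logs POD_NAME

# Open an interactive shell inside the running container (use sh if bash is not present).
kubectl exec -ti POD_NAME -- bash
</code></pre>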
@ -97,7 +98,7 @@
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/getting-started/public/images/module_03_nodes.svg"></p>
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg"></p>
</div>
</div>
<br>
@ -128,7 +129,7 @@
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/explore-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
</div>
</div>

View File

@ -7,19 +7,13 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/expose-intro.html" role="button"><span class="btn__prev"></span> Back</a>
</div>
</div>
<br>
<div class="katacoda">
<div class="katacoda__alert">
To interact with the Terminal, please use the desktop/tablet version
@ -29,7 +23,7 @@
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/scale-app.html" role="button">Continue to Module 5<span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/scale-intro.html" role="button">Continue to Module 5<span class="btn__next"></span></a>
</div>
</div>

View File

@ -7,23 +7,26 @@
<body>
<link href="/docs/tutorials/getting-started/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-12 text-left">
<a class="btn btn-default" href="/docs/tutorials/getting-started/explore-app.html" role="button"><span class="btn__prev"></span> Back</a>
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Learn about Kubernetes Services.</li>
<li>Learn about Kubernetes Labels.</li>
<li>Expose an application outside Kubernetes.</li>
</ul>
</div>
</div>
<br>
<br>
<div class="row">
<div class="col-md-8">
<h3>Kubernetes Services</h3>
<p>While Pods do have their own unique IP across the cluster, those IPs are not exposed outside Kubernetes. Taking into account that over time Pods may be terminated, deleted or replaced by other Pods, we need a way to let other Pods and applications automatically discover each other. Kubernetes addresses this by grouping Pods in Services. A Kubernetes <b>Service</b> is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.</p>
<p>This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:</p>
@ -58,7 +61,7 @@
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/getting-started/public/images/module_04_services.svg"></p>
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_services.svg"></p>
</div>
</div>
<br>
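            <p>To make the Service abstraction concrete, the sketch below exposes an existing Deployment and inspects the resulting Service; the Deployment name is a placeholder carried over from the earlier sketch, and the label used in the last command is an assumption about how the Pods were labeled:</p>
            <pre><code># Expose the Deployment's Pods outside the cluster on a node port (placeholder name).
kubectl expose deployment hello-app --type=NodePort --port=80

# List Services and note the cluster IP and the assigned node port.
kubectl get services

# Show which Pod endpoints the Service load-balances to, and the label selector it uses.
kubectl describe service hello-app

# Select Pods by label (assumes the Pods carry an app=hello-app label; adjust to the actual labels).
kubectl get pods -l app=hello-app
</code></pre>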
@ -106,7 +109,7 @@
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/getting-started/public/images/module_04_labels.svg"></p>
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg"></p>
</div>
</div>
<br>
@ -122,7 +125,7 @@
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/getting-started/expose-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/expose-interactive.html" role="button">Start Interactive Tutorial <span class="btn__next"></span></a>
</div>
</div>

View File

@ -0,0 +1,105 @@
---
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="./public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-9">
<h2>Kubernetes Basics</h2>
<p>This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.</p>
<p>Using the interactive tutorials, you can learn to:</p>
<ul>
<li>Deploy a containerized application on a cluster</li>
<li>Scale the deployment</li>
<li>Update the containerized application with a new software version</li>
<li>Debug the containerized application</li>
</ul>
                    <p>The tutorials use Katacoda to run a virtual terminal in your web browser that runs Minikube, a small-scale local deployment of Kubernetes that can run anywhere. There's no need to install any software or configure anything; each interactive tutorial runs directly in your web browser.</p>
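                    <p>As a compressed preview of that workflow, the sketch below strings the six modules together as plain kubectl commands; the names and image tags are illustrative, and nothing here needs to be run locally, since each module provides its own hosted terminal:</p>
                    <pre><code># Module 1: create a cluster (locally this would be Minikube).
minikube start

# Module 2: deploy an app (on kubectl releases of this era, `kubectl run` creates a Deployment).
kubectl run hello-app --image=nginx:1.10 --port=80

# Module 3: explore and troubleshoot the deployed app.
kubectl get pods

# Module 4: expose the app outside the cluster.
kubectl expose deployment hello-app --type=NodePort --port=80

# Module 5: scale the Deployment to four replicas.
kubectl scale deployment hello-app --replicas=4

# Module 6: roll out a new image version (assumes the container is named hello-app).
kubectl set image deployment/hello-app hello-app=nginx:1.11
</code></pre>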
</div>
</div>
<br>
<div class="row">
<div class="col-md-9">
<h2>What can Kubernetes do for you?</h2>
                    <p>With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated easily and quickly without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. <a href="http://kubernetes.io/docs/whatisk8s/">Kubernetes</a> is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.</p>
</div>
</div>
<div class="content__modules">
<h2>Kubernetes Basics Modules</h2>
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/cluster-intro/"><img src="./public/images/module_01.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="cluster-intro/"><h5>1. Create a Kubernetes cluster</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/deploy-intro/"><img src="./public/images/module_02.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="deploy-intro/"><h5>2. Deploy an app</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/explore-intro/"><img src="./public/images/module_03.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="explore-intro/"><h5>3. Explore your app</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/expose-intro/"><img src="./public/images/module_04.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="expose-intro/"><h5>4. Expose your app publicly</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/scale-intro/"><img src="./public/images/module_05.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="scale-intro/"><h5>5. Scale up your app</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/update-intro/"><img src="./public/images/module_06.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="update-intro/"><h5>6. Update your app</h5></a>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/cluster-intro/" role="button">Start the tutorial<span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

Some files were not shown because too many files have changed in this diff.