Merge branch 'master' into patch-1

reviewable/pr1363/r1
Steve Perry 2016-10-21 18:05:09 -07:00 committed by GitHub
commit 2c604254a4
63 changed files with 956 additions and 278 deletions

View File

@ -4,6 +4,13 @@ Welcome! We are very pleased you want to contribute to the documentation and/or
You can click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.
For more information about contributing to the Kubernetes documentation, see:
* [Creating a Documentation Pull Request](http://kubernetes.io/docs/contribute/create-pull-request/)
* [Writing a New Topic](http://kubernetes.io/docs/contribute/write-new-topic/)
* [Staging Your Documentation Changes](http://kubernetes.io/docs/contribute/stage-documentation-changes/)
* [Using Page Templates](http://kubernetes.io/docs/contribute/page-templates/)
## Automatic Staging for Pull Requests
When you create a pull request (either against master or the upcoming release), your changes are staged in a custom subdomain on Netlify so that you can see your changes in rendered form before the PR is merged. You can use this to verify that everything is correct before the PR gets merged. To view your changes:
@ -13,17 +20,17 @@ When you create a pull request (either against master or the upcoming release),
- Look for "deploy/netlify"; you'll see "Deploy Preview Ready!" if staging was successful
- Click "Details" to bring up the staged site and navigate to your changes
## Release Branch Staging
## Branch structure and staging
The Kubernetes site maintains staged versions at a subdomain provided by Netlify. Every PR for the Kubernetes site, either against the master branch or the upcoming release branch, is staged automatically.
The current version of the website is served out of the `master` branch. To make changes to the live docs, such as bug fixes, broken links, typos, etc, **target your pull request to the master branch**.
The staging site for the next upcoming Kubernetes release is here: [http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/)
The `release-1.x` branch stores changes for **upcoming releases of Kubernetes**. For example, the `release-1.5` branch has changes for the 1.5 release. These changes target branches (and *not* master) to avoid publishing documentation updates prior to the release for which they're relevant. If you have a change for an upcoming release of Kubernetes, **target your pull request to the appropriate release branch**.
The staging site reflects the current state of what's been merged in the release branch, or in other words, what the docs will look like for the next upcoming release. It's automatically updated as new PRs get merged.
The staging site for the next upcoming Kubernetes release is here: [http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/). The staging site reflects the current state of what's been merged in the release branch, or in other words, what the docs will look like for the next upcoming release. It's automatically updated as new PRs get merged.
## Staging the site locally (using Docker)
Don't like installing stuff? Download and run a local staging server with a single `docker run` command.
git clone https://github.com/kubernetes/kubernetes.github.io.git
cd kubernetes.github.io
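For reference, that single command (it appears again in the Docker staging topic later in this diff) is:

```shell
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 gcr.io/google-samples/k8sdocs:1.0
```

Then view the staged site at [http://localhost:4000](http://localhost:4000).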
@ -47,7 +54,7 @@ Install Ruby 2.2 or higher. If you're on Linux, run these commands:
apt-get install ruby2.2
apt-get install ruby2.2-dev
* If you're on a Mac, follow [these instructions](https://gorails.com/setup/osx/).
* If you're on a Windows machine you can use the [Ruby Installer](http://rubyinstaller.org/downloads/). During the installation make sure to check the option for *Add Ruby executables to your PATH*.
The remainder of the steps should work the same across operating systems.
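A sketch of those remaining steps, mirroring the "Staging locally without Docker" topic later in this diff:

```shell
gem --version              # verify Ruby and RubyGems are installed
gem install github-pages   # installs Jekyll and the GitHub Pages gems
cd kubernetes.github.io    # root of your cloned fork
jekyll serve               # serves the site at http://localhost:4000
```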
@ -140,16 +147,6 @@ That, of course, will send users to:
(Or whatever Kubernetes release that docs branch is associated with.)
## Branch structure
The current version of the website is served out of the `master` branch. To make changes to the live docs, such as bug fixes, broken links, typos, etc, **target your pull request to the master branch**.
The `release-1.x` branches store changes for **upcoming releases of Kubernetes**. For example, the `release-1.5` branch has changes for the upcoming 1.5 release. These changes target branches (and *not* master) to avoid publishing documentation updates prior to the release for which they're relevant. If you have a change for an upcoming release of Kubernetes, **target your pull request to the appropriate release branch**.
Changes in the "docsv2" branch (where we are testing a revamp of the docs) are automatically staged here:
http://k8sdocs.github.io/docs/tutorials/
## Config yaml guidelines
Guidelines for config yamls that are included in the site docs. These

View File

@ -30,4 +30,3 @@ permalink: pretty
gems:
- jekyll-redirect-from

View File

@ -6,6 +6,12 @@ toc:
- title: Contributing to the Kubernetes Docs
  section:
  - title: Creating a Documentation Pull Request
    path: /docs/contribute/create-pull-request/
  - title: Writing a New Topic
    path: /docs/contribute/write-new-topic/
  - title: Staging Your Documentation Changes
    path: /docs/contribute/stage-documentation-changes/
  - title: Using Page Templates
    path: /docs/contribute/page-templates/

View File

@ -2,6 +2,12 @@ bigheader: "Tasks"
toc:
- title: Tasks
  path: /docs/tasks/
- title: Configuring Pods and Containers
  section:
  - title: Defining Environment Variables for a Container
    path: /docs/tasks/configure-pod-container/define-environment-variable-container/
  - title: Defining a Command and Arguments for a Container
    path: /docs/tasks/configure-pod-container/define-command-argument-container/
- title: Accessing Applications in a Cluster
  section:
  - title: Using Port Forwarding to Access Applications in a Cluster

View File

@ -4,6 +4,7 @@
<a href="/docs/hellonode/">Get Started</a>
<a href="/docs/">Documentation</a>
<a href="http://blog.kubernetes.io/">Blog</a>
<a href="/partners/">Partners</a>
<a href="/community/">Community</a>
<a href="/case-studies/">Case Studies</a>
</nav>

_includes/partner-script.js — new file, 208 lines
View File

@ -0,0 +1,208 @@
;(function () {
var partners = [
{
type: 0,
name: 'CoreOS',
logo: 'core_os',
link: 'https://tectonic.com/',
blurb: 'Tectonic is the enterprise-ready Kubernetes product, by CoreOS. It adds key features to allow you to manage, update, and control clusters in production.'
},
{
type: 0,
name: 'Deis',
logo: 'deis',
link: 'https://deis.com',
blurb: 'Deis, the creators of Helm, Workflow, and Steward, helps developers and operators build, deploy, manage, and scale their applications on top of Kubernetes.'
},
{
type: 0,
name: 'Sysdig Cloud',
logo: 'sys_dig',
link: 'https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/',
blurb: 'Container native monitoring with deep support for Kubernetes.'
},
{
type: 0,
name: 'Puppet',
logo: 'puppet',
link: 'https://puppet.com/blog/managing-kubernetes-configuration-puppet',
blurb: 'The Puppet module for Kubernetes makes it easy to manage Pods, Replication Controllers, Services and more in Kubernetes, and to build domain-specific interfaces to one\'s Kubernetes configuration.'
},
{
type: 0,
name: 'Citrix',
logo: 'citrix',
link: 'https://www.citrix.com/blogs/2016/07/15/citrix-kubernetes-a-home-run/',
blurb: 'Netscaler CPX gives app developers all the features they need to load balance their microservices and containerized apps with Kubernetes.'
},
{
type: 0,
name: 'Wercker',
logo: 'wercker',
link: 'http://wercker.com/workflows/partners/kubernetes/',
blurb: 'Wercker automates your build, test and deploy pipelines for launching containers and triggering rolling updates on your Kubernetes cluster. '
},
{
type: 0,
name: 'Rancher',
logo: 'rancher',
link: 'http://rancher.com/kubernetes/',
blurb: 'Rancher is an open-source, production-ready container management platform that makes it easy to deploy and leverage Kubernetes in the enterprise.'
},
{
type: 0,
name: 'Red Hat',
logo: 'redhat',
link: 'https://www.openshift.com/',
blurb: 'Leverage an enterprise Kubernetes platform to orchestrate complex, multi-container apps.'
},
{
type: 0,
name: 'Intel',
logo: 'intel',
link: 'https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html',
blurb: 'Powering the GIFEE (Google\'s Infrastructure for Everyone Else) to run OpenStack deployments on Kubernetes.'
},
{
type: 0,
name: 'ElasticKube',
logo: 'elastickube',
link: 'https://www.ctl.io/elastickube-kubernetes/',
blurb: 'Self-service container management for Kubernetes.'
},
{
type: 0,
name: 'Platform9',
logo: 'platform9',
link: 'https://platform9.com/products/kubernetes/',
blurb: 'Platform9 is the open source-as-a-service company that takes all of the goodness of Kubernetes and delivers it as a managed service.'
},
{
type: 0,
name: 'Datadog',
logo: 'datadog',
link: 'http://docs.datadoghq.com/integrations/kubernetes/',
blurb: 'Full-stack observability for dynamic infrastructure & applications. Includes precision alerting, analytics and deep Kubernetes integrations. '
},
{
type: 0,
name: 'AppFormix',
logo: 'appformix',
link: 'http://www.appformix.com/solutions/appformix-for-kubernetes/',
blurb: 'AppFormix is a cloud infrastructure performance optimization service helping enterprise operators streamline their cloud operations on any Kubernetes cloud. '
},
{
type: 0,
name: 'Crunchy',
logo: 'crunchy',
link: 'http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql',
blurb: 'Crunchy PostgreSQL Container Suite is a set of containers for managing PostgreSQL with DBA microservices leveraging Kubernetes and Helm.'
},
{
type: 0,
name: 'Aqua',
logo: 'aqua',
link: 'http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment',
blurb: 'Deep, automated security for your containers running on Kubernetes.'
},
{
type: 0,
name: 'Canonical',
logo: 'canonical',
link: 'https://jujucharms.com/canonical-kubernetes/',
blurb: 'The Canonical Distribution of Kubernetes enables you to operate Kubernetes clusters on demand on any major public cloud and private infrastructure.'
},
{
type: 0,
name: 'Distelli',
logo: 'distelli',
link: 'https://www.distelli.com/',
blurb: 'Pipelines from your source repositories to your Kubernetes Clusters on any cloud.'
},
{
type: 0,
name: 'Nuage networks',
logo: 'nuagenetworks',
link: 'https://github.com/nuagenetworks/nuage-kubernetes',
blurb: 'The Nuage SDN platform provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.'
},
{
type: 1,
name: 'Apprenda',
logo: 'apprenda',
link: 'https://apprenda.com/kubernetes-support/',
blurb: 'Apprenda offers a flexible and wide range of support plans for pure-play Kubernetes on your choice of infrastructure, cloud provider, and operating system.'
},
{
type: 1,
name: 'Reactive Ops',
logo: 'reactive_ops',
link: 'https://www.reactiveops.com/kubernetes/',
blurb: 'ReactiveOps has written automation on best practices for infrastructure as code on GCP & AWS using Kubernetes, helping you build and maintain a world-class infrastructure at a fraction of the price of an internal hire.'
},
{
type: 1,
name: 'Livewyer',
logo: 'livewyer',
link: 'https://livewyer.io/services/kubernetes-experts/',
blurb: 'Kubernetes experts that on-board applications and empower IT teams to get the most out of containerised technology.'
},
{
type: 1,
name: 'Deis',
logo: 'deis',
link: 'https://deis.com/services/',
blurb: 'Deis provides professional services and 24x7 operational support for any Kubernetes cluster managed by our global cluster operations team.'
},
{
type: 1,
name: 'Samsung SDS',
logo: 'samsung_sds',
link: 'http://www.samsungsdsa.com/cloud-infrastructure_kubernetes',
blurb: 'Samsung SDS\'s Cloud Native Computing Team offers expert consulting across the range of technical aspects involved in building services targeted at a Kubernetes cluster.'
},
{
type: 1,
name: 'Container Solutions',
logo: 'container_solutions',
link: 'http://container-solutions.com/resources/kubernetes/',
blurb: 'Container Solutions is a premium software consultancy that focuses on programmable infrastructure, offering our expertise in software development, strategy and operations to help you innovate at speed and scale.'
}
]
var isvContainer = document.getElementById('isvContainer')
var servContainer = document.getElementById('servContainer')
// Sort partners alphabetically by display name before rendering
var sorted = partners.sort(function (a, b) {
if (a.name > b.name) return 1
if (a.name < b.name) return -1
return 0
})
// Build a logo/blurb/"Learn more" card for each partner and append it to the
// technology container (type 0) or the services container (type 1)
sorted.forEach(function (obj) {
var box = document.createElement('div')
box.className = 'partner-box'
var img = document.createElement('img')
img.src = '/images/square-logos/' + obj.logo + '.png'
var div = document.createElement('div')
var p = document.createElement('p')
p.textContent = obj.blurb
var link = document.createElement('a')
link.href = obj.link
link.target = '_blank'
link.textContent = 'Learn more'
div.appendChild(p)
div.appendChild(link)
box.appendChild(img)
box.appendChild(div)
var container = obj.type ? servContainer : isvContainer
container.appendChild(box)
})
})();

View File

@ -0,0 +1,94 @@
h5 {
  font-size: 18px;
  line-height: 1.5em;
  margin-bottom: 2em;
}
#usersGrid a {
  display: inline-block;
  background-color: #f9f9f9;
}
#isvContainer, #servContainer {
  position: relative;
  width: 100%;
  display: flex;
  justify-content: space-between;
  flex-wrap: wrap;
}
#isvContainer {
  margin-bottom: 80px;
}
.partner-box {
  position: relative;
  width: 47%;
  max-width: 48%;
  min-width: 48%;
  margin-bottom: 20px;
  padding: 20px;
  flex: 1;
  display: flex;
  justify-content: space-between;
  align-items: flex-start;
}
.partner-box img {
  background-color: #f9f9f9;
}
.partner-box > div {
  margin-left: 30px;
}
.partner-box a {
  color: #3576E3;
}
@media screen and (max-width: 1024px) {
  .partner-box {
    flex-direction: column;
    justify-content: flex-start;
  }
  .partner-box > div {
    margin: 20px 0 0;
  }
}
@media screen and (max-width: 568px) {
  #isvContainer, #servContainer {
    justify-content: center;
  }
  .partner-box {
    flex-direction: column;
    justify-content: flex-start;
    width: 100%;
    max-width: 100%;
    min-width: 100%;
  }
  .partner-box > div {
    margin: 20px 0 0;
  }
}

View File

@ -0,0 +1,4 @@
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. If you do not already have a
cluster, you can create one by using
[Minikube](/docs/getting-started-guides/minikube).

View File

@ -164,10 +164,11 @@ $video-section-height: 550px
margin-bottom: 20px
a
width: 20%
width: 16.65%
float: left
font-size: 24px
font-weight: 300
white-space: nowrap
.social
padding: 0 30px

View File

@ -222,8 +222,7 @@ $feature-box-div-width: 45%
text-align: center
a
font-size: 22px
width: auto
width: 30%
padding: 0 20px
.social

View File

@ -10,8 +10,6 @@ title: Community
<h1>Community</h1>
</section>
<section id="mainContent">
<main>
<div class="content">
@ -29,20 +27,6 @@ title: Community
from AWS and OpenStack to Big Data and Scalability, there's a place for you to contribute and instructions
for forming a new SIG if your special interest isn't covered (yet).</p>
</div>
<div class="content">
<h3>Customers</h3>
<div class="company-logos">
<img src="/images/community_logos/zulily_logo.png">
<img src="/images/community_logos/we_pay_logo.png">
<img src="/images/community_logos/goldman_sachs_logo.png">
<img src="/images/community_logos/ebay_logo.png">
<img src="/images/community_logos/box_logo.png">
<img src="/images/community_logos/wikimedia_logo.png">
<img src="/images/community_logos/soundcloud_logo.png">
<img src="/images/community_logos/new_york_times_logo.png">
<img src="/images/community_logos/kabam_logo.png">
</div>
</div>
<div class="content">
<h3>Events</h3>
<div id="calendarWrapper">
@ -50,34 +34,6 @@ title: Community
frameborder="0" scrolling="no"></iframe>
</div>
</div>
<div class="content">
<h3>Partners</h3>
<p>We are working with a broad group of partners who contribute to the Kubernetes core codebase, making it stronger and richer, and who help grow the Kubernetes ecosystem, supporting
a spectrum of complementary platforms, from open source solutions to market-leading technologies.</p>
<div class="partner-logos">
<a href="https://coreos.com/kubernetes"><img src="/images/community_logos/core_os_logo.png"></a>
<a href="https://deis.com"><img src="/images/community_logos/deis_logo.png"></a>
<a href="https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/"><img src="/images/community_logos/sysdig_cloud_logo.png"></a>
<a href="https://puppet.com/blog/managing-kubernetes-configuration-puppet"><img src="/images/community_logos/puppet_logo.png"></a>
<a href="https://www.citrix.com/blogs/2016/07/15/citrix-kubernetes-a-home-run/"><img src="/images/community_logos/citrix_logo.png"></a>
<a href="http://wercker.com/workflows/partners/kubernetes/"><img src="/images/community_logos/wercker_logo.png"></a>
<a href="http://rancher.com/kubernetes/"><img src="/images/community_logos/rancher_logo.png"></a>
<a href="https://www.openshift.com/"><img src="/images/community_logos/red_hat_logo.png"></a>
<a href="https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html"><img src="/images/community_logos/intel_logo.png"></a>
<a href="https://elasticbox.com/kubernetes/"><img src="/images/community_logos/elastickube_logo.png"></a>
<a href="https://platform9.com/blog/containers-as-a-service-kubernetes-docker"><img src="/images/community_logos/platform9_logo.png"></a>
<a href="http://www.appformix.com/solutions/appformix-for-kubernetes/"><img src="/images/community_logos/appformix_logo.png"></a>
<a href="http://kubernetes.io/docs/getting-started-guides/dcos/"><img src="/images/community_logos/mesosphere_logo.png"></a>
<a href="http://docs.datadoghq.com/integrations/kubernetes/"><img src="/images/community_logos/datadog_logo.png"></a>
<a href="https://apprenda.com/kubernetes-support/"><img src="/images/community_logos/apprenda_logo.png"></a>
<a href="http://www.ibm.com/cloud-computing/"><img src="/images/community_logos/ibm_logo.png"></a>
<a href="http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql"><img src="/images/community_logos/crunchy_data_logo.png"></a>
<a href="https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html"><img src="/images/community_logos/mirantis_logo.png"></a>
<a href="http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment"><img src="/images/community_logos/aqua_logo.png"></a>
<a href="https://jujucharms.com/canonical-kubernetes/"><img src="/images/community_logos/ubuntu_cannonical_logo.png"></a>
<a href="https://github.com/nuagenetworks/nuage-kubernetes"><img src="/images/community_logos/nuage_network_logo.png"></a>
</div>
</div>
</main>
</section>

View File

@ -101,7 +101,7 @@ quoting facilities of HTTP. For example: if the bearer token is
header as shown below.
```http
Authentication: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
```
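For example, a minimal sketch of passing the header with `curl` (the API server address is a placeholder for your cluster's endpoint, and `--cacert` should point at your cluster CA):

```shell
curl --cacert /path/to/ca.crt \
  -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
  https://<apiserver-address>/api
```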
### Static Password File

View File

@ -0,0 +1,94 @@
---
redirect_from:
- /editdocs/
---
{% capture overview %}
To contribute to the Kubernetes documentation, create a pull request against the
[kubernetes/kubernetes.github.io](https://github.com/kubernetes/kubernetes.github.io){: target="_blank"}
repository. This page shows how to create a pull request.
{% endcapture %}
{% capture prerequisites %}
1. Create a [GitHub account](https://github.com){: target="_blank"}.
1. Sign the
[Google Contributor License Agreement](https://cla.developers.google.com/about/google-individual){: target="_blank"}.
1. Sign the
[Linux Contributor License Agreement](https://identity.linuxfoundation.org/projects/cncf){: target="_blank"}.
{% endcapture %}
{% capture steps %}
### Creating a fork of the Kubernetes documentation repository
1. Go to the
[kubernetes/kubernetes.github.io](https://github.com/kubernetes/kubernetes.github.io){: target="_blank"}
repository.
1. In the upper-right corner, click **Fork**. This creates a copy of the
Kubernetes documentation repository in your GitHub account. The copy
is called a *fork*.
### Making your changes
1. In your GitHub account, in your fork of the Kubernetes docs, create
a new branch to use for your contribution.
1. In your new branch, make your changes and commit them. If you want to
[write a new topic](/docs/contribute/write-new-topic/),
choose the
[page type](/docs/contribute/page-templates/)
that is the best fit for your content.
### Submitting a pull request to the master branch
If you want your change to be published in the released version of the Kubernetes docs,
create a pull request against the master branch of the Kubernetes
documentation repository.
1. In your GitHub account, in your new branch, create a pull request
against the master branch of the kubernetes/kubernetes.github.io
repository. This opens a page that shows the status of your pull request.
1. Click **Show all checks**. Wait for the **deploy/netlify** check to complete.
To the right of **deploy/netlify**, click **Details**. This opens a staging
site where you can verify that your changes have rendered correctly.
1. During the next few days, check your pull request for reviewer comments.
If needed, revise your pull request by committing changes to your
new branch in your fork.
### Submitting a pull request to the &lt;vnext&gt; branch
If your documentation change should not be released until the next release of
the Kubernetes product, create a pull request against the &lt;vnext&gt; branch
of the Kubernetes documentation repository. The &lt;vnext&gt; branch has the
form `release-<version-number>`, for example release-1.5.
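As a sketch (the remote names and the `release-1.5` branch are assumptions for illustration), preparing such a branch from a local clone might look like:

```shell
git fetch upstream                                   # upstream = kubernetes/kubernetes.github.io
git checkout -b my-docs-change upstream/release-1.5  # base your work on the vnext branch
# ...edit and commit...
git push origin my-docs-change                       # origin = your fork; open the PR against release-1.5
```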
1. In your GitHub account, in your new branch, create a pull request
against the &lt;vnext&gt; branch of the kubernetes/kubernetes.github.io
repository. This opens a page that shows the status of your pull request.
1. Click **Show all checks**. Wait for the **deploy/netlify** check to complete.
To the right of **deploy/netlify**, click **Details**. This opens a staging
site where you can verify that your changes have rendered correctly.
1. During the next few days, check your pull request for reviewer comments.
If needed, revise your pull request by committing changes to your
new branch in your fork.
{% endcapture %}
{% capture whatsnext %}
* Learn about [writing a new topic](/docs/contribute/write-new-topic).
* Learn about [using page templates](/docs/contribute/page-templates/).
* Learn about [staging your changes](/docs/contribute/stage-documentation-changes).
{% endcapture %}
{% include templates/task.md %}

View File

@ -71,7 +71,7 @@ Here's an interesting thing to know about the steps you just did.
<p>Here's an example of a published topic that uses the task template:</p>
<p><a href="/docs/tasks/access-application-cluster/http-proxy-access-application-cluster">Using an HTTP Proxy to Access Applications in a Cluster</a></p>
<p><a href="/docs/tasks/access-kubernetes-api/http-proxy-access-api">Using an HTTP Proxy to Access the Kubernetes API</a></p>
<h3 id="tutorial_template">Tutorial template</h3>

View File

@ -0,0 +1,98 @@
---
---
{% capture overview %}
This page shows how to stage content that you want to contribute
to the Kubernetes documentation.
{% endcapture %}
{% capture prerequisites %}
Create a fork of the Kubernetes documentation repository as described in
[Creating a Documentation Pull Request](/docs/contribute/create-pull-request/).
{% endcapture %}
{% capture steps %}
### Staging from your GitHub account
GitHub provides staging of content in your master branch. Note that you
might not want to merge your changes into your master branch. If that is
the case, choose another option for staging your content.
1. In your GitHub account, in your fork, merge your changes into
the master branch.
1. Change the name of your repository to `<your-username>.github.io`, where
`<your-username>` is the username of your GitHub account.
1. Delete the `CNAME` file.
1. View your staged content at this URL:
https://<your-username>.github.io
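A minimal sketch of the merge step from a local clone, assuming `origin` points at your fork and `my-docs-change` is your working branch:

```shell
git checkout master
git merge my-docs-change
git push origin master     # GitHub Pages rebuilds https://<your-username>.github.io
```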
### Staging a pull request
When you create a pull request against the Kubernetes documentation
repository, you can see your changes on a staging server.
1. In your GitHub account, in your new branch, submit a pull request to the
kubernetes/kubernetes.github.io repository. This opens a page that shows the
status of your pull request.
1. Click **Show all checks**. Wait for the **deploy/netlify** check to complete.
To the right of **deploy/netlify**, click **Details**. This opens a staging
site where you see your changes.
### Staging locally using Docker
You can use the k8sdocs Docker image to run a local staging server. If you're
interested, you can view the
[Dockerfile](https://github.com/kubernetes/kubernetes.github.io/blob/master/staging-container/Dockerfile){: target="_blank"}
for this image.
1. Install Docker if you don't already have it.
1. Clone your fork to your local development machine.
1. In the root of your cloned repository, enter this command to start a local
web server:
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 gcr.io/google-samples/k8sdocs:1.0
1. View your staged content at
[http://localhost:4000](http://localhost:4000){: target="_blank"}.
### Staging locally without Docker
1. [Install Ruby 2.2 or later](https://www.ruby-lang.org){: target="_blank"}.
1. [Install RubyGems](https://rubygems.org){: target="_blank"}.
1. Verify that Ruby and RubyGems are installed:
gem --version
1. Install the GitHub Pages package, which includes Jekyll:
gem install github-pages
1. Clone your fork to your local development machine.
1. In the root of your cloned repository, enter this command to start a local
web server:
jekyll serve
1. View your staged content at
[http://localhost:4000](http://localhost:4000){: target="_blank"}.
{% endcapture %}
{% capture whatsnext %}
* Learn about [writing a new topic](/docs/contribute/write-new-topic/).
* Learn about [using page templates](/docs/contribute/page-templates/).
* Learn about [creating a pull request](/docs/contribute/create-pull-request/).
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,83 @@
---
---
{% capture overview %}
This page shows how to create a new topic for the Kubernetes docs.
{% endcapture %}
{% capture prerequisites %}
Create a fork of the Kubernetes documentation repository as described in
[Creating a Documentation Pull Request](/docs/contribute/create-pull-request/).
{% endcapture %}
{% capture steps %}
### Choosing a page type
As you prepare to write a new topic, think about which of these page types
is the best fit for your content:
<table>
<tr>
<td>Task</td>
<td>A task page shows how to do a single thing, typically by giving a short sequence of steps. Task pages have minimal explanation, but often provide links to conceptual topics that provide related background and knowledge.</td>
</tr>
<tr>
<td>Tutorial</td>
<td>A tutorial page shows how to accomplish a goal that is larger than a single task. Typically a tutorial page has several sections, each of which has a sequence of steps. For example, a tutorial might provide a walkthrough of a code sample that illustrates a certain feature of Kubernetes. Tutorials can include surface-level explanations, but should link to related concept topics for deep explanations.</td>
</tr>
<tr>
<td>Concept</td>
<td>A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials.</td>
</tr>
</table>
Each page type has a
[template](/docs/contribute/page-templates/)
that you can use as you write your topic.
Using templates helps ensure consistency among topics of a given type.
### Choosing a title and filename
Choose a title that has the keywords you want search engines to find.
Create a filename that uses the words in your title separated by hyphens.
For example, the topic with title
[Using an HTTP Proxy to Access the Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api/)
has filename `http-proxy-access-api.md`. You don't need to put
"kubernetes" in the filename, because "kubernetes" is already in the
URL for the topic, for example:
http://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/
### Choosing a directory
Depending on your page type, put your new file in a subdirectory of one of these:
* /docs/tasks/
* /docs/tutorials/
* /docs/concepts/
You can put your file in an existing subdirectory, or you can create a new
subdirectory.
### Creating an entry in the table of contents
Depending on your page type, create an entry in one of these files:
* /_data/tasks.yaml
* /_data/tutorials.yaml
* /_data/concepts.yaml
{% endcapture %}
{% capture whatsnext %}
* Learn about [using page templates](/docs/contribute/page-templates/).
* Learn about [staging your changes](/docs/contribute/stage-documentation-changes).
* Learn about [creating a pull request](/docs/contribute/create-pull-request/).
{% endcapture %}
{% include templates/task.md %}

View File

@ -10,15 +10,15 @@ assignees:
## Prerequisites
You need two machines with CentOS installed on them.
To configure Kubernetes with CentOS, you'll need a machine to act as a master, and one or more CentOS 7 hosts to act as cluster nodes.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](/docs/admin/networking) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager and kube-scheduler. In addition, the master will also run _etcd_. The remaining hosts, centos-minion-n will be the nodes and run kubelet, proxy, cadvisor and docker.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
All of them run flanneld as the networking overlay.
**System Information:**
@ -28,12 +28,14 @@ Please replace host IP with your environment.
```conf
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
centos-minion-1 = 192.168.121.65
centos-minion-2 = 192.168.121.66
centos-minion-3 = 192.168.121.67
```
**Prepare the hosts:**
* Create a /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion} with the following information.
* Create a /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion-n} with the following information.
```conf
[virt7-docker-common-release]
@ -42,17 +44,19 @@ baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```
* Install Kubernetes and etcd on all hosts - centos-{master,minion}. This will also pull in docker and cadvisor.
* Install Kubernetes, etcd and flannel on all hosts - centos-{master,minion-n}. This will also pull in docker and cadvisor.
```shell
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
192.168.121.65 centos-minion-1
192.168.121.66 centos-minion-2
192.168.121.67 centos-minion-3" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
@ -74,7 +78,7 @@ KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
* Disable the firewall on the master and all the nodes, as docker does not play well with other firewall rule managers
```shell
systemctl disable iptables-services firewalld
@ -114,17 +118,39 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS=""
```
* Configure ETCD to hold the network overlay configuration on master:
**Warning** This network must be unused in your network infrastructure! `172.30.0.0/16` is free in our network.
```shell
$ etcdctl mkdir /kube-centos/network
$ etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
```
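You can verify the stored configuration with, for example:

```shell
$ etcdctl get /kube-centos/network/config
```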
* Configure flannel to overlay Docker network in /etc/sysconfig/flanneld on the master (also in the nodes as we'll see):
```shell
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://centos-master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
**Configure the Kubernetes services on the node.**
**Configure the Kubernetes services on the nodes.**
***We need to configure the kubelet and start the kubelet and proxy***
@ -138,7 +164,7 @@ KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=centos-minion"
KUBELET_HOSTNAME="--hostname-override=centos-minion-n" # Check the node number!
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
@ -147,10 +173,24 @@ KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_ARGS=""
```
* Start the appropriate services on node (centos-minion).
* Configure flannel to overlay Docker network in /etc/sysconfig/flanneld (in all the nodes)
```shell
for SERVICES in kube-proxy kubelet docker; do
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://centos-master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```
* Start the appropriate services on node (centos-minion-n).
```shell
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
@ -164,7 +204,9 @@ done
```shell
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
centos-minion-1 <none> Ready
centos-minion-2 <none> Ready
centos-minion-3 <none> Ready
```
**The cluster should be running! Launch a test pod.**
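For example (the image and pod name here are illustrative, not part of this guide):

```shell
$ kubectl run test-nginx --image=nginx
$ kubectl get pods
```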
@ -176,7 +218,7 @@ You should have a functional cluster, check out [101](/docs/user-guide/walkthrou
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -41,6 +41,8 @@ clusters.
[AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds (including AWS and Google Cloud Platform).
[KCluster.io](https://kcluster.io) provides highly available and scalable managed Kubernetes clusters for AWS.
### Turn-key Cloud Solutions
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
@ -120,6 +122,7 @@ IaaS Provider | Config. Mgmt | OS | Networking | Docs
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | [✓][3] | Commercial
Stackpoint.io | | multi-support | multi-support | [docs](http://www.stackpointcloud.com) | | Commercial
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | | Commercial
KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens))

View File

@ -29,6 +29,7 @@ Internet to download the necessary files, while worker nodes do not.
Ubuntu 15 which uses systemd instead of upstart.
4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.2.0, may work with higher versions.
5. All the remote servers can be ssh logged in without a password by using key authentication.
6. The remote user on all machines is using /bin/bash as its login shell, and has sudo access.
## Starting a Cluster

View File

@ -117,7 +117,7 @@ h2, h3, h4 {
<div class="col2nd">
<h3>Contribute to Our Docs</h3>
<p>The docs for Kubernetes are open-source, just like the code for Kubernetes itself. The docs are on GitHub Pages, so you can fork it and it will auto-stage on username.github.io, previewing your changes!</p>
<a href="/editdocs/" class="button">Write Docs for K8s</a>
<a href="/docs/contribute/create-pull-request/" class="button">Write Docs for K8s</a>
</div>
<div class="col2nd">
<h3>Need Help?</h3>

View File

@ -1,90 +0,0 @@
---
---
{% capture overview %}
This page shows how to use an HTTP proxy to access the Kubernetes API.
{% endcapture %}
{% capture prerequisites %}
* Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs).
* Create a Kubernetes cluster, including a running Kubernetes
API server. One way to create a new cluster is to use
[Minikube](/docs/getting-started-guides/minikube).
* Configure `kubectl` to communicate with your Kubernetes API server. This
configuration is done automatically if you use Minikube.
* If you do not already have an application running in your cluster, start
a Hello world application by entering this command:
kubectl run --image=gcr.io/google-samples/node-hello:1.0 --port=8080
{% endcapture %}
{% capture steps %}
### Using kubectl to start a proxy server
This command starts a proxy to the Kubernetes API server:
kubectl proxy --port=8080
### Exploring the Kubernetes API
When the proxy server is running, you can explore the API using `curl`, `wget`,
or a browser.
Get the API versions:
curl http://localhost:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.2.15:8443"
    }
  ]
}
Get a list of pods:
curl http://localhost:8080/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/pods",
    "resourceVersion": "33074"
  },
  "items": [
    {
      "metadata": {
        "name": "kubernetes-bootcamp-2321272333-ix8pt",
        "generateName": "kubernetes-bootcamp-2321272333-",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/pods/kubernetes-bootcamp-2321272333-ix8pt",
        "uid": "ba21457c-6b1d-11e6-85f7-1ef9f1dab92b",
        "resourceVersion": "33003",
        "creationTimestamp": "2016-08-25T23:43:30Z",
        "labels": {
          "pod-template-hash": "2321272333",
          "run": "kubernetes-bootcamp"
        },
        ...
      }
{% endcapture %}
{% capture whatsnext %}
Learn more about [kubectl proxy](/docs/user-guide/kubectl/kubectl_proxy).
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]

View File

@ -0,0 +1,105 @@
---
---
{% capture overview %}
This page shows how to define commands and arguments when you run a container
in a Kubernetes Pod.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
### Defining a command and arguments when you create a Pod
When you create a Pod, you can define a command and arguments for the
containers that run in the Pod. To define a command, include the `command`
field in the configuration file. To define arguments for the command, include
the `args` field in the configuration file. The command and arguments that
you define cannot be changed after the Pod is created.
The command and arguments that you define in the configuration file
override the default command and arguments provided by the container image.
If you define args, but do not define a command, the default command is used
with your new arguments. For more information, see
[Commands and Capabilities](/docs/user-guide/containers/).
In this exercise, you create a Pod that runs one container. The configuration
file for the Pod defines a command and two arguments:
{% include code.html language="yaml" file="commands.yaml" ghlink="/docs/tasks/configure-pod-container/commands.yaml" %}
1. Create a Pod based on the YAML configuration file:
export REPO=https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master
kubectl create -f $REPO/docs/tasks/configure-pod-container/commands.yaml
1. List the running Pods:
kubectl get pods
The output shows that the container that ran in the command-demo Pod has
completed.
1. To see the output of the command that ran in the container, view the logs
from the Pod:
kubectl logs command-demo
The output shows the values of the HOSTNAME and KUBERNETES_PORT environment
variables:
command-demo
tcp://10.3.240.1:443
### Using environment variables to define arguments
In the preceding example, you defined the arguments directly by
providing strings. As an alternative to providing strings directly,
you can define arguments by using environment variables:
env:
- name: MESSAGE
  value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
This means you can define an argument for a Pod using any of
the techniques available for defining environment variables, including
[ConfigMaps](/docs/user-guide/configmap/)
and
[Secrets](/docs/user-guide/secrets/).
NOTE: The environment variable appears in parentheses, `"$(VAR)"`. This is
required for the variable to be expanded in the `command` or `args` field.
### Running a command in a shell
In some cases, you need your command to run in a shell. For example, your
command might consist of several commands piped together, or it might be a shell
script. To run your command in a shell, wrap it like this:
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
{% endcapture %}
{% capture whatsnext %}
* Learn more about [containers and commands](/docs/user-guide/containers/).
* Learn more about [configuring containers](/docs/user-guide/configuring-containers/).
* Learn more about [running commands in a container](/docs/user-guide/getting-into-containers/).
* See [Container](/docs/api-reference/v1/definitions/#_v1_container).
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,77 @@
---
---
{% capture overview %}
This page shows how to define environment variables when you run a container
in a Kubernetes Pod.
{% endcapture %}
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
{% endcapture %}
{% capture steps %}
### Defining an environment variable for a container
When you create a Pod, you can set environment variables for the containers
that run in the Pod. To set environment variables, include the `env` field in
the configuration file.
In this exercise, you create a Pod that runs one container. The configuration
file for the Pod defines an environment variable with name `DEMO_GREETING` and
value `"Hello from the environment"`. Here is the configuration file for the
Pod:
{% include code.html language="yaml" file="envars.yaml" ghlink="/docs/tasks/configure-pod-container/envars.yaml" %}
1. Create a Pod based on the YAML configuration file:
export REPO=https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master
kubectl create -f $REPO/docs/tasks/configure-pod-container/envars.yaml
1. List the running Pods:
kubectl get pods
The output is similar to this:
NAME READY STATUS RESTARTS AGE
envar-demo 1/1 Running 0 9s
1. Get a shell to the container running in your Pod:
kubectl exec -it envar-demo -- /bin/bash
1. In your shell, run the `printenv` command to list the environment variables.
root@envar-demo:/# printenv
The output is similar to this:
NODE_VERSION=4.4.2
EXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237
HOSTNAME=envar-demo
...
DEMO_GREETING=Hello from the environment
1. To exit the shell, enter `exit`.
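As an alternative sketch, you can read a single variable without opening an interactive shell:

```shell
kubectl exec envar-demo -- printenv DEMO_GREETING
```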
{% endcapture %}
{% capture whatsnext %}
* Learn more about [environment variables](/docs/user-guide/environment-guide/).
* Learn about [using secrets as environment variables](/docs/user-guide/secrets/#using-secrets-as-environment-variables).
* See [EnvVarSource](/docs/api-reference/v1/definitions/#_v1_envvarsource).
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"

View File

@ -3,6 +3,11 @@
The Tasks section of the Kubernetes documentation is a work in progress.
#### Configuring Pods and Containers
* [Defining Environment Variables for a Container](/docs/tasks/configure-pod-container/define-environment-variable-container/)
* [Defining a Command and Arguments for a Container](/docs/tasks/configure-pod-container/define-command-argument-container/)
#### Accessing Applications in a Cluster
* [Using Port Forwarding to Access Applications in a Cluster](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)

View File

@ -1,14 +1,13 @@
apiVersion: extensions/v1beta1
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleRef:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: php-apache
    subresource: scale
  minReplicas: 1
  maxReplicas: 10
  cpuUtilization:
    targetPercentage: 50
  targetCPUUtilizationPercentage: 50

View File

@ -127,20 +127,19 @@ Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas
Instead of using `kubectl autoscale` command we can use the [hpa-php-apache.yaml](/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file, which looks like this:
```yaml
apiVersion: extensions/v1beta1
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleRef:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: php-apache
    subresource: scale
  minReplicas: 1
  maxReplicas: 10
  cpuUtilization:
    targetPercentage: 50
  targetCPUUtilizationPercentage: 50
```
We will create the autoscaler by executing the following command:
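The command itself falls outside this hunk; a likely form, as a sketch, is:

```shell
kubectl create -f hpa-php-apache.yaml
```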

View File

@ -166,7 +166,7 @@ We will use the `amqp-consume` utility to read the message
from the queue and run our actual program. Here is a very simple
example program:
{% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/job/work-queue-1/worker.py" %}
{% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/jobs/work-queue-1/worker.py" %}
Now, build an image. If you are working in the source
tree, then change directory to `examples/job/work-queue-1`.
@ -204,7 +204,7 @@ Here is a job definition. You'll need to make a copy of the Job and edit the
image to match the name you used, and call it `./job.yaml`.
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/user-guide/job/work-queue-1/job.yaml" %}
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/user-guide/jobs/work-queue-1/job.yaml" %}
In this example, each pod works on one item from the queue and then exits.
So, the completion count of the Job corresponds to the number of work items
@ -258,12 +258,12 @@ want to consider one of the other [job patterns](/docs/user-guide/jobs/#job-patt
This approach creates a pod for every work item. If your work items only take a few seconds,
though, creating a Pod for every work item may add a lot of overhead. Consider another
[example](/docs/user-guide/job/work-queue-2), that executes multiple work items per Pod.
[example](/docs/user-guide/jobs/work-queue-2/), that executes multiple work items per Pod.
In this example, we used the `amqp-consume` utility to read the message
from the queue and run our actual program. This has the advantage that you
do not need to modify your program to be aware of the queue.
A [different example](/docs/user-guide/job/work-queue-2), shows how to
A [different example](/docs/user-guide/jobs/work-queue-2/), shows how to
communicate with the work queue using a client library.
## Caveats

View File

@ -108,7 +108,7 @@ called rediswq.py ([Download](rediswq.py?raw=true)).
The "worker" program in each Pod of the Job uses the work queue
client library to get work. Here it is:
{% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/job/work-queue-2/worker.py" %}
{% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/jobs/work-queue-2/worker.py" %}
If you are working from the source tree,
change directory to the `examples/job/work-queue-2` directory.
@ -147,7 +147,7 @@ gcloud docker push gcr.io/<project>/job-wq-2
Here is the job definition:
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/user-guide/job/work-queue-2/job.yaml" %}
{% include code.html language="yaml" file="job.yaml" ghlink="/docs/user-guide/jobs/work-queue-2/job.yaml" %}
Be sure to edit the job template to
change `gcr.io/myproject` to your own path.

View File

@ -12,6 +12,7 @@ Display one or many resources
Display one or many resources.
Valid resource types include:
* clusters (valid only for federation apiservers)
* componentstatuses (aka 'cs')
* configmaps (aka 'cm')
@ -68,7 +69,9 @@ kubectl get -o json pod web-pod-13je7
kubectl get -f pod.yaml -o json
# Return only the phase value of the specified pod.
{% raw %}
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}
{% endraw %}
# List all replication controllers and services together in ps output format.
kubectl get rc,services

View File

@ -44,9 +44,11 @@ metadata:
To configure the annotation via `kubectl`:
```shell{% raw %}
```shell
{% raw %}
kubectl annotate ns <namespace> "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
{% endraw %}```
{% endraw %}
```
See the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) for an example.

View File

@ -17,7 +17,7 @@ Throughout this doc you will see a few terms that are sometimes used interchange
* Persistent Volume Claim (PVC): A request for storage, typically a [persistent volume](/docs/user-guide/persistent-volumes/walkthrough/).
* Host name: The hostname attached to the UTS namespace of the pod, i.e. the output of `hostname` in the pod.
* DNS/Domain name: A *cluster local* domain name resolvable using standard methods (e.g. [gethostbyname](http://linux.die.net/man/3/gethostbyname)).
* Ordinality: the proprety of being "ordinal", or occupying a position in a sequence.
* Ordinality: the property of being "ordinal", or occupying a position in a sequence.
* Pet: a single member of a PetSet; more generally, a stateful application.
* Peer: a process running a server, capable of communicating with other such processes.
@ -29,7 +29,7 @@ This doc assumes familiarity with the following Kubernetes concepts:
* [Cluster DNS](/docs/admin/dns/)
* [Headless Services](/docs/user-guide/services/#headless-services)
* [Persistent Volumes](/docs/user-guide/volumes/)
* [Dynamic volume provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md)
* [Persistent Volume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md)
You need a working Kubernetes cluster at version >= 1.3, with a healthy DNS [cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md) at version >= 15. You cannot use PetSet on a hosted Kubernetes provider that has disabled `alpha` resources.
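A quick sketch of checking those prerequisites (the label and namespace are the conventional ones for the DNS addon, so treat them as assumptions for your cluster):

```shell
kubectl version                                               # client and server should both report >= 1.3
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns  # DNS addon pods should be Running
```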
@ -85,7 +85,7 @@ Before you start deploying applications as PetSets, there are a few limitations
* PetSet is an *alpha* resource, not available in any Kubernetes release prior to 1.3.
* As with all alpha/beta resources, it can be disabled through the `--runtime-config` option passed to the apiserver, and in fact most likely will be disabled on hosted offerings of Kubernetes.
* The only updatable field on a PetSet is `replicas`.
* The storage for a given pet must either be provisioned by a [dynamic storage provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that dynamic volume provisioning is also currently in alpha.
* The storage for a given pet must either be provisioned by a [persistent volume provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that persistent volume provisioning is also currently in alpha.
* Deleting and/or scaling a PetSet down will *not* delete the volumes associated with the PetSet. This is done to ensure safety first; your data is more valuable than an auto purge of all related PetSet resources. **Deleting the Persistent Volume Claims will result in a deletion of the associated volumes**.
* All PetSets currently require a "governing service", or a Service responsible for the network identity of the pets. The user is responsible for this Service.
* Updating an existing PetSet is currently a manual process, meaning you either need to deploy a new PetSet with the new image version, or orphan Pets one by one, update their image, and join them back to the cluster.
@ -392,7 +392,8 @@ $ grace=$(kubectl get po web-0 --template '{{.spec.terminationGracePeriodSeconds
$ kubectl delete petset,po -l app=nginx
$ sleep $grace
$ kubectl delete pvc -l app=nginx
{% endraw %}```
{% endraw %}
```
## Troubleshooting

View File

@ -226,7 +226,8 @@ Here is a toy example:
The message is recorded along with the other state of the last (i.e., most recent) termination:
```shell{% raw %}
```shell
{% raw %}
$ kubectl create -f ./pod-w-message.yaml
pod "pod-w-message" created
$ sleep 70
@ -234,7 +235,8 @@ $ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatus
Sleep expired
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.exitCode}}{{end}}"
0
{% endraw %}```
{% endraw %}
```
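The `./pod-w-message.yaml` file referenced above is not reproduced here; a plausible sketch, assuming an `ubuntu:14.04` image whose container sleeps for a minute and then writes its message to the default termination log path, would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-w-message
spec:
  containers:
  - name: messager
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    # Write the message to the default termination log path before exiting
    args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
```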
## What's next?

View File

@ -108,6 +108,25 @@ While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
machine reboot, and any files you write will count against your container's
memory limit.
#### Example pod
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: test-container
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
```
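To get the tmpfs-backed behavior described above, set the volume's `medium` field to `"Memory"`. A sketch mirroring the example pod (the pod and volume names are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-memory
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory   # backed by tmpfs: cleared on reboot, counts against the memory limit
```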
### hostPath
A `hostPath` volume mounts a file or directory from the host node's filesystem

View File

@ -59,12 +59,14 @@ On most providers, the pod IPs are not externally accessible. The easiest way to
Provided the pod IP is accessible, you should be able to access its http endpoint with wget on port 80:
```shell{% raw %}
```shell
{% raw %}
$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1 --env "POD_IP=$(kubectl get pod nginx -o go-template='{{.status.podIP}}')"
u@busybox$ wget -qO- http://$POD_IP # Run in the busybox container
u@busybox$ exit # Exit the busybox container
$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
{% endraw %}```
{% endraw %}
```
Delete the pod by name:

View File

@ -136,7 +136,8 @@ On most providers, the service IPs are not externally accessible. The easiest wa
Provided the service IP is accessible, you should be able to access its http endpoint with wget on the exposed port:
```shell{% raw %}
```shell
{% raw %}
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}')
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
$ echo "$SERVICE_IP:$SERVICE_PORT"
@ -144,7 +145,8 @@ $ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --
u@busybox$ wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run in the busybox container
u@busybox$ exit # Exit the busybox container
$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
{% endraw %}```
{% endraw %}
```
To delete the service by name:

View File

@ -1,42 +0,0 @@
---
layout: docwithnav
---
<!-- BEGIN: Gotta keep this section JS/HTML because it swaps out content dynamically -->
<p>&nbsp;</p>
<script language="JavaScript">
var forwarding=window.location.hash.replace("#","");
$( document ).ready(function() {
if(forwarding) {
$("#generalInstructions").hide();
$("#continueEdit").show();
$("#continueEditButton").text("Edit " + forwarding);
$("#continueEditButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/edit/master/" + forwarding)
} else {
$("#generalInstructions").show();
$("#continueEdit").hide();
}
});
</script>
<div id="continueEdit">
<h2>Continue your edit</h2>
<p>Click the below link to edit the page you were just on. When you are done, press "Commit Changes" at the bottom of the screen. This will create a copy of our site on your GitHub account called a "fork." You can make other changes in your fork after it is created, if you want. When you are ready to send us all your changes, go to the index page for your fork and click "New Pull Request" to let us know about it.</p>
<p><a id="continueEditButton" class="button"></a></p>
</div>
<div id="generalInstructions">
<h2>Edit our site in the cloud</h2>
<p>Click the below button to visit the repo for our site. You can then click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.</p>
<p><a class="button" href="https://github.com/kubernetes/kubernetes.github.io/">Browse this site's source code</a></p>
</div>
<!-- END: Dynamic section -->
{% include_relative README.md %}

24 binary image files added (not shown); sizes range from 1.9 KiB to 17 KiB.

View File

@ -14,47 +14,24 @@ title: Partners
<section id="users">
<main>
<h5>We are working with a broad group of partners who contribute to the Kubernetes core codebase, making it stronger and richer, creating a vibrant Kubernetes ecosystem supporting a spectrum of complementing platforms, from open source solutions to market-leading technologies.</h5>
<h5>We are working with a broad group of partners who contribute to the Kubernetes core codebase, making it stronger and richer. These partners create a vibrant Kubernetes ecosystem supporting a spectrum of complementary platforms, from open source solutions to market-leading technologies.</h5>
<h3>ISV Partners</h3>
<div id="usersGrid">
<a target="_blank" href="https://coreos.com/kubernetes"><img src="/images/community_logos/core_os_logo.png"></a>
<a target="_blank" href="https://deis.com"><img src="/images/community_logos/deis_logo.png"></a>
<a target="_blank" href="https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud"><img src="/images/community_logos/sysdig_cloud_logo.png"></a>
<a target="_blank" href="https://puppet.com/blog/managing-kubernetes-configuration-puppet"><img src="/images/community_logos/puppet_logo.png"></a>
<a target="_blank" href="https://www.microloadbalancer.com/docs/deploy-netscaler-cpx-kubernetes-environment"><img src="/images/community_logos/citrix_logo.png"></a>
<a target="_blank" href="http://wercker.com/workflows/partners/kubernetes/"><img src="/images/community_logos/wercker_logo.png"></a>
<a target="_blank" href="http://rancher.com/kubernetes/"><img src="/images/community_logos/rancher_logo.png"></a>
<a target="_blank" href="https://www.openshift.com/"><img src="/images/community_logos/red_hat_logo.png"></a>
<a target="_blank" href="https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html"><img src="/images/community_logos/intel_logo.png"></a>
<a target="_blank" href="https://elasticbox.com/kubernetes/"><img src="/images/community_logos/elastickube_logo.png"></a>
<a target="_blank" href="https://platform9.com/blog/containers-as-a-service-kubernetes-docker"><img src="/images/community_logos/platform9_logo.png"></a>
<a target="_blank" href="http://www.appformix.com/solutions/appformix-for-kubernetes/"><img src="/images/community_logos/appformix_logo.png"></a>
<a target="_blank" href="http://kubernetes.io/docs/getting-started-guides/dcos"><img src="/images/community_logos/mesosphere_logo.png"></a>
<a target="_blank" href="http://docs.datadoghq.com/integrations/kubernetes/"><img src="/images/community_logos/datadog_logo.png"></a>
<a target="_blank" href="https://apprenda.com/kubernetes-support/"><img src="/images/community_logos/apprenda_logo.png"></a>
<a target="_blank" href="http://www.ibm.com/cloud-computing/"><img src="/images/community_logos/ibm_logo.png"></a>
<a target="_blank" href="http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql"><img src="/images/community_logos/crunchy_data_logo.png"></a>
<a target="_blank" href="https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html"><img src="/images/community_logos/mirantis_logo.png"></a>
<a target="_blank" href="http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment"><img src="/images/community_logos/aqua_logo.png"></a>
<a target="_blank" href="https://jujucharms.com/canonical-kubernetes/"><img src="/images/community_logos/ubuntu_cannonical_logo.png"></a>
</div>
<div id="isvContainer"></div>
<h3>Services Partners</h3>
<div id="servContainer"></div>
</main>
</section>
<style>
h5 {
font-size: 18px;
line-height: 1.5em;
margin-bottom: 2em;
}
#usersGrid a {
display: inline-block;
background-color: #f9f9f9;
}
</style>
{% include footer.html %}
{% include case-study-styles.html %}
<style>
{% include partner-style.css %}
</style>
<script>
{% include partner-script.js %}
</script>
</body>
</html>