Merge remote-tracking branch 'refs/remotes/kubernetes/master' into UC-design-patch
|
@ -5,3 +5,4 @@ _site/**
|
|||
.sass-cache/**
|
||||
CNAME
|
||||
.travis.yml
|
||||
.idea/
|
||||
|
|
175
README.md
|
@ -1,182 +1,19 @@
|
|||
## Instructions for Contributing to the Docs/Website
|
||||
## Instructions for Contributing to the Kubernetes Documentation
|
||||
|
||||
Welcome! We are very pleased you want to contribute to the documentation and/or website for Kubernetes.
|
||||
Welcome! We are very pleased you want to contribute to the Kubernetes documentation.
|
||||
|
||||
You can click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.
|
||||
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
|
||||
|
||||
For more information about contributing to the Kubernetes documentation, see:
|
||||
|
||||
* [Contributing to the Kubernetes Documentation](http://kubernetes.io/editdocs/)
|
||||
* [Creating a Documentation Pull Request](http://kubernetes.io/docs/contribute/create-pull-request/)
|
||||
* [Writing a New Topic](http://kubernetes.io/docs/contribute/write-new-topic/)
|
||||
* [Staging Your Documentation Changes](http://kubernetes.io/docs/contribute/stage-documentation-changes/)
|
||||
* [Using Page Templates](http://kubernetes.io/docs/contribute/page-templates/)
|
||||
|
||||
## Automatic Staging for Pull Requests
|
||||
|
||||
When you create a pull request (either against master or the upcoming release), your changes are staged in a custom subdomain on Netlify so that you can see your changes in rendered form and verify that everything is correct before the PR is merged. To view your changes:
|
||||
|
||||
- Scroll down to the PR's list of Automated Checks
|
||||
- Click "Show All Checks"
|
||||
- Look for "deploy/netlify"; you'll see "Deploy Preview Ready!" if staging was successful
|
||||
- Click "Details" to bring up the staged site and navigate to your changes
|
||||
|
||||
## Branch structure and staging
|
||||
|
||||
The current version of the website is served out of the `master` branch. To make changes to the live docs, such as bug fixes, broken links, and typos, **target your pull request to the `master` branch**.
|
||||
|
||||
The `release-1.x` branch stores changes for **upcoming releases of Kubernetes**. For example, the `release-1.5` branch has changes for the 1.5 release. These changes target release branches (and *not* `master`) to avoid publishing documentation updates prior to the release for which they're relevant. If you have a change for an upcoming release of Kubernetes, **target your pull request to the appropriate release branch**.
|
||||
|
||||
The staging site for the next upcoming Kubernetes release is here: [http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/). The staging site reflects the current state of what's been merged in the release branch, or in other words, what the docs will look like for the next upcoming release. It's automatically updated as new PRs get merged.
|
||||
|
||||
## Staging the site locally (using Docker)
|
||||
|
||||
Don't like installing stuff? Download and run a local staging server with a single `docker run` command.
|
||||
|
||||
git clone https://github.com/kubernetes/kubernetes.github.io.git
|
||||
cd kubernetes.github.io
|
||||
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 gcr.io/google-samples/k8sdocs:1.0
|
||||
|
||||
Then visit [http://localhost:4000](http://localhost:4000) to see our site. Any changes you make on your local machine will be automatically staged.
|
||||
|
||||
If you're interested you can view [the Dockerfile for this image](https://github.com/kubernetes/kubernetes.github.io/blob/master/staging-container/Dockerfile).
|
||||
|
||||
## Staging the site locally (from scratch setup)
|
||||
|
||||
Use the commands below to set up your environment for running GitHub Pages locally. Then, any edits you make will be viewable
|
||||
on a lightweight webserver that runs on your local machine.
|
||||
|
||||
This will typically be the fastest way (by far) to iterate on docs changes and see them staged once you get this set up, but it does involve several install steps that take a while to complete, and it makes system-wide modifications.
|
||||
|
||||
Install Ruby 2.2 or higher. If you're on Linux, run these commands:
|
||||
|
||||
apt-get install software-properties-common
|
||||
apt-add-repository ppa:brightbox/ruby-ng
apt-get update
|
||||
apt-get install ruby2.2
|
||||
apt-get install ruby2.2-dev
|
||||
|
||||
* If you're on a Mac, follow [these instructions](https://gorails.com/setup/osx/).
|
||||
* If you're on a Windows machine you can use the [Ruby Installer](http://rubyinstaller.org/downloads/). During the installation make sure to check the option for *Add Ruby executables to your PATH*.
|
||||
|
||||
The remainder of the steps should work the same across operating systems.
|
||||
|
||||
To confirm that Ruby is installed correctly, run `gem --version` at the command prompt; you should get a response with your version number. Likewise, you can confirm Git is installed properly by running `git --version`, which will respond with your version of Git.
|
||||
|
||||
Install the GitHub Pages package, which includes Jekyll:
|
||||
|
||||
gem install github-pages
|
||||
|
||||
Clone our site:
|
||||
|
||||
git clone https://github.com/kubernetes/kubernetes.github.io.git
|
||||
|
||||
Make any changes you want. Then, to see your changes locally:
|
||||
|
||||
cd kubernetes.github.io
|
||||
jekyll serve
|
||||
|
||||
Your copy of the site will then be viewable at: [http://localhost:4000](http://localhost:4000)
|
||||
(or wherever Jekyll tells you).
|
||||
|
||||
## GitHub help
|
||||
|
||||
If you're a bit rusty with git/GitHub, you might want to read
|
||||
[this](http://readwrite.com/2013/10/02/github-for-beginners-part-2) for a refresher.
|
||||
|
||||
## Common Tasks
|
||||
|
||||
### Edit Page Titles or Change the Left Navigation
|
||||
|
||||
Edit the yaml files in `/_data/` for the Guides, Reference, Samples, or Support areas.
|
||||
|
||||
You may have to exit and run `jekyll clean` before restarting `jekyll serve` to
|
||||
get changes to files in `/_data/` to show up.
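For example, a minimal sketch of that restart cycle:

```shell
# Stop the running server (Ctrl+C), then clear the generated site and restart it:
jekyll clean
jekyll serve
```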
|
||||
|
||||
### Add Images
|
||||
|
||||
Put the new image in `/images/docs/` if it's for the documentation, and just `/images/` if it's for the website.
|
||||
|
||||
**For diagrams, we greatly prefer SVG files!**
|
||||
|
||||
### Include code from another file
|
||||
|
||||
To include a file that is hosted on this GitHub repo, insert this code:
|
||||
|
||||
<pre>{% include code.html language="<LEXERVALUE>" file="<RELATIVEPATH>" ghlink="<PATHFROMROOT>" %}</pre>
|
||||
|
||||
* `LEXERVALUE`: The language in which the file was written; must be [a value supported by Rouge](https://github.com/jneen/rouge/wiki/list-of-supported-languages-and-lexers).
|
||||
* `RELATIVEPATH`: The path to the file you're including, relative to the current file.
|
||||
* `PATHFROMROOT`: The path to the file relative to root, e.g. `/docs/admin/foo.yaml`
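For example, a hypothetical include (the file names here are placeholders, not files that necessarily exist in this repo) might look like:

<pre>{% include code.html language="yaml" file="pod-sample.yaml" ghlink="/docs/user-guide/pods/pod-sample.yaml" %}</pre>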
|
||||
|
||||
To include a file that is hosted in the external, main Kubernetes repo, make sure it's added to [/update-imported-docs.sh](https://github.com/kubernetes/kubernetes.github.io/blob/master/update-imported-docs.sh), run that script so the file gets downloaded, and then enter:
|
||||
|
||||
<pre>{% include code.html language="<LEXERVALUE>" file="<RELATIVEPATH>" k8slink="<PATHFROMK8SROOT>" %}</pre>
|
||||
|
||||
* `PATHFROMK8SROOT`: The path to the file relative to the root of [the Kubernetes repo](https://github.com/kubernetes/kubernetes/tree/release-1.2), e.g. `/examples/rbd/foo.yaml`
|
||||
|
||||
## Using tabs for multi-language examples
|
||||
|
||||
By specifying some inline CSV in a variable called `tabspec`, you can include a file
|
||||
called `tabs.html` that generates tabs showing code examples in multiple languages.
|
||||
|
||||
<pre>{% capture tabspec %}servicesample
|
||||
JSON,json,service-sample.json,/docs/user-guide/services/service-sample.json
|
||||
YAML,yaml,service-sample.yaml,/docs/user-guide/services/service-sample.yaml{% endcapture %}
|
||||
{% include tabs.html %}</pre>
|
||||
|
||||
In English, this would read: "Create a set of tabs with the alias `servicesample`,
|
||||
and have tabs visually labeled "JSON" and "YAML" that use `json` and `yaml` Rouge syntax highlighting, which display the contents of
|
||||
`service-sample.{extension}` on the page, and link to the file in GitHub at (full path)."
|
||||
|
||||
Example file: [Pods: Multi-Container](http://kubernetes.io/docs/user-guide/pods/multi-container/).
|
||||
|
||||
## Use a global variable
|
||||
|
||||
The `/_config.yml` file defines some useful variables you can use when editing docs.
|
||||
|
||||
* `page.githubbranch`: The name of the GitHub branch on the Kubernetes repo that is associated with this branch of the docs. e.g. `release-1.2`
|
||||
* `page.version`: The version of Kubernetes associated with this branch of the docs. e.g. `v1.2`
|
||||
* `page.docsbranch`: The name of the GitHub branch on the Docs/Website repo that you are currently using. e.g. `release-1.1` or `master`
|
||||
|
||||
This keeps the docs you're editing aligned with the Kubernetes version you're talking about. For example, if you define a link like so, you'll never have to worry about it going stale in future doc branches:
|
||||
|
||||
<pre>View the README [here](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md).</pre>
|
||||
|
||||
That, of course, will send users to:
|
||||
|
||||
[http://releases.k8s.io/release-1.2/cluster/addons/README.md](http://releases.k8s.io/release-1.2/cluster/addons/README.md)
|
||||
|
||||
(Or whatever Kubernetes release that docs branch is associated with.)
|
||||
|
||||
## Config yaml guidelines
|
||||
|
||||
Guidelines for config yamls that are included in the site docs. These
|
||||
are the yaml or json files that contain Kubernetes object
|
||||
configuration to be used with `kubectl create -f`. Config yamls should
|
||||
be:
|
||||
|
||||
* Separate deployable files, not embedded in the document, unless they are very
|
||||
small variations of a full config.
|
||||
* Included in the doc with the include code
|
||||
[above](#include-code-from-another-file).
|
||||
* In the same directory as the doc that they are being used in
|
||||
* If you are re-using a yaml from another doc, that is OK, just
|
||||
leave it there, don't move it up to a higher level directory.
|
||||
* Tested in
|
||||
[test/examples_test.go](https://github.com/kubernetes/kubernetes.github.io/blob/master/test/examples_test.go)
|
||||
* Following
|
||||
[best practices](http://kubernetes.io/docs/user-guide/config-best-practices/).
|
||||
|
||||
Don't assume the reader has this repository checked out; use `kubectl
|
||||
create -f https://github...` in example commands. For Docker images
|
||||
used in config yamls, try to use an image from an existing Kubernetes
|
||||
example. If creating an image for a doc, follow the
|
||||
[example guidelines](https://github.com/kubernetes/kubernetes/blob/master/examples/guidelines.md#throughout)
|
||||
section on "Docker images" from the Kubernetes repository.
|
||||
|
||||
## Partners
|
||||
Kubernetes partners are companies that contribute to the Kubernetes core codebase, extend their platforms to support Kubernetes, or provide managed services to users centered around the Kubernetes platform. Partners can get their services and offerings added to the [partner page](https://k8s.io/partners) by completing and submitting the [partner request form](https://goo.gl/qcSnZF). Once the information and assets are verified, the partner products/services will be listed on the partner page. This typically takes 7-10 days.
|
||||
* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style-guide/)
|
||||
|
||||
## Thank you!
|
||||
|
||||
Kubernetes thrives on community participation and we really appreciate your
|
||||
Kubernetes thrives on community participation, and we really appreciate your
|
||||
contributions to our site and our documentation!
|
||||
|
|
|
@ -7,3 +7,7 @@ toc:
|
|||
section:
|
||||
- title: Annotations
|
||||
path: /docs/concepts/object-metadata/annotations/
|
||||
- title: Controllers
|
||||
section:
|
||||
- title: StatefulSets
|
||||
path: /docs/concepts/abstractions/controllers/statefulsets/
|
||||
|
|
|
@ -237,6 +237,8 @@ toc:
|
|||
path: /docs/getting-started-guides/coreos
|
||||
- title: Ubuntu
|
||||
path: /docs/getting-started-guides/ubuntu/
|
||||
- title: Windows Server Containers
|
||||
path: /docs/getting-started-guides/windows/
|
||||
- title: Validate Node Setup
|
||||
path: /docs/admin/node-conformance
|
||||
- title: Portable Multi-Node Cluster
|
||||
|
@ -272,8 +274,6 @@ toc:
|
|||
path: /docs/admin/cluster-components/
|
||||
- title: Configuring Kubernetes Use of etcd
|
||||
path: /docs/admin/etcd/
|
||||
- title: Federating Clusters
|
||||
path: /docs/admin/federation/
|
||||
- title: Using Multiple Clusters
|
||||
path: /docs/admin/multi-cluster/
|
||||
- title: Changing Cluster Size
|
||||
|
@ -302,3 +302,10 @@ toc:
|
|||
path: /docs/admin/node-problem/
|
||||
- title: AppArmor
|
||||
path: /docs/admin/apparmor/
|
||||
|
||||
- title: Administering Federation
|
||||
section:
|
||||
- title: Using `kubefed`
|
||||
path: /docs/admin/federation/kubefed/
|
||||
- title: Using `federation-up` and `deploy.sh`
|
||||
path: /docs/admin/federation/
|
||||
|
|
|
@ -178,7 +178,15 @@ toc:
|
|||
- title: kube-scheduler
|
||||
path: /docs/admin/kube-scheduler/
|
||||
- title: kubelet
|
||||
path: /docs/admin/kubelet/
|
||||
section:
|
||||
- title: Overview
|
||||
path: /docs/admin/kubelet/
|
||||
- title: Master-Node communication
|
||||
path: /docs/admin/master-node-communication/
|
||||
- title: TLS bootstrapping
|
||||
path: /docs/admin/kubelet-tls-bootstrapping/
|
||||
- title: Kubelet authentication/authorization
|
||||
path: /docs/admin/kubelet-authentication-authorization/
|
||||
|
||||
- title: Glossary
|
||||
section:
|
||||
|
@ -254,6 +262,12 @@ toc:
|
|||
section:
|
||||
- title: Federation User Guide
|
||||
path: /docs/user-guide/federation/
|
||||
- title: Federated ConfigMap
|
||||
path: /docs/user-guide/federation/configmap/
|
||||
- title: Federated DaemonSet
|
||||
path: /docs/user-guide/federation/daemonsets/
|
||||
- title: Federated Deployment
|
||||
path: /docs/user-guide/federation/deployment/
|
||||
- title: Federated Events
|
||||
path: /docs/user-guide/federation/events/
|
||||
- title: Federated Ingress
|
||||
|
|
|
@ -6,6 +6,8 @@ toc:
|
|||
|
||||
- title: Contributing to the Kubernetes Docs
|
||||
section:
|
||||
- title: Contributing to the Kubernetes Documentation
|
||||
path: /editdocs/
|
||||
- title: Creating a Documentation Pull Request
|
||||
path: /docs/contribute/create-pull-request/
|
||||
- title: Writing a New Topic
|
||||
|
@ -51,5 +53,3 @@ toc:
|
|||
path: https://github.com/kubernetes/kubernetes/releases/
|
||||
- title: Release Roadmap
|
||||
path: https://github.com/kubernetes/kubernetes/milestones/
|
||||
- title: Contributing to Kubernetes Documentation
|
||||
path: /editdocs/
|
||||
|
|
|
@ -3,6 +3,7 @@ abstract: "Step-by-step instructions for performing operations with Kuberentes."
|
|||
toc:
|
||||
- title: Tasks
|
||||
path: /docs/tasks/
|
||||
|
||||
- title: Configuring Pods and Containers
|
||||
section:
|
||||
- title: Defining Environment Variables for a Container
|
||||
|
@ -11,24 +12,49 @@ toc:
|
|||
path: /docs/tasks/configure-pod-container/define-command-argument-container/
|
||||
- title: Assigning CPU and RAM Resources to a Container
|
||||
path: /docs/tasks/configure-pod-container/assign-cpu-ram-container/
|
||||
- title: Configuring a Pod to Use a Volume for Storage
|
||||
path: /docs/tasks/configure-pod-container/configure-volume-storage/
|
||||
|
||||
- title: Accessing Applications in a Cluster
|
||||
section:
|
||||
- title: Using Port Forwarding to Access Applications in a Cluster
|
||||
path: /docs/tasks/access-application-cluster/port-forward-access-application-cluster/
|
||||
|
||||
|
||||
- title: Debugging Applications in a Cluster
|
||||
section:
|
||||
- title: Determining the Reason for Pod Failure
|
||||
path: /docs/tasks/debug-application-cluster/determine-reason-pod-failure/
|
||||
|
||||
|
||||
- title: Accessing the Kubernetes API
|
||||
section:
|
||||
- title: Using an HTTP Proxy to Access the Kubernetes API
|
||||
path: /docs/tasks/access-kubernetes-api/http-proxy-access-api/
|
||||
|
||||
- title: Administering a Cluster
|
||||
section:
|
||||
- title: Assigning Pods to Nodes
|
||||
path: /docs/tasks/administer-cluster/assign-pods-nodes/
|
||||
- title: Autoscaling the DNS Service in a Cluster
|
||||
path: /docs/tasks/administer-cluster/dns-horizontal-autoscaling/
|
||||
- title: Safely Draining a Node while Respecting Application SLOs
|
||||
path: /docs/tasks/administer-cluster/safely-drain-node/
|
||||
|
||||
- title: Managing Stateful Applications
|
||||
section:
|
||||
- title: Upgrading from PetSets to StatefulSets
|
||||
path: /docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/
|
||||
- title: Scaling a StatefulSet
|
||||
path: /docs/tasks/manage-stateful-set/scale-stateful-set/
|
||||
- title: Deleting a Stateful Set
|
||||
path: /docs/tasks/manage-stateful-set/deleting-a-statefulset/
|
||||
- title: Debugging a StatefulSet
|
||||
path: /docs/tasks/manage-stateful-set/debugging-a-statefulset/
|
||||
- title: Force Deleting StatefulSet Pods
|
||||
path: /docs/tasks/manage-stateful-set/delete-pods/
|
||||
|
||||
- title: Troubleshooting
|
||||
section:
|
||||
- title: Debugging Init Containers
|
||||
path: /docs/tasks/troubleshoot/debug-init-containers/
|
||||
- title: Configuring Access Control and Identity Management
|
||||
path: /docs/tasks/administer-cluster/access-control-identity-management/
|
||||
|
|
|
@ -53,5 +53,9 @@ toc:
|
|||
path: /docs/tutorials/stateless-application/expose-external-ip-address/
|
||||
- title: Stateful Applications
|
||||
section:
|
||||
- title: StatefulSet Basics
|
||||
path: /docs/tutorials/stateful-application/basic-stateful-set/
|
||||
- title: Running a Single-Instance Stateful Application
|
||||
path: /docs/tutorials/stateful-application/run-stateful-application/
|
||||
- title: Running a Replicated Stateful Application
|
||||
path: /docs/tutorials/stateful-application/run-replicated-stateful-application/
|
|
@ -0,0 +1,6 @@
|
|||
You need to either have a dynamic PersistentVolume provisioner with a default
|
||||
[StorageClass](/docs/user-guide/persistent-volumes/#storageclasses),
|
||||
or [statically provision PersistentVolumes](/docs/user-guide/persistent-volumes/#provisioning)
|
||||
yourself to satisfy the [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)
|
||||
used here.
|
||||
|
|
@ -24,6 +24,11 @@
|
|||
<a href="https://github.com/kubernetes/kubernetes" class="button">Contribute to the K8s codebase</a>
|
||||
</div>
|
||||
</div>
|
||||
<div id="miceType" class="center">© {{ 'now' | date: "%Y" }} The Kubernetes Authors | Distributed under <a href="https://github.com/kubernetes/kubernetes.github.io/blob/master/LICENSE" class="light-text">CC BY 4.0</a></div>
|
||||
<div id="miceType" class="center">
|
||||
© {{ 'now' | date: "%Y" }} The Kubernetes Authors | Documentation Distributed under <a href="https://github.com/kubernetes/kubernetes.github.io/blob/master/LICENSE" class="light-text">CC BY 4.0</a>
|
||||
</div>
|
||||
<div id="miceType" class="center">
|
||||
Copyright © {{ 'now' | date: "%Y" }} The Linux Foundation®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page: <a href="https://www.linuxfoundation.org/trademark-usage" class="light-text">https://www.linuxfoundation.org/trademark-usage</a>
|
||||
</div>
|
||||
</main>
|
||||
</footer>
|
||||
|
|
|
@ -196,6 +196,13 @@
|
|||
link: 'https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html',
|
||||
blurb: 'Mirantis builds and manages private clouds with open source software such as OpenStack, deployed as containers orchestrated by Kubernetes.'
|
||||
},
|
||||
{
|
||||
type: 0,
|
||||
name: 'Kubernetic',
|
||||
logo: 'kubernetic',
|
||||
link: 'https://kubernetic.com/',
|
||||
blurb: 'Kubernetic is a Kubernetes Desktop client that simplifies and democratizes cluster management for DevOps.'
|
||||
},
|
||||
{
|
||||
type: 1,
|
||||
name: 'Apprenda',
|
||||
|
@ -266,6 +273,13 @@
|
|||
link: 'http://www.skippbox.com/services/',
|
||||
blurb: 'Skippbox brings its Kubernetes expertise to help companies embrace Kubernetes on their way to digital transformation. Skippbox offers both professional services and expert training.'
|
||||
},
|
||||
{
|
||||
type: 1,
|
||||
name: 'Harbur',
|
||||
logo: 'harbur',
|
||||
link: 'https://harbur.io/',
|
||||
blurb: 'Based in Barcelona, Harbur is a consulting firm that helps companies deploy self-healing solutions empowered by Container technologies.'
|
||||
},
|
||||
{
|
||||
type: 1,
|
||||
name: 'Endocode',
|
||||
|
|
|
@ -4,6 +4,7 @@ assignees:
|
|||
- lavalamp
|
||||
- ericchiang
|
||||
- deads2k
|
||||
- liggitt
|
||||
|
||||
---
|
||||
* TOC
|
||||
|
@ -382,6 +383,13 @@ option to the API server during startup. The plugin is implemented in
|
|||
`plugin/pkg/auth/authenticator/password/keystone/keystone.go` and currently uses
|
||||
basic auth to verify users by username and password.
|
||||
|
||||
If you have configured self-signed certificates for the Keystone server,
|
||||
you may need to set the `--experimental-keystone-ca-file=SOMEFILE` option when
|
||||
starting the Kubernetes API server. If you set the option, the Keystone
|
||||
server's certificate is verified by one of the authorities in the
|
||||
`experimental-keystone-ca-file`. Otherwise, the certificate is verified by
|
||||
the host's root Certificate Authority.
|
||||
|
||||
For details on how to use keystone to manage projects and users, refer to the
|
||||
[Keystone documentation](http://docs.openstack.org/developer/keystone/). Please
|
||||
note that this plugin is still experimental, under active development, and likely
|
||||
|
|
|
@ -2,6 +2,8 @@
|
|||
assignees:
|
||||
- erictune
|
||||
- lavalamp
|
||||
- deads2k
|
||||
- liggitt
|
||||
|
||||
---
|
||||
|
||||
|
@ -565,10 +567,10 @@ Access to non-resource paths are sent as:
|
|||
|
||||
Non-resource paths include: `/api`, `/apis`, `/metrics`, `/resetMetrics`,
|
||||
`/logs`, `/debug`, `/healthz`, `/swagger-ui/`, `/swaggerapi/`, `/ui`, and
|
||||
`/version.` Clients require access to `/api`, `/api/*/`, `/apis/`, `/apis/*`,
|
||||
`/apis/*/*`, and `/version` to discover what resources and versions are present
|
||||
on the server. Access to other non-resource paths can be disallowed without
|
||||
restricting access to the REST api.
|
||||
`/version.` Clients require access to `/api`, `/api/*`, `/apis`, `/apis/*`,
|
||||
and `/version` to discover what resources and versions are present on the server.
|
||||
Access to other non-resource paths can be disallowed without restricting access
|
||||
to the REST api.
|
||||
|
||||
For further documentation refer to the authorization.v1beta1 API objects and
|
||||
plugin/pkg/auth/authorizer/webhook/webhook.go.
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
assignees:
|
||||
- mml
|
||||
- davidopp
|
||||
|
||||
---
|
||||
This guide is for anyone wishing to specify safety constraints on pods or anyone
|
||||
|
@ -59,7 +59,7 @@ itself. To attempt an eviction (perhaps more REST-precisely, to attempt to
|
|||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "policy/v1alpha1",
|
||||
"apiVersion": "policy/v1beta1",
|
||||
"kind": "Eviction",
|
||||
"metadata": {
|
||||
"name": "quux",
|
||||
|
|
|
@ -356,3 +356,5 @@ for more information.
|
|||
|
||||
- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/build-tools/kube-dns/README.md)
|
||||
|
||||
## What's next
|
||||
- [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
|
||||
|
|
|
@ -14,11 +14,11 @@ This guide explains how to set up cluster federation that lets us control multip
|
|||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes cluster.
|
||||
If not, then head over to the [getting started guides](/docs/getting-started-guides/) to bring up a cluster.
|
||||
If you need to start a new cluster, see the [getting started guides](/docs/getting-started-guides/) for instructions on bringing a cluster up.
|
||||
|
||||
This guide also assumes that you have a Kubernetes release
|
||||
[downloaded from here](/docs/getting-started-guides/binary_release/),
|
||||
extracted into a directory and all the commands in this guide are run from
|
||||
To use the commands in this guide, you must download a Kubernetes release from the
|
||||
[getting started binary releases](/docs/getting-started-guides/binary_release/) and
|
||||
extract it into a directory; all the commands in this guide are run from
|
||||
that directory.
|
||||
|
||||
```shell
|
||||
|
@ -26,8 +26,8 @@ $ curl -L https://github.com/kubernetes/kubernetes/releases/download/v1.4.0/kube
|
|||
$ cd kubernetes
|
||||
```
|
||||
|
||||
This guide also assumes that you have an installation of Docker running
|
||||
locally, i.e. on the machine where you run the commands described in this
|
||||
You must also have a Docker installation running
|
||||
locally--meaning on the machine where you run the commands described in this
|
||||
guide.
|
||||
|
||||
## Setting up a federation control plane
|
||||
|
@ -212,47 +212,81 @@ cluster1 Ready 3m
|
|||
|
||||
## Updating KubeDNS
|
||||
|
||||
Once the cluster is registered with the federation, you are all set to use it.
|
||||
But for the cluster to be able to route federation service requests, you need to restart
|
||||
KubeDNS and pass it a `--federations` flag which tells it about valid federation DNS hostnames.
|
||||
Format of the flag is like this:
|
||||
Once you've registered your cluster with the federation, you'll need to update KubeDNS so that your cluster can route federation service requests. The update method varies depending on your Kubernetes version; on Kubernetes 1.5 or later, you must pass the
|
||||
`--federations` flag to kube-dns via the kube-dns config map. In version 1.4 or earlier, you must set the `--federations` flag directly on the kube-dns replication controller (kube-dns-rc) in each cluster.
|
||||
|
||||
### Kubernetes 1.5+: Passing federations flag via config map to kube-dns
|
||||
|
||||
For kubernetes clusters of version 1.5+, you can pass the
|
||||
`--federations` flag to kube-dns via the kube-dns config map.
|
||||
The flag uses the following format:
|
||||
|
||||
```
|
||||
--federations=${FEDERATION_NAME}=${DNS_DOMAIN_NAME}
|
||||
```
|
||||
|
||||
To update KubeDNS with federations flag, you can edit the existing kubedns replication controller to
|
||||
include that flag in pod template spec and then delete the existing pod. Replication controller will
|
||||
recreate the pod with updated template.
|
||||
To pass this flag to KubeDNS, create a ConfigMap named `kube-dns` in
|
||||
the `kube-system` namespace. The ConfigMap should look like the following:
|
||||
|
||||
To find the name of existing kubedns replication controller, run
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: kube-dns
|
||||
namespace: kube-system
|
||||
data:
|
||||
federations: <federation-name>=<dns-domain-name>
|
||||
```
|
||||
|
||||
where `<federation-name>` should be replaced by the name you want to give to your
|
||||
federation, and
|
||||
`<dns-domain-name>` should be replaced by the domain name you want to use
|
||||
in your federation DNS.
|
||||
|
||||
You can find more details about config maps in general at
|
||||
http://kubernetes.io/docs/user-guide/configmap/.
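For example, assuming you saved the manifest above as `kube-dns-configmap.yaml` (a hypothetical filename), you could create it with:

```shell
$ kubectl create -f kube-dns-configmap.yaml
```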
|
||||
|
||||
### Kubernetes 1.4 and earlier: Setting federations flag on kube-dns-rc
|
||||
|
||||
If your cluster is running Kubernetes version 1.4 or earlier, you must restart
|
||||
KubeDNS and pass it a `--federations` flag, which tells it about valid federation DNS hostnames.
|
||||
The flag uses the following format:
|
||||
|
||||
```
|
||||
--federations=${FEDERATION_NAME}=${DNS_DOMAIN_NAME}
|
||||
```
|
||||
|
||||
To update KubeDNS with the `--federations` flag, you can edit the existing kube-dns replication controller to
|
||||
include that flag in the pod template spec, and then delete the existing pod. The replication controller then
|
||||
recreates the pod with the updated template.
|
||||
|
||||
To find the name of the existing kube-dns replication controller, run the following command:
|
||||
|
||||
```shell
|
||||
$ kubectl get rc --namespace=kube-system
|
||||
```
|
||||
|
||||
This will list all the replication controllers. Name of the kube-dns replication
|
||||
controller will look like `kube-dns-v18`. You can then edit it by running:
|
||||
You should see a list of all the replication controllers on the cluster. The kube-dns replication
|
||||
controller should have a name similar to `kube-dns-v18`. To edit the replication controller, specify it by name as follows:
|
||||
|
||||
```shell
|
||||
$ kubectl edit rc <rc-name> --namespace=kube-system
|
||||
```
|
||||
Add the `--federations` flag as args to kube-dns container in the YAML file that
|
||||
pops up after running the above command.
|
||||
In the resulting YAML file for the kube-dns replication controller, add the `--federations` flag as an argument to the kube-dns container.
|
||||
|
||||
To delete the existing kube dns pod, you can first find it by running:
|
||||
Then, you must delete the existing kube-dns pod. You can find the pod by running:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=kube-system
|
||||
```
|
||||
|
||||
And then delete it by running:
|
||||
And then delete the appropriate pod by running:
|
||||
|
||||
```shell
|
||||
$ kubectl delete pods <pod-name> --namespace=kube-system
|
||||
```
|
||||
|
||||
You are now all set to start using federation.
|
||||
Once you've completed the kube-dns configuration, your federation is ready for use.
|
||||
|
||||
## Turn down
|
||||
|
||||
|
|
|
@ -0,0 +1,195 @@
|
|||
---
|
||||
assignees:
|
||||
- madhusudancs
|
||||
|
||||
---
|
||||
Kubernetes version 1.5 includes a new command line tool called
|
||||
`kubefed` to help you administer your federated clusters.
|
||||
`kubefed` helps you to deploy a new Kubernetes cluster federation
|
||||
control plane, and to add clusters to or remove clusters from an
|
||||
existing federation control plane.
|
||||
|
||||
This guide explains how to administer a Kubernetes Cluster Federation
|
||||
using `kubefed`.
|
||||
|
||||
> Note: `kubefed` is an alpha feature in Kubernetes 1.5.
|
||||
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes cluster. Please
|
||||
see one of the [getting started](/docs/getting-started-guides/) guides
|
||||
for installation instructions for your platform.
|
||||
|
||||
|
||||
## Getting `kubefed`
|
||||
|
||||
Download the client tarball corresponding to Kubernetes version 1.5
|
||||
or later
|
||||
[from the release page](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md),
|
||||
extract the binaries in the tarball to one of the directories
|
||||
in your `$PATH` and set the executable permission on those binaries.
|
||||
|
||||
```shell
|
||||
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.0/kubernetes-client-linux-amd64.tar.gz
|
||||
tar -xzvf kubernetes-client-linux-amd64.tar.gz
|
||||
sudo cp kubernetes/client/bin/kubefed /usr/local/bin
|
||||
sudo chmod +x /usr/local/bin/kubefed
|
||||
sudo cp kubernetes/client/bin/kubectl /usr/local/bin
|
||||
sudo chmod +x /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
|
||||
## Choosing a host cluster
|
||||
|
||||
You'll need to choose one of your Kubernetes clusters to be the
|
||||
*host cluster*. The host cluster hosts the components that make up
|
||||
your federation control plane. Ensure that you have a `kubeconfig`
|
||||
entry in your local `kubeconfig` that corresponds to the host cluster.
|
||||
You can verify that you have the required `kubeconfig` entry by
|
||||
running:
|
||||
|
||||
```shell
|
||||
kubectl config get-contexts
|
||||
```
|
||||
|
||||
The output should contain an entry corresponding to your host cluster,
|
||||
similar to the following:
|
||||
|
||||
```
|
||||
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
|
||||
gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1
|
||||
```
|
||||
|
||||
|
||||
You'll need to provide the `kubeconfig` context (called name in the
|
||||
entry above) for your host cluster when you deploy your federation
|
||||
control plane.
|
||||
|
||||
|
||||
## Deploying a federation control plane
|
||||
|
||||
"To deploy a federation control plane on your host cluster, run
|
||||
`kubefed init` command. When you use `kubefed init`, you must provide
|
||||
the following:
|
||||
|
||||
* Federation name
|
||||
* `--host-cluster-context`, the `kubeconfig` context for the host cluster
|
||||
* `--dns-zone-name`, a domain name suffix for your federated services
|
||||
|
||||
The following example command deploys a federation control plane with
|
||||
the name `fellowship`, a host cluster context `rivendell`, and the
|
||||
domain suffix `example.com`:
|
||||
|
||||
```shell
|
||||
kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com"
|
||||
```
|
||||
|
||||
The domain suffix you specify in `--dns-zone-name` must be an existing
|
||||
domain that you control, and that is programmable by your DNS provider.
|
||||
|
||||
`kubefed init` sets up the federation control plane in the host
|
||||
cluster and also adds an entry for the federation API server in your
|
||||
local kubeconfig. Note that in the alpha release in Kubernetes 1.5,
|
||||
`kubefed init` does not automatically set the current context to the
|
||||
newly deployed federation. You can set the current context manually by
|
||||
running:
|
||||
|
||||
```shell
|
||||
kubectl config use-context fellowship
|
||||
```
|
||||
|
||||
where `fellowship` is the name of your federation.
|
||||
|
||||
|
||||
## Adding a cluster to a federation
|
||||
|
||||
Once you've deployed a federation control plane, you'll need to make
|
||||
that control plane aware of the clusters it should manage. You can add
|
||||
a cluster to your federation by using the `kubefed join` command.
|
||||
|
||||
To use `kubefed join`, you'll need to provide the name of the cluster
|
||||
you want to add to the federation, and the `--host-cluster-context`
|
||||
for the federation control plane's host cluster.
|
||||
|
||||
The following example command adds the cluster `gondor` to the
|
||||
federation with host cluster `rivendell`:
|
||||
|
||||
```
|
||||
kubefed join gondor --host-cluster-context=rivendell
|
||||
```
|
||||
|
||||
> Note: Kubernetes requires that you manually join clusters to a
|
||||
federation because the federation control plane manages only those
|
||||
clusters that it is responsible for managing. Adding a cluster tells
|
||||
the federation control plane that it is responsible for managing that
|
||||
cluster.
|
||||
|
||||
### Naming rules and customization
|
||||
|
||||
The cluster name you supply to `kubefed join` must be a valid RFC 1035
|
||||
label.
|
||||
|
||||
Furthermore, the federation control plane requires credentials for the
|
||||
joined clusters to operate on them. These credentials are obtained
|
||||
from the local kubeconfig. `kubefed join` uses the cluster name
|
||||
specified as the argument to look for the cluster's context in the
|
||||
local kubeconfig. If it fails to find a matching context, it exits
|
||||
with an error.
|
||||
|
||||
This might cause issues in cases where context names for each cluster
|
||||
in the federation don't follow RFC 1035 label naming rules. In such
|
||||
cases, you can specify a cluster name that conforms to the RFC 1035
|
||||
label naming rules and specify the cluster context using the
|
||||
`--cluster-context` flag. For example, if the context of the cluster you
|
||||
are joining is `gondor_needs-no_king`, then you can
|
||||
join the cluster by running:
|
||||
|
||||
```shell
|
||||
kubefed join gondor --host-cluster-context=rivendell --cluster-context=gondor_needs-no_king
|
||||
```
|
||||
|
||||
#### Secret name
|
||||
|
||||
Cluster credentials required by the federation control plane as
|
||||
described above are stored as a secret in the host cluster. The name
|
||||
of the secret is also derived from the cluster name.
|
||||
|
||||
However, the name of a secret object in Kubernetes should conform
|
||||
to the subdomain name specification described in RFC 1123. If this
|
||||
isn't the case, you can pass the secret name to `kubefed join` using the
|
||||
`--secret-name` flag. For example, if the cluster name is `noldor` and
|
||||
the secret name is `11kingdom`, you can join the cluster by
|
||||
running:
|
||||
|
||||
```shell
|
||||
kubefed join noldor --host-cluster-context=rivendell --secret-name=11kingdom
|
||||
```
|
||||
|
||||
## Removing a cluster from a federation
|
||||
|
||||
To remove a cluster from a federation, run the `kubefed unjoin`
|
||||
command with the cluster name and the federation's
|
||||
`--host-cluster-context`:
|
||||
|
||||
```
|
||||
kubefed unjoin gondor --host-cluster-context=rivendell
|
||||
```
|
||||
|
||||
|
||||
## Turning down the federation control plane
|
||||
|
||||
Proper cleanup of the federation control plane is not fully implemented in
|
||||
this alpha release of `kubefed`. However, for the time being, deleting
|
||||
the federation system namespace should remove all the resources except
|
||||
the persistent storage volume dynamically provisioned for the
|
||||
federation control plane's etcd. You can delete the federation
|
||||
namespace by running the following command:
|
||||
|
||||
```
|
||||
$ kubectl delete ns federation-system
|
||||
```
|
|
@ -0,0 +1,160 @@
|
|||
---
|
||||
assignees:
|
||||
- jszczepkowski
|
||||
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Introduction
|
||||
|
||||
Kubernetes version 1.5 adds alpha support for replicating Kubernetes masters in `kube-up` or `kube-down` scripts for Google Compute Engine.
|
||||
This document describes how to use kube-up/down scripts to manage highly available (HA) masters and how HA masters are implemented for use with GCE.
|
||||
|
||||
## Starting an HA-compatible cluster
|
||||
|
||||
To create a new HA-compatible cluster, you must set the following flags in your `kube-up` script:
|
||||
|
||||
* `MULTIZONE=true` - to prevent removal of master replica kubelets from zones other than the server's default zone.
|
||||
Required if you want to run master replicas in different zones, which is recommended.
|
||||
|
||||
* `ENABLE_ETCD_QUORUM_READS=true` - to ensure that reads from all API servers will return the most up-to-date data.
|
||||
If true, reads will be directed to the leader etcd replica.
|
||||
Setting this value to true is optional: reads will be more reliable but will also be slower.
|
||||
|
||||
Optionally, you can specify a GCE zone where the first master replica is to be created.
|
||||
Set the following flag:
|
||||
|
||||
* `KUBE_GCE_ZONE=zone` - zone where the first master replica will run.
|
||||
|
||||
The following sample command sets up an HA-compatible cluster in the GCE zone europe-west1-b:
|
||||
|
||||
```shell
|
||||
$ MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Note that the commands above create a cluster with one master;
|
||||
however, you can add new master replicas to the cluster with subsequent commands.
|
||||
|
||||
## Adding a new master replica
|
||||
|
||||
After you have created an HA-compatible cluster, you can add master replicas to it.
|
||||
You add master replicas by using a `kube-up` script with the following flags:
|
||||
|
||||
* `KUBE_REPLICATE_EXISTING_MASTER=true` - to create a replica of an existing
|
||||
master.
|
||||
|
||||
* `KUBE_GCE_ZONE=zone` - zone where the master replica will run.
|
||||
Must be in the same region as other replicas' zones.
|
||||
|
||||
You don't need to set the `MULTIZONE` or `ENABLE_ETCD_QUORUM_READS` flags,
|
||||
as those are inherited from when you started your HA-compatible cluster.
|
||||
|
||||
The following sample command replicates the master on an existing HA-compatible cluster:
|
||||
|
||||
```shell
|
||||
$ KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
## Removing a master replica
|
||||
|
||||
You can remove a master replica from an HA cluster by using a `kube-down` script with the following flags:
|
||||
|
||||
* `KUBE_DELETE_NODES=false` - to prevent deletion of kubelets.
|
||||
|
||||
* `KUBE_GCE_ZONE=zone` - the zone from which the master replica will be removed.
|
||||
|
||||
* `KUBE_REPLICA_NAME=replica_name` - (optional) the name of the master replica to remove.
|
||||
If empty, any replica in the given zone will be removed.
|
||||
|
||||
The following sample command removes a master replica from an existing HA cluster:
|
||||
|
||||
```shell
|
||||
$ KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
## Handling master replica failures
|
||||
|
||||
If one of the master replicas in your HA cluster fails,
|
||||
the best practice is to remove the replica from your cluster and add a new replica in the same zone.
|
||||
The following sample commands demonstrate this process:
|
||||
|
||||
1. Remove the broken replica:
|
||||
|
||||
```shell
|
||||
$ KUBE_DELETE_NODES=false KUBE_GCE_ZONE=replica_zone KUBE_REPLICA_NAME=replica_name ./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
2. Add a new replica in place of the old one:
|
||||
|
||||
```shell
|
||||
$ KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
## Best practices for replicating masters for HA clusters
|
||||
|
||||
* Try to place master replicas in different zones. During a zone failure, all masters placed in that zone will fail.
|
||||
To survive a zone failure, also place nodes in multiple zones
|
||||
(see [multiple-zones](http://kubernetes.io/docs/admin/multiple-zones/) for details).
|
||||
|
||||
* Do not use a cluster with two master replicas. Consensus on a two-replica cluster requires both replicas to be running when changing persistent state.
|
||||
As a result, both replicas are needed, and a failure of either replica turns the cluster into a majority-failure state.
|
||||
A two-replica cluster is thus inferior, in terms of HA, to a single-replica cluster.
|
||||
|
||||
* When you add a master replica, cluster state (etcd) is copied to a new instance.
|
||||
If the cluster is large, it may take a long time to duplicate its state.
|
||||
This operation may be sped up by migrating the etcd data directory, as described [here](https://coreos.com/etcd/docs/latest/admin_guide.html#member-migration)
|
||||
(we are considering adding support for etcd data dir migration in the future).
|
||||
|
||||
## Implementation notes
|
||||
|
||||
![](ha-master-gce.png)
|
||||
|
||||
### Overview
|
||||
|
||||
Each of the master replicas will run the following components in the following mode:
|
||||
|
||||
* etcd instance: all instances will be clustered together using consensus;
|
||||
|
||||
* API server: each server will talk to the local etcd - all API servers in the cluster will be available;
|
||||
|
||||
* controllers, scheduler, and cluster auto-scaler: will use a lease mechanism - only one instance of each of them will be active in the cluster;
|
||||
|
||||
* add-on manager: each manager will work independently trying to keep add-ons in sync.
|
||||
|
||||
In addition, there will be a load balancer in front of API servers that will route external and internal traffic to them.
|
||||
|
||||
### Load balancing
|
||||
|
||||
When starting the second master replica, a load balancer containing the two replicas will be created
|
||||
and the IP address of the first replica will be promoted to the IP address of the load balancer.
|
||||
Similarly, after removal of the penultimate master replica, the load balancer will be removed and its IP address will be assigned to the last remaining replica.
|
||||
Please note that creation and removal of the load balancer are complex operations, and it may take some time (~20 minutes) for them to propagate.
|
||||
|
||||
### Master service & kubelets
|
||||
|
||||
Instead of trying to keep an up-to-date list of Kubernetes apiservers in the Kubernetes service,
|
||||
the system directs all traffic to the external IP:
|
||||
|
||||
* in a single-master cluster the IP points to the single master,
|
||||
|
||||
* in a multi-master cluster the IP points to the load balancer in front of the masters.
|
||||
|
||||
Similarly, the external IP will be used by kubelets to communicate with the master.
|
||||
|
||||
### Master certificates
|
||||
|
||||
Kubernetes generates Master TLS certificates for the external public IP and local IP for each replica.
|
||||
There are no certificates for the ephemeral public IP for replicas;
|
||||
to access a replica via its ephemeral public IP, you must skip TLS verification.
|
||||
|
||||
### Clustering etcd
|
||||
|
||||
To allow etcd clustering, the ports needed for communication between etcd instances will be opened (for inside-cluster communication).
|
||||
To make such a deployment secure, communication between etcd instances is authorized using SSL.
|
||||
|
||||
## Additional reading
|
||||
|
||||
[Automated HA master deployment - design doc](https://github.com/kubernetes/kubernetes/blob/master/docs/design/ha_master.md)
|
||||
|
After Width: | Height: | Size: 34 KiB |
|
@ -84,3 +84,8 @@ project](/docs/admin/salt).
|
|||
* **Sysctls** [sysctls](/docs/admin/sysctls.md)
|
||||
|
||||
* **Audit** [audit](/docs/admin/audit)
|
||||
|
||||
* **Securing the kubelet**
|
||||
* [Master-Node communication](/docs/admin/master-node-communication/)
|
||||
* [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/)
|
||||
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
|
||||
|
|
|
@ -0,0 +1,81 @@
|
|||
---
|
||||
assignees:
|
||||
- liggitt
|
||||
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Overview
|
||||
|
||||
A kubelet's HTTPS endpoint exposes APIs which give access to data of varying sensitivity,
|
||||
and allow you to perform operations with varying levels of power on the node and within containers.
|
||||
|
||||
This document describes how to authenticate and authorize access to the kubelet's HTTPS endpoint.
|
||||
|
||||
## Kubelet authentication
|
||||
|
||||
By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured
|
||||
authentication methods are treated as anonymous requests, and given a username of `system:anonymous`
|
||||
and a group of `system:unauthenticated`.
|
||||
|
||||
To disable anonymous access and send `401 Unauthorized` responses to unauthenticated requests:
|
||||
* start the kubelet with the `--anonymous-auth=false` flag
|
||||
|
||||
To enable X509 client certificate authentication to the kubelet's HTTPS endpoint:
|
||||
* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with
|
||||
* start the apiserver with `--kubelet-client-certificate` and `--kubelet-client-key` flags
|
||||
* see the [apiserver authentication documentation](/docs/admin/authentication/#x509-client-certs) for more details
|
||||
|
||||
To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint:
|
||||
* ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server
|
||||
* start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
|
||||
* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens
|
||||
|
||||
## Kubelet authorization
|
||||
|
||||
Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests.
|
||||
|
||||
There are many possible reasons to subdivide access to the kubelet API:
|
||||
* anonymous auth is enabled, but anonymous users' ability to call the kubelet API should be limited
|
||||
* bearer token auth is enabled, but arbitrary API users' (like service accounts) ability to call the kubelet API should be limited
|
||||
* client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API
|
||||
|
||||
To subdivide access to the kubelet API, delegate authorization to the API server:
|
||||
* ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server
|
||||
* start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
|
||||
* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized
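As a rough sketch (not a definitive configuration), a kubelet command line combining the authentication and authorization flags described above might look like the following; the file paths are hypothetical:

```shell
# Hedged example: flag values and paths are illustrative only.
kubelet \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authentication-token-webhook \
  --authorization-mode=Webhook \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --require-kubeconfig
```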
|
||||
|
||||
The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver.
|
||||
|
||||
The verb is determined from the incoming request's HTTP verb:
|
||||
|
||||
HTTP verb | request verb
|
||||
----------|---------------
|
||||
POST | create
|
||||
GET, HEAD | get
|
||||
PUT | update
|
||||
PATCH | patch
|
||||
DELETE | delete
|
||||
|
||||
The resource and subresource are determined from the incoming request's path:
|
||||
|
||||
Kubelet API | resource | subresource
|
||||
-------------|----------|------------
|
||||
/stats/* | nodes | stats
|
||||
/metrics/* | nodes | metrics
|
||||
/logs/* | nodes | log
|
||||
/spec/* | nodes | spec
|
||||
*all others* | nodes | proxy
|
||||
|
||||
The namespace and API group attributes are always an empty string, and
|
||||
the resource name is always the name of the kubelet's `Node` API object.
|
||||
|
||||
When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key`
|
||||
flags passed to the apiserver is authorized for the following attributes:
|
||||
* verb=*, resource=nodes, subresource=proxy
|
||||
* verb=*, resource=nodes, subresource=stats
|
||||
* verb=*, resource=nodes, subresource=log
|
||||
* verb=*, resource=nodes, subresource=spec
|
||||
* verb=*, resource=nodes, subresource=metrics
|
|
@ -0,0 +1,96 @@
|
|||
---
|
||||
assignees:
|
||||
- mikedanese
|
||||
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes how to set up TLS client certificate bootstrapping for kubelets.
|
||||
Kubernetes 1.4 introduces an experimental API for requesting certificates from a cluster-level
|
||||
Certificate Authority (CA). The first supported use of this API is the provisioning of TLS client
|
||||
certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439)
|
||||
and progress on the feature is being tracked as [feature #43](https://github.com/kubernetes/features/issues/43).
|
||||
|
||||
## apiserver configuration
|
||||
|
||||
You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet bootstrap-specific group.
|
||||
This group will later be used in the controller-manager configuration to scope approvals in the default approval
|
||||
controller. As this feature matures, you should ensure tokens are bound to an RBAC policy which limits requests
|
||||
using the bootstrap token to only be able to make requests related to certificate provisioning. When RBAC policy
|
||||
is in place, scoping the tokens to a group will allow great flexibility (e.g. you could disable a particular
|
||||
bootstrap group's access when you are done provisioning the nodes).
|
||||
|
||||
### Token auth file
|
||||
Tokens are arbitrary but should represent at least 128 bits of entropy derived from a secure random number
|
||||
generator (such as /dev/urandom on most modern systems). There are multiple ways you can generate a token. For example:
|
||||
|
||||
`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
|
||||
|
||||
will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`
|
||||
|
||||
The token file will look like the following example, where the first three values can be anything and the quoted group
|
||||
name should be as depicted:
|
||||
|
||||
```
|
||||
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
|
||||
```
|
||||
|
||||
Add the `--token-auth-file=FILENAME` flag to the apiserver command to enable the token file.
|
||||
See docs at http://kubernetes.io/docs/admin/authentication/#static-token-file for further details.
|
||||
|
||||
### Client certificate CA bundle
|
||||
|
||||
Add the `--client-ca-file=FILENAME` flag to the apiserver command to enable client certificate authentication,
|
||||
referencing a certificate authority bundle containing the signing certificate.
|
||||
|
||||
## controller-manager configuration
|
||||
The API for requesting certificates adds a certificate-issuing control loop to the Kubernetes controller manager (KCM). This takes the form of a
|
||||
[cfssl](https://blog.cloudflare.com/introducing-cfssl/) local signer using assets on disk.
|
||||
Currently, all certificates issued have one year validity and a default set of key usages.
|
||||
|
||||
### Signing assets
|
||||
You must provide a Certificate Authority in order to provide the cryptographic materials necessary to issue certificates.
|
||||
This CA should be trusted by the apiserver for authentication with the `--client-ca-file=SOMEFILE` flag. The management
|
||||
of the CA is beyond the scope of this document but it is recommended that you generate a dedicated CA for Kubernetes.
|
||||
Both certificate and key are assumed to be PEM-encoded.
|
||||
|
||||
The new controller-manager flags are:
|
||||
```
|
||||
--cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key"
|
||||
```
|
||||
|
||||
### Auto-approval
|
||||
To ease deployment and testing, the alpha version of the certificate request API includes a flag to approve all certificate
|
||||
requests made by users in a certain group. The intended use of this is to whitelist only the group corresponding to the bootstrap
|
||||
token in the token file above. Use of this flag circumvents the "approval" process described below and is not recommended
|
||||
for production use.
|
||||
|
||||
The flag is:
|
||||
```
|
||||
--insecure-experimental-approve-all-kubelet-csrs-for-group="system:kubelet-bootstrap"
|
||||
```
|
||||
|
||||
## kubelet configuration
|
||||
To request a client cert from the certificate request API, the kubelet needs a path to a kubeconfig file that contains the
|
||||
bootstrap auth token. If the file specified by `--kubeconfig` does not exist, the bootstrap kubeconfig is used to request a
|
||||
client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate
|
||||
is written to the path specified by `--kubeconfig`. The certificate and key files will be stored in the directory pointed
|
||||
to by `--cert-dir`. The new flag is:
|
||||
|
||||
```
|
||||
--experimental-bootstrap-kubeconfig="/path/to/bootstrap/kubeconfig"
|
||||
```
|
||||
|
||||
## kubectl approval
|
||||
The signing controller does not immediately sign all certificate requests. Instead, it waits until they have been flagged with an
|
||||
"Approved" status by an appropriately-privileged user. This is intended to eventually be an automated process handled by an external
|
||||
approval controller, but for the alpha version of the API it can be done manually by a cluster administrator using kubectl.
|
||||
An administrator can list CSRs with `kubectl get csr` and describe one in detail with `kubectl describe csr <name>`. There are
|
||||
[currently no direct approve/deny commands](https://github.com/kubernetes/kubernetes/issues/30163) so an approver will need to update
|
||||
the Status field directly. A rough example of how to do this in bash which should only be used until the porcelain merges is available
|
||||
at [https://github.com/gtank/csrctl](https://github.com/gtank/csrctl).
|
||||
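For example (the CSR name and output shape are illustrative only):

```shell
# List pending certificate signing requests.
kubectl get csr

# Inspect a single request in detail.
kubectl describe csr csr-8r2lk
```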
|
|
@ -2,13 +2,14 @@
|
|||
assignees:
|
||||
- dchen1107
|
||||
- roberthbailey
|
||||
- liggitt
|
||||
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Summary
|
||||
## Overview
|
||||
|
||||
This document catalogs the communication paths between the master (really the
|
||||
apiserver) and the Kubernetes cluster. The intent is to allow users to
|
||||
|
@ -22,14 +23,21 @@ All communication paths from the cluster to the master terminate at the
|
|||
apiserver (none of the other master components are designed to expose remote
|
||||
services). In a typical deployment, the apiserver is configured to listen for
|
||||
remote connections on a secure HTTPS port (443) with one or more forms of
|
||||
client [authentication](/docs/admin/authentication/) enabled.
|
||||
client [authentication](/docs/admin/authentication/) enabled. One or more forms
|
||||
of [authorization](/docs/admin/authorization/) should be enabled, especially
|
||||
if [anonymous requests](/docs/admin/authentication/#anonymous-requests) or
|
||||
[service account tokens](/docs/admin/authentication/#service-account-tokens)
|
||||
are allowed.
|
||||
|
||||
Nodes should be provisioned with the public root certificate for the cluster
|
||||
such that they can connect securely to the apiserver along with valid client
|
||||
credentials. For example, on a default GCE deployment, the client credentials
|
||||
provided to the kubelet are in the form of a client certificate. Pods that
|
||||
wish to connect to the apiserver can do so securely by leveraging a service
|
||||
account so that Kubernetes will automatically inject the public root
|
||||
provided to the kubelet are in the form of a client certificate. See
|
||||
[kubelet TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/) for
|
||||
automated provisioning of kubelet client certificates.
|
||||
|
||||
Pods that wish to connect to the apiserver can do so securely by leveraging a
|
||||
service account so that Kubernetes will automatically inject the public root
|
||||
certificate and a valid bearer token into the pod when it is instantiated.
|
||||
The `kubernetes` service (in all namespaces) is configured with a virtual IP
|
||||
address that is redirected (via kube-proxy) to the HTTPS endpoint on the
|
||||
|
@ -54,16 +62,29 @@ cluster. The first is from the apiserver to the kubelet process which runs on
|
|||
each node in the cluster. The second is from the apiserver to any node, pod,
|
||||
or service through the apiserver's proxy functionality.
|
||||
|
||||
### apiserver -> kubelet
|
||||
|
||||
The connections from the apiserver to the kubelet are used for fetching logs
|
||||
for pods, attaching (through kubectl) to running pods, and using the kubelet's
|
||||
port-forwarding functionality. These connections terminate at the kubelet's
|
||||
HTTPS endpoint, which is typically using a self-signed certificate, and
|
||||
ignore the certificate presented by the kubelet (although you can override this
|
||||
behavior by specifying the `--kubelet-certificate-authority`,
|
||||
`--kubelet-client-certificate`, and `--kubelet-client-key` flags when starting
|
||||
the cluster apiserver). By default, these connections **are not currently safe**
|
||||
to run over untrusted and/or public networks as they are subject to
|
||||
man-in-the-middle attacks.
|
||||
port-forwarding functionality. These connections terminate at the kubelet's
|
||||
HTTPS endpoint.
|
||||
|
||||
By default, the apiserver does not verify the kubelet's serving certificate,
|
||||
which makes the connection subject to man-in-the-middle attacks, and
|
||||
**unsafe** to run over untrusted and/or public networks.
|
||||
|
||||
To verify this connection, use the `--kubelet-certificate-authority` flag to
|
||||
provide the apiserver with a root certificates bundle to use to verify the
|
||||
kubelet's serving certificate.
|
||||
|
||||
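A sketch of the relevant apiserver flags (paths are placeholders) is:

```shell
# Verify the kubelet's serving certificate and present a client identity to it.
kube-apiserver \
  --kubelet-certificate-authority=/etc/path/to/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/path/to/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/path/to/apiserver-kubelet-client.key
```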
If that is not possible, use [SSH tunneling](/docs/admin/master-node-communication/#ssh-tunnels)
|
||||
between the apiserver and kubelet if required to avoid connecting over an
|
||||
untrusted or public network.
|
||||
|
||||
Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/)
|
||||
should be enabled to secure the kubelet API.
|
||||
|
||||
### apiserver -> nodes, pods, and services
|
||||
|
||||
The connections from the apiserver to a node, pod, or service default to plain
|
||||
HTTP connections and are therefore neither authenticated nor encrypted. They
|
||||
|
@ -83,83 +104,3 @@ cluster (connecting to the ssh server listening on port 22) and passes all
|
|||
traffic destined for a kubelet, node, pod, or service through the tunnel.
|
||||
This tunnel ensures that the traffic is not exposed outside of the private
|
||||
GCE network in which the cluster is running.
|
||||
|
||||
### Kubelet TLS Bootstrap
|
||||
|
||||
Kubernetes 1.4 introduces an experimental API for requesting certificates from a cluster-level
|
||||
Certificate Authority (CA). The first supported use of this API is the provisioning of TLS client
|
||||
certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439)
|
||||
and progress on the feature is being tracked as [feature #43](https://github.com/kubernetes/features/issues/43).
|
||||
|
||||
##### apiserver configuration
|
||||
You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet bootstrap-specific group.
|
||||
This group will later be used in the controller-manager configuration to scope approvals in the default approval
|
||||
controller. As this feature matures, you should ensure tokens are bound to an RBAC policy which limits requests
|
||||
using the bootstrap token to only be able to make requests related to certificate provisioning. When RBAC policy
|
||||
is in place, scoping the tokens to a group will allow great flexibility (e.g. you could disable a particular
|
||||
bootstrap group's access when you are done provisioning the nodes).
|
||||
|
||||
##### Token auth file
|
||||
Tokens are arbitrary but should represent at least 128 bits of entropy derived from a secure random number
|
||||
generator (such as /dev/urandom on most modern systems). There are multiple ways you can generate a token. For example:
|
||||
|
||||
`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
|
||||
|
||||
will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`
|
||||
|
||||
The token file will look like the following example, where the first three values can be anything and the quoted group
|
||||
name should be as depicted:
|
||||
|
||||
```
|
||||
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
|
||||
```
|
||||
|
||||
Add the `--token-auth-file=FILENAME` flag to the apiserver command to enable the token file.
|
||||
See docs at http://kubernetes.io/docs/admin/authentication/#static-token-file for further details.
|
||||
|
||||
#### controller-manager configuration
|
||||
The API for requesting certificates adds a certificate-issuing control loop to the KCM. This takes the form of a
|
||||
[cfssl](https://blog.cloudflare.com/introducing-cfssl/) local signer using assets on disk.
|
||||
Currently, all certificates issued have one year validity and a default set of key usages.
|
||||
|
||||
##### Signing assets
|
||||
You must provide a Certificate Authority in order to provide the cryptographic materials necessary to issue certificates.
|
||||
This CA should be trusted by the apiserver for authentication with the `--client-ca-file=SOMEFILE` flag. The management
|
||||
of the CA is beyond the scope of this document but it is recommended that you generate a dedicated CA for Kubernetes.
|
||||
Both certificate and key are assumed to be PEM-encoded.
|
||||
|
||||
The new controller-manager flags are:
|
||||
```
|
||||
--cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key"
|
||||
```
|
||||
|
||||
##### Auto-approval
|
||||
To ease deployment and testing, the alpha version of the certificate request API includes a flag to approve all certificate
|
||||
requests made by users in a certain group. The intended use of this is to whitelist only the group corresponding to the bootstrap
|
||||
token in the token file above. Use of this flag circumvents the "approval" process described below and is not recommended
|
||||
for production use.
|
||||
|
||||
The flag is:
|
||||
```
|
||||
--insecure-experimental-approve-all-kubelet-csrs-for-group="system:kubelet-bootstrap"
|
||||
```
|
||||
|
||||
#### kubelet configuration
|
||||
To request a client certificate from the certificate request API, the kubelet needs a path to a kubeconfig file that contains the
|
||||
bootstrap auth token. If the file specified by `--kubeconfig` does not exist, the bootstrap kubeconfig is used to request a
|
||||
client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate
|
||||
is written to the path specified by `--kubeconfig`. The certificate and key files will be stored in the directory pointed to
|
||||
by `--cert-dir`. The new flag is:
|
||||
|
||||
```
|
||||
--experimental-bootstrap-kubeconfig="/path/to/bootstrap/kubeconfig"
|
||||
```
|
||||
|
||||
#### kubectl approval
|
||||
The signing controller does not immediately sign all certificate requests. Instead, it waits until they have been flagged with an
|
||||
"Approved" status by an appropriately-privileged user. This is intended to eventually be an automated process handled by an external
|
||||
approval controller, but for the alpha version of the API it can be done manually by a cluster administrator using kubectl.
|
||||
An administrator can list CSRs with `kubectl get csr` and describe one in detail with `kubectl describe csr <name>`. There are
|
||||
[currently no direct approve/deny commands](https://github.com/kubernetes/kubernetes/issues/30163) so an approver will need to update
|
||||
the Status field directly. A rough example of how to do this in bash which should only be used until the porcelain merges is available
|
||||
at https://github.com/gtank/csrctl.
|
||||
|
|
|
@ -169,6 +169,12 @@ Follow the "With Linux Bridge devices" section of [this very nice
|
|||
tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
|
||||
Lars Kellogg-Stedman.
|
||||
|
||||
### Nuage Networks VCS (Virtualized Cloud Services)
|
||||
|
||||
[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature-rich SDN Controller built on open standards.
|
||||
|
||||
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage’s policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
|
||||
|
||||
### OpenVSwitch
|
||||
|
||||
[OpenVSwitch](/docs/admin/ovs-networking) is a somewhat more mature but also
|
||||
|
|
|
@ -9,54 +9,52 @@ assignees:
|
|||
|
||||
## Node Conformance Test
|
||||
|
||||
*Node conformance test* is a test framework validating whether a node meets the
|
||||
minimum requirement of Kubernetes with a set of system verification and
|
||||
functionality test. A node which passes the tests is qualified to join a
|
||||
Kubernetes cluster.
|
||||
*Node conformance test* is a containerized test framework that provides a system
|
||||
verification and functionality test for a node. The test validates whether the
|
||||
node meets the minimum requirements for Kubernetes; a node that passes the test
|
||||
is qualified to join a Kubernetes cluster.
|
||||
|
||||
## Limitations
|
||||
|
||||
There are following limitations in the current implementation of node
|
||||
conformance test. They'll be improved in future version.
|
||||
In Kubernetes version 1.5, node conformance test has the following limitations:
|
||||
|
||||
* Node conformance test only supports Docker as the container runtime.
|
||||
* Node conformance test doesn't validate network related system configurations
|
||||
and functionalities.
|
||||
|
||||
## Prerequisite
|
||||
## Node Prerequisite
|
||||
|
||||
Node conformance test is used to test whether a node is ready to join a
|
||||
Kubernetes cluster, so the prerequisite is the same with a standard Kubernetes
|
||||
node. At least, the node should have properly installed:
|
||||
To run node conformance test, a node must satisfy the same prerequisites as a
|
||||
standard Kubernetes node. At a minimum, the node should have the following
|
||||
daemons installed:
|
||||
|
||||
* Container Runtime (Docker)
|
||||
* Kubelet
|
||||
|
||||
Node conformance test validates kernel configurations. If the kernel module
|
||||
`configs` is built as a module in your environment, it must be loaded before the
|
||||
test. (See [Caveats #3](#caveats) for more information)
|
||||
## Running Node Conformance Test
|
||||
|
||||
## Usage
|
||||
To run the node conformance test, perform the following steps:
|
||||
|
||||
### Run Node Conformance Test
|
||||
1. Point your Kubelet to localhost `--api-servers="http://localhost:8080"`,
|
||||
because the test framework starts a local master to test Kubelet. There are some
|
||||
other Kubelet flags you may care about:
|
||||
* `--pod-cidr`: If you are using `kubenet`, you should specify an arbitrary CIDR
|
||||
to Kubelet, for example `--pod-cidr=10.180.0.0/24`.
|
||||
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should
|
||||
remove the flag to run the test.
|
||||
|
||||
* **Step 1:** Point your Kubelet to localhost `--api-servers="http://localhost:8080"`,
|
||||
because the test framework starts a local master to test Kubelet.
|
||||
|
||||
* **Step 2:** Run the node conformance test with command:
|
||||
2. Run the node conformance test with command:
|
||||
|
||||
```shell
|
||||
# $CONFIG_DIR is the pod manifest path of your kubelet.
|
||||
# $CONFIG_DIR is the pod manifest path of your Kubelet.
|
||||
# $LOG_DIR is the test output path.
|
||||
sudo docker run -it --rm --privileged --net=host \
|
||||
-v /:/rootfs:ro -v /var/run:/var/run \
|
||||
-v $CONFIG_DIR:/etc/manifest -v $LOG_DIR:/var/result \
|
||||
gcr.io/google_containers/node-test-amd64:v0.1
|
||||
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
|
||||
gcr.io/google_containers/node-test:0.2
|
||||
```
|
||||
|
||||
### Run Node Conformance Test for Other Architectures
|
||||
## Running Node Conformance Test for Other Architectures
|
||||
|
||||
We also build node conformance test docker images for other architectures:
|
||||
Kubernetes also provides node conformance test docker images for other
|
||||
architectures:
|
||||
|
||||
Arch | Image |
|
||||
--------|:-----------------:|
|
||||
|
@ -64,25 +62,16 @@ We also build node conformance test docker images for other architectures:
|
|||
arm | node-test-arm |
|
||||
arm64 | node-test-arm64 |
|
||||
|
||||
### Run Selected Test
|
||||
|
||||
In fact, Node conformance test is a containerized version of [node e2e
|
||||
test](https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/e2e-node-tests.md).
|
||||
By default, it runs all conformance test.
|
||||
|
||||
Theoretically, you can run any node e2e test if you configure the container and
|
||||
mount required volumes properly. But **it is strongly recommended to only run conformance
|
||||
test**, because the non-conformance test needs much more complex framework configuration.
|
||||
## Running Selected Test
|
||||
|
||||
To run specific tests, overwrite the environment variable `FOCUS` with the
|
||||
regular expression of tests you want to run.
|
||||
|
||||
```shell
|
||||
sudo docker run -it --rm --privileged --net=host \
|
||||
-v /:/rootfs:ro -v /var/run:/var/run \
|
||||
-v $CONFIG_DIR:/etc/manifest -v $LOG_DIR:/var/result \
|
||||
-v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
|
||||
-e FOCUS=MirrorPod \ # Only run MirrorPod test
|
||||
gcr.io/google_containers/node-test-amd64:v0.1
|
||||
gcr.io/google_containers/node-test:0.2
|
||||
```
|
||||
|
||||
To skip specific tests, overwrite the environment variable `SKIP` with the
|
||||
|
@ -90,25 +79,22 @@ regular expression of tests you want to skip.
|
|||
|
||||
```shell
|
||||
sudo docker run -it --rm --privileged --net=host \
|
||||
-v /:/rootfs:ro -v /var/run:/var/run \
|
||||
-v $CONFIG_DIR:/etc/manifest -v $LOG_DIR:/var/result \
|
||||
-e SKIP=MirrorPod \ # Run all conformance test and skip MirrorPod test
|
||||
gcr.io/google_containers/node-test-amd64:v0.1
|
||||
-v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
|
||||
-e SKIP=MirrorPod \ # Run all conformance tests but skip MirrorPod test
|
||||
gcr.io/google_containers/node-test:0.2
|
||||
```
|
||||
|
||||
### Caveats
|
||||
Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/devel/e2e-node-tests.md).
|
||||
By default, it runs all conformance tests.
|
||||
|
||||
* The test will leave some docker images on the node, including the node
|
||||
conformance test image and images of containers used in the functionality
|
||||
Theoretically, you can run any node e2e test if you configure the container and
|
||||
mount required volumes properly. But **it is strongly recommended to only run conformance
|
||||
tests**, because running non-conformance tests requires much more complex configuration.
|
||||
|
||||
## Caveats
|
||||
|
||||
* The test leaves some docker images on the node, including the node conformance
|
||||
test image and images of containers used in the functionality
|
||||
test.
|
||||
* The test will leave dead containers on the node, these containers are created
|
||||
* The test leaves dead containers on the node. These containers are created
|
||||
during the functionality test.
|
||||
* Node conformance test validates kernel configuration. However, in some OS
|
||||
distributions the kernel module `configs` may not be loaded by default, and you will get
|
||||
the error `no config path in [POSSIBLE KERNEL CONFIG FILE PATHS] is
|
||||
available`. In that case, do one of the following:
|
||||
* Manually load/unload `configs` kernel module: run `sudo modprobe configs` to
|
||||
load the kernel module, and `sudo modprobe -r configs` to unload it after the test.
|
||||
* Mount `modprobe` into the container: Add option `-v /bin/kmod:/bin/kmod
|
||||
-v /sbin/modprobe:/sbin/modprobe -v /lib/modules:/lib/modules` when starting
|
||||
the test container.
|
||||
|
|
|
@ -54,10 +54,9 @@ The node condition is represented as a JSON object. For example, the following r
|
|||
]
|
||||
```
|
||||
|
||||
If the Status of the Ready condition is Unknown or False for more than five
|
||||
minutes, then all of the pods on the node are terminated by the node
|
||||
controller. (The timeout length is configurable by the `--pod-eviction-timeout`
|
||||
parameter on the controller manager.)
|
||||
If the Status of the Ready condition is "Unknown" or "False" for longer than the `pod-eviction-timeout`, an argument passed to the [kube-controller-manager](docs/admin/kube-controller-manager/), all of the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on it. The decision to delete the pods cannot be communicated to the kubelet until it re-establishes communication with the apiserver. In the meantime, the pods which are scheduled for deletion may continue to run on the partitioned node.
|
||||
|
||||
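For example, to raise the eviction timeout and inspect a node's conditions (a sketch; how the flag is passed depends on how you run the controller manager, and `<node-name>` is a placeholder):

```shell
# Raise the eviction timeout to ten minutes (add to your existing controller-manager flags).
kube-controller-manager --pod-eviction-timeout=10m

# Inspect a node's conditions, including Ready.
kubectl describe node <node-name>
```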
In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/user-guide/pods/#force-deletion-of-pods) these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is confirmed that they have stopped running in the cluster. One can see these pods which may be running on an unreachable node as being in the "Terminating" or "Unknown" states. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from Kubernetes causes all the Pod objects running on it to be deleted from the apiserver, freeing up their names.
|
||||
|
||||
### Capacity
|
||||
|
||||
|
|
|
@ -26,9 +26,9 @@ What constitutes a compatible change and how to change the API are detailed by t
|
|||
|
||||
## API Swagger definitions
|
||||
|
||||
Complete API details are documented using [Swagger v1.2](http://swagger.io/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger Kubernetes API spec, by default at located at `/swaggerapi`, and a UI to browse the API documentation at `/swagger-ui`.
|
||||
Complete API details are documented using [Swagger v1.2](http://swagger.io/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.
|
||||
|
||||
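For example, you can fetch the spec through `kubectl proxy` (a sketch; the local port is arbitrary):

```shell
# Proxy the apiserver to localhost, then retrieve the Swagger spec.
kubectl proxy --port=8080 &
curl http://localhost:8080/swaggerapi/
```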
We also host a version of the [latest API documentation UI](http://kubernetes.io/kubernetes/third_party/swagger-ui/). This is updated with the latest release, so if you are using a different version of Kubernetes you will want to use the spec from your apiserver.
|
||||
We also host a version of the [latest API documentation](http://kubernetes.io/docs/api-reference/README/). This is updated with the latest release, so if you are using a different version of Kubernetes you will want to use the spec from your apiserver.
|
||||
|
||||
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
|
||||
|
||||
|
|
|
@ -0,0 +1,174 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
**StatefulSets are a beta feature in 1.5. This feature replaces the
|
||||
PetSets feature from 1.4. Users of PetSets are referred to the 1.5
|
||||
[Upgrade Guide](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/)
|
||||
for further information on how to upgrade existing PetSets to StatefulSets.**
|
||||
|
||||
A StatefulSet is a Controller that provides a unique identity to its Pods. It provides
|
||||
guarantees about the ordering of deployment and scaling.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture body %}
|
||||
|
||||
### Using StatefulSets
|
||||
|
||||
StatefulSets are valuable for applications that require one or more of the
|
||||
following.
|
||||
|
||||
* Stable, unique network identifiers.
|
||||
* Stable, persistent storage.
|
||||
* Ordered, graceful deployment and scaling.
|
||||
* Ordered, graceful deletion and termination.
|
||||
|
||||
In the above, stable is synonymous with persistent across Pod (re) schedulings.
|
||||
If an application doesn't require any stable identifiers or ordered deployment,
|
||||
deletion, or scaling, you should deploy your application with a controller that
|
||||
provides a set of stateless replicas. Controllers such as
|
||||
[Deployment](/docs/user-guide/deployments/) or
|
||||
[ReplicaSet](/docs/user-guide/replicasets/) may be better suited to your needs.
|
||||
|
||||
### Limitations
|
||||
* StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.
|
||||
* As with all alpha/beta resources, you can disable StatefulSet through the `--runtime-config` option passed to the apiserver.
|
||||
* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
|
||||
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
|
||||
* StatefulSets currently require a [Headless Service](/docs/user-guide/services/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
|
||||
* Updating an existing StatefulSet is currently a manual process.
|
||||
|
||||
### Components
|
||||
The example below demonstrates the components of a StatefulSet.
|
||||
|
||||
* A Headless Service, named nginx, is used to control the network domain.
|
||||
* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
|
||||
* The volumeClaimTemplates will provide stable storage using [PersistentVolumes](/docs/user-guide/volumes/) provisioned by a
|
||||
PersistentVolume Provisioner.
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
name: web
|
||||
clusterIP: None
|
||||
selector:
|
||||
app: nginx
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: web
|
||||
spec:
|
||||
serviceName: "nginx"
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: nginx
|
||||
image: gcr.io/google_containers/nginx-slim:0.8
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: web
|
||||
volumeMounts:
|
||||
- name: www
|
||||
mountPath: /usr/share/nginx/html
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: www
|
||||
spec:
|
||||
accessModes: [ "ReadWriteOnce" ]
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
```
|
||||
|
||||
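Assuming the manifest above is saved as `web.yaml` (a hypothetical filename), you can create it and watch the Pods come up in order:

```shell
kubectl create -f web.yaml
kubectl get pods -w -l app=nginx
```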
### Pod Identity
|
||||
StatefulSet Pods have a unique identity that is comprised of an ordinal, a
|
||||
stable network identity, and stable storage. The identity sticks to the Pod,
|
||||
regardless of which node it's (re) scheduled on.
|
||||
|
||||
__Ordinal Index__
|
||||
|
||||
For a StatefulSet with N replicas, each Pod in the StatefulSet will be
|
||||
assigned an integer ordinal, in the range [0,N), that is unique over the Set.
|
||||
|
||||
__Stable Network ID__
|
||||
|
||||
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet
|
||||
and the ordinal of the Pod. The pattern for the constructed hostname
|
||||
is `$(statefulset name)-$(ordinal)`. The example above will create three Pods
|
||||
named `web-0,web-1,web-2`.
|
||||
A StatefulSet can use a [Headless Service](/docs/user-guide/services/#headless-services)
|
||||
to control the domain of its Pods. The domain managed by this Service takes the form:
|
||||
`$(service name).$(namespace).svc.cluster.local`, where "cluster.local"
|
||||
is the [cluster domain](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md#how-do-i-configure-it).
|
||||
As each Pod is created, it gets a matching DNS subdomain, taking the form:
|
||||
`$(podname).$(governing service domain)`, where the governing service is defined
|
||||
by the `serviceName` field on the StatefulSet.
|
||||
|
||||
Here are some examples of choices for Cluster Domain, Service name,
|
||||
StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
|
||||
|
||||
Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS | Pod Hostname |
|
||||
-------------- | ----------------- | ----------------- | -------------- | ------- | ------------ |
|
||||
cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1} |
|
||||
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
|
||||
kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1} |
|
||||
|
||||
Note that Cluster Domain will be set to `cluster.local` unless
|
||||
[otherwise configured](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md#how-do-i-configure-it).
|
||||
|
||||
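You can check the stable network identities from a temporary Pod (a sketch; it assumes cluster DNS is running and uses a throwaway busybox Pod):

```shell
# Resolve a StatefulSet Pod's DNS name from inside the cluster.
kubectl run -i --tty dns-test --image=busybox --restart=Never --rm -- nslookup web-0.nginx
```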
__Stable Storage__
|
||||
|
||||
Kubernetes creates one [PersistentVolume](/docs/user-guide/volumes/) for each
|
||||
VolumeClaimTemplate, as specified in the StatefulSet's volumeClaimTemplates field.
|
||||
In the example above, each Pod will receive a single PersistentVolume with a storage
|
||||
class of `anything` and 1 GiB of provisioned storage. When a Pod is (re) scheduled onto
|
||||
a node, its `volumeMounts` mount the PersistentVolumes associated with its
|
||||
PersistentVolume Claims. Note that the PersistentVolumes associated with the
|
||||
Pods' PersistentVolume Claims are not deleted when the Pods or StatefulSet are deleted.
|
||||
This must be done manually.
|
||||
|
||||
### Deployment and Scaling Guarantee
|
||||
|
||||
* For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
|
||||
* When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
|
||||
* Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
|
||||
* Before a Pod is terminated, all of its successors must be completely shutdown.
|
||||
|
||||
The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting a `pod.Spec.TerminationGracePeriodSeconds` of 0 seconds is unsafe and strongly discouraged. For further explanation, please refer to [force deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/#deleting-pods).
|
||||
|
||||
When the web example above is created, three Pods will be deployed in the order
|
||||
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
|
||||
[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be deployed until
|
||||
web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
|
||||
web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
|
||||
becomes Running and Ready.
|
||||
|
||||
If a user were to scale the deployed example by patching the StatefulSet such that
|
||||
`replicas=1`, web-2 would be terminated first. web-1 would not be terminated until web-2
|
||||
is fully shutdown and deleted. If web-0 were to fail after web-2 has been terminated and
|
||||
is completely shutdown, but prior to web-1's termination, web-1 would not be terminated
|
||||
until web-0 is Running and Ready.
|
||||
{% endcapture %}
|
||||
{% include templates/concept.md %}
|
|
@ -5,7 +5,12 @@ The Concepts section of the Kubernetes documentation is a work in progress.
|
|||
|
||||
#### Object Metadata
|
||||
|
||||
[Annotations](/docs/concepts/object-metadata/annotations/)
|
||||
|
||||
* [Annotations](/docs/concepts/object-metadata/annotations/)
|
||||
|
||||
#### Controllers
|
||||
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
|
||||
|
||||
|
||||
### What's next
|
||||
|
||||
|
|
|
@ -80,6 +80,12 @@ site where you can verify that your changes have rendered correctly.
|
|||
If needed, revise your pull request by committing changes to your
|
||||
new branch in your fork.
|
||||
|
||||
The staging site for the upcoming Kubernetes release is here:
|
||||
[http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/).
|
||||
The staging site reflects the current state of what's been merged in the
|
||||
release branch, or in other words, what the docs will look like for the
|
||||
next upcoming release. It's automatically updated as new PRs get merged.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
|
|
@ -33,16 +33,18 @@ the master branch.
|
|||
|
||||
### Staging a pull request
|
||||
|
||||
When you create pull request against the Kubernetes documentation
|
||||
repository, you can see your changes on a staging server.
|
||||
When you create a pull request, either against the master or <vnext>
|
||||
branch, your changes are staged in a custom subdomain on Netlify so that
|
||||
you can see your changes in rendered form before the pull request is merged.
|
||||
|
||||
1. In your GitHub account, in your new branch, submit a pull request to the
|
||||
kubernetes/kubernetes.github.io repository. This opens a page that shows the
|
||||
status of your pull request.
|
||||
|
||||
1. Click **Show all checks**. Wait for the **deploy/netlify** check to complete.
|
||||
To the right of **deploy/netlify**, click **Details**. This opens a staging
|
||||
site where you see your changes.
|
||||
1. Scroll down to the list of automated checks. Click **Show all checks**.
|
||||
Wait for the **deploy/netlify** check to complete. To the right of
|
||||
**deploy/netlify**, click **Details**. This opens a staging site where you
|
||||
can see your changes.
|
||||
|
||||
### Staging locally using Docker
|
||||
|
||||
|
|
|
@ -34,7 +34,7 @@ is the best fit for your content:
|
|||
<td>A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials.</td>
|
||||
</tr>
|
||||
|
||||
</table>
|
||||
</table>
|
||||
|
||||
Each page type has a
|
||||
[template](/docs/contribute/page-templates/)
|
||||
|
@ -72,6 +72,50 @@ Depending page type, create an entry in one of these files:
|
|||
* /_data/tutorials.yaml
|
||||
* /_data/concepts.yaml
|
||||
|
||||
### Including code from another file
|
||||
|
||||
To include a code file in your topic, place the code file in the Kubernetes
|
||||
documentation repository, preferably in the same directory as your topic
|
||||
file. In your topic file, use the `include` tag:
|
||||
|
||||
<pre>{% include code.html language="<LEXERVALUE>" file="<RELATIVEPATH>" ghlink="/<PATHFROMROOT>" %}</pre>
|
||||
|
||||
where:
|
||||
|
||||
* `<LEXERVALUE>` is the language in which the file was written. This must be
|
||||
[a value supported by Rouge](https://github.com/jneen/rouge/wiki/list-of-supported-languages-and-lexers).
|
||||
* `<RELATIVEPATH>` is the path to the file you're including, relative to the current file, for example, `gce-volume.yaml`.
|
||||
* `<PATHFROMROOT>` is the path to the file relative to root, for example, `docs/tutorials/stateful-application/gce-volume.yaml`.
|
||||
|
||||
Here's an example of using the `include` tag:
|
||||
|
||||
<pre>{% include code.html language="yaml" file="gce-volume.yaml" ghlink="/docs/tutorials/stateful-application/gce-volume.yaml" %}</pre>
|
||||
|
||||
### Showing how to create an API object from a configuration file
|
||||
|
||||
If you need to show the reader how to create an API object based on a
|
||||
configuration file, place the configuration file in the Kubernetes documentation
|
||||
repository, preferably in the same directory as your topic file.
|
||||
|
||||
In your topic, show this command:
|
||||
|
||||
kubectl create -f http://k8s.io/<PATHFROMROOT>
|
||||
|
||||
where `<PATHFROMROOT>` is the path to the configuration file relative to root,
|
||||
for example, `docs/tutorials/stateful-application/gce-volume.yaml`.
|
||||
|
||||
Here's an example of a command that creates an API object from a configuration file:
|
||||
|
||||
kubectl create -f http://k8s.io/docs/tutorials/stateful-application/gce-volume.yaml
|
||||
|
||||
For an example of a topic that uses this technique, see
|
||||
[Running a Single-Instance Stateful Application](/docs/tutorials/stateful-application/run-stateful-application/).
|
||||
|
||||
### Adding images to a topic
|
||||
|
||||
Put image files in the `/images` directory. The preferred
|
||||
image format is SVG.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
|
|
@ -33,7 +33,7 @@ cd kubernetes
|
|||
make release
|
||||
```
|
||||
|
||||
For more details on the release process see the [`build/`](http://releases.k8s.io/{{page.githubbranch}}/build/) directory
|
||||
For more details on the release process see the [`build-tools/`](http://releases.k8s.io/{{page.githubbranch}}/build-tools/) directory
|
||||
|
||||
### Download Kubernetes and automatically set up a default cluster
|
||||
|
||||
|
|
|
@ -6,144 +6,314 @@ assignees:
|
|||
|
||||
---
|
||||
|
||||
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
Minikube starts a single node kubernetes cluster locally for purposes of development and testing.
|
||||
Minikube packages and configures a Linux VM, Docker and all Kubernetes components, optimized for local development.
|
||||
Minikube supports Kubernetes features such as:
|
||||
### Minikube Features
|
||||
|
||||
* DNS
|
||||
* NodePorts
|
||||
* ConfigMaps and Secrets
|
||||
* Dashboards
|
||||
* Minikube supports Kubernetes features such as:
|
||||
* DNS
|
||||
* NodePorts
|
||||
* ConfigMaps and Secrets
|
||||
* Dashboards
|
||||
* Container Runtime: Docker, and [rkt](https://github.com/coreos/rkt)
|
||||
* Enabling CNI (Container Network Interface)
|
||||
* Ingress
|
||||
|
||||
Minikube does not yet support Cloud Provider specific features such as:
|
||||
|
||||
* LoadBalancers
|
||||
* PersistentVolumes
|
||||
* Ingress
|
||||
## Installation
|
||||
|
||||
### Requirements
|
||||
|
||||
Minikube requires that VT-x/AMD-v virtualization is enabled in BIOS on all platforms.
|
||||
* OS X
|
||||
* [xhyve driver](./DRIVERS.md#xhyve-driver), [VirtualBox](https://www.virtualbox.org/wiki/Downloads) or [VMware Fusion](https://www.vmware.com/products/fusion) installation
|
||||
* Linux
|
||||
* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) or [KVM](http://www.linux-kvm.org/) installation,
|
||||
* VT-x/AMD-v virtualization must be enabled in BIOS
|
||||
* `kubectl` must be on your path. To install kubectl:
|
||||
|
||||
To check that this is enabled on Linux, run:
|
||||
**Kubectl for Linux/amd64**
|
||||
|
||||
```
|
||||
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
```
|
||||
|
||||
**Kubectl for OS X/amd64**
|
||||
|
||||
```
|
||||
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
```
|
||||
|
||||
### Instructions
|
||||
|
||||
See the installation instructions for the [latest release](https://github.com/kubernetes/minikube/releases).
|
||||
|
||||
## Quickstart
|
||||
|
||||
Here's a brief demo of minikube usage.
|
||||
If you want to change the VM driver, add the appropriate `--vm-driver=xxx` flag to `minikube start`. Minikube supports
|
||||
the following drivers:
|
||||
|
||||
* virtualbox
|
||||
* vmwarefusion
|
||||
* kvm ([driver installation](./DRIVERS.md#kvm-driver))
|
||||
* xhyve ([driver installation](./DRIVERS.md#xhyve-driver))
|
||||
|
||||
Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`.
|
||||
|
||||
```shell
|
||||
cat /proc/cpuinfo | grep 'vmx\|svm'
|
||||
```
|
||||
|
||||
This command should output something if the setting is enabled.
|
||||
|
||||
To check that this is enabled on OSX (most newer Macs have this enabled by default), run:
|
||||
|
||||
```shell
|
||||
sysctl -a | grep machdep.cpu.features | grep VMX
|
||||
|
||||
```
|
||||
|
||||
This command should output something if the setting is enabled.
|
||||
|
||||
#### Linux
|
||||
|
||||
Minikube requires the latest [Virtualbox](https://www.virtualbox.org/wiki/Downloads) to be installed on your system.
|
||||
|
||||
#### OSX
|
||||
|
||||
Minikube requires one of the following:
|
||||
|
||||
* The latest [Virtualbox](https://www.virtualbox.org/wiki/Downloads).
|
||||
* The latest version of [VMWare Fusion](https://www.vmware.com/products/fusion).
|
||||
|
||||
### Install `minikube`
|
||||
|
||||
See the [latest Minikube release](https://github.com/kubernetes/minikube/releases) for installation instructions.
|
||||
|
||||
### Install `kubectl`
|
||||
|
||||
You will need to download and install the kubectl client binary for `${K8S_VERSION}` (in this example: `{{page.version}}.0`)
|
||||
to run commands against the cluster.
|
||||
|
||||
```shell
|
||||
# linux/amd64
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
# linux/386
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
# linux/arm
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
# linux/arm64
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
#linux/ppc64le
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
# OS X/amd64
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
# OS X/386
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
|
||||
```
|
||||
|
||||
For Windows, download [kubectl.exe](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe) and save it to a location on your PATH.
|
||||
|
||||
The generic download path is:
|
||||
```
|
||||
https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}
|
||||
```
|
||||
|
||||
### Starting the cluster
|
||||
|
||||
To start a cluster, run the command:
|
||||
|
||||
```shell
|
||||
minikube start
|
||||
$ minikube start
|
||||
Starting local Kubernetes cluster...
|
||||
Kubectl is now configured to use the cluster.
|
||||
Running pre-create checks...
|
||||
Creating machine...
|
||||
Starting local Kubernetes cluster...
|
||||
|
||||
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
|
||||
deployment "hello-minikube" created
|
||||
$ kubectl expose deployment hello-minikube --type=NodePort
|
||||
service "hello-minikube" exposed
|
||||
|
||||
# We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it
|
||||
# via the exposed service.
|
||||
# To check whether the pod is up and running we can use the following:
|
||||
$ kubectl get pod
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-minikube-3383150820-vctvh 1/1 ContainerCreating 0 3s
|
||||
# We can see that the pod is still being created from the ContainerCreating status
|
||||
$ kubectl get pod
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-minikube-3383150820-vctvh 1/1 Running 0 13s
|
||||
# We can see that the pod is now Running and we will now be able to curl it:
|
||||
$ curl $(minikube service hello-minikube --url)
|
||||
CLIENT VALUES:
|
||||
client_address=192.168.99.1
|
||||
command=GET
|
||||
real path=/
|
||||
...
|
||||
$ minikube stop
|
||||
Stopping local Kubernetes cluster...
|
||||
Stopping "minikube"...
|
||||
```
|
||||
|
||||
This will build and start a lightweight local cluster, consisting of a master, etcd, Docker and a single node.
|
||||
### Using rkt container engine
|
||||
|
||||
Minikube will also create a "minikube" context, and set it to default in kubectl.
|
||||
To switch back to this context later, run this command: `kubectl config use-context minikube`.
|
||||
|
||||
Type `minikube stop` to shut the cluster down.
|
||||
|
||||
Minikube also includes the [Kubernetes dashboard](http://kubernetes.io/docs/user-guide/ui/). Run this command to see the included kube-system pods:
|
||||
To use [rkt](https://github.com/coreos/rkt) as the container runtime run:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --all-namespaces
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system kube-addon-manager-127.0.0.1 1/1 Running 0 35s
|
||||
kube-system kubernetes-dashboard-9brhv 1/1 Running 0 20s
|
||||
$ minikube start \
|
||||
--network-plugin=cni \
|
||||
--container-runtime=rkt \
|
||||
--iso-url=https://github.com/coreos/minikube-iso/releases/download/v0.0.5/minikube-v0.0.5.iso
|
||||
```
|
||||
|
||||
Run this command to open the Kubernetes dashboard:
|
||||
This will use an alternative minikube ISO image containing both rkt and Docker, and enable CNI networking.
|
||||
|
||||
### Driver plugins
|
||||
|
||||
See [DRIVERS](./DRIVERS.md) for details on supported drivers and how to install
|
||||
plugins, if required.
|
||||
|
||||
### Reusing the Docker daemon
|
||||
|
||||
When using a single VM of Kubernetes, it's really handy to reuse the Docker daemon inside the VM; this means you don't have to build on your host machine and push the image into a Docker registry - you can just build inside the same Docker daemon as minikube, which speeds up local experiments.
|
||||
|
||||
To be able to work with the docker daemon on your mac/linux host use the [docker-env command](./docs/minikube_docker-env.md) in your shell:
|
||||
|
||||
```
|
||||
eval $(minikube docker-env)
|
||||
```
|
||||
You should now be able to use Docker on the command line of your host Mac/Linux machine to talk to the Docker daemon inside the minikube VM:
|
||||
```
|
||||
docker ps
|
||||
```
|
||||
|
||||
On CentOS 7, Docker may report the following error:
|
||||
|
||||
```
|
||||
Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory
|
||||
```
|
||||
|
||||
The fix is to update /etc/sysconfig/docker to ensure that minikube's environment changes are respected:
|
||||
|
||||
```
|
||||
< DOCKER_CERT_PATH=/etc/docker
|
||||
---
|
||||
> if [ -z "${DOCKER_CERT_PATH}" ]; then
|
||||
> DOCKER_CERT_PATH=/etc/docker
|
||||
> fi
|
||||
```
|
||||
|
||||
Remember to turn off `imagePullPolicy: Always`, as otherwise Kubernetes won't use images you built locally.
|
||||
|
||||
## Managing your Cluster
|
||||
|
||||
### Starting a Cluster
|
||||
|
||||
The [minikube start](./docs/minikube_start.md) command can be used to start your cluster.
|
||||
This command creates and configures a virtual machine that runs a single-node Kubernetes cluster.
|
||||
This command also configures your [kubectl](http://kubernetes.io/docs/user-guide/kubectl-overview/) installation to communicate with this cluster.
|
||||
|
||||
### Configuring Kubernetes
|
||||
|
||||
Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
|
||||
To use this feature, you can use the `--extra-config` flag on the `minikube start` command.
|
||||
|
||||
This flag is repeated, so you can pass it several times with several different values to set multiple options.
|
||||
|
||||
This flag takes a string of the form `component.key=value`, where `component` is one of the strings from the above list, `key` is a value on the
|
||||
configuration struct and `value` is the value to set.
|
||||
|
||||
Valid `key`s can be found by examining the documentation for the Kubernetes `componentconfigs` for each component.
|
||||
Here is the documentation for each supported configuration:
|
||||
|
||||
* [kubelet](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeletConfiguration)
|
||||
* [apiserver](https://godoc.org/k8s.io/kubernetes/cmd/kube-apiserver/app/options#APIServer)
|
||||
* [proxy](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeProxyConfiguration)
|
||||
* [controller-manager](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeControllerManagerConfiguration)
|
||||
* [etcd](https://godoc.org/github.com/coreos/etcd/etcdserver#ServerConfig)
|
||||
* [scheduler](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeSchedulerConfiguration)
|
||||
|
||||
#### Examples
|
||||
|
||||
To change the `MaxPods` setting to 5 on the Kubelet, pass this flag: `--extra-config=kubelet.MaxPods=5`.
|
||||
|
||||
This feature also supports nested structs. To change the `LeaderElection.LeaderElect` setting to `true` on the scheduler, pass this flag: `--extra-config=scheduler.LeaderElection.LeaderElect=true`.
|
||||
|
||||
To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--extra-config=apiserver.AuthorizationMode=RBAC`.
|
||||
|
||||
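For example, combining several of these settings in a single invocation (a sketch):

```shell
minikube start \
  --extra-config=kubelet.MaxPods=5 \
  --extra-config=scheduler.LeaderElection.LeaderElect=true \
  --extra-config=apiserver.AuthorizationMode=RBAC
```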
### Stopping a Cluster
|
||||
The [minikube stop](./docs/minikube_stop.md) command can be used to stop your cluster.
|
||||
This command shuts down the minikube virtual machine, but preserves all cluster state and data.
|
||||
Starting the cluster again will restore it to its previous state.
|
||||
|
||||
### Deleting a Cluster
|
||||
The [minikube delete](./docs/minikube_delete.md) command can be used to delete your cluster.
|
||||
This command shuts down and deletes the minikube virtual machine. No data or state is preserved.
|
||||
|
||||
## Interacting With your Cluster
|
||||
|
||||
### Kubectl
|
||||
|
||||
The `minikube start` command creates a "[kubectl context](http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_set-context/)" called "minikube".
|
||||
This context contains the configuration to communicate with your minikube cluster.
|
||||
|
||||
Minikube sets this context to default automatically, but if you need to switch back to it in the future, run:
|
||||
|
||||
`kubectl config use-context minikube`,
|
||||
|
||||
or pass the context on each command like this: `kubectl get pods --context=minikube`.
|
||||
|
||||
### Dashboard
|
||||
|
||||
To access the [Kubernetes Dashboard](http://kubernetes.io/docs/user-guide/ui/), run this command in a shell after starting minikube to get the address:
|
||||
```shell
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
### Test it out
|
||||
|
||||
List the nodes in your cluster by running:
|
||||
### Services
|
||||
|
||||
To access a service exposed via a node port, run this command in a shell after starting minikube to get the address:
|
||||
```shell
|
||||
kubectl get nodes
|
||||
minikube service [-n NAMESPACE] [--url] NAME
|
||||
```
|
||||
|
||||
Minikube contains a built-in Docker daemon for running containers.
|
||||
If you use another Docker daemon for building your containers, you will have to publish them to a registry before minikube can pull them.
|
||||
You can use minikube's built in Docker daemon to avoid this extra step of pushing your images.
|
||||
Use the built-in Docker daemon with:
|
||||
## Networking
|
||||
|
||||
The minikube VM is exposed to the host system via a host-only IP address, that can be obtained with the `minikube ip` command.
|
||||
Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
|
||||
|
||||
To determine the NodePort for your service, you can use a `kubectl` command like this:
|
||||
|
||||
`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`
|
||||
|
||||
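For example, to reach the `hello-minikube` service from the quickstart above (a sketch):

```shell
NODE_PORT=$(kubectl get service hello-minikube --output='jsonpath={.spec.ports[0].nodePort}')
curl http://$(minikube ip):$NODE_PORT
```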
## Persistent Volumes
|
||||
Minikube supports [PersistentVolumes](http://kubernetes.io/docs/user-guide/persistent-volumes/) of type `hostPath`.
|
||||
These PersistentVolumes are mapped to a directory inside the minikube VM.
|
||||
|
||||
The Minikube VM boots into a tmpfs, so most directories will not be persisted across reboots (`minikube stop`).
|
||||
However, Minikube is configured to persist files stored under the following host directories:
|
||||
|
||||
* `/data`
|
||||
* `/var/lib/localkube`
|
||||
* `/var/lib/docker`
|
||||
|
||||
Here is an example PersistentVolume config to persist data in the '/data' directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: pv0001
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
capacity:
|
||||
storage: 5Gi
|
||||
hostPath:
|
||||
path: /data/pv0001/
|
||||
```
|
||||
|
||||
## Mounted Host Folders
|
||||
Some drivers will mount a host folder within the VM so that you can easily share files between the VM and host. These are not configurable at the moment and differ depending on the driver and OS you are using. Note: Host folder sharing is not implemented on Linux yet.
|
||||
|
||||
| Driver | OS | HostFolder | VM |
|
||||
| --- | --- | --- | --- |
|
||||
| Virtualbox | OSX | /Users | /Users |
|
||||
| Virtualbox | Windows | C://Users | /c/Users |
|
||||
| VMWare Fusion | OSX | /Users | /Users |
|
||||
| Xhyve | OSX | /Users | /Users |
|
||||
|
||||
|
||||
## Private Container Registries
|
||||
|
||||
To access a private container registry, follow the steps on [this page](http://kubernetes.io/docs/user-guide/images/).
|
||||
|
||||
We recommend you use ImagePullSecrets, but if you would like to configure access on the minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory.
|
||||
|
||||
## Add-ons
|
||||
|
||||
In order to have minikube properly start/restart custom addons, place the addon(s) you wish to be launched with minikube in the `.minikube/addons` directory. Addons in this folder will be moved to the minikube VM and launched each time minikube is started/restarted.
|
||||
|
||||
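For example (a sketch; `my-addon.yaml` is a hypothetical manifest):

```shell
cp my-addon.yaml ~/.minikube/addons/
minikube stop && minikube start
```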
## Documentation
|
||||
|
||||
For a list of minikube's available commands see the [full CLI docs](./docs/minikube.md).
|
||||
|
||||
## Using Minikube with an HTTP Proxy
|
||||
|
||||
Minikube creates a Virtual Machine that includes Kubernetes and a Docker daemon.
|
||||
When Kubernetes attempts to schedule containers using Docker, the Docker daemon may require external network access to pull containers.
|
||||
|
||||
If you are behind an HTTP proxy, you may need to supply Docker with the proxy settings.
|
||||
To do this, pass the required environment variables as flags during `minikube start`.
|
||||
|
||||
For example:
|
||||
|
||||
```shell
|
||||
eval $(minikube docker-env)
|
||||
$ minikube start --docker-env HTTP_PROXY=http://$YOURPROXY:PORT \
|
||||
--docker-env HTTPS_PROXY=https://$YOURPROXY:PORT
|
||||
```
|
||||
This command sets up the Docker environment variables so a Docker client can communicate with the minikube Docker daemon.
|
||||
|
||||
```shell
|
||||
docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
42c643fea98b gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1 "/dashboard --port=90" 3 minutes ago Up 3 minutes k8s_kubernetes-dashboard.1d0d880_kubernetes-dashboard-9brhv_kube-system_5062dd0b-370b-11e6-84b6-5eab1f51187f_134cba4c
|
||||
475db7659edf gcr.io/google_containers/pause-amd64:3.0 "/pause" 3 minutes ago Up 3 minutes k8s_POD.2225036b_kubernetes-dashboard-9brhv_kube-system_5062dd0b-370b-11e6-84b6-5eab1f51187f_e76d8136
|
||||
e9096501addf gcr.io/google-containers/kube-addon-manager-amd64:v2 "/opt/kube-addons.sh" 3 minutes ago Up 3 minutes k8s_kube-addon-manager.a1c58ca2_kube-addon-manager-127.0.0.1_kube-system_48abed82af93bb0b941173334110923f_82655b7d
|
||||
64748893cf7c gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 minutes ago Up 4 minutes k8s_POD.d8dbe16c_kube-addon-manager-127.0.0.1_kube-system_48abed82af93bb0b941173334110923f_c67701c3
|
||||
```
|
||||
|
||||
## Known Issues
|
||||
* Features that require a Cloud Provider will not work in Minikube. These include:
|
||||
* LoadBalancers
|
||||
* Features that require multiple nodes. These include:
|
||||
* Advanced scheduling policies
|
||||
|
||||
## Design
|
||||
|
||||
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [localkube](https://github.com/kubernetes/minikube/tree/master/pkg/localkube) (originally written and donated to this project by [RedSpread](https://redspread.com/)) for running the cluster.
|
||||
|
||||
For more information about minikube, see the [proposal](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/local-cluster-ux.md).
|
||||
|
||||
## Additional Links:
|
||||
* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](./ROADMAP.md).
|
||||
* **Development Guide**: See [CONTRIBUTING.md](./CONTRIBUTING.md) for an overview of how to send pull requests.
|
||||
* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](./BUILD_GUIDE.md).
|
||||
* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube, see the [adding dependencies guide](./ADD_DEPENDENCY.md).
|
||||
* **Updating Kubernetes**: For instructions on how to update the version of Kubernetes used by minikube, see the [updating Kubernetes guide](./UPDATE_KUBERNETES.md).
|
||||
|
||||
## Community
|
||||
|
||||
Contributions, questions, and comments are all welcomed and encouraged! minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list, please prefix your subject with "minikube: ".
|
|
@ -0,0 +1,165 @@
|
|||
---
|
||||
|
||||
---
|
||||
|
||||
Kubernetes version 1.5 introduces support for Windows Server Containers. In version 1.5, the Kubernetes control plane (API Server, Scheduler, Controller Manager, etc.) continues to run on Linux, while the kubelet and kube-proxy can run on Windows Server.
|
||||
|
||||
**Note:** Windows Server Containers on Kubernetes is an Alpha feature in Kubernetes 1.5.
|
||||
|
||||
## Prerequisites
|
||||
In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported using the following:
|
||||
|
||||
1. Kubernetes control plane running on existing Linux infrastructure (version 1.5 or later)
|
||||
2. Kubenet network plugin setup on the Linux nodes
|
||||
3. Windows Server 2016 (RTM version 10.0.14393 or later)
|
||||
4. Docker Version 1.12.2-cs2-ws-beta or later
|
||||
|
||||
## Networking
|
||||
Networking is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc.) don't work natively on Windows Server, this approach relies on networking technology that is built into the Windows and Linux operating systems. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node are connected to that node's /24 subnet, which allows pods on the same node to communicate with each other. To enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
|
||||
|
||||
### Linux
|
||||
The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC.
|
||||
|
||||
### Windows
|
||||
Each Windows Server node should have the following configuration:
|
||||
|
||||
1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC.
|
||||
2. Transparent container network created - This is a manual configuration step and is shown in the **_Route Setup_** section below.
|
||||
3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a Pod running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes.
|
||||
4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in the **_Route Setup_** section below.
|
||||
|
||||
The following diagram illustrates the Windows Server networking setup for Kubernetes:
|
||||
![Windows Setup](windows-setup.png)
|
||||
|
||||
## Setting up Windows Server Containers on Kubernetes
|
||||
To run Windows Server Containers on Kubernetes, you'll need to set up both your host machines and the Kubernetes node components for Windows, and then set up routes for Pod communication between the different nodes.
|
||||
### Host Setup
|
||||
**Windows Host Setup**
|
||||
|
||||
1. Windows Server container host running Windows Server 2016 and Docker v1.12. Follow the setup instructions outlined by this blog post: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_server
|
||||
2. DNS support for Windows was recently merged into Docker master and is not yet available in a stable Docker release. To use DNS, build Docker from master or download the binary from [Docker master](https://master.dockerproject.org/).
|
||||
3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause`
|
||||
4. RRAS (Routing) Windows feature enabled
|
||||
|
||||
**Linux Host Setup**
|
||||
|
||||
1. Linux hosts should be set up according to their respective distro documentation and the requirements of the Kubernetes version you will be using.
|
||||
2. CNI network plugin installed.
|
||||
|
||||
### Component Setup
|
||||
Requirements
|
||||
* Git, Go 1.7.1+
|
||||
* make (if using Linux or MacOS)
|
||||
* Important notes and other dependencies are listed [here](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md#building-kubernetes-on-a-local-osshell-environment)
|
||||
|
||||
**kubelet**
|
||||
|
||||
To build the *kubelet*, run:
|
||||
|
||||
1. `cd $GOPATH/src/k8s.io/kubernetes`
|
||||
2. Build *kubelet*
|
||||
1. Linux/MacOS: `KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kubelet`
|
||||
2. Windows: `go build cmd/kubelet/kubelet.go`
|
||||
|
||||
**kube-proxy**
|
||||
|
||||
To build *kube-proxy*, run:
|
||||
|
||||
1. `cd $GOPATH/src/k8s.io/kubernetes`
|
||||
2. Build *kube-proxy*
|
||||
1. Linux/MacOS: `KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kube-proxy`
|
||||
2. Windows: `go build cmd/kube-proxy/proxy.go`
|
||||
|
||||
### Route Setup
|
||||
The example setup below assumes one Linux node and two Windows Server 2016 nodes, and a cluster CIDR of 192.168.0.0/16:
|
||||
|
||||
| Hostname | Routable IP address | Pod CIDR |
|
||||
| --- | --- | --- |
|
||||
| Lin01 | `<IP of Lin01 host>` | 192.168.0.0/24 |
|
||||
| Win01 | `<IP of Win01 host>` | 192.168.1.0/24 |
|
||||
| Win02 | `<IP of Win02 host>` | 192.168.2.0/24 |
|
||||
|
||||
**Lin01**
|
||||
```
|
||||
ip route add 192.168.1.0/24 via <IP of Win01 host>
|
||||
ip route add 192.168.2.0/24 via <IP of Win02 host>
|
||||
```
|
||||
|
||||
**Win01**
|
||||
```
|
||||
docker network create -d transparent --gateway 192.168.1.1 --subnet 192.168.1.0/24 <network name>
|
||||
# A bridge is created with Adapter name "vEthernet (HNSTransparent)". Set its IP address to transparent network gateway
|
||||
netsh interface ipv4 set address "vEthernet (HNSTransparent)" addr=192.168.1.1
|
||||
route add 192.168.0.0 mask 255.255.255.0 192.168.0.1 if <Interface Id of the Routable Ethernet Adapter> -p
|
||||
route add 192.168.2.0 mask 255.255.255.0 192.168.2.1 if <Interface Id of the Routable Ethernet Adapter> -p
|
||||
```
|
||||
|
||||
**Win02**
|
||||
```
|
||||
docker network create -d transparent --gateway 192.168.2.1 --subnet 192.168.2.0/24 <network name>
|
||||
# A bridge is created with Adapter name "vEthernet (HNSTransparent)". Set its IP address to transparent network gateway
|
||||
netsh interface ipv4 set address "vEthernet (HNSTransparent)" addr=192.168.2.1
|
||||
route add 192.168.0.0 mask 255.255.255.0 192.168.0.1 if <Interface Id of the Routable Ethernet Adapter> -p
|
||||
route add 192.168.1.0 mask 255.255.255.0 192.168.1.1 if <Interface Id of the Routable Ethernet Adapter> -p
|
||||
```
|
||||
|
||||
## Starting the Cluster
|
||||
To start your cluster, you'll need to start both the Linux-based Kubernetes control plane, and the Windows Server-based Kubernetes node components.
|
||||
### Starting the Linux-based Control Plane
|
||||
Use your preferred method to start the Kubernetes cluster on Linux. Note that the cluster CIDR might need to be updated.
|
||||
### Starting the Windows Node Components
|
||||
To start kubelet on your Windows node:
|
||||
Run the following in a PowerShell window. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart the kubelet.
|
||||
|
||||
1. Set the *CONTAINER_NETWORK* environment variable to the name of the Docker container network to use:
|
||||
`$env:CONTAINER_NETWORK = "<docker network>"`
|
||||
|
||||
2. Run the *kubelet* executable using the following command:
|
||||
`kubelet.exe --hostname-override=<ip address/hostname of the windows node> --pod-infra-container-image="apprenda/pause" --resolv-conf="" --api_servers=<api server location>`
|
||||
|
||||
To start kube-proxy on your Windows node:
|
||||
|
||||
Run the following in a PowerShell window with administrative privileges. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart the kube-proxy.
|
||||
|
||||
1. Set the *INTERFACE_TO_ADD_SERVICE_IP* environment variable to a node-only network interface. The interface created when Docker is installed should work:
|
||||
`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (HNS Internal NIC)"`
|
||||
|
||||
2. Run the *kube-proxy* executable using the following command:
|
||||
`.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override=<ip address/hostname of the windows node> --master=<api server location> --bind-address=<ip address of the windows node>`
|
||||
|
||||
## Scheduling Pods on Windows
|
||||
Because your cluster has both Linux and Windows nodes, you must explicitly set the `nodeSelector` constraint to be able to schedule Pods onto Windows nodes. You must set `nodeSelector` with the label `beta.kubernetes.io/os` set to the value `windows`; see the following example:
|
||||
```
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
"metadata": {
|
||||
"name": "iis",
|
||||
"labels": {
|
||||
"name": "iis"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"containers": [
|
||||
{
|
||||
"name": "iis",
|
||||
"image": "microsoft/iis",
|
||||
"ports": [
|
||||
{
|
||||
"containerPort": 80
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"nodeSelector": {
|
||||
"beta.kubernetes.io/os": "windows"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Known Limitations:
|
||||
1. There is no network namespace in Windows, and as a result only one container per pod is currently supported.
|
||||
2. Secrets currently do not work because of a bug in Windows Server Containers described [here](https://github.com/docker/docker/issues/28401).
|
||||
3. ConfigMaps have not been implemented yet.
|
||||
4. The `kube-proxy` implementation uses `netsh portproxy`, which only supports TCP, so DNS currently works only if the client retries the DNS query over TCP.
|
After Width: | Height: | Size: 50 KiB |
|
@ -0,0 +1,31 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: kube-dns-autoscaler
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
spec:
|
||||
containers:
|
||||
- name: autoscaler
|
||||
image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
|
||||
resources:
|
||||
requests:
|
||||
cpu: "20m"
|
||||
memory: "10Mi"
|
||||
command:
|
||||
- /cluster-proportional-autoscaler
|
||||
- --namespace=kube-system
|
||||
- --configmap=kube-dns-autoscaler
|
||||
- --mode=linear
|
||||
- --target=<SCALE_TARGET>
|
||||
# When the cluster is using large nodes (with more cores), "coresPerReplica" should dominate.
|
||||
# If using small nodes, "nodesPerReplica" should dominate.
|
||||
- --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}}
|
||||
- --logtostderr=true
|
||||
- --v=2
|
|
@ -0,0 +1,238 @@
|
|||
---
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to enable and configure autoscaling of the DNS service in a
|
||||
Kubernetes cluster.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* {% include task-tutorial-prereqs.md %}
|
||||
|
||||
* Make sure the [DNS feature](/docs/admin/dns/) itself is enabled.
|
||||
|
||||
* Kubernetes version 1.4.0 or later is recommended.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Determining whether DNS horizontal autoscaling is already enabled
|
||||
|
||||
List the Deployments in your cluster in the kube-system namespace:
|
||||
|
||||
kubectl get deployment --namespace=kube-system
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
...
|
||||
kube-dns-autoscaler 1 1 1 1 ...
|
||||
...
|
||||
|
||||
If you see "kube-dns-autoscaler" in the output, DNS horizontal autoscaling is
|
||||
already enabled, and you can skip to
|
||||
[Tuning autoscaling parameters](#tuning-autoscaling-parameters).
|
||||
|
||||
### Getting the name of your DNS Deployment or ReplicationController
|
||||
|
||||
List the Deployments in your cluster in the kube-system namespace:
|
||||
|
||||
kubectl get deployment --namespace=kube-system
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
...
|
||||
kube-dns 1 1 1 1 ...
|
||||
...
|
||||
|
||||
In Kubernetes versions earlier than 1.5, DNS is implemented using a
|
||||
ReplicationController instead of a Deployment. So if you don't see kube-dns,
|
||||
or a similar name, in the preceding output, list the ReplicationControllers in
|
||||
your cluster in the kube-system namespace:
|
||||
|
||||
kubectl get rc --namespace=kube-system
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
...
|
||||
kube-dns-v20 1 1 1 ...
|
||||
...
|
||||
|
||||
### Determining your scale target
|
||||
|
||||
If you have a DNS Deployment, your scale target is:
|
||||
|
||||
Deployment/<your-deployment-name>
|
||||
|
||||
where <your-deployment-name> is the name of your DNS Deployment. For example, if
|
||||
your DNS Deployment name is kube-dns, your scale target is Deployment/kube-dns.
|
||||
|
||||
If you have a DNS ReplicationController, your scale target is:
|
||||
|
||||
ReplicationController/<your-rc-name>
|
||||
|
||||
where <your-rc-name> is the name of your DNS ReplicationController. For example,
|
||||
if your DNS ReplicationController name is kube-dns-v20, your scale target is
|
||||
ReplicationController/kube-dns-v20.
|
||||
|
||||
### Enabling DNS horizontal autoscaling
|
||||
|
||||
In this section, you create a Deployment. The Pods in the Deployment run a
|
||||
container based on the `cluster-proportional-autoscaler-amd64` image.
|
||||
|
||||
Create a file named `dns-horizontal-autoscaler.yaml` with this content:
|
||||
|
||||
{% include code.html language="yaml" file="dns-horizontal-autoscaler.yaml" ghlink="/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml" %}
|
||||
|
||||
In the file, replace `<SCALE_TARGET>` with your scale target.
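As a minimal sketch, if your scale target is the `Deployment/kube-dns` example from the previous section, the replacement can be done in place with GNU `sed`:

    sed -i 's|<SCALE_TARGET>|Deployment/kube-dns|' dns-horizontal-autoscaler.yaml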
|
||||
|
||||
Go to the directory that contains your configuration file, and enter this
|
||||
command to create the Deployment:
|
||||
|
||||
kubectl create -f dns-horizontal-autoscaler.yaml
|
||||
|
||||
The output of a successful command is:
|
||||
|
||||
deployment "kube-dns-autoscaler" created
|
||||
|
||||
DNS horizontal autoscaling is now enabled.
|
||||
|
||||
### Tuning autoscaling parameters
|
||||
|
||||
Verify that the kube-dns-autoscaler ConfigMap exists:
|
||||
|
||||
kubectl get configmap --namespace=kube-system
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME DATA AGE
|
||||
...
|
||||
kube-dns-autoscaler 1 ...
|
||||
...
|
||||
|
||||
Modify the data in the ConfigMap:
|
||||
|
||||
kubectl edit configmap kube-dns-autoscaler --namespace=kube-system
|
||||
|
||||
Look for this line:
|
||||
|
||||
linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}'
|
||||
|
||||
Modify the fields according to your needs. The "min" field indicates the
|
||||
minimal number of DNS backends. The actual number of backends is
|
||||
calculated using this equation:
|
||||
|
||||
replicas = max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )
|
||||
|
||||
Note that the values of both `coresPerReplica` and `nodesPerReplica` are
|
||||
integers.
|
||||
|
||||
The idea is that when a cluster is using nodes that have many cores,
|
||||
`coresPerReplica` dominates. When a cluster is using nodes that have fewer
|
||||
cores, `nodesPerReplica` dominates.
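For example, in a hypothetical cluster of 100 nodes with 8 cores each (800 cores total), the default parameters shown above give:

    replicas = max( ceil( 800 * 1/256 ), ceil( 100 * 1/16 ) ) = max( 4, 7 ) = 7

so with these relatively small 8-core nodes, `nodesPerReplica` is the dominating term.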
|
||||
|
||||
There are other supported scaling patterns. For details, see
|
||||
[cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler).
|
||||
|
||||
### Disable DNS horizontal autoscaling
|
||||
|
||||
There are a few options for turning off DNS horizontal autoscaling. Which option to
|
||||
use depends on different conditions.
|
||||
|
||||
#### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas
|
||||
|
||||
This option works for all situations. Enter this command:
|
||||
|
||||
kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
|
||||
|
||||
The output is:
|
||||
|
||||
deployment "kube-dns-autoscaler" scaled
|
||||
|
||||
Verify that the replica count is zero:
|
||||
|
||||
    kubectl get deployment --namespace=kube-system
|
||||
|
||||
The output displays 0 in the DESIRED and CURRENT columns:
|
||||
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
...
|
||||
kube-dns-autoscaler 0 0 0 0 ...
|
||||
...
|
||||
|
||||
#### Option 2: Delete the kube-dns-autoscaler deployment
|
||||
|
||||
This option works if kube-dns-autoscaler is under your own control, which means
|
||||
no one will re-create it:
|
||||
|
||||
kubectl delete deployment kube-dns-autoscaler --namespace=kube-system
|
||||
|
||||
The output is:
|
||||
|
||||
deployment "kube-dns-autoscaler" deleted
|
||||
|
||||
#### Option 3: Delete the kube-dns-autoscaler manifest file from the master node
|
||||
|
||||
This option works if kube-dns-autoscaler is under the control of the
|
||||
[Addon Manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/README.md),
|
||||
and you have write access to the master node.
|
||||
|
||||
Sign in to the master node and delete the corresponding manifest file.
|
||||
The common path to the kube-dns-autoscaler manifest file is:
|
||||
|
||||
/etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
|
||||
|
||||
After the manifest file is deleted, the Addon Manager will delete the
|
||||
kube-dns-autoscaler Deployment.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture discussion %}
|
||||
|
||||
### Understanding how DNS horizontal autoscaling works
|
||||
|
||||
* The cluster-proportional-autoscaler application is deployed separately from
|
||||
the DNS service.
|
||||
|
||||
* An autoscaler Pod runs a client that polls the Kubernetes API server for the
|
||||
number of nodes and cores in the cluster.
|
||||
|
||||
* A desired replica count is calculated and applied to the DNS backends based on
|
||||
the current schedulable nodes and cores and the given scaling parameters.
|
||||
|
||||
* The scaling parameters and data points are provided via a ConfigMap to the
|
||||
autoscaler, and it refreshes its parameters table every poll interval to be up
|
||||
to date with the latest desired scaling parameters.
|
||||
|
||||
* Changes to the scaling parameters are allowed without rebuilding or restarting
|
||||
the autoscaler Pod.
|
||||
|
||||
* The autoscaler provides a controller interface to support two control
|
||||
patterns: *linear* and *ladder*.
|
||||
|
||||
### Future enhancements
|
||||
|
||||
Control patterns, in addition to linear and ladder, that consider custom metrics
|
||||
are under consideration as a future development.
|
||||
|
||||
Scaling of DNS backends based on DNS-specific metrics is under consideration as
|
||||
a future development. The current implementation, which uses the number of nodes
|
||||
and cores in cluster, is limited.
|
||||
|
||||
Support for custom metrics, similar to that provided by
|
||||
[Horizontal Pod Autoscaling](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/),
|
||||
is under consideration as a future development.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
Learn more about the
|
||||
[implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler).
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,88 @@
|
|||
---
|
||||
assignees:
|
||||
- davidopp
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to safely drain a machine, respecting the application-level
|
||||
disruption SLOs you have specified using PodDisruptionBudget.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
This task assumes that you have met the following prerequisites:
|
||||
|
||||
* You are using Kubernetes release >= 1.5.
|
||||
* You have created [PodDisruptionBudget(s)](/docs/admin/disruptions.md) to express the
|
||||
application-level disruption SLOs you want the system to enforce.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Use `kubectl drain` to remove a node from service
|
||||
|
||||
You can use `kubectl drain` to safely evict all of your pods from a
|
||||
node before you perform maintenance on the node (e.g. kernel upgrade,
|
||||
hardware maintenance, etc.). Safe evictions allow the pod's containers
|
||||
to
|
||||
[gracefully terminate](/docs/user-guide/production-pods.md#lifecycle-hooks-and-termination-notice) and
|
||||
will respect the `PodDisruptionBudgets` you have specified.
|
||||
|
||||
**Note:** By default `kubectl drain` will ignore certain system pods on the node
|
||||
that cannot be killed; see
|
||||
the [kubectl drain](/docs/user-guide/kubectl/kubectl_drain.md)
|
||||
documentation for more details.
|
||||
|
||||
When `kubectl drain` returns successfully, that indicates that all of
|
||||
the pods (except the ones excluded as described in the previous paragraph)
|
||||
have been safely evicted (respecting the desired graceful
|
||||
termination period, and without violating any application-level
|
||||
disruption SLOs). It is then safe to bring down the node by powering
|
||||
down its physical machine or, if running on a cloud platform, deleting its
|
||||
virtual machine.
|
||||
|
||||
First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with
|
||||
```shell
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
Next, tell Kubernetes to drain the node:
|
||||
```shell
|
||||
kubectl drain <node name>
|
||||
```
|
||||
|
||||
Once it returns (without giving an error), you can power down the node
|
||||
(or equivalently, if on a cloud platform, delete the virtual machine backing the node).
|
||||
If you leave the node in the cluster during the maintenance operation, you need to run
|
||||
```shell
|
||||
kubectl uncordon <node name>
|
||||
```
|
||||
afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
|
||||
|
||||
### Draining multiple nodes in parallel
|
||||
|
||||
The `kubectl drain` command should only be issued to a single node at a
|
||||
time. However, you can run multiple `kubectl drain` commands for
|
||||
different nodes in parallel, in different terminals or in the
|
||||
background. Multiple drain commands running concurrently will still
|
||||
respect the `PodDisruptionBudget` you specify.
|
||||
|
||||
For example, suppose you have a StatefulSet with three replicas and have
|
||||
set a `PodDisruptionBudget` for that set specifying `minAvailable: 2`.
|
||||
`kubectl drain` will only evict a pod from the StatefulSet if all
|
||||
three pods are ready, and if you issue multiple drain commands in
|
||||
parallel, Kubernetes will respect the PodDisruptionBudget and ensure
|
||||
that only one pod is unavailable at any given time. Any drains that
|
||||
would cause the number of ready replicas to fall below the specified
|
||||
budget are blocked.
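For example, a minimal sketch that drains two nodes (placeholder names) concurrently from a single shell by backgrounding each command:

```shell
kubectl drain node-1 &
kubectl drain node-2 &
wait    # both drains respect the same PodDisruptionBudget
```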
|
||||
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
*TODO: link to other docs about Stateful Set?*
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,109 @@
|
|||
---
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to configure a Pod to use a Volume for storage.
|
||||
|
||||
A Container's file system lives only as long as the Container does, so when a
|
||||
Container terminates and restarts, changes to the filesystem are lost. For more
|
||||
consistent storage that is independent of the Container, you can use a
|
||||
[Volume](/docs/user-guide/volumes). This is especially important for stateful
|
||||
applications, such as key-value stores and databases. For example, Redis is a
|
||||
key-value cache and store.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Configuring a volume for a Pod
|
||||
|
||||
In this exercise, you create a Pod that runs one Container. This Pod has a
|
||||
Volume of type
|
||||
[emptyDir](/docs/user-guide/volumes/#emptydir)
|
||||
that lasts for the life of the Pod, even if the Container terminates and
|
||||
restarts. Here is the configuration file for the Pod:
|
||||
|
||||
{% include code.html language="yaml" file="pod-redis.yaml" ghlink="/docs/tasks/configure-pod-container/pod-redis.yaml" %}
|
||||
|
||||
1. Create the Pod:
|
||||
|
||||
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/pod-redis.yaml
|
||||
|
||||
1. Verify that the Pod's Container is running, and then watch for changes to
|
||||
the Pod:
|
||||
|
||||
kubectl get --watch pod redis
|
||||
|
||||
The output looks like this:
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis 1/1 Running 0 13s
|
||||
|
||||
1. In another terminal, get a shell to the running Container:
|
||||
|
||||
kubectl exec -it redis -- /bin/bash
|
||||
|
||||
1. In your shell, go to `/data/redis`, and create a file:
|
||||
|
||||
root@redis:/data/redis# echo Hello > test-file
|
||||
|
||||
1. In your shell, list the running processes:
|
||||
|
||||
root@redis:/data/redis# ps aux
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379
|
||||
root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash
|
||||
root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux
|
||||
|
||||
1. In your shell, kill the redis process:
|
||||
|
||||
root@redis:/data/redis# kill <pid>
|
||||
|
||||
where `<pid>` is the redis process ID (PID).
|
||||
|
||||
1. In your original terminal, watch for changes to the redis Pod. Eventually,
|
||||
you will see something like this:
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis 1/1 Running 0 13s
|
||||
redis 0/1 Completed 0 6m
|
||||
redis 1/1 Running 1 6m
|
||||
|
||||
At this point, the Container has terminated and restarted. This is because the
|
||||
redis Pod has a
|
||||
[restartPolicy](http://kubernetes.io/docs/api-reference/v1/definitions#_v1_podspec)
|
||||
of `Always`.
|
||||
|
||||
1. Get a shell into the restarted Container:
|
||||
|
||||
kubectl exec -it redis -- /bin/bash
|
||||
|
||||
1. In your shell, go to `/data/redis`, and verify that `test-file` is still there.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
* See [Volume](/docs/api-reference/v1/definitions/#_v1_volume).
|
||||
|
||||
* See [Pod](http://kubernetes.io/docs/api-reference/v1/definitions#_v1_pod).
|
||||
|
||||
* In addition to the local disk storage provided by `emptyDir`, Kubernetes
|
||||
supports many different network-attached storage solutions, including PD on
|
||||
GCE and EBS on EC2, which are preferred for critical data, and will handle
|
||||
details such as mounting and unmounting the devices on the nodes. See
|
||||
[Volumes](/docs/user-guide/volumes) for more details.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,14 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: redis
|
||||
spec:
|
||||
containers:
|
||||
- name: redis
|
||||
image: redis
|
||||
volumeMounts:
|
||||
- name: redis-storage
|
||||
mountPath: /data/redis
|
||||
volumes:
|
||||
- name: redis-storage
|
||||
emptyDir: {}
|
|
@ -26,6 +26,20 @@ single thing, typically by giving a short sequence of steps.
|
|||
#### Administering a Cluster
|
||||
|
||||
* [Assigning Pods to Nodes](/docs/tasks/administer-cluster/assign-pods-nodes/)
|
||||
* [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
|
||||
* [Safely Draining a Node while Respecting Application SLOs](/docs/tasks/administer-cluster/safely-drain-node/)
|
||||
|
||||
#### Managing Stateful Applications
|
||||
|
||||
* [Upgrading from PetSets to StatefulSets](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/)
|
||||
* [Scaling a StatefulSet](/docs/tasks/manage-stateful-set/scale-stateful-set/)
|
||||
* [Deleting a StatefulSet](/docs/tasks/manage-stateful-set/deleting-a-statefulset/)
|
||||
* [Debugging a StatefulSet](/docs/tasks/manage-stateful-set/debugging-a-statefulset/)
|
||||
* [Force Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/)
|
||||
|
||||
#### Troubleshooting
|
||||
|
||||
* [Debugging Init Containers](/docs/tasks/troubleshoot/debug-init-containers/)
|
||||
|
||||
### What's next
|
||||
|
||||
|
|
|
@ -0,0 +1,85 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This task shows you how to debug a StatefulSet.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
|
||||
* You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
|
||||
* You should have a StatefulSet running that you want to investigate.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Debugging a StatefulSet
|
||||
|
||||
To list all the Pods that belong to a StatefulSet and have the label `app=myapp` set on them, you can use the following:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=myapp
|
||||
```
|
||||
|
||||
If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time, refer to the [Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/) task for instructions on how to deal with them. You can debug individual Pods in a StatefulSet using the [Debugging Pods](/docs/user-guide/debugging-pods-and-replication-controllers/#debugging-pods) guide.
|
||||
|
||||
StatefulSets provide a debug mechanism to pause all controller operations on Pods using an annotation. Setting the `pod.alpha.kubernetes.io/initialized` annotation to `"false"` on any StatefulSet Pod will *pause* all operations of the StatefulSet. When paused, the StatefulSet will not perform any scaling operations. Once the debug hook is set, you can execute commands within the containers of StatefulSet pods without interference from scaling operations. You can set the annotation to `"false"` by executing the following:
|
||||
|
||||
```shell
|
||||
kubectl annotate pods <pod-name> pod.alpha.kubernetes.io/initialized="false" --overwrite
|
||||
```
|
||||
|
||||
When the annotation is set to `"false"`, the StatefulSet will not respond to its Pods becoming unhealthy or unavailable. It will not create replacement Pods until the annotation is removed or set to `"true"` on each StatefulSet Pod.
|
||||
|
||||
#### Step-wise Initialization
|
||||
|
||||
You can also use the same annotation to debug race conditions during bootstrapping of the StatefulSet by setting the `pod.alpha.kubernetes.io/initialized` annotation to `"false"` in the `.spec.template.metadata.annotations` field of the StatefulSet prior to creating it.
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: my-app
|
||||
spec:
|
||||
serviceName: "my-app"
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: my-app
|
||||
annotations:
|
||||
pod.alpha.kubernetes.io/initialized: "false"
|
||||
...
|
||||
...
|
||||
...
|
||||
|
||||
```
|
||||
|
||||
After setting the annotation, if you create the StatefulSet, you can wait for each Pod to come up and verify that it has initialized correctly. The StatefulSet will not create any subsequent Pods until the debug annotation is set to `"true"` (or removed) on each Pod that has already been created. You can set the annotation to `"true"` by executing the following:
|
||||
|
||||
```shell
|
||||
kubectl annotate pods <pod-name> pod.alpha.kubernetes.io/initialized="true" --overwrite
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about [debugging an init-container](/docs/tasks/troubleshoot/debug-init-containers/).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,77 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- erictune
|
||||
- foxish
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to delete Pods which are part of a stateful set, and explains the considerations to keep in mind when doing so.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* This is a fairly advanced task and has the potential to violate some of the properties inherent to StatefulSet.
|
||||
* Before proceeding, make yourself familiar with the considerations enumerated below.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
|
||||
### StatefulSet considerations
|
||||
|
||||
In normal operation of a StatefulSet, there is **never** a need to force delete a StatefulSet Pod. The StatefulSet controller is responsible for creating, scaling and deleting members of the StatefulSet. It tries to ensure that the specified number of Pods from ordinal 0 through N-1 are alive and ready. StatefulSet ensures that, at any time, there is at most one Pod with a given identity running in a cluster. This is referred to as *at most one* semantics provided by a StatefulSet.
|
||||
|
||||
Manual force deletion should be undertaken with caution, as it has the potential to violate the at most one semantics inherent to StatefulSet. StatefulSets may be used to run distributed and clustered applications which have a need for a stable network identity and stable storage. These applications often have configuration which relies on an ensemble of a fixed number of members with fixed identities. Having multiple members with the same identity can be disastrous and may lead to data loss (e.g. split brain scenario in quorum-based systems).
|
||||
|
||||
### Deleting Pods
|
||||
|
||||
You can perform a graceful pod deletion with the following command:
|
||||
|
||||
```shell
|
||||
kubectl delete pods <pod>
|
||||
```
|
||||
|
||||
For the above to lead to graceful termination, the Pod **must not** specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting a `pod.Spec.TerminationGracePeriodSeconds` of 0 seconds is unsafe and strongly discouraged for StatefulSet Pods. Graceful deletion is safe and will ensure that the [Pod shuts down gracefully](/docs/user-guide/pods/#termination-of-pods) before the kubelet deletes the name from the apiserver.
|
||||
|
||||
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/admin/node/#node-condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
|
||||
* The Node object is deleted (either by you, or by the [Node Controller](/docs/admin/node)).
|
||||
* The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.
|
||||
* Force deletion of the Pod by the user.
|
||||
|
||||
The recommended best practice is to use the first or second approach. If a Node is confirmed to be dead (e.g. permanently disconnected from the network, powered down, etc), then delete the node object. If the node is suffering from a network partition, then try to resolve this or wait for it to resolve. When the partition heals, the kubelet will complete the deletion of the Pod and free up its name in the apiserver.
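For example, if you have confirmed that a Node is permanently dead and want to take the first approach, you can delete the Node object with `kubectl` (the node name is a placeholder):

```shell
kubectl delete node <node-name>
```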
|
||||
|
||||
Normally, the system completes the deletion once the Pod is no longer running on a Node, or the Node is deleted by an administrator. You may override this by force deleting the Pod.
|
||||
|
||||
#### Force Deletion
|
||||
|
||||
Force deletions **do not** wait for confirmation from the kubelet that the Pod has been terminated. Irrespective of whether a force deletion is successful in killing a Pod, it will immediately free up the name from the apiserver. This would let the StatefulSet controller create a replacement Pod with that same identity; this can lead to the duplication of a still-running Pod, and if that Pod can still communicate with the other members of the StatefulSet, it will violate the at-most-one semantics that StatefulSet is designed to guarantee.
|
||||
|
||||
When you force delete a StatefulSet pod, you are asserting that the Pod in question will never again make contact with other Pods in the StatefulSet and its name can be safely freed up for a replacement to be created.
|
||||
|
||||
If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:
|
||||
|
||||
```shell
|
||||
kubectl delete pods <pod> --grace-period=0 --force
|
||||
```
|
||||
|
||||
If you're using any version of kubectl <= 1.4, you should omit the `--force` option and use:
|
||||
|
||||
```shell
|
||||
kubectl delete pods <pod> --grace-period=0
|
||||
```
|
||||
|
||||
Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about [debugging a StatefulSet](/docs/tasks/manage-stateful-set/debugging-a-statefulset/).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,87 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This task shows you how to delete a StatefulSet.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* This task assumes you have an application running on your cluster represented by a StatefulSet.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Deleting a StatefulSet
|
||||
|
||||
You can delete a StatefulSet in the same way you delete other resources in kubernetes: use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
|
||||
|
||||
```shell
|
||||
kubectl delete -f <file.yaml>
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl delete statefulsets <statefulset-name>
|
||||
```
|
||||
|
||||
You may need to delete the associated headless service separately after the StatefulSet itself is deleted.
|
||||
|
||||
```shell
|
||||
kubectl delete service <service-name>
|
||||
```
|
||||
|
||||
Deleting a StatefulSet through kubectl will scale it down to 0, thereby deleting all pods that are a part of it. If you want to delete just the StatefulSet and not the pods, use `--cascade=false`.
|
||||
|
||||
```shell
|
||||
kubectl delete -f <file.yaml> --cascade=false
|
||||
```
|
||||
|
||||
By passing `--cascade=false` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows:
|
||||
|
||||
```shell
|
||||
kubectl delete pods -l app=myapp
|
||||
```
|
||||
|
||||
#### Persistent Volumes
|
||||
|
||||
Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the Pods have left the [terminating state](/docs/user-guide/pods/index#termination-of-pods) might trigger deletion of the backing Persistent Volumes, depending on the storage class and reclaim policy. You should never assume that you will still be able to access a volume after claim deletion.
|
||||
|
||||
**Note: Use caution when deleting a PVC, as it may lead to data loss.**
|
||||
|
||||
#### Complete deletion of a StatefulSet
|
||||
|
||||
To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
|
||||
|
||||
```shell{% raw %}
|
||||
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')
|
||||
kubectl delete statefulset -l app=myapp
|
||||
sleep $grace
|
||||
kubectl delete pvc -l app=myapp
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
In the example above, the Pods have the label `app=myapp`; substitute your own label as appropriate.
|
||||
|
||||
#### Force deletion of StatefulSet pods
|
||||
|
||||
If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states for an extended period of time, you may need to manually intervene to forcefully delete the pods from the apiserver. This is a potentially dangerous task. Refer to [Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/) for details.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about [force deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,101 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to scale a StatefulSet.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* StatefulSets are only available in Kubernetes version 1.5 or later.
|
||||
* **Not all stateful applications scale nicely.** You need to understand your StatefulSets well before continuing. If you're unsure, remember that it might not be safe to scale your StatefulSets.
|
||||
* You should perform scaling only when you're sure that your stateful application
|
||||
cluster is completely healthy.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Use `kubectl` to scale StatefulSets
|
||||
|
||||
Make sure you have `kubectl` upgraded to Kubernetes version 1.5 or later before
|
||||
continuing. If you're unsure, run `kubectl version` and check `Client Version`
|
||||
for which kubectl you're using.
|
||||
|
||||
#### `kubectl scale`
|
||||
|
||||
First, find the StatefulSet you want to scale. Remember, you need to first understand if you can scale it or not.
|
||||
|
||||
```shell
|
||||
kubectl get statefulsets <stateful-set-name>
|
||||
```
|
||||
|
||||
Change the number of replicas of your StatefulSet:
|
||||
|
||||
```shell
|
||||
kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
|
||||
```
|
||||
|
||||
#### Alternative: `kubectl apply` / `kubectl edit` / `kubectl patch`
|
||||
|
||||
Alternatively, you can do [in-place updates](/docs/user-guide/managing-deployments/#in-place-updates-of-resources) on your StatefulSets.
|
||||
|
||||
If your StatefulSet was initially created with `kubectl apply` or `kubectl create --save-config`,
|
||||
update `.spec.replicas` of the StatefulSet manifests, and then do a `kubectl apply`:
|
||||
|
||||
```shell
|
||||
kubectl apply -f <stateful-set-file-updated>
|
||||
```
|
||||
|
||||
Otherwise, edit that field with `kubectl edit`:
|
||||
|
||||
```shell
|
||||
kubectl edit statefulsets <stateful-set-name>
|
||||
```
|
||||
|
||||
Or use `kubectl patch`:
|
||||
|
||||
```shell
|
||||
kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
|
||||
```
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
#### Scaling down does not work right
|
||||
|
||||
You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place
|
||||
after those stateful Pods become running and ready.
|
||||
|
||||
With a StatefulSet of size > 1, if there is an unhealthy Pod, there is no way
|
||||
for Kubernetes to know (yet) if it is due to a permanent fault or a transient
|
||||
one (upgrade/maintenance/node reboot). If the Pod is unhealthy due to a permanent fault, scaling
|
||||
without correcting the fault may lead to a state where the StatefulSet membership
|
||||
drops below a certain minimum number of "replicas" that are needed to function
|
||||
correctly. This may cause your StatefulSet to become unavailable.
|
||||
|
||||
If the Pod is unhealthy due to a transient fault and the Pod might become available again,
|
||||
the transient error may interfere with your scale-up/scale-down operation. Some distributed
|
||||
databases have issues when nodes join and leave at the same time. It is better
|
||||
to reason about scaling operations at the application level in these cases, and
|
||||
perform scaling only when you're sure that your stateful application cluster is
|
||||
completely healthy.
|
||||
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about [deleting a StatefulSet](/docs/tasks/manage-stateful-set/deleting-a-statefulset/).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,166 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to upgrade from PetSets (Kubernetes version 1.3 or 1.4) to *StatefulSets* (Kubernetes version 1.5 or later).
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* If you don't have PetSets in your current cluster, or you don't plan to upgrade
|
||||
your master to Kubernetes 1.5 or later, you can skip this task.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Differences between alpha PetSets and beta StatefulSets
|
||||
|
||||
PetSet was introduced as an alpha resource in Kubernetes release 1.3, and was renamed to StatefulSet as a beta resource in 1.5.
|
||||
Here are some notable changes:
|
||||
|
||||
* **StatefulSet is the new PetSet**: PetSet is no longer available in Kubernetes release 1.5 or later. It becomes beta StatefulSet. To understand why the name was changed, see this [discussion thread](https://github.com/kubernetes/kubernetes/issues/27430).
|
||||
* **StatefulSet guards against split brain**: StatefulSets guarantee at most one Pod for a given ordinal index can be running anywhere in a cluster, to guard against split brain scenarios with distributed applications. *TODO: Link to doc about fencing.*
|
||||
* **Flipped debug annotation behavior**: The default value of the debug annotation (`pod.alpha.kubernetes.io/initialized`) is now `true`. The absence of this annotation will pause PetSet operations, but will NOT pause StatefulSet operations. In most cases, you no longer need this annotation in your StatefulSet manifests.
|
||||
|
||||
|
||||
### Upgrading from PetSets to StatefulSets
|
||||
|
||||
Note that these steps need to be done in the specified order. You **should
|
||||
NOT upgrade your Kubernetes master, nodes, or `kubectl` to Kubernetes version
|
||||
1.5 or later**, until told to do so.
|
||||
|
||||
#### Find all PetSets and their manifests
|
||||
|
||||
First, find all existing PetSets in your cluster:
|
||||
|
||||
```shell
|
||||
kubectl get petsets --all-namespaces
|
||||
```
|
||||
|
||||
If you don't find any existing PetSets, you can safely upgrade your cluster to
|
||||
Kubernetes version 1.5 or later.
|
||||
|
||||
If you find existing PetSets and you have all their manifests at hand, you can continue to the next step to prepare StatefulSet manifests.
|
||||
|
||||
Otherwise, you need to save their manifests so that you can recreate them as StatefulSets later.
|
||||
Here's an example command for you to save all existing PetSets as one file.
|
||||
|
||||
```shell
|
||||
# Save all existing PetSets in all namespaces into a single file. Only needed when you don't have their manifests at hand.
|
||||
kubectl get petsets --all-namespaces -o yaml > all-petsets.yaml
|
||||
```
|
||||
|
||||
#### Prepare StatefulSet manifests
|
||||
|
||||
Now, for every PetSet manifest you have, prepare a corresponding StatefulSet manifest:
|
||||
|
||||
1. Change `apiVersion` from `apps/v1alpha1` to `apps/v1beta1`.
|
||||
2. Change `kind` from `PetSet` to `StatefulSet`.
|
||||
3. If you have the debug hook annotation `pod.alpha.kubernetes.io/initialized` set to `true`, you can remove it because it's redundant. If you don't have this annotation, you should add one, with the value set to `false`, to pause StatefulSet operations.
|
||||
|
||||
It's recommended that you keep both PetSet manifests and StatefulSet manifests, so that you can safely roll back and recreate your PetSets,
|
||||
if you decide not to upgrade your cluster.
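As a rough sketch, if you saved your PetSets to `all-petsets.yaml` in the earlier step, the first two changes can be applied with `sed` (the output file name is arbitrary, and the result still needs a manual pass for the debug annotation described in step 3):

```shell
sed -e 's|apps/v1alpha1|apps/v1beta1|' \
    -e 's|kind: PetSet|kind: StatefulSet|' \
    all-petsets.yaml > all-statefulsets.yaml
```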
|
||||
|
||||
#### Delete all PetSets without cascading
|
||||
|
||||
If you find existing PetSets in your cluster in the previous step, you need to delete all PetSets *without cascading*. You can do this from `kubectl` with `--cascade=false`.
|
||||
Note that if the flag isn't set, **cascading deletion will be performed by default**, and all Pods managed by your PetSets will be gone.
|
||||
|
||||
Delete those PetSets by specifying file names. This only works when
|
||||
the files contain only PetSets, but not other resources such as Services:
|
||||
|
||||
```shell
|
||||
# Delete all existing PetSets without cascading
|
||||
# Note that <pet-set-file> should only contain PetSets that you want to delete, but not any other resources
|
||||
kubectl delete -f <pet-set-file> --cascade=false
|
||||
```
|
||||
|
||||
Alternatively, delete them by specifying resource names:
|
||||
|
||||
```shell
|
||||
# Alternatively, delete them by name and namespace without cascading
|
||||
kubectl delete petsets <pet-set-name> -n=<pet-set-namespace> --cascade=false
|
||||
```
|
||||
|
||||
Make sure you've deleted all PetSets in the system:
|
||||
|
||||
```shell
|
||||
# Get all PetSets again to make sure you deleted them all
|
||||
# This should return nothing
|
||||
kubectl get petsets --all-namespaces
|
||||
```
|
||||
|
||||
At this moment, you've deleted all PetSets in your cluster, but not their Pods, Persistent Volumes, or Persistent Volume Claims.
|
||||
However, since the Pods are not managed by PetSets anymore, they will be vulnerable to node failures until you finish the master upgrade and recreate StatefulSets.
|
||||
|
||||
#### Upgrade your master to Kubernetes version 1.5 or later
|
||||
|
||||
Now, you can [upgrade your Kubernetes master](/docs/admin/cluster-management/#upgrading-a-cluster) to Kubernetes version 1.5 or later.
|
||||
Note that **you should NOT upgrade Nodes at this time**, because the Pods
|
||||
(that were once managed by PetSets) are now vulnerable to node failures.
|
||||
|
||||
#### Upgrade kubectl to Kubernetes version 1.5 or later
|
||||
|
||||
Upgrade `kubectl` to Kubernetes version 1.5 or later, following [the steps for installing and setting up
|
||||
kubectl](/docs/user-guide/prereqs/).
|
||||
|
||||
#### Create StatefulSets
|
||||
|
||||
Make sure you have both master and `kubectl` upgraded to Kubernetes version 1.5
|
||||
or later before continuing:
|
||||
|
||||
```shell
|
||||
kubectl version
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```shell
|
||||
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"0776eab45fe28f02bbeac0f05ae1a203051a21eb", GitTreeState:"clean", BuildDate:"2016-11-24T22:35:03Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}
|
||||
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"0776eab45fe28f02bbeac0f05ae1a203051a21eb", GitTreeState:"clean", BuildDate:"2016-11-24T22:30:23Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}
|
||||
```
|
||||
|
||||
If both `Client Version` (`kubectl` version) and `Server Version` (master
|
||||
version) are 1.5 or later, you are good to go.
|
||||
|
||||
Create StatefulSets to adopt the Pods belonging to the deleted PetSets with the
|
||||
StatefulSet manifests generated in the previous step:
|
||||
|
||||
```shell
|
||||
kubectl create -f <stateful-set-file>
|
||||
```
|
||||
|
||||
Make sure all StatefulSets are created and running as expected in the
|
||||
newly-upgraded cluster:
|
||||
|
||||
```shell
|
||||
kubectl get statefulsets --all-namespaces
|
||||
```
|
||||
|
||||
#### Upgrade nodes to Kubernetes version 1.5 or later (optional)
|
||||
|
||||
You can now [upgrade Kubernetes nodes](/docs/admin/cluster-management/#upgrading-a-cluster)
|
||||
to Kubernetes version 1.5 or later. This step is optional, but needs to be done after all StatefulSets
|
||||
are created to adopt PetSets' Pods.
|
||||
|
||||
You should be running Node version >= 1.1.0 to run StatefulSets safely. Older versions do not support features which allow the StatefulSet to guarantee that at any time, there is **at most** one Pod with a given identity running in a cluster.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about [scaling a StatefulSet](/docs/tasks/manage-stateful-set/scale-stateful-set/).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,137 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to investigate problems related to the execution of
|
||||
Init Containers.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* You should be familiar with the basics of
|
||||
[Init Containers](/docs/user-guide/pods/init-containers/).
|
||||
* You should have a [Pod](/docs/user-guide/pods/) you want to debug that uses
|
||||
Init Containers. The example command lines below refer to the Pod as
|
||||
`<pod-name>` and the Init Containers as `<init-container-1>` and
|
||||
`<init-container-2>`.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
### Checking the status of Init Containers
|
||||
|
||||
The Pod status will give you an overview of Init Container execution:
|
||||
|
||||
```shell
|
||||
kubectl get pod <pod-name>
|
||||
```
|
||||
|
||||
For example, a status of `Init:1/2` indicates that one of two Init Containers
|
||||
has completed successfully:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
<pod-name> 0/1 Init:1/2 0 7s
|
||||
```
|
||||
|
||||
See [Understanding Pod status](#understanding-pod-status) for more examples of
|
||||
status values and their meanings.
|
||||
|
||||
### Getting details about Init Containers
|
||||
|
||||
You can see detailed information about Init Container execution by running:
|
||||
|
||||
```shell
|
||||
kubectl describe pod <pod-name>
|
||||
```
|
||||
|
||||
For example, a Pod with two Init Containers might show the following:
|
||||
|
||||
```
|
||||
Init Containers:
|
||||
<init-container-1>:
|
||||
Container ID: ...
|
||||
...
|
||||
State: Terminated
|
||||
Reason: Completed
|
||||
Exit Code: 0
|
||||
Started: ...
|
||||
Finished: ...
|
||||
Ready: True
|
||||
Restart Count: 0
|
||||
...
|
||||
<init-container-2>:
|
||||
Container ID: ...
|
||||
...
|
||||
State: Waiting
|
||||
Reason: CrashLoopBackOff
|
||||
Last State: Terminated
|
||||
Reason: Error
|
||||
Exit Code: 1
|
||||
Started: ...
|
||||
Finished: ...
|
||||
Ready: False
|
||||
Restart Count: 3
|
||||
...
|
||||
```
|
||||
|
||||
You can also access the Init Container statuses programmatically by reading the
|
||||
`pod.beta.kubernetes.io/init-container-statuses` annotation on the Pod:
|
||||
|
||||
{% raw %}
|
||||
```shell
|
||||
kubectl get pod <pod-name> --template '{{index .metadata.annotations "pod.beta.kubernetes.io/init-container-statuses"}}'
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
This will return the same information as above, but in raw JSON format.
|
||||
|
||||
### Accessing logs from Init Containers
|
||||
|
||||
You can access logs for an Init Container by passing its Container name along
|
||||
with the Pod name:
|
||||
|
||||
```shell
|
||||
kubectl logs <pod-name> -c <init-container-2>
|
||||
```
|
||||
|
||||
If your Init Container runs a shell script, it helps to enable printing of
|
||||
commands as they're executed. For example, you can do this in Bash by running
|
||||
`set -x` at the beginning of the script.
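For example, the beginning of such a script might look like the following minimal sketch; the remaining commands are whatever initialization work your container performs:

```shell
#!/bin/sh
# Echo each command (with its expanded arguments) before running it,
# so it shows up in the Init Container's logs.
set -x

# ...the rest of the initialization script follows...
```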
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture discussion %}
|
||||
|
||||
### Understanding Pod status
|
||||
|
||||
A Pod status beginning with `Init:` summarizes the status of Init Container
|
||||
execution. The table below describes some example status values that you might
|
||||
see while debugging Init Containers.
|
||||
|
||||
Status | Meaning
|
||||
------ | -------
|
||||
`Init:N/M` | The Pod has `M` Init Containers, and `N` have completed so far.
|
||||
`Init:Error` | An Init Container has failed to execute.
|
||||
`Init:CrashLoopBackOff` | An Init Container has failed repeatedly.
|
||||
|
||||
A Pod with status `Pending` has not yet begun executing Init Containers.
|
||||
A Pod with status `PodInitializing` or `Running` has already finished executing
|
||||
Init Containers.
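If you have many Pods, one quick way to surface only the ones that are still working through (or failing in) their Init Containers is to filter on the status column; this is just a sketch, so adjust the namespace flags to your needs:

```shell
# Show only Pods whose STATUS column reports Init Container progress or errors.
kubectl get pods --all-namespaces | grep 'Init:'
```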
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
||||
|
|
@ -20,8 +20,12 @@ each of which has a sequence of steps.
|
|||
|
||||
#### Stateful Applications
|
||||
|
||||
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [Running a Single-Instance Stateful Application](/docs/tutorials/stateful-application/run-stateful-application/)
|
||||
|
||||
* [Running a Replicated Stateful Application](/docs/tutorials/stateful-application/run-replicated-stateful-application/)
|
||||
|
||||
### What's next
|
||||
|
||||
If you would like to write a tutorial, see
|
||||
|
|
|
@ -0,0 +1,339 @@
|
|||
---
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
Applications running in a Kubernetes cluster find and communicate with each
|
||||
other, and the outside world, through the Service abstraction. This document
|
||||
explains what happens to the source IP of packets sent to different types
|
||||
of Services, and how you can toggle this behavior according to your needs.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
### Terminology
|
||||
|
||||
This document makes use of the following terms:
|
||||
|
||||
* [NAT](https://en.wikipedia.org/wiki/Network_address_translation): network address translation
|
||||
* [Source NAT](/docs/user-guide/services/#ips-and-vips): replacing the source IP on a packet, usually with a node's IP
|
||||
* [Destination NAT](/docs/user-guide/services/#ips-and-vips): replacing the destination IP on a packet, usually with a pod IP
|
||||
* [VIP](/docs/user-guide/services/#ips-and-vips): a virtual IP, such as the one assigned to every Kubernetes Service
|
||||
* [Kube-proxy](/docs/user-guide/services/#virtual-ips-and-service-proxies): a network daemon that orchestrates Service VIP management on every node
|
||||
|
||||
|
||||
### Prerequisites
|
||||
|
||||
You must have a working Kubernetes 1.5 cluster to run the examples in this
|
||||
document. The examples use a small nginx webserver that echoes back the source
|
||||
IP of requests it receives through an HTTP header. You can create it as follows:
|
||||
|
||||
```console
|
||||
$ kubectl run source-ip-app --image=gcr.io/google_containers/echoserver:1.4
|
||||
deployment "source-ip-app" created
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture objectives %}
|
||||
|
||||
* Expose a simple application through various types of Services
|
||||
* Understand how each Service type handles source IP NAT
|
||||
* Understand the tradeoffs involved in preserving source IP
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture lessoncontent %}
|
||||
|
||||
### Source IP for Services with Type=ClusterIP
|
||||
|
||||
Packets sent to ClusterIP from within the cluster are never source NAT'd if
|
||||
you're running kube-proxy in [iptables mode](/docs/user-guide/services/#proxy-mode-iptables),
|
||||
which is the default since Kubernetes 1.2. Kube-proxy exposes its mode through
|
||||
a `proxyMode` endpoint:
|
||||
|
||||
```console
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE
|
||||
kubernetes-minion-group-6jst Ready 2h
|
||||
kubernetes-minion-group-cx31 Ready 2h
|
||||
kubernetes-minion-group-jj1t Ready 2h
|
||||
|
||||
kubernetes-minion-group-6jst $ curl localhost:10249/proxyMode
|
||||
iptables
|
||||
```
|
||||
|
||||
You can test source IP preservation by creating a Service over the source IP app:
|
||||
|
||||
```console
|
||||
$ kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
|
||||
service "clusterip" exposed
|
||||
|
||||
$ kubectl get svc clusterip
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
clusterip 10.0.170.92 <none> 80/TCP 51s
|
||||
```
|
||||
|
||||
And hitting the `ClusterIP` from a pod in the same cluster:
|
||||
|
||||
```console
|
||||
$ kubectl run busybox -it --image=busybox --restart=Never --rm
|
||||
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
|
||||
If you don't see a command prompt, try pressing enter.
|
||||
|
||||
# ip addr
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 ::1/128 scope host
|
||||
valid_lft forever preferred_lft forever
|
||||
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue
|
||||
link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff
|
||||
inet 10.244.3.8/24 scope global eth0
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 fe80::188a:84ff:feb0:26a5/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
# wget -qO - 10.0.170.92
|
||||
CLIENT VALUES:
|
||||
client_address=10.244.3.8
|
||||
command=GET
|
||||
...
|
||||
```
|
||||
|
||||
### Source IP for Services with Type=NodePort
|
||||
|
||||
As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/user-guide/services/#type-nodeport)
|
||||
are source NAT'd by default. You can test this by creating a `NodePort` Service:
|
||||
|
||||
```console
|
||||
$ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
|
||||
service "nodeport" exposed
|
||||
|
||||
$ NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
|
||||
$ NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
|
||||
```
|
||||
|
||||
If you're running on a cloud provider, you may need to open up a firewall rule
|
||||
for the `nodes:nodeport` reported above (see the sketch below).
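On GCE, for example, you might open the port with a rule like the following sketch (the rule name is illustrative; adjust the source ranges and target tags to your environment):

```console
$ gcloud compute firewall-rules create allow-source-ip-nodeport --allow=tcp:$NODEPORT
```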
|
||||
Now you can try reaching the Service from outside the cluster through the node
|
||||
port allocated above.
|
||||
|
||||
```console
|
||||
$ for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
|
||||
client_address=10.180.1.1
|
||||
client_address=10.240.0.5
|
||||
client_address=10.240.0.3
|
||||
```
|
||||
|
||||
Note that these are not your IP addresses; they're cluster-internal IPs. This is what happens:
|
||||
|
||||
* Client sends packet to `node2:nodePort`
|
||||
* `node2` replaces the source IP address (SNAT) in the packet with its own IP address
|
||||
* `node2` replaces the destination IP on the packet with the pod IP
|
||||
* packet is routed to node 1, and then to the endpoint
|
||||
* the pod's reply is routed back to node2
|
||||
* the pod's reply is sent back to the client
|
||||
|
||||
Visually:
|
||||
|
||||
```
|
||||
client
|
||||
\ ^
|
||||
\ \
|
||||
v \
|
||||
node 1 <--- node 2
|
||||
| ^ SNAT
|
||||
| | --->
|
||||
v |
|
||||
endpoint
|
||||
```
|
||||
|
||||
|
||||
To avoid this, Kubernetes 1.5 has a beta feature triggered by the
|
||||
`service.beta.kubernetes.io/external-traffic` [annotation](/docs/user-guide/load-balancer/#loss-of-client-source-ip-for-external-traffic).
|
||||
Setting it to the value `OnlyLocal` will only proxy requests to local endpoints,
|
||||
never forwarding traffic to other nodes and thereby preserving the original
|
||||
source IP address. If there are no local endpoints, packets sent to the node
|
||||
are dropped, so you can rely on the correct source IP in any packet-processing
|
||||
rules you apply to packets that make it through to the endpoint.
|
||||
|
||||
Set the annotation as follows:
|
||||
|
||||
```console
|
||||
$ kubectl annotate service nodeport service.beta.kubernetes.io/external-traffic=OnlyLocal
|
||||
service "nodeport" annotated
|
||||
```
|
||||
|
||||
Now, re-run the test:
|
||||
|
||||
```console
|
||||
$ for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
|
||||
client_address=104.132.1.79
|
||||
```
|
||||
|
||||
Note that you only got one reply, with the *right* client IP, from the one node on which the endpoint pod
|
||||
is running.
|
||||
|
||||
This is what happens:
|
||||
|
||||
* client sends packet to `node2:nodePort`, which doesn't have any endpoints
|
||||
* packet is dropped
|
||||
* client sends packet to `node1:nodePort`, which *does* have endpoints
|
||||
* node1 routes packet to endpoint with the correct source IP
|
||||
|
||||
Visually:
|
||||
|
||||
```
|
||||
client
|
||||
^ / \
|
||||
/ / \
|
||||
/ v X
|
||||
node 1 node 2
|
||||
^ |
|
||||
| |
|
||||
| v
|
||||
endpoint
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Source IP for Services with Type=LoadBalancer
|
||||
|
||||
As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer) are
|
||||
source NAT'd by default, because all schedulable Kubernetes nodes in the
|
||||
`Ready` state are eligible for loadbalanced traffic. So if packets arrive
|
||||
at a node without an endpoint, the system proxies it to a node *with* an
|
||||
endpoint, replacing the source IP on the packet with the IP of the node (as
|
||||
described in the previous section).
|
||||
|
||||
You can test this by exposing the source-ip-app through a loadbalancer:
|
||||
|
||||
```console
|
||||
$ kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
|
||||
service "loadbalancer" exposed
|
||||
|
||||
$ kubectl get svc loadbalancer
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
loadbalancer 10.0.65.118 104.198.149.140 80/TCP 5m
|
||||
|
||||
$ curl 104.198.149.140
|
||||
CLIENT VALUES:
|
||||
client_address=10.240.0.5
|
||||
...
|
||||
```
|
||||
|
||||
However, if you're running on GKE/GCE, setting the same `service.beta.kubernetes.io/external-traffic`
|
||||
annotation to `OnlyLocal` forces nodes *without* Service endpoints to remove
|
||||
themselves from the list of nodes eligible for loadbalanced traffic by
|
||||
deliberately failing health checks. We expect to roll this feature out across a
|
||||
wider range of providers before GA (see next section).
|
||||
|
||||
Visually:
|
||||
|
||||
```
|
||||
client
|
||||
|
|
||||
lb VIP
|
||||
/ ^
|
||||
v /
|
||||
health check ---> node 1 node 2 <--- health check
|
||||
200 <--- ^ | ---> 500
|
||||
| V
|
||||
endpoint
|
||||
```
|
||||
|
||||
You can test this by setting the annotation:
|
||||
|
||||
```console
|
||||
$ kubectl annotate service loadbalancer service.beta.kubernetes.io/external-traffic=OnlyLocal
|
||||
```
|
||||
|
||||
You should immediately see a second annotation allocated by Kubernetes:
|
||||
|
||||
```console
|
||||
$ kubectl get svc loadbalancer -o yaml | grep -i annotations -A 2
|
||||
annotations:
|
||||
service.beta.kubernetes.io/external-traffic: OnlyLocal
|
||||
service.beta.kubernetes.io/healthcheck-nodeport: "32122"
|
||||
```
|
||||
|
||||
The `service.beta.kubernetes.io/healthcheck-nodeport` annotation points to
|
||||
a port on every node serving the health check at `/healthz`. You can test this:
|
||||
|
||||
```
|
||||
$ kubectl get po -o wide -l run=source-ip-app
|
||||
NAME READY STATUS RESTARTS AGE IP NODE
|
||||
source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-minion-group-6jst
|
||||
|
||||
kubernetes-minion-group-6jst $ curl localhost:32122/healthz
|
||||
1 Service Endpoints found
|
||||
|
||||
kubernetes-minion-group-jj1t $ curl localhost:32122/healthz
|
||||
No Service Endpoints Found
|
||||
```
|
||||
|
||||
A service controller running on the master is responsible for allocating the cloud
|
||||
loadbalancer, and when it does so, it also allocates HTTP health checks
|
||||
pointing to this port/path on each node. Wait about 10 seconds for the 2 nodes
|
||||
without endpoints to fail health checks, then curl the loadbalancer IP:
|
||||
|
||||
```console
|
||||
$ curl 104.198.149.140
|
||||
CLIENT VALUES:
|
||||
client_address=104.132.1.79
|
||||
...
|
||||
```
|
||||
|
||||
__Cross platform support__
|
||||
|
||||
As of Kubernetes 1.5, support for source IP preservation through Services
|
||||
with Type=LoadBalancer is only implemented in a subset of cloudproviders
|
||||
(GCP and Azure). The cloudprovider you're running on might fulfill the
|
||||
request for a loadbalancer in a few different ways:
|
||||
|
||||
1. With a proxy that terminates the client connection and opens a new connection
|
||||
to your nodes/endpoints. In such cases the source IP will always be that of the
|
||||
cloud LB, not that of the client.
|
||||
|
||||
2. With a packet forwarder, such that requests from the client sent to the
|
||||
loadbalancer VIP end up at the node with the source IP of the client, not
|
||||
an intermediate proxy.
|
||||
|
||||
Loadbalancers in the first category must use an agreed upon
|
||||
protocol between the loadbalancer and backend to communicate the true client IP
|
||||
such as the HTTP [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
|
||||
header, or the [proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
|
||||
Loadbalancers in the second category can leverage the feature described above
|
||||
by simply creating an HTTP health check pointing at the port stored in
|
||||
the `service.beta.kubernetes.io/healthcheck-nodeport` annotation on the Service.
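Loadbalancers in the second category don't need a protocol extension; they only need to know which port to probe. A minimal sketch of reading that port from the `loadbalancer` Service created earlier:

{% raw %}
```console
$ kubectl get svc loadbalancer --template '{{index .metadata.annotations "service.beta.kubernetes.io/healthcheck-nodeport"}}'
```
{% endraw %}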
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture cleanup %}
|
||||
|
||||
Delete the Services:
|
||||
|
||||
```console
|
||||
$ kubectl delete svc -l run=source-ip-app
|
||||
```
|
||||
|
||||
Delete the Deployment, ReplicaSet and Pod:
|
||||
|
||||
```console
|
||||
$ kubectl delete deployment source-ip-app
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
* Learn more about [connecting applications via services](/docs/user-guide/connecting-applications/)
|
||||
* Learn more about [loadbalancing](/docs/user-guide/load-balancer)
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/tutorial.md %}
|
|
@ -0,0 +1,17 @@
|
|||
# This is an image with Percona XtraBackup, mysql-client and ncat installed.
|
||||
FROM debian:jessie
|
||||
|
||||
RUN \
|
||||
echo "deb http://repo.percona.com/apt jessie main" > /etc/apt/sources.list.d/percona.list \
|
||||
&& echo "deb-src http://repo.percona.com/apt jessie main" >> /etc/apt/sources.list.d/percona.list \
|
||||
&& apt-key adv --keyserver keys.gnupg.net --recv-keys 8507EFA5
|
||||
|
||||
RUN \
|
||||
apt-get update && apt-get install -y --no-install-recommends \
|
||||
percona-xtrabackup-24 \
|
||||
mysql-client \
|
||||
nmap \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
CMD ["bash"]
|
||||
|
|
@ -0,0 +1,735 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This tutorial provides an introduction to managing applications with
|
||||
[StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/). It
|
||||
demonstrates how to create, delete, scale, and update the container image of a
|
||||
StatefulSet.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
Before you begin this tutorial, you should familiarize yourself with the
|
||||
following Kubernetes concepts.
|
||||
|
||||
* [Pods](/docs/user-guide/pods/single-container/)
|
||||
* [Cluster DNS](/docs/admin/dns/)
|
||||
* [Headless Services](/docs/user-guide/services/#headless-services)
|
||||
* [PersistentVolumes](/docs/user-guide/volumes/)
|
||||
* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/)
|
||||
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
|
||||
* [kubectl CLI](/docs/user-guide/kubectl)
|
||||
|
||||
This tutorial assumes that your cluster is configured to dynamically provision
|
||||
PersistentVolumes. If your cluster is not configured to do so, you
|
||||
will have to manually provision five 1 GiB volumes prior to starting this
|
||||
tutorial.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture objectives %}
|
||||
StatefulSets are intended to be used with stateful applications and distributed
|
||||
systems. However, the administration of stateful applications and
|
||||
distributed systems on Kubernetes is a broad, complex topic. In order to
|
||||
demonstrate the basic features of a StatefulSet, and to not conflate the former
|
||||
topic with the latter, you will deploy a simple web application using StatefulSets.
|
||||
|
||||
After this tutorial, you will be familiar with the following.
|
||||
|
||||
* How to create a StatefulSet
|
||||
* How a StatefulSet manages its Pods
|
||||
* How to delete a StatefulSet
|
||||
* How to scale a StatefulSet
|
||||
* How to update the container image of a StatefulSet's Pods
|
||||
{% endcapture %}
|
||||
|
||||
{% capture lessoncontent %}
|
||||
### Creating a StatefulSet
|
||||
|
||||
Begin by creating a StatefulSet using the example below. It is similar to the
|
||||
example presented in the
|
||||
[StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) concept. It creates
|
||||
a [Headless Service](/docs/user-guide/services/#headless-services), `nginx`, to
|
||||
control the domain of the StatefulSet, `web`.
|
||||
|
||||
{% include code.html language="yaml" file="web.yaml" ghlink="/docs/tutorials/stateful-application/web.yaml" %}
|
||||
|
||||
Download the example above, and save it to a file named `web.yaml`.
|
||||
|
||||
You will need to use two terminal windows. In the first terminal, use
|
||||
[`kubectl get`](/docs/user-guide/kubectl/kubectl_get/) to watch the creation
|
||||
of the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
In the second terminal, use
|
||||
[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the
|
||||
Headless Service and StatefulSet defined in `web.yaml`.
|
||||
|
||||
```shell
|
||||
kubectl create -f web.yaml
|
||||
service "nginx" created
|
||||
statefulset "web" created
|
||||
```
|
||||
|
||||
The command above creates two Pods, each running an
|
||||
[NGINX](https://www.nginx.com) webserver. Get the `nginx` Service and the
|
||||
`web` StatefulSet to verify that they were created successfully.
|
||||
|
||||
```shell
|
||||
kubectl get service nginx
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
nginx None <none> 80/TCP 12s
|
||||
|
||||
kubectl get statefulset web
|
||||
NAME DESIRED CURRENT AGE
|
||||
web 2 1 20s
|
||||
```
|
||||
|
||||
#### Ordered Pod Creation
|
||||
|
||||
For a StatefulSet with N replicas, when Pods are being deployed, they are
|
||||
created sequentially, in order from {0..N-1}. Examine the output of the
|
||||
`kubectl get` command in the first terminal. Eventually, the output will
|
||||
look like the example below.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 0/1 Pending 0 0s
|
||||
web-0 0/1 Pending 0 0s
|
||||
web-0 0/1 ContainerCreating 0 0s
|
||||
web-0 1/1 Running 0 19s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 ContainerCreating 0 0s
|
||||
web-1 1/1 Running 0 18s
|
||||
```
|
||||
|
||||
Notice that the `web-0` Pod is launched and set to Pending prior to
|
||||
launching `web-1`. In fact, `web-1` is not launched until `web-0` is
|
||||
[Running and Ready](/docs/user-guide/pod-states).
|
||||
|
||||
### Pods in a StatefulSet
|
||||
Unlike Pods created by other controllers, the Pods in a StatefulSet have a unique
|
||||
ordinal index and a stable network identity.
|
||||
|
||||
#### Examining the Pod's Ordinal Index
|
||||
|
||||
Get the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 1m
|
||||
web-1 1/1 Running 0 1m
|
||||
|
||||
```
|
||||
|
||||
As mentioned in the [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
|
||||
concept, the Pods in a StatefulSet have a sticky, unique identity. This identity
|
||||
is based on a unique ordinal index that is assigned to each Pod by the
|
||||
StatefulSet controller. The Pods' names take the form
|
||||
`<statefulset name>-<ordinal index>`. Since the `web` StatefulSet has two
|
||||
replicas, it creates two Pods, `web-0` and `web-1`.
|
||||
|
||||
#### Using Stable Network Identities
|
||||
Each Pod has a stable hostname based on its ordinal index. Use
|
||||
[`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to execute the
|
||||
`hostname` command in each Pod.
|
||||
|
||||
```shell
|
||||
for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
|
||||
web-0
|
||||
web-1
|
||||
```
|
||||
|
||||
Use [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) to execute
|
||||
a container that provides the `nslookup` command from the `dnsutils` package.
|
||||
Using `nslookup` on the Pods' hostnames, you can examine their in-cluster DNS
|
||||
addresses.
|
||||
|
||||
```shell
|
||||
kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
|
||||
nslookup web-0.nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
||||
|
||||
Name: web-0.nginx
|
||||
Address 1: 10.244.1.6
|
||||
|
||||
nslookup web-1.nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
||||
|
||||
Name: web-1.nginx
|
||||
Address 1: 10.244.2.6
|
||||
```
|
||||
|
||||
The CNAME of the Headless Service points to SRV records (one for each Pod that
|
||||
is Running and Ready). The SRV records point to A record entries that
|
||||
contain the Pods' IP addresses.
|
||||
|
||||
In one terminal, watch the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl get pod -w -l app=nginx
|
||||
```
|
||||
In a second terminal, use
|
||||
[`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete all
|
||||
the Pods in the StatefulSet.
|
||||
|
||||
```shell
|
||||
kubectl delete pod -l app=nginx
|
||||
pod "web-0" deleted
|
||||
pod "web-1" deleted
|
||||
```
|
||||
|
||||
Wait for the StatefulSet to restart them, and for both Pods to transition to
|
||||
Running and Ready.
|
||||
|
||||
```shell
|
||||
kubectl get pod -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 0/1 ContainerCreating 0 0s
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 2s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 ContainerCreating 0 0s
|
||||
web-1 1/1 Running 0 34s
|
||||
```
|
||||
|
||||
Use `kubectl exec` and `kubectl run` to view the Pods' hostnames and in-cluster
|
||||
DNS entries.
|
||||
|
||||
```shell
|
||||
for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
|
||||
web-0
|
||||
web-1
|
||||
|
||||
kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
|
||||
nslookup web-0.nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
||||
|
||||
Name: web-0.nginx
|
||||
Address 1: 10.244.1.7
|
||||
|
||||
nslookup web-1.nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
||||
|
||||
Name: web-1.nginx
|
||||
Address 1: 10.244.2.8
|
||||
```
|
||||
|
||||
The Pods' ordinals, hostnames, SRV records, and A record names have not changed,
|
||||
but the IP addresses associated with the Pods may have changed. In the cluster
|
||||
used for this tutorial, they have. This is why it is important not to configure
|
||||
other applications to connect to Pods in a StatefulSet by IP address.
|
||||
|
||||
|
||||
If you need to find and connect to the active members of a StatefulSet, you
|
||||
should query the CNAME of the Headless Service
|
||||
(`nginx.default.svc.cluster.local`). The SRV records associated with the
|
||||
CNAME will contain only the Pods in the StatefulSet that are Running and
|
||||
Ready.
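You can observe this from inside the cluster with the same approach used earlier: resolving the Headless Service name returns an address for each Pod that is Running and Ready. This is a sketch; the Pod name is arbitrary and it assumes the `default` namespace:

```shell
kubectl run -i --tty --image busybox dns-check --restart=Never --rm /bin/sh
nslookup nginx.default.svc.cluster.local
```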
|
||||
|
||||
If your application already implements connection logic that tests for
|
||||
liveness and readiness, you can use the SRV records of the Pods (
|
||||
`web-0.nginx.default.svc.cluster.local`,
|
||||
`web-1.nginx.default.svc.cluster.local`), as they are stable, and your
|
||||
application will be able to discover the Pods' addresses when they transition
|
||||
to Running and Ready.
|
||||
|
||||
#### Writing to Stable Storage
|
||||
|
||||
Get the PersistentVolumeClaims for `web-0` and `web-1`.
|
||||
|
||||
```shell
|
||||
kubectl get pvc -l app=nginx
|
||||
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
|
||||
www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 48s
|
||||
www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s
|
||||
```
|
||||
The StatefulSet controller created two PersistentVolumeClaims that are
|
||||
bound to two [PersistentVolumes](/docs/user-guide/volumes/). As the cluster used
|
||||
in this tutorial is configured to dynamically provision PersistentVolumes, the
|
||||
PersistentVolumes were created and bound automatically.
|
||||
|
||||
The NGINX webservers, by default, will serve an index file at
|
||||
`/usr/share/nginx/html/index.html`. The `volumeMounts` field in the
|
||||
StatefulSet's `spec` ensures that the `/usr/share/nginx/html` directory is
|
||||
backed by a PersistentVolume.
|
||||
|
||||
Write the Pods' hostnames to their `index.html` files and verify that the NGINX
|
||||
webservers serve the hostnames.
|
||||
|
||||
```shell
|
||||
for i in 0 1; do kubectl exec web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done
|
||||
|
||||
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
|
||||
web-0
|
||||
web-1
|
||||
```
|
||||
|
||||
In one terminal, watch the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl get pod -w -l app=nginx
|
||||
```
|
||||
|
||||
In a second terminal, delete all of the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl delete pod -l app=nginx
|
||||
pod "web-0" deleted
|
||||
pod "web-1" deleted
|
||||
```
|
||||
Examine the output of the `kubectl get` command in the first terminal, and wait
|
||||
for all of the Pods to transition to Running and Ready.
|
||||
|
||||
```shell
|
||||
kubectl get pod -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 0/1 ContainerCreating 0 0s
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 2s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 ContainerCreating 0 0s
|
||||
web-1 1/1 Running 0 34s
|
||||
```
|
||||
|
||||
Verify the web servers continue to serve their hostnames.
|
||||
|
||||
```
|
||||
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
|
||||
web-0
|
||||
web-1
|
||||
```
|
||||
|
||||
Even though `web-0` and `web-1` were rescheduled, they continue to serve their
|
||||
hostnames because the PersistentVolumes associated with their
|
||||
PersistentVolumeClaims are remounted to their `volumeMount`s. No matter what node `web-0`
|
||||
and `web-1` are scheduled on, their PersistentVolumes will be mounted to the
|
||||
appropriate mount points.
|
||||
|
||||
### Scaling a StatefulSet
|
||||
Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
|
||||
This is accomplished by updating the `replicas` field. You can use either
|
||||
[`kubectl scale`](/docs/user-guide/kubectl/kubectl_scale/) or
|
||||
[`kubectl patch`](/docs/user-guide/kubectl/kubectl_patch/) to scale a
|
||||
StatefulSet.
|
||||
|
||||
#### Scaling Up
|
||||
|
||||
In one terminal window, watch the Pods in the StatefulSet.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
In another terminal window, use `kubectl scale` to scale the number of replicas
|
||||
to 5.
|
||||
|
||||
```shell
|
||||
kubectl scale statefulset web --replicas=5
|
||||
statefulset "web" scaled
|
||||
```
|
||||
|
||||
Examine the output of the `kubectl get` command in the first terminal, and wait
|
||||
for the three additional Pods to transition to Running and Ready.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 2h
|
||||
web-1 1/1 Running 0 2h
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-2 0/1 Pending 0 0s
|
||||
web-2 0/1 Pending 0 0s
|
||||
web-2 0/1 ContainerCreating 0 0s
|
||||
web-2 1/1 Running 0 19s
|
||||
web-3 0/1 Pending 0 0s
|
||||
web-3 0/1 Pending 0 0s
|
||||
web-3 0/1 ContainerCreating 0 0s
|
||||
web-3 1/1 Running 0 18s
|
||||
web-4 0/1 Pending 0 0s
|
||||
web-4 0/1 Pending 0 0s
|
||||
web-4 0/1 ContainerCreating 0 0s
|
||||
web-4 1/1 Running 0 19s
|
||||
```
|
||||
|
||||
The StatefulSet controller scaled the number of replicas. As with
|
||||
[StatefulSet creation](#ordered-pod-creation), the StatefulSet controller
|
||||
created each Pod sequentially with respect to its ordinal index, and it
|
||||
waited for each Pod's predecessor to be Running and Ready before launching the
|
||||
subsequent Pod.
|
||||
|
||||
#### Scaling Down
|
||||
|
||||
In one terminal, watch the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
In another terminal, use `kubectl patch` to scale the StatefulSet back down to
|
||||
3 replicas.
|
||||
|
||||
```shell
|
||||
kubectl patch statefulset web -p '{"spec":{"replicas":3}}'
|
||||
"web" patched
|
||||
```
|
||||
|
||||
Wait for `web-4` and `web-3` to transition to Terminating.
|
||||
|
||||
```
|
||||
kubectl get pods -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 3h
|
||||
web-1 1/1 Running 0 3h
|
||||
web-2 1/1 Running 0 55s
|
||||
web-3 1/1 Running 0 36s
|
||||
web-4 0/1 ContainerCreating 0 18s
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-4 1/1 Running 0 19s
|
||||
web-4 1/1 Terminating 0 24s
|
||||
web-4 1/1 Terminating 0 24s
|
||||
web-3 1/1 Terminating 0 42s
|
||||
web-3 1/1 Terminating 0 42s
|
||||
```
|
||||
|
||||
#### Ordered Pod Termination
|
||||
|
||||
The controller deleted one Pod at a time, with respect to its ordinal index,
|
||||
in reverse order, and it waited for each to be completely shut down before
|
||||
deleting the next.
|
||||
|
||||
Get the StatefulSet's PersistentVolumeClaims.
|
||||
|
||||
```shell
|
||||
kubectl get pvc -l app=nginx
|
||||
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
|
||||
www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 13h
|
||||
www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 13h
|
||||
www-web-2 Bound pvc-e1125b27-b508-11e6-932f-42010a800002 1Gi RWO 13h
|
||||
www-web-3 Bound pvc-e1176df6-b508-11e6-932f-42010a800002 1Gi RWO 13h
|
||||
www-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO 13h
|
||||
|
||||
```
|
||||
|
||||
There are still five PersistentVolumeClaims and five PersistentVolumes.
|
||||
When exploring a Pod's [stable storage](#writing-to-stable-storage), we saw that the
|
||||
PersistentVolumes mounted to the Pods of a StatefulSet are not deleted when
|
||||
the StatefulSet's Pods are deleted. This is still true when Pod deletion is
|
||||
caused by scaling the StatefulSet down. This feature can be used to facilitate
|
||||
upgrading the container images of Pods in a StatefulSet.
|
||||
|
||||
### Updating Containers
|
||||
As demonstrated in the [Scaling a StatefulSet](#scaling-a-statefulset) section,
|
||||
the `replicas` field of a StatefulSet is mutable. The only other field of a
|
||||
StatefulSet that can be updated is the `spec.template.containers` field.
|
||||
|
||||
StatefulSet currently *does not* support automated image upgrade. However, you
|
||||
can update the `image` field of any container in the Pod template and delete the
|
||||
StatefulSet's Pods one by one; the StatefulSet controller will then recreate
|
||||
each Pod with the new image.
|
||||
|
||||
Patch the container image for the `web` StatefulSet.
|
||||
|
||||
```shell
|
||||
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.7"}]'
|
||||
"web" patched
|
||||
```
|
||||
|
||||
Delete the `web-0` Pod.
|
||||
|
||||
```shell
|
||||
kubectl delete pod web-0
|
||||
pod "web-0" deleted
|
||||
```
|
||||
|
||||
Watch `web-0`, and wait for the Pod to transition to Running and Ready.
|
||||
|
||||
```shell
|
||||
kubectl get pod web-0 -w
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 54s
|
||||
web-0 1/1 Terminating 0 1m
|
||||
web-0 0/1 Terminating 0 1m
|
||||
web-0 0/1 Terminating 0 1m
|
||||
web-0 0/1 Terminating 0 1m
|
||||
web-0 0/1 Pending 0 0s
|
||||
web-0 0/1 Pending 0 0s
|
||||
web-0 0/1 ContainerCreating 0 0s
|
||||
web-0 1/1 Running 0 3s
|
||||
```
|
||||
|
||||
Get the Pods to view their container images.
|
||||
|
||||
```shell{% raw %}
|
||||
for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
|
||||
gcr.io/google_containers/nginx-slim:0.7
|
||||
gcr.io/google_containers/nginx-slim:0.8
|
||||
gcr.io/google_containers/nginx-slim:0.8
|
||||
{% endraw %}```
|
||||
|
||||
`web-0` has had its image updated. Complete the update by deleting the remaining
|
||||
Pods.
|
||||
|
||||
```shell
|
||||
kubectl delete pod web-1 web-2
|
||||
pod "web-1" deleted
|
||||
pod "web-2" deleted
|
||||
```
|
||||
|
||||
Watch the Pods, and wait for all of them to transition to Running and Ready.
|
||||
|
||||
```
|
||||
kubectl get pods -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 8m
|
||||
web-1 1/1 Running 0 4h
|
||||
web-2 1/1 Running 0 23m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-1 1/1 Terminating 0 4h
|
||||
web-1 1/1 Terminating 0 4h
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 ContainerCreating 0 0s
|
||||
web-2 1/1 Terminating 0 23m
|
||||
web-2 1/1 Terminating 0 23m
|
||||
web-1 1/1 Running 0 4s
|
||||
web-2 0/1 Pending 0 0s
|
||||
web-2 0/1 Pending 0 0s
|
||||
web-2 0/1 ContainerCreating 0 0s
|
||||
web-2 1/1 Running 0 36s
|
||||
```
|
||||
|
||||
Get the Pods to view their container images.
|
||||
|
||||
```shell{% raw %}
|
||||
for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
|
||||
gcr.io/google_containers/nginx-slim:0.7
|
||||
gcr.io/google_containers/nginx-slim:0.7
|
||||
gcr.io/google_containers/nginx-slim:0.7
|
||||
{% endraw %}```
|
||||
|
||||
All the Pods in the StatefulSet are now running a new container image.
|
||||
|
||||
### Deleting StatefulSets
|
||||
|
||||
StatefulSet supports both Non-Cascading and Cascading deletion. In a
|
||||
Non-Cascading Delete, the StatefulSet's Pods are not deleted when the
|
||||
StatefulSet is deleted. In a Cascading Delete, both the StatefulSet and its Pods are
|
||||
deleted.
|
||||
|
||||
#### Non-Cascading Delete
|
||||
|
||||
In one terminal window, watch the Pods in the StatefulSet.
|
||||
|
||||
```
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the
|
||||
StatefulSet. Make sure to supply the `--cascade=false` parameter to the
|
||||
command. This parameter tells Kubernetes to only delete the StatefulSet, and to
|
||||
not delete any of its Pods.
|
||||
|
||||
```shell
|
||||
kubectl delete statefulset web --cascade=false
|
||||
statefulset "web" deleted
|
||||
```
|
||||
|
||||
Get the Pods to examine their status.
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 6m
|
||||
web-1 1/1 Running 0 7m
|
||||
web-2 1/1 Running 0 5m
|
||||
```
|
||||
|
||||
Even though `web` has been deleted, all of the Pods are still Running and Ready.
|
||||
Delete `web-0`.
|
||||
|
||||
```shell
|
||||
kubectl delete pod web-0
|
||||
pod "web-0" deleted
|
||||
```
|
||||
|
||||
Get the StatefulSet's Pods.
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-1 1/1 Running 0 10m
|
||||
web-2 1/1 Running 0 7m
|
||||
```
|
||||
|
||||
As the `web` StatefulSet has been deleted, `web-0` has not been relaunched.
|
||||
|
||||
In one terminal, watch the StatefulSet's Pods.
|
||||
|
||||
```
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
In a second terminal, recreate the StatefulSet. Note that, unless
|
||||
you deleted the `nginx` Service (which you should not have), you will see
|
||||
an error indicating that the Service already exists.
|
||||
|
||||
```shell
|
||||
kubectl create -f web.yaml
|
||||
statefulset "web" created
|
||||
Error from server (AlreadyExists): error when creating "web.yaml": services "nginx" already exists
|
||||
```
|
||||
|
||||
Ignore the error. It only indicates that an attempt was made to create the `nginx`
|
||||
Headless Service even though that Service already exists.
|
||||
|
||||
Examine the output of the `kubectl get` command running in the first terminal.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-1 1/1 Running 0 16m
|
||||
web-2 1/1 Running 0 2m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 0/1 Pending 0 0s
|
||||
web-0 0/1 Pending 0 0s
|
||||
web-0 0/1 ContainerCreating 0 0s
|
||||
web-0 1/1 Running 0 18s
|
||||
web-2 1/1 Terminating 0 3m
|
||||
web-2 0/1 Terminating 0 3m
|
||||
web-2 0/1 Terminating 0 3m
|
||||
web-2 0/1 Terminating 0 3m
|
||||
```
|
||||
|
||||
When the `web` StatefulSet was recreated, it first relaunched `web-0`.
|
||||
Since `web-1` was already Running and Ready, when `web-0` transitioned to
|
||||
Running and Ready, the StatefulSet simply adopted this Pod. Since you recreated the StatefulSet
|
||||
with `replicas` equal to 2, once `web-0` had been recreated, and once
|
||||
`web-1` had been determined to already be Running and Ready, `web-2` was
|
||||
terminated.
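To confirm the end state, list the StatefulSet's Pods; only `web-0` and `web-1` should remain, matching the two replicas declared in `web.yaml`:

```shell
kubectl get pods -l app=nginx
```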
|
||||
|
||||
Let's take another look at the contents of the `index.html` file served by the
|
||||
Pods' webservers.
|
||||
|
||||
```shell
|
||||
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
|
||||
web-0
|
||||
web-1
|
||||
```
|
||||
|
||||
Even though you deleted both the StatefulSet and the `web-0` Pod, it still
|
||||
serves the hostname originally entered into its `index.html` file. This is
|
||||
because the StatefulSet never deletes the PersistentVolumes associated with a
|
||||
Pod. When you recreated the StatefulSet and it relaunched `web-0`, its original
|
||||
PersistentVolume was remounted.
|
||||
|
||||
#### Cascading Delete
|
||||
|
||||
In one terminal window, watch the Pods in the StatefulSet.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
In another terminal, delete the StatefulSet again. This time, omit the
|
||||
`--cascade=false` parameter.
|
||||
|
||||
```shell
|
||||
kubectl delete statefulset web
|
||||
statefulset "web" deleted
|
||||
```
|
||||
Examine the output of the `kubectl get` command running in the first terminal,
|
||||
and wait for all of the Pods to transition to Terminating.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 11m
|
||||
web-1 1/1 Running 0 27m
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Terminating 0 12m
|
||||
web-1 1/1 Terminating 0 29m
|
||||
web-0 0/1 Terminating 0 12m
|
||||
web-0 0/1 Terminating 0 12m
|
||||
web-0 0/1 Terminating 0 12m
|
||||
web-1 0/1 Terminating 0 29m
|
||||
web-1 0/1 Terminating 0 29m
|
||||
web-1 0/1 Terminating 0 29m
|
||||
|
||||
```
|
||||
|
||||
As you saw in the [Scaling Down](#ordered-pod-termination) section, the Pods
|
||||
are terminated one at a time, with respect to the reverse order of their ordinal
|
||||
indices. Before terminating a Pod, the StatefulSet controller waits for
|
||||
the Pod's successor to be completely terminated.
|
||||
|
||||
Note that, while a cascading delete will delete the StatefulSet and its Pods,
|
||||
it will not delete the Headless Service associated with the StatefulSet. You
|
||||
must delete the `nginx` Service manually.
|
||||
|
||||
```shell
|
||||
kubectl delete service nginx
|
||||
service "nginx" deleted
|
||||
```
|
||||
|
||||
Recreate the StatefulSet and Headless Service one more time.
|
||||
|
||||
```shell
|
||||
kubectl create -f web.yaml
|
||||
service "nginx" created
|
||||
statefulset "web" created
|
||||
```
|
||||
|
||||
When all of the StatefulSet's Pods transition to Running and Ready, retrieve
|
||||
the contents of their `index.html` files.
|
||||
|
||||
```shell
|
||||
for i in 0 1; do kubectl exec -it web-$i -- curl localhost; done
|
||||
web-0
|
||||
web-1
|
||||
```
|
||||
|
||||
Even though you completely deleted the StatefulSet, and all of its Pods, the
|
||||
Pods are recreated with their PersistentVolumes mounted, and `web-0` and
|
||||
`web-1` will still serve their hostnames.
|
||||
|
||||
Finally, delete the `web` StatefulSet and the `nginx` Service.
|
||||
|
||||
```shell
|
||||
kubectl delete service nginx
|
||||
service "nginx" deleted
|
||||
|
||||
kubectl delete statefulset web
|
||||
statefulset "web" deleted
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture cleanup %}
|
||||
You will need to delete the persistent storage media for the PersistentVolumes
|
||||
used in this tutorial. Follow the necessary steps, based on your environment,
|
||||
storage configuration, and provisioning method, to ensure that all storage is
|
||||
reclaimed.
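For example, if your cluster used dynamic provisioning with the default `Delete` reclaim policy, removing the claims is usually enough to release the underlying volumes. This is only a sketch; verify the reclaim policy in your environment before running it:

```shell
# Inspect the volumes and their reclaim policy first.
kubectl get pv
# Deleting the claims releases dynamically provisioned PersistentVolumes
# whose reclaim policy is Delete.
kubectl delete pvc -l app=nginx
```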
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/tutorial.md %}
|
|
@ -0,0 +1,16 @@
|
|||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: mysql
|
||||
labels:
|
||||
app: mysql
|
||||
data:
|
||||
master.cnf: |
|
||||
# Apply this config only on the master.
|
||||
[mysqld]
|
||||
log-bin
|
||||
slave.cnf: |
|
||||
# Apply this config only on slaves.
|
||||
[mysqld]
|
||||
super-read-only
|
||||
|
|
@ -0,0 +1,30 @@
|
|||
# Headless service for stable DNS entries of StatefulSet members.
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: mysql
|
||||
labels:
|
||||
app: mysql
|
||||
spec:
|
||||
ports:
|
||||
- name: mysql
|
||||
port: 3306
|
||||
clusterIP: None
|
||||
selector:
|
||||
app: mysql
|
||||
---
|
||||
# Client service for connecting to any MySQL instance for reads.
|
||||
# For writes, you must instead connect to the master: mysql-0.mysql.
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: mysql-read
|
||||
labels:
|
||||
app: mysql
|
||||
spec:
|
||||
ports:
|
||||
- name: mysql
|
||||
port: 3306
|
||||
selector:
|
||||
app: mysql
|
||||
|
|
@ -0,0 +1,165 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: mysql
|
||||
spec:
|
||||
serviceName: mysql
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: mysql
|
||||
annotations:
|
||||
pod.beta.kubernetes.io/init-containers: '[
|
||||
{
|
||||
"name": "init-mysql",
|
||||
"image": "mysql:5.7",
|
||||
"command": ["bash", "-c", "
|
||||
set -ex\n
|
||||
# mysqld --initialize expects an empty data dir.\n
|
||||
rm -rf /mnt/data/lost+found\n
|
||||
# Generate mysql server-id from pod ordinal index.\n
|
||||
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
|
||||
ordinal=${BASH_REMATCH[1]}\n
|
||||
echo [mysqld] > /mnt/conf.d/server-id.cnf\n
|
||||
# Add an offset to avoid reserved server-id=0 value.\n
|
||||
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n
|
||||
# Copy appropriate conf.d files from config-map to emptyDir.\n
|
||||
if [[ $ordinal -eq 0 ]]; then\n
|
||||
cp /mnt/config-map/master.cnf /mnt/conf.d/\n
|
||||
else\n
|
||||
cp /mnt/config-map/slave.cnf /mnt/conf.d/\n
|
||||
fi\n
|
||||
"],
|
||||
"volumeMounts": [
|
||||
{"name": "data", "mountPath": "/mnt/data"},
|
||||
{"name": "conf", "mountPath": "/mnt/conf.d"},
|
||||
{"name": "config-map", "mountPath": "/mnt/config-map"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "clone-mysql",
|
||||
"image": "gcr.io/google-samples/xtrabackup:1.0",
|
||||
"command": ["bash", "-c", "
|
||||
set -ex\n
|
||||
# Skip the clone if data already exists.\n
|
||||
[[ -d /var/lib/mysql/mysql ]] && exit 0\n
|
||||
# Skip the clone on master (ordinal index 0).\n
|
||||
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
|
||||
ordinal=${BASH_REMATCH[1]}\n
|
||||
[[ $ordinal -eq 0 ]] && exit 0\n
|
||||
# Clone data from previous peer.\n
|
||||
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n
|
||||
# Prepare the backup.\n
|
||||
xtrabackup --prepare --target-dir=/var/lib/mysql\n
|
||||
"],
|
||||
"volumeMounts": [
|
||||
{"name": "data", "mountPath": "/var/lib/mysql"},
|
||||
{"name": "conf", "mountPath": "/etc/mysql/conf.d"}
|
||||
]
|
||||
}
|
||||
]'
|
||||
spec:
|
||||
containers:
|
||||
- name: mysql
|
||||
image: mysql:5.7
|
||||
env:
|
||||
- name: MYSQL_ALLOW_EMPTY_PASSWORD
|
||||
value: "1"
|
||||
ports:
|
||||
- name: mysql
|
||||
containerPort: 3306
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/lib/mysql
|
||||
- name: conf
|
||||
mountPath: /etc/mysql/conf.d
|
||||
resources:
|
||||
requests:
|
||||
cpu: 1
|
||||
memory: 1Gi
|
||||
livenessProbe:
|
||||
exec:
|
||||
command: ["mysqladmin", "ping"]
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 5
|
||||
readinessProbe:
|
||||
exec:
|
||||
# Check we can execute queries over TCP (skip-networking is off).
|
||||
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
|
||||
initialDelaySeconds: 5
|
||||
timeoutSeconds: 1
|
||||
- name: xtrabackup
|
||||
image: gcr.io/google-samples/xtrabackup:1.0
|
||||
ports:
|
||||
- name: xtrabackup
|
||||
containerPort: 3307
|
||||
command:
|
||||
- bash
|
||||
- "-c"
|
||||
- |
|
||||
set -ex
|
||||
cd /var/lib/mysql
|
||||
|
||||
# Determine binlog position of cloned data, if any.
|
||||
if [[ -f xtrabackup_slave_info ]]; then
|
||||
# XtraBackup already generated a partial "CHANGE MASTER TO" query
|
||||
# because we're cloning from an existing slave.
|
||||
mv xtrabackup_slave_info change_master_to.sql.in
|
||||
# Ignore xtrabackup_binlog_info in this case (it's useless).
|
||||
rm -f xtrabackup_binlog_info
|
||||
elif [[ -f xtrabackup_binlog_info ]]; then
|
||||
# We're cloning directly from master. Parse binlog position.
|
||||
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
|
||||
rm xtrabackup_binlog_info
|
||||
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
|
||||
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
|
||||
fi
|
||||
|
||||
# Check if we need to complete a clone by starting replication.
|
||||
if [[ -f change_master_to.sql.in ]]; then
|
||||
echo "Waiting for mysqld to be ready (accepting connections)"
|
||||
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
|
||||
|
||||
echo "Initializing replication from clone position"
|
||||
# In case of container restart, attempt this at-most-once.
|
||||
mv change_master_to.sql.in change_master_to.sql.orig
|
||||
mysql -h 127.0.0.1 <<EOF
|
||||
$(<change_master_to.sql.orig),
|
||||
MASTER_HOST='mysql-0.mysql',
|
||||
MASTER_USER='root',
|
||||
MASTER_PASSWORD='',
|
||||
MASTER_CONNECT_RETRY=10;
|
||||
START SLAVE;
|
||||
EOF
|
||||
fi
|
||||
|
||||
# Start a server to send backups when requested by peers.
|
||||
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
|
||||
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/lib/mysql
|
||||
- name: conf
|
||||
mountPath: /etc/mysql/conf.d
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
volumes:
|
||||
- name: conf
|
||||
emptyDir: {}
|
||||
- name: config-map
|
||||
configMap:
|
||||
name: mysql
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: data
|
||||
annotations:
|
||||
volume.alpha.kubernetes.io/storage-class: default
|
||||
spec:
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
|
|
@ -0,0 +1,535 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to run a replicated stateful application using a
|
||||
[StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) controller.
|
||||
The example is a MySQL single-master topology with multiple slaves running
|
||||
asynchronous replication.
|
||||
|
||||
Note that **this is not a production configuration**.
|
||||
In particular, MySQL settings remain on insecure defaults to keep the focus
|
||||
on general patterns for running stateful applications in Kubernetes.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* {% include task-tutorial-prereqs.md %}
|
||||
* {% include default-storage-class-prereqs.md %}
|
||||
* This tutorial assumes you are familiar with
|
||||
[PersistentVolumes](/docs/user-guide/persistent-volumes/)
|
||||
and [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/),
|
||||
as well as other core concepts like [Pods](/docs/user-guide/pods/),
|
||||
[Services](/docs/user-guide/services/), and
|
||||
[ConfigMaps](/docs/user-guide/configmap/).
|
||||
* Some familiarity with MySQL helps, but this tutorial aims to present
|
||||
general patterns that should be useful for other systems.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture objectives %}
|
||||
|
||||
* Deploy a replicated MySQL topology with a StatefulSet controller.
|
||||
* Send MySQL client traffic.
|
||||
* Observe resistance to downtime.
|
||||
* Scale the StatefulSet up and down.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture lessoncontent %}
|
||||
|
||||
### Deploying MySQL
|
||||
|
||||
The example MySQL deployment consists of a ConfigMap, two Services,
|
||||
and a StatefulSet.
|
||||
|
||||
#### ConfigMap
|
||||
|
||||
Create the ConfigMap from the following YAML configuration file:
|
||||
|
||||
```shell
|
||||
kubectl create -f http://k8s.io/docs/tutorials/stateful-application/mysql-configmap.yaml
|
||||
```
|
||||
|
||||
{% include code.html language="yaml" file="mysql-configmap.yaml" ghlink="/docs/tutorials/stateful-application/mysql-configmap.yaml" %}
|
||||
|
||||
This ConfigMap provides `my.cnf` overrides that let you independently control
|
||||
configuration on the MySQL master and slaves.
|
||||
In this case, you want the master to be able to serve replication logs to slaves
|
||||
and you want slaves to reject any writes that don't come via replication.
|
||||
|
||||
There's nothing special about the ConfigMap itself that causes different
|
||||
portions to apply to different Pods.
|
||||
Each Pod decides which portion to look at as it's initializing,
|
||||
based on information provided by the StatefulSet controller.
|
||||
|
||||
#### Services
|
||||
|
||||
Create the Services from the following YAML configuration file:
|
||||
|
||||
```shell
|
||||
kubectl create -f http://k8s.io/docs/tutorials/stateful-application/mysql-services.yaml
|
||||
```
|
||||
|
||||
{% include code.html language="yaml" file="mysql-services.yaml" ghlink="/docs/tutorials/stateful-application/mysql-services.yaml" %}
|
||||
|
||||
The Headless Service provides a home for the DNS entries that the StatefulSet
|
||||
controller creates for each Pod that's part of the set.
|
||||
Because the Headless Service is named `mysql`, the Pods are accessible by
|
||||
resolving `<pod-name>.mysql` from within any other Pod in the same Kubernetes
|
||||
cluster and namespace.
|
||||
|
||||
The Client Service, called `mysql-read`, is a normal Service with its own
|
||||
cluster IP that distributes connections across all MySQL Pods that report
|
||||
being Ready. The set of potential endpoints includes the MySQL master and all
|
||||
slaves.
|
||||
|
||||
Note that only read queries can use the load-balanced Client Service.
|
||||
Because there is only one MySQL master, clients should connect directly to the
|
||||
MySQL master Pod (through its DNS entry within the Headless Service) to execute
|
||||
writes.
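For example, you could exercise both paths from throwaway client Pods; this is a sketch, the Pod names are arbitrary, and the queries are only illustrative:

```shell
# Reads can go through the load-balanced mysql-read Service.
kubectl run mysql-client-read --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT @@server_id"

# Writes must target the master's stable DNS name directly.
kubectl run mysql-client-write --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS test"
```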
|
||||
|
||||
#### StatefulSet
|
||||
|
||||
Finally, create the StatefulSet from the following YAML configuration file:
|
||||
|
||||
```shell
|
||||
kubectl create -f http://k8s.io/docs/tutorials/stateful-application/mysql-statefulset.yaml
|
||||
```
|
||||
|
||||
{% include code.html language="yaml" file="mysql-statefulset.yaml" ghlink="/docs/tutorials/stateful-application/mysql-statefulset.yaml" %}
|
||||
|
||||
You can watch the startup progress by running:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=mysql --watch
|
||||
```
|
||||
|
||||
After a while, you should see all 3 Pods become Running:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
mysql-0 2/2 Running 0 2m
|
||||
mysql-1 2/2 Running 0 1m
|
||||
mysql-2 2/2 Running 0 1m
|
||||
```
|
||||
|
||||
Press **Ctrl+C** to cancel the watch.
|
||||
If you don't see any progress, make sure you have a dynamic PersistentVolume
|
||||
provisioner enabled as mentioned in the [prerequisites](#before-you-begin).
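If you're not sure whether dynamic provisioning is available, one quick check (a sketch; the StorageClass names you see depend on your environment) is:

```shell
kubectl get storageclass
```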
|
||||
|
||||
This manifest uses a variety of techniques for managing stateful Pods as part of
|
||||
a StatefulSet. The next section highlights some of these techniques to explain
|
||||
what happens as the StatefulSet creates Pods.
|
||||
|
||||
### Understanding stateful Pod initialization
|
||||
|
||||
The StatefulSet controller starts Pods one at a time, in order by their
|
||||
ordinal index.
|
||||
It waits until each Pod reports being Ready before starting the next one.
|
||||
|
||||
In addition, the controller assigns each Pod a unique, stable name of the form
|
||||
`<statefulset-name>-<ordinal-index>`.
|
||||
In this case, that results in Pods named `mysql-0`, `mysql-1`, and `mysql-2`.
|
||||
|
||||
The Pod template in the above StatefulSet manifest takes advantage of these
|
||||
properties to perform orderly startup of MySQL replication.
|
||||
|
||||
#### Generating configuration
|
||||
|
||||
Before starting any of the containers in the Pod spec, the Pod first runs any
|
||||
[Init Containers](/docs/user-guide/production-pods/#handling-initialization)
|
||||
in the order defined.
|
||||
In the StatefulSet manifest, you can find these defined within the
|
||||
`pod.beta.kubernetes.io/init-containers` annotation.
|
||||
|
||||
The first Init Container, named `init-mysql`, generates special MySQL config
|
||||
files based on the ordinal index.
|
||||
|
||||
The script determines its own ordinal index by extracting it from the end of
|
||||
the Pod name, which is returned by the `hostname` command.
|
||||
Then it saves the ordinal (with a numeric offset to avoid reserved values)
|
||||
into a file called `server-id.cnf` in the MySQL `conf.d` directory.
|
||||
This translates the unique, stable identity provided by the StatefulSet
|
||||
controller into the domain of MySQL server IDs, which require the same
|
||||
properties.
|
||||
|
||||
The script in the `init-mysql` container also applies either `master.cnf` or
|
||||
`slave.cnf` from the ConfigMap by copying the contents into `conf.d`.
|
||||
Because the example topology consists of a single MySQL master and any number of
|
||||
slaves, the script simply assigns ordinal `0` to be the master, and everyone
|
||||
else to be slaves.
|
||||
Combined with the StatefulSet controller's
|
||||
[deployment order guarantee](/docs/concepts/abstractions/controllers/statefulsets/#deployment-and-scaling-guarantee),
|
||||
this ensures the MySQL master is Ready before creating slaves, so they can begin
|
||||
replicating.
|
||||
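As a rough sketch of the kind of logic `init-mysql` runs (the authoritative script is in `mysql-statefulset.yaml`; the mount paths and the `100` offset shown here are illustrative assumptions):

```shell
# Extract the ordinal index from the Pod's hostname, for example mysql-2 -> 2.
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}

# Offset the ordinal so the generated server-id avoids the reserved value 0.
echo -e "[mysqld]\nserver-id=$((100 + ordinal))" > /mnt/conf.d/server-id.cnf

# Ordinal 0 becomes the master; every other Pod gets the slave overrides.
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/master.cnf /mnt/conf.d/
else
  cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
```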
|
||||
#### Cloning existing data
|
||||
|
||||
In general, when a new Pod joins the set as a slave, it must assume the MySQL
|
||||
master might already have data on it. It also must assume that the replication
|
||||
logs might not go all the way back to the beginning of time.
|
||||
These conservative assumptions are the key to allowing a running StatefulSet
|
||||
to scale up and down over time, rather than being fixed at its initial size.
|
||||
|
||||
The second Init Container, named `clone-mysql`, performs a clone operation on
|
||||
a slave Pod the first time it starts up on an empty PersistentVolume.
|
||||
That means it copies all existing data from another running Pod,
|
||||
so its local state is consistent enough to begin replicating from the master.
|
||||
|
||||
MySQL itself does not provide a mechanism to do this, so the example uses a
|
||||
popular open-source tool called Percona XtraBackup.
|
||||
During the clone, the source MySQL server might suffer reduced performance.
|
||||
To minimize impact on the MySQL master, the script instructs each Pod to clone
|
||||
from the Pod whose ordinal index is one lower.
|
||||
This works because the StatefulSet controller always ensures Pod `N` is
|
||||
Ready before starting Pod `N+1`.
|
||||
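A simplified sketch of that clone step follows. The transfer port `3307`, the `ncat`/`xbstream` tooling, and the data directory path are assumptions based on how an XtraBackup sidecar is typically wired up; the authoritative script is in `mysql-statefulset.yaml`:

```shell
# Skip the clone if data already exists, or if this is the master (ordinal 0).
[[ -d /var/lib/mysql/mysql ]] && exit 0
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0

# Stream a backup from the previous peer and unpack it into the data directory.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql

# Prepare the backup so mysqld can start from it.
xtrabackup --prepare --target-dir=/var/lib/mysql
```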
|
||||
#### Starting replication
|
||||
|
||||
After the Init Containers complete successfully, the regular containers run.
|
||||
The MySQL Pods consist of a `mysql` container that runs the actual `mysqld`
|
||||
server, and an `xtrabackup` container that acts as a
|
||||
[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html).
|
||||
|
||||
The `xtrabackup` sidecar looks at the cloned data files and determines if
|
||||
it's necessary to initialize MySQL replication on the slave.
|
||||
If so, it waits for `mysqld` to be ready and then executes the
|
||||
`CHANGE MASTER TO` and `START SLAVE` commands with replication parameters
|
||||
extracted from the XtraBackup clone files.
|
||||
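As a hedged sketch of what that bootstrap looks like (the metadata file name `xtrabackup_binlog_info` and the empty root password reflect common XtraBackup/demo defaults and are assumptions here):

```shell
# Read the binlog coordinates recorded by XtraBackup during the clone.
read MASTER_LOG_FILE MASTER_LOG_POS < <(awk '{print $1, $2}' /var/lib/mysql/xtrabackup_binlog_info)

# Point this slave at the master's stable DNS name and start replicating.
mysql -h 127.0.0.1 <<SQL
CHANGE MASTER TO
  MASTER_HOST='mysql-0.mysql',
  MASTER_USER='root',
  MASTER_PASSWORD='',
  MASTER_LOG_FILE='${MASTER_LOG_FILE}',
  MASTER_LOG_POS=${MASTER_LOG_POS},
  MASTER_CONNECT_RETRY=10;
START SLAVE;
SQL
```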
|
||||
Once a slave begins replication, it remembers its MySQL master and
|
||||
reconnects automatically if the server restarts or the connection dies.
|
||||
Also, because slaves look for the master at its stable DNS name
|
||||
(`mysql-0.mysql`), they automatically find the master even if it gets a new
|
||||
Pod IP due to being rescheduled.
|
||||
|
||||
Lastly, after starting replication, the `xtrabackup` container listens for
|
||||
connections from other Pods requesting a data clone.
|
||||
This server remains up indefinitely in case the StatefulSet scales up, or in
|
||||
case the next Pod loses its PersistentVolumeClaim and needs to redo the clone.
|
||||
|
||||
### Sending client traffic
|
||||
|
||||
You can send test queries to the MySQL master (hostname `mysql-0.mysql`)
|
||||
by running a temporary container with the `mysql:5.7` image and running the
|
||||
`mysql` client binary.
|
||||
|
||||
```shell
|
||||
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
|
||||
mysql -h mysql-0.mysql <<EOF
|
||||
CREATE DATABASE test;
|
||||
CREATE TABLE test.messages (message VARCHAR(250));
|
||||
INSERT INTO test.messages VALUES ('hello');
|
||||
EOF
|
||||
```
|
||||
|
||||
Use the hostname `mysql-read` to send test queries to any server that reports
|
||||
being Ready:
|
||||
|
||||
```shell
|
||||
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
|
||||
mysql -h mysql-read -e "SELECT * FROM test.messages"
|
||||
```
|
||||
|
||||
You should get output like this:
|
||||
|
||||
```
|
||||
Waiting for pod default/mysql-client to be running, status is Pending, pod ready: false
|
||||
+---------+
|
||||
| message |
|
||||
+---------+
|
||||
| hello |
|
||||
+---------+
|
||||
pod "mysql-client" deleted
|
||||
```
|
||||
|
||||
To demonstrate that the `mysql-read` Service distributes connections across
|
||||
servers, you can run `SELECT @@server_id` in a loop:
|
||||
|
||||
```shell
|
||||
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never -- \
|
||||
bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
|
||||
```
|
||||
|
||||
You should see the reported `@@server_id` change randomly, because a different
|
||||
endpoint might be selected upon each connection attempt:
|
||||
|
||||
```
|
||||
+-------------+---------------------+
|
||||
| @@server_id | NOW() |
|
||||
+-------------+---------------------+
|
||||
| 100 | 2006-01-02 15:04:05 |
|
||||
+-------------+---------------------+
|
||||
+-------------+---------------------+
|
||||
| @@server_id | NOW() |
|
||||
+-------------+---------------------+
|
||||
| 102 | 2006-01-02 15:04:06 |
|
||||
+-------------+---------------------+
|
||||
+-------------+---------------------+
|
||||
| @@server_id | NOW() |
|
||||
+-------------+---------------------+
|
||||
| 101 | 2006-01-02 15:04:07 |
|
||||
+-------------+---------------------+
|
||||
```
|
||||
|
||||
You can press **Ctrl+C** when you want to stop the loop, but it's useful to keep
|
||||
it running in another window so you can see the effects of the following steps.
|
||||
|
||||
### Simulating Pod and Node downtime
|
||||
|
||||
To demonstrate the increased availability of reading from the pool of slaves
|
||||
instead of a single server, keep the `SELECT @@server_id` loop from above
|
||||
running while you force a Pod out of the Ready state.
|
||||
|
||||
#### Break the Readiness Probe
|
||||
|
||||
The [readiness probe](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks)
|
||||
for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'`
|
||||
to make sure the server is up and able to execute queries.
|
||||
|
||||
One way to force this readiness probe to fail is to break that command:
|
||||
|
||||
```shell
|
||||
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
|
||||
```
|
||||
|
||||
This reaches into the actual container's filesystem for Pod `mysql-2` and
|
||||
renames the `mysql` command so the readiness probe can't find it.
|
||||
After a few seconds, the Pod should report one of its containers as not Ready,
|
||||
which you can check by running:
|
||||
|
||||
```shell
|
||||
kubectl get pod mysql-2
|
||||
```
|
||||
|
||||
Look for `1/2` in the `READY` column:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
mysql-2 1/2 Running 0 3m
|
||||
```
|
||||
|
||||
At this point, you should see your `SELECT @@server_id` loop continue to run,
|
||||
although it never reports `102` anymore.
|
||||
Recall that the `init-mysql` script defined `server-id` as `100 + $ordinal`,
|
||||
so server ID `102` corresponds to Pod `mysql-2`.
|
||||
|
||||
Now repair the Pod and it should reappear in the loop output
|
||||
after a few seconds:
|
||||
|
||||
```shell
|
||||
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
|
||||
```
|
||||
|
||||
#### Delete Pods
|
||||
|
||||
The StatefulSet also recreates Pods if they're deleted, similar to what a
|
||||
ReplicaSet does for stateless Pods.
|
||||
|
||||
```shell
|
||||
kubectl delete pod mysql-2
|
||||
```
|
||||
|
||||
The StatefulSet controller notices that no `mysql-2` Pod exists anymore,
|
||||
and creates a new one with the same name and linked to the same
|
||||
PersistentVolumeClaim.
|
||||
You should see server ID `102` disappear from the loop output for a while
|
||||
and then return on its own.
|
||||
|
||||
#### Drain a Node
|
||||
|
||||
If your Kubernetes cluster has multiple Nodes, you can simulate Node downtime
|
||||
(such as when Nodes are upgraded) by issuing a
|
||||
[drain](http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/).
|
||||
|
||||
First determine which Node one of the MySQL Pods is on:
|
||||
|
||||
```shell
|
||||
kubectl get pod mysql-2 -o wide
|
||||
```
|
||||
|
||||
The Node name should show up in the last column:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE IP NODE
|
||||
mysql-2 2/2 Running 0 15m 10.244.5.27 kubernetes-minion-group-9l2t
|
||||
```
|
||||
|
||||
Then drain the Node by running the following command, which cordons it so
|
||||
no new Pods may schedule there, and then evicts any existing Pods.
|
||||
Replace `<node-name>` with the name of the Node you found in the last step.
|
||||
|
||||
This might impact other applications on the Node, so it's best to
|
||||
**only do this in a test cluster**.
|
||||
|
||||
```shell
|
||||
kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
|
||||
```
|
||||
|
||||
Now you can watch as the Pod reschedules on a different Node:
|
||||
|
||||
```shell
|
||||
kubectl get pod mysql-2 -o wide --watch
|
||||
```
|
||||
|
||||
It should look something like this:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE IP NODE
|
||||
mysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-minion-group-9l2t
|
||||
[...]
|
||||
mysql-2 0/2 Pending 0 0s <none> kubernetes-minion-group-fjlm
|
||||
mysql-2 0/2 Init:0/2 0 0s <none> kubernetes-minion-group-fjlm
|
||||
mysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-minion-group-fjlm
|
||||
mysql-2 0/2 PodInitializing 0 21s 10.244.5.32 kubernetes-minion-group-fjlm
|
||||
mysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-minion-group-fjlm
|
||||
mysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-minion-group-fjlm
|
||||
```
|
||||
|
||||
And again, you should see server ID `102` disappear from the
|
||||
`SELECT @@server_id` loop output for a while and then return.
|
||||
|
||||
Now uncordon the Node to return it to a normal state:
|
||||
|
||||
```shell
|
||||
kubectl uncordon <node-name>
|
||||
```
|
||||
|
||||
### Scaling the number of slaves
|
||||
|
||||
With MySQL replication, you can scale your read query capacity by adding slaves.
|
||||
With StatefulSet, you can do this with a single command:
|
||||
|
||||
```shell
|
||||
kubectl scale --replicas=5 statefulset mysql
|
||||
```
|
||||
|
||||
Watch the new Pods come up by running:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=mysql --watch
|
||||
```
|
||||
|
||||
Once they're up, you should see server IDs `103` and `104` start appearing in
|
||||
the `SELECT @@server_id` loop output.
|
||||
|
||||
You can also verify that these new servers have the data you added before they
|
||||
existed:
|
||||
|
||||
```shell
|
||||
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
|
||||
mysql -h mysql-3.mysql -e "SELECT * FROM test.messages"
|
||||
```
|
||||
|
||||
```
|
||||
Waiting for pod default/mysql-client to be running, status is Pending, pod ready: false
|
||||
+---------+
|
||||
| message |
|
||||
+---------+
|
||||
| hello |
|
||||
+---------+
|
||||
pod "mysql-client" deleted
|
||||
```
|
||||
|
||||
Scaling back down is also seamless:
|
||||
|
||||
```shell
|
||||
kubectl scale --replicas=3 statefulset mysql
|
||||
```
|
||||
|
||||
Note, however, that while scaling up creates new PersistentVolumeClaims
|
||||
automatically, scaling down does not automatically delete these PVCs.
|
||||
This gives you the choice to keep those initialized PVCs around to make
|
||||
scaling back up quicker, or to extract data before deleting them.
|
||||
|
||||
You can see this by running:
|
||||
|
||||
```shell
|
||||
kubectl get pvc -l app=mysql
|
||||
```
|
||||
|
||||
The output shows that all 5 PVCs still exist, despite having scaled the
|
||||
StatefulSet down to 3:
|
||||
|
||||
```
|
||||
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
|
||||
data-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m
|
||||
data-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m
|
||||
data-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m
|
||||
data-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m
|
||||
data-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m
|
||||
```
|
||||
|
||||
If you don't intend to reuse the extra PVCs, you can delete them:
|
||||
|
||||
```shell
|
||||
kubectl delete pvc data-mysql-3
|
||||
kubectl delete pvc data-mysql-4
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture cleanup %}
|
||||
|
||||
1. Cancel the `SELECT @@server_id` loop by pressing **Ctrl+C** in its terminal,
|
||||
or running the following from another terminal:
|
||||
|
||||
```shell
|
||||
kubectl delete pod mysql-client-loop --now
|
||||
```
|
||||
|
||||
1. Delete the StatefulSet. This also begins terminating the Pods.
|
||||
|
||||
```shell
|
||||
kubectl delete statefulset mysql
|
||||
```
|
||||
|
||||
1. Verify that the Pods disappear.
|
||||
They might take some time to finish terminating.
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=mysql
|
||||
```
|
||||
|
||||
You'll know the Pods have terminated when the above returns:
|
||||
|
||||
```
|
||||
No resources found.
|
||||
```
|
||||
|
||||
1. Delete the ConfigMap, Services, and PersistentVolumeClaims.
|
||||
|
||||
```shell
|
||||
kubectl delete configmap,service,pvc -l app=mysql
|
||||
```
|
||||
|
||||
1. If you manually provisioned PersistentVolumes, you also need to manually
|
||||
delete them, as well as release the underlying resources.
|
||||
If you used a dynamic provisioner, it automatically deletes the
|
||||
PersistentVolumes when it sees that you deleted the PersistentVolumeClaims.
|
||||
Some dynamic provisioners (such as those for EBS and PD) also release the
|
||||
underlying resources upon deleting the PersistentVolumes.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
* Look in the [Helm Charts repository](https://github.com/kubernetes/charts)
|
||||
for other stateful application examples.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/tutorial.md %}
|
||||
|
|
@ -0,0 +1,47 @@
|
|||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
name: web
|
||||
clusterIP: None
|
||||
selector:
|
||||
app: nginx
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: web
|
||||
spec:
|
||||
serviceName: "nginx"
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: gcr.io/google_containers/nginx-slim:0.8
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: web
|
||||
volumeMounts:
|
||||
- name: www
|
||||
mountPath: /usr/share/nginx/html
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: www
|
||||
annotations:
|
||||
volume.alpha.kubernetes.io/storage-class: anything
|
||||
spec:
|
||||
accessModes: [ "ReadWriteOnce" ]
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
|
|
@ -28,10 +28,10 @@ server.
|
|||
|
||||
Each container of a pod can optionally specify one or more of the following:
|
||||
|
||||
* `spec.container[].resources.limits.cpu`
|
||||
* `spec.container[].resources.limits.memory`
|
||||
* `spec.container[].resources.requests.cpu`
|
||||
* `spec.container[].resources.requests.memory`.
|
||||
* `spec.containers[].resources.limits.cpu`
|
||||
* `spec.containers[].resources.limits.memory`
|
||||
* `spec.containers[].resources.requests.cpu`
|
||||
* `spec.containers[].resources.requests.memory`
|
||||
|
||||
Specifying resource requests and/or limits is optional. In some clusters, unset limits or requests
|
||||
may be replaced with default values when a pod is created or updated. The default value depends on
|
||||
|
@ -53,7 +53,7 @@ One cpu, in Kubernetes, is equivalent to:
|
|||
- 1 Azure vCore
|
||||
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading
|
||||
|
||||
Fractional requests are allowed. A container with `spec.container[].resources.requests.cpu` of `0.5` will
|
||||
Fractional requests are allowed. A container with `spec.containers[].resources.requests.cpu` of `0.5` will
|
||||
be guaranteed half as much CPU as one that asks for `1`. The expression `0.1` is equivalent to the expression
|
||||
`100m`, which can be read as "one hundred millicpu" (some may say "one hundred millicores", and this is understood
|
||||
to mean the same thing when talking about Kubernetes). A request with a decimal point, like `0.1` is converted to
|
||||
|
@ -121,17 +121,17 @@ runner (Docker or rkt).
|
|||
|
||||
When using Docker:
|
||||
|
||||
- The `spec.container[].resources.requests.cpu` is converted to its core value (potentially fractional),
|
||||
- The `spec.containers[].resources.requests.cpu` is converted to its core value (potentially fractional),
|
||||
and multiplied by 1024, and used as the value of the [`--cpu-shares`](
|
||||
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
|
||||
command.
|
||||
- The `spec.container[].resources.limits.cpu` is converted to its millicore value,
|
||||
- The `spec.containers[].resources.limits.cpu` is converted to its millicore value,
|
||||
multiplied by 100000, and then divided by 1000, and used as the value of the [`--cpu-quota`](
|
||||
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
|
||||
command. The `--cpu-period` flag is set to 100000, which represents the default 100ms period
|
||||
for measuring quota usage. The kubelet enforces cpu limits if it was started with the
|
||||
`--cpu-cfs-quota` flag set to true. As of version 1.2, this flag defaults to true.
|
||||
- The `spec.container[].resources.limits.memory` is converted to an integer, and used as the value
|
||||
- The `spec.containers[].resources.limits.memory` is converted to an integer, and used as the value
|
||||
of the [`--memory`](https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag
|
||||
to the `docker run` command.
|
||||
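For example, a hedged worked conversion (the request and limit values here are arbitrary, chosen only for illustration): with `requests.cpu=250m`, `limits.cpu=500m`, and `limits.memory=128Mi`, the container would be started roughly as:

```shell
# --cpu-shares: 0.25 cores * 1024            = 256
# --cpu-quota:  500 millicores * 100000/1000 = 50000 (with --cpu-period=100000)
# --memory:     128Mi                        = 134217728 bytes
docker run --cpu-shares=256 --cpu-period=100000 --cpu-quota=50000 --memory=134217728 <image>
```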
|
||||
|
@ -269,6 +269,91 @@ LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-0
|
|||
|
||||
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
|
||||
|
||||
## Opaque Integer Resources (Alpha Feature)
|
||||
|
||||
Kubernetes version 1.5 introduces Opaque integer resources. Opaque
|
||||
integer resources allow cluster operators to advertise new node-level
|
||||
resources that would be otherwise unknown to the system.
|
||||
|
||||
Users can consume these resources in pod specs just like CPU and memory.
|
||||
The scheduler takes care of the resource accounting so that no more than the
|
||||
available amount is simultaneously allocated to pods.
|
||||
|
||||
**Note:** Opaque integer resources are Alpha in Kubernetes version 1.5.
|
||||
Only resource accounting is implemented; node-level isolation is still
|
||||
under active development.
|
||||
|
||||
Opaque integer resources are resources that begin with the prefix
|
||||
`pod.alpha.kubernetes.io/opaque-int-resource-`. The API server
|
||||
restricts quantities of these resources to whole numbers. Examples of
|
||||
_valid_ quantities are `3`, `3000m` and `3Ki`. Examples of _invalid_
|
||||
quantities are `0.5` and `1500m`.
|
||||
|
||||
There are two steps required to use opaque integer resources. First, the
|
||||
cluster operator must advertise a per-node opaque resource on one or more
|
||||
nodes. Second, users must request the opaque resource in pods.
|
||||
|
||||
To advertise a new opaque integer resource, the cluster operator should
|
||||
submit a `PATCH` HTTP request to the API server to specify the available
|
||||
quantity in the `status.capacity` for a node in the cluster. After this
|
||||
operation, the node's `status.capacity` will include a new resource. The
|
||||
`status.allocatable` field is updated automatically with the new resource
|
||||
asynchronously by the Kubelet. Note that since the scheduler uses the
|
||||
node `status.allocatable` value when evaluating pod fitness, there may
|
||||
be a short delay between patching the node capacity with a new resource and the
|
||||
first pod that requests the resource being scheduled on that node.
|
||||
|
||||
**Example:**
|
||||
|
||||
The HTTP request below advertises 5 "foo" resources on node `k8s-node-1`.
|
||||
|
||||
_NOTE: `~1` is the encoding for the character `/` in the patch path.
|
||||
The operation path value in JSON-Patch is interpreted as a JSON-Pointer.
|
||||
For more details, please refer to
|
||||
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3)._
|
||||
|
||||
```http
|
||||
PATCH /api/v1/nodes/k8s-node-1/status HTTP/1.1
|
||||
Accept: application/json
|
||||
Content-Type: application/json-patch+json
|
||||
Host: k8s-master:8080
|
||||
|
||||
[
|
||||
{
|
||||
"op": "add",
|
||||
"path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo",
|
||||
"value": "5"
|
||||
}
|
||||
]
|
||||
```
|
||||
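If you prefer the command line, an equivalent sketch using `curl` through `kubectl proxy` (the proxy address `localhost:8001` is the default and an assumption here):

```shell
kubectl proxy &
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
  http://localhost:8001/api/v1/nodes/k8s-node-1/status
```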
|
||||
To consume opaque resources in pods, include the name of the opaque
|
||||
resource as a key in the `spec.containers[].resources.requests` map.
|
||||
|
||||
The pod will be scheduled only if all of the resource requests are
|
||||
satisfied (including CPU, memory, and any opaque resources). The pod will
|
||||
remain in the `PENDING` state while the resource request cannot be met by any
|
||||
node.
|
||||
|
||||
**Example:**
|
||||
|
||||
The pod below requests 2 CPUs and 1 "foo" (an opaque resource).
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: my-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: my-container
|
||||
image: myimage
|
||||
resources:
|
||||
requests:
|
||||
cpu: 2
|
||||
pod.alpha.kubernetes.io/opaque-int-resource-foo: 1
|
||||
```
|
||||
|
||||
## Planned Improvements
|
||||
|
||||
The current system only allows resource quantities to be specified on a container.
|
||||
|
|
|
@ -454,6 +454,163 @@ nginx-deployment-3066724191 0 0 1h
|
|||
Note: You cannot rollback a paused Deployment until you resume it.
|
||||
|
||||
|
||||
## Deployment status
|
||||
|
||||
A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while rolling out a new ReplicaSet,
|
||||
it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment).
|
||||
|
||||
### Progressing Deployment
|
||||
|
||||
Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed:
|
||||
|
||||
* The Deployment is in the process of creating a new ReplicaSet.
|
||||
* The Deployment is scaling up an existing ReplicaSet.
|
||||
* The Deployment is scaling down an existing ReplicaSet.
|
||||
|
||||
You can monitor the progress for a Deployment by using `kubectl rollout status`.
|
||||
|
||||
### Complete Deployment
|
||||
|
||||
Kubernetes marks a Deployment as _complete_ when it has the following characteristics:
|
||||
|
||||
* The Deployment has minimum availability. Minimum availability means that the Deployment's number of available replicas
|
||||
equals or exceeds the number required by the Deployment strategy.
|
||||
* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
|
||||
updates you've requested have been completed.
|
||||
|
||||
You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed successfully, `kubectl rollout status` returns a zero exit code.
|
||||
|
||||
```
|
||||
$ kubectl rollout status deploy/nginx
|
||||
Waiting for rollout to finish: 2 of 3 updated replicas are available...
|
||||
deployment "nginx" successfully rolled out
|
||||
$ echo $?
|
||||
0
|
||||
```
|
||||
|
||||
### Failed Deployment
|
||||
|
||||
Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:
|
||||
|
||||
* Insufficient quota
|
||||
* Readiness probe failures
|
||||
* Image pull errors
|
||||
* Insufficient permissions
|
||||
* Limit ranges
|
||||
* Application runtime misconfiguration
|
||||
|
||||
One way you can detect this condition is to specify a deadline parameter in your Deployment spec ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled.
|
||||
|
||||
The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report lack of progress for a Deployment after 10 minutes:
|
||||
|
||||
```shell
|
||||
$ kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
|
||||
"nginx-deployment" patched
|
||||
```
|
||||
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to
|
||||
the Deployment's `status.conditions`:
|
||||
|
||||
* Type=Progressing
|
||||
* Status=False
|
||||
* Reason=ProgressDeadlineExceeded
|
||||
|
||||
See the [Kubernetes API conventions](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/devel/api-conventions.md#typical-status-properties) for more information on status conditions.
|
||||
|
||||
Note that in version 1.5, Kubernetes will take no action on a stalled Deployment other than to report a status condition with
|
||||
`Reason=ProgressDeadlineExceeded`.
|
||||
|
||||
**Note:** If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the deadline.
|
||||
|
||||
You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind
|
||||
of error that can be treated as transient. For example, let's suppose you have insufficient quota. If you describe the Deployment
|
||||
you will notice the following section:
|
||||
|
||||
```
|
||||
$ kubectl describe deployment nginx-deployment
|
||||
<...>
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True ReplicaSetUpdated
|
||||
ReplicaFailure True FailedCreate
|
||||
<...>
|
||||
```
|
||||
|
||||
If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status might look like this:
|
||||
|
||||
```
|
||||
status:
|
||||
availableReplicas: 2
|
||||
conditions:
|
||||
- lastTransitionTime: 2016-10-04T12:25:39Z
|
||||
lastUpdateTime: 2016-10-04T12:25:39Z
|
||||
message: Replica set "nginx-deployment-4262182780" is progressing.
|
||||
reason: ReplicaSetUpdated
|
||||
status: "True"
|
||||
type: Progressing
|
||||
- lastTransitionTime: 2016-10-04T12:25:42Z
|
||||
lastUpdateTime: 2016-10-04T12:25:42Z
|
||||
message: Deployment has minimum availability.
|
||||
reason: MinimumReplicasAvailable
|
||||
status: "True"
|
||||
type: Available
|
||||
- lastTransitionTime: 2016-10-04T12:25:39Z
|
||||
lastUpdateTime: 2016-10-04T12:25:39Z
|
||||
message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
|
||||
object-counts, requested: pods=1, used: pods=3, limited: pods=2'
|
||||
reason: FailedCreate
|
||||
status: "True"
|
||||
type: ReplicaFailure
|
||||
observedGeneration: 3
|
||||
replicas: 2
|
||||
unavailableReplicas: 2
|
||||
```
|
||||
|
||||
Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:
|
||||
|
||||
```
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing False ProgressDeadlineExceeded
|
||||
ReplicaFailure True FailedCreate
|
||||
```
|
||||
|
||||
You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running,
|
||||
or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment
|
||||
rollout, you'll see the Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`).
|
||||
|
||||
```
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True NewReplicaSetAvailable
|
||||
```
|
||||
|
||||
`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated
|
||||
by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment
|
||||
is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
|
||||
required new replicas are available (see the Reason of the condition for the particulars - in our case
|
||||
`Reason=NewReplicaSetAvailable` means that the Deployment is complete).
|
||||
|
||||
You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status` returns a non-zero exit code if the Deployment has exceeded the progression deadline.
|
||||
|
||||
```
|
||||
$ kubectl rollout status deploy/nginx
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
error: deployment "nginx" exceeded its progress deadline
|
||||
$ echo $?
|
||||
1
|
||||
```
|
||||
|
||||
### Operating on a failed deployment
|
||||
|
||||
All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
|
||||
to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment pod template.
|
||||
|
||||
## Use Cases
|
||||
|
||||
### Canary Deployment
|
||||
|
@ -556,6 +713,17 @@ the rolling update starts, such that the total number of old and new Pods do not
|
|||
the new Replica Set can be scaled up further, ensuring that the total number of Pods running
|
||||
at any time during the update is at most 130% of desired Pods.
|
||||
|
||||
### Progress Deadline Seconds
|
||||
|
||||
`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
|
||||
to wait for your Deployment to progress before the system reports back that the Deployment has
|
||||
[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`,
|
||||
and `Reason=ProgressDeadlineExceeded` in the status of the resource. The deployment controller will keep
|
||||
retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
|
||||
controller will roll back a Deployment as soon as it observes such a condition.
|
||||
|
||||
If specified, this field needs to be greater than `.spec.minReadySeconds`.
|
||||
|
||||
### Min Ready Seconds
|
||||
|
||||
`.spec.minReadySeconds` is an optional field that specifies the
|
||||
|
|
|
@ -0,0 +1,86 @@
|
|||
---
|
||||
---
|
||||
|
||||
This guide explains how to use ConfigMaps in a Federation control plane.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes Cluster
|
||||
Federation installation. If not, then head over to the
|
||||
[federation admin guide](/docs/admin/federation/) to learn how to
|
||||
bring up a cluster federation (or have your cluster administrator do
|
||||
this for you).
|
||||
Other tutorials, such as Kelsey Hightower's
|
||||
[Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation),
|
||||
might also help you create a Federated Kubernetes cluster.
|
||||
|
||||
You should also have a basic
|
||||
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
|
||||
general and [ConfigMaps](/docs/user-guide/ConfigMaps/) in particular.
|
||||
|
||||
## Overview
|
||||
|
||||
Federated ConfigMaps are very similar to the traditional [Kubernetes
|
||||
ConfigMaps](/docs/user-guide/configmap/) and provide the same functionality.
|
||||
Creating them in the federation control plane ensures that they are synchronized
|
||||
across all the clusters in federation.
|
||||
|
||||
|
||||
## Creating a Federated ConfigMap
|
||||
|
||||
The API for Federated ConfigMap is 100% compatible with the
|
||||
API for traditional Kubernetes ConfigMap. You can create a ConfigMap by sending
|
||||
a request to the federation apiserver.
|
||||
|
||||
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster create -f myconfigmap.yaml
|
||||
```
|
||||
|
||||
The `--context=federation-cluster` flag tells kubectl to submit the
|
||||
request to the Federation apiserver instead of sending it to a kubernetes
|
||||
cluster.
|
||||
|
||||
Once a Federated ConfigMap is created, the federation control plane will create
|
||||
a matching ConfigMap in all underlying kubernetes clusters.
|
||||
You can verify this by checking each of the underlying clusters, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=gce-asia-east1a get configmap myconfigmap
|
||||
```
|
||||
|
||||
The above assumes that you have a context named 'gce-asia-east1a'
|
||||
configured in your client for your cluster in that zone.
|
||||
|
||||
These ConfigMaps in underlying clusters will match the Federated ConfigMap.
|
||||
|
||||
|
||||
## Updating a Federated ConfigMap
|
||||
|
||||
You can update a Federated ConfigMap as you would update a Kubernetes
|
||||
ConfigMap; however, for a Federated ConfigMap, you must send the request to
|
||||
the federation apiserver instead of sending it to a specific Kubernetes cluster.
|
||||
The federation control plane ensures that whenever the Federated ConfigMap is
|
||||
updated, it updates the corresponding ConfigMaps in all underlying clusters to
|
||||
match it.
|
||||
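For example, a hedged sketch using kubectl, assuming your edits live in the same `myconfigmap.yaml` used above:

```shell
kubectl --context=federation-cluster apply -f myconfigmap.yaml
```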
|
||||
## Deleting a Federated ConfigMap
|
||||
|
||||
You can delete a Federated ConfigMap as you would delete a Kubernetes
|
||||
ConfigMap; however, for a Federated ConfigMap, you must send the request to
|
||||
the federation apiserver instead of sending it to a specific Kubernetes cluster.
|
||||
|
||||
For example, you can do that using kubectl by running:
|
||||
|
||||
```shell
|
||||
kubectl --context=federation-cluster delete configmap myconfigmap
|
||||
```
|
||||
|
||||
Note that at this point, deleting a Federated ConfigMap will not delete the
|
||||
corresponding ConfigMaps from underlying clusters.
|
||||
You must delete the underlying ConfigMaps manually.
|
||||
We intend to fix this in the future.
|
|
@ -0,0 +1,82 @@
|
|||
---
|
||||
---
|
||||
|
||||
This guide explains how to use DaemonSets in a federation control plane.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes Cluster
|
||||
Federation installation. If not, then head over to the
|
||||
[federation admin guide](/docs/admin/federation/) to learn how to
|
||||
bring up a cluster federation (or have your cluster administrator do
|
||||
this for you).
|
||||
Other tutorials, such as Kelsey Hightower's
|
||||
[Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation),
|
||||
might also help you create a Federated Kubernetes cluster.
|
||||
|
||||
You should also have a basic
|
||||
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
|
||||
general and DaemonSets in particular.
|
||||
|
||||
## Overview
|
||||
|
||||
DaemonSets in federation control plane ("Federated Daemonsets" in
|
||||
this guide) are very similar to the traditional [Kubernetes
|
||||
DaemonSets](/docs/user-guide/DaemonSets/) and provide the same functionality.
|
||||
Creating them in the federation control plane ensures that they are synchronized
|
||||
across all the clusters in federation.
|
||||
|
||||
|
||||
## Creating a Federated Daemonset
|
||||
|
||||
The API for Federated Daemonset is 100% compatible with the
|
||||
API for traditional Kubernetes DaemonSet. You can create a DaemonSet by sending
|
||||
a request to the federation apiserver.
|
||||
|
||||
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster create -f mydaemonset.yaml
|
||||
```
|
||||
|
||||
The `--context=federation-cluster` flag tells kubectl to submit the
|
||||
request to the Federation apiserver instead of sending it to a kubernetes
|
||||
cluster.
|
||||
|
||||
Once a Federated Daemonset is created, the federation control plane will create
|
||||
a matching DaemonSet in all underlying kubernetes clusters.
|
||||
You can verify this by checking each of the underlying clusters, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=gce-asia-east1a get daemonset mydaemonset
|
||||
```
|
||||
|
||||
The above assumes that you have a context named 'gce-asia-east1a'
|
||||
configured in your client for your cluster in that zone.
|
||||
|
||||
These DaemonSets in underlying clusters will match the Federated Daemonset.
|
||||
|
||||
|
||||
## Updating a Federated Daemonset
|
||||
|
||||
You can update a Federated Daemonset as you would update a Kubernetes
|
||||
DaemonSet; however, for a Federated Daemonset, you must send the request to
|
||||
the federation apiserver instead of sending it to a specific Kubernetes cluster.
|
||||
The federation control plane ensures that whenever the Federated Daemonset is
|
||||
updated, it updates the corresponding DaemonSets in all underlying clusters to
|
||||
match it.
|
||||
|
||||
## Deleting a Federated Daemonset
|
||||
|
||||
You can delete a Federated Daemonset as you would delete a Kubernetes
|
||||
DaemonSet; however, for a Federated Daemonset, you must send the request to
|
||||
the federation apiserver instead of sending it to a specific Kubernetes cluster.
|
||||
|
||||
For example, you can do that using kubectl by running:
|
||||
|
||||
```shell
|
||||
kubectl --context=federation-cluster delete daemonset mydaemonset
|
||||
```
|
|
@ -0,0 +1,107 @@
|
|||
---
|
||||
---
|
||||
|
||||
This guide explains how to use Deployments in the Federation control plane.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
This guide assumes that you have a running Kubernetes Cluster
|
||||
Federation installation. If not, then head over to the
|
||||
[federation admin guide](/docs/admin/federation/) to learn how to
|
||||
bring up a cluster federation (or have your cluster administrator do
|
||||
this for you).
|
||||
Other tutorials, such as Kelsey Hightower's
|
||||
[Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation),
|
||||
might also help you create a Federated Kubernetes cluster.
|
||||
|
||||
You should also have a basic
|
||||
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
|
||||
general and [Deployment](/docs/user-guide/deployment.md) in particular.
|
||||
|
||||
## Overview
|
||||
|
||||
Deployments in federation control plane (referred to as "Federated Deployments" in
|
||||
this guide) are very similar to the traditional [Kubernetes
|
||||
Deployment](/docs/user-guide/deployment.md), and provide the same functionality.
|
||||
Creating them in the federation control plane ensures that the desired number of
|
||||
replicas exist across the registered clusters.
|
||||
|
||||
**As of Kubernetes version 1.5, Federated Deployment is an Alpha feature. The core
|
||||
functionality of Deployment is present, but some features
|
||||
(such as full rollout compatibility) are still in development.**
|
||||
|
||||
## Creating a Federated Deployment
|
||||
|
||||
The API for Federated Deployment is compatible with the
|
||||
API for traditional Kubernetes Deployment. You can create a Deployment by sending
|
||||
a request to the federation apiserver.
|
||||
|
||||
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster create -f mydeployment.yaml
|
||||
```
|
||||
|
||||
The `--context=federation-cluster` flag tells kubectl to submit the
|
||||
request to the Federation apiserver instead of sending it to a kubernetes
|
||||
cluster.
|
||||
|
||||
Once a Federated Deployment is created, the federation control plane will create
|
||||
a Deployment in all underlying kubernetes clusters.
|
||||
You can verify this by checking each of the underlying clusters, for example:
|
||||
|
||||
``` shell
|
||||
kubectl --context=gce-asia-east1a get deployment mydep
|
||||
```
|
||||
|
||||
The above assumes that you have a context named 'gce-asia-east1a'
|
||||
configured in your client for your cluster in that zone.
|
||||
|
||||
These Deployments in underlying clusters will match the federation Deployment
|
||||
_except_ in the number of replicas and revision-related annotations.
|
||||
Federation control plane ensures that the
|
||||
sum of replicas in each cluster combined matches the desired number of replicas in the
|
||||
Federated Deployment.
|
||||
|
||||
### Spreading Replicas in Underlying Clusters
|
||||
|
||||
By default, replicas are spread equally in all the underlying clusters. For example,
|
||||
if you have 3 registered clusters and you create a Federated Deployment with
|
||||
`spec.replicas = 9`, then each Deployment in the 3 clusters will have
|
||||
`spec.replicas=3`.
|
||||
To modify the number of replicas in each cluster, you can specify
|
||||
[FederatedReplicaSetPreference](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/federation/apis/federation/types.go)
|
||||
as an annotation with key `federation.kubernetes.io/replica-set-preferences`
|
||||
on Federated Deployment.
|
||||
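As a hedged sketch, you could set that annotation with kubectl. The JSON field names follow the FederatedReplicaSetPreferences type linked above, and the cluster names and weights are placeholders:

```shell
kubectl --context=federation-cluster annotate deployment mydep \
  federation.kubernetes.io/replica-set-preferences='{"rebalance": true, "clusters": {"cluster-1": {"weight": 2}, "cluster-2": {"weight": 1}}}'
```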
|
||||
|
||||
## Updating a Federated Deployment
|
||||
|
||||
You can update a Federated Deployment as you would update a Kubernetes
|
||||
Deployment; however, for a Federated Deployment, you must send the request to
|
||||
the federation apiserver instead of sending it to a specific Kubernetes cluster.
|
||||
The federation control plane ensures that whenever the Federated Deployment is
|
||||
updated, it updates the corresponding Deployments in all underlying clusters to
|
||||
match it. So if the rolling update strategy was chosen then the underlying
|
||||
cluster will do the rolling update independently and `maxSurge` and `maxUnavailable`
|
||||
will apply only to individual clusters. This behavior may change in the future.
|
||||
|
||||
If your update includes a change in number of replicas, the federation
|
||||
control plane will change the number of replicas in underlying clusters to
|
||||
ensure that their sum remains equal to the number of desired replicas in
|
||||
Federated Deployment.
|
||||
|
||||
## Deleting a Federated Deployment
|
||||
|
||||
You can delete a Federated Deployment as you would delete a Kubernetes
|
||||
Deployment; however, for a Federated Deployment, you must send the request to
|
||||
the federation apiserver instead of sending it to a specific Kubernetes cluster.
|
||||
|
||||
For example, you can do that using kubectl by running:
|
||||
|
||||
```shell
|
||||
kubectl --context=federation-cluster delete deployment mydep
|
||||
```
|
|
@ -250,6 +250,44 @@ kept running, the Federated Ingress ensures that user traffic is
|
|||
automatically redirected away from the failed cluster to other
|
||||
available clusters.
|
||||
|
||||
## Known issue
|
||||
|
||||
GCE L7 load balancer back-ends and health checks are known to "flap"; this is due
|
||||
to conflicting firewall rules in the federation's underlying clusters, which might override one another. To work around this problem, you can
|
||||
install the firewall rules manually to expose the targets of all the
|
||||
underlying clusters in your federation for each Federated Ingress
|
||||
object. This way, the health checks can consistently pass and the GCE L7 load balancer
|
||||
can remain stable. You install the rules using the
|
||||
[`gcloud`](https://cloud.google.com/sdk/gcloud/) command line tool,
|
||||
[Google Cloud Console](https://console.cloud.google.com) or the
|
||||
[Google Compute Engine APIs](https://cloud.google.com/compute/docs/reference/latest/).
|
||||
|
||||
You can install these rules using
|
||||
[`gcloud`](https://cloud.google.com/sdk/gcloud/) as follows:
|
||||
|
||||
```shell
|
||||
gcloud compute firewall-rules create <firewall-rule-name> \
|
||||
--source-ranges 130.211.0.0/22 --allow [<service-nodeports>] \
|
||||
--target-tags [<target-tags>] \
|
||||
--network <network-name>
|
||||
```
|
||||
|
||||
where:
|
||||
|
||||
1. `firewall-rule-name` can be any name.
|
||||
2. `[<service-nodeports>]` is the comma separated list of node ports corresponding to the services that back the Federated Ingress.
|
||||
3. `[<target-tags>]` is the comma separated list of the target tags assigned to the nodes in a Kubernetes cluster.
|
||||
4. `<network-name>` is the name of the network where the firewall rule must be installed.
|
||||
|
||||
Example:
|
||||
```shell
|
||||
gcloud compute firewall-rules create my-federated-ingress-firewall-rule \
|
||||
--source-ranges 130.211.0.0/22 --allow tcp:30301,tcp:30061,tcp:34564 \
|
||||
--target-tags my-cluster-1-minion,my-cluster-2-minion \
|
||||
--network default
|
||||
```
|
||||
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
#### I cannot connect to my cluster federation API
|
||||
|
|
|
@ -46,3 +46,26 @@ The following guides explain some of the resources in detail:
|
|||
|
||||
[API reference docs](/federation/docs/api-reference/readme/) lists all the
|
||||
resources supported by federation apiserver.
|
||||
|
||||
## Cascading deletion
|
||||
|
||||
Kubernetes version 1.5 includes support for cascading deletion of federated
|
||||
resources. With cascading deletion, when you delete a resource from the
|
||||
federation control plane, the corresponding resources in all underlying clusters
|
||||
are also deleted.
|
||||
|
||||
To enable cascading deletion, set the option
|
||||
`DeleteOptions.orphanDependents=false` when you delete a resource from the
|
||||
federation control plane.
|
||||
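As a hedged illustration of what that looks like at the API level (the `$FEDERATION_APISERVER` address, the `myrs` ReplicaSet name, and the `default` namespace are placeholders):

```shell
curl -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "orphanDependents": false}' \
  "$FEDERATION_APISERVER/apis/extensions/v1beta1/namespaces/default/replicasets/myrs"
```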
|
||||
The following Federated resources are affected by cascading deletion:
|
||||
|
||||
* Ingress
|
||||
* Namespace
|
||||
* ReplicaSet
|
||||
* Secret
|
||||
* Deployment
|
||||
* DaemonSet
|
||||
|
||||
Note: By default, deleting a resource from federation control plane does not
|
||||
delete the corresponding resources from underlying clusters.
|
||||
|
|
|
@ -27,7 +27,7 @@ You can set up owner-dependent relationships among other objects by manually set
|
|||
|
||||
When deleting an object, you can request the GC to ***asynchronously*** delete its dependents by ***explicitly*** specifying `deleteOptions.orphanDependents=false` in the deletion request that you send to the API server. A 200 OK response from the API server indicates the owner is deleted.
|
||||
|
||||
Synchronous garbage collection will be supported in 1.5 (tracking [issue](https://github.com/kubernetes/kubernetes/issues/29891)).
|
||||
In Kubernetes version 1.5, synchronous garbage collection is under active development. See the [tracking issue](https://github.com/kubernetes/kubernetes/issues/29891) for more details.
|
||||
|
||||
If you specify `deleteOptions.orphanDependents=true`, or leave it blank, then the GC will first reset the `ownerReferences` in the dependents, then delete the owner. Note that the deletion of the owner object is asynchronous, that is, a 200 OK response will be sent by the API server before the owner object gets deleted.
|
||||
|
||||
|
|
|
@ -21,6 +21,12 @@ due to a node hardware failure or a node reboot).
|
|||
|
||||
A Job can also be used to run multiple pods in parallel.
|
||||
|
||||
### extensions/v1beta1.Job is deprecated
|
||||
|
||||
Starting from version 1.5, `extensions/v1beta1.Job` is deprecated, with a plan to remove it in
|
||||
version 1.6 of Kubernetes (see this [issue](https://github.com/kubernetes/kubernetes/issues/32763)).
|
||||
Please use `batch/v1.Job` instead.
|
||||
|
||||
## Running an example Job
|
||||
|
||||
Here is an example Job config. It computes π to 2000 places and prints it out.
|
||||
|
|
|
@ -36,7 +36,9 @@ In order for `kubectl run` to satisfy infrastructure as code:
|
|||
* Pod - use `run-pod/v1`.
|
||||
* Replication controller - use `run/v1`.
|
||||
* Deployment - use `deployment/v1beta1`.
|
||||
* Job (using `extension/v1beta1` endpoint) - use `job/v1beta1`.
|
||||
* Job (using `extension/v1beta1` endpoint) - use `job/v1beta1`. Starting from
|
||||
version 1.5 of Kubernetes, this generator is deprecated, with a plan to be
|
||||
removed in 1.6. Please use `job/v1` instead (see the example after this list).
|
||||
* Job - use `job/v1`.
|
||||
* CronJob - use `cronjob/v2alpha1`.
|
||||
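For example, a hedged sketch that pins the Job generator explicitly (the `pi` name and the `perl` image/command are only illustrative):

```shell
kubectl run pi --image=perl --restart=OnFailure --generator=job/v1 \
  -- perl -Mbignum=bpi -wle 'print bpi(2000)'
```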
|
||||
|
|
|
@ -89,13 +89,13 @@ The IP address is listed next to `LoadBalancer Ingress`.
|
|||
|
||||
## Loss of client source IP for external traffic
|
||||
|
||||
Due to the implementation of this feature, the source IP for sessions as seen in the target container will *not be the original source IP* of the client. This is the default behavior as of Kubernetes v1.4. However, starting in v1.4, an optional alpha feature has been added
|
||||
Due to the implementation of this feature, the source IP for sessions as seen in the target container will *not be the original source IP* of the client. This is the default behavior as of Kubernetes v1.5. However, starting in v1.5, an optional beta feature has been added
|
||||
that will preserve the client Source IP for GCE/GKE environments. This feature will be phased in for other cloud providers in subsequent releases.
|
||||
|
||||
## Annotation to modify the LoadBalancer behavior for preservation of Source IP
|
||||
In 1.4, an Alpha feature has been added that changes the behavior of the external LoadBalancer feature.
|
||||
In 1.5, a Beta feature has been added that changes the behavior of the external LoadBalancer feature.
|
||||
|
||||
This feature can be activated by adding the alpha annotation below to the metadata section of the Service Configuration file.
|
||||
This feature can be activated by adding the beta annotation below to the metadata section of the Service Configuration file.
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -104,7 +104,7 @@ This feature can be activated by adding the alpha annotation below to the metada
|
|||
"metadata": {
|
||||
"name": "example-service",
|
||||
"annotations": {
|
||||
"service.alpha.kubernetes.io/external-traffic": "OnlyLocal"
|
||||
"service.beta.kubernetes.io/external-traffic": "OnlyLocal"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
|
@ -120,18 +120,7 @@ This feature can be activated by adding the alpha annotation below to the metada
|
|||
}
|
||||
```
|
||||
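Alternatively, a hedged one-liner that adds the same annotation to an existing Service (`example-service` matches the name used in the configuration above):

```shell
kubectl annotate service example-service service.beta.kubernetes.io/external-traffic=OnlyLocal
```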
|
||||
### Alpha Feature Gate for the 'service.alpha.kubernetes.io/external-traffic' annotation
|
||||
|
||||
Alpha features are not enabled by default, they must be enabled using the release gate command line flags
|
||||
for kube-controller-manager and kube-proxy.
|
||||
See [https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/runtimeconfig.md](Runtime feature flags proposal) for more details on feature gate flags.
|
||||
|
||||
If this feature is not enabled in your cluster, this annotation in your service configuration will be rejected.
|
||||
|
||||
### Implementation across different cloudproviders/environments
|
||||
|
||||
Note that this feature is not currently implemented for all cloudproviders/environments.
|
||||
This feature does not work for nodePorts yet, so environments/cloud providers with proxy-style load-balancers cannot use it yet.
|
||||
**Note that this feature is not currently implemented for all cloudproviders/environments.**
|
||||
|
||||
### Caveats and Limitations when preserving source IPs
|
||||
|
||||
|
|
|
@ -362,7 +362,7 @@ parameters:
|
|||
* `type`: `pd-standard` or `pd-ssd`. Default: `pd-ssd`
|
||||
* `zone`: GCE zone. If not specified, a random zone in the same region as controller-manager will be chosen.
|
||||
|
||||
#### GLUSTERFS
|
||||
#### Glusterfs
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1beta1
|
||||
|
@ -371,18 +371,23 @@ metadata:
|
|||
name: slow
|
||||
provisioner: kubernetes.io/glusterfs
|
||||
parameters:
|
||||
endpoint: "glusterfs-cluster"
|
||||
resturl: "http://127.0.0.1:8081"
|
||||
restauthenabled: "true"
|
||||
restuser: "admin"
|
||||
restuserkey: "password"
|
||||
secretNamespace: "default"
|
||||
secretName: "heketi-secret"
|
||||
|
||||
```
|
||||
|
||||
* `endpoint`: `glusterfs-cluster` is the endpoint/service name which includes GlusterFS trusted pool IP addresses and this parameter is mandatory.
|
||||
* `resturl` : Gluster REST service url which provisions gluster volumes on demand. The format should be `http://IPaddress:Port` and this parameter is mandatory when using the GlusterFS dynamic provisioner.
|
||||
* `restauthenabled` : A boolean value that indicates whether Gluster REST service authentication is enabled on the REST server. If this value is 'true', you must supply values for the 'restuser' and 'restuserkey' parameters."
|
||||
* `restuser` : Gluster REST service user, who has access to create volumes in the Gluster Trusted Pool.
|
||||
* `restuserkey` : Gluster REST service user's password, will be used for authentication to the REST server.
|
||||
* `resturl`: Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format should be `IPaddress:Port`, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in an OpenShift/Kubernetes setup, this can have a format similar to
|
||||
`http://heketi-storage-project.cloudapps.mystorage.com`, where the FQDN is a resolvable Heketi service URL.
|
||||
* `restauthenabled` : Gluster REST service authentication boolean that enables authentication to the REST server. If this value is 'true', `restuser` and `restuserkey` or `secretNamespace` + `secretName` have to be filled. This option is deprecated; authentication is enabled when any of `restuser`, `restuserkey`, `secretName`, or `secretNamespace` is specified.
|
||||
* `restuser` : Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.
|
||||
* `restuserkey` : Gluster REST service/Heketi user's password which will be used for authentication to the REST server. This parameter is deprecated in favor of `secretNamespace` + `secretName`.
|
||||
* `secretNamespace` + `secretName` : Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password will be used when both `secretNamespace` and `secretName` are omitted. The provided secret must have type "kubernetes.io/glusterfs", e.g. created in this way:
|
||||
```
|
||||
$ kubectl create secret generic heketi-secret --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' --namespace=default
|
||||
```
|
||||
|
||||
#### OpenStack Cinder
|
||||
|
||||
|
@ -414,6 +419,67 @@ parameters:
|
|||
|
||||
* `diskformat`: `thin`, `zeroedthick` and `eagerzeroedthick`. Default: `"thin"`.
|
||||
|
||||
#### Ceph RBD
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1beta1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: fast
|
||||
provisioner: kubernetes.io/rbd
|
||||
parameters:
|
||||
monitors: 10.16.153.105:6789
|
||||
adminId: kube
|
||||
adminSecretName: ceph-secret
|
||||
adminSecretNamespace: kube-system
|
||||
pool: kube
|
||||
userId: kube
|
||||
userSecretName: ceph-secret-user
|
||||
```
|
||||
|
||||
* `monitors`: Ceph monitors, comma delimited. This parameter is required.
|
||||
* `adminId`: Ceph client ID that is capable of creating images in the pool. Default is "admin".
|
||||
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
|
||||
* `adminSecretName`: Secret Name for `adminId`. This parameter is required. The provided secret must have type "kubernetes.io/rbd".
|
||||
* `pool`: Ceph RBD pool. Default is "rbd".
|
||||
* `userId`: Ceph client ID that is used to map the RBD image. Default is the same as `adminId`.
|
||||
* `userSecretName`: The name of Ceph Secret for `userId` to map RBD image. It must exist in the same namespace as PVCs. This parameter is required. The provided secret must have type "kubernetes.io/rbd", e.g. created in this way:
|
||||
```
|
||||
$ kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' --namespace=kube-system
|
||||
```
|
||||
|
||||
#### Quobyte
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1beta1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/quobyte
|
||||
parameters:
|
||||
quobyteAPIServer: "http://138.68.74.142:7860"
|
||||
registry: "138.68.74.142:7861"
|
||||
adminSecretName: "quobyte-admin-secret"
|
||||
adminSecretNamespace: "kube-system"
|
||||
user: "root"
|
||||
group: "root"
|
||||
quobyteConfig: "BASE"
|
||||
quobyteTenant: "DEFAULT"
|
||||
```
|
||||
|
||||
* `quobyteAPIServer`: API Server of Quobyte in the format `http(s)://api-server:7860`
|
||||
* `registry`: Quobyte registry to use to mount the volume. You can specify the registry as a ``<host>:<port>`` pair, or if you want to specify multiple registries you just have to put a comma between them, e.g. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``. The host can be an IP address, or if you have a working DNS you can also provide the DNS names.
|
||||
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
|
||||
* `adminSecretName`: secret that holds information about the Quobyte user and the password to authenticate against the API server. The provided secret must have type "kubernetes.io/quobyte", e.g. created in this way:
|
||||
```
|
||||
$ kubectl create secret generic quobyte-admin-secret --type="kubernetes.io/quobyte" --from-literal=key='opensesame' --namespace=kube-system
|
||||
```
|
||||
* `user`: maps all access to this user. Default is "root".
|
||||
* `group`: maps all access to this group. Default is "nfsnobody".
|
||||
* `quobyteConfig`: use the specified configuration to create the volume. You can create a new configuration or modify an existing one with the Web console or the quobyte CLI. Default is "BASE".
|
||||
* `quobyteTenant`: use the specified tenant ID to create/delete the volume. This Quobyte tenant has to be already present in Quobyte. Default is "DEFAULT".
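
To tie the storage class definitions above together, here is a hedged sketch (not part of the original examples; the claim name and requested size are made up) of a PersistentVolumeClaim that requests a dynamically provisioned volume from one of these classes via the beta storage-class annotation:

```shell
# Illustration: request a 5Gi volume from the "slow" storage class defined above.
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```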
|
||||
|
||||
|
||||
## Writing Portable Configuration
|
||||
|
||||
If you're writing configuration templates or examples that run on a wide range of clusters
|
||||
|
|
|
@ -1,10 +1,23 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
|
||||
---
|
||||
|
||||
__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to
|
||||
[StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/).
|
||||
To use (or continue to use) PetSet in Kubernetes 1.5 or higher, you must
|
||||
[migrate your existing PetSets to StatefulSets](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/).
|
||||
|
||||
__This document has been deprecated__, but can still apply if you're using
|
||||
Kubernetes version 1.4 or earlier.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
|
@ -424,4 +437,8 @@ Deploying one RC of size 1/Service per pod is a popular alternative, as is simpl
|
|||
|
||||
## Next steps
|
||||
|
||||
The deployment and maintenance of stateful applications is a vast topic. The next step is to explore cluster bootstrapping and initialization, [here](/docs/user-guide/petset/bootstrapping/).
|
||||
* Learn about [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/),
|
||||
the replacement for PetSet introduced in Kubernetes version 1.5.
|
||||
* [Migrate your existing PetSets to StatefulSets](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/)
|
||||
when upgrading to Kubernetes version 1.5 or higher.
|
||||
|
||||
|
|
|
@ -1,243 +1,15 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tutorials/stateful-application/run-replicated-stateful-application).
|
||||
|
||||
## Overview
|
||||
__This document has been deprecated__.
|
||||
|
||||
This purpose of this guide is to help you become familiar with the runtime initialization of [Pet Sets](/docs/user-guide/petset). This guide assumes the same prerequisites, and uses the same terminology as the [Pet Set user document](/docs/user-guide/petset).
|
||||
|
||||
The most common way to initialize the runtime in a containerized environment is through a custom [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint). While this is not necessarily bad, making your application pid 1, and treating containers as processes in general, is good for a few reasons outside the scope of this document. Doing so allows you to run docker images from third-party vendors without modification. We will not be writing custom entrypoints for this example, but will instead use a feature called [init containers](http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization) to explain 2 common patterns that come up when deploying Pet Sets.
|
||||
|
||||
1. Transferring state across Pet restart, so that a future Pet is initialized with the computations of its past incarnation
|
||||
2. Initializing the runtime environment of a Pet based on existing conditions, like a list of currently healthy peers
|
||||
|
||||
## Example I: transferring state across Pet restart
|
||||
|
||||
This example shows you how to "carry over" runtime state across Pet restart by simulating virtual machines with a Pet Set.
|
||||
|
||||
### Background
|
||||
|
||||
Applications that incrementally build state usually need strong guarantees that they will not restart for extended durations. This is tricky to achieve with containers, so instead, we will ensure that the results of previous computations are transferred to future pets. Doing so is straightforward using vanilla Persistent Volumes (which Pet Set already gives you), unless the volume mount point itself needs to be initialized for the Pet to start. This is exactly the case with "virtual machine" docker images, like those based on ubuntu or fedora. Such images embed the entire rootfs of the distro, including package managers like `apt-get` that assume a certain layout of the filesystem. Meaning:
|
||||
|
||||
* If you mount an empty volume under `/usr`, you won't be able to `apt-get`
|
||||
* If you mount an empty volume under `/lib`, all your `apt-gets` will fail because there are no system libraries
|
||||
* If you clobber either of those, previous `apt-get` results will be dysfunctional
|
||||
|
||||
### Simulating Virtual Machines
|
||||
|
||||
Since Pet Set already gives each Pet a consistent identity, all we need is a way to initialize the user environment before allowing tools like `kubectl exec` to enter the application container.
|
||||
|
||||
Download [this](petset_vm.yaml) petset into a file called petset_vm.yaml, and create it:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f petset_vm.yaml
|
||||
service "ub" created
|
||||
petset "vm" created
|
||||
```
|
||||
|
||||
This should give you 2 pods.
|
||||
|
||||
```shell
|
||||
$ kubectl get po
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
vm-0 1/1 Running 0 37s
|
||||
vm-1 1/1 Running 0 2m
|
||||
```
|
||||
|
||||
We can exec into one and install nginx
|
||||
|
||||
```shell
|
||||
$ kubectl exec -it vm-0 /bin/sh
|
||||
vm-0 # apt-get update
|
||||
...
|
||||
vm-0 # apt-get install nginx -y
|
||||
```
|
||||
|
||||
On killing this pod we need it to come back with all the Pet Set properties, as well as the installed nginx packages.
|
||||
|
||||
```shell
|
||||
$ kubectl delete po vm-0
|
||||
pod "vm-0" deleted
|
||||
|
||||
$ kubectl get po
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
vm-0 1/1 Running 0 1m
|
||||
vm-1 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
Now you can exec back into vm-0 and start nginx
|
||||
|
||||
```shell
|
||||
$ kubectl exec -it vm-0 /bin/sh
|
||||
vm-0 # mkdir -p /var/log/nginx /var/lib/nginx; nginx -g 'daemon off;'
|
||||
|
||||
```
|
||||
|
||||
You can then access it from anywhere in the cluster (and because this is an example that simulates VMs, we're going to apt-get install netcat too)
|
||||
|
||||
```shell
|
||||
$ kubectl exec -it vm-1 /bin/sh
|
||||
vm-1 # apt-get update
|
||||
...
|
||||
vm-1 # apt-get install netcat -y
|
||||
vm-1 # printf "GET / HTTP/1.0\r\n\r\n" | netcat vm-0.ub 80
|
||||
```
|
||||
|
||||
It's worth exploring what just happened. Init containers run sequentially *before* the application container. In this example we used the init container to copy shared libraries from the rootfs, while preserving user installed packages across container restart.
|
||||
|
||||
```yaml
|
||||
pod.beta.kubernetes.io/init-containers: '[
|
||||
{
|
||||
"name": "rootfs",
|
||||
"image": "ubuntu:15.10",
|
||||
"command": [
|
||||
"/bin/sh",
|
||||
"-c",
|
||||
"for d in usr lib etc; do cp -vnpr /$d/* /${d}mnt; done;"
|
||||
],
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "usr",
|
||||
"mountPath": "/usrmnt"
|
||||
},
|
||||
{
|
||||
"name": "lib",
|
||||
"mountPath": "/libmnt"
|
||||
},
|
||||
{
|
||||
"name": "etc",
|
||||
"mountPath": "/etcmnt"
|
||||
}
|
||||
]
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
**It's important to note that the init container, when used this way, must be idempotent, or it'll end up clobbering data stored by a previous incarnation.**
|
||||
|
||||
|
||||
## Example II: initializing state based on environment
|
||||
|
||||
In this example we are going to set up a cluster of nginx servers, just like we did in the Pet Set [user guide](/docs/user-guide/petset), but make one of them a master. All the other nginx servers will simply proxy requests to the master. This is a common deployment pattern for databases like MySQL, but we're going to replace the database with a stateless webserver to simplify the problem.
|
||||
|
||||
### Background
|
||||
|
||||
Most clustered applications, such as MySQL, require an admin to create a config file based on the current state of the world. The most common dynamic variable in such config files is a list of peers, or other Pets running similar database servers that are currently serving requests. The Pet Set user guide already [touched on this topic](/docs/user-guide/petset#peer-discovery); we'll explore it in greater depth in the context of writing a config file with a list of peers.
|
||||
|
||||
Here's a tiny peer finder helper script that handles peer discovery, [available here](https://github.com/kubernetes/contrib/tree/master/pets/peer-finder). The peer finder takes 3 important arguments:
|
||||
|
||||
* A DNS domain
|
||||
* An `on-start` script to run with the initial constituency of the given domain as input
|
||||
* An `on-change` script to run every time the constituency of the given domain changes
|
||||
|
||||
The role of the peer finder:
|
||||
|
||||
* Poll DNS for SRV records of a given domain till the `hostname` of the pod it's running in shows up as a subdomain
|
||||
* Pipe the sorted list of subdomains to the script specified by its `--on-start` argument
|
||||
* Exit with the appropriate error code if no `--on-change` script is specified
|
||||
* Loop invoking `--on-change` for every change
|
||||
|
||||
You can invoke the peer finder inside the Pets we created in the last example:
|
||||
|
||||
```shell
|
||||
$ kubectl exec -it vm-0 /bin/sh
|
||||
vm-0 # apt-get update
|
||||
...
|
||||
vm-0 # apt-get install curl -y
|
||||
vm-0 # curl -sSL -o /peer-finder https://storage.googleapis.com/kubernetes-release/pets/peer-finder
|
||||
vm-0 # chmod -c 755 peer-finder
|
||||
|
||||
vm-0 # ./peer-finder
|
||||
2016/06/23 21:25:46 Incomplete args, require -on-change and/or -on-start, -service and -ns or an env var for POD_NAMESPACE.
|
||||
|
||||
vm-0 # ./peer-finder -on-start 'tee' -service ub -ns default
|
||||
|
||||
2016/06/23 21:30:21 Peer list updated
|
||||
was []
|
||||
now [vm-0.ub.default.svc.cluster.local vm-1.ub.default.svc.cluster.local]
|
||||
2016/06/23 21:30:21 execing: tee with stdin: vm-0.ub.default.svc.cluster.local
|
||||
vm-1.ub.default.svc.cluster.local
|
||||
2016/06/23 21:30:21 vm-0.ub.default.svc.cluster.local
|
||||
vm-1.ub.default.svc.cluster.local
|
||||
2016/06/23 21:30:22 Peer finder exiting
|
||||
```
|
||||
|
||||
### Nginx master/slave cluster
|
||||
|
||||
Let's create a Pet Set that writes out its own config based on a list of peers at initialization time, as described above.
|
||||
|
||||
Download and create [this](petset_peers.yaml) petset. It will set up 2 nginx webservers, but the second one will proxy all requests to the first:
|
||||
|
||||
```shell
|
||||
$ kubectl create -f petset_peers.yaml
|
||||
service "nginx" created
|
||||
petset "web" created
|
||||
|
||||
$ kubectl get po --watch-only
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 0/1 Pending 0 7s
|
||||
web-0 0/1 Init:0/1 0 18s
|
||||
web-0 0/1 PodInitializing 0 20s
|
||||
web-0 1/1 Running 0 21s
|
||||
web-1 0/1 Pending 0 0s
|
||||
web-1 0/1 Init:0/1 0 0s
|
||||
web-1 0/1 PodInitializing 0 20s
|
||||
web-1 1/1 Running 0 21s
|
||||
|
||||
$ kubectl get po
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 1m
|
||||
web-1 1/1 Running 0 47s
|
||||
```
|
||||
|
||||
web-1 will redirect all requests to its "master":
|
||||
|
||||
```shell
|
||||
$ kubectl exec -it web-1 -- curl localhost
|
||||
web-0
|
||||
```
|
||||
|
||||
If you scale the cluster, the new pods parent themselves to the same master. To test this you can `kubectl edit` the petset and change the `replicas` field to 5:
|
||||
|
||||
```shell
|
||||
$ kubectl edit petset web
|
||||
...
|
||||
|
||||
$ kubectl get po -l app=nginx
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 2h
|
||||
web-1 1/1 Running 0 2h
|
||||
web-2 1/1 Running 0 1h
|
||||
web-3 1/1 Running 0 1h
|
||||
web-4 1/1 Running 0 1h
|
||||
|
||||
$ for i in $(seq 0 4); do kubectl exec -it web-$i -- curl localhost; done | sort | uniq
|
||||
web-0
|
||||
```
|
||||
|
||||
Understanding how we generated the nginx config is important: we did so by passing an init script to the peer finder:
|
||||
|
||||
```shell
|
||||
echo `
|
||||
readarray PEERS;
|
||||
if [ 1 = ${#PEERS[@]} ]; then
|
||||
echo \"events{} http { server{ } }\";
|
||||
else
|
||||
echo \"events{} http { server{ location / { proxy_pass http://${PEERS[0]}; } } }\";
|
||||
fi;` > /conf/nginx.conf
|
||||
```
|
||||
|
||||
All that does is:
|
||||
|
||||
* read in a list of peers from stdin
|
||||
* if there's only 1, promote it to master
|
||||
* if there's more than 1, proxy requests to the 0th member of the list
|
||||
* write the config to a `hostPath` volume shared with the parent PetSet
|
||||
|
||||
**It's important to note that in practice all Pets should query their peers for the current master, instead of making assumptions based on the index.**
|
||||
|
||||
## Next Steps
|
||||
|
||||
You can deploy some example Pet Sets found [here](https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/petset), or write your own.
|
||||
|
|
|
@ -16,8 +16,8 @@ spec:
|
|||
selector:
|
||||
app: nginx
|
||||
---
|
||||
apiVersion: apps/v1alpha1
|
||||
kind: PetSet
|
||||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: web
|
||||
spec:
|
||||
|
@ -28,7 +28,6 @@ spec:
|
|||
labels:
|
||||
app: nginx
|
||||
annotations:
|
||||
pod.alpha.kubernetes.io/initialized: "true"
|
||||
pod.beta.kubernetes.io/init-containers: '[
|
||||
{
|
||||
"name": "peerfinder",
|
||||
|
@ -68,7 +67,6 @@ spec:
|
|||
}
|
||||
]'
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 0
|
||||
containers:
|
||||
- name: nginx
|
||||
image: gcr.io/google_containers/nginx-slim:0.8
|
||||
|
|
|
@ -14,8 +14,8 @@ spec:
|
|||
selector:
|
||||
app: ub
|
||||
---
|
||||
apiVersion: apps/v1alpha1
|
||||
kind: PetSet
|
||||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: vm
|
||||
spec:
|
||||
|
@ -26,7 +26,6 @@ spec:
|
|||
labels:
|
||||
app: ub
|
||||
annotations:
|
||||
pod.alpha.kubernetes.io/initialized: "true"
|
||||
pod.beta.kubernetes.io/init-containers: '[
|
||||
{
|
||||
"name": "rootfs",
|
||||
|
@ -53,7 +52,6 @@ spec:
|
|||
}
|
||||
]'
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 0
|
||||
containers:
|
||||
- name: ub
|
||||
image: ubuntu:15.10
|
||||
|
|
|
@ -47,7 +47,7 @@ ephemeral (rather than durable) entities. As discussed in [life of a
|
|||
pod](/docs/user-guide/pod-states/), pods are created, assigned a unique ID (UID), and
|
||||
scheduled to nodes where they remain until termination (according to restart
|
||||
policy) or deletion. If a node dies, the pods scheduled to that node are
|
||||
deleted, after a timeout period. A given pod (as defined by a UID) is not
|
||||
scheduled for deletion, after a timeout period. A given pod (as defined by a UID) is not
|
||||
"rescheduled" to a new node; instead, it can be replaced by an identical pod,
|
||||
with even the same name if desired, but with a new UID (see [replication
|
||||
controller](/docs/user-guide/replication-controller/) for more details). (In the future, a
|
||||
|
@ -135,9 +135,9 @@ simplified management.
|
|||
|
||||
## Durability of pods (or lack thereof)
|
||||
|
||||
Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
|
||||
Pods aren't intended to be treated as durable entities. They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
|
||||
|
||||
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](/docs/user-guide/replication-controller/)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
|
||||
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [Deployments](/docs/user-guide/deployments/)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
|
||||
|
||||
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
|
||||
|
||||
|
@ -150,9 +150,7 @@ Pod is exposed as a primitive in order to facilitate:
|
|||
* clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller"
|
||||
* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949)
|
||||
|
||||
There is new first-class support for pet-like pods with the [PetSet](/docs/user-guide/petset/) feature (currently in alpha).
|
||||
For prior versions of Kubernetes, best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service.
|
||||
|
||||
There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/controllers/statefulsets/) controller (currently in beta). The feature was alpha in 1.4 and was called [PetSet](/docs/user-guide/petset/). For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/).
|
||||
|
||||
## Termination of Pods
|
||||
|
||||
|
@ -170,7 +168,13 @@ An example flow:
|
|||
6. When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
|
||||
7. The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client.
|
||||
|
||||
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=<seconds>` option which allows a user to override the default and specify their own value. The value `0` indicates that delete should be immediate, and removes the pod in the API immediately so a new pod can be created with the same name. On the node pods that are set to terminate immediately will still be given a small grace period before being force killed.
|
||||
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=<seconds>` option which allows a user to override the default and specify their own value. The value `0` [force deletes](/docs/user-guide/pods/#force-termination-of-pods) the pod. In kubectl version >= 1.5, you must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions.
|
||||
|
||||
### Force deletion of pods
|
||||
|
||||
Force deletion of a pod is defined as deletion of a pod from the cluster state and etcd immediately. When a force deletion is performed, the apiserver does not wait for confirmation from the kubelet that the pod has been terminated on the node it was running on. It removes the pod in the API immediately so a new pod can be created with the same name. On the node, pods that are set to terminate immediately will still be given a small grace period before being force killed.
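
For example (the pod name `my-pod` is hypothetical), a force deletion with kubectl 1.5 or later looks like this:

```shell
# Immediately removes the pod object from the API server; use with caution.
kubectl delete pod my-pod --grace-period=0 --force
```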
|
||||
|
||||
Force deletions can be potentially dangerous for some pods and should be performed with caution. In case of StatefulSet pods, please refer to the task documentation for [deleting Pods from a StatefulSet](/docs/tasks/stateful-sets/deleting-pods/).
|
||||
|
||||
## Privileged mode for pod containers
|
||||
|
||||
|
|
|
@ -11,11 +11,11 @@ You've seen [how to configure and deploy pods and containers](/docs/user-guide/c
|
|||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Persistent storage
|
||||
## Using a Volume for storage
|
||||
|
||||
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more-persistent storage, outside the container file system, you need a [*volume*](/docs/user-guide/volumes). This is especially important to stateful applications, such as key-value stores and databases.
|
||||
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. For more consistent storage that lasts for the life of a Pod, you need a [*volume*](/docs/user-guide/volumes). This is especially important to stateful applications, such as key-value stores and databases.
|
||||
|
||||
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
|
||||
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) and other examples. We can add a volume to it to store data as follows:
|
||||
|
||||
{% include code.html language="yaml" file="redis-deployment.yaml" ghlink="/docs/user-guide/redis-deployment.yaml" %}
|
||||
|
||||
|
|
|
@ -2,60 +2,54 @@
|
|||
assignees:
|
||||
- bryk
|
||||
- mikedanese
|
||||
- rf232
|
||||
|
||||
---
|
||||
|
||||
|
||||
Dashboard (the web-based user interface of Kubernetes) allows you to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resources itself. You can use it for getting an overview of applications running on the cluster, as well as for creating or modifying individual Kubernetes resources and workloads, such as Daemon sets, Pet sets, Replica sets, Jobs, Replication controllers and corresponding Services, or Pods.
|
||||
Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster itself along with its attendant resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard.
|
||||
|
||||
Dashboard also provides information on the state of Pods, Replication controllers, etc. and on any errors that might have occurred. You can inspect and manage the Kubernetes resources, as well as your deployed containerized applications. You can also change the number of replicated Pods, delete Pods, and deploy new applications using a deploy wizard.
|
||||
Dashboard also provides information on the state of Kubernetes resources in your cluster, and on any errors that may have occurred.
|
||||
|
||||
By default, Dashboard is installed as a cluster addon. It is enabled by default as of Kubernetes 1.2 clusters.
|
||||
![Kubernetes Dashboard UI](/images/docs/ui-dashboard.png)
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Dashboard access
|
||||
## Accessing the Dashboard UI
|
||||
|
||||
Navigate in your Browser to the following URL:
|
||||
```
|
||||
https://<kubernetes-master>/ui
|
||||
```
|
||||
This redirects to the following URL:
|
||||
```
|
||||
https://<kubernetes-master>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
|
||||
```
|
||||
The Dashboard UI lives in the `kube-system` [namespace](/docs/admin/namespaces/), but shows all resources from all namespaces in your environment.
|
||||
There are multiple ways you can access the Dashboard UI; either by using the kubectl command-line interface, or by accessing the Kubernetes master apiserver using your web browser.
|
||||
|
||||
If you find that you are not able to access Dashboard, you can install and open the latest stable release by running the following command:
|
||||
### Command line proxy
|
||||
You can access Dashboard using the kubectl command-line tool by running the following command:
|
||||
|
||||
```
|
||||
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
|
||||
$ kubectl proxy
|
||||
```
|
||||
|
||||
Then, navigate to
|
||||
kubectl will handle authentication with the apiserver and make Dashboard available at [http://localhost:8001/ui](http://localhost:8001/ui)
|
||||
|
||||
```
|
||||
https://<kubernetes-master>/ui
|
||||
```
|
||||
The UI can _only_ be accessed from the machine where the command is executed. See `kubectl proxy --help` for more options.
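
For instance (the port value is chosen purely for illustration), you can serve the proxy on a different local port and then reach the UI at the corresponding address:

```shell
# Serve the API (and the Dashboard UI) on local port 9090 instead of the default 8001.
kubectl proxy --port=9090
# Dashboard is then available at http://localhost:9090/ui
```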
|
||||
|
||||
In case you have to provide a password, use the following command to find it out:
|
||||
### Master server
|
||||
You may access the UI directly via the Kubernetes master apiserver. Open a browser and navigate to `https://<kubernetes-master>/ui`, where `<kubernetes-master>` is the IP address or domain name of the Kubernetes
|
||||
master.
|
||||
|
||||
```
|
||||
kubectl config view
|
||||
```
|
||||
Please note that this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., `kubeadm`). Refer to the [authentication admin documentation](/docs/admin/authentication/) for information on how to configure authentication manually.
|
||||
|
||||
## Welcome page
|
||||
If the username and password are configured but unknown to you, then use `kubectl config view` to find them.
|
||||
|
||||
When accessing Dashboard on an empty cluster for the first time, the Welcome page is displayed. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by **default** in the `kube-system` [namespace](/docs/admin/namespaces/) of your cluster, for example monitoring applications such as Heapster.
|
||||
## Welcome view
|
||||
|
||||
When you access Dashboard on an empty cluster, you'll see the welcome page. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by default in the `kube-system` [namespace](/docs/admin/namespaces/) of your cluster, for example the Dashboard itself.
|
||||
|
||||
![Kubernetes Dashboard welcome page](/images/docs/ui-dashboard-zerostate.png)
|
||||
|
||||
## Deploying containerized applications
|
||||
|
||||
Dashboard lets you create and deploy a containerized application as a Replication Controller and corresponding Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing the required information.
|
||||
Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing application configuration.
|
||||
|
||||
To access the deploy wizard from the Welcome page, click the respective button. To access the wizard at a later point in time, click the **DEPLOY APP** or **UPLOAD YAML** link in the upper right corner of any page listing workloads.
|
||||
To access the deploy wizard from the Welcome page, click the respective button. To access the wizard at a later point in time, click the **CREATE** button in the upper right corner of any page.
|
||||
|
||||
![Deploy wizard](/images/docs/ui-dashboard-deploy-simple.png)
|
||||
|
||||
|
@ -63,7 +57,7 @@ To access the deploy wizard from the Welcome page, click the respective button.
|
|||
|
||||
The deploy wizard expects that you provide the following information:
|
||||
|
||||
- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Replication Controller and Service, if any, that will be deployed.
|
||||
- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed.
|
||||
|
||||
The application name must be unique within the selected Kubernetes [namespace](/docs/admin/namespaces/). It must start and end with a lowercase character, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored.
|
||||
|
||||
|
@ -71,7 +65,7 @@ The deploy wizard expects that you provide the following information:
|
|||
|
||||
- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer.
|
||||
|
||||
A [Replication Controller](/docs/user-guide/replication-controller/) will be created to maintain the desired number of Pods across your cluster.
|
||||
A [Deployment](/docs/user-guide/deployment/) will be created to maintain the desired number of Pods across your cluster.
|
||||
|
||||
- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a [Service](http://kubernetes.io/docs/user-guide/services/) onto an external, maybe public IP address outside of your cluster (external Service). For external Services, you may need to open up one or more ports to do so. Find more details [here](/docs/user-guide/services-firewalls/).
|
||||
|
||||
|
@ -81,11 +75,9 @@ The deploy wizard expects that you provide the following information:
|
|||
|
||||
If needed, you can expand the **Advanced options** section where you can specify more settings:
|
||||
|
||||
![Deploy wizard advanced options](/images/docs/ui-dashboard-deploy-more.png)
|
||||
- **Description**: The text you enter here will be added as an [annotation](/docs/user-guide/annotations/) to the Deployment and displayed in the application's details.
|
||||
|
||||
- **Description**: The text you enter here will be added as an [annotation](/docs/user-guide/annotations/) to the Replication Controller and displayed in the application's details.
|
||||
|
||||
- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Replication Controller, Service (if any), and Pods, such as release, environment, tier, partition, and release track.
|
||||
- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track.
|
||||
|
||||
Example:
|
||||
|
||||
|
@ -118,89 +110,54 @@ track=stable
|
|||
|
||||
### Uploading a YAML or JSON file
|
||||
|
||||
Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes' [API](http://kubernetes.io/docs/api/) resource schemas as the configuration schemas.
|
||||
Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes [API](http://kubernetes.io/docs/api/) resource schemas.
|
||||
|
||||
As an alternative to specifying application details in the deploy wizard, you can define your Replication Controllers and Services in YAML or JSON files, and upload the files to your Pods:
|
||||
As an alternative to specifying application details in the deploy wizard, you can define your application in YAML or JSON files, and upload the files using Dashboard:
|
||||
|
||||
![Deploy wizard file upload](/images/docs/ui-dashboard-deploy-file.png)
|
||||
|
||||
## Managing resources
|
||||
## Using Dashboard
|
||||
The following sections describe views of the Kubernetes Dashboard UI; what they provide and how they can be used.
|
||||
|
||||
### List view
|
||||
### Navigation
|
||||
|
||||
As soon as applications are running on your cluster, Dashboard's initial view defaults to showing all resources available in all namespaces in a list view, for example:
|
||||
When there are Kubernetes objects defined in the cluster, Dashboard shows them in the initial view. By default only objects from the _default_ namespace are shown and this can be changed using the namespace selector located in the navigation menu.
|
||||
|
||||
Dashboard shows most Kubernetes object kinds and groups them in a few menu categories.
|
||||
|
||||
#### Admin
|
||||
View for cluster and namespace administrators. It lists Nodes, Namespaces and Persistent Volumes and has detail views for them. Node list view contains CPU and memory usage metrics aggregated across all Nodes. The details view shows the metrics for a Node, its specification, status, allocated resources, events and pods running on the node.
|
||||
|
||||
![Node detail view](/images/docs/ui-dashboard-node.png)
|
||||
|
||||
#### Workloads
|
||||
Entry point view that shows all applications running in the selected namespace. The view lists applications by workload kind (e.g., Deployments, Replica Sets, Stateful Sets, etc.) and each workload kind can be viewed separately. The lists summarize actionable information about the workloads, such as the number of ready pods for a Replica Set or current memory usage for a Pod.
|
||||
|
||||
![Workloads view](/images/docs/ui-dashboard-workloadview.png)
|
||||
|
||||
For every resource, the list view shows the following information:
|
||||
Detail views for workloads show status and specification information and surface relationships between objects. For example, the Pods that a Replica Set is controlling, or the new Replica Sets and Horizontal Pod Autoscalers for Deployments.
|
||||
|
||||
* Name of the resource
|
||||
* All labels assigned to the resource
|
||||
* Number of pods assigned to the resource
|
||||
* Age, i.e. amount of time passed since the resource has been created
|
||||
* Docker container image
|
||||
![Deployment detail view](/images/docs/ui-dashboard-deployment-detail.png)
|
||||
|
||||
To filter the resources and only show those of a specific namespace, select it from the dropdown list in the right corner of the title bar:
|
||||
#### Services and discovery
|
||||
The services and discovery view shows Kubernetes resources that allow for exposing services to the external world and discovering them within a cluster. For that reason, the Service and Ingress views show the Pods targeted by them, internal endpoints for cluster connections, and endpoints for external users.
|
||||
|
||||
![Namespace selector](/images/docs/ui-dashboard-namespace.png)
|
||||
![Service list partial view](/images/docs/ui-dashboard-service-list.png)
|
||||
|
||||
### Details view
|
||||
#### Storage
|
||||
Storage view shows Persistent Volume Claim resources which are used by applications for storing data.
|
||||
|
||||
When clicking a resource, the details view is opened, for example:
|
||||
#### Config
|
||||
The config view shows all Kubernetes resources that are used for live configuration of applications running in clusters, currently Config Maps and Secrets. The view allows for editing and managing config objects, and displays secrets hidden by default.
|
||||
|
||||
![Details view](/images/docs/ui-dashboard-detailsview.png)
|
||||
![Secret detail view](/images/docs/ui-dashboard-secret-detail.png)
|
||||
|
||||
The **OVERVIEW** tab shows the actual resource details as well as the Pods the resource is running in.
|
||||
#### Logs viewer
|
||||
Pod lists and detail pages link to a logs viewer that is built into Dashboard. The viewer allows for drilling down into logs from containers belonging to a single Pod.
|
||||
|
||||
The **EVENTS** tab can be useful for debugging applications.
|
||||
|
||||
To go back to the workloads overview, click the Kubernetes logo.
|
||||
|
||||
### Workload categories
|
||||
|
||||
Workloads are categorized as follows:
|
||||
|
||||
* [Daemon Sets](http://kubernetes.io/docs/admin/daemons/) which ensure that all or some of the nodes in your cluster run a copy of a Pod.
|
||||
* [Deployments](http://kubernetes.io/docs/user-guide/deployments/) which provide declarative updates for Pods and Replica Sets (the next-generation [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/))
|
||||
The Details page for a Deployment lists resource details, as well as new and old Replica Sets. The resource details also include information on the [RollingUpdate](http://kubernetes.io/docs/user-guide/rolling-updates/) strategy, if any.
|
||||
* [Pet Sets](http://kubernetes.io/docs/user-guide/petset/) (nominal Services, also known as load-balanced Services) for legacy application support.
|
||||
* [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/) for using label selectors.
|
||||
* [Jobs](http://kubernetes.io/docs/user-guide/jobs/) for creating one or more Pods, ensuring that a specified number of them successfully terminate, and tracking the completions.
|
||||
* [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/)
|
||||
* [Pods](http://kubernetes.io/docs/user-guide/pods/)
|
||||
|
||||
You can display the resources of a specific category in two ways:
|
||||
|
||||
* Click the category name, e.g. **Deployments**
|
||||
* Edit the Dashboard URL and add the name of a desired category. For example, to display the list of Replication Controllers, specify the following URL:
|
||||
|
||||
```
|
||||
http://<your_host>:9090/#/replicationcontroller
|
||||
```
|
||||
|
||||
### Actions
|
||||
|
||||
Every list view offers an action menu to the right of the listed resources. The related details view provides the same actions as buttons in the upper right corner of the page.
|
||||
|
||||
* **Edit**
|
||||
|
||||
Opens a text editor so that you can instantly view or update the JSON or YAML file of the respective resource.
|
||||
|
||||
* **Delete**
|
||||
|
||||
After confirmation, deletes the respective resource.
|
||||
|
||||
When deleting a Replication Controller, the Pods managed by it are also deleted. You have the option to also delete Services related to the Replication Controller.
|
||||
|
||||
* **View details**
|
||||
|
||||
For Replication Controllers only. Takes you to the details page where you can view more information about the Pods that make up your application.
|
||||
|
||||
* **Scale**
|
||||
|
||||
For Replication Controllers only. Changes the number of Pods your application runs in. The respective Replication Controller will be updated to reflect the newly specified number. Be aware that setting a high number of Pods may result in a decrease of performance of the cluster or Dashboard itself.
|
||||
![Logs viewer](/images/docs/ui-dashboard-logs-view.png)
|
||||
|
||||
## More information
|
||||
|
||||
For more information, see the
|
||||
[Kubernetes Dashboard repository](https://github.com/kubernetes/dashboard).
|
||||
[Kubernetes Dashboard project page](https://github.com/kubernetes/dashboard).
|
||||
|
|
13
editdocs.md
|
@ -22,7 +22,7 @@ $( document ).ready(function() {
|
|||
|
||||
<h2>Continue your edit</h2>
|
||||
|
||||
<p>Click the below link to edit the page you were just on. When you are done, press "Commit Changes" at the bottom of the screen. This will create a copy of our site on your GitHub account called a "fork." You can make other changes in your fork after it is created, if you want. When you are ready to send us all your changes, go to the index page for your fork and click "New Pull Request" to let us know about it.</p>
|
||||
<p>Click the button below to edit the page you were just on. When you are done, click <b>Commit Changes</b> at the bottom of the screen. This creates a copy of our site in your GitHub account called a <i>fork</i>. You can make other changes in your fork after it is created, if you want. When you are ready to send us all your changes, go to the index page for your fork and click <b>New Pull Request</b> to let us know about it.</p>
|
||||
|
||||
<p><a id="continueEditButton" class="button"></a></p>
|
||||
|
||||
|
@ -31,12 +31,19 @@ $( document ).ready(function() {
|
|||
|
||||
<h2>Edit our site in the cloud</h2>
|
||||
|
||||
<p>Click the below button to visit the repo for our site. You can then click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.</p>
|
||||
<p>Click the button below to visit the repo for our site. You can then click the <b>Fork</b> button in the upper-right area of the screen to create a copy of our site in your GitHub account called a <i>fork</i>. Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click <b>New Pull Request</b> to let us know about it.</p>
|
||||
|
||||
<p><a class="button" href="https://github.com/kubernetes/kubernetes.github.io/">Browse this site's source code</a></p>
|
||||
|
||||
</div>
|
||||
<!-- END: Dynamic section -->
|
||||
|
||||
<br/>
|
||||
|
||||
{% include_relative README.md %}
|
||||
For more information about contributing to the Kubernetes documentation, see:
|
||||
|
||||
* [Creating a Documentation Pull Request](http://kubernetes.io/docs/contribute/create-pull-request/)
|
||||
* [Writing a New Topic](http://kubernetes.io/docs/contribute/write-new-topic/)
|
||||
* [Staging Your Documentation Changes](http://kubernetes.io/docs/contribute/stage-documentation-changes/)
|
||||
* [Using Page Templates](http://kubernetes.io/docs/contribute/page-templates/)
|
||||
* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style-guide/)
|
||||
|
|
Before Width: | Height: | Size: 70 KiB |
Before Width: | Height: | Size: 71 KiB |
Before Width: | Height: | Size: 52 KiB |
Before Width: | Height: | Size: 67 KiB |
Before Width: | Height: | Size: 35 KiB |
Before Width: | Height: | Size: 76 KiB |
Before Width: | Height: | Size: 25 KiB After Width: | Height: | Size: 199 KiB |
Before Width: | Height: | Size: 78 KiB |
Before Width: | Height: | Size: 51 KiB After Width: | Height: | Size: 305 KiB |
After Width: | Height: | Size: 372 KiB |
After Width: | Height: | Size: 626 KiB |
Before Width: | Height: | Size: 7.1 KiB |
After Width: | Height: | Size: 437 KiB |
After Width: | Height: | Size: 303 KiB |
After Width: | Height: | Size: 91 KiB |
Before Width: | Height: | Size: 54 KiB After Width: | Height: | Size: 377 KiB |
Before Width: | Height: | Size: 38 KiB After Width: | Height: | Size: 242 KiB |
After Width: | Height: | Size: 373 KiB |
After Width: | Height: | Size: 7.3 KiB |
After Width: | Height: | Size: 6.9 KiB |
|
@ -6,10 +6,6 @@ $( document ).ready(function() {
|
|||
var forwardingURL=window.location.href;
|
||||
|
||||
var redirects = [{
|
||||
"from": "third_party/swagger-ui",
|
||||
"to": "http://kubernetes.io/kubernetes/third_party/swagger-ui/"
|
||||
},
|
||||
{
|
||||
"from": "resource-quota",
|
||||
"to": "http://kubernetes.io/docs/admin/resourcequota/"
|
||||
},
|
||||
|
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
|
||||
Kubernetes swagger UI has now been replaced by our generated API reference docs
|
||||
which can be accessed at http://kubernetes.io/docs/api-reference/README/.
|