Various manual fixes for syntax highlighting and lists; much prettier docs ensue.

pull/43/head
John Mulhausen 2016-02-17 16:18:59 -08:00
parent d6311dbd19
commit f0aea25011
102 changed files with 1486 additions and 1364 deletions

View File

@ -3,6 +3,7 @@ markdown: kramdown
kramdown:
input: GFM
html_to_native: true
hard_wrap: false
syntax_highlighter: rouge
baseurl: /
incremental: true

View File

@ -16,6 +16,7 @@ The Kubernetes API is served by the Kubernetes apiserver process. Typically,
there is one of these running on a single kubernetes-master node.
By default, the Kubernetes apiserver serves HTTP on 2 ports:
1. Localhost Port
- serves HTTP
- default is port 8080, change with `--insecure-port` flag.
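As a quick illustration (assuming the default `--insecure-port` value above and that you are logged in on the `kubernetes-master` machine), you can talk to this port directly:

```shell
# Query the insecure localhost port directly on the master.
# Assumes the default port 8080; adjust if --insecure-port was changed.
curl http://localhost:8080/api
curl http://localhost:8080/healthz
```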
@ -47,6 +48,7 @@ kube-up.sh. Other cloud providers may vary.
There are three differently configured serving ports because there are a
variety of use cases:
1. Clients outside of a Kubernetes cluster, such as a human running `kubectl`
on a desktop machine. Currently, such clients access the Localhost Port via a proxy (nginx)
running on the `kubernetes-master` machine. The proxy can use cert-based authentication

View File

@ -12,6 +12,7 @@ the request, (such as user, resource, and namespace) with access
policies. An API call must be allowed by some policy in order to proceed.
The following implementations are available, and are selected by flag:
- `--authorization-mode=AlwaysDeny`
- `--authorization-mode=AlwaysAllow`
- `--authorization-mode=ABAC`
@ -25,6 +26,7 @@ The following implementations are available, and are selected by flag:
### Request Attributes
A request has 5 attributes that can be considered for authorization:
- user (the user-string which a user was authenticated as).
- group (the list of group names the authenticated user is a member of).
- whether the request is readonly (GETs are readonly).
@ -46,6 +48,7 @@ The file format is [one JSON object per line](http://jsonlines.org/). There sho
one map per line.
Each line is a "policy object". A policy object is a map with the following properties:
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the username of the authenticated user.
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user.
- `readonly`, type boolean, when true, means that the policy only applies to GET
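For illustration, here is a minimal sketch of such a policy file and the apiserver flags that consume it; the file path and the user/group names are hypothetical:

```shell
# One JSON object per line; user/group names and the path are hypothetical.
cat > /srv/kubernetes/abac-policy.jsonl <<'EOF'
{"user": "admin"}
{"user": "alice"}
{"user": "bob", "readonly": true}
{"group": "dev", "readonly": true}
EOF

kube-apiserver \
  --authorization-mode=ABAC \
  --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
  # ...plus the rest of your usual apiserver flags
```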
@ -125,7 +128,4 @@ An authorization module can be completely implemented in go, or can call out
to a remote authorization service. Authorization modules can implement
their own caching to reduce the cost of repeated authorization calls with the
same or similar arguments. Developers should then consider the interaction between
caching and revocation of permissions.

View File

@ -39,6 +39,7 @@ of the relevant log files. (note that on systemd-based systems, you may need to
This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.
Root causes:
- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
@ -46,6 +47,7 @@ Root causes:
- Operator error, e.g. misconfigured Kubernetes software or application software
Specific scenarios:
- Apiserver VM shutdown or apiserver crashing
- Results
- unable to stop, update, or start new pods, services, replication controller
@ -79,6 +81,7 @@ Specific scenarios:
- etc.
Mitigations:
- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes

View File

@ -165,8 +165,4 @@ on GCE.
DaemonSet objects effectively have [API version `v1alpha1`](../api.html#api-versioning).
Alpha objects may change or even be discontinued in future software releases.
However, due to a known issue, they will appear as API version `v1beta1` if enabled.
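If you want to experiment with them anyway, the alpha resource has to be turned on at the apiserver via `--runtime-config`; the exact value below is our best guess for this release, so double-check it against your version's documentation:

```shell
# Hedged sketch: enable the DaemonSet extensions resource on the apiserver.
# Verify the runtime-config value against your release's documentation.
kube-apiserver \
  --runtime-config=extensions/v1beta1/daemonsets=true
  # ...plus the rest of your usual apiserver flags
```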

View File

@ -36,7 +36,4 @@ time.
## For more information
See [the docs for the DNS cluster addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md).

View File

@ -20,7 +20,8 @@ Setting up a truly reliable, highly available distributed system requires a numb
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
of these steps in detail, but a summary is given here to help guide and orient the user.
The steps involved are as follows:
* [Creating the reliable constituent nodes that collectively form our HA master implementation.](#reliable-nodes)
* [Setting up a redundant, reliable storage layer with clustered etcd.](#establishing-a-redundant-reliable-data-storage-layer)
* [Starting replicated, load balanced Kubernetes API servers](#replicated-api-servers)
@ -143,7 +144,8 @@ First you need to create the initial log file, so that Docker mounts a file inst
touch /var/log/kube-apiserver.log
```
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
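A minimal sketch of preparing one master node with the pieces described above; the credential and certificate files themselves must come from your own setup and are not shown:

```shell
# Run on each master node.
touch /var/log/kube-apiserver.log
mkdir -p /srv/kubernetes
# Copy in basic_auth.csv, ca.crt, known_tokens.csv (and the other files this
# directory is documented to contain) from wherever you generate credentials.
```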

View File

@ -1,12 +1,9 @@
---
title: "Kubernetes Cluster Admin Guide"
---
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](../user-guide/README).
{% include pagetoc.html %}
## Planning a cluster

View File

@ -1,7 +1,6 @@
---
title: "Considerations for running multiple Kubernetes clusters"
---
You may want to set up multiple Kubernetes clusters, both to
have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
This document describes some of the issues to consider when making a decision about doing so.
@ -15,7 +14,9 @@ we [plan to do this in the future](../proposals/federation).
On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a
[zone](https://cloud.google.com/compute/docs/zones) or [availability
zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones).
We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because:
- compared to having a single global Kubernetes cluster, there are fewer single-points of failure
- compared to a cluster that spans availability zones, it is easier to reason about the availability properties of a
single-zone cluster.
@ -24,12 +25,14 @@ We suggest that all the VMs in a Kubernetes cluster should be in the same availa
It is okay to have multiple clusters per availability zone, though on balance we think fewer is better.
Reasons to prefer fewer clusters are:
- improved bin packing of Pods in some cases with more nodes in one cluster (less resource fragmentation)
- reduced operational overhead (though the advantage is diminished as ops tooling and processes mature)
- reduced per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage
of overall cluster cost for medium to large clusters).
Reasons to have multiple clusters include:
- strict security policies requiring isolation of one class of work from another (but, see Partitioning Clusters
below).
- test clusters to canary new Kubernetes releases or other cluster software.

View File

@ -5,6 +5,7 @@ title: "Networking in Kubernetes"
Kubernetes approaches networking somewhat differently than Docker does by
default. There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications: this is solved by
[pods](../user-guide/pods) and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
@ -59,6 +60,7 @@ different approach.
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
* all containers can communicate with all other containers without NAT
* all nodes can communicate with all containers (and vice-versa) without NAT
* the IP that a container sees itself as is the same IP that others see it as
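A rough way to spot-check the first two requirements on a running cluster; the pod names, the target IP, and the availability of `wget` in the image are assumptions:

```shell
# Find a pod's IP and the node it runs on (names here are hypothetical).
kubectl describe pod busybox-a | grep -E 'Node|IP'

# From a pod on a different node, fetch from that IP directly; no NAT should
# be involved. Assumes something in busybox-a is listening on port 80.
kubectl exec busybox-b -- wget -qO- http://10.244.1.5:80/
```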

View File

@ -134,6 +134,7 @@ When kubelet flag `--register-node` is true (the default), the kubelet will atte
register itself with the API server. This is the preferred pattern, used by most distros.
For self-registration, the kubelet is started with the following options:
- `--api-servers=` tells the kubelet the location of the apiserver.
- `--kubeconfig` tells kubelet where to find credentials to authenticate itself to the apiserver.
- `--cloud-provider=` tells the kubelet how to talk to a cloud provider to read metadata about itself.
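Putting those flags together, a self-registering kubelet invocation might look roughly like this; the addresses, paths, and cloud provider are placeholders, and distros normally bake these into their init configuration rather than running the command by hand:

```shell
# Placeholder values for illustration only.
kubelet \
  --register-node=true \
  --api-servers=https://kubernetes-master:6443 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cloud-provider=gce
```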

View File

@ -6,6 +6,7 @@ there is a concern that one team could use more than its fair share of resources
Resource quotas are a tool for administrators to address this concern. Resource quotas
work like this:
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- The administrator creates a Resource Quota for each namespace.
@ -28,6 +29,7 @@ work like this:
[admission controller](admission-controllers)) before the quota is checked to avoid this problem.
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 GiB and 10 cores,
let B use 10 GiB and 4 cores, and hold 2 GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
@ -130,6 +132,7 @@ expressed in absolute units. So, if you add nodes to your cluster, this does *n
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
- proportionally divide total cluster resources among several teams.
- allow each tenant to grow resource usage as needed, but have a generous
limit to prevent accidental resource exhaustion.
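As a sketch of the simpler per-namespace caps above (the namespace name and numbers are illustrative, matching the 20 GiB / 10 cores example), the administrator would create a `ResourceQuota` in the team's namespace:

```shell
# Hypothetical quota for a "team-a" namespace; adjust names and values.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a
spec:
  hard:
    cpu: "10"
    memory: 20Gi
EOF

# Check consumption against the quota:
kubectl describe quota compute-quota --namespace=team-a
```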

View File

@ -11,6 +11,7 @@ incomplete features are referred to in order to better describe service accounts
Kubernetes distinguishes between the concept of a user account and a service account
for a number of reasons:
- User accounts are for humans. Service accounts are for processes, which
run in pods.
- User accounts are intended to be global. Names must be unique across all
@ -29,6 +30,7 @@ for a number of reasons:
## Service account automation
Three separate components cooperate to implement the automation around service accounts:
- A Service account admission controller
- A Token controller
- A Service account controller
@ -39,6 +41,7 @@ The modification of pods is implemented via a plugin
called an [Admission Controller](admission-controllers). It is part of the apiserver.
It acts synchronously to modify pods as they are created or updated. When this plugin is active
(and it is by default on most distributions), then it does the following when a pod is created or modified:
1. If the pod does not have a `ServiceAccount` set, it sets the `ServiceAccount` to `default`.
2. It ensures that the `ServiceAccount` referenced by the pod exists, and otherwise rejects it.
4. If the pod does not contain any `ImagePullSecrets`, then `ImagePullSecrets` of the
@ -49,6 +52,7 @@ It acts synchronously to modify pods as they are created or updated. When this p
### Token Controller
TokenController runs as part of controller-manager. It acts asynchronously. It:
- observes serviceAccount creation and creates a corresponding Secret to allow API access.
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
- observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed
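You can watch this automation from the outside with a few read-only commands; the generated secret name will differ in your cluster:

```shell
# Every namespace gets a "default" service account.
kubectl get serviceaccounts

# The Token Controller creates a kubernetes.io/service-account-token secret
# for it; the generated name varies.
kubectl get secrets
kubectl describe serviceaccount default
```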

View File

@ -322,13 +322,15 @@ after an object is created/updated.
For example, the scheduler sets the `pod.spec.nodeName` field after the pod is created.
Late-initializers should only make the following types of modifications:
- Setting previously unset fields
- Adding keys to maps
- Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in
the type definition).
These conventions:
1. allow a user (with sufficient privilege) to override any system-default behaviors by setting
the fields that would otherwise have been defaulted.
1. enable updates from users to be merged with changes made during late initialization, using
@ -472,9 +474,10 @@ The following HTTP status codes may be returned by the API.
Kubernetes will always return the `Status` kind from any API endpoint when an error occurs.
Clients SHOULD handle these types of objects when appropriate.
A `Status` kind will be returned by the API in two cases:
* When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code).
* When a HTTP `DELETE` call is successful.
A `Status` kind will be returned by the API in two cases:
1. When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code).
2. When a HTTP `DELETE` call is successful.
The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.
@ -520,7 +523,8 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/
`details` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.
Possible values for the `reason` and `details` fields:
* `BadRequest`
* Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object.
* This is different than `status reason` `Invalid` above which indicates that the API call could possibly succeed, but the data was invalid.
@ -647,7 +651,8 @@ Annotations have very different intended usage from labels. We expect them to be
In fact, in-development API fields, including those used to represent fields of newer alpha/beta API versions in the older stable storage version, may be represented as annotations with the form `something.alpha.kubernetes.io/name` or `something.beta.kubernetes.io/name` (depending on our confidence in it). For example `net.alpha.kubernetes.io/policy` might represent an experimental network policy field.
Other advice regarding use of labels, annotations, and other generic map keys by Kubernetes components and tools:
- Key names should be all lowercase, with words separated by dashes, such as `desired-replicas`
- Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the latter if the label/annotation is specific to `foo`
- For instance, prefer `service-account.kubernetes.io/name` over `kubernetes.io/service-account.name`
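A tiny, hypothetical example combining these rules (the pod name and key are made up):

```shell
# Lowercase, dash-separated key under a component-specific prefix.
kubectl label pods my-pod foo.kubernetes.io/desired-replicas=3
```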

View File

@ -67,6 +67,7 @@ backward-compatibly.
Before talking about how to make API changes, it is worthwhile to clarify what
we mean by API compatibility. An API change is considered backward-compatible
if it:
* adds new functionality that is not required for correct behavior (e.g.,
does not add a new required field)
* does not change existing semantics, including:
@ -261,6 +262,7 @@ the release notes for the next release by labeling the PR with the "release-note
If you found that your change accidentally broke clients, it should be reverted.
In short, the expected API evolution is as follows:
* `extensions/v1alpha1` ->
* `newapigroup/v1alpha1` -> ... -> `newapigroup/v1alphaN` ->
* `newapigroup/v1beta1` -> ... -> `newapigroup/v1betaN` ->
@ -363,6 +365,7 @@ than the generic ones (which are based on reflections and thus are highly
inefficient).
The conversion code resides with each versioned API. There are two files:
- `pkg/api/<version>/conversion.go` containing manually written conversion
functions
- `pkg/api/<version>/conversion_generated.go` containing auto-generated
@ -381,8 +384,7 @@ Also note that you can (and for efficiency reasons should) use auto-generated
conversion functions when writing your conversion functions.
Once all the necessary manually written conversions are added, you need to
regenerate auto-generated ones. To regenerate them:
- run
regenerate auto-generated ones. To regenerate them, run:
```shell
hack/update-generated-conversions.sh
@ -404,11 +406,11 @@ structure changes done. You now need to generate code to handle deep copy
of your versioned api objects.
The deep copy code resides with each versioned API:
- `pkg/api/<version>/deep_copy_generated.go` containing auto-generated copy functions
- `pkg/apis/extensions/<version>/deep_copy_generated.go` containing auto-generated copy functions
To regenerate them:
- run
To regenerate them, run:
```shell
hack/update-generated-deep-copies.sh
@ -420,11 +422,11 @@ We are auto-generating code for marshaling and unmarshaling json representation
of api objects - this is to improve the overall system performance.
The auto-generated code resides with each versioned API:
- `pkg/api/<version>/types.generated.go`
- `pkg/apis/extensions/<version>/types.generated.go`
To regenerate them:
- run
To regenerate them, run:
```shell
hack/update-codecgen.sh

View File

@ -1,15 +1,14 @@
---
title: "Kubernetes Development Automation"
---
## Overview
Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low
brain power work. This document attempts to describe these processes.
## Submit Queue
In an effort to
In an effort to:
* reduce load on core developers
* maintain e2e stability
* load test github's label feature
@ -32,6 +31,7 @@ The status of the submit-queue is [online.](http://submit-queue.k8s.io/)
### Ready to merge status
A PR is considered "ready for merging" if it matches the following:
* it has the `lgtm` label, and that `lgtm` is newer than the latest commit
* it has passed the cla pre-submit and has the `cla:yes` label
* it has passed the travis and shippable pre-submit tests
@ -65,6 +65,7 @@ We also run a [github "munger"](https://github.com/kubernetes/contrib/tree/maste
This runs repeatedly over github pulls and issues and runs modular "mungers" similar to "mungedocs".
Currently this runs:
* blunderbuss - Tries to automatically find an owner for a PR without an owner, uses mapping file here:
https://github.com/kubernetes/contrib/blob/master/mungegithub/blunderbuss.yml
* needs-rebase - Adds `needs-rebase` to PRs that aren't currently mergeable, and removes it from those that are.

View File

@ -3,6 +3,7 @@ title: "devel/coding-conventions"
---
Code conventions
- Bash
- https://google-styleguide.googlecode.com/svn/trunk/shell.xml
- Ensure that build, release, test, and cluster-management scripts run on OS X
@ -30,6 +31,7 @@ Code conventions
- [Logging conventions](logging)
Testing conventions
- All new packages and most new significant functionality must come with unit tests
- Table-driven tests are preferred for testing multiple scenarios/inputs; for example, see [TestNamespaceAuthorization](https://releases.k8s.io/release-1.1/test/integration/auth_test.go)
- Significant features should come with integration (test/integration) and/or end-to-end (test/e2e) tests
@ -37,6 +39,7 @@ Testing conventions
- Unit tests must pass on OS X and Windows platforms - if you use Linux specific features, your test case must either be skipped on windows or compiled out (skipped is better when running Linux specific commands, compiled out is required when your code does not compile on Windows).
Directory and file conventions
- Avoid package sprawl. Find an appropriate subdirectory for new packages. (See [#4851](http://issues.k8s.io/4851) for discussion.)
- Libraries with no more appropriate home belong in new package subdirectories of pkg/util
- Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the "wait" package and include functionality like Poll. So the full name is wait.Poll
@ -52,6 +55,7 @@ Directory and file conventions
- This includes modified third-party code and excerpts, as well
Coding advice
- Go
- [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f)

View File

@ -199,7 +199,4 @@ explanation.
Obviously, none of these points are hard rules. There is no document that can
take the place of common sense and good taste. Use your best judgment, but put
a bit of thought into how your work can be made easier to review. If you do
these things your PRs will flow much more easily.

View File

@ -3,20 +3,16 @@ title: "GitHub Issues for the Kubernetes Project"
---
A quick overview of how we will review and prioritize incoming issues at https://github.com/kubernetes/kubernetes/issues
Priorities
----------
## Priorities
We will use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet.
Definitions
-----------
## Definitions
* P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it.
* P1 - must fix for earliest possible binary release (every two weeks)
* P2 - should be fixed in next major release version
* P3 - default priority for lower importance bugs that we still want to track and plan to fix at some point
* design - priority/design is for issues that are used to track design discussions
* support - priority/support is used for issues tracking user support requests
* untriaged - anything without a priority/X label will be considered untriaged

View File

@ -1,7 +1,6 @@
---
title: "Kubectl Conventions"
---
Updated: 8/27/2015
{% include pagetoc.html %}

View File

@ -17,6 +17,7 @@ which is similar to the one you have planned, consider improving that one.
Distros fall into two categories:
- **versioned distros** are tested to work with a particular binary release of Kubernetes. These
come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster.
- **development distros** are tested to work with the latest Kubernetes source code. But, there are
@ -28,6 +29,7 @@ There are different guidelines for each.
## Versioned Distro Guidelines
These guidelines say *what* to do. See the Rationale section for *why*.
- Send us a PR.
- Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily
search for uses of flags by guides.
@ -49,6 +51,7 @@ Just file an issue or chat us on [Slack](../troubleshooting.html#slack) and one
## Development Distro Guidelines
These guidelines say *what* to do. See the Rationale section for *why*.
- the main reason to add a new development distro is to support a new IaaS provider (VM and
network management). This means implementing a new `pkg/cloudprovider/providers/$IAAS_NAME`.
- Development distros should use Saltstack for Configuration Management.
@ -93,8 +96,4 @@ These guidelines say *what* to do. See the Rationale section for *why*.
CoreOS Fleet, Ansible, and others.
- You can still run code from head or your own branch
if you use another Configuration Management tool -- you just have to do some manual steps
during testing and deployment.

View File

@ -1,8 +1,6 @@
---
title: "Getting started on AWS EC2"
---
{% include pagetoc.html %}
## Prerequisites

View File

@ -1,8 +1,6 @@
---
title: "Getting started on CentOS"
---
{% include pagetoc.html %}
## Prerequisites
@ -21,25 +19,28 @@ The Kubernetes package provides a few services: kube-apiserver, kube-scheduler,
Hosts:
```
```conf
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
**Prepare the hosts:**
* Create virt7-testing repo on all hosts - centos-{master,minion} with the following information.
```
```conf
[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
gpgcheck=0
```
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
```shell
yum -y install --enablerepo=virt7-testing kubernetes
```
* Note: Using etcd-0.4.6-7 (this is a temporary update in the documentation)
If you do not get etcd-0.4.6-7 installed with virt7-testing repo,
@ -47,21 +48,24 @@ If you do not get etcd-0.4.6-7 installed with virt7-testing repo,
In the current virt7-testing repo, the etcd package is updated, which causes a service failure. To avoid this,
```shell
yum erase etcd
```
This will uninstall the currently available etcd package.
```shell
yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
yum -y install --enablerepo=virt7-testing kubernetes
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
* Edit `/etc/kubernetes/config` which will be the same on all hosts to contain:
```shell
# Comma separated list of nodes in the etcd cluster
@ -74,14 +78,16 @@ KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such:
@ -103,8 +109,9 @@ KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Start the appropriate services on master:
```shell
@ -112,13 +119,14 @@ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet and start the kubelet and proxy***
* Edit /etc/kubernetes/kubelet to appear as such:
* Edit `/etc/kubernetes/kubelet` to appear as such:
```shell
# The address for the info server to serve on
@ -134,8 +142,9 @@ KUBELET_HOSTNAME="--hostname-override=centos-minion"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
```
* Start the appropriate services on node (centos-minion).
```shell
@ -143,8 +152,9 @@ for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
@ -152,8 +162,9 @@ done
```shell
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
```
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/{{page.version}}/docs/user-guide/walkthrough/README)!

View File

@ -15,21 +15,24 @@ To get started, you need to checkout the code:
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.
First, you need to install some of the dependencies with
```shell
npm install
```
Now, all you need to do is:
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 Kubernetes master, and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
![VMs in Azure](/images/docs/initial_cluster.png)
@ -41,13 +44,15 @@ Once the creation of Azure VMs has finished, you should see the following:
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's login to the master node like so:
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
@ -56,20 +61,23 @@ Check there are 2 nodes in the cluster:
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```shell
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to get deployed; run the following and wait for `STATUS` to change from `Pending` to `Running`.
```shell
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
@ -81,8 +89,9 @@ frontend-4wahe 1/1 Running 0 4m
frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
@ -92,8 +101,9 @@ You will need to open another terminal window on your machine and go to the same
First, let's set the size of the new VMs:
```shell
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```shell
@ -109,8 +119,9 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-02',
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
@ -121,8 +132,9 @@ NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
@ -132,8 +144,9 @@ core@kube-00 ~ $ kubectl get rc
ONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```shell
@ -141,8 +154,9 @@ core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```shell
@ -150,8 +164,9 @@ core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
```shell
@ -160,13 +175,14 @@ NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
```
```shell
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
@ -181,8 +197,9 @@ data: Local port : 31605
data: Protcol : tcp
data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You then should be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
## Next steps
@ -196,8 +213,9 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)

View File

@ -19,23 +19,23 @@ To get started, you need to checkout the code:
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used Azure CLI, you should have it already.
First, you need to install some of the dependencies with
```shell
npm install
```
Now, all you need to do is:
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes: 1 kubernetes master and 2 kubernetes nodes.
The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to
ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
@ -50,14 +50,14 @@ azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's login to the master node like so:
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
@ -67,22 +67,22 @@ core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```shell
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to get deployed; run the following and wait for `STATUS` to change from `Pending` to `Running`.
```shell
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
@ -95,8 +95,8 @@ frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
@ -107,8 +107,8 @@ First, lets set the size of new VMs:
```shell
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```shell
@ -125,8 +125,8 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
@ -138,8 +138,8 @@ kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
@ -150,8 +150,8 @@ ONTROLLER CONTAINER(S) IMAGE(S) SELECTO
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```shell
@ -160,8 +160,8 @@ core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```shell
@ -170,8 +170,8 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECT
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
```shell
@ -181,13 +181,13 @@ frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
```
```shell
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
@ -203,8 +203,8 @@ data: Protcol : tcp
data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You then should be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
## Next steps
@ -219,11 +219,8 @@ If you don't wish care about the Azure bill, you can tear down the cluster. It's
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)

View File

@ -3,7 +3,8 @@ title: "Bare Metal CoreOS with Kubernetes and Project Calico"
---
This guide explains how to deploy a bare-metal Kubernetes cluster on CoreOS using [Calico networking](http://www.projectcalico.org).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config
@ -20,7 +21,8 @@ This guide will use [cloud-config](https://coreos.com/docs/cluster-management/se
For ease of distribution, the cloud-config files required for this demonstration can be found on [GitHub](https://github.com/projectcalico/calico-kubernetes-coreos-demo).
This repo includes two cloud config files:
- `master-config.yaml`: Cloud-config for the Kubernetes master
- `node-config.yaml`: Cloud-config for each Kubernetes compute host
@ -30,22 +32,22 @@ In the next few steps you will be asked to configure these files and host them o
To get the Kubernetes source, clone the GitHub repo, and build the binaries.
```
```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
./build/release.sh
```
Once the binaries are built, host the entire `<kubernetes>/_output/dockerized/bin/<OS>/<ARCHITECTURE>/` folder on an accessible HTTP server so they can be accessed by the cloud-config. You'll point your cloud-config files at this HTTP server later.
## Download CoreOS
Let's download the CoreOS bootable ISO. We'll use this image to boot and install CoreOS on each server.
```
```shell
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso
```
You can also download the ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
@ -54,12 +56,13 @@ Once you've downloaded the image, use it to boot your Kubernetes Master server.
Let's get the master-config.yaml and fill in the necessary variables. Run the following commands on your HTTP server to get the cloud-config files.
```
```shell
git clone https://github.com/Metaswitch/calico-kubernetes-demo.git
cd calico-kubernetes-demo/coreos
```
You'll need to replace the following variables in the `master-config.yaml` file to match your deployment.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_LOC>`: The address used to get the kubernetes binaries over HTTP.
@ -69,10 +72,10 @@ Host the modified `master-config.yaml` file and pull it on to your Kubernetes Ma
The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS to disk and configure the install using cloud-config. The following command will download and install stable CoreOS, using the master-config.yaml file for configuration.
```
```shell
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
Once complete, eject the bootable ISO and restart the server. When it comes back up, you should have SSH access as the `core` user using the public key provided in the master-config.yaml file.
## Configure the compute hosts
@ -83,7 +86,8 @@ First, boot up your node using the bootable ISO we downloaded earlier. You shou
Let's modify the `node-config.yaml` cloud-config file on your HTTP server. Make a copy for this node, and fill in the necessary variables.
You'll need to replace the following variables in the `node-config.yaml` file to match your deployment.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
@ -94,27 +98,23 @@ You'll need to replace the following variables in the `node-config.yaml` file to
Host the modified `node-config.yaml` file and pull it on to your Kubernetes node.
```
```shell
wget http://<http_server_ip>/node-config.yaml
```
Install and configure CoreOS on the node using the following command.
```
```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
Once complete, restart the server. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured. Once fully configured, you can check that the node is running with the following command on the Kubernetes master.
```
```shell
/home/core/kubectl get nodes
```
## Testing the Cluster
You should now have a functional bare-metal Kubernetes cluster with one master and two compute hosts.
Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/) to test out your new cluster!

View File

@ -24,57 +24,67 @@ whether for testing a POC before the real deal, or you are restricted to be tota
## This Guide's variables
```shell
| Node Description | MAC | IP |
| :---------------------------- | :---------------: | :---------: |
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
```
## Setup PXELINUX CentOS
To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server). This section is the abbreviated version.
1. Install packages needed on CentOS
```shell
sudo yum install tftp-server dhcp syslinux
```
2. `vi /etc/xinetd.d/tftp` to enable tftp service and change disable to 'no'
```conf
disable = no
```
3. Copy over the syslinux images we will need.
```shell
su -
mkdir -p /tftpboot
cd /tftpboot
cp /usr/share/syslinux/pxelinux.0 /tftpboot
cp /usr/share/syslinux/menu.c32 /tftpboot
cp /usr/share/syslinux/memdisk /tftpboot
cp /usr/share/syslinux/mboot.c32 /tftpboot
cp /usr/share/syslinux/chain.c32 /tftpboot
/sbin/service dhcpd start
/sbin/service xinetd start
/sbin/chkconfig tftp on
```
4. Setup default boot menu
```shell
mkdir /tftpboot/pxelinux.cfg
touch /tftpboot/pxelinux.cfg/default
```
5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`
```conf
default menu.c32
prompt 0
timeout 15
ONTIMEOUT local
display boot.msg
MENU TITLE Main Menu
LABEL local
MENU LABEL Boot local hard drive
LOCALBOOT 0
```
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers.
@ -87,42 +97,46 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex
2. Once we know and have our tftp root directory we will create a new directory structure for our CoreOS images.
3. Download the CoreOS PXE files provided by the CoreOS team.
```shell
MY_TFTPROOT_DIR=/tftpboot
mkdir -p $MY_TFTPROOT_DIR/images/coreos/
cd $MY_TFTPROOT_DIR/images/coreos/
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
gpg --verify coreos_production_pxe.vmlinuz.sig
gpg --verify coreos_production_pxe_image.cpio.gz.sig
```
4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again
```conf
default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
display boot.msg
MENU TITLE Main Menu
LABEL local
MENU LABEL Boot local hard drive
LOCALBOOT 0
MENU BEGIN CoreOS Menu
LABEL coreos-master
MENU LABEL CoreOS Master
KERNEL images/coreos/coreos_production_pxe.vmlinuz
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-single-master.yml
LABEL coreos-slave
MENU LABEL CoreOS Slave
KERNEL images/coreos/coreos_production_pxe.vmlinuz
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
MENU END
```
This configuration file will now boot from local drive but have the option to PXE image CoreOS.
@ -132,40 +146,44 @@ This section covers configuring the DHCP server to hand out our new images. In t
1. Add the `filename` to the _host_ or _subnet_ sections.
```conf
filename "/tftpboot/pxelinux.0";
```
2. At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments.
```conf
subnet 10.20.30.0 netmask 255.255.255.0 {
next-server 10.20.30.242;
option broadcast-address 10.20.30.255;
filename "<other default image>";
...
# http://www.syslinux.org/wiki/index.php/PXELINUX
host core_os_master {
hardware ethernet d0:00:67:13:0d:00;
option routers 10.20.30.1;
fixed-address 10.20.30.40;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave {
hardware ethernet d0:00:67:13:0d:01;
option routers 10.20.30.1;
fixed-address 10.20.30.41;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
host core_os_slave2 {
hardware ethernet d0:00:67:13:0d:02;
option routers 10.20.30.1;
fixed-address 10.20.30.42;
option domain-name-servers 10.20.30.242;
filename "/pxelinux.0";
}
...
}
```
We will be specifying the node configuration later in the guide.
@ -185,19 +203,21 @@ To get this up and running we are going to setup a simple `apache` server to ser
This is on the PXE server from the previous section:
```shell
rm /etc/httpd/conf.d/welcome.conf
cd /var/www/html/
wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
```
This sets up the binaries we need to run Kubernetes. This would need to be enhanced to download from the Internet for updates in the future.
@ -437,6 +457,7 @@ coreos:
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
```
### node.yml
On the PXE server make and fill in the variables `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.
@ -572,33 +593,38 @@ coreos:
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
```
## New pxelinux.cfg file
Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`
```conf
default coreos
prompt 1
timeout 15
display boot.msg
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
```
And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`
default coreos
prompt 1
timeout 15
```conf
default coreos
prompt 1
timeout 15
display boot.msg
display boot.msg
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
label coreos
menu default
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
```
## Specify the pxelinux targets
@ -606,11 +632,12 @@ Now that we have our new targets setup for master and slave we want to configure
Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
cd /tftpboot/pxelinux.cfg
ln -s coreos-node-master 01-d0-00-67-13-0d-00
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
```shell
cd /tftpboot/pxelinux.cfg
ln -s coreos-node-master 01-d0-00-67-13-0d-00
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
```
Reboot these servers to get the images PXEd and ready for running containers!
@ -626,30 +653,41 @@ For more complete applications, please look in the [examples directory](https://
List all keys in etcd:
etcdctl ls --recursive
```shell
etcdctl ls --recursive
```
List fleet machines:
fleetctl list-machines
```shell
fleetctl list-machines
```
Check system status of services on master:
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status kube-register
```shell
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status kube-register
```
Check system status of services on a node:
systemctl status kube-kubelet
systemctl status docker.service
```shell
systemctl status kube-kubelet
systemctl status docker.service
```
List Kubernetes pods and nodes:
kubectl get pods
kubectl get nodes
```shell
kubectl get pods
kubectl get nodes
```
Kill all pods:
for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
```shell
for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
```
View File
@ -25,8 +25,8 @@ aws ec2 create-security-group --group-name kubernetes --description "Kubernetes
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
```shell
aws ec2 run-instances \
--image-id <ami_image_id> \
@ -35,14 +35,14 @@ aws ec2 run-instances \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://master.yaml
```
#### Capture the private IP address
```shell
aws ec2 describe-instances --instance-id <master-instance-id>
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
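If you prefer not to edit the file by hand, a quick in-place substitution with sed also works (the IP below is only an example, and `-i` as written assumes GNU sed):

```shell
sed -i 's/<master-private-ip>/10.0.0.5/g' node.yaml
```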
@ -58,8 +58,8 @@ aws ec2 run-instances \
--security-groups kubernetes \
--instance-type m3.medium \
--user-data file://node.yaml
```
### Google Compute Engine (GCE)
*Attention:* Replace `<gce_image_id>` below for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
@ -74,14 +74,14 @@ gcloud compute instances create master \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=master.yaml
```
#### Capture the private IP address
```shell
gcloud compute instances list
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
@ -96,8 +96,8 @@ gcloud compute instances create node1 \
--machine-type n1-standard-1 \
--zone us-central1-a \
--metadata-from-file user-data=node.yaml
```
#### Establish network connectivity
Next, set up an ssh tunnel to the master so you can run kubectl from your local host.
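A sketch of what such a tunnel might look like (the `core` user is the CoreOS default; the master address is a placeholder, and port 8080 matches the apiserver port used elsewhere in this guide):

```shell
# forward local port 8080 to the apiserver on the master, in the background
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
kubectl -s http://localhost:8080 get nodes
```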
@ -119,14 +119,14 @@ OS_PASSWORD
OS_AUTH_URL
OS_USERNAME
OS_TENANT_NAME
```
Test that this works with something like:
```
```shell
nova list
```
#### Get a Suitable CoreOS Image
You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack)
@ -137,16 +137,16 @@ glance image-create --name CoreOS723 \
--container-format bare --disk-format qcow2 \
--file coreos_production_openstack_image.img \
--is-public True
```
#### Create security group
```shell
nova secgroup-create kubernetes "Kubernetes Security Group"
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
```
#### Provision the Master
```shell
@ -157,39 +157,35 @@ nova boot \
--security-group kubernetes \
--user-data files/master.yaml \
kube-master
```
```<image_name>```
is the CoreOS image name. In our example we can use the image we created in the previous step and put in 'CoreOS723'
```<my_key>```
is the keypair name that you already generated to access the instance.
`<image_name>` is the CoreOS image name. In our example we can use the image we created in the previous step, 'CoreOS723'.
```<flavor_id>```
is the flavor ID you use to size the instance. Run ```nova flavor-list```
`<my_key>` is the keypair name that you already generated to access the instance.
`<flavor_id>` is the flavor ID you use to size the instance. Run `nova flavor-list`
to get the IDs. On the system this was tested with, flavor ID 3 gives the m1.large size.
The important part is to ensure you have the files/master.yml, as this is what will do all the post-boot configuration. This path is relative, so this example assumes that you are running the nova command in a directory with a subdirectory called files that contains the master.yml file. Absolute paths also work.
Next, assign it a public IP address:
```
```shell
nova floating-ip-list
```
Get an IP address that's free and run:
```
```shell
nova floating-ip-associate kube-master <ip address>
```
where ```<ip address>```
is the IP address that was available from the ```nova floating-ip-list```
...where `<ip address>` is the IP address that was available from the `nova floating-ip-list`
command.
#### Provision Worker Nodes
Edit ```node.yaml```
Edit `node.yaml`
and replace all instances of `<master-private-ip>`
with the private IP address of the master node. You can get this by running `nova show kube-master`,
assuming you named your instance kube-master. This is not the floating IP address you just assigned it.
@ -202,8 +198,6 @@ nova boot \
--security-group kubernetes \
--user-data files/node.yaml \
minion01
```
This is basically the same as the master nodes but with the node.yaml post-boot script instead of the master.
This is basically the same as the master nodes but with the node.yaml post-boot script instead of the master.
View File
@ -13,7 +13,7 @@ First of all, download the template dns rc and svc file from
Then you need to set the `DNS_REPLICAS`, `DNS_DOMAIN`, `DNS_SERVER_IP`, and `KUBE_SERVER` environment variables.
```
```shell
$ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-domain` for containerized kubelet
@ -21,25 +21,25 @@ $ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-doma
$ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns` for containerized kubelet
$ export KUBE_SERVER=10.10.103.250 # your master server ip, you may change it
```
### Replace the corresponding value in the template.
```
```shell
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{kube_server_url}/${KUBE_SERVER}/g;" skydns-rc.yaml.in > ./skydns-rc.yaml
$ sed -e "s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns-svc.yaml.in > ./skydns-svc.yaml
```
### Use `kubectl` to create skydns rc and service
```
```shell
$ kubectl -s "$KUBE_SERVER:8080" --namespace=kube-system create -f ./skydns-rc.yaml
$ kubectl -s "$KUBE_SERVER:8080" --namespace=kube-system create -f ./skydns-svc.yaml
```
### Test if DNS works
Follow [this link](https://releases.k8s.io/release-1.1/cluster/addons/dns#how-do-i-test-if-it-is-working) to check it out.
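As a rough spot check, run from inside any container on the cluster network (the domain and server IP are the values exported above), you might try:

```shell
nslookup kubernetes.default.svc.cluster.local 10.0.0.10
```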
View File
@ -24,8 +24,8 @@ Run:
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
@ -37,14 +37,14 @@ Run:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.12 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our Pods of containers.
@ -59,14 +59,14 @@ Turning down Docker is system dependent, it may be:
```shell
sudo /etc/init.d/docker stop
```
or
```shell
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
@ -75,16 +75,16 @@ Now run flanneld itself:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0
```
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
@ -95,8 +95,8 @@ Regardless, you need to add the following to the docker command line:
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
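For example, on a Debian/Ubuntu-style system the values flannel wrote to `/run/flannel/subnet.env` could be wired into the Docker defaults roughly like this (the `/etc/default/docker` location and the `DOCKER_OPTS` variable name are assumptions that vary by distro):

```shell
# load the subnet and MTU that flannel computed, then append them to the Docker daemon options
source /run/flannel/subnet.env
echo "DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" | sudo tee -a /etc/default/docker
```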
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
@ -104,8 +104,8 @@ Docker creates a bridge named `docker0` by default. You need to remove this:
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker
@ -114,14 +114,14 @@ Again this is system dependent, it may be:
```shell
sudo /etc/init.d/docker start
```
it may be:
```shell
systemctl start docker
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
@ -139,16 +139,16 @@ sudo docker run \
--pid=host \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests-multi --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to drop them if DNS is not needed.
### Also run the service proxy
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
@ -161,22 +161,19 @@ List the nodes
```shell
kubectl get nodes
```
This should print:
```shell
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are running successfully.
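One way to do that (the bootstrap socket path is the one used earlier in this guide) is to list the containers under both Docker daemons:

```shell
sudo docker ps                                           # main daemon: kubelet, proxy, master components
sudo docker -H unix:///var/run/docker-bootstrap.sock ps  # bootstrap daemon: etcd and flannel
```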
If all else fails, ask questions on [Slack](../../troubleshooting.html#slack).
### Next steps
Move on to [adding one or more workers](worker) or [deploy a dns](deployDNS)
Move on to [adding one or more workers](worker) or [deploying DNS](deployDNS).
View File
@ -5,50 +5,50 @@ To validate that your node(s) have been added, run:
```shell
kubectl get nodes
```
That should show something like:
```shell
NAME LABELS STATUS
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If the status of any node is `Unknown` or `NotReady` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on [Slack](../../troubleshooting.html#slack).
### Run an application
```shell
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
### Expose it as a service
```shell
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP.
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (`CLUSTER_IP`), and the second one is the external load-balanced IP.
```shell
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```shell
kubectl get svc nginx --template={{.spec.clusterIP}}
{% raw %}kubectl get svc nginx --template={{.spec.clusterIP}}{% endraw %}
```
Hit the webserver with the first IP (CLUSTER_IP):
```shell
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### Scaling
@ -57,15 +57,12 @@ Now try to scale up the nginx you created before:
```shell
kubectl scale rc nginx --replicas=3
```
And list the pods:
```shell
kubectl get pods
```
You should see pods landing on the newly added machine.
You should see pods landing on the newly added machine.
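If you want to see which node each pod landed on, and your kubectl version supports the wide output format, something like this works:

```shell
kubectl get pods -o wide
```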
View File
@ -6,9 +6,10 @@ You need to repeat these instructions for each node you want to join the cluster
We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master).
For each worker node, there are three steps:
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
@ -27,8 +28,8 @@ Run:
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
@ -41,14 +42,14 @@ Turning down Docker is system dependent, it may be:
```shell
sudo /etc/init.d/docker stop
```
or
```shell
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
@ -57,16 +58,16 @@ Now run flanneld itself, this call is slightly different from the above, since w
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
```
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
@ -77,8 +78,8 @@ Regardless, you need to add the following to the docker command line:
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
@ -86,8 +87,8 @@ Docker creates a bridge named `docker0` by default. You need to remove this:
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker
@ -96,14 +97,14 @@ Again this is system dependent, it may be:
```shell
sudo /etc/init.d/docker start
```
it may be:
```shell
systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
@ -123,19 +124,16 @@ sudo docker run \
--pid=host \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=$(hostname -i) --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
```
### Next steps
Move on to [testing your cluster](testing) or [add another node](#adding-a-kubernetes-worker-node-via-docker)
Move on to [testing your cluster](testing) or [add another node](#adding-a-kubernetes-worker-node-via-docker).
View File
@ -13,6 +13,7 @@ This guide will walk you through the process of getting a Kubernetes Fedora clus
It will cover the installation and configuration of the following systemd processes on the following hosts:
Kubernetes Master:
- `kube-apiserver`
- `kube-controller-manager`
- `kube-scheduler`
@ -21,6 +22,7 @@ Kubernetes Master:
- `calico-node`
Kubernetes Node:
- `kubelet`
- `kube-proxy`
- `docker`
@ -44,12 +46,12 @@ Digital Ocean private networking configures a private network on eth1 for each h
so that all hosts in the cluster can hostname-resolve one another to this interface. **It is important that the hostname resolves to this interface instead of eth0, as
all Kubernetes and Calico services will be running on it.**
```
```shell
echo "10.134.251.56 kube-master" >> /etc/hosts
echo "10.134.251.55 kube-node-1" >> /etc/hosts
```
>Make sure that communication works between kube-master and each kube-node by using a utility such as ping.
> Make sure that communication works between kube-master and each kube-node by using a utility such as ping.
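A quick sanity check (hostnames and addresses are the ones used in this guide):

```shell
getent hosts kube-master   # should print the eth1 (private) address added above
ping -c 2 kube-node-1
```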
## Setup Master
@ -57,36 +59,36 @@ echo "10.134.251.55 kube-node-1" >> /etc/hosts
* Both Calico and Kubernetes use etcd as their datastore. We will run etcd on Master and point all Kubernetes and Calico services at it.
```
```shell
yum -y install etcd
```
* Edit `/etc/etcd/etcd.conf`
```
```conf
ETCD_LISTEN_CLIENT_URLS="http://kube-master:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://kube-master:4001"
```
### Install Kubernetes
* Run the following command on Master to install the latest Kubernetes (as well as docker):
```
```shell
yum -y install kubernetes
```
* Edit `/etc/kubernetes/config `
```
```conf
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```
* Edit `/etc/kubernetes/apiserver`
```
```conf
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
@ -94,40 +96,40 @@ KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"
# Remove ServiceAccount from this line to run without API Tokens
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```
* Create /var/run/kubernetes on master:
```
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```
```shell
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICE
systemctl enable $SERVICE
systemctl status $SERVICE
done
```
### Install Calico
Next, we'll launch Calico on Master to allow communication between Pods and any services running on the Master.
* Install calicoctl, the calico configuration tool.
```
```shell
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```
* Create `/etc/systemd/system/calico-node.service`
```
```conf
[Unit]
Description=calicoctl node
Requires=docker.service
@ -142,17 +144,17 @@ ExecStart=/usr/bin/calicoctl node --ip=10.134.251.56 --detach=false
[Install]
WantedBy=multi-user.target
```
> Be sure to replace `--ip=10.134.251.56` with your Master's eth1 IP address.
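If you are unsure of that address, one way to look it up (assuming the private interface really is named eth1, as on Digital Ocean):

```shell
ip -4 addr show dev eth1 | grep inet
```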
* Start Calico
```
```shell
systemctl enable calico-node.service
systemctl start calico-node.service
```
> Starting Calico for the first time may take a few minutes as the calico-node docker image is downloaded.
## Setup Node
@ -164,57 +166,57 @@ In order to set our own address range, we will create a new virtual interface ca
* Add a virtual interface by creating `/etc/sysconfig/network-scripts/ifcfg-cbr0`:
```
```conf
DEVICE=cbr0
TYPE=Bridge
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
```
> **Note for Multi-Node Clusters:** Each node should be assigned an IP address on a unique subnet. In this example, node-1 is using 192.168.1.1/24,
so node-2 should be assigned another pool on the 192.168.x.0/24 subnet, e.g. 192.168.2.1/24.
* Ensure that your system has bridge-utils installed. Then, restart the networking daemon to activate the new interface.
```
```shell
systemctl restart network.service
```
### Install Docker
* Install Docker
```
```shell
yum -y install docker
```
* Configure docker to run on `cbr0` by editing `/etc/sysconfig/docker-network`:
```
```conf
DOCKER_NETWORK_OPTIONS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
* Start docker
```
```shell
systemctl start docker
```
### Install Calico
* Install calicoctl, the calico configuration tool.
```
```shell
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```
* Create `/etc/systemd/system/calico-node.service`
```
```conf
[Unit]
Description=calicoctl node
Requires=docker.service
@ -229,48 +231,48 @@ ExecStart=/usr/bin/calicoctl node --ip=10.134.251.55 --detach=false --kubernetes
[Install]
WantedBy=multi-user.target
```
> Note: You must replace the IP address with your node's eth1 IP Address!
* Start Calico
```
```shell
systemctl enable calico-node.service
systemctl start calico-node.service
```
* Configure the IP Address Pool
Most Kubernetes application deployments will require communication between Pods and the kube-apiserver on Master. On a standard Digital
Ocean Private Network, requests sent from Pods to the kube-apiserver will not be returned as the networking fabric will drop response packets
destined for any 192.168.0.0/16 address. To resolve this, you can have calicoctl add a masquerade rule to all outgoing traffic on the node:
```
```shell
ETCD_AUTHORITY=kube-master:4001 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```
### Install Kubernetes
* First, install Kubernetes.
```
```shell
yum -y install kubernetes
```
* Edit `/etc/kubernetes/config`
```
```conf
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```
* Edit `/etc/kubernetes/kubelet`
We'll pass in an extra parameter, `--network-plugin=calico`, to tell the kubelet to use the Calico networking plugin. Additionally, we'll add two
environment variables that will be used by the Calico networking plugin.
```
```shell
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
@ -286,25 +288,24 @@ KUBELET_ARGS="--network-plugin=calico"
# The following are variables which the kubelet will pass to the calico-networking plugin
ETCD_AUTHORITY="kube-master:4001"
KUBE_API_ROOT="http://kube-master:8080/api/v1"
```
* Start Kubernetes on the node.
```
```shell
for SERVICE in kube-proxy kubelet; do
systemctl restart $SERVICE
systemctl enable $SERVICE
systemctl status $SERVICE
done
```
## Check Running Cluster
The cluster should be running! Check that your nodes are reporting as such:
```
```shell
kubectl get nodes
NAME LABELS STATUS
kube-node-1 kubernetes.io/hostname=kube-node-1 Ready
```
```
View File
@ -1,11 +1,8 @@
---
title: "Configuring Kubernetes on Fedora via Ansible"
---
Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
{% include pagetoc.html %}
## Prerequisites
@ -23,8 +20,9 @@ A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a c
```shell
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
node2 = kube-node-02.example.com
```
**Make sure your local machine has**
- ansible (must be 1.9.0+)
@ -34,14 +32,16 @@ master,etcd = kube-master.example.com
If not
```shell
yum install -y ansible git python-netaddr
yum install -y ansible git python-netaddr
```
**Now clone down the Kubernetes repository**
```shell
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
cd contrib/ansible
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
@ -55,8 +55,9 @@ kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
kube-node-02.example.com
```
## Setting up ansible access to your nodes
If you are already running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yaml` to the username which you use to ssh to the nodes (e.g. `fedora`), and proceed to the next step...
@ -66,8 +67,9 @@ If you already are running on a machine which has passwordless ssh access to the
edit: ~/contrib/ansible/group_vars/all.yml
```yaml
ansible_ssh_user: root
ansible_ssh_user: root
```
**Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
@ -75,35 +77,41 @@ If you already have ssh access to every machine using ssh public keys you may sk
Make sure your local machine (root) has an ssh key pair if not
```shell
ssh-keygen
ssh-keygen
```
Copy the ssh public key to **all** nodes in the cluster
```shell
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
done
```
## Setting up the cluster
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough; if not, change them as needed.
edit: ~/contrib/ansible/group_vars/all.yml
```conf
edit: ~/contrib/ansible/group_vars/all.yml
```
**Configure access to kubernetes packages**
Modify `source_type` as below to access kubernetes packages through the package manager.
```yaml
source_type: packageManager
source_type: packageManager
```
**Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
kube_service_addresses: 10.254.0.0/16
```
**Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
@ -114,18 +122,21 @@ Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defa
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml
cluster_logging: true
cluster_logging: true
```
Set `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with Heapster and InfluxDB.
```yaml
cluster_monitoring: true
cluster_monitoring: true
```
Set `dns_setup` to true (recommended) or false to enable or disable the whole DNS configuration.
```yaml
dns_setup: true
dns_setup: true
```
**Tell ansible to get to work!**
This will finally setup your whole Kubernetes cluster for you.
@ -133,8 +144,9 @@ This will finally setup your whole Kubernetes cluster for you.
```shell
cd ~/contrib/ansible/
./setup.sh
./setup.sh
```
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
@ -144,13 +156,15 @@ That's all there is to it. It's really that easy. At this point you should hav
Run the following on the kube-master:
```shell
kubectl get nodes
kubectl get nodes
```
**Show services running on masters and nodes**
```shell
systemctl | grep -i kube
systemctl | grep -i kube
```
**Show firewall rules on the masters and nodes**
```shell
@ -182,25 +196,30 @@ iptables -nvL
}
]
}
}
}
```
```shell
kubectl create -f /tmp/apache.json
kubectl create -f /tmp/apache.json
```
**Check where the pod was created**
```shell
kubectl get pods
kubectl get pods
```
**Check Docker status on nodes**
```shell
docker ps
docker images
docker images
```
**After the pod is 'Running' Check web server access on the node**
```shell
curl http://localhost
curl http://localhost
```
That's it!
View File
@ -24,11 +24,11 @@ that _etcd_ and Kubernetes master run on the same host). The remaining host, fe
Hosts:
```
```conf
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master.
@ -43,21 +43,21 @@ fed-node = 192.168.121.65
```shell
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```shell
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```shell
@ -72,15 +72,15 @@ KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else.
@ -98,22 +98,22 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of only 127.0.0.1; otherwise, you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
* Create /var/run/kubernetes on master:
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```shell
@ -122,8 +122,8 @@ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Addition of nodes:
* Create following node.json file on Kubernetes master node:
@ -140,8 +140,8 @@ done
"externalID": "fed-node"
}
}
```
Now create a node object internally in your Kubernetes cluster by running:
```shell
@ -150,8 +150,8 @@ $ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
@ -179,8 +179,8 @@ KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```shell
@ -189,29 +189,26 @@ for SERVICES in kube-proxy kubelet docker; do
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure that the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```shell
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (please do not actually do it; it is shown only for information):
```shell
kubectl delete -f ./node.json
```
*You should be finished!*
**The cluster should be running! Launch a test pod.**
You should have a functional cluster; check out [101](/{{page.version}}/docs/user-guide/walkthrough/README)!
View File
@ -1,23 +1,19 @@
---
title: "Kubernetes multiple nodes cluster with flannel on Fedora"
---
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
{% include pagetoc.html %}
## Introduction
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
## Prerequisites
1. You need 2 or more machines with Fedora installed.
You need 2 or more machines with Fedora installed.
## Master Setup
**Perform following commands on the Kubernetes master**
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the JSON are:
```json
{
@ -27,25 +23,28 @@ This document describes how to deploy Kubernetes on multiple hosts to set up a m
"Type": "vxlan",
"VNI": 1
}
}
}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
* Add the configuration to the etcd server on fed-master.
Add the configuration to the etcd server on fed-master.
```shell
etcdctl set /coreos.com/network/config < flannel-config.json
etcdctl set /coreos.com/network/config < flannel-config.json
```
* Verify the key exists in the etcd server on fed-master.
```shell
etcdctl get /coreos.com/network/config
etcdctl get /coreos.com/network/config
```
## Node Setup
**Perform following commands on all Kubernetes nodes**
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```shell
# Flanneld configuration options
@ -58,46 +57,51 @@ FLANNEL_ETCD="http://fed-master:4001"
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
FLANNEL_OPTIONS=""
```
**Note:** By default, flannel uses the interface of the default route. If you have multiple interfaces and would like to use one other than the default-route interface, you can add "-iface=" to FLANNEL_OPTIONS, as sketched below. For additional options, run `flanneld --help` on the command line.
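For example, a hypothetical `/etc/sysconfig/flanneld` tweak pinning flannel to eth1 might look like this (the interface name is only an illustration):

```shell
# Any additional options that you want to pass
FLANNEL_OPTIONS="-iface=eth1"
```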
* Enable the flannel service.
Enable the flannel service.
```shell
systemctl enable flanneld
systemctl enable flanneld
```
* If docker is not running, then starting flannel service is enough and skip the next step.
If docker is not running, then starting the flannel service is enough, and you can skip the next step.
```shell
systemctl start flanneld
systemctl start flanneld
```
* If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
```shell
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
systemctl start docker
```
***
## **Test the cluster and flannel configuration**
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the IP addresses of the docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```shell
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0
inet 18.16.29.1/24 scope global docker0
```
* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
```shell
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
```
```json
{
"node": {
@ -115,48 +119,54 @@ curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjso
"value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
}
}
}
}
```
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```shell
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
FLANNEL_IPMASQ=false
```
* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
* Issue the following commands on any 2 nodes:
Issue the following commands on any 2 nodes:
```shell
# docker run -it fedora:latest bash
bash-4.3#
bash-4.3#
```
* This will place you inside the container. Install iproute and iputils packages to install ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify capabilities of ping binary to work around "Operation not permitted" error.
This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you need to modify the capabilities of the ping binary to work around the "Operation not permitted" error.
```shell
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```
* Now note the IP address on the first node:
Now note the IP address on the first node:
```shell
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
inet 18.16.29.4/24 scope global eth0
```
* And also note the IP address on the other node:
And also note the IP address on the other node:
```shell
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
inet 18.16.90.4/24 scope global eth0
```
* Now ping from the first node to the other node:
Now ping from the first node to the other node:
```shell
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```
* Now Kubernetes multi-node cluster is set up with overlay networking set up by flannel.
The Kubernetes multi-node cluster is now set up, with overlay networking provided by flannel.
View File
@ -97,7 +97,8 @@ If your `$HOME` is world readable, everything is fine. If your $HOME is private,
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```
In order to fix that issue, you have several possibilities:
In order to fix that issue, you have several possibilities:
* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
* backed by a filesystem with a lot of free disk space
* writable by your user;
View File
@ -1,8 +1,6 @@
---
title: "Getting started locally"
---
--
{% include pagetoc.html %}
### Requirements
View File
@ -5,8 +5,6 @@ title: "Getting Started With Kubernetes on Mesos on Docker"
The mesos/docker provider uses docker-compose to launch Kubernetes as a Mesos framework, running in docker with its
dependencies (etcd & mesos).
{% include pagetoc.html %}
## Cluster Goals
View File
@ -1,15 +1,8 @@
---
title: "Getting started with Kubernetes on Mesos"
---
Getting started with Kubernetes on Mesos
----------------------------------------
{% include pagetoc.html %}
## About Kubernetes on Mesos
<!-- TODO: Update, clean up. -->
View File
@ -1,9 +1,6 @@
---
title: "Getting started on Rackspace"
---
## Introduction
* Supported Version: v0.18.1
In general, the dev-build-and-up.sh workflow for Rackspace is similar to that for Google Compute Engine. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
@ -13,11 +10,10 @@ These scripts should be used to deploy development environments for Kubernetes.
NOTE: The rackspace scripts do NOT rely on `saltstack` and instead rely on cloud-init for configuration.
The current cluster design is inspired by:
- [corekube](https://github.com/metral/corekube)
- [Angus Lees](https://github.com/anguslees/kube-openstack)
{% include pagetoc.html %}
## Prerequisites
View File
@ -26,14 +26,14 @@ set these flags:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
```
Then we can launch the local cluster using the script:
```shell
$ hack/local-up-cluster.sh
```
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
@ -43,20 +43,20 @@ $ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note that we are still working on making all of the containerized master components run smoothly in rkt. Until then, the master node cannot yet be run with rkt.
### CoreOS cluster on AWS
@ -67,26 +67,26 @@ To use rkt as the container runtime for your CoreOS cluster on AWS, you need to
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
```shell
$ export COREOS_CHANNEL=stable
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
@ -121,20 +121,18 @@ using `journalctl`:
```shell
$ sudo journalctl -u $SERVICE_FILE
```
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
##### Check the log of the container in the pod:
```shell
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
##### Check Kubernetes events, logs.
Besides above tricks, Kubernetes also provides us handy tools for debugging the pods. More information can be found [here](/{{page.version}}/docs/user-guide/application-troubleshooting)
Besides the tricks above, Kubernetes also provides handy tools for debugging pods. More information can be found [here](/{{page.version}}/docs/user-guide/application-troubleshooting).
View File
@ -26,14 +26,14 @@ set these flags:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
```
Then we can launch the local cluster using the script:
```shell
$ hack/local-up-cluster.sh
```
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
@ -43,20 +43,20 @@ $ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note that we are still working on making all of the containerized master components run smoothly in rkt. Until then, the master node cannot yet be run with rkt.
### CoreOS cluster on AWS
@ -67,26 +67,26 @@ To use rkt as the container runtime for your CoreOS cluster on AWS, you need to
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
```shell
$ export COREOS_CHANNEL=stable
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
@ -121,20 +121,18 @@ using `journalctl`:
```shell
$ sudo journalctl -u $SERVICE_FILE
```
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
##### Check the log of the container in the pod:
```shell
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
##### Check Kubernetes events, logs.
Besides above tricks, Kubernetes also provides us handy tools for debugging the pods. More information can be found [here](/{{page.version}}/docs/user-guide/application-troubleshooting)
Besides the tricks above, Kubernetes also provides handy tools for debugging pods. More information can be found [here](/{{page.version}}/docs/user-guide/application-troubleshooting).
View File
@ -1,7 +1,6 @@
---
title: "Creating a Custom Cluster From Scratch"
---
This guide is for people who want to craft a custom Kubernetes cluster. If you
can find an existing Getting Started Guide that meets your needs on [this
list](/{{page.version}}/docs/getting-started-guides/README/), then we recommend using it, as you will be able to benefit
@ -14,8 +13,6 @@ pre-defined guides.
This guide is also useful for those wanting to understand at a high level some of the
steps that existing cluster setup scripts are making.
{% include pagetoc.html %}
## Designing and Preparing
@ -61,7 +58,8 @@ need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
approach is to allocate a different block of IPs to each node in the cluster as
the node is added. A process in one pod should be able to communicate with
another pod using the IP of the second pod. This connectivity can be
accomplished in two ways:
accomplished in two ways:
- Configure network to route Pod IPs
- Harder to setup from scratch.
- Google Compute Engine ([GCE](gce)) and [AWS](aws) guides use this approach.
@ -78,7 +76,8 @@ accomplished in two ways:
- Does not require "Routes" portion of Cloud Provider module.
- Reduced performance (exactly how much depends on your solution).
You need to select an address range for the Pod IPs.
You need to select an address range for the Pod IPs.
- Various approaches:
- GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
Kubernetes cluster from that space, which leaves room for several clusters.
@ -113,7 +112,8 @@ Also, you need to pick a static IP for master node.
### Cluster Naming
You should pick a name for your cluster. Pick a short name for each cluster
which is unique from future cluster names. This will be used in several ways:
which is unique from future cluster names. This will be used in several ways:
- by kubectl to distinguish between various clusters you have access to. You will probably want a
second one sometime later, such as for testing new Kubernetes releases, running in a different
region of the world, etc.
@ -122,7 +122,8 @@ region of the world, etc.
### Software Binaries
You will need binaries for:
You will need binaries for:
- etcd
- A container runner, one of:
- docker
@ -151,7 +152,8 @@ You will run docker, kubelet, and kube-proxy outside of a container, the same wa
you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
we recommend that you run these as containers, so you need an image to be built.
You have several choices for Kubernetes images:
You have several choices for Kubernetes images:
- Use images hosted on Google Container Registry (GCR):
- e.g `gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest
release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
@ -166,7 +168,8 @@ You have several choices for Kubernetes images:
- You can verify if the image is loaded successfully with the right repository and tag using
command like `docker images`
For etcd, you can:
For etcd, you can:
- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.0.12`
- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.0`
- Use etcd binary included in your OS distro.
@ -177,13 +180,15 @@ We recommend that you use the etcd version which is provided in the Kubernetes b
were tested extensively with this version of etcd and not with any other version.
The recommended version number can also be found as the value of `ETCD_VERSION` in `kubernetes/cluster/images/etcd/Makefile`.
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
- `HYPERKUBE_IMAGE=gcr.io/google_containers/hyperkube:$TAG`
- `ETCD_IMAGE=gcr.io/google_containers/etcd:$ETCD_VERSION`
### Security Models
There are two main options for security:
There are two main options for security:
- Access the apiserver using HTTP.
- Use a firewall for security.
- This is easier to setup.
@ -196,17 +201,20 @@ If following the HTTPS approach, you will need to prepare certs and credentials.
#### Preparing Certs
You need to prepare several certs:
You need to prepare several certs:
- The master needs a cert to act as an HTTPS server.
- The kubelets optionally need certs to identify themselves as clients of the master, and when
serving its own API over HTTPS.
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs.
- see function `create-certs` in `cluster/gce/util.sh`
- see also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh` and
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs:
- See function `create-certs` in `cluster/gce/util.sh`
- See also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh` and
`cluster/saltbase/salt/generate-cert/make-cert.sh`
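For experimentation only, a minimal self-signed sketch with openssl might look like the following (file names and subjects are illustrative, and `MASTER_IP` stands for the static master IP chosen earlier; the scripts referenced above are the tested path). The resulting files correspond to the `CA_CERT` and `MASTER_CERT` variables listed below.

```shell
# root CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
# apiserver cert signed by that CA
openssl genrsa -out master.key 2048
openssl req -new -key master.key -subj "/CN=${MASTER_IP}" -out master.csr
openssl x509 -req -in master.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out master.crt -days 365
```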
You will end up with the following files (we will use these variables later on)
You will end up with the following files (we will use these variables later on):
- `CA_CERT`
- put in on node where apiserver runs, in e.g. `/srv/kubernetes/ca.crt`.
- `MASTER_CERT`
@ -221,7 +229,8 @@ You will end up with the following files (we will use these variables later on)
#### Preparing Credentials
The admin user (and any users) needs:
- a token or a password to identify them.
- tokens are just long alphanumeric strings, e.g. 32 chars. See
- `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)`
@ -233,7 +242,8 @@ The format for this file is described in the [authentication documentation](/{{p
For distributing credentials to clients, the convention in Kubernetes is to put the credentials
into a [kubeconfig file](/{{page.version}}/docs/user-guide/kubeconfig-file).
The kubeconfig file for the administrator can be created as follows:
- If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started
Guide), you will already have a `$HOME/.kube/config` file.
- You need to add certs, keys, and the master IP to the kubeconfig file:
@ -247,7 +257,8 @@ The kubeconfig file for the administrator can be created as follows:
- `kubectl config use-context $CONTEXT_NAME`
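Putting the steps above together, a sketch of the admin kubeconfig setup looks roughly like this (variable names follow the certs and credentials prepared earlier, some of which are elided here; `$ADMIN_USER` is an illustrative placeholder):

```shell
# A sketch, not the only way to do it; adjust paths and names for your setup.
kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP
kubectl config set-credentials $ADMIN_USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN
kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$ADMIN_USER
kubectl config use-context $CONTEXT_NAME
```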
Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how
many distinct files to make:
1. Use the same credential as the admin
- This is simplest to set up.
1. One token and kubeconfig file for all kubelets, one for all kube-proxy, one for admin.
@ -274,8 +285,9 @@ contexts:
cluster: local
user: kubelet
name: service-account-context
current-context: service-account-context
```
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
`/var/lib/kubelet/kubeconfig`.
@ -284,7 +296,8 @@ guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
This section discusses how to configure machines to be Kubernetes nodes.
You should run three daemons on every node:
- docker or rkt
- kubelet
- kube-proxy
@ -307,10 +320,12 @@ as follows before proceeding to configure Docker for Kubernetes.
```shell
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
```
The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network.
Some suggested docker options:
- create your own bridge for the per-node CIDR ranges, call it cbr0, and set the `--bridge=cbr0` option on docker.
- set `--iptables=false` so that docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions)
and kube-proxy can manage iptables instead of docker.
@ -324,7 +339,8 @@ so that kube-proxy can manage iptables instead of docker.
- `--insecure-registry $CLUSTER_SUBNET`
- to connect to a private registry, if you set one up, without using SSL.
You may want to increase the number of open files for docker:
- `DOCKER_NOFILE=1000000`
Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses `/etc/default/docker`.
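For example, on a Debian-based node the options above might end up in `/etc/default/docker` roughly as follows (a sketch; the registry subnet is an illustrative value for your `$CLUSTER_SUBNET`):

```shell
# /etc/default/docker (sketch) -- options assembled from the suggestions above.
DOCKER_OPTS="--bridge=cbr0 --iptables=false --insecure-registry=10.0.0.0/16"
DOCKER_NOFILE=1000000
```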
@ -345,14 +361,16 @@ minimum version required to match rkt v0.5.6 is
for rkt networking support. You can start the rkt metadata service with a command like
`sudo systemd-run rkt metadata-service`
Then you need to configure your kubelet with the flag:
- `--container-runtime=rkt`
### kubelet
All nodes should run kubelet. See [Selecting Binaries](#selecting-binaries).
Arguments to consider:
- If following the HTTPS security approach:
- `--api-servers=https://$MASTER_IP`
- `--kubeconfig=/var/lib/kubelet/kubeconfig`
@ -372,7 +390,8 @@ All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
strictly required, but being consistent is easier.) Obtain a binary as described for
kubelet.
Arguments to consider:
- If following the HTTPS security approach:
- `--api-servers=https://$MASTER_IP`
- `--kubeconfig=/var/lib/kube-proxy/kubeconfig`
@ -391,11 +410,14 @@ this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
because of how this is used later.
Recommended, automatic approach:
1. Set the `--configure-cbr0=true` option in the kubelet init script and restart the kubelet service. Kubelet will configure cbr0 automatically.
It will wait to do this until the node controller has set Node.Spec.PodCIDR. Since you have not set up the apiserver and node controller
yet, the bridge will not be set up immediately.
Alternate, manual approach:
1. Set `--configure-cbr0=false` on kubelet and restart.
1. Create a bridge
- e.g. `brctl addbr cbr0`.
@ -411,8 +433,9 @@ other, then you may need to do masquerading just for destination IPs outside
the cluster network. For example:
```shell
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE \! -d ${CLUSTER_SUBNET}
```
This will rewrite the source address from
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
[connection tracking](http://www.iptables.info/en/connection-state)
@ -443,20 +466,23 @@ various Getting Started Guides.
While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using
traditional system administration/automation approaches, the remaining *master* components of Kubernetes are
all configured and managed *by Kubernetes*:
- their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or
systemd unit.
- they are kept running by Kubernetes rather than by init.
### etcd
You will need to run one or more instances of etcd.
- Recommended approach: run one etcd instance, with its log written to a directory backed
by durable storage (RAID, GCE PD)
- Alternative: run 3 or 5 etcd instances.
- Log can be written to non-durable storage because storage is replicated.
- run a single apiserver which connects to one of the etcd nodes.
See [cluster-troubleshooting](/{{page.version}}/docs/admin/cluster-troubleshooting) for more discussion on factors affecting cluster
availability.
To run an etcd instance:
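As a rough sketch of one way to do this (not necessarily the approach this guide's manifests use; the host path, port, and binary path inside the image are assumptions, so adjust them for the image you selected):

```shell
# Sketch only: run one etcd, storing data under /var/etcd/data on the host.
docker run --net=host -d -v /var/etcd/data:/var/etcd/data ${ETCD_IMAGE} \
  /usr/local/bin/etcd --data-dir=/var/etcd/data \
  --listen-client-urls=http://127.0.0.1:4001 \
  --advertise-client-urls=http://127.0.0.1:4001
```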
@ -550,8 +576,9 @@ For each of these components, the steps to start them running are similar:
}
]
}
}
```
Here are some apiserver flags you may need to set:
- `--cloud-provider=` see [cloud providers](#cloud-providers)
@ -577,7 +604,8 @@ If you are using the HTTPS approach, then set:
- `--token-auth-file=/srv/kubernetes/known_tokens.csv`
- `--basic-auth-file=/srv/kubernetes/basic_auth.csv`
This pod mounts several node file system directories using the `hostPath` volumes. Their purposes are:
- The `/etc/ssl` mount allows the apiserver to find the SSL root certs so it can
authenticate external services, such as a cloud provider.
- This is not required if you do not use a cloud provider (e.g. bare-metal).
@ -644,8 +672,8 @@ Complete this template for the scheduler pod:
]
}
}
```
Typically, no additional flags are required for the scheduler.
Optionally, you may want to mount `/var/log` as well and redirect output there.
@ -713,9 +741,10 @@ Template for controller manager pod:
]
}
}
```
Flags to consider using with controller manager:
- `--cluster-name=$CLUSTER_NAME`
- `--cluster-cidr=`
- *TODO*: explain this flag.
@ -736,8 +765,9 @@ Use `ps` or `docker ps` to verify that each process has started. For example, v
```shell
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
```
Then try to connect to the apiserver:
```shell
@ -748,8 +778,9 @@ $ curl -s http://localhost:8080/api
"versions": [
"v1"
]
}
```
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
You should soon be able to see all your nodes by running the `kubectl get nodes` command.
Otherwise, you will need to manually create node objects.
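A minimal node object can be created by hand if needed; for example, a sketch (the node name is a placeholder and must match the name the kubelet reports):

```shell
# Sketch: register a node manually when self-registration is not used.
cat <<EOF > /tmp/node1.json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": { "name": "node1" }
}
EOF
kubectl create -f /tmp/node1.json
```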

View File

@ -8,6 +8,7 @@ This document describes how to deploy Kubernetes on Ubuntu bare metal nodes with
This guide will set up a simple Kubernetes cluster with a master and two nodes. We will start the following processes with systemd:
On the Master:
- `etcd`
- `kube-apiserver`
- `kube-controller-manager`
@ -15,6 +16,7 @@ On the Master:
- `calico-node`
On each Node:
- `kube-proxy`
- `kube-kubelet`
- `calico-node`
@ -31,34 +33,34 @@ On each Node:
First, get the sample configurations for this tutorial
```shell
wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz
tar -xvf master.tar.gz
```
### Setup environment variables for systemd services on Master
Many of the sample systemd services provided rely on environment variables on a per-node basis. Here we'll edit those environment variables and move them into place.
1.) Copy the network-environment-template from the `master` directory for editing.
```shell
cp calico-kubernetes-ubuntu-demo-master/master/network-environment-template network-environment
```
2.) Edit `network-environment` to represent your current host's settings.
3.) Move the `network-environment` into `/etc`
```shell
sudo mv -f network-environment /etc
```
### Install Kubernetes on Master
1.) Build & Install Kubernetes binaries
```shell
# Get the Kubernetes Source
wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.3/kubernetes.tar.gz
@ -70,32 +72,32 @@ kubernetes/cluster/ubuntu/build.sh
# Add binaries to /usr/bin
sudo cp -f binaries/master/* /usr/bin
sudo cp -f binaries/kubectl /usr/bin
```
2.) Install the sample systemd unit files for launching the Kubernetes services
```shell
sudo cp -f calico-kubernetes-ubuntu-demo-master/master/*.service /etc/systemd
sudo systemctl enable /etc/systemd/etcd.service
sudo systemctl enable /etc/systemd/kube-apiserver.service
sudo systemctl enable /etc/systemd/kube-controller-manager.service
sudo systemctl enable /etc/systemd/kube-scheduler.service
```
3.) Launch the processes.
```shell
sudo systemctl start etcd.service
sudo systemctl start kube-apiserver.service
sudo systemctl start kube-controller-manager.service
sudo systemctl start kube-scheduler.service
```
### Install Calico on Master
In order to allow the master to route to pods on our nodes, we will launch the calico-node daemon on our master. This will allow it to learn routes over BGP from the other calico-node daemons in the cluster. The docker daemon should already be running before calico is started.
```shell
# Install the calicoctl binary, which will be used to launch calico
wget https://github.com/projectcalico/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x calicoctl
@ -105,9 +107,9 @@ sudo cp -f calicoctl /usr/bin
sudo cp -f calico-kubernetes-ubuntu-demo-master/master/calico-node.service /etc/systemd
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
> Note: calico-node may take a few minutes on first boot while it downloads the calico-node docker image.
## Setup Nodes
@ -117,42 +119,42 @@ Perform these steps **once on each node**, ensuring you appropriately set the en
1.) Get the sample configurations for this tutorial
```shell
wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz
tar -xvf master.tar.gz
```
2.) Copy the network-environment-template from the `node` directory
```shell
cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment
```
3.) Edit `network-environment` to represent your current host's settings.
4.) Move `network-environment` into `/etc`
```shell
sudo mv -f network-environment /etc
```
### Configure Docker on the Node
#### Create the veth
Instead of using docker's default interface (docker0), we will configure a new one to use the desired IP ranges
```shell
sudo apt-get install -y bridge-utils
sudo brctl addbr cbr0
sudo ifconfig cbr0 up
sudo ifconfig cbr0 <IP>/24
```
> Replace \<IP\> with the subnet for this host's containers. Example topology:
Node | cbr0 IP
------- | -------------
node-1 | 192.168.1.1/24
node-2 | 192.168.2.1/24
node-X | 192.168.X.1/24
@ -167,16 +169,16 @@ The Docker daemon must be started and told to use the already configured cbr0 in
3.) Reload the systemd configuration and restart docker.
```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```
### Install Calico on the Node
1.) Install Calico
```shell
# Get the calicoctl binary
wget https://github.com/projectcalico/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x calicoctl
@ -186,22 +188,22 @@ sudo cp -f calicoctl /usr/bin
sudo cp calico-kubernetes-ubuntu-demo-master/node/calico-node.service /etc/systemd
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
> The calico-node service will automatically get the kubernetes-calico plugin binary and install it on the host system.
2.) Use calicoctl to add an IP pool. We must specify the IP and port that the master's etcd is listening on.
**NOTE: This step only needs to be performed once per Kubernetes deployment, as it covers all the nodes' IP ranges.**
```shell
ETCD_AUTHORITY=<MASTER_IP>:4001 calicoctl pool add 192.168.0.0/16
```
### Install Kubernetes on the Node
1.) Build & Install Kubernetes binaries
```shell
# Get the Kubernetes Source
wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.3/kubernetes.tar.gz
@ -217,20 +219,20 @@ sudo cp -f binaries/minion/* /usr/bin
wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy
sudo cp kube-proxy /usr/bin/
sudo chmod +x /usr/bin/kube-proxy
```
2.) Install and launch the sample systemd unit files for the Kubernetes services
```shell
sudo cp calico-kubernetes-ubuntu-demo-master/node/kube-proxy.service /etc/systemd/
sudo cp calico-kubernetes-ubuntu-demo-master/node/kube-kubelet.service /etc/systemd/
sudo systemctl enable /etc/systemd/kube-proxy.service
sudo systemctl enable /etc/systemd/kube-kubelet.service
sudo systemctl start kube-proxy.service
sudo systemctl start kube-kubelet.service
```
> *You may want to check their status afterwards to ensure everything is running.*
## Install the DNS Addon
@ -253,23 +255,21 @@ With this sample configuration, because the containers have private `192.168.0.0
The simplest method for enabling connectivity from containers to the internet is to use an iptables masquerade rule. This is the standard mechanism [recommended](/{{page.version}}/docs/admin/networking.html#google-compute-engine-gce) in the Kubernetes GCE environment.
We need to NAT traffic that has a destination outside of the cluster. Internal traffic includes the master/nodes, and the container IP pools. A suitable masquerade chain would follow the pattern below, replacing the following variables:
- `CONTAINER_SUBNET`: The cluster-wide subnet from which container IPs are chosen. All cbr0 bridge subnets fall within this range. The above example uses `192.168.0.0/16`.
- `KUBERNETES_HOST_SUBNET`: The subnet from which Kubernetes node / master IP addresses have been chosen.
- `HOST_INTERFACE`: The interface on the Kubernetes node which is used for external connectivity. The above example uses `eth0`
```shell
sudo iptables -t nat -N KUBE-OUTBOUND-NAT
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <CONTAINER_SUBNET> -o <HOST_INTERFACE> -j RETURN
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <KUBERNETES_HOST_SUBNET> -o <HOST_INTERFACE> -j RETURN
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -j KUBE-OUTBOUND-NAT
```
This chain should be applied on the master and all nodes. In production, these rules should be persisted, e.g. with `iptables-persistent`.
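For example, on Ubuntu one way to persist them is with the `iptables-persistent` package (a sketch; the rules file path assumes Debian/Ubuntu conventions):

```shell
# Save the current rules so they are restored on boot.
sudo apt-get install -y iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
```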
### NAT at the border router
In a datacenter environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the datacenter, and so NAT is not needed on the nodes (though it may be enabled at the datacenter edge to allow outbound-only internet connectivity).

View File

@ -1,9 +1,6 @@
---
title: "Kubernetes Deployment On Bare-metal Ubuntu Nodes"
---
## Introduction
This document describes how to deploy Kubernetes on Ubuntu nodes; the given examples involve 1 master and 3 nodes.
You can scale to **any number of nodes** by changing some settings with ease.
The original idea was heavily inspired by @jainvipin's Ubuntu single-node
@ -11,7 +8,6 @@ work, which has been merge into this document.
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
{% include pagetoc.html %}
## Prerequisites
@ -32,15 +28,15 @@ First clone the kubernetes github repo
```shell
$ git clone https://github.com/kubernetes/kubernetes.git
```
Then download all the needed binaries into the given directory (cluster/ubuntu/binaries):
```shell
$ cd kubernetes/cluster/ubuntu
$ ./build.sh
```
You can customize the etcd, flannel, and k8s versions by changing the corresponding variables
`ETCD_VERSION`, `FLANNEL_VERSION`, and `KUBE_VERSION` in build.sh. By default, the etcd version is 2.0.12,
the flannel version is 0.4.0, and the k8s version is 1.0.3.
@ -73,8 +69,8 @@ export NUM_MINIONS=${NUM_MINIONS:-3}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
```
The first variable `nodes` defines all your cluster nodes; the MASTER node comes first, and the entries are
separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3> `
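For example (the user names and IPs below are illustrative):

```shell
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
```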
@ -88,22 +84,28 @@ that you do have a valid private ip range defined here, because some IaaS provid
You can use one of the three private network ranges below, according to RFC 1918. Also, avoid choosing a range
that conflicts with your own private network range.
```shell
10.0.0.0    -   10.255.255.255  (10/8 prefix)
172.16.0.0  -   172.31.255.255  (172.16/12 prefix)
192.168.0.0 -   192.168.255.255 (192.168/16 prefix)
```
The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network,
and it should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.
**Note:** When deploying, the master needs to connect to the Internet to download the necessary files. If your machines are located in a private network that needs a proxy setting to connect to the Internet, you can set the config `PROXY_SETTING` in cluster/ubuntu/config-default.sh such as:
```shell
PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"
```
After all the above variables have been set correctly, we can use the following command in the cluster/ directory to bring up the whole cluster.
```shell
KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
```
The scripts automatically scp binaries and config files to all the machines and start the k8s service on them.
The only thing you need to do is type the sudo password when prompted.
@ -112,14 +114,14 @@ The only thing you need to do is to type the sudo password when promoted.
Deploying minion on machine 10.10.103.223
...
[sudo] password to copy files and start minion:
```
If all goes well, you will see the message below on the console, indicating that the k8s cluster is up.
```shell
Cluster validation succeeded
```
### Test it out
You can use the `kubectl` command to check whether the newly created k8s cluster is working correctly.
@ -134,8 +136,8 @@ NAME LABELS STATUS
10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
You can also run the Kubernetes [guestbook example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/) to build a redis-backed cluster on the k8s cluster.
@ -154,8 +156,8 @@ DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```
`DNS_SERVER_IP` defines the IP of the DNS server, which must be in the `SERVICE_CLUSTER_IP_RANGE`.
`DNS_REPLICAS` describes how many DNS pods run in the cluster.
@ -163,15 +165,15 @@ By default, we also take care of kube-ui addon.
```shell
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
```
After all the above variables have been set, just type the following command.
```shell
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```
After some time, you can use `$ kubectl get pods --namespace=kube-system` to see that the DNS and UI pods are running in the cluster.
### Ongoing
@ -197,17 +199,18 @@ Please try:
1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries
2. Check `/etc/default/etcd`; as we do not have much input validation, a correct config should look like:
```shell
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"
```
3. You may find the following commands useful; the former brings down the cluster, while
the latter starts it again.
```shell
KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
```
4. You can also customize your own settings in `/etc/default/{component_name}`.
@ -217,9 +220,9 @@ If you already have a kubernetes cluster, and want to upgrade to a new version,
you can use the following command in the cluster/ directory to update the whole cluster or a specified node to a new version.
```shell
KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>
```
It can be done for all components (by default), the master (`-m`), or a specified node (`-n`).
If the version is not specified, the script will try to use local binaries. You should ensure all the binaries are well prepared in the path `cluster/ubuntu/binaries`.
@ -238,14 +241,14 @@ binaries/
├── flanneld
├── kubelet
└── kube-proxy
```
Upgrading a single node is currently experimental. You can use the following command to get help.
```shell
KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h
```
Some examples are as follows:
* upgrade master to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -m 1.0.5`
@ -254,4 +257,4 @@ Some examples are as follows:
The script will not delete any resources of your cluster; it just replaces the binaries.
You can use the `kubectl` command to check whether the newly upgraded k8s cluster is working correctly.
For example, use `$ kubectl get nodes` to see if all of your nodes are ready, or refer to [test-it-out](ubuntu/#test-it-out).

View File

@ -1,11 +1,8 @@
---
title: "Getting started with Vagrant"
---
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
{% include pagetoc.html %}
### Prerequisites
@ -24,16 +21,18 @@ Setting up a cluster is as simple as running:
```shell
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space).
@ -45,22 +44,25 @@ If you installed more than one Vagrant provider, Kubernetes will usually pick th
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
By default, each VM in the cluster is running Fedora.
To access the master or any node:
```shell
vagrant ssh master
vagrant ssh minion-1
```
If you are running more than one node, you can access the others by:
```shell
vagrant ssh minion-2
vagrant ssh minion-3
```
Each node in the cluster installs the docker daemon and the kubelet.
The master node instantiates the Kubernetes master components as pods on the machine.
@ -79,8 +81,9 @@ To view the service status and/or logs on the kubernetes-master:
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```
To view the services on any of the nodes:
```shell
@ -91,8 +94,9 @@ To view the services on any of the nodes:
[root@kubernetes-master ~] $ journalctl -ru kubelet
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
```
### Interacting with your Kubernetes cluster with Vagrant.
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
@ -100,19 +104,22 @@ With your Kubernetes cluster up, you can manage the nodes in your cluster with t
To push updates to new Kubernetes code after making source changes:
```shell
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```shell
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```shell
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
@ -123,28 +130,32 @@ $ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```shell
cat ~/.kubernetes_vagrant_auth
```
```json
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with:
```shell
./cluster/kubectl.sh get nodes
```
### Running containers
Your cluster is running; you can list the nodes in your cluster:
@ -155,8 +166,9 @@ $ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
@ -170,13 +182,15 @@ $ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas
```shell
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```
When listing the pods, you will see that three containers have been started and are in Waiting state:
```shell
@ -184,8 +198,9 @@ $ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 0/1 Pending 0 10s
my-nginx-gr3hh 0/1 Pending 0 10s
my-nginx-xql4j 0/1 Pending 0 10s
```
You need to wait for the provisioning to complete; you can monitor the nodes by doing:
```shell
@ -194,8 +209,9 @@ kubernetes-minion-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```shell
@ -205,8 +221,9 @@ kubernetes-minion-1:
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
Going back to listing the pods, services and replicationcontrollers, you now have:
```shell
@ -218,8 +235,9 @@ my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.0.0.1 <none> 80/TCP run=my-nginx 1h
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/README) application to learn how to create a service.
You can already play with scaling the replicas with:
@ -229,8 +247,9 @@ $ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 2m
my-nginx-gr3hh 1/1 Running 0 2m
```
Congratulations!
### Troubleshooting
@ -243,26 +262,30 @@ By default the Vagrantfile will download the box from S3. You can change this (a
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
```shell
rm ~/.kubernetes_vagrant_auth
```
After using kubectl.sh make sure that the correct credentials are set:
```shell
cat ~/.kubernetes_vagrant_auth
```
```json
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
@ -280,22 +303,25 @@ Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this, by setting `NUM_MINIONS` to 1 like so:
```shell
export NUM_MINIONS=1
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
```shell
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_MINION_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. This is not supported at this time.
@ -305,5 +331,5 @@ export KUBERNETES_MINION_MEMORY=2048
You can ensure that vagrant uses nfs to sync folders with virtual machines by setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. nfs is faster than virtualbox or vmware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs) for details on configuring nfs on the host. This setting will have no effect on the libvirt provider, which uses nfs by default. For example:
```shell
export KUBERNETES_VAGRANT_USE_NFS=true
```

View File

@ -6,8 +6,6 @@ Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This
cluster is set up and controlled from your workstation (or wherever you find
convenient).
{% include pagetoc.html %}
### Prerequisites
@ -19,13 +17,15 @@ convenient).
```shell
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
```shell
go get github.com/vmware/govmomi/govc
```
5. Get or build a [binary release](binary_release)
### Setup
@ -35,8 +35,9 @@ Download a prebuilt Debian 7.7 VMDK that we'll use as a base image:
```shell
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2014-11-11/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
Import this VMDK into your vSphere datastore:
```shell
@ -45,13 +46,15 @@ export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
govc import.vmdk kube.vmdk ./kube/
```
Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
```shell
govc datastore.ls ./kube/
```
Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required
parameters. The guest login for the image that you imported is `kube:kube`.
@ -63,8 +66,9 @@ This process takes about ~10 minutes.
```shell
cd kubernetes # Extracted binary release OR repository root
export KUBERNETES_PROVIDER=vsphere
cluster/kube-up.sh
```
Refer to the top level README and the getting started guide for Google Compute
Engine. Once you have successfully reached this point, your vSphere Kubernetes
deployment works just as any other one!

View File

@ -2,6 +2,7 @@
title: "Troubleshooting"
---
Sometimes things go wrong. This guide is aimed at making them right. It has two sections:
* [Troubleshooting your application](user-guide/application-troubleshooting) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your cluster](admin/cluster-troubleshooting) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.
@ -16,11 +17,13 @@ If your problem isn't answered by any of the guides above, there are variety of
If you aren't familiar with it, many of your questions may be answered by the [user guide](user-guide/README).
We also have a number of FAQ pages:
* [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ)
* [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ)
* [Services FAQ](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ)
You may also find the Stack Overflow topics relevant:
* [Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes)
* [Google Container Engine - GKE](http://stackoverflow.com/questions/tagged/google-container-engine)
@ -45,6 +48,7 @@ If you have what looks like a bug, or you would like to make a feature request,
Before you file an issue, please search existing issues to see if your issue is already covered.
If filing a bug, please include detailed information about how to reproduce the problem, such as:
* Kubernetes version: `kubectl version`
* Cloud provider, OS distro, network configuration, and Docker version
* Steps to reproduce the problem

View File

@ -1,9 +1,6 @@
---
title: "Accessing Clusters"
---
{% include pagetoc.html %}
## Accessing the cluster API
@ -21,8 +18,9 @@ or someone else setup the cluster and provided you with credentials and a locati
Check the location and credentials that kubectl knows about with this command:
```shell
$ kubectl config view
```
Many of the [examples](https://github.com/kubernetes/kubernetes/tree/master/examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl).
@ -30,7 +28,8 @@ kubectl and complete documentation is found in the [kubectl manual](kubectl/kube
Kubectl handles locating and authenticating to the apiserver.
If you want to directly access the REST API with an http client like
curl or wget, or a browser, there are several ways to locate and authenticate:
- Run kubectl in proxy mode.
- Recommended approach.
- Uses stored apiserver location.
@ -49,8 +48,9 @@ locating the apiserver and authenticating.
Run it like this:
```shell
$ kubectl proxy --port=8080 &
```
See [kubectl proxy](kubectl/kubectl_proxy) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
@ -61,8 +61,9 @@ $ curl http://localhost:8080/api/
"versions": [
"v1"
]
}
```
#### Without kubectl proxy
It is also possible to avoid using kubectl proxy by passing an authentication token
@ -76,8 +77,9 @@ $ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
"versions": [
"v1"
]
}
```
The above example uses the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
@ -116,14 +118,16 @@ is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
From within a pod the recommended ways to connect to API are:
- run a kubectl proxy as one of the containers in the pod, or as a background
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](https://github.com/kubernetes/kubernetes/tree/master/examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver.
In each case, the credentials of the pod are used to communicate securely with the apiserver.
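For example, a sketch of calling the API by hand from inside a pod using that token (the `KUBERNETES_SERVICE_*` variables are injected into every container; `--insecure` is used here only for brevity, as in the earlier example, and carries the same MITM caveat):

```shell
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --insecure -H "Authorization: Bearer $TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api
```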
@ -138,7 +142,8 @@ such as your desktop machine.
### Ways to connect
You have several options for connecting to nodes, pods and services from outside the cluster:
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](services) and
@ -177,8 +182,9 @@ $ kubectl cluster-info
kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kibana-logging
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
```
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
@ -209,11 +215,13 @@ about namespaces? 'proxy' verb? -->
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
#### Using web browsers to access services running on the cluster
You may be able to put an apiserver proxy url into the address bar of a browser. However:
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
but your cluster may not be configured to accept basic auth.
- Some web apps may not work, particularly those with client side javascript that construct urls in a
@ -225,7 +233,8 @@ The redirect capabilities have been deprecated and removed. Please use a proxy
## So Many Proxies
There are several different proxies you may encounter when using Kubernetes:
1. The [kubectl proxy](#directly-accessing-the-rest-api):
- runs on a user's desktop or in a pod
- proxies from a localhost address to the Kubernetes apiserver
@ -257,7 +266,5 @@ There are several different proxies you may encounter when using Kubernetes:
- use UDP/TCP only
- implementation varies by cloud provider.
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are setup correctly.

View File

@ -12,8 +12,8 @@ Like labels, annotations are key-value maps.
"key1" : "value1",
"key2" : "value2"
}
```
Possible information that could be recorded in annotations:
* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging
@ -24,7 +24,4 @@ Possible information that could be recorded in annotations:
* lightweight rollout tool metadata (config and/or checkpoints)
* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website
Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc.
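As a quick illustration (a sketch; the pod name, key, and value are placeholders), annotations can also be attached to existing objects from the command line:

```shell
kubectl annotate pods my-pod description="owner: team-website, on-call: 555-0100"
```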

View File

@ -15,6 +15,7 @@ Users are highly encouraged to check out our [FAQ](https://github.com/kubernetes
The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or
your Service?
* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)
* [Debugging Services](#debugging-services)
@ -25,8 +26,8 @@ The first step in debugging a Pod is taking a look at it. Check the current sta
```shell
$ kubectl describe pods ${POD_NAME}
```
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
Continue debugging depending on the state of the pods.
@ -50,6 +51,7 @@ scheduled. In most cases, `hostPort` is unnecessary, try using a Service object
If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine.
Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct
* Have you pushed the image to the repository?
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.
@ -61,28 +63,28 @@ the current container:
```shell
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
```
If your container has previously crashed, you can access the previous container's crash log with:
```shell
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```
Alternately, you can run commands inside that container with `exec`:
```shell
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.
As an example, to look at the logs from a running Cassandra pod, you might run
```shell
$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
```
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
but this should generally not be necessary given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a
feature request on GitHub describing your use case and why these tools are insufficient.
@ -100,12 +102,12 @@ The first thing to do is to delete your pod and try creating it again with the `
For example, run `kubectl create --validate -f mypod.yaml`.
If you misspelled `command` as `commnd`, then it will give an error like this:
```shell
I0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd
I0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
pods/mypod
```
<!-- TODO: Now that #11914 is merged, this advice may need to be updated -->
The next thing to check is whether the pod on the apiserver
@ -136,8 +138,8 @@ You can view this resource with:
```shell
$ kubectl get endpoints ${SERVICE_NAME}
```
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different
IP addresses in the Service's endpoints.
@ -153,14 +155,14 @@ spec:
  selector:
    name: nginx
    type: frontend
```
You can use:
```shell
$ kubectl get pods --selector=name=nginx,type=frontend
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
@ -176,6 +178,7 @@ in the endpoints list, it's likely that the proxy can't contact your pods.
There are three things to
check:
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods)
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
@ -184,7 +187,4 @@ check:
If none of the above solves your problem, follow the instructions in [Debugging Service document](debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.
You may also visit [troubleshooting document](../troubleshooting) for more information.

View File

@ -66,8 +66,8 @@ spec:
limits:
memory: "128Mi"
cpu: "500m"
```
## How Pods with Resource Requests are Scheduled
When a pod is created, the Kubernetes scheduler selects a node for the pod to
@ -86,6 +86,7 @@ When kubelet starts a container of a pod, it passes the CPU and memory limits to
runner (Docker or rkt).
When using Docker:
- The `spec.container[].resources.limits.cpu` is multiplied by 1024, converted to an integer, and
used as the value of the [`--cpu-shares`](
https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
@ -125,13 +126,13 @@ $ kubectl describe pod frontend | grep -A 3 Events
Events:
FirstSeen LastSeen Count From Subobject PathReason Message
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
In the case shown above, the pod "frontend" fails to be scheduled due to insufficient
CPU resource on the node. Similar error messages can also suggest failure due to insufficient
memory (PodExceedsFreeMemory). In general, if a pod or pods are pending with this or a similar
message, then there are several things to try:
- Add more nodes to the cluster.
- Terminate unneeded pods to make room for pending pods.
- Check that the pod is not larger than all the nodes. For example, if all the nodes
@ -162,8 +163,8 @@ TotalResourceLimits:
CPU(milliCPU): 910 (91% of total)
Memory(bytes): 2485125120 (59% of total)
[ ... lines removed for clarity ...]
```
Here you can see from the `Allocated resources` section that a pod which asks for more than
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
@ -214,8 +215,8 @@ Events:
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
@ -224,8 +225,8 @@ You can call `get pod` with the `-o go-template=...` option to fetch the status
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]][13:59:03] clusterScaleDoc ~/go/src/github.com/kubernetes/kubernetes $
```
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
## Planned Improvements
@ -244,7 +245,4 @@ Currently, one unit of CPU means different things on different cloud providers,
machine types within the same cloud providers. For example, on AWS, the capacity of a node
is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more consistency
across providers and platforms.
across providers and platforms.

View File

@ -24,8 +24,8 @@ spec: # specification of the pod's contents
- name: hello
image: "ubuntu:14.04"
command: ["/bin/echo","hello'?,'?world"]
```
The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.
`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
@ -35,15 +35,15 @@ The [`command`](containers.html#containers-and-commands) overrides the Docker co
```yaml
command: ["/bin/echo"]
args: ["hello","world"]
```
This pod can be created using the `create` command:
```shell
$ kubectl create -f ./hello-world.yaml
pods/hello-world
```
`kubectl` prints the resource type and name of the resource created when successful.
## Validating configuration
@ -52,16 +52,16 @@ If you're not sure you specified the resource correctly, you can ask `kubectl` t
```shell
$ kubectl create -f ./hello-world.yaml --validate
```
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
```shell
I0709 06:33:05.600829 14160 schema.go:126] unknown field: entrypoint
I0709 06:33:05.600988 14160 schema.go:129] this may be a false alarm, see http://issue.k8s.io/6842
pods/hello-world
```
`kubectl create --validate` currently warns about problems it detects, but creates the resource anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.
View the [Pod API
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_pod)
@ -86,15 +86,15 @@ spec: # specification of the pod's contents
value: "hello world"
command: ["/bin/sh","-c"]
args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](/{{page.version}}/docs/design/expansion):
```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```
## Viewing pod status
You can see the pod you created (actually all of your cluster's pods) using the `get` command.
@ -105,8 +105,8 @@ If you're quick, it will look as follows:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 0s
```
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
@ -115,8 +115,8 @@ After the pod has been scheduled, the image may need to be pulled to the node on
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 5s
```
The `READY` column shows how many containers in the pod are running.
Almost immediately after it starts running, this command will terminate. `kubectl` shows that the container is no longer running and displays the exit status:
@ -125,8 +125,8 @@ Almost immediately after it starts running, this command will terminate. `kubect
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 ExitCode:0 0 15s
```
## Viewing pod output
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/userguide/usingdocker/), `kubectl logs` will show you the output:
@ -134,8 +134,8 @@ You probably want to see the output of the command you ran. As with [`docker log
```shell
$ kubectl logs hello-world
hello world
```
## Deleting pods
When you're done looking at the output, you should delete the pod:
@ -143,8 +143,8 @@ When you're done looking at the output, you should delete the pod:
```shell
$ kubectl delete pod hello-world
pods/hello-world
```
As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.
You can also use the resource/name format to specify the pod:
@ -152,8 +152,8 @@ You can also use the resource/name format to specify the pod:
```shell
$ kubectl delete pods/hello-world
pods/hello-world
```
Terminated pods aren't currently automatically deleted, so that you can observe their final status, so be sure to clean up your dead pods.
On the other hand, containers and their logs are eventually deleted automatically in order to free up disk space on the nodes.
@ -161,6 +161,3 @@ On the other hand, containers and their logs are eventually deleted automaticall
## What's next?
[Learn about deploying continuously running applications.](deploying-applications)

View File

@ -35,8 +35,8 @@ spec:
image: nginx
ports:
- containerPort: 80
```
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
```shell
@ -44,16 +44,16 @@ $ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-minion-93ly
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-minion-93ly
```
Check your pods' IPs:
```shell
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
```
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
You can read more about [how we achieve this](../admin/networking.html#how-to-achieve-this) if you're curious.
@ -80,8 +80,8 @@ spec:
protocol: TCP
selector:
app: nginx
```
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_service) to see the list of supported fields in service definition.
Check your Service:
@ -90,8 +90,8 @@ $ kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.179.240.1 <none> 443/TCP <none> 8d
nginxsvc 10.179.252.126 122.222.183.144 80/TCP,81/TCP,82/TCP run=nginx2 11m
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
```shell
@ -110,8 +110,8 @@ No events.
$ kubectl get ep
NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](services.html#virtual-ips-and-service-proxies).
## Accessing the Service
@ -126,8 +126,8 @@ When a Pod is run on a Node, the kubelet adds a set of environment variables for
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around the Service exists *before* the replicas. This will given you scheduler level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
```shell
@ -142,8 +142,8 @@ KUBERNETES_SERVICE_PORT=443
NGINXSVC_SERVICE_HOST=10.0.116.146
KUBERNETES_SERVICE_HOST=10.0.0.1
NGINXSVC_SERVICE_PORT=80
```
### DNS
Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign dns names to other Services. You can check if it's running on your cluster:
@ -152,8 +152,8 @@ Kubernetes offers a DNS cluster addon Service that uses skydns to automatically
$ kubectl get services kube-dns --namespace=kube-system
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kube-dns 10.179.240.10 <none> 53/UDP,53/TCP k8s-app=kube-dns 8d
```
If it isn't running, you can [enable it](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (nginxsvc), and a dns server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
@ -171,8 +171,8 @@ spec:
imagePullPolicy: IfNotPresent
name: curlcontainer
restartPolicy: Always
```
And perform a lookup of the nginx Service
```shell
@ -187,11 +187,12 @@ Server: 10.0.0.10
Address 1: 10.0.0.10
Name: nginxsvc
Address 1: 10.0.116.146
```
## Securing the Service
Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
* Self signed certificates for https (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](secrets) that makes the certificates accessible to pods
@ -206,8 +207,8 @@ $ kubectl get secrets
NAME TYPE DATA
default-token-il9rc kubernetes.io/service-account-token 1
nginxsecret Opaque 2
```
Now modify your nginx replicas to start a https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
```yaml
@ -255,9 +256,10 @@ spec:
volumeMounts:
- mountPath: /etc/nginx/ssl
name: secret-volume
```
Noteworthy points about the nginx-app manifest:
- It contains both rc and service specification in the same file
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/master/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is setup *before* the nginx server is started.
@ -268,8 +270,8 @@ replicationcontrollers/my-nginx
services/nginxsvc
services/nginxsvc
replicationcontrollers/my-nginx
```
At this point you can reach the nginx server from any node.
```shell
@ -278,8 +280,8 @@ $ kubectl get pods -o json | grep -i podip
node $ curl -k https://10.1.0.80
...
<h1>Welcome to nginx!</h1>
```
Note how we supplied the `-k` parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Lets test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
@ -322,8 +324,8 @@ $ kubectl exec curlpod -- curl https://nginxsvc --cacert /etc/nginx/ssl/nginx.cr
...
<title>Welcome to nginx!</title>
...
```
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
@ -359,8 +361,8 @@ $ kubectl get nodes -o json | grep ExternalIP -C 2
$ curl https://104.197.63.17:30645 -k
...
<h1>Welcome to nginx!</h1>
```
Lets now recreate the Service to use a cloud load balancer, just change the `Type` of Service in the nginx-app.yaml from `NodePort` to `LoadBalancer`:
```shell
@ -373,14 +375,11 @@ nginxsvc 10.179.252.126 162.222.184.144 80/TCP,81/TCP,82/TCP run=nginx2
$ curl https://162.22.184.144 -k
...
<title>Welcome to nginx!</title>
```
The IP address in the `EXTERNAL_IP` column is the one that is available on the public internet. The `CLUSTER_IP` is only available inside your
cluster/private cloud network.
## What's next?
[Learn about more Kubernetes features that will help you run containers reliably in production.](production-pods)

View File

@ -7,35 +7,40 @@ kubectl port-forward forwards connections to a local port to a port on a pod. It
```shell
$ kubectl create examples/redis/redis-master.yaml
pods/redis-master
pods/redis-master
```
wait until the Redis master pod is Running and Ready,
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master 2/2 Running 0 41s
redis-master 2/2 Running 0 41s
```
## Connecting to the Redis master[a]
The Redis master is listening on port 6397, to verify this,
```shell
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
6379
```
then we forward the port 6379 on the local workstation to the port 6379 of pod redis-master,
```shell
$ kubectl port-forward redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
```
To verify the connection is successful, we run a redis-cli on the local workstation,
```shell
$ redis-cli
127.0.0.1:6379> ping
PONG
PONG
```
Now one can debug the database from the local workstation.

View File

@ -10,8 +10,9 @@ kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
```shell
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```
if this command does not find the URL, try the steps [here](ui.html#accessing-the-ui).
@ -21,6 +22,7 @@ The above proxy URL is an access to the kube-ui service provided by the apiserve
```shell
$ kubectl proxy --port=8001
Starting to serve on localhost:8001
Starting to serve on localhost:8001
```
Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)

View File

@ -1,10 +1,6 @@
---
title: "Kubernetes Container Environment"
---
{% include pagetoc.html %}
## Overview
This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode).  In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.
This cluster information makes it possible to build applications that are *cluster aware*.
@ -12,9 +8,10 @@ Additionally, the Kubernetes container environment defines a series of hooks tha
Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images) and one or more [volumes](volumes).
The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.
{% include pagetoc.html %}
## Cluster Information
There are two types of information that are available within the container environment.  There is information about the container itself, and there is information about other objects in the system.
@ -36,8 +33,8 @@ For a service named **foo** that maps to a container port named **bar**, the fol
```shell
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```
Services have dedicated IP address, and are also surfaced to the container via DNS (If [DNS addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/) is enabled).  Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
## Container Hooks
@ -79,7 +76,4 @@ Hook handlers are the way that hooks are surfaced to containers.  Containers ca
* HTTP - Executes an HTTP request against a specific endpoint on the container.
[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html

View File

@ -21,22 +21,22 @@ If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
```shell
u@pod$ COMMAND
OUTPUT
```
If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
```shell
u@node$ COMMAND
OUTPUT
```
If the command is "kubectl ARGS":
```shell
$ kubectl ARGS
OUTPUT
```
## Running commands in a Pod
For many steps here you will want to see what a `Pod` running in the cluster
@ -58,22 +58,22 @@ spec:
- "1000000"
EOF
pods/busybox-sleep
```
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:
```shell
$ kubectl exec busybox-sleep -- <COMMAND>
```
or
```shell
$ kubectl exec -ti busybox-sleep sh
/ #
```
## Setup
For the purposes of this walk-through, let's run some `Pod`s. Since you're
@ -87,8 +87,8 @@ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--replicas=3
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
```
Note that this is the same as if you had started the `ReplicationController` with
the following YAML:
@ -112,8 +112,8 @@ spec:
ports:
- containerPort: 9376
protocol: TCP
```
Confirm your `Pod`s are running:
```shell
@ -122,8 +122,8 @@ NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 12s
hostnames-bvc05 1/1 Running 0 12s
hostnames-yp2kp 1/1 Running 0 12s
```
## Does the Service exist?
The astute reader will have noticed that we did not actually create a `Service`
@ -137,37 +137,37 @@ like:
```shell
u@pod$ wget -qO- hostnames
wget: bad address 'hostname'
```
or:
```shell
u@pod$ echo $HOSTNAMES_SERVICE_HOST
```
So the first thing to check is whether that `Service` actually exists:
```shell
$ kubectl get svc hostnames
Error from server: service "hostnames" not found
```
So we have a culprit, let's create the `Service`. As before, this is for the
walk-through - you can use your own `Service`'s details here.
```shell
$ kubectl expose rc hostnames --port=80 --target-port=9376
service "hostnames" exposed
```
And read it back, just to be sure:
```shell
$ kubectl get svc hostnames
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hostnames 10.0.0.1 <none> 80/TCP run=hostnames 1h
```
As before, this is the same as if you had started the `Service` with YAML:
```yaml
@ -183,8 +183,8 @@ spec:
protocol: TCP
port: 80
targetPort: 9376
```
Now you can confirm that the `Service` exists.
## Does the Service work by DNS?
@ -198,8 +198,8 @@ Address: 10.0.0.10#53
Name: hostnames
Address: 10.0.1.175
```
If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s, try a namespace-qualified name:
@ -210,8 +210,8 @@ Address: 10.0.0.10#53
Name: hostnames.default
Address: 10.0.1.175
```
If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`. If this still fails, try a fully-qualified name:
@ -222,8 +222,8 @@ Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
Note the suffix here: "default.svc.cluster.local". The "default" is the
`Namespace` we're operating in. The "svc" denotes that this is a `Service`.
The "cluster.local" is your cluster domain.
@ -238,8 +238,8 @@ Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
If you are able to do a fully-qualified name lookup but not a relative one, you
need to check that your `kubelet` is running with the right flags.
The `--cluster-dns` flag needs to point to your DNS `Service`'s IP and the
@ -260,8 +260,8 @@ Address 1: 10.0.0.10
Name: kubernetes
Address 1: 10.0.0.1
```
If this fails, you might need to go to the kube-proxy section of this doc, or
even go back to the top of this document and start over, but instead of
debugging your own `Service`, debug DNS.
@ -280,8 +280,8 @@ hostnames-yp2kp
u@node$ curl 10.0.1.175:80
hostnames-bvc05
```
If your `Service` is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
@ -328,8 +328,8 @@ $ kubectl get service hostnames -o json
"loadBalancer": {}
}
}
```
Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
(9376) or a string "9376"? If you meant it to be a named port, do your `Pod`s
@ -350,8 +350,8 @@ NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 1h
hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
```
The "AGE" column says that these `Pod`s are about an hour old, which implies that
they are running fine and not crashing.
@ -363,8 +363,8 @@ selector of every `Service` and save the results into an `Endpoints` object.
$ kubectl get endpoints hostnames
NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```
This confirms that the control loop has found the correct `Pod`s for your
`Service`. If the `hostnames` row is blank, you should check that the
`spec.selector` field of your `Service` actually selects for `metadata.labels`
@ -385,8 +385,8 @@ hostnames-bvc05
u@pod$ wget -qO- 10.244.0.7:9376
hostnames-yp2kp
```
We expect each `Pod` in the `Endpoints` list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
`Pod`s), you should investigate what's happening there. You might find
@ -407,8 +407,8 @@ like the below:
```shell
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```
Next, confirm that it is not failing something obvious, like contacting the
master. To do this, you'll have to look at the logs. Accessing the logs
depends on your `Node` OS. On some OSes it is a file, such as
@ -434,8 +434,8 @@ I0707 17:34:54.902313 30031 proxysocket.go:130] Accepted TCP connection from 1
I0707 17:34:54.903107 30031 proxysocket.go:130] Accepted TCP connection from 10.244.3.3:42671 to 10.244.3.1:40074
I0707 17:35:46.015868 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:57493
I0707 17:35:46.017061 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:55471
```
If you see error messages about not being able to contact the master, you
should double-check your `Node` configuration and installation steps.
@ -449,8 +449,8 @@ written.
u@node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
```
There should be 2 rules for each port on your `Service` (just one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and
@ -463,8 +463,8 @@ Assuming you do see the above rules, try again to access your `Service` by IP:
```shell
u@node$ curl 10.0.1.175:80
hostnames-0uton
```
If this fails, we can try accessing the proxy directly. Look back at the
`iptables-save` output above, and extract the port number that `kube-proxy` is
using for your `Service`. In the above examples it is "48577". Now connect to
@ -473,14 +473,14 @@ that:
```shell
u@node$ curl localhost:48577
hostnames-yp2kp
```
If this still fails, look at the `kube-proxy` logs for specific lines like:
```shell
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```
If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
then look at the logs again.
@ -499,7 +499,4 @@ Contact us on
## More information
Visit [troubleshooting document](../troubleshooting) for more information.
Visit [troubleshooting document](../troubleshooting) for more information.

View File

@ -1,10 +1,10 @@
---
title: "Kubernetes User Guide: Managing Applications: Deploying continuously running applications"
---
{% include pagetoc.html %}
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start) and how to configure and launch single-run containers using pods ([Configuring containers](configuring-containers)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
{% include pagetoc.html %}
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods)) using [*Replication Controllers*](replication-controller).
@ -30,8 +30,8 @@ spec:
image: nginx
ports:
- containerPort: 80
```
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
View the [replication controller API
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_replicationcontroller)
@ -42,8 +42,8 @@ This replication controller can be created using `create`, just as with pods:
```shell
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
## Viewing replication controller status
@ -54,8 +54,8 @@ You can view the replication controller you created using `get`:
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
```
This tells you that your controller will ensure that you have two nginx replicas.
You can see those replicas using `get`, just as with pods you created directly:
@ -65,8 +65,8 @@ $ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-065jq 1/1 Running 0 51s
my-nginx-buaiq 1/1 Running 0 51s
```
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start):
@ -74,8 +74,8 @@ When you want to kill your application, delete your replication controller, as i
```shell
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
```
By default, this will also cause the pods managed by the replication controller to be deleted. If there were a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
If you try to delete the pods before deleting the replication controller, it will just replace them, as it is supposed to do.
@ -89,28 +89,25 @@ $ kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
my-nginx-afv12 0/1 Running 0 3s nginx
my-nginx-lg99z 0/1 Running 0 3s nginx
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
```shell
$ kubectl get rc my-nginx -L app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
```
More importantly, the pod template's labels are used to create a [`selector`](labels.html#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get):
```shell
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
```
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other replication controllers. The most straightforward way to ensure the latter is to create a unique label value for the replication controller, and to specify it in both the pod template's labels and in the selector.
## What's next?
[Learn about exposing applications to users and clients, and connecting tiers of your application together.](connecting-applications)
[Learn about exposing applications to users and clients, and connecting tiers of your application together.](connecting-applications)

View File

@ -12,6 +12,7 @@ Users can define deployments to create new resources, or replace existing ones
by new ones.
A typical use case is:
* Create a deployment to bring up a replication controller and pods.
* Later, update that deployment to recreate the pods (for ex: to use a new image).
@ -53,8 +54,8 @@ spec:
image: nginx:1.7.9
ports:
- containerPort: 80
```
[Download example](nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE nginx-deployment.yaml -->
@ -63,16 +64,16 @@ Run the example by downloading the example file and then running this command:
```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml
deployment "nginx-deployment" created
```
Running a get immediately will give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 8s
```
This indicates that deployment is trying to update 3 replicas. It has not
updated any one of those yet.
@ -82,8 +83,8 @@ Running a get again after a minute, will give:
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 1m
```
This indicates that deployent has created all the 3 replicas.
Running ```kubectl get rc```
and ```kubectl get pods```
@ -94,16 +95,16 @@ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
REPLICAS AGE
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 3 2m
```
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1975012602-4f2tb 1/1 Running 0 1m
deploymentrc-1975012602-j975u 1/1 Running 0 1m
deploymentrc-1975012602-uashb 1/1 Running 0 1m
```
The created RC will ensure that there are 3 nginx pods at all time.
## Updating a Deployment
@ -131,8 +132,8 @@ spec:
image: nginx:1.9.1
ports:
- containerPort: 80
```
[Download example](new-nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE new-nginx-deployment.yaml -->
@ -140,16 +141,16 @@ spec:
```shell
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
Running a get immediately will still give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 8s
```
This indicates that deployment status has not been updated yet (it is still
showing old status).
Running a get again after a minute, will give:
@ -158,8 +159,8 @@ Running a get again after a minute, will give:
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 1/3 1m
```
This indicates that deployment has updated one of the three pods that it needs
to update.
Eventually, it will get around to updating all the pods.
@ -168,9 +169,9 @@ Eventually, it will get around to updating all the pods.
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 3m
```
We can run ```kubectl get rc```
We can run `kubectl get rc`
to see that deployment updated the pods by creating a new RC
which it scaled up to 3 and scaled down the old RC to 0.
@ -179,8 +180,8 @@ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1562004724 nginx nginx:1.9.1 deployment.kubernetes.io/podTemplateHash=1562004724,app=nginx 3 5m
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 0 7m
```
Running get pods, will only show the new pods.
```shell
@ -189,8 +190,8 @@ NAME READY STATUS RESTARTS AGE
deploymentrc-1562004724-0tgk5 1/1 Running 0 9m
deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
deploymentrc-1562004724-6v702 1/1 Running 0 8m
```
Next time we want to update pods, we can just update the deployment again.
Deployment ensures that not all pods are down while they are being updated. By
@ -219,8 +220,8 @@ Events:
2m 2m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 1
1m 1m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 3
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
```
Here we see that when we first created the deployment, it created an RC and scaled it up to 3 replicas directly.
When we updated the deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
It then scaled up the new RC to 3 and when those pods were ready, it scaled down the old RC to 0.
@ -346,4 +347,4 @@ Note: This is not implemented yet.
### kubectl rolling update
[Kubectl rolling update](kubectl/kubectl_rolling-update) also updates pods and replication controllers in a similar fashion.
But deployments is declarative and is server side.
But deployments is declarative and is server side.

View File

@ -17,8 +17,8 @@ a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```shell
@ -27,16 +27,16 @@ $ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
replicationcontroller "nginx-app" created
# expose a port through with a service
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
```
With kubectl, we create a [replication controller](replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services) with a selector that matches the replication controller's selector. See the [Quick start](quick-start) for more information.
By default images are run in the background, similar to `docker run -d ...`, if you want to run things in the foreground, use:
```shell
kubectl run [-i] [--tty] --attach <name> --image=<image>
```
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`, there is no ability to control which streams are attached (`docker -a ...`).
Because we start a replication controller for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`), this is different than `docker run -it`.
@ -52,16 +52,16 @@ With docker:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 1h
```
#### docker attach
How do I attach to a process that is already running in a container? Checkout [kubectl attach](kubectl/kubectl_attach)
@ -74,8 +74,8 @@ CONTAINER ID IMAGE COMMAND CREATED
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker attach -it a9ec34d98787
...
```
With kubectl:
```shell
@ -84,9 +84,8 @@ NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl attach -it nginx-app-5jyvm
...
```
#### docker exec
How do I execute a command in a container? Checkout [kubectl exec](kubectl/kubectl_exec).
@ -99,9 +98,8 @@ CONTAINER ID IMAGE COMMAND CREATED
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker exec a9ec34d98787 cat /etc/hostname
a9ec34d98787
```
With kubectl:
```shell
@ -110,9 +108,8 @@ NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
```
What about interactive commands?
@ -120,20 +117,16 @@ With docker:
```shell
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```
With kubectl:
```shell
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
For more information see [Getting into containers](getting-into-containers).
#### docker logs
@ -147,27 +140,24 @@ With docker:
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
With kubectl:
```shell
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```shell
$ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
See [Logging](logging) for more information.
#### docker stop and docker rm
@ -184,9 +174,8 @@ $ docker stop a9ec34d98787
a9ec34d98787
$ docker rm a9ec34d98787
a9ec34d98787
```
With kubectl:
```shell
@ -201,9 +190,8 @@ NAME READY STATUS RESTARTS AGE
nginx-app-aualv 1/1 Running 0 16s
$ kubectl get po
NAME READY STATUS RESTARTS AGE
```
Notice that we don't delete the pod directly. With kubectl we want to delete the replication controller that owns the pod. If we delete the pod directly, the replication controller will recreate the pod.
#### docker login
@ -228,18 +216,16 @@ Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
With kubectl:
```shell
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"20.1", GitVersion:"v0.20.1", GitCommit:"", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32699e873ae1ca-dirty", GitCommit:"32699e873ae1caa01812e41de7eab28df4358ee4", GitTreeState:"dirty"}
```
#### docker info
How do I get miscellaneous info about my environment and configuration? Checkout [kubectl cluster-info](kubectl/kubectl_cluster-info).
@ -264,9 +250,8 @@ Total Memory: 31.32 GiB
Name: k8s-is-fun.mtv.corp.google.com
ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
```
With kubectl:
```shell
@ -277,6 +262,4 @@ KubeUI is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/s
Grafana is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
```

View File

@ -74,13 +74,12 @@ spec:
fieldRef:
fieldPath: status.podIP
restartPolicy: Never
```
[Download example](downward-api/dapi-pod.yaml)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
### Downward API volume
Using a similar syntax it's possible to expose pod information to containers using plain text files.
@ -89,11 +88,11 @@ volume type and the different items represent the files to be created. `fieldPat
Downward API volume permits to store more complex data like [`metadata.labels`](labels) and [`metadata.annotations`](annotations). Currently key/value pair set fields are saved using `key="value"` format:
```
```conf
key1="value1"
key2="value2"
```
In future, it will be possible to specify an output format option.
Downward API volumes can expose:
@ -144,14 +143,12 @@ spec:
- path: "annotations"
fieldRef:
fieldPath: metadata.annotations
```
[Download example](downward-api/volume/dapi-volume.yaml)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
Some more thorough examples:
* [environment variables](environment-guide/)
* [downward API](downward-api/)
* [downward API](downward-api/)

View File

@ -20,8 +20,8 @@ downward API.
```shell
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
@ -32,5 +32,4 @@ $ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
```
```

View File

@ -2,7 +2,7 @@
title: "Downward API volume plugin"
---
Following this example, you will create a pod with a downward API volume.
A downward API volume is a k8s volume plugin with the ability to save some pod information in a plain text file. The pod information can be for example some [metadata](..//{{page.version}}/docs/devel/api-conventions.html#metadata).
A downward API volume is a k8s volume plugin with the ability to save some pod information in a plain text file. The pod information can be for example some [metadata](/{{page.version}}/docs/devel/api-conventions/#metadata).
Supported metadata fields:
@ -13,7 +13,7 @@ Supported metadata fields:
### Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and the ```kubectl```
This example assumes you have a Kubernetes cluster installed and running, and the `kubectl`
command line tool somewhere in your path. Please see the [gettingstarted](..//{{page.version}}/docs/getting-started-guides/) for installation instructions for your platform.
### Step One: Create the pod
@ -22,8 +22,8 @@ Use the `docs/user-guide/downward-api/dapi-volume.yaml` file to create a Pod wit
```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
```
### Step Two: Examine pod/container output
The pod displays (every 5 seconds) the content of the dump files which can be executed via the usual `kubectl log` command
@ -37,8 +37,8 @@ build="two"
builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
```
### Internals
In pod's `/etc` directory one may find the file created by the plugin (system files elided):
@ -62,6 +62,6 @@ drwxrwxrwt 3 0 0 180 Aug 24 13:03 ..
-rw-r--r-- 1 0 0 115 Aug 24 13:03 annotations
-rw-r--r-- 1 0 0 53 Aug 24 13:03 labels
/ #
```
The file `labels` is stored in a temporary directory (`..2015_08_24_13_03_44259413923` in the example above) which is symlinked to by `..downwardapi`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..downwardapi` indirection.  This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..downwardapi` symlink is updated atomically using `rename(2)`.

View File

@ -27,10 +27,12 @@ The code for the containers is under
## Get everything running
kubectl create -f ./backend-rc.yaml
kubectl create -f ./backend-srv.yaml
kubectl create -f ./show-rc.yaml
kubectl create -f ./show-srv.yaml
```shell
kubectl create -f ./backend-rc.yaml
kubectl create -f ./backend-srv.yaml
kubectl create -f ./show-rc.yaml
kubectl create -f ./show-srv.yaml
```
## Query the service
@ -38,13 +40,13 @@ Use `kubectl describe service show-srv` to determine the public IP of
your service.
> Note: If your platform does not support external load balancers,
you'll need to open the proper port and direct traffic to the
internal IP shown for the frontend service with the above command
> you'll need to open the proper port and direct traffic to the
> internal IP shown for the frontend service with the above command
Run `curl <public ip>:80` to query the service. You should get
something like this back:
```
```shell
Pod Name: show-rc-xxu6i
Pod Namespace: default
USER_VAR: important information
@ -64,8 +66,8 @@ Response from backend
Backend Container
Backend Pod Name: backend-rc-6qiya
Backend Namespace: default
```
First the frontend pod's information is printed. The pod name and
[namespace](/{{page.version}}/docs/design/namespaces) are retrieved from the
[Downward API](/{{page.version}}/docs/user-guide/downward-api). Next, `USER_VAR` is the name of
@ -84,9 +86,8 @@ service. This results in a different backend pod servicing each
request as well.
## Cleanup
kubectl delete rc,service -l type=show-type
kubectl delete rc,service -l type=backend-type
```shell
kubectl delete rc,service -l type=show-type
kubectl delete rc,service -l type=backend-type
```

View File

@ -7,20 +7,20 @@ Developers can use `kubectl exec` to run commands in a container. This guide dem
Kubernetes exposes [services](services.html#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
We first create a pod and a service,
```shell
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```
wait until the pod is Running and Ready,
```shell
$ kubectl get pod
NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s
redis-master-ft9ex 1/1 Running 0 12s
```
then we can check the environment variables of the pod,
```shell
@ -28,8 +28,9 @@ $ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
...
```
We can use these environment variables in applications to find the service.
@ -39,27 +40,31 @@ It is convenient to use `kubectl exec` to check if the volumes are mounted as ex
We first create a Pod with a volume mounted at /data/redis,
```shell
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```
wait until the pod is Running and Ready,
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
storage 1/1 Running 0 1m
storage 1/1 Running 0 1m
```
we then use `kubectl exec` to verify that the volume is mounted at /data/redis,
```shell
$ kubectl exec storage ls /data
redis
redis
```
## Using kubectl exec to open a bash terminal in a pod
After all, open a terminal in a pod is the most direct way to introspect the pod. Assuming the pod/storage is still running, run
```shell
$ kubectl exec -ti storage -- bash
root@storage:/data#
root@storage:/data#
```
This gets you a terminal.

View File

@ -32,8 +32,8 @@ replicationcontroller "php-apache" created
$ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
```
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
```shell
@ -43,15 +43,15 @@ php-apache-wa3t1 1/1 Running 0 12m
$ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
```
We may now check that php-apache server works correctly by calling ``curl`` with the service's IP:
We may now check that php-apache server works correctly by calling `curl` with the service's IP:
```shell
$ curl http://146.148.24.244
OK!
```
Please notice that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g.: on GCE).
If load balancers are not supported (e.g.: on Vagrant), we can expose php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
@ -64,8 +64,8 @@ Kubernetes master is running at https://146.148.6.215
$ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/default/services/php-apache/
OK!
```
## Step Two: Create horizontal pod autoscaler
Now that the server is running, we will create a horizontal pod autoscaler for it.
@ -86,8 +86,8 @@ spec:
maxReplicas: 10
cpuUtilization:
targetPercentage: 50
```
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
@ -100,24 +100,24 @@ We will create the autoscaler by executing the following command:
```shell
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
```
Alternatively, we can create the autoscaler using [kubectl autoscale](../kubectl/kubectl_autoscale).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
```
```shell
$ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10
replicationcontroller "php-apache" autoscaled
```
We may check the current status of autoscaler by running:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
@ -128,16 +128,16 @@ We will start an infinite loop of queries to our server (please run it in a diff
```shell
$ while true; do curl http://146.148.6.244; done
```
We may examine, how CPU load was increased (the results should be visible after about 3-4 minutes) by executing:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
@ -145,14 +145,14 @@ As a result, the replication controller was resized to 7 replicas:
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
```shell
$ while true; do curl http://146.148.6.244; done
```
In the case presented here, it increased the number of serving pods to 10:
```shell
@ -163,11 +163,12 @@ php-apache ReplicationController/default/php-apache/ 50% 65% 1
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the result state:
```shell
@ -178,9 +179,6 @@ php-apache ReplicationController/default/php-apache/ 50% 0% 1
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.

View File

@ -21,6 +21,7 @@ your image.
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
- Using Google Container Registry
- Per-cluster
- automatically configured on Google Compute Engine or Google Container Engine
@ -64,6 +65,7 @@ in the `$HOME` of `root` on a kubelet, then docker will use it.
Here are the recommended steps to configuring your nodes to use a private registry. In this
example, run these on your desktop/laptop:
1. run `docker login [server]` for each set of credentials you want to use.
1. view `$HOME/.dockercfg` in an editor to ensure it contains just the credentials you want to use.
1. get a list of your nodes
@ -89,22 +91,22 @@ EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
```
If everything is working, then, after a few moments, you should see:
```shell
$ kubectl logs private-image-test-1
SUCCESS
```
If it failed, then you will see:
```shell
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
You must ensure all nodes in the cluster have the same `.dockercfg`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.dockercfg` or mount a drive that contains it.
@ -172,8 +174,8 @@ EOF
$ kubectl create -f /tmp/image-pull-secret.yaml
secrets/myregistrykey
$
```
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockercfg]: invalid value ...` it means
the data was successfully un-base64 encoded, but could not be parsed as a dockercfg file.
@ -194,8 +196,8 @@ spec:
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
```
This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets
in a [serviceAccount](service-accounts) resource.
@ -231,4 +233,4 @@ common use cases and suggested solutions.
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
- DO NOT use imagePullSecrets for this use case yet.
1. A multi-tenant cluster where each tenant needs own private registry
- NOT supported yet.
- NOT supported yet.

View File

@ -22,8 +22,8 @@ internet
|
------------
[ Services ]
```
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
```
@ -32,13 +32,14 @@ internet
[ Ingress ]
--|-----|--
[ Services ]
```
It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Users request ingress by POSTing the Ingress resource to the API server. An [Ingress controller](#ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
## Prerequisites
Before you start using the Ingress resource, there are a few things you should understand:
* The Ingress resource is not available in any Kubernetes release prior to 1.1
* You need an Ingress controller to satisfy an Ingress. Simply creating the resource will have no effect.
* On GCE/GKE there should be a [L7 cluster addon](https://releases.k8s.io/release-1.1/cluster/addons/cluster-loadbalancing/glbc/README.md#prerequisites), on other platforms you either need to write your own or [deploy an existing controller](https://github.com/kubernetes/contrib/tree/master/Ingress) as a pod.
@ -49,20 +50,20 @@ Before you start using the Ingress resource, there are a few things you should u
A minimal Ingress might look like:
```yaml
01. apiVersion: extensions/v1beta1
02. kind: Ingress
03. metadata:
04. name: test-ingress
05. spec:
06. rules:
07. - http:
08. paths:
09. - path: /testpath
10. backend:
11. serviceName: test
12. servicePort: 80
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- http:
paths:
- path: /testpath
backend:
serviceName: test
servicePort: 80
```
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](simple-yaml), [here](configuring-containers), and [here](working-with-resources).
@ -96,8 +97,8 @@ spec:
backend:
serviceName: testsvc
servicePort: 80
```
[Download example](ingress.yaml)
<!-- END MUNGE: EXAMPLE ingress.yaml -->
@ -107,19 +108,19 @@ If you create it using `kubectl -f` you should see:
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
```
Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic send to the IP is directed to the Kubernetes Service listed under `BACKEND`.
### Simple fanout
As described previously, pods within kubernetes have ips only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually a highly available loadbalancer/s. An Ingress allows you to keep the number of loadbalancers down to a minimum, for example, a setup like:
```
```shell
foo.bar.com -> 178.91.123.132 -> / foo s1:80
/ bar s2:80
```
would require an Ingress such as:
```yaml
@ -140,18 +141,17 @@ spec:
backend:
serviceName: s2
servicePort: 80
```
When you create the Ingress with `kubectl create -f`:
```
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test -
foo.bar.com
/foo s1:80
/bar s2:80
```
The Ingress controller will provision an implementation specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer under the last column of the Ingress.
@ -163,8 +163,8 @@ Name-based virtual hosts use multiple host names for the same IP address.
foo.bar.com --| |-> foo.bar.com s1:80
| 178.91.123.132 |
bar.foo.com --| |-> bar.foo.com s2:80
```
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
@ -186,8 +186,8 @@ spec:
- backend:
serviceName: s2
servicePort: 80
```
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the url of the request.
### Loadbalancing
@ -207,8 +207,8 @@ test - 178.91.123.132
foo.bar.com
/foo s1:80
$ kubectl edit ing test
```
This should pop up an editor with the existing yaml, modify it to include the new Host.
```yaml
@ -229,8 +229,8 @@ spec:
servicePort: 80
path: /foo
..
```
saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.
```shell
@ -241,8 +241,8 @@ test - 178.91.123.132
/foo s1:80
bar.baz.com
/foo s2:80
```
You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.
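For example, assuming the modified spec was saved to a file (the filename here is illustrative):

```shell
# replace the live Ingress with the version in the file
$ kubectl replace -f test-ingress.yaml
```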
## Future Work
@ -257,6 +257,7 @@ Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kuberne
## Alternatives
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
* Use [Service.Type=LoadBalancer](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-loadbalancer)
* Use [Service.Type=NodePort](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-nodeport)
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)

View File

@ -32,20 +32,20 @@ spec:
cpu: "500m"
ports:
- containerPort: 80
```
```shell
$ kubectl create -f ./my-nginx-rc.yaml
replicationcontrollers/my-nginx
```
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
```
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:
```shell
@ -81,8 +81,8 @@ Events:
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} pulled Successfully pulled image "nginx"
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
The container state is one of Waiting, Running, or Terminated. Depending on the state, additional information will be provided -- here you can see that for a container in Running state, the system tells you when the container started.
@ -107,8 +107,8 @@ my-nginx-b7zs9 0/1 Running 0 8s
my-nginx-i595c 0/1 Running 0 8s
my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
```
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
```shell
@ -134,24 +134,24 @@ Containers:
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
```
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
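For example (the controller name matches the `my-nginx` replication controller created earlier in this guide):

```shell
# reduce the desired replica count so the pending pod is no longer needed
$ kubectl scale rc my-nginx --replicas=4
```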
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use
```
```shell
kubectl get events
```
but you have to remember that events are namespaced. This means that if you're interested in events for some namespaced object (e.g. what happened with Pods in namespace `my-namespace`) you need to explicitly provide a namespace to the command:
```
```shell
kubectl get events --namespace=my-namespace
```
To see events from all namespaces, you can use the `--all-namespaces` argument.
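For example, to list events across all namespaces:

```shell
$ kubectl get events --all-namespaces
```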
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata without the label restrictions, used internally by Kubernetes system components), restart policy, ports, and volumes.
@ -216,8 +216,8 @@ status:
phase: Running
podIP: 10.244.3.4
startTime: 2015-07-10T06:56:21Z
```
## Example: debugging a down/unreachable node
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
@ -301,16 +301,15 @@ status:
machineID: ""
osImage: Debian GNU/Linux 7 (wheezy)
systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E
```
## What's next?
Learn about additional debugging tools, including:
* [Logging](logging)
* [Monitoring](monitoring)
* [Getting into containers via `exec`](getting-into-containers)
* [Connecting to containers via proxies](connecting-to-applications-proxy)
* [Connecting to containers via port forwarding](connecting-to-applications-port-forward)

View File

@ -39,8 +39,8 @@ spec:
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```
[Download example](job.yaml)
<!-- END MUNGE: EXAMPLE job.yaml -->
@ -49,8 +49,8 @@ Run the example job by downloading the example file and then running this comman
```shell
$ kubectl create -f ./job.yaml
jobs/pi
```
Check on the status of the job using this command:
```shell
@ -67,9 +67,8 @@ Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
─────────  ────────  ─────  ────  ─────────────  ──────  ───────
1m 1m 1 {job } SuccessfulCreate Created pod: pi-z548a
```
To view completed pods of a job, use `kubectl get pods --show-all`; the `--show-all` flag includes completed pods in the listing.
To list all the pods that belong to a job in a machine-readable form, you can use a command like this:
@ -78,8 +77,8 @@ To list all the pods that belong to job in a machine readable form, you can use
$ pods=$(kubectl get pods --selector=app=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
```
Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
that just gets the name from each pod in the returned list.
@ -88,8 +87,8 @@ View the standard output of one of the pods:
```shell
$ kubectl logs pi-aiw0a
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
```
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
@ -116,6 +115,7 @@ Only a [`RestartPolicy`](pod-states) equal to `Never` or `OnFailure` are allowed
The `.spec.selector` field is a label query over a set of pods.
The `spec.selector` is an object consisting of two fields:
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](replication-controller)
* `matchExpressions` - allows you to build more sophisticated selectors by specifying a key, a
list of values, and an operator that relates the key and values, as shown in the sketch below.
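For illustration, a sketch of a selector combining both fields (the label keys and values are placeholders):

```yaml
# all requirements, from matchLabels and matchExpressions, are ANDed together
selector:
  matchLabels:
    app: pi
  matchExpressions:
    - {key: tier, operator: In, values: [batch]}
    - {key: environment, operator: NotIn, values: [dev]}
```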
@ -206,4 +206,4 @@ similar functionality will be supported.
## Future work
Support for creating Jobs at specified times/dates (i.e. cron) is expected in the next minor
release.
release.

View File

@ -8,7 +8,6 @@ And we add three functions in addition to the original JSONPath syntax:
2. We can use `""` to quote text inside JSONPath expression.
3. We can use `range` operator to iterate list.
The result object is printed as its String() function.
Given the input:
@ -48,8 +47,8 @@ Given the input:
}
]
}
```
Function | Description | Example | Result
---------|--------------------|--------------------|------------------
text | the plain text | kind is {.kind} | kind is List

View File

@ -59,8 +59,9 @@ users:
- name: green-user
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
client-key: path/to/my/client/key
```
### Building your own kubeconfig file
Note that if you are deploying Kubernetes via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
@ -71,10 +72,11 @@ So, lets do a quick walk through the basics of the above file so you can easily
The above file would likely correspond to an api-server which was launched using the `--token-auth-file=tokens.csv` option, where the tokens.csv file looked something like this:
```
```conf
blue-user,blue-user,1
mister-red,mister-red,2
mister-red,mister-red,2
```
Also, since we have other users who validate using **other** mechanisms, the api-server would have probably been launched with other authentication options (there are many such options, make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
@ -130,8 +132,9 @@ $ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myself
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace the-right-prefix
$ kubectl config view
$ kubectl config view
```
produces this output
```yaml
@ -153,8 +156,9 @@ users:
- name: myself
user:
password: secret
username: admin
username: admin
```
and a kubeconfig file that looks like this
```yaml
@ -176,8 +180,9 @@ users:
- name: myself
user:
password: secret
username: admin
username: admin
```
#### Commands for the example file
```shell
@ -189,8 +194,9 @@ $ kubectl config set-credentials blue-user --token=blue-token
$ kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
$ kubectl config use-context federal-context
$ kubectl config use-context federal-context
```
### Final notes for tying it all together
So, tying this all together, a quick start to creating your own kubeconfig file:
@ -199,9 +205,4 @@ So, tying this all together, a quick start to creating your own kubeconfig file:
- Replace the snippet above with information for your cluster's api-server endpoint.
- Make sure your api-server is launched in such a way that at least one user (i.e. green-user) credentials are provided to it. You will of course have to look at api-server documentation in order to determine the current state-of-the-art in terms of providing authentication details.
- Make sure your api-server is launched in such a way that at least one user (i.e. green-user) credentials are provided to it. You will of course have to look at api-server documentation in order to determine the current state-of-the-art in terms of providing authentication details.

View File

@ -9,25 +9,26 @@ TODO: Auto-generate this file to ensure it's always in sync with any `kubectl` c
Use the following syntax to run `kubectl` commands from your terminal window:
```
```shell
kubectl [command] [TYPE] [NAME] [flags]
```
where `command`, `TYPE`, `NAME`, and `flags` are:
* `command`: Specifies the operation that you want to perform on one or more resources, for example `create`, `get`, `describe`, `delete`.
* `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-sensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:
```
```shell
$ kubectl get pod pod1
$ kubectl get pods pod1
$ kubectl get po pod1
$ kubectl get pods pod1
$ kubectl get po pod1
```
```
* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `$ kubectl get pods`.
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
* To specify resources by type and name:
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
* To specify resources by type and name:
* To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`<br/>
Example: `$ kubectl get pod example-pod1 example-pod2`
* To specify multiple resource types individually: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`<br/>
@ -35,7 +36,8 @@ $ kubectl get pod pod1
* To specify resources with one or more files: `-f file1 -f file2 -f file<#>`
[Use YAML rather than JSON](config-best-practices) since YAML tends to be more user-friendly, especially for configuration files.<br/>
Example: `$ kubectl get pod -f ./pod.yaml`
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.<br/>
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.
**Important**: Flags that you specify from the command line override default values and any corresponding environment variables.
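For example, a quick sketch of pointing `kubectl` at a specific apiserver address with the `-s` flag (the URL shown is illustrative):

```shell
$ kubectl -s http://localhost:8080 get pods
```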
If you need help, just run `kubectl help` from the terminal window.
@ -107,10 +109,10 @@ The default output format for all `kubectl` commands is the human readable plain
#### Syntax
```
```shell
kubectl [command] [TYPE] [NAME] -o=<output_format>
```
Depending on the `kubectl` operation, the following output formats are supported:
Output format | Description
@ -138,42 +140,41 @@ To define custom columns and output only the details that you want into a table,
##### Examples
* Inline:
Inline:
```shell
$ kubectl get pods <pod-name> -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
```
* Template file:
Template file:
```shell
$ kubectl get pods <pod-name> -o=custom-columns-file=template.txt
```
where the `template.txt` file contains:
```
```
NAME RSRC
metadata.name metadata.resourceVersion
```
```
The result of running either command is:
```shell
NAME RSRC
submit-queue 610995
```
### Sorting list objects
To output objects to a sorted list in your terminal window, you can add the `--sort-by` flag to a supported `kubectl` command. Sort your objects by specifying any numeric or string field with the `--sort-by` flag. To specify a field, use a [jsonpath](jsonpath) expression.
#### Syntax
```
```shell
kubectl [command] [TYPE] [NAME] --sort-by=<jsonpath_exp>
```
##### Example
To print a list of pods sorted by name, you run:
@ -184,72 +185,84 @@ To print a list of pods sorted by name, you run:
Use the following set of examples to help you familiarize yourself with running the commonly used `kubectl` operations:
* `kubectl create` - Create a resource from a file or stdin.
`kubectl create` - Create a resource from a file or stdin.
// Create a service using the definition in example-service.yaml.
$ kubectl create -f example-service.yaml
```shell
// Create a service using the definition in example-service.yaml.
$ kubectl create -f example-service.yaml
// Create a replication controller using the definition in example-controller.yaml.
$ kubectl create -f example-controller.yaml
// Create a replication controller using the definition in example-controller.yaml.
$ kubectl create -f example-controller.yaml
// Create the objects that are defined in any .yaml, .yml, or .json file within the <directory> directory.
$ kubectl create -f <directory>
// Create the objects that are defined in any .yaml, .yml, or .json file within the <directory> directory.
$ kubectl create -f <directory>
```
* `kubectl get` - List one or more resources.
`kubectl get` - List one or more resources.
// List all pods in plain-text output format.
$ kubectl get pods
```shell
// List all pods in plain-text output format.
$ kubectl get pods
// List all pods in plain-text output format and includes additional information (such as node name).
$ kubectl get pods -o wide
// List all pods in plain-text output format and includes additional information (such as node name).
$ kubectl get pods -o wide
// List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.
$ kubectl get replicationcontroller <rc-name>
// List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.
$ kubectl get replicationcontroller <rc-name>
// List all replication controllers and services together in plain-text output format.
$ kubectl get rc,services
// List all replication controllers and services together in plain-text output format.
$ kubectl get rc,services
```
* `kubectl describe` - Display detailed state of one or more resources.
`kubectl describe` - Display detailed state of one or more resources.
// Display the details of the node with name <node-name>.
$ kubectl describe nodes <node-name>
```shell
// Display the details of the node with name <node-name>.
$ kubectl describe nodes <node-name>
// Display the details of the pod with name <pod-name>.
$ kubectl describe pods/<pod-name>
// Display the details of the pod with name <pod-name>.
$ kubectl describe pods/<pod-name>
// Display the details of all the pods that are managed by the replication controller named <rc-name>.
// Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.
$ kubectl describe pods <rc-name>
// Display the details of all the pods that are managed by the replication controller named <rc-name>.
// Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.
$ kubectl describe pods <rc-name>
```
* `kubectl delete` - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
`kubectl delete` - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
// Delete a pod using the type and name specified in the pod.yaml file.
$ kubectl delete -f pod.yaml
```shell
// Delete a pod using the type and name specified in the pod.yaml file.
$ kubectl delete -f pod.yaml
// Delete all the pods and services that have the label name=<label-name>.
$ kubectl delete pods,services -l name=<label-name>
// Delete all the pods and services that have the label name=<label-name>.
$ kubectl delete pods,services -l name=<label-name>
// Delete all pods.
$ kubectl delete pods --all
// Delete all pods.
$ kubectl delete pods --all
```
* `kubectl exec` - Execute a command against a container in a pod.
`kubectl exec` - Execute a command against a container in a pod.
// Get output from running 'date' from pod <pod-name>. By default, output is from the first container.
$ kubectl exec <pod-name> date
```shell
// Get output from running 'date' from pod <pod-name>. By default, output is from the first container.
$ kubectl exec <pod-name> date
// Get output from running 'date' in container <container-name> of pod <pod-name>.
$ kubectl exec <pod-name> -c <container-name> date
// Get output from running 'date' in container <container-name> of pod <pod-name>.
$ kubectl exec <pod-name> -c <container-name> date
// Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
$ kubectl exec -ti <pod-name> /bin/bash
// Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
$ kubectl exec -ti <pod-name> /bin/bash
```
* `kubectl logs` - Print the logs for a container in a pod.
`kubectl logs` - Print the logs for a container in a pod.
// Return a snapshot of the logs from pod <pod-name>.
$ kubectl logs <pod-name>
```shell
// Return a snapshot of the logs from pod <pod-name>.
$ kubectl logs <pod-name>
// Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
$ kubectl logs -f <pod-name>
// Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
$ kubectl logs -f <pod-name>
```
## Next steps

View File

@ -11,8 +11,8 @@ Each object can have a set of key/value labels defined. Each Key must be unique
"key1" : "value1",
"key2" : "value2"
}
```
We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations).
{% include pagetoc.html %}
@ -61,8 +61,8 @@ Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _eq
```
environment = production
tier != frontend
```
The former selects all resources with key equal to `environment` and value equal to `production`.
The latter selects all resources with key equal to `tier` and value distinct from `frontend`, and all resources with no labels with the `tier` key.
One could filter for resources in `production` excluding `frontend` using the comma operator: `environment=production,tier!=frontend`
@ -77,8 +77,8 @@ environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
```
The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key.
The third example selects all resources including a label with key `partition`; no values are checked.
@ -102,26 +102,26 @@ Both label selector styles can be used to list or watch resources via a REST cli
```shell
$ kubectl get pods -l environment=production,tier=frontend
```
or using _set-based_ requirements:
```shell
$ kubectl get pods -l 'environment in (production),tier in (frontend)'
```
As already mentioned, _set-based_ requirements are more expressive. For instance, they can implement the _OR_ operator on values:
```shell
$ kubectl get pods -l 'environment in (production, qa)'
```
or restricting negative matching via _exists_ operator:
```shell
$ kubectl get pods -l 'environment,environment notin (frontend)'
```
### Set references in API objects
Some Kubernetes objects, such as [`service`s](services) and [`replicationcontroller`s](replication-controller), also use label selectors to specify sets of other resources, such as [pods](pods).
@ -136,15 +136,14 @@ Labels selectors for both objects are defined in `json` or `yaml` files using ma
"selector": {
"component" : "redis",
}
```
or
```yaml
selector:
component: redis
```
this selector (respectively in `json` or `yaml` format) is equivalent to `component=redis` or `component in (redis)`.
#### Job and other new resources
@ -158,8 +157,6 @@ selector:
matchExpressions:
- {key: tier, operator: In, values: [cache]}
- {key: environment, operator: NotIn, values: [dev]}
```
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.

View File

@ -13,16 +13,16 @@ livenessProbe:
- /tmp/health
initialDelaySeconds: 15
timeoutSeconds: 1
```
Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
Note that the container removes the `/tmp/health` file after 10 seconds,
```shell
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```
so when Kubelet executes the health check 15 seconds (defined by `initialDelaySeconds`) after the container started, the check would fail.
@ -35,8 +35,8 @@ livenessProbe:
port: 8080
initialDelaySeconds: 15
timeoutSeconds: 1
```
The Kubelet sends an HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see the server starts to respond with an error code 500 after 10 seconds, so the check fails. The Kubelet sends the probe to the container's IP address by default; this can be overridden by setting `host` as part of the httpGet probe. If the container listens on `127.0.0.1`, `host` should be specified as `127.0.0.1`. In general, if the container listens on its IP address or on all interfaces (0.0.0.0), there is no need to specify the `host` as part of the httpGet probe.
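For example, a sketch of a probe for a container that only listens on loopback (the path and port are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    host: 127.0.0.1   # probe the loopback address instead of the pod IP
  initialDelaySeconds: 15
  timeoutSeconds: 1
```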
This [guide](../walkthrough/k8s201.html#health-checking) has more information on health checks.
@ -48,8 +48,8 @@ To show the health check is actually working, first create the pods:
```shell
$ kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
$ kubectl create -f docs/user-guide/liveness/http-liveness.yaml
```
Check the status of the pods once they are created:
```shell
@ -58,8 +58,8 @@ NAME READY STATUS RESTARTS
[...]
liveness-exec 1/1 Running 0 13s
liveness-http 1/1 Running 0 13s
```
Check the status half a minute later, you will see the container restart count being incremented:
```shell
@ -68,8 +68,8 @@ NAME READY STATUS RESTARTS
[...]
liveness-exec 1/1 Running 1 36s
liveness-http 1/1 Running 1 36s
```
At the bottom of the *kubectl describe* output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
```shell
@ -79,5 +79,4 @@ Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kube
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
```
```

View File

@ -25,8 +25,9 @@ spec:
- name: count
image: ubuntu:14.04
args: [bash, -c,
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
[Download example](https://github.com/kubernetes/kubernetes/tree/master/examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
@ -34,8 +35,9 @@ we can run the pod:
```shell
$ kubectl create -f ./counter-pod.yaml
pods/counter
pods/counter
```
and then fetch the logs:
```shell
@ -46,8 +48,9 @@ $ kubectl logs counter
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
...
```
If a pod has more than one container then you need to specify which container's log files should
be fetched e.g.
@ -66,8 +69,9 @@ $ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005)
2015/06/23 04:51:03 etcdserver: compacted log at index 60006
2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006
...
...
```
## Cluster level logging to Google Cloud Logging
The getting started guide [Cluster Level Logging to Google Cloud Logging](../getting-started-guides/logging)

View File

@ -39,30 +39,30 @@ spec:
image: nginx
ports:
- containerPort: 80
```
Multiple resources can be created the same way as a single resource:
```shell
$ kubectl create -f ./nginx-app.yaml
services/my-nginx-svc
replicationcontrollers/my-nginx
```
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the replication controller(s).
`kubectl create` also accepts multiple `-f` arguments:
```shell
$ kubectl create -f ./nginx-svc.yaml -f ./nginx-rc.yaml
```
And a directory can be specified rather than or in addition to individual files:
```shell
$ kubectl create -f ./nginx/
```
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can then simply deploy all of the components of your stack en masse.
@ -72,8 +72,8 @@ A URL can also be specified as a configuration source, which is handy for deploy
```shell
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/user-guide/replication.yaml
replicationcontrollers/nginx
```
## Bulk operations in kubectl
Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
@ -82,22 +82,22 @@ Resource creation isn't the only operation that `kubectl` can perform in bulk. I
$ kubectl delete -f ./nginx/
replicationcontrollers/my-nginx
services/my-nginx-svc
```
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
```shell
$ kubectl delete replicationcontrollers/my-nginx services/my-nginx-svc
```
For larger numbers of resources, one can use labels to filter resources. The selector is specified using `-l`:
```shell
$ kubectl delete all -lapp=nginx
replicationcontrollers/my-nginx
services/my-nginx-svc
```
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
```shell
@ -106,8 +106,8 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
NAME LABELS SELECTOR IP(S) PORT(S)
my-nginx-svc app=nginx app=nginx 10.0.152.174 80/TCP
```
## Using labels effectively
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
@ -118,8 +118,8 @@ For instance, different applications would use different values for the `app` la
labels:
app: guestbook
tier: frontend
```
while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
```yaml
@ -127,8 +127,8 @@ labels:
app: guestbook
tier: backend
role: master
```
and
```yaml
@ -136,8 +136,8 @@ labels:
app: guestbook
tier: backend
role: slave
```
The labels allow us to slice and dice our resources along any dimension specified by a label:
```shell
@ -159,8 +159,8 @@ $ kubectl get pods -lapp=guestbook,role=slave
NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
guestbook-redis-slave-qgazl 1/1 Running 0 3m
```
## Canary deployments
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. For example, it is common practice to deploy a *canary* of a new application release (specified via image tag) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. For instance, a new release of the guestbook frontend might carry the following labels:
@ -170,8 +170,8 @@ labels:
app: guestbook
tier: frontend
track: canary
```
and the primary, stable release would have a different value of the `track` label, so that the sets of pods controlled by the two replication controllers would not overlap:
```yaml
@ -179,16 +179,16 @@ labels:
app: guestbook
tier: frontend
track: stable
```
The frontend service would span both sets of replicas by selecting the common subset of their labels, omitting the `track` label:
```yaml
selector:
app: guestbook
tier: frontend
```
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example:
@ -212,8 +212,8 @@ my-nginx-v4-hayza 1/1 Running 0 14m fe
my-nginx-v4-mde6m 1/1 Running 0 18m fe
my-nginx-v4-sh6m8 1/1 Running 0 19m fe
my-nginx-v4-wfof4 1/1 Running 0 16m fe
```
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:
@ -226,8 +226,8 @@ NAME READY STATUS RESTARTS AGE
my-nginx-1jgkf 1/1 Running 0 3m
my-nginx-divi2 1/1 Running 0 1h
my-nginx-o0ef1 1/1 Running 0 1h
```
## Updating your application without a service outage
At some point, you'll need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
@ -253,15 +253,15 @@ spec:
image: nginx:1.7.9
ports:
- containerPort: 80
```
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](/{{page.version}}/docs/design/simple-rolling-update):
```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Creating my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
```
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
```shell
@ -273,8 +273,8 @@ my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0
my-nginx-divi2 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e
my-nginx-o0ef1 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e
my-nginx-q6all 1/1 Running 0 8m 2d1d7a8f682934a254002b56404b813e
```
`kubectl rolling-update` reports progress as it progresses:
```shell
@ -295,8 +295,8 @@ At end of loop: my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
Update succeeded. Deleting old controller: my-nginx
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
my-nginx
```
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:
```shell
@ -306,8 +306,8 @@ Found desired replicas.Continuing update with existing controller my-nginx.
Stopping my-nginx-02ca3e87d8685813dbe1f8c164a46f02 replicas: 1 -> 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
my-nginx
```
This is one example where the immutability of containers is a huge asset.
If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:
@ -334,8 +334,8 @@ spec:
args: ['nginx','-T']
ports:
- containerPort: 80
```
and roll it out:
```shell
@ -358,8 +358,8 @@ Updating my-nginx replicas: 0, my-nginx-v4 replicas: 5
At end of loop: my-nginx replicas: 0, my-nginx-v4 replicas: 5
Update succeeded. Deleting my-nginx
my-nginx-v4
```
You can also run the [update demo](update-demo/) to see a visual representation of the rolling update process.
## In-place updates of resources
@ -376,8 +376,8 @@ metadata:
annotations:
description: my frontend running nginx
...
```
The patch is specified using json.
For more significant changes, you can `get` the resource, edit it, and then `replace` the resource with the updated version:
@ -388,8 +388,8 @@ $ vi /tmp/nginx.yaml
$ kubectl replace -f /tmp/nginx.yaml
replicationcontrollers/my-nginx-v4
$ rm $TMP
```
The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state.
## Disruptive updates
@ -400,12 +400,9 @@ In some cases, you may need to update resource fields that cannot be updated onc
$ kubectl replace -f ./nginx-rc.yaml --force
replicationcontrollers/my-nginx-v4
replicationcontrollers/my-nginx-v4
```
## What's next?
- [Learn about how to use `kubectl` for application introspection and debugging.](introspection-and-debugging)
- [Tips and tricks when working with config](config-best-practices)
- [Tips and tricks when working with config](config-best-practices)

View File

@ -19,7 +19,7 @@ In future versions of Kubernetes, objects in the same namespace will have the sa
access control policies by default.
It is not necessary to use multiple namespaces just to separate slightly different
resources, such as different versions of the same software: use [labels](#labels.md) to distinguish
resources, such as different versions of the same software: use [labels](labels) to distinguish
resources within the same namespace.
## Working with Namespaces
@ -36,9 +36,10 @@ $ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
@ -51,8 +52,8 @@ For example:
```shell
$ kubectl --namespace=<insert-namespace-name-here> run nginx --image=nginx
$ kubectl --namespace=<insert-namespace-name-here> get pods
```
### Setting the namespace preference
You can permanently save the namespace for all subsequent kubectl commands in that
@ -62,14 +63,14 @@ First get your current context:
```shell
$ export CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}')
```
Then update the default namespace:
```shell
$ kubectl config set-context $CONTEXT --namespace=<insert-namespace-name-here>
```
## Namespaces and DNS
When you create a [Service](services), it creates a corresponding [DNS entry](../admin/dns).
@ -86,6 +87,3 @@ in a some namespace. However namespace resources are not themselves in a namesp
And, low-level resources, such as [nodes](/{{page.version}}/docs/admin/node) and
persistentVolumes, are not in any namespace. Events are an exception: they may or may not
have a namespace, depending on the object the event is about.

View File

@ -23,7 +23,7 @@ You can verify that it worked by re-running `kubectl get nodes` and checking tha
Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
<pre>
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -34,11 +34,11 @@ spec:
containers:
- name: nginx
image: nginx
</pre>
```
Then add a nodeSelector like so:
<pre>
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -52,13 +52,10 @@ spec:
imagePullPolicy: IfNotPresent
<b>nodeSelector:
disktype: ssd</b>
</pre>
```
When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
### Conclusion
While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Be careful that it will match at least one node, however, because if it doesn't the pod won't be scheduled at all.
While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Be careful that it will match at least one node, however, because if it doesn't the pod won't be scheduled at all.

View File

@ -61,7 +61,6 @@ The reclaim policy for a `PersistentVolume` tells the cluster what to do with th
Each PV contains a spec and status, which is the specification and status of the volume.
```yaml
apiVersion: v1
kind: PersistentVolume
@ -76,8 +75,8 @@ apiVersion: v1
nfs:
path: /tmp
server: 172.17.0.2
```
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](../design/resources) to understand the units expected by `capacity`.
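For example, a capacity stanza in a PV spec might look like this (the size shown is arbitrary):

```yaml
# fragment of a PersistentVolume spec
spec:
  capacity:
    storage: 10Gi
```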
@ -138,8 +137,8 @@ spec:
resources:
requests:
storage: 8Gi
```
### Access Modes
Claims use the same conventions as volumes when requesting storage with specific access modes.
@ -168,5 +167,4 @@ spec:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
```
```

View File

@ -19,22 +19,20 @@ for ease of development and testing. You'll create a local `HostPath` for this
> IMPORTANT! For `HostPath` to work, you will need to run a single node cluster. Kubernetes does not
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.
```shell
# This will be nginx's webroot
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
```
PVs are created by posting them to the API server.
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/volumes/local-01.yaml
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Available
```
## Requesting storage
Users of Kubernetes request persistent storage for their pods. They don't know how the underlying cluster is provisioned.
@ -60,8 +58,8 @@ myclaim-1 map[] Bound pv0001
$ kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Bound default/myclaim-1
```
## Using your claim as a volume
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
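For illustration, a sketch of a pod that mounts the claim created above (the pod name, image, and mount path are placeholders; the claim name matches `myclaim-1`):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: frontend
      image: nginx
      volumeMounts:
      # the claim is exposed to the container like any other volume
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
```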
@ -78,8 +76,8 @@ $ kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
frontendservice 10.0.0.241 <none> 3000/TCP name=frontendhttp 1d
kubernetes 10.0.0.2 <none> 443/TCP <none> 2d
```
## Next steps
You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you
@ -88,11 +86,8 @@ need to disable SELinux (setenforce 0).
```shell
$ curl 10.0.0.241:3000
I love Kubernetes storage!
```
Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join the team on [Slack](../../troubleshooting.html#slack) and ask!
Enjoy!
Enjoy!

View File

@ -18,14 +18,14 @@ The simplest way to install is to copy or move kubectl into a dir already in PAT
$ sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
# Linux
$ sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
```
You also need to ensure it's executable:
```shell
$ sudo chmod +x /usr/local/bin/kubectl
```
If you prefer not to copy kubectl, you need to ensure the tool is in your path:
```shell
@ -34,8 +34,8 @@ export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
## Configuring kubectl
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](kubeconfig-file), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/{{page.version}}/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](sharing-clusters).
@ -47,13 +47,10 @@ Check that kubectl is properly configured by getting the cluster state:
```shell
$ kubectl cluster-info
```
If you see a url response, you are ready to go.
## What's next?
[Learn how to launch and expose your application.](quick-start)
[Learn how to launch and expose your application.](quick-start)

View File

@ -1,10 +1,10 @@
---
title: "Kubernetes User Guide: Managing Applications: Working with pods and containers in production"
---
{% include pagetoc.html %}
You've seen [how to configure and deploy pods and containers](configuring-containers), using some of the most common configuration parameters. This section dives into additional features that are especially useful for running applications in production.
{% include pagetoc.html %}
## Persistent storage
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more-persistent storage, outside the container file system, you need a [*volume*](volumes). This is especially important to stateful applications, such as key-value stores and databases.
@ -36,8 +36,8 @@ spec:
volumeMounts:
- mountPath: /redis-master-data
name: data # must match the name of the volume, above
```
`emptyDir` volumes live for the lifespan of the [pod](pods), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
In addition to the local disk storage provided by `emptyDir`, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data, and will handle details such as mounting and unmounting the devices on the nodes. See [the volumes doc](volumes) for more details.
@ -57,8 +57,8 @@ type: Opaque
data:
password: dmFsdWUtMg0K
username: dmFsdWUtMQ0K
```
As with other resources, this secret can be instantiated using `create` and can be viewed with `get`:
```shell
@ -68,8 +68,8 @@ $ kubectl get secrets
NAME TYPE DATA
default-token-v9pyz kubernetes.io/service-account-token 2
mysecret Opaque 2
```
To use the secret, you need to reference it in a pod or pod template. The `secret` volume source enables you to mount it as an in-memory directory into your containers.
```yaml
@ -101,8 +101,8 @@ spec:
name: data # must match the name of the volume, above
- mountPath: /var/run/secrets/super
name: supersecret
```
For more details, see the [secrets document](secrets), [example](secrets/) and [design doc](/{{page.version}}/docs/design/secrets).
## Authenticating with a private image registry
@ -138,8 +138,8 @@ EOF
$ kubectl create -f ./image-pull-secret.yaml
secrets/myregistrykey
```
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
@ -154,8 +154,8 @@ spec:
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
```
## Helper containers
[Pods](pods) support running multiple containers co-located together. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.
@ -193,8 +193,8 @@ spec:
volumeMounts:
- mountPath: /data
name: www-data
```
More examples can be found in our [blog article](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns) and [presentation slides](http://www.slideshare.net/Docker/slideshare-burns).
## Resource management
@ -231,8 +231,8 @@ spec:
cpu: 500m
# memory units are bytes
memory: 64Mi
```
The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying a request, the pod is guaranteed to be able to use that much of the resource when needed. See [Resource QoS](../proposals/resource-qos) for the difference between resource limits and requests.
If you're not sure how much of each resource to request, you can first launch the application without specifying resources, and use [resource usage monitoring](monitoring) to determine appropriate values.
@ -267,8 +267,8 @@ spec:
port: 80
initialDelaySeconds: 30
timeoutSeconds: 1
```
Other times, applications are only temporarily unable to serve, and will recover on their own. Typically in such cases you'd prefer not to kill the application, but don't want to send it requests, either, since the application won't respond correctly or at all. A common such scenario is loading large data or configuration files during application startup. Kubernetes provides *readiness probes* to detect and mitigate such situations. Readiness probes are configured similarly to liveness probes, just using the `readinessProbe` field. A pod with containers reporting that they are not ready will not receive traffic through Kubernetes [services](connecting-applications).
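A sketch of a readiness probe, configured like the liveness probe shown above (the endpoint is a placeholder for whatever signals readiness in your application):

```yaml
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 1
```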
For more details (e.g., how to specify command-based probes), see the [example in the walkthrough](walkthrough/k8s201.html#health-checking), the [standalone example](liveness/), and the [documentation](pod-states.html#container-probes).
@ -304,8 +304,8 @@ spec:
exec:
# SIGTERM triggers a quick exit; gracefully terminate instead
command: ["/usr/sbin/nginx","-s","quit"]
```
## Termination message
In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](kubectl/kubectl) or the [UI](ui), in addition to general [log collection](logging). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle', such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.
@ -323,8 +323,8 @@ spec:
image: "ubuntu:14.04"
command: ["/bin/sh","-c"]
args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
```
The message is recorded along with the other state of the last (i.e., most recent) termination:
```shell
@ -339,5 +339,4 @@ $ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatus
```
## What's next?
[Learn more about managing deployments.](managing-deployments)
[Learn more about managing deployments.](managing-deployments)

View File

@ -1,10 +1,10 @@
---
title: "Kubernetes User Guide: Managing Applications: Quick start"
---
{% include pagetoc.html %}
This guide will help you get oriented to Kubernetes and running your first containers on the cluster. If you are already familiar with the docker-cli, you can also check out the docker-cli to kubectl migration guide [here](docker-cli-to-kubectl).
{% include pagetoc.html %}
## Launching a simple application
Once your application is packaged into a container and pushed to an image registry, you're ready to deploy it to Kubernetes.
@ -15,8 +15,8 @@ For example, [nginx](http://wiki.nginx.org/Main) is a popular HTTP server, with
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx my-nginx nginx run=my-nginx 2
```
You can see that they are running by:
```shell
@ -24,8 +24,8 @@ $ kubectl get po
NAME READY STATUS RESTARTS AGE
my-nginx-l8n3i 1/1 Running 0 29m
my-nginx-q7jo3 1/1 Running 0 29m
```
Kubernetes will ensure that your application keeps running, by automatically restarting containers that fail, spreading containers across nodes, and recreating containers on new nodes when nodes fail.
## Exposing your application to the Internet
@ -35,16 +35,16 @@ Through integration with some cloud providers (for example Google Compute Engine
```shell
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed
```
To find the public IP address assigned to your application, execute:
```shell
$ kubectl get svc my-nginx
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.179.240.1 25.1.2.3 80/TCP run=nginx 8d
```
You may need to wait for a minute or two for the external IP address to be provisioned.
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a [firewall to allow traffic on port 80](services-firewalls).
@ -58,11 +58,8 @@ $ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
$ kubectl delete svc my-nginx
services/my-nginx
```
## What's next?
[Learn about how to configure common container parameters, such as commands and environment variables.](configuring-containers)
[Learn about how to configure common container parameters, such as commands and environment variables.](configuring-containers)

View File

@ -47,8 +47,8 @@ type: Opaque
data:
password: dmFsdWUtMg0K
username: dmFsdWUtMQ0K
```
The data field is a map. Its keys must match
[`DNS_SUBDOMAIN`](../design/identifiers), except that leading dots are also
allowed. The values are arbitrary data, encoded using base64. The values of
@ -90,8 +90,8 @@ This is an example of a pod that mounts a secret in a volume:
}]
}
}
```
Each secret you want to use needs its own `spec.volumes`.
If there are multiple containers in the pod, then each container needs its
@ -160,8 +160,8 @@ $ cat /etc/foo/username
value-1
$ cat /etc/foo/password
value-2
```
The program in a container is responsible for reading the secret(s) from the
files. Currently, if a program expects a secret to be stored in an environment
variable, then the user needs to modify the image to populate the environment
@ -216,8 +216,8 @@ To create a pod that uses an ssh key stored as a secret, we first need to create
"id-rsa.pub": "dmFsdWUtMQ0K"
}
}
```
**Note:** The serialized JSON and YAML values of secret data are encoded as
base64 strings. Newlines are not valid within these strings and must be
omitted.
@ -259,12 +259,14 @@ consumes it in a volume:
]
}
}
```
When the container's command runs, the pieces of the key will be available in:
```shell
/etc/secret-volume/id-rsa.pub
/etc/secret-volume/id-rsa
```
The container is then free to use the secret data to establish an ssh connection.
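For example, a process in the container might invoke ssh with the mounted private key; a rough sketch, where the remote user and host are placeholders:

```shell
# run inside the container; user and host are illustrative
$ ssh -i /etc/secret-volume/id-rsa user@git.example.com
```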
@ -304,8 +306,8 @@ The secrets:
}
}]
}
```
The pods:
```json
@ -380,15 +382,15 @@ The pods:
}
}]
}
```
Both containers will have the following files present on their filesystems:
```shell
/etc/secret-volume/username
/etc/secret-volume/password
```
Note how the specs for the two pods differ only in one field; this facilitates
creating pods with different capabilities from a common pod config template.
@ -415,8 +417,8 @@ one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
}
]
}
```
### Use-case: Secret visible to one container in a pod
<a name="use-case-two-containers"></a>
@ -485,7 +487,4 @@ Pod level](#use-case-two-containers).
- Currently, anyone with root on any node can read any secret from the apiserver,
by impersonating the kubelet. It is a planned feature to only send secrets to
nodes that actually require them, to restrict the impact of a root exploit on a
single node.
@ -17,8 +17,8 @@ Use the [`examples/secrets/secret.yaml`](secret.yaml) file to create a secret:
```shell
$ kubectl create -f docs/user-guide/secrets/secret.yaml
```
You can use `kubectl` to see information about the secret:
```shell
@ -37,8 +37,8 @@ Data
====
data-1: 9 bytes
data-2: 11 bytes
```
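The values in `secret.yaml` are stored base64-encoded. A sketch of producing such a value (the example secret's values also include a trailing newline, so the exact encoding may differ):

```shell
$ echo -n "value-1" | base64
dmFsdWUtMQ==
```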
## Step Two: Create a pod that consumes a secret
Pods consume secrets in volumes. Now that you have created a secret, you can create a pod that
@ -48,13 +48,12 @@ Use the [`examples/secrets/secret-pod.yaml`](secret-pod.yaml) file to create a P
```shell
$ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
```
This pod runs a binary that displays the content of one of the pieces of secret data in the secret
volume:
```shell
$ kubectl logs secret-test-pod
2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1
```
@ -38,8 +38,8 @@ You can list this and any other serviceAccount resources in the namespace with t
$ kubectl get serviceAccounts
NAME SECRETS
default 1
```
You can create additional serviceAccounts like this:
```shell
@ -51,8 +51,8 @@ metadata:
EOF
$ kubectl create -f /tmp/serviceaccount.yaml
serviceaccounts/build-robot
```
If you get a complete dump of the service account object, like this:
```shell
@ -68,8 +68,8 @@ metadata:
uid: 721ab723-13bc-11e5-aec2-42010af0021e
secrets:
- name: build-robot-token-bvbk5
```
then you will see that a token has automatically been created and is referenced by the service account.
In the future, you will be able to configure different access policies for each service account.
@ -85,8 +85,8 @@ You can clean up the service account from this example like this:
```shell
$ kubectl delete serviceaccount/build-robot
```
<!-- TODO: describe how to create a pod with no Service Account. -->
Note that if a pod does not have a `ServiceAccount` set, the `ServiceAccount` will be set to `default`.
@ -107,8 +107,8 @@ type: kubernetes.io/service-account-token
EOF
$ kubectl create -f /tmp/build-robot-secret.yaml
secrets/build-robot-secret
```
Now you can confirm that the newly built secret is populated with an API token for the "build-robot" service account.
```shell
@ -124,8 +124,8 @@ Data
====
ca.crt: 1220 bytes
token:
```
> Note that the content of `token` is elided here.
## Adding ImagePullSecrets to a service account
@ -137,8 +137,8 @@ Next, verify it has been created. For example:
$ kubectl get secrets myregistrykey
NAME TYPE DATA
myregistrykey kubernetes.io/dockercfg 1
```
Next, read/modify/write the service account for the namespace to use this secret as an imagePullSecret
```shell
@ -174,21 +174,19 @@ imagePullSecrets:
- name: myregistrykey
$ kubectl replace serviceaccount default -f ./sa.yaml
serviceaccounts/default
```
Now, any new pods created in the current namespace will have this added to their spec:
```yaml
spec:
imagePullSecrets:
- name: myregistrykey
```
## Adding Secrets to a service account.
TODO: Test and explain how to use additional non-K8s secrets with an existing service account.
TODO explain:
- The token goes to: "/var/run/secrets/kubernetes.io/serviceaccount/$WHATFILENAME"
@ -18,8 +18,8 @@ You can add a firewall with the `gcloud` command line tool:
```shell
$ gcloud compute firewall-rules create my-rule --allow=tcp:<port>
```
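For the nginx examples used throughout these guides, that might look like the sketch below (the rule name is illustrative); read the security note that follows before opening a port on all nodes:

```shell
$ gcloud compute firewall-rules create allow-nginx-80 --allow=tcp:80
```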
**Note**
There is one important security note when using firewalls on Google Compute Engine:
@ -31,6 +31,7 @@ as they listen on IP addresses that are different than the host node's external
IP address.
Consider:
* You create a Service with an external load balancer (IP Address 1.2.3.4)
and port 80
* You open the firewall for port 80 for all nodes in your cluster, so that
@ -47,7 +48,4 @@ This will be fixed in an upcoming release of Kubernetes.
### Other cloud providers
Coming soon.
@ -1,10 +1,6 @@
---
title: "Services in Kubernetes"
---
## Overview
Kubernetes [`Pods`](pods) are mortal. They are born and they die, and they
are not resurrected. [`ReplicationControllers`](replication-controller) in
particular create and destroy `Pods` dynamically (e.g. when scaling up or down
@ -34,6 +30,8 @@ that is updated whenever the set of `Pods` in a `Service` changes. For
non-native applications, Kubernetes offers a virtual-IP-based bridge to Services
which redirects to the backend `Pods`.
{% include pagetoc.html %}
## Defining a service
A `Service` in Kubernetes is a REST object, similar to a `Pod`. Like all of the
@ -61,8 +59,8 @@ port 9376 and carry a label `"app=MyApp"`.
]
}
}
```
This specification will create a new `Service` object named "my-service" which
targets TCP port 9376 on any `Pod` with the `"app=MyApp"` label. This `Service`
will also be assigned an IP address (sometimes called the "cluster IP"), which
@ -113,8 +111,8 @@ In any of these scenarios you can define a service without a selector:
]
}
}
```
Because this has no selector, the corresponding `Endpoints` object will not be
created. You can manually map the service to your own specific endpoints:
@ -136,8 +134,8 @@ created. You can manually map the service to your own specific endpoints:
}
]
}
```
NOTE: Endpoint IPs may not be loopback (127.0.0.0/8), link-local (169.254.0.0/16), or link-local multicast (224.0.0.0/24).
@ -203,8 +201,8 @@ disambiguated. For example:
]
}
}
```
## Choosing your own IP address
You can specify your own cluster IP address as part of a `Service` creation
@ -257,8 +255,8 @@ REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
```
*This does imply an ordering requirement* - any `Service` that a `Pod` wants to
access must be created before the `Pod` itself, or else the environment
variables will not be populated. DNS does not have this restriction.
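To sanity-check which service variables a running pod received, you could print its environment; a sketch where the pod name `my-pod` is illustrative:

```shell
$ kubectl exec my-pod -- env | grep REDIS_MASTER
```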
@ -389,8 +387,8 @@ information about the provisioned balancer will be published in the `Service`'s
}
}
}
```
Traffic from the external load balancer will be directed at the backend `Pods`,
though exactly how that works depends on the cloud provider. Some cloud providers allow
the `loadBalancerIP` to be specified. In those cases, the load-balancer will be created
@ -432,8 +430,8 @@ In the example below, my-service can be accessed by clients on 80.11.12.10:80 (e
]
}
}
```
## Shortcomings
We expect that using iptables and userspace proxies for VIPs will work at
@ -529,7 +527,4 @@ of which `Pods` they are actually accessing.
Service is a top-level resource in the kubernetes REST API. More details about the
API object can be found at: [Service API
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_service).
object](/{{ page.version }}/docs/api-reference/v1/definitions/#_v1_service).
@ -10,28 +10,28 @@ by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
```shell
$ cluster/kube-up.sh
```
**2. Copy `kubeconfig` to new host**
```shell
$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
```
**3. On new host, make copied `config` available to `kubectl`**
* Option A: copy to default location
```shell
$ mv /path/to/.kube/config $HOME/.kube/config
```
* Option B: copy to working directory (from which kubectl is run)
```shell
$ mv /path/to/.kube/config $PWD
```
* Option C: manually pass `kubeconfig` location to `kubectl`
```shell
@ -40,8 +40,8 @@ $ export KUBECONFIG=/path/to/.kube/config
# via commandline flag
$ kubectl ... --kubeconfig=/path/to/.kube/config
```
## Manually Generating `kubeconfig`
`kubeconfig` is generated by `kube-up` but you can generate your own
@ -71,9 +71,10 @@ $ kubectl config set-credentials $USER_NICK \
# create context entry
$ kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USER_NICK
```
Notes:
* The `--embed-certs` flag is needed to generate a standalone `kubeconfig` that will work as-is on another host.
* `--kubeconfig` is both the preferred file to load config from and the file to
@ -82,8 +83,8 @@ omitted if you first run
```shell
$ export KUBECONFIG=/path/to/standalone/.kube/config
```
* The ca_file, key_file, and cert_file referenced above are generated on the
kube master at cluster turnup. They can be found on the master under
`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.
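As a sketch of how those pieces fit together, the CA certificate could be copied off the master and embedded into a standalone cluster entry (host, paths, and server address are illustrative):

```shell
# copy the generated CA certificate off the master
$ scp kubernetes-master:/srv/kubernetes/ca.crt .
# embed it so the resulting kubeconfig works as-is on another host
$ kubectl config set-cluster $CLUSTER_NICKNAME --server=https://1.2.3.4 \
    --certificate-authority=ca.crt --embed-certs=true
```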
@ -113,9 +114,6 @@ $ export $KUBECONFIG=/path/to/other/.kube/config
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config
```
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](kubeconfig-file).
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file](kubeconfig-file).
@ -13,26 +13,26 @@ The [`kubectl run`](kubectl/kubectl_run) line below will create two [nginx](http
```shell
kubectl run my-nginx --image=nginx --replicas=2 --port=80
```
Once the pods are created, you can list them to see what is up and running:
```shell
kubectl get pods
```
You can also see the replication controller that was created:
```shell
kubectl get rc
```
To stop the two replicated containers, stop the replication controller:
```shell
kubectl stop rc my-nginx
```
### Exposing your pods to the internet.
On some platforms (for example Google Compute Engine) the kubectl command can integrate with your cloud provider to add a [public IP address](services.html#external-services) for the pods,
@ -40,20 +40,17 @@ to do this run:
```shell
kubectl expose rc my-nginx --port=80 --type=LoadBalancer
```
This should print the service that has been created, and map an external IP address to the service. Where to find this external IP address will depend on the environment you run in. For instance, for Google Compute Engine the external IP address is listed as part of the newly created service and can be retrieved by running
```shell
kubectl get services
```
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a firewall to allow traffic on port 80.
### Next: Configuration files
Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](simple-yaml)
is given in a different document.
@ -11,8 +11,8 @@ can be code reviewed, producing a more robust, reliable and archival system.
```shell
$ cd kubernetes
$ kubectl create -f ./pod.yaml
```
Where pod.yaml contains something like:
<!-- BEGIN MUNGE: EXAMPLE pod.yaml -->
@ -30,8 +30,8 @@ spec:
image: nginx
ports:
- containerPort: 80
```
[Download example](pod.yaml)
<!-- END MUNGE: EXAMPLE pod.yaml -->
@ -39,14 +39,14 @@ You can see your cluster's pods:
```shell
$ kubectl get pods
```
and delete the pod you just created:
```shell
$ kubectl delete pods nginx
```
### Running a replicated set of containers from a configuration file
To run replicated containers, you need a [Replication Controller](replication-controller).
@ -56,8 +56,8 @@ cluster.
```shell
$ cd kubernetes
$ kubectl create -f ./replication.yaml
```
Where `replication.yaml` contains:
<!-- BEGIN MUNGE: EXAMPLE replication.yaml -->
@ -82,8 +82,8 @@ spec:
image: nginx
ports:
- containerPort: 80
```
[Download example](replication.yaml)
<!-- END MUNGE: EXAMPLE replication.yaml -->
@ -91,5 +91,4 @@ To delete the replication controller (and the pods it created):
```shell
$ kubectl delete rc nginx
```
@ -11,8 +11,9 @@ If you find that you're not able to access the UI, it may be because the kube-ui
```shell
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
```
Normally, this should be taken care of automatically by the [`kube-addons.sh`](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master.
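One way to confirm that the add-on is running is to list the pods in the `kube-system` namespace; a quick sketch (the `kube-ui` name prefix is an assumption):

```shell
$ kubectl get pods --namespace=kube-system | grep kube-ui
```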
## Using the UI
@ -29,8 +29,9 @@ This example assumes that you have forked the repository and [turned up a Kubern
```shell
$ cd kubernetes
$ ./cluster/kube-up.sh
```
### Step One: Turn up the UX for the demo
You can use bash job control to run this in the background (note that you must use the default port -- 8001 -- for the following demonstration to work properly).
@ -40,8 +41,9 @@ Kubernetes repository. Otherwise you will get "404 page not found" errors as the
```shell
$ kubectl proxy --www=docs/user-guide/update-demo/local/ &
I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
```
Now visit the [demo website](http://localhost:8001/static). You won't see much there quite yet.
### Step Two: Run the replication controller
@ -49,8 +51,9 @@ Now visit the the [demo website](http://localhost:8001/static). You won't see a
Now we will turn up two replicas of an [image](../images). Both serve on internal port 80.
```shell
$ kubectl create -f docs/user-guide/update-demo/nautilus-rc.yaml
```
After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.
### Step Three: Try scaling the replication controller
@ -58,8 +61,9 @@ After pulling the image from the Docker Hub to your worker nodes (which may take
Now we will increase the number of replicas from two to four:
```shell
$ kubectl scale rc update-demo-nautilus --replicas=4
```
If you go back to the [demo website](http://localhost:8001/static/index) you should eventually see four boxes, one for each pod.
### Step Four: Update the docker image
@ -67,8 +71,9 @@ If you go back to the [demo website](http://localhost:8001/static/index) you sho
We will now update the docker image to serve a different image by doing a rolling update to a new Docker image.
```shell
$ kubectl rolling-update update-demo-nautilus --update-period=10s -f docs/user-guide/update-demo/kitten-rc.yaml
```
The rolling-update command in kubectl will do 2 things:
1. Create a new [replication controller](/{{page.version}}/docs/user-guide/replication-controller) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
@ -81,8 +86,9 @@ But if the replica count had been specified, the final replica count of the new
### Step Five: Bring down the pods
```shell
$ kubectl delete rc update-demo-kitten
```
This first stops the replication controller by turning the target number of replicas to 0 and then deletes the controller.
### Step Six: Cleanup
@ -90,8 +96,9 @@ This first stops the replication controller by turning the target number of repl
To turn down a Kubernetes cluster:
```shell
$ ./cluster/kube-down.sh
```
Kill the proxy running in the background:
After you are done running this demo make sure to kill it:
@ -99,16 +106,18 @@ After you are done running this demo make sure to kill it:
$ jobs
[1]+ Running ./kubectl proxy --www=local/ &
$ kill %1
[1]+ Terminated: 15 ./kubectl proxy --www=local/
```
### Updating the Docker images
If you want to build your own docker images, you can set `$DOCKER_HUB_USER` to your Docker user id and run the included shell script. It can take a few minutes to download/upload stuff.
```shell
$ export DOCKER_HUB_USER=my-docker-id
$ ./docs/user-guide/update-demo/build-images.sh
```
To use your custom docker image in the above examples, you will need to change the image name in `docs/user-guide/update-demo/nautilus-rc.yaml` and `docs/user-guide/update-demo/kitten-rc.yaml`.
### Image Copyright
@ -116,7 +125,4 @@ To use your custom docker image in the above examples, you will need to change t
Note that the images included here are public domain.
* [kitten](http://commons.wikimedia.org/wiki/File:Kitten-stare.jpg)
* [nautilus](http://commons.wikimedia.org/wiki/File:Nautilus_pompilius.jpg)
@ -53,6 +53,7 @@ mount each volume.
## Types of Volumes
Kubernetes supports several types of Volumes:
* `emptyDir`
* `hostPath`
* `gcePersistentDisk`
@ -144,8 +145,8 @@ Before you can use a GCE PD with a pod, you need to create it.
```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```
#### Example pod
```yaml
@ -166,8 +167,8 @@ spec:
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
```
### awsElasticBlockStore
An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS
@ -192,8 +193,8 @@ Before you can use a EBS volume with a pod, you need to create it.
```shell
aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
```
Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume
type are suitable for your use!)
@ -217,8 +218,8 @@ spec:
awsElasticBlockStore:
volumeID: aws://<availability-zone>/<volume-id>
fsType: ext4
```
(Note: the syntax of volumeID is currently awkward; #10181 fixes it)
### nfs
@ -330,8 +331,8 @@ spec:
gitRepo:
repository: "git@somewhere:me/my-git-repository.git"
revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```
### secret
A `secret` volume is used to pass sensitive information, such as passwords, to
@ -373,7 +374,4 @@ pods.
In the future, we expect that `emptyDir` and `hostPath` volumes will be able to
request a certain amount of space using a [resource](compute-resources)
specification, and to select the type of media to use, for clusters that have
several media types.
@ -38,8 +38,8 @@ spec:
image: nginx
ports:
- containerPort: 80
```
A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until the Pod is deleted.
See the [design document](../../design/README) for more details.
@ -51,28 +51,28 @@ Create a pod containing an nginx server ([pod-nginx.yaml](pod-nginx.yaml)):
```shell
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx.yaml
```
List all pods:
```shell
$ kubectl get pods
```
On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec) for details.
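For instance, assuming you already have a helper pod named `busybox` running in the cluster (a hypothetical name), you could fetch the nginx pod's page from inside the cluster:

```shell
# query the nginx pod from inside the cluster; the helper pod name is illustrative
$ kubectl exec busybox -- wget -qO- http://$(kubectl get pod nginx -o go-template={{.status.podIP}})
```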
Provided the pod IP is accessible, you should be able to access its http endpoint with curl on port 80:
```shell
$ curl http://$(kubectl get pod nginx -o go-template={{.status.podIP}})
```
Delete the pod by name:
```shell
$ kubectl delete pod nginx
```
#### Volumes
That's great for a simple static web server, but what about persistent storage?
@ -87,8 +87,8 @@ For this example we'll be creating a Redis pod with a named volume and volume mo
volumes:
- name: redis-persistent-storage
emptyDir: {}
```
2. Define a volume mount within a container definition:
```yaml
@ -97,8 +97,8 @@ volumeMounts:
- name: redis-persistent-storage
# mount path within the container
mountPath: /data/redis
```
Example Redis pod definition with a persistent storage volume ([pod-redis.yaml](pod-redis.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-redis.yaml -->
@ -118,12 +118,13 @@ spec:
volumes:
- name: redis-persistent-storage
emptyDir: {}
```
[Download example](pod-redis.yaml)
<!-- END MUNGE: EXAMPLE pod-redis.yaml -->
Notes:
- The volume mount name is a reference to a specific empty dir volume.
- The volume mount path is the path to mount the empty dir volume within the container.
@ -167,8 +168,8 @@ spec:
volumes:
- name: www-data
emptyDir: {}
```
Note that we have also added a volume here. In this case, the volume is mounted into both containers. It is marked `readOnly` in the web server's case, since it doesn't need to write to the directory.
Finally, we have also introduced an environment variable to the `git-monitor` container, which allows us to parameterize that container with the particular git repository that we want to track.
@ -177,7 +178,4 @@ Finally, we have also introduced an environment variable to the `git-monitor` co
## What's Next?
Continue on to [Kubernetes 201](k8s201) or
for a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/README)
for a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/)
@ -8,8 +8,6 @@ scaling.
In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
{% include pagetoc.html %}
## Labels
@ -20,8 +18,9 @@ To add a label, add a labels section under metadata in the pod definition:
```yaml
labels:
app: nginx
```
For example, here is the nginx pod definition with labels ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-nginx-with-label.yaml -->
@ -38,21 +37,24 @@ spec:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
[Download example](pod-nginx-with-label.yaml)
<!-- END MUNGE: EXAMPLE pod-nginx-with-label.yaml -->
Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
```shell
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
```
List all pods with the label `app=nginx`:
```shell
$ kubectl get pods -l app=nginx
```
For more information, see [Labels](../labels).
They are a core concept used by two additional Kubernetes building blocks: Replication Controllers and Services.
@ -91,8 +93,9 @@ spec:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
[Download example](replication-controller.yaml)
<!-- END MUNGE: EXAMPLE replication-controller.yaml -->
@ -101,18 +104,21 @@ spec:
Create an nginx replication controller ([replication-controller.yaml](replication-controller.yaml)):
```shell
$ kubectl create -f docs/user-guide/walkthrough/replication-controller.yaml
```
List all replication controllers:
```shell
$ kubectl get rc
```
Delete the replication controller by name:
```shell
$ kubectl delete rc nginx-controller
```
For more information, see [Replication Controllers](../replication-controller).
@ -140,8 +146,9 @@ spec:
# but this time it identifies the set of pods to load balance
# traffic to.
selector:
app: nginx
```
[Download example](service.yaml)
<!-- END MUNGE: EXAMPLE service.yaml -->
@ -150,13 +157,15 @@ spec:
Create an nginx service ([service.yaml](service.yaml)):
```shell
$ kubectl create -f docs/user-guide/walkthrough/service.yaml
```
List all services:
```shell
$ kubectl get services
```
On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec) for details.
Provided the service IP is accessible, you should be able to access its http endpoint with curl on port 80:
@ -164,13 +173,15 @@ Provided the service IP is accessible, you should be able to access its http end
```shell
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template={{.spec.clusterIP}})
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
$ curl http://${SERVICE_IP}:${SERVICE_PORT}
```
To delete the service by name:
```shell
$ kubectl delete service nginx-service
```
When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service.
For more information, see [Services](../services).
@ -208,8 +219,9 @@ go func() {
}()
lockTwo.Lock();
lockOne.Lock();
```
This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.
@ -250,8 +262,9 @@ spec:
initialDelaySeconds: 30
timeoutSeconds: 1
ports:
- containerPort: 80
```
[Download example](pod-with-http-healthcheck.yaml)
<!-- END MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->
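To see the probe in action, one approach is to create the pod and watch its restart count, which climbs each time the liveness check fails; a sketch (the pod name comes from the manifest and may differ):

```shell
$ kubectl create -f docs/user-guide/walkthrough/pod-with-http-healthcheck.yaml
# RESTARTS increments whenever the probe fails and the container is restarted
$ kubectl get pods --watch
```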
@ -34,22 +34,21 @@ $ wc -l /tmp/original.yaml /tmp/current.yaml
51 /tmp/current.yaml
9 /tmp/original.yaml
60 total
```
The resource we posted had only 9 lines, but the one we got back had 51 lines.
If you `diff -u /tmp/original.yaml /tmp/current.yaml`, you can see the fields added to the pod.
The system adds fields in several ways:
- Some fields are added synchronously with creation of the resource and some are set asynchronously.
- For example: `metadata.uid` is set synchronously. (Read more about [metadata](../devel/api-conventions.html#metadata)).
- For example, `status.hostIP` is set only after the pod has been scheduled. This often happens fast, but you may notice pods which do not have this set yet. This is called Late Initialization. (Read more about [status](../devel/api-conventions.html#spec-and-status) and [late initialization](../devel/api-conventions.html#late-initialization)).
- Some fields are set to default values. Some defaults vary by cluster and some are fixed for the API at a certain version. (Read more about [defaulting](../devel/api-conventions.html#defaulting)).
- For example, `spec.containers[0].imagePullPolicy` always defaults to `IfNotPresent` in api v1.
- For example, `spec.containers[0].resources.limits.cpu` may be defaulted to `100m` on some clusters, to some other value on others, and not defaulted at all on others.
The API will generally not modify fields that you have set; it just sets ones which were unspecified.
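For example, one of the defaulted fields mentioned above can be pulled out directly with a go-template; a sketch, where the pod name is illustrative:

```shell
$ kubectl get pod original -o go-template='{{(index .spec.containers 0).imagePullPolicy}}'
IfNotPresent
```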
## <a name="finding_schema_docs"></a>Finding Documentation on Resource Fields
You can browse auto-generated API documentation at the [project website](http://kubernetes.io/v1.1/api-ref) or on [github](https://releases.k8s.io/release-1.1/docs/api-reference).