1.2 additions for getting-started-guides/ and new non-Markdown files for user-guides

pull/61/head
John Mulhausen 2016-03-06 17:55:12 +00:00
parent 97c728d343
commit a6f6fd01cd
78 changed files with 2314 additions and 975 deletions

View File

@ -80,7 +80,11 @@ toc:
- title: Fedora With Calico Networking
path: /docs/getting-started-guides/fedora/fedora-calico/
- title: rkt
path: /docs/getting-started-guides/rkt/
section:
- title: Running Kubernetes on rkt
path: /docs/getting-started-guides/rkt/
- title: Notes on Different UX with rkt Container Runtime
path: /docs/getting-started-guides/rkt/notes/
- title: Kubernetes on Mesos
path: /docs/getting-started-guides/mesos/
- title: Kubernetes on Mesos on Docker

View File

@ -1,8 +1,9 @@
---
---
* TOC
{:toc}
### Introduction
Garbage collection is a helpful function of the kubelet that cleans up unreferenced images and unused containers. The kubelet performs garbage collection on containers every minute and on images every five minutes.
External garbage collection tools are not recommended, as they can break kubelet behavior by removing containers that are expected to exist.
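As an illustrative sketch of how this behavior is tuned, the kubelet exposes garbage-collection flags; the threshold values below are assumptions for illustration, so verify the flag names and defaults with `kubelet --help` on your version:

```shell
# Illustrative kubelet garbage-collection tuning (values are assumptions):
#   --image-gc-high-threshold: disk usage percent that triggers image GC
#   --image-gc-low-threshold:  disk usage percent that image GC frees down to
#   --minimum-container-ttl-duration: minimum age before a dead container is collected
#   --maximum-dead-containers:        global cap on dead containers to retain
kubelet --image-gc-high-threshold=90 --image-gc-low-threshold=80 \
  --minimum-container-ttl-duration=1m --maximum-dead-containers=100
```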

View File

@ -8,7 +8,7 @@
1. You need an AWS account. Visit [http://aws.amazon.com](http://aws.amazon.com) to get started
2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli)
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles) with EC2 full access.
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
NOTE: This script uses the 'default' AWS profile by default.
You may explicitly set the AWS profile to use via the `AWS_DEFAULT_PROFILE` environment variable:
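For example, with a hypothetical profile name:

```shell
export AWS_DEFAULT_PROFILE=myawsprofile
```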
@ -36,28 +36,66 @@ This process takes about 5 to 10 minutes. Once the cluster is up, the IP address
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written to `~/.kube/config`; you will need them to use the CLI or HTTP Basic Auth.
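For instance, once the script finishes you can inspect those generated entries with a standard `kubectl` command:

```shell
# Show the clusters, users, and contexts recorded in ~/.kube/config
kubectl config view
```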
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh) to change this behavior as follows:
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with EC2 instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/release-1.2/cluster/aws/config-default.sh) to change this behavior as follows:
```shell
export KUBE_AWS_ZONE=eu-west-1c
export NUM_MINIONS=2
export MINION_SIZE=m3.medium
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
export INSTANCE_PREFIX=k8s
...
```
It will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If you don't specify master and node sizes, the scripts will attempt to guess
the correct size of the master and worker nodes based on `${NUM_NODES}` (an explicit override example follows the list below). In
version 1.2 these defaults are:
* For the master, clusters of fewer than 150 nodes will use an `m3.medium`; clusters of 150 or more nodes will use an `m3.large`.
* For worker nodes, clusters of fewer than 50 nodes will use a `t2.micro`, clusters of between 50 and 150 nodes will use a `t2.small`, and clusters of more than 150 nodes will use a `t2.medium`.
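To bypass the guessing and set the sizes yourself, export them before running the script; the values here are purely illustrative:

```shell
# Illustrative explicit sizing for a large cluster
export NUM_NODES=160
export MASTER_SIZE=m3.large
export NODE_SIZE=t2.medium
```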
WARNING: beware that `t2` instances receive a limited number of CPU credits per hour and might not be suitable for clusters where the CPU is used
consistently. As a rough estimate, consider 15 pods/node the absolute limit a `t2.large` instance can handle before it starts steadily exhausting its
CPU credits, although this number depends heavily on the usage.
In prior versions of Kubernetes, we defaulted the master node to a t2-class
instance, but found that this sometimes gave hard-to-diagnose problems when the
master ran out of memory or CPU credits. If you are running a test cluster
and want to save money, you can specify `export MASTER_SIZE=t2.micro`, but if
your master pauses, do check the CPU credits in the AWS console.
For production usage, we recommend at least `export MASTER_SIZE=m3.medium` and
`export NODE_SIZE=m3.medium`. And once you get above a handful of nodes, be
aware that one m3.large instance has more storage than two m3.medium instances,
for the same price.
We generally recommend the m3 instances over the m4 instances, because the m3
instances include local instance storage. Historically local instance storage
has been more reliable than AWS EBS, and performance should be more consistent.
The ephemeral nature of this storage is a match for ephemeral container
workloads also!
If you use an m4 instance, or another instance type which does not have local
instance storage, you may want to increase the `NODE_ROOT_DISK_SIZE` value,
although the default value of 32 (GB) is probably sufficient for the smaller
instance types in the m4 family.
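For example (the size is in GB; the value shown is just an assumption for illustration):

```shell
export NODE_ROOT_DISK_SIZE=64
```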
The script will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If these already exist, make sure you want them to be used here.
NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` key to point to your private key.
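For example, with a hypothetical private key path:

```shell
export AWS_SSH_KEY=$HOME/.ssh/kubernetes_key
```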
### Alternatives
A contributed [example](/docs/getting-started-guides/coreos/coreos_multinode_cluster) allows you to set up a Kubernetes cluster based on [CoreOS](http://www.coreos.com), using
EC2 with user data (cloud-config).
CoreOS maintains [a CLI tool](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html), `kube-aws`, that will create and manage a Kubernetes cluster based on [CoreOS](http://www.coreos.com), using AWS tools: EC2, CloudFormation and Autoscaling.
## Getting started with your cluster
@ -71,6 +109,7 @@ Next, add the appropriate binary folder to your `PATH` to access kubectl:
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```

View File

@ -27,36 +27,19 @@ centos-minion = 192.168.121.65
**Prepare the hosts:**
* Create virt7-testing repo on all hosts - centos-{master,minion} with the following information.
* Create a virt7-docker-common-release repo on all hosts - centos-{master,minion} with the following information.
```conf
[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
```shell
yum -y install --enablerepo=virt7-testing kubernetes
```
* Note * Using etcd-0.4.6-7 (this is a temporary update in the documentation)
If you do not get etcd-0.4.6-7 installed with the virt7-testing repo,
In the current virt7-testing repo, the etcd package is updated, which causes a service failure. To avoid this,
```shell
yum erase etcd
```
This will uninstall the currently available etcd package
```shell
yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
yum -y install --enablerepo=virt7-testing kubernetes
yum -y install --enablerepo=virt7-docker-common-release kubernetes
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
@ -66,11 +49,11 @@ echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
* Edit `/etc/kubernetes/config` which will be the same on all hosts to contain:
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:4001"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
@ -127,7 +110,7 @@ done
***We need to configure the kubelet and start the kubelet and proxy***
* Edit `/etc/kubernetes/kubelet` to appear as such:
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
# The address for the info server to serve on

View File

@ -3,7 +3,7 @@
CloudStack is software to build public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. [Exoscale](http://exoscale.ch) for instance makes a [CoreOS](http://coreos.com) template available, therefore instructions to deploy Kubernetes on CoreOS can be used. CloudStack also has a vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes.
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates) this template in their cloud before proceeding with these Kubernetes deployment instructions.
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
This is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](/docs/getting-started-guides/coreos/coreos_multinode_cluster).

View File

@ -8,21 +8,27 @@ There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com
### Official CoreOS Guides
These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests).
These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).
[**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant)
[**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)
Guide and CLI tool for setting up a multi-node cluster on AWS. CloudFormation is used to set up a master and multiple workers in auto-scaling groups.
<hr/>
[**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)
Guide to setting up a multi-node cluster on Vagrant. The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane.
<hr/>
[**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single)
[**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
The quickest way to set up a Kubernetes development environment locally. As easy as `git clone`, `vagrant up` and configuring `kubectl`.
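As a rough sketch of that flow (the repository URL and directory are assumptions based on the CoreOS documentation of this era; verify against the linked guide):

```shell
# Assumed repository layout for the CoreOS single-node Vagrant guide
git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/single-node
vagrant up
```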
<hr/>
[**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started)
[**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html)
A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS. Repeat the master or worker steps to configure more machines of that role.
@ -54,6 +60,12 @@ Configure a single master, multi-worker cluster locally, running on your choice
<hr/>
[**Single-node cluster using a small OS X App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)
Guide to running a solo cluster (master + worker) controlled by an OS X menubar application. Uses xhyve + CoreOS under the hood.
<hr/>
[**Multi-node cluster with Vagrant and fleet units using a small OS X App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
Guide to running a single master, multi-worker cluster controlled by an OS X menubar application. Uses Vagrant under the hood.

View File

@ -1,22 +1,22 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v8
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v8
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    k8s-app: kube-dns
    version: v8
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v8
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
@ -74,6 +74,13 @@ spec:
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:

View File

@ -10,7 +10,7 @@ metadata:
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.1.0.3
  clusterIP: 10.16.0.3
  ports:
  - name: dns
    port: 53

View File

@ -25,13 +25,6 @@ coreos:
ExecStart=/bin/sh -x -c \
'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done'
- name: docker.service
drop-ins:
- name: 50-weave-kubernetes.conf
content: |
[Service]
Environment=DOCKER_OPTS='--bridge="weave" -r="false"'
- name: weave-network.target
enable: true
content: |
@ -92,46 +85,46 @@ coreos:
content: |
[Unit]
After=network-online.target
After=docker.service
Before=weave.service
Before=weave-helper.service
Before=docker.service
Description=Install Weave
Documentation=http://docs.weave.works/
Requires=network-online.target
[Service]
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
Type=oneshot
RemainAfterExit=yes
TimeoutStartSec=0
ExecStartPre=/bin/mkdir -p /opt/bin/
ExecStartPre=/opt/bin/curl-retry.sh \
--silent \
--location \
https://github.com/weaveworks/weave/releases/download/latest_release/weave \
git.io/weave \
--output /opt/bin/weave
ExecStartPre=/opt/bin/curl-retry.sh \
--silent \
--location \
https://raw.github.com/errordeveloper/weave-demos/master/poseidon/weave-helper \
--output /opt/bin/weave-helper
ExecStartPre=/usr/bin/chmod +x /opt/bin/weave
ExecStartPre=/usr/bin/chmod +x /opt/bin/weave-helper
ExecStart=/bin/echo Weave Installed
ExecStart=/opt/bin/weave setup
[Install]
WantedBy=weave-network.target
WantedBy=weave.service
- name: weave-helper.service
- name: weaveproxy.service
enable: true
content: |
[Unit]
After=install-weave.service
After=docker.service
Description=Weave Network Router
Description=Weave proxy for Docker API
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
[Service]
ExecStart=/opt/bin/weave-helper
Restart=always
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
ExecStartPre=/opt/bin/weave launch-proxy --rewrite-inspect --without-dns
ExecStart=/usr/bin/docker attach weaveproxy
Restart=on-failure
ExecStop=/opt/bin/weave stop-proxy
[Install]
WantedBy=weave-network.target
@ -147,35 +140,35 @@ coreos:
Requires=install-weave.service
[Service]
TimeoutStartSec=0
EnvironmentFile=/etc/weave.%H.env
ExecStartPre=/opt/bin/weave setup
ExecStartPre=/opt/bin/weave launch $WEAVE_PEERS
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
ExecStartPre=/opt/bin/weave launch-router $WEAVE_PEERS
ExecStart=/usr/bin/docker attach weave
Restart=on-failure
Restart=always
ExecStop=/opt/bin/weave stop
ExecStop=/opt/bin/weave stop-router
[Install]
WantedBy=weave-network.target
- name: weave-create-bridge.service
- name: weave-expose.service
enable: true
content: |
[Unit]
After=network.target
After=install-weave.service
Before=weave.service
Before=docker.service
Requires=network.target
After=weave.service
After=docker.service
Documentation=http://docs.weave.works/
Requires=docker.service
Requires=install-weave.service
Requires=weave.service
[Service]
Type=oneshot
EnvironmentFile=/etc/weave.%H.env
ExecStart=/opt/bin/weave --local create-bridge
ExecStart=/usr/bin/ip addr add dev weave $BRIDGE_ADDRESS_CIDR
ExecStart=/usr/bin/ip route add $BREAKOUT_ROUTE dev weave scope link
ExecStart=/usr/bin/ip route add 224.0.0.0/4 dev weave
RemainAfterExit=yes
TimeoutStartSec=0
EnvironmentFile=-/etc/weave.%H.env
EnvironmentFile=-/etc/weave.env
ExecStart=/opt/bin/weave expose
ExecStop=/opt/bin/weave hide
[Install]
WantedBy=multi-user.target
WantedBy=weave-network.target
- name: install-kubernetes.service
@ -191,7 +184,7 @@ coreos:
Documentation=http://kubernetes.io/
Requires=network-online.target
[Service]
Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.1/kubernetes.tar.gz
Environment=KUBE_RELEASE_TARBALL=https://github.com/kubernetes/kubernetes/releases/download/v1.1.2/kubernetes.tar.gz
ExecStartPre=/bin/mkdir -p /opt/
ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz
ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/
@ -222,11 +215,13 @@ coreos:
ConditionHost=kube-00
[Service]
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
--address=0.0.0.0 \
--insecure-bind-address=0.0.0.0 \
--advertise-address=$public_ipv4 \
--port=8080 \
$ETCD_SERVERS \
--service-cluster-ip-range=10.1.0.0/16 \
--logtostderr=true --v=3
--service-cluster-ip-range=10.16.0.0/12 \
--cloud-provider=vagrant \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
@ -286,12 +281,13 @@ coreos:
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/
ExecStart=/opt/kubernetes/server/bin/kubelet \
--docker-endpoint=unix:/var/run/weave/weave.sock \
--address=0.0.0.0 \
--port=10250 \
--hostname-override=%H \
--api-servers=http://kube-00:8080 \
--logtostderr=true \
--cluster-dns=10.1.0.3 \
--cluster-dns=10.16.0.3 \
--cluster-domain=kube.local \
--config=/etc/kubernetes/manifests/
Restart=always
@ -333,7 +329,7 @@ coreos:
[Service]
Type=oneshot
RemainAfterExit=no
ExecStart=/opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/
ExecStart=/bin/bash -c 'until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done'
SuccessExitStatus=1
[Install]
WantedBy=kubernetes-master.target

View File

@ -1,13 +1,11 @@
---
---
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with [Weave](http://weave.works),
which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box
implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes
master and etcd nodes, and show how to scale the cluster with ease.
* TOC
{:toc}
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
### Prerequisites
@ -37,9 +35,14 @@ Now, all you need to do is:
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes: 1 kubernetes master and 2 kubernetes nodes.
The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to
ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more (and bigger) VMs later.
If you need to pass Azure-specific options to the creation script, you can do this via additional environment variables, e.g.:
```shell
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
# or
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
```
![VMs in Azure](/images/docs/initial_cluster.png)

View File

@ -13,9 +13,9 @@ var inspect = require('util').inspect;
var util = require('./util.js');
var coreos_image_ids = {
'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-717.3.0',
'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-723.3.0', // untested
'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-745.1.0' // untested
'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-835.12.0', // untested
'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-899.6.0',
'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-942.0.0' // untested
};
var conf = {};
@ -107,8 +107,11 @@ var create_ssh_key = function (prefix) {
};
openssl.exec('req', opts, function (err, buffer) {
if (err) console.log(clr.red(err));
fs.chmod(opts.keyout, '0600', function (err) {
openssl.exec('rsa', { in: opts.keyout, out: opts.keyout }, function (err, buffer) {
if (err) console.log(clr.red(err));
fs.chmod(opts.keyout, '0600', function (err) {
if (err) console.log(clr.red(err));
});
});
});
return {
@ -156,11 +159,18 @@ var get_vm_size = function () {
}
}
var get_subscription = function () {
if (process.env['AZ_SUBSCRIPTION']) {
return '--subscription=' + process.env['AZ_SUBSCRIPTION'];
}
}
exports.queue_default_network = function () {
task_queue.push([
'network', 'vnet', 'create',
get_location(),
'--address-space=172.16.0.0',
get_subscription(),
conf.resources['vnet'],
]);
}
@ -172,11 +182,12 @@ exports.queue_storage_if_needed = function() {
'storage', 'account', 'create',
'--type=LRS',
get_location(),
get_subscription(),
conf.resources['storage_account'],
]);
process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account'];
} else {
// Preserve it for resizing, so we don't create a new one by accedent,
// Preserve it for resizing, so we don't create a new one by accident,
// when the environment variable is unset
conf.resources['storage_account'] = process.env['AZURE_STORAGE_ACCOUNT'];
}
@ -192,6 +203,7 @@ exports.queue_machines = function (name_prefix, coreos_update_channel, cloud_con
'--virtual-network-name=' + conf.resources['vnet'],
'--no-ssh-password',
'--ssh-cert=' + conf.resources['ssh_key']['pem'],
get_subscription(),
];
var cloud_config = cloud_config_creator(x, conf);
@ -216,6 +228,9 @@ exports.queue_machines = function (name_prefix, coreos_update_channel, cloud_con
if (conf.resizing && n < conf.old_size) {
return [];
} else {
if (process.env['AZ_VM_COREOS_CHANNEL']) {
coreos_update_channel = process.env['AZ_VM_COREOS_CHANNEL']
}
return vm_create_base_args.concat(next_host(n), [
coreos_image_ids[coreos_update_channel], 'core',
]);
@ -246,11 +261,11 @@ exports.destroy_cluster = function (state_file) {
conf.destroying = true;
task_queue = _.map(conf.hosts, function (host) {
return ['vm', 'delete', '--quiet', '--blob-delete', host.name];
return ['vm', 'delete', '--quiet', '--blob-delete', host.name, get_subscription()];
});
task_queue.push(['network', 'vnet', 'delete', '--quiet', conf.resources['vnet']]);
task_queue.push(['storage', 'account', 'delete', '--quiet', conf.resources['storage_account']]);
task_queue.push(['network', 'vnet', 'delete', '--quiet', conf.resources['vnet'], get_subscription()]);
task_queue.push(['storage', 'account', 'delete', '--quiet', conf.resources['storage_account'], get_subscription()]);
exports.run_task_queue();
};

View File

@ -9,7 +9,7 @@
"author": "Ilya Dmitrichenko <errordeveloper@gmail.com>",
"license": "Apache 2.0",
"dependencies": {
"azure-cli": "^0.9.5",
"azure-cli": "^0.9.9",
"colors": "^1.0.3",
"js-yaml": "^3.2.5",
"openssl-wrapper": "^0.2.1",

View File

@ -1,120 +1,197 @@
---
---
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
This guide explains how to deploy a bare-metal Kubernetes cluster on CoreOS using [Calico networking](http://www.projectcalico.org).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.
- Deploy a Kubernetes master node on CoreOS using cloud-config
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
## Prerequisites and Assumptions
## Prerequisites
1. At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows
- 1 Kubernetes Master
- 2 Kubernetes Nodes
2. Your nodes should have IP connectivity.
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
- 1 Kubernetes Master
- 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
## Cloud-config
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
For ease of distribution, the cloud-config files required for this demonstration can be found on [GitHub](https://github.com/projectcalico/calico-kubernetes-coreos-demo).
We'll use two cloud-config files:
- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node
This repo includes two cloud config files:
- `master-config.yaml`: Cloud-config for the Kubernetes master
- `node-config.yaml`: Cloud-config for each Kubernetes compute host
In the next few steps you will be asked to configure these files and host them on an HTTP server where your cluster can access them.
## Building Kubernetes
To get the Kubernetes source, clone the GitHub repo, and build the binaries.
```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
./build/release.sh
```
## Download CoreOS
Once the binaries are built, host the entire `<kubernetes>/_output/dockerized/bin/<OS>/<ARCHITECTURE>/` folder on an accessible HTTP server so they can be accessed by the cloud-config. You'll point your cloud-config files at this HTTP server later.
## Download CoreOS
Let's download the CoreOS bootable ISO. We'll use this image to boot and install CoreOS on each server.
```shell
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso
```
You can also download the ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
Once you've downloaded the image, use it to boot your Kubernetes Master server. Once booted, you should be automatically logged in as the `core` user.
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
Let's get the master-config.yaml and fill in the necessary variables. Run the following commands on your HTTP server to get the cloud-config files.
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
3. Replace the following variables in the `master-config.yaml` file.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
### Configure TLS
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
2. Send the three files to your master host (using `scp` for example).
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set Permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
4. Restart the kubelet to pick up the changes:
```shell
sudo systemctl restart kubelet
```
## Configure the compute nodes
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
4. Replace the following placeholders with the contents of their respective files.
- `<CA_CERT>`: Complete contents of `ca.pem`
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
>
> ```
> - path: /etc/kubernetes/ssl/ca.pem
>   owner: core
>   permissions: 0644
>   content: |
>     <CA_CERT>
> ```
>
> should look like this once the certificate is in place:
>
> ```
> - path: /etc/kubernetes/ssl/ca.pem
>   owner: core
>   permissions: 0644
>   content: |
>     -----BEGIN CERTIFICATE-----
>     MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
>     ...<snip>...
>     QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
>     -----END CERTIFICATE-----
> ```
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
## Configure Kubeconfig
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
```shell
git clone https://github.com/Metaswitch/calico-kubernetes-demo.git
cd calico-kubernetes-demo/coreos
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
You'll need to replace the following variables in the `master-config.yaml` file to match your deployment.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_LOC>`: The address used to get the kubernetes binaries over HTTP.
> **Note:** The config will prepend `"http://"` and append `"/(kubernetes | kubectl | ...)"` to your `KUBERNETES_LOC` variable, so format it accordingly.
Host the modified `master-config.yaml` file and pull it on to your Kubernetes Master server.
The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS to disk and configure the install using cloud-config. The following command will download and install stable CoreOS, using the master-config.yaml file for configuration.
Check your work with `kubectl get nodes`.
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
```shell
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
Once complete, eject the bootable ISO and restart the server. When it comes back up, you should have SSH access as the `core` user using the public key provided in the master-config.yaml file.
## Configure the compute hosts
> The following steps will set up a Kubernetes node for use as a compute host. This demo uses two compute hosts, so you should run the following steps on each.
First, boot up your node using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
Let's modify the `node-config.yaml` cloud-config file on your HTTP server. Make a copy for this node, and fill in the necessary variables.
You'll need to replace the following variables in the `node-config.yaml` file to match your deployment.
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
- `<KUBERNETES_LOC>`: The address to use in order to get the kubernetes binaries over HTTP.
- `<DOCKER_BRIDGE_IP>`: The IP and subnet to use for pods on this node. By default, this should fall within the 192.168.0.0/16 subnet.
> Note: The DOCKER_BRIDGE_IP is the range used by this Kubernetes node to assign IP addresses to pods on this node. This subnet must not overlap with the subnets assigned to the other Kubernetes nodes in your cluster. Calico expects each DOCKER_BRIDGE_IP subnet to fall within 192.168.0.0/16 by default (e.g. 192.168.1.1/24 for node 1), but if you'd like to use pod IPs within a different subnet, simply run `calicoctl pool add <your_subnet>` and select DOCKER_BRIDGE_IP accordingly.
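For instance, that pool command might look like this (the subnet is purely illustrative):

```shell
# Hypothetical alternative pod subnet; run where calicoctl is configured
calicoctl pool add 172.18.0.0/16
```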
Host the modified `node-config.yaml` file and pull it on to your Kubernetes node.
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
```shell
wget http://<http_server_ip>/node-config.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
Install and configure CoreOS on the node using the following command.
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
Once complete, restart the server. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured. Once fully configured, you can check that the node is running with the following command on the Kubernetes master.
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```shell
/home/core/kubectl get nodes
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
## Testing the Cluster
You should now have a functional bare-metal Kubernetes cluster with one master and two compute hosts.
Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to test out your new cluster!
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).

View File

@ -1,12 +1,12 @@
---
---
Deploy a CoreOS running Kubernetes environment. This particular guide is made to help those in an OFFLINE system,
whether for testing a POC before the real deal, or you are restricted to be totally offline for your applications.
Deploy a CoreOS running Kubernetes environment. This particular guide is made to help those in an OFFLINE system, whether for testing a POC before the real deal, or because you are restricted to being totally offline for your applications.
* TOC
{:toc}
## Prerequisites
1. Installed *CentOS 6* for PXE server
@ -31,9 +31,10 @@ whether for testing a POC before the real deal, or you are restricted to be tota
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
## Setup PXELINUX CentOS
To setup CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server). This section is the abbreviated version.
To setup CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
1. Install packages needed on CentOS
@ -690,5 +691,5 @@ kubectl get nodes
Kill all pods:
```shell
for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done
```

View File

@ -31,6 +31,8 @@ coreos:
fleet:
metadata: "role=master"
units:
- name: etcd2.service
command: start
- name: generate-serviceaccount-key.service
command: start
content: |
@ -76,14 +78,14 @@ coreos:
content: |
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
After=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.0.3/bin/linux/amd64/kube-apiserver
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStartPre=/opt/bin/wupiao 127.0.0.1:2379/v2/machines
ExecStart=/opt/bin/kube-apiserver \
@ -107,12 +109,12 @@ coreos:
content: |
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Documentation=https://github.com/kubernetes/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.0.3/bin/linux/amd64/kube-controller-manager
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-controller-manager
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
ExecStart=/opt/bin/kube-controller-manager \
--service-account-private-key-file=/opt/bin/kube-serviceaccount.key \
@ -125,12 +127,12 @@ coreos:
content: |
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Documentation=https://github.com/kubernetes/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.0.3/bin/linux/amd64/kube-scheduler
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-scheduler
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
Restart=always

View File

@ -18,17 +18,12 @@ coreos:
fleet:
metadata: "role=node"
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
- name: flanneld.service
command: start
drop-ins:
- name: 50-network-config.conf
content: |
[Unit]
Requires=etcd2.service
[Service]
ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
- name: docker.service
command: start
- name: setup-network-environment.service
@ -52,12 +47,12 @@ coreos:
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.0.3/bin/linux/amd64/kube-proxy
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
# wait for kubernetes master to be up and ready
ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
@ -71,13 +66,13 @@ coreos:
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.0.3/bin/linux/amd64/kubelet
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
# wait for kubernetes master to be up and ready
ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080

View File

@ -125,7 +125,7 @@ nova list
#### Get a Suitable CoreOS Image
You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack)
You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html)
Once you download that, upload it to glance. An example is shown below:
```shell
@ -159,8 +159,7 @@ kube-master
`<my_key>` is the keypair name that you already generated to access the instance.
`<flavor_id>` is the flavor ID you use to size the instance. Run `nova flavor-list`
to get the IDs. 3 on the system this was tested with gives the m1.large size.
`<flavor_id>` is the flavor ID you use to size the instance. Run `nova flavor-list` to get the IDs; on the system this was tested with, ID 3 gives the m1.large size.
The important part is to ensure you have files/master.yml, as this is what will do all the post-boot configuration. This path is relative, so we are assuming in this example that you are running the nova command in a directory that has a subdirectory called files containing the master.yml file. Absolute paths also work.
@ -176,15 +175,11 @@ Get an IP address that's free and run:
nova floating-ip-associate kube-master <ip address>
```
...where `<ip address>` is the IP address that was available from the `nova floating-ip-list`
command.
where `<ip address>` is the IP address that was available from the `nova floating-ip-list` command.
#### Provision Worker Nodes
Edit `node.yaml`
and replace all instances of ```<master-private-ip>```
with the private IP address of the master node. You can get this by running ```nova show kube-master```
assuming you named your instance kube master. This is not the floating IP address you just assigned it.
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node. You can get this by running `nova show kube-master`, assuming you named your instance kube-master. This is not the floating IP address you just assigned.
```shell
nova boot \

View File

@ -105,7 +105,7 @@ $ dcos kubectl get pods --namespace=kube-system
Names and ages may vary.
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or the [Kubernetes User Guide](/docs/user-guide/).
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/README.md) or the [Kubernetes User Guide](/docs/user-guide/).
## Uninstall

View File

@ -1,29 +1,27 @@
---
---
_Note_:
These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.
* TOC
{:toc}
## Prerequisites
The only thing you need is a machine with **Docker 1.7.1 or higher**
## Overview
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/images/docs/k8s-docker.png)
_Note_:
These instructions are significantly more advanced than the [single node](/docs/getting-started-guides/docker) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.
_Note_:
There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0 that prevents this from working correctly.
Please install Docker 1.6.2 or Docker 1.7.1.
* TOC
{:toc}
## Prerequisites
You need a machine with the right version of Docker installed.
### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon
@ -34,10 +32,14 @@ This pattern is necessary because the `flannel` daemon is responsible for settin
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
You can specify k8s version on very node before install:
You can specify the version on every node before install:
```shell
export K8S_VERSION=<your_k8s_version (e.g. 1.0.3)>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
Otherwise, we'll use the latest `hyperkube` image as the default k8s version.
@ -46,14 +48,16 @@ Otherwise, we'll use latest `hyperkube` image as default k8s version.
The first step in the process is to initialize the master node.
Clone the Kubernetes repo, and run [master.sh](/docs/getting-started-guides/docker-multinode/master.sh) on the master machine with root:
The MASTER_IP step here is optional, it defaults to the first value of `hostname -I`.
Clone the Kubernetes repo, and run [master.sh](/docs/getting-started-guides/docker-multinode/master.sh) on the master machine _with root_:
```shell
cd kubernetes/docs/getting-started-guides/docker-multinode/
./master.sh
...
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./master.sh
```
`Master done!`
```
See [here](/docs/getting-started-guides/docker-multinode/master) for a detailed explanation.
@ -61,17 +65,17 @@ See [here](/docs/getting-started-guides/docker-multinode/master) for detailed in
Once your master is up and running you can add one or more workers on different machines.
Clone the Kubernetes repo, and run [worker.sh](/docs/getting-started-guides/docker-multinode/worker.sh) on the worker machine with root:
Clone the Kubernetes repo, and run [worker.sh](/docs/getting-started-guides/docker-multinode/worker.sh) on the worker machine _with root_:
```shell
export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
cd kubernetes/docs/getting-started-guides/docker-multinode/
./worker.sh
...
`Worker done!`
````
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./worker.sh
```
See [here](/docs/getting-started-guides/docker-multinode/worker) for detailed instructions explanation.
`Worker done!`
See [here](/docs/getting-started-guides/docker-multinode/worker) for a detailed explanation.
## Deploy a DNS

View File

@ -3,15 +3,13 @@
### Get the template file
First of all, download the template dns rc and svc file from
First of all, download the dns template
[skydns-rc template](/docs/getting-started-guides/docker-multinode/skydns-rc.yaml.in)
[skydns template](/docs/getting-started-guides/docker-multinode/skydns.yaml.in)
[skydns-svc template](/docs/getting-started-guides/docker-multinode/skydns-svc.yaml.in)
### Set environment variables
### Set env
Then you need to set the `DNS_REPLICAS`, `DNS_DOMAIN`, `DNS_SERVER_IP`, and `KUBE_SERVER` environment variables.
Then you need to set `DNS_REPLICAS`, `DNS_DOMAIN` and `DNS_SERVER_IP` envs
```shell
$ export DNS_REPLICAS=1
@ -19,25 +17,18 @@ $ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-domain` for containerized kubelet
$ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns` for containerized kubelet
$ export KUBE_SERVER=10.10.103.250 # your master server ip, you may change it
```
### Replace the corresponding value in the template.
### Replace the corresponding value in the template and create the pod
```shell
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{kube_server_url}/${KUBE_SERVER}/g;" skydns-rc.yaml.in > ./skydns-rc.yaml
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns.yaml.in > ./skydns.yaml
$ sed -e "s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns-svc.yaml.in > ./skydns-svc.yaml
```
```shell
# If the kube-system namespace isn't already created, create it
$ kubectl get ns
$ kubectl create -f ./kube-system.yaml
```
### Use `kubectl` to create skydns rc and service
```shell
$ kubectl -s "$KUBE_SERVER:8080" --namespace=kube-system create -f ./skydns-rc.yaml
$ kubectl -s "$KUBE_SERVER:8080" --namespace=kube-system create -f ./skydns-svc.yaml
$ kubectl create -f ./skydns.yaml
```
### Test if DNS works

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system

View File

@ -1,9 +1,23 @@
---
---
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is `${MASTER_IP}`
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine
is `${MASTER_IP}`. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.7".
Environment variables used:
```shell
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
There are two main phases to installing the master:
* [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)
@ -11,10 +25,9 @@ There are two main phases to installing the master:
## Setting up flanneld and etcd
_Note_:
There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0 that prevents this from working correctly.
Please install Docker 1.6.2 or Docker 1.7.1.
This guide expects **Docker 1.7.1 or higher**.
### Setup Docker-Bootstrap
### Setup Docker Bootstrap
We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
@ -26,6 +39,12 @@ Run:
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_If you have Docker 1.8.0 or higher run this instead_
```shell
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
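For example, a minimal systemd unit for the bootstrap daemon might look like the following. This is only a sketch; the unit name and paths are illustrative, and it assumes Docker 1.8.0 or higher (`docker daemon`):

```shell
cat <<'EOF' | sudo tee /etc/systemd/system/docker-bootstrap.service
[Unit]
Description=Bootstrap Docker daemon for etcd and flannel
After=network.target

[Service]
ExecStart=/usr/bin/docker daemon -H unix:///var/run/docker-bootstrap.sock \
  -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false \
  --bridge=none --graph=/var/lib/docker-bootstrap
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable docker-bootstrap
sudo systemctl start docker-bootstrap
```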
@ -36,15 +55,25 @@ across reboots and failures.
Run:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
/usr/local/bin/etcd \
--listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
--advertise-client-urls=http://${MASTER_IP}:4001 \
--data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.12 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
--net=host \
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
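If you want to confirm that the key was written, you can read it back through the same bootstrap daemon (optional):

```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
    --net=host \
    gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
    etcdctl get /coreos.com/network/config
```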
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our pods of containers.
@ -67,6 +96,12 @@ or
sudo systemctl stop docker
```
or
```shell
sudo service docker stop
```
or it may be something else.
#### Run flannel
@ -74,10 +109,16 @@ or it may be something else.
Now run flanneld itself:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
--ip-masq=${FLANNEL_IPMASQ} \
--iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash, copy this hash.
The previous command should have printed a really long hash, the container id; copy this hash.
Now get the subnet settings from flannel:
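This mirrors the command used on the worker below; substitute the container id you copied:

```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```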
@ -130,40 +171,63 @@ Ok, now that your networking is set up, you can startup Kubernetes, this is the
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests-multi --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
### Also run the service proxy
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
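With the versioned images used elsewhere in this guide, the same step would look like this (a sketch, assuming `${K8S_VERSION}` is exported as above):

```shell
sudo docker run -d \
    --net=host \
    --privileged \
    gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
    /hyperkube proxy \
        --master=http://127.0.0.1:8080 \
        --v=2
```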
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
Download the kubectl binary and make it available by editing your PATH ENV.
([OS X](http://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/darwin/amd64/kubectl))
([linux](http://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl))
Download the kubectl binary for `${K8S_VERSION}` ({{page.version}}) and make it available by editing your PATH environment variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/arm/kubectl))
List the nodes
For example, OS X:
```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Linux:
```console
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Now you can list the nodes:
```shell
kubectl get nodes
```
This should print:
This should print something like:
```shell
NAME LABELS STATUS
@ -176,4 +240,4 @@ If all else fails, ask questions on [Slack](/docs/troubleshooting/#slack).
### Next steps
Move on to [adding one or more workers](/docs/getting-started-guides/docker-multinode/worker) or [deploy a dns](/docs/getting-started-guides/docker-multinode/deployDNS)
Move on to [adding one or more workers](/docs/getting-started-guides/docker-multinode/worker/) or [deploy a dns](/docs/getting-started-guides/docker-multinode/deployDNS/)

View File

@ -14,8 +14,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# A scripts to install k8s worker node.
# Author @wizard_cxy @reouser
# A script to set up the k8s master in docker containers.
# Authors @wizard_cxy @resouer
set -e
@ -26,13 +26,12 @@ if ( ! ps -ef | grep "/usr/bin/docker" | grep -v 'grep' &> /dev/null ); then
fi
# Make sure k8s version env is properly set
if [ -z ${K8S_VERSION} ]; then
K8S_VERSION="1.0.3"
echo "K8S_VERSION is not set, using default: ${K8S_VERSION}"
else
echo "k8s version is set to: ${K8S_VERSION}"
fi
K8S_VERSION=${K8S_VERSION:-"1.2.0-alpha.7"}
ETCD_VERSION=${ETCD_VERSION:-"2.2.1"}
FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
FLANNEL_IPMASQ=${FLANNEL_IPMASQ:-"true"}
FLANNEL_IFACE=${FLANNEL_IFACE:-"eth0"}
ARCH=${ARCH:-"amd64"}
# Run as root
if [ "$(id -u)" != "0" ]; then
@ -40,6 +39,19 @@ if [ "$(id -u)" != "0" ]; then
exit 1
fi
# Make sure master ip is properly set
if [ -z ${MASTER_IP} ]; then
MASTER_IP=$(hostname -I | awk '{print $1}')
fi
echo "K8S_VERSION is set to: ${K8S_VERSION}"
echo "ETCD_VERSION is set to: ${ETCD_VERSION}"
echo "FLANNEL_VERSION is set to: ${FLANNEL_VERSION}"
echo "FLANNEL_IFACE is set to: ${FLANNEL_IFACE}"
echo "FLANNEL_IPMASQ is set to: ${FLANNEL_IPMASQ}"
echo "MASTER_IP is set to: ${MASTER_IP}"
echo "ARCH is set to: ${ARCH}"
# Check if a command is valid
command_exists() {
command -v "$@" > /dev/null 2>&1
@ -49,13 +61,14 @@ lsb_dist=""
# Detect the OS distro, we support ubuntu, debian, mint, centos, fedora dist
detect_lsb() {
# TODO: remove this when ARM support is fully merged
case "$(uname -m)" in
*64)
;;
*)
echo "Error: We currently only support 64-bit platforms."
exit 1
;;
*64)
;;
*)
echo "Error: We currently only support 64-bit platforms."
exit 1
;;
esac
if command_exists lsb_release; then
@ -75,13 +88,39 @@ detect_lsb() {
fi
lsb_dist="$(echo ${lsb_dist} | tr '[:upper:]' '[:lower:]')"
case "${lsb_dist}" in
amzn|centos|debian|ubuntu)
;;
*)
echo "Error: We currently only support ubuntu|debian|amzn|centos."
exit 1
;;
esac
}
# Start the bootstrap daemon
# TODO: do not start docker-bootstrap if it's already running
bootstrap_daemon() {
sudo -b docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null
# Detect the Docker version so we can invoke the daemon correctly:
# Docker >= 1.8.0 uses `docker daemon`, older versions use the deprecated `docker -d`
[[ $(eval "docker --version") =~ ([0-9]+[.][0-9]+[.][0-9]+) ]] && version="${BASH_REMATCH[1]}"
local got=$(echo -e "${version}\n1.8.0" | sed '/^$/d' | sort -t. -k1,1n -k2,2n -k3,3n -r | head -1)
if [[ "${got}" = "${version}" ]]; then
    docker_daemon="docker daemon"
else
    docker_daemon="docker -d"
fi
${docker_daemon} \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--graph=/var/lib/docker-bootstrap \
2> /var/log/docker-bootstrap.log \
1> /dev/null &
sleep 5
}
@ -89,79 +128,107 @@ bootstrap_daemon() {
DOCKER_CONF=""
start_k8s(){
# Start etcd
docker -H unix:///var/run/docker-bootstrap.sock run --restart=always --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# Start etcd
docker -H unix:///var/run/docker-bootstrap.sock run \
--restart=on-failure \
--net=host \
-d \
gcr.io/google_containers/etcd-${ARCH}:${ETCD_VERSION} \
/usr/local/bin/etcd \
--listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
--advertise-client-urls=http://${MASTER_IP}:4001 \
--data-dir=/var/etcd/data
sleep 5
# Set flannel net config
docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.12 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}'
docker -H unix:///var/run/docker-bootstrap.sock run \
--net=host gcr.io/google_containers/etcd-${ARCH}:${ETCD_VERSION} \
etcdctl \
set /coreos.com/network/config \
'{ "Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}'
# iface may change to a private network interface, eth0 is for default
flannelCID=$(docker -H unix:///var/run/docker-bootstrap.sock run --restart=always -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld -iface="eth0")
flannelCID=$(docker -H unix:///var/run/docker-bootstrap.sock run \
--restart=on-failure \
-d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq="${FLANNEL_IPMASQ}" \
--iface="${FLANNEL_IFACE}")
sleep 8
# Copy flannel env out and source it on the host
docker -H unix:///var/run/docker-bootstrap.sock cp ${flannelCID}:/run/flannel/subnet.env .
docker -H unix:///var/run/docker-bootstrap.sock \
cp ${flannelCID}:/run/flannel/subnet.env .
source subnet.env
# Configure docker net settings, then restart it
case "$lsb_dist" in
fedora|centos|amzn)
case "${lsb_dist}" in
amzn)
DOCKER_CONF="/etc/sysconfig/docker"
;;
ubuntu|debian|linuxmint)
echo "OPTIONS=\"\$OPTIONS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && service docker restart
;;
centos)
DOCKER_CONF="/etc/sysconfig/docker"
echo "OPTIONS=\"\$OPTIONS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
if ! command_exists ifconfig; then
yum -y -q install net-tools
fi
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && systemctl restart docker
;;
ubuntu|debian)
DOCKER_CONF="/etc/default/docker"
;;
esac
# Append the docker opts
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | sudo tee -a ${DOCKER_CONF}
# sleep a little bit
ifconfig docker0 down
case "$lsb_dist" in
fedora|centos|amzn)
yum install bridge-utils && brctl delbr docker0 && systemctl restart docker
;;
ubuntu|debian|linuxmint)
apt-get install bridge-utils && brctl delbr docker0 && service docker restart
;;
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
apt-get install bridge-utils
brctl delbr docker0
service docker stop
while [ `ps aux | grep /usr/bin/docker | grep -v grep | wc -l` -gt 0 ]; do
echo "Waiting for docker to terminate"
sleep 1
done
service docker start
;;
*)
echo "Unsupported operations system ${lsb_dist}"
exit 1
;;
esac
# sleep a little bit
sleep 5
# Start kubelet & proxy, then start master components as pods
# Start kubelet and then start master components as pods
docker run \
--net=host \
--pid=host \
--privileged \
--restart=always \
--restart=on-failure \
-d \
-v /sys:/sys:ro \
-v /var/run:/var/run:rw \
-v /:/rootfs:ro \
-v /dev:/dev \
-v /var/lib/docker/:/var/lib/docker:ro \
-v /var/lib/docker/:/var/lib/docker:rw \
-v /var/lib/kubelet/:/var/lib/kubelet:rw \
gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
/hyperkube kubelet \
--api-servers=http://localhost:8080 \
--v=2 --address=0.0.0.0 --enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
docker run \
-d \
--net=host \
--privileged \
gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
/hyperkube proxy --master=http://127.0.0.1:8080 --v=2
--address=0.0.0.0 \
--allow-privileged=true \
--enable-server \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--containerized \
--v=2
}
echo "Detecting your OS distro ..."

View File

@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -1,31 +1,35 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v8
name: kube-dns-v10
namespace: kube-system
labels:
k8s-app: kube-dns
version: v8
version: v10
kubernetes.io/cluster-service: "true"
spec:
replicas: {{ pillar['dns_replicas'] }}
selector:
k8s-app: kube-dns
version: v8
version: v10
template:
metadata:
labels:
k8s-app: kube-dns
version: v8
version: v10
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd:2.0.9
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
@ -40,25 +44,33 @@ spec:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.11
image: gcr.io/google_containers/kube2sky:1.12
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/kube2sky"
- -domain={{ pillar['dns_domain'] }}
- -kube_master_url=http://{kube_server_url}:8080
- --domain={{ pillar['dns_domain'] }}
- name: skydns
image: gcr.io/google_containers/skydns:2015-03-11-001
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://localhost:4001
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain={{ pillar['dns_domain'] }}.
ports:
- containerPort: 53
@ -74,14 +86,25 @@ spec:
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 1
timeoutSeconds: 5
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} localhost >/dev/null
- -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
@ -90,3 +113,24 @@ spec:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -31,7 +31,7 @@ now run `docker ps` you should see nginx running. You may need to wait a few mi
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (`CLUSTER_IP`), and the second one is the external load-balanced IP.
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP.
```shell
kubectl get svc nginx

View File

@ -3,21 +3,31 @@
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](/docs/getting-started-guides/docker-multinode/master).
We will assume that you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](/docs/getting-started-guides/docker-multinode/master/). We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.6".
Environment variables used:
```sh
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.6)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
For each worker node, there are three steps:
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
As before, the Flannel daemon is going to provide network connectivity.
_Note_:
There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0 that prevents this from working correctly.
Please install Docker 1.6.2 or wait for Docker 1.7.1.
This guide expects **Docker 1.7.1 or higher**.
#### Set up a bootstrap docker
@ -30,6 +40,12 @@ Run:
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_If you have Docker 1.8.0 or higher run this instead_
```sh
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
@ -57,10 +73,18 @@ or it may be something else.
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq=${FLANNEL_IPMASQ} \
--etcd-endpoints=http://${MASTER_IP}:4001 \
--iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash, copy this hash.
The previous command should have printed a really long hash, the container id; copy this hash.
Now get the subnet settings from flannel:
@ -68,6 +92,7 @@ Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
@ -99,7 +124,7 @@ Again this is system dependent, it may be:
sudo /etc/init.d/docker start
```
it may be:
or it may be:
```shell
systemctl start docker
@ -121,9 +146,18 @@ sudo docker run \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=$(hostname -i) --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://${MASTER_IP}:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
```
#### Run the service proxy
@ -131,9 +165,15 @@ sudo docker run \
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
sudo docker run -d \
--net=host \
--privileged \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2
```
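At this point the node should register itself with the master. From the master, or any machine with a `kubectl` binary that can reach it, you can check that the new node appears:

```shell
kubectl -s "http://${MASTER_IP}:8080" get nodes
```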
### Next steps
Move on to [testing your cluster](/docs/getting-started-guides/docker-multinode/testing) or add another node](#).
Move on to [testing your cluster](/docs/getting-started-guides/docker-multinode/testing/) or [add another node](#adding-a-kubernetes-worker-node-via-docker)

View File

@ -14,8 +14,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# A scripts to install k8s worker node.
# Author @wizard_cxy @reouser
# A script to set up the k8s worker in docker containers.
# Authors @wizard_cxy @resouer
set -e
@ -26,14 +26,11 @@ if ( ! ps -ef | grep "/usr/bin/docker" | grep -v 'grep' &> /dev/null ); then
fi
# Make sure k8s version env is properly set
if [ -z ${K8S_VERSION} ]; then
K8S_VERSION="1.0.3"
echo "K8S_VERSION is not set, using default: ${K8S_VERSION}"
else
echo "k8s version is set to: ${K8S_VERSION}"
fi
K8S_VERSION=${K8S_VERSION:-"1.2.0-alpha.7"}
FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
FLANNEL_IFACE=${FLANNEL_IFACE:-"eth0"}
FLANNEL_IPMASQ=${FLANNEL_IPMASQ:-"true"}
ARCH=${ARCH:-"amd64"}
# Run as root
if [ "$(id -u)" != "0" ]; then
@ -45,10 +42,15 @@ fi
if [ -z ${MASTER_IP} ]; then
echo "Please export MASTER_IP in your env"
exit 1
else
echo "k8s master is set to: ${MASTER_IP}"
fi
echo "K8S_VERSION is set to: ${K8S_VERSION}"
echo "FLANNEL_VERSION is set to: ${FLANNEL_VERSION}"
echo "FLANNEL_IFACE is set to: ${FLANNEL_IFACE}"
echo "FLANNEL_IPMASQ is set to: ${FLANNEL_IPMASQ}"
echo "MASTER_IP is set to: ${MASTER_IP}"
echo "ARCH is set to: ${ARCH}"
# Check if a command is valid
command_exists() {
command -v "$@" > /dev/null 2>&1
@ -59,12 +61,12 @@ lsb_dist=""
# Detect the OS distro, we support ubuntu, debian, mint, centos, fedora dist
detect_lsb() {
case "$(uname -m)" in
*64)
;;
*)
echo "Error: We currently only support 64-bit platforms."
exit 1
;;
*64)
;;
*)
echo "Error: We currently only support 64-bit platforms."
exit 1
;;
esac
if command_exists lsb_release; then
@ -84,12 +86,37 @@ detect_lsb() {
fi
lsb_dist="$(echo ${lsb_dist} | tr '[:upper:]' '[:lower:]')"
case "${lsb_dist}" in
amzn|centos|debian|ubuntu)
;;
*)
echo "Error: We currently only support ubuntu|debian|amzn|centos."
exit 1
;;
esac
}
# Start the bootstrap daemon
bootstrap_daemon() {
sudo -b docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null
# Detect the Docker version so we can invoke the daemon correctly:
# Docker >= 1.8.0 uses `docker daemon`, older versions use the deprecated `docker -d`
[[ $(eval "docker --version") =~ ([0-9]+[.][0-9]+[.][0-9]+) ]] && version="${BASH_REMATCH[1]}"
local got=$(echo -e "${version}\n1.8.0" | sed '/^$/d' | sort -t. -k1,1n -k2,2n -k3,3n -r | head -1)
if [[ "${got}" = "${version}" ]]; then
    docker_daemon="docker daemon"
else
    docker_daemon="docker -d"
fi
${docker_daemon} \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--graph=/var/lib/docker-bootstrap \
2> /var/log/docker-bootstrap.log \
1> /dev/null &
sleep 5
}
@ -99,67 +126,97 @@ DOCKER_CONF=""
# Start k8s components in containers
start_k8s() {
# Start flannel
flannelCID=$(sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --restart=always --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001 -iface="eth0")
flannelCID=$(docker -H unix:///var/run/docker-bootstrap.sock run \
-d \
--restart=on-failure \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq="${FLANNEL_IPMASQ}" \
--etcd-endpoints=http://${MASTER_IP}:4001 \
--iface="${FLANNEL_IFACE}")
sleep 8
sleep 10
# Copy flannel env out and source it on the host
sudo docker -H unix:///var/run/docker-bootstrap.sock cp ${flannelCID}:/run/flannel/subnet.env .
docker -H unix:///var/run/docker-bootstrap.sock \
cp ${flannelCID}:/run/flannel/subnet.env .
source subnet.env
# Configure docker net settings, then restart it
case "$lsb_dist" in
fedora|centos|amzn)
case "${lsb_dist}" in
centos)
DOCKER_CONF="/etc/sysconfig/docker"
;;
ubuntu|debian|linuxmint)
echo "OPTIONS=\"\$OPTIONS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
if ! command_exists ifconfig; then
yum -y -q install net-tools
fi
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && systemctl restart docker
;;
amzn)
DOCKER_CONF="/etc/sysconfig/docker"
echo "OPTIONS=\"\$OPTIONS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && service docker restart
;;
ubuntu|debian) # TODO: today ubuntu uses systemd. Handle that too
DOCKER_CONF="/etc/default/docker"
;;
esac
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | sudo tee -a ${DOCKER_CONF}
ifconfig docker0 down
case "$lsb_dist" in
fedora|centos)
yum install bridge-utils && brctl delbr docker0 && systemctl restart docker
;;
ubuntu|debian|linuxmint)
apt-get install bridge-utils && brctl delbr docker0 && service docker restart
;;
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
apt-get install bridge-utils
brctl delbr docker0
service docker stop
while [ `ps aux | grep /usr/bin/docker | grep -v grep | wc -l` -gt 0 ]; do
echo "Waiting for docker to terminate"
sleep 1
done
service docker start
;;
*)
echo "Unsupported operations system ${lsb_dist}"
exit 1
;;
esac
# sleep a little bit
sleep 5
# Start kubelet & proxy in container
# TODO: Use secure port for communication
docker run \
--net=host \
--pid=host \
--privileged \
--restart=always \
--restart=on-failure \
-d \
-v /sys:/sys:ro \
-v /var/run:/var/run:rw \
-v /dev:/dev \
-v /var/lib/docker/:/var/lib/docker:ro \
-v /:/rootfs:ro \
-v /var/lib/docker/:/var/lib/docker:rw \
-v /var/lib/kubelet/:/var/lib/kubelet:rw \
gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
/hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 \
--v=2 --address=0.0.0.0 --enable-server \
--hostname-override=$(hostname -i) \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://${MASTER_IP}:8080 \
--address=0.0.0.0 \
--enable-server \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--containerized \
--v=2
docker run \
-d \
--net=host \
--privileged \
--restart=always \
gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
/hyperkube proxy --master=http://${MASTER_IP}:8080 \
--v=2
--restart=on-failure \
gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2
}
echo "Detecting your OS distro ..."

View File

@ -13,80 +13,82 @@ Here's a diagram of what the final result will look like:
## Prerequisites
1. You need to have docker installed on one machine.
2. Your kernel should support memory and swap accounting. Ensure that the
following configs are turned on in your linux kernel:
2. Decide what Kubernetes version to use. Set the `${K8S_VERSION}` variable to
a released version of Kubernetes >= "1.2.0-alpha.7", as shown below.
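For example (the exact version string below is only an illustration):

```shell
export K8S_VERSION=1.2.0-alpha.7
```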
```shell
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
```
3. Enable the memory and swap accounting in the kernel, at boot, as command line
parameters as follows:
```shell
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
NOTE: The above is specifically for GRUB2.
You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
```shell
$cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory
swapaccount=1
```
### Step One: Run etcd
```shell
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
### Step Two: Run the master
### Run it
```shell
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged=true --v=2
```
This actually runs the kubelet, which in turn runs a [pod](/docs/user-guide/pods) that contains the other master components.
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
> If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above. It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230).
This actually runs the kubelet, which in turn runs a [pod](/docs/user-guide/pods/) that contains the other master components.
### Step Three: Run the service proxy
### Download `kubectl`
At this point you should have a running Kubernetes cluster. You can test this
by downloading the kubectl binary for `${K8S_VERSION}` (look at the URL in the
following links) and making it available by editing your PATH environment
variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/arm/kubectl))
For example, OS X:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Linux:
```shell
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
### Test it out
At this point you should have a running Kubernetes cluster. You can test this by downloading the kubectl
binary
([OS X](https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/darwin/amd64/kubectl))
([linux](https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl))
*Note:*
On OS/X you will need to set up port forwarding via ssh:
Create configuration:
```shell
boot2docker ssh -L8080:localhost:8080
$ kubectl config set-cluster test-doc --server=http://localhost:8080
$ kubectl config set-context test-doc --cluster=test-doc
$ kubectl config use-context test-doc
```
For Mac OS X users, instead of `localhost` you will have to use the IP address of your docker machine,
which you can find by running `docker-machine env <machinename>` (see [documentation](https://docs.docker.com/machine/reference/env/)
for details).
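For example (a sketch; `default` is just a typical machine name):

```shell
$ docker-machine env default
$ docker-machine ip default
```

Use the printed IP in place of `localhost` in the `--server` flag above.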
### Test it out
List the nodes in your cluster by running:
```shell
@ -96,16 +98,14 @@ kubectl get nodes
This should print:
```shell
NAME LABELS STATUS
127.0.0.1 <none> Ready
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
If you are running different Kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.
### Run an application
```shell
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
kubectl run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
@ -116,7 +116,7 @@ Now run `docker ps` you should see nginx running. You may need to wait a few mi
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP.
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP (if a LoadBalancer is configured).
```shell
kubectl get svc nginx
@ -136,9 +136,46 @@ curl <insert-cluster-ip-here>
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
## Deploy a DNS
See [here](/docs/getting-started-guides/docker-multinode/deployDNS/) for instructions.
### A note on turning down your cluster
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.
You may use `docker kill $(docker ps -aq)`; note this kills _all_ containers running under Docker, so use with caution.
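A minimal sketch of that turn-down order (substitute the kubelet container id that `docker ps` shows for the hyperkube kubelet):

```shell
# Kill the kubelet first so it stops restarting the other containers
docker kill <kubelet-container-id>
# Then kill everything else -- this kills ALL containers running under Docker
docker kill $(docker ps -aq)
```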
### Troubleshooting
#### Node is in `NotReady` state
If you see your node as `NotReady`, it's possible that your OS does not have memcg and swap accounting enabled.
1. Your kernel should support memory and swap accounting. Ensure that the
following configs are turned on in your linux kernel:
```shell
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
```
2. Enable the memory and swap accounting in the kernel, at boot, as command line
parameters as follows:
```shell
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
NOTE: The above is specifically for GRUB2.
You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
```shell
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory swapaccount=1
```

View File

@ -61,7 +61,7 @@ kube-node-02.example.com
## Setting up ansible access to your nodes
If you already are running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yaml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step...
If you already are running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step...
*Otherwise* setup ssh on the machines like so (you will need to know the root password to all machines in the cluster).
@ -171,6 +171,7 @@ systemctl | grep -i kube
```shell
iptables -nvL
```
**Create /tmp/apache.json on the master with the following contents and deploy pod**
```json

View File

@ -10,16 +10,11 @@
## Instructions
This is a getting started guide for [Fedora](http://fedoraproject.org). It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking)
done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious.
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking/) done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These
services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up
between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager,
and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes
that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
**System Information:**
@ -32,15 +27,9 @@ fed-node = 192.168.121.65
**Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master.
This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum
command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will
be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you
would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from
Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum
install command below.
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
```shell
yum -y install --enablerepo=updates-testing kubernetes
@ -84,8 +73,7 @@ systemctl stop iptables-services firewalld
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else.
They do not need to be routed or assigned to anything.
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
```shell
# The address on the local server to listen to.

View File

@ -1,10 +1,7 @@
---
---
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
* TOC
{:toc}
{:toc}
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
## Prerequisites
@ -14,7 +11,7 @@ You need 2 or more machines with Fedora installed.
**Perform following commands on the Kubernetes master**
Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
```json
{

View File

@ -17,7 +17,7 @@ If you want to use custom binaries or pure open source Kubernetes, please contin
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Then, make sure you have the `gcloud preview` command line component installed. Run `gcloud preview` at the command line - if it asks to install any components, go ahead and install them. If it simply shows help text, you're good to go. This is required as the cluster setup script uses GCE [Instance Groups](https://cloud.google.com/compute/docs/instance-groups/), which are in the gcloud preview namespace. You will also need to **enable [`Compute Engine Instance Group Manager API`](https://developers.google.com/console/help/new/#activatingapis)** in the developers console.
1. Enable the [Compute Engine Instance Group Manager API](https://developers.google.com/console/help/new/#activatingapis) in the [Google Cloud developers console](https://console.developers.google.com).
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running ` gcloud auth login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.

View File

@ -118,7 +118,6 @@ Azure | CoreOS | CoreOS | Weave | [docs](/docs/gettin
Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node | Flannel | N/A | local | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
Digital Ocean | custom | Fedora | Calico | [docs](/docs/getting-started-guides/fedora/fedora-calico) | | Community (@djosborne)
Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project
Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
@ -140,7 +139,6 @@ Joyent | Juju | Ubuntu | flannel | [docs](/docs/gettin
AWS | Saltstack | Ubuntu | OVS | [docs](/docs/getting-started-guides/aws) | | Community ([@justinsb](https://github.com/justinsb))
Bare-metal | custom | Ubuntu | Calico | [docs](/docs/getting-started-guides/ubuntu-calico) | | Community ([@djosborne](https://github.com/djosborne))
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
Local | | | _none_ | [docs](/docs/getting-started-guides/locally) | | Community ([@preillyme](https://github.com/preillyme))
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A))
oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z))
Rackspace | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/rackspace) | | Community ([@doublerr](https://github.com/doublerr))

View File

@ -230,18 +230,16 @@ github.com:
### Cloud compatibility
Juju runs natively against a variety of public cloud providers. Juju currently
works with:
- [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws)
- [Windows Azure](https://jujucharms.com/docs/stable/config-azure)
- [DigitalOcean](https://jujucharms.com/docs/stable/config-digitalocean)
- [Google Compute Engine](https://jujucharms.com/docs/stable/config-gce)
- [HP Public Cloud](https://jujucharms.com/docs/stable/config-hpcloud)
- [Joyent](https://jujucharms.com/docs/stable/config-joyent)
- [LXC](https://jujucharms.com/docs/stable/config-LXC)
- Any [OpenStack](https://jujucharms.com/docs/stable/config-openstack) deployment
- [Vagrant](https://jujucharms.com/docs/stable/config-vagrant)
- [Vmware vSphere](https://jujucharms.com/docs/stable/config-vmware)
works with [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws),
[Windows Azure](https://jujucharms.com/docs/stable/config-azure),
[DigitalOcean](https://jujucharms.com/docs/stable/config-digitalocean),
[Google Compute Engine](https://jujucharms.com/docs/stable/config-gce),
[HP Public Cloud](https://jujucharms.com/docs/stable/config-hpcloud),
[Joyent](https://jujucharms.com/docs/stable/config-joyent),
[LXC](https://jujucharms.com/docs/stable/config-LXC), any
[OpenStack](https://jujucharms.com/docs/stable/config-openstack) deployment,
[Vagrant](https://jujucharms.com/docs/stable/config-vagrant), and
[Vmware vSphere](https://jujucharms.com/docs/stable/config-vmware).
If you do not see your favorite cloud provider listed many clouds can be
configured for [manual provisioning](https://jujucharms.com/docs/stable/config-manual).

View File

@ -14,7 +14,7 @@
The primary goal of the `libvirt-coreos` cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in term of resources used.
In order to achieve that goal, its deployment is very different from the 'standard production deployment'? method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.
In order to achieve that goal, its deployment is very different from the "standard production deployment" method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.
The `libvirt-coreos` cluster provider doesn't aim at being production look-alike.
@ -36,15 +36,16 @@ On the other hand, `libvirt-coreos` might be useful for people investigating low
### Prerequisites
1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc)
1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
2. Install [ebtables](http://ebtables.netfilter.org/)
3. Install [qemu](http://wiki.qemu.org/Main_Page)
4. Install [libvirt](http://libvirt.org/)
5. Enable and start the libvirt daemon, e.g:
* ``systemctl enable libvirtd``
* ``systemctl start libvirtd``
6. [Grant libvirt access to your user&sup1;](https://libvirt.org/aclpolkit)
7. Check that your $HOME is accessible to the qemu user&sup2;
5. Install [openssl](http://openssl.org/)
6. Enable and start the libvirt daemon, e.g:
* ``systemctl enable libvirtd && systemctl start libvirtd`` # for systemd-based systems
* ``/etc/init.d/libvirt-bin start`` # for init.d-based systems
7. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
8. Check that your $HOME is accessible to the qemu user²
#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
@ -127,11 +128,11 @@ cluster/kube-up.sh
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The `NUM_MINIONS` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
The `NUM_NODES` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:
* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-'|.tar.gz`. This is built with `make release` or `make release-skip-tests`.
* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-.tar.gz`. This is built with `make release` or `make release-skip-tests`.
* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
You can check that your machines are there and running with:
@ -141,10 +142,10 @@ $ virsh -c qemu:///system list
Id Name State
----------------------------------------------------
15 kubernetes_master running
16 kubernetes_minion-01 running
17 kubernetes_minion-02 running
18 kubernetes_minion-03 running
```
16 kubernetes_node-01 running
17 kubernetes_node-02 running
18 kubernetes_node-03 running
```
You can check that the Kubernetes cluster is working with:
@ -168,7 +169,7 @@ Connect to `kubernetes_master`:
ssh core@192.168.10.1
```
Connect to `kubernetes_minion-01`:
Connect to `kubernetes_node-01`:
```shell
ssh core@192.168.10.2
@ -185,7 +186,7 @@ export KUBERNETES_PROVIDER=libvirt-coreos
Bring up a libvirt-CoreOS cluster of 5 nodes
```shell
NUM_MINIONS=5 cluster/kube-up.sh
NUM_NODES=5 cluster/kube-up.sh
```
Destroy the libvirt-CoreOS cluster

View File

@ -45,10 +45,10 @@ $ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-5oq0 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-6896 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-l1ds 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-minion-lz9j 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
@ -219,7 +219,7 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
}
```
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request) which can be used to extract the required logs.
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html) which can be used to extract the required logs.
Alternatively you can view the ingested logs using Kibana. The first time you visit the Kibana URL you will be
presented with a page that asks you to configure your view of the ingested logs. Select the option for

View File

@ -9,17 +9,17 @@ logging and DNS resolution for names of Kubernetes services:
```shell
$ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
fluentd-cloud-logging-kubernetes-minion-0f64 1/1 Running 0 32m
fluentd-cloud-logging-kubernetes-minion-27gf 1/1 Running 0 32m
fluentd-cloud-logging-kubernetes-minion-pk22 1/1 Running 0 31m
fluentd-cloud-logging-kubernetes-minion-20ej 1/1 Running 0 31m
fluentd-cloud-logging-kubernetes-node-0f64 1/1 Running 0 32m
fluentd-cloud-logging-kubernetes-node-27gf 1/1 Running 0 32m
fluentd-cloud-logging-kubernetes-node-pk22 1/1 Running 0 31m
fluentd-cloud-logging-kubernetes-node-20ej 1/1 Running 0 31m
kube-dns-v3-pk22 3/3 Running 0 32m
monitoring-heapster-v1-20ej 0/1 Running 9 32m
```
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
![Cluster](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/diagrams/cloud-logging.png)
![Cluster](https://raw.githubusercontent.com/kubernetes/kubernetes/{{page.githubbranch}}/examples/blog-logging/diagrams/cloud-logging.png)
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
[cluster DNS service](/docs/admin/dns) runs on one of the nodes and a pod which provides monitoring support runs on another node.
@ -48,7 +48,7 @@ This step may take a few minutes to download the ubuntu:14.04 image during which
One of the nodes is now running the counter pod:
![Counter Pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/diagrams/27gf-counter.png)
![Counter Pod](https://raw.githubusercontent.com/kubernetes/kubernetes/{{page.githubbranch}}/examples/blog-logging/diagrams/27gf-counter.png)
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
@ -75,10 +75,10 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's stop the currently running counter.
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's delete the currently running counter.
```shell
$ kubectl stop pod counter
$ kubectl delete pod counter
pods/counter
```
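Then we can recreate the pod (assuming the pod spec used earlier is saved locally as `counter-pod.yaml`):

```shell
$ kubectl create -f counter-pod.yaml
pods/counter
```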
@ -131,8 +131,8 @@ Note the first container counted to 108 and then it was terminated. When the nex
```shell
SELECT metadata.timestamp, structPayload.log
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
```
Here is some sample output:

View File

@ -42,7 +42,7 @@ The cluster consists of several docker containers linked together by docker-mana
| Mesos Master | mesosmaster1 | REST endpoint for interacting with Mesos |
| Mesos Slave (x2) | mesosslave1, mesosslave2 | Mesos agents that offer resources and run framework executors (e.g. Kubernetes Kubelets) |
| Kubernetes API Server | apiserver | REST endpoint for interacting with Kubernetes |
| Kubernetes Controller Manager | controller | |
| Kubernetes Controller Manager | controller | |
| Kubernetes Scheduler | scheduler | Schedules container deployment by accepting Mesos offers |
## Prerequisites
@ -52,13 +52,13 @@ Required:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - version control system
- [Docker CLI](https://docs.docker.com/) - container management command line client
- [Docker Engine](https://docs.docker.com/) - container management daemon
- On Mac, use [Boot2Docker](http://boot2docker.io/) or [Docker Machine](https://docs.docker.com/machine/install-machine/)
- On Mac, use [Docker Machine](https://docs.docker.com/machine/install-machine/)
- [Docker Compose](https://docs.docker.com/compose/install/) - multi-container application orchestration
Optional:
- [Virtual Box](https://www.virtualbox.org/wiki/Downloads) - x86 hardware virtualizer
- Required by Boot2Docker and Docker Machine
- [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
- Free x86 virtualization engine with a Docker Machine driver
- [Golang](https://golang.org/doc/install) - Go programming language
- Required to build Kubernetes locally
- [Make](https://en.wikipedia.org/wiki/Make_(software)) - Utility for building executables from source
@ -70,14 +70,14 @@ It's possible to install all of the above via [Homebrew](http://brew.sh/) on a M
Some steps print instructions for configuring or launching. Make sure each is properly set up before continuing to the next step.
brew install git
brew install caskroom/cask/brew-cask
brew cask install virtualbox
brew install docker
brew install boot2docker
boot2docker init
boot2docker up
brew install docker-compose
```shell
brew install git
brew install caskroom/cask/brew-cask
brew cask install virtualbox
brew install docker
brew install docker-machine
brew install docker-compose
```
### Install on Linux
@ -91,27 +91,42 @@ In order to build Kubernetes, the current user must be in a docker group with su
See the docker docs for [instructions](https://docs.docker.com/installation/ubuntulinux/#create-a-docker-group).
### Boot2Docker Config (Mac)
#### Docker Machine Config (Mac)
If on a mac using boot2docker, the following steps will make the docker IPs (in the virtualbox VM) reachable from the
host machine (mac).
If on a Mac using docker-machine, the following steps will make the docker IPs (in the virtualbox VM) reachable from the
host machine (Mac).
1. Set the VM's host-only network to "promiscuous mode":
1. Create VM
boot2docker stop
VBoxManage modifyvm boot2docker-vm --nicpromisc2 allow-all
boot2docker start
oracle-virtualbox
```shell
docker-machine create --driver virtualbox kube-dev
eval "$(docker-machine env kube-dev)"
```
2. Set the VM's host-only network to "promiscuous mode":
oracle-virtualbox
```shell
docker-machine stop kube-dev
VBoxManage modifyvm kube-dev --nicpromisc2 allow-all
docker-machine start kube-dev
```
This allows the VM to accept packets that were sent to a different IP.
Since the host-only network routes traffic between VMs and the host, other VMs will also be able to access the docker
IPs, if they have the following route.
1. Route traffic to docker through the boot2docker IP:
1. Route traffic to docker through the docker-machine IP:
sudo route -n add -net 172.17.0.0 $(boot2docker ip)
```shell
sudo route -n add -net 172.17.0.0 $(docker-machine ip kube-dev)
```
Since the boot2docker IP can change when the VM is restarted, this route may need to be updated over time.
Since the docker-machine IP can change when the VM is restarted, this route may need to be updated over time.
To delete the route later: `sudo route delete 172.17.0.0`
@ -119,8 +134,10 @@ host machine (mac).
1. Checkout source
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
```
By default, that will get you the bleeding edge of the master branch.
You may want a [release branch](https://github.com/kubernetes/kubernetes/releases) instead,
@ -132,7 +149,9 @@ host machine (mac).
Building a new release covers both cases:
KUBERNETES_CONTRIB=mesos build/release.sh
```shell
KUBERNETES_CONTRIB=mesos build/release.sh
```
For developers, it may be faster to [build locally](#build-locally).
@ -142,13 +161,17 @@ host machine (mac).
1. Test image includes all the dependencies required for running e2e tests.
./cluster/mesos/docker/test/build.sh
```shell
./cluster/mesos/docker/test/build.sh
```
In the future, this image may be available to download. It doesn't contain anything specific to the current release, except its build dependencies.
1. Kubernetes-Mesos image includes the compiled linux binaries.
./cluster/mesos/docker/km/build.sh
```shell
./cluster/mesos/docker/km/build.sh
```
This image needs to be built every time you recompile the server binaries.
@ -159,23 +182,29 @@ host machine (mac).
If you delete the `MESOS_RESOURCES` environment variables, the resource amounts will be auto-detected based on the host resources, which will over-provision by > 2x.
If the configured resources are not available on the host, you may want to increase the resources available to Docker Engine.
You may have to increase you VM disk, memory, or cpu allocation in VirtualBox,
[Docker Machine](https://docs.docker.com/machine/#oracle-virtualbox), or
[Boot2Docker](https://ryanfb.github.io/etc/2015/01/28/increasing_boot2docker_allocations_on_os_x).
You may have to increase your VM disk, memory, or CPU allocation. See the Docker Machine docs for details
([VirtualBox](https://docs.docker.com/machine/drivers/virtualbox)).
1. Configure provider
export KUBERNETES_PROVIDER=mesos/docker
```shell
export KUBERNETES_PROVIDER=mesos/docker
```
This tells cluster scripts to use the code within `cluster/mesos/docker`.
1. Create cluster
./cluster/kube-up.sh
```shell
./cluster/kube-up.sh
```
If you manually built all the above docker images, you can skip that step during kube-up:
MESOS_DOCKER_SKIP_BUILD=true ./cluster/kube-up.sh
```shell
MESOS_DOCKER_SKIP_BUILD=true ./cluster/kube-up.sh
```
After deploying the cluster, `~/.kube/config` will be created or updated to configure kubectl to target the new cluster.
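You can sanity-check the new cluster, for example:

```shell
./cluster/kubectl.sh get nodes
```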
@ -188,7 +217,9 @@ host machine (mac).
1. Destroy cluster
./cluster/kube-down.sh
```shell
./cluster/kube-down.sh
```
## Addons
@ -196,7 +227,9 @@ The `kube-up` for the mesos/docker provider will automatically deploy KubeDNS an
Check their status with:
./cluster/kubectl.sh get pods --namespace=kube-system
```shell
./cluster/kubectl.sh get pods --namespace=kube-system
```
### KubeUI
@ -213,7 +246,9 @@ Warning: e2e tests can take a long time to run. You may not want to run them imm
While your cluster is up, you can run the end-to-end tests:
./cluster/test-e2e.sh
```shell
./cluster/test-e2e.sh
```
Notable parameters:
- Increase the logging verbosity: `-v=2`
@ -221,35 +256,45 @@ Notable parameters:
To build, deploy, test, and destroy, all in one command (plus unit & integration tests):
make test_e2e
```shell
make test_e2e
```
## Kubernetes CLI
When compiling from source, it's simplest to use the `./cluster/kubectl.sh` script, which detects your platform &
architecture and proxies commands to the appropriate `kubectl` binary.
`./cluster/kubectl.sh get pods`
ex: `./cluster/kubectl.sh get pods`
## Helpful scripts
Kill all docker containers
docker ps -q -a | xargs docker rm -f
- Kill all docker containers
Clean up unused docker volumes
```shell
docker ps -q -a | xargs docker rm -f
```
- Clean up unused docker volumes
```shell
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes
```
## Build Locally
The steps above tell you how to build in a container, for minimal local dependencies. But if you have Go and Make installed you can build locally much faster:
KUBERNETES_CONTRIB=mesos make
```shell
KUBERNETES_CONTRIB=mesos make
```
However, if you're not on linux, you'll still need to compile the linux/amd64 server binaries:
KUBERNETES_CONTRIB=mesos build/run.sh hack/build-go.sh
```shell
KUBERNETES_CONTRIB=mesos build/run.sh hack/build-go.sh
```
The above two steps should be significantly faster than cross-compiling a whole new release for every supported platform (which is what `./build/release.sh` does).

View File

@ -27,7 +27,7 @@ Further information is available in the Kubernetes on Mesos [contrib directory][
- A running [Mesos cluster on Google Compute Engine][5]
- A [VPN connection][10] to the cluster
- A machine in the cluster which should become the Kubernetes *master node* with:
- GoLang > 1.2
- Go (see [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/devel/development.md#go-versions) for required versions)
- make (i.e. build-essential)
- Docker
@ -66,7 +66,7 @@ Start etcd and verify that it is running:
```shell
sudo docker run -d --hostname $(uname -n) --name etcd \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.0.12 \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.2.1 \
--listen-client-urls http://0.0.0.0:4001 \
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
```
@ -74,7 +74,7 @@ sudo docker run -d --hostname $(uname -n) --name etcd \
```shell
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd7bac9e2301 quay.io/coreos/etcd:v2.0.12 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
fd7bac9e2301 quay.io/coreos/etcd:v2.2.1 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
```
It's also a good idea to ensure your etcd instance is reachable by testing it
@ -147,12 +147,6 @@ disown -a
#### Validate KM Services
Add the appropriate binary folder to your `PATH` to access kubectl:
```shell
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
Interact with the kubernetes-mesos framework via `kubectl`:
```shell

View File

@ -0,0 +1,3 @@
assignees:
- jdef
- karlkfi

Binary file not shown.

After

Width:  |  Height:  |  Size: 87 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

View File

@ -20,7 +20,7 @@ Once the Kubernetes template is available it is possible to start instantiating
[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
[install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines
[generate a template]: http://www.ovirt.org/Quick_Start_Guide#Using_Templates
[install the ovirt-guest-agent]: http://www.ovirt.org/How_to_install_the_guest_agent_in_Fedora
[install the ovirt-guest-agent]: http://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
## Using the oVirt Cloud Provider

View File

@ -21,7 +21,7 @@ The current cluster design is inspired by:
1. Python2.7
2. You need to have both `nova` and `swiftly` installed. It's recommended to use a python virtualenv to install these packages into.
3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova) for more details.
3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details.
## Provider: Rackspace
@ -44,7 +44,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
- flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
4. We then boot as many nodes as defined via `$NUM_MINIONS`.
4. We then boot as many nodes as defined via `$NUM_NODES`.
## Some notes

View File

@ -14,12 +14,90 @@ We still have [a bunch of work](http://issue.k8s.io/8262) to do to make the expe
- Note that for rkt versions later than v0.7.0, the `metadata service` is not required for running pods in private networks, so rkt pods will no longer register with the metadata service by default.
- Since release [v1.2.0-alpha.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.2.0-alpha.5),
the [rkt API service](https://github.com/coreos/rkt/blob/master/api/v1alpha/README.md)
must be running on the node.
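If your node does not already run it as a systemd unit, a minimal sketch of starting it by hand (the listen address shown is rkt's documented default) is:

```shell
$ sudo rkt api-service --listen=localhost:15441 &
```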
### Network Setup
rkt uses the [Container Network Interface (CNI)](https://github.com/appc/cni)
to manage container networking. By default, all pods attempt to join a network
called `rkt.kubernetes.io`, which is currently defined [in `rkt.go`](https://github.com/kubernetes/kubernetes/blob/v1.2.0-alpha.6/pkg/kubelet/rkt/rkt.go#L91).
In order for pods to get correct IP addresses, the CNI config file must be
edited to add this `rkt.kubernetes.io` network:
#### Using flannel
In addition to the basic prerequisites above, each node must be running
a [flannel](https://github.com/coreos/flannel) daemon. This implies
that a flannel-supporting etcd service must be available to the cluster
as well, apart from the Kubernetes etcd, which will not yet be
available at flannel configuration time. Once it's running, flannel can
be set up with a CNI config like:
```shell
$ cat <<EOF >/etc/rkt/net.d/k8s_cluster.conf
{
  "name": "rkt.kubernetes.io",
  "type": "flannel"
}
EOF
```
While `k8s_cluster.conf` is a rather arbitrary name for the config file itself,
and can be adjusted to suit local conventions, the keys and values should be exactly
as shown above. `name` must be `rkt.kubernetes.io` and `type` should be `flannel`.
More details about the flannel CNI plugin can be found
[in the CNI documentation](https://github.com/appc/cni/blob/master/Documentation/flannel.md).
#### On GCE
Each VM on GCE has an additional 256 IP addresses routed to it, so
it is possible to forego flannel in smaller clusters. This makes the
necessary CNI config file a bit more verbose:
```shell
$ cat <<EOF >/etc/rkt/net.d/k8s_cluster.conf
{
  "name": "rkt.kubernetes.io",
  "type": "bridge",
  "bridge": "cbr0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.255.228.1/24",
    "gateway": "10.255.228.1"
  },
  "routes": [
    { "dst": "0.0.0.0/0" }
  ]
}
EOF
```
This example creates a `bridge` plugin configuration for the CNI network, specifying
the bridge name `cbr0`. It also specifies the CIDR, in the `ipam` field.
Creating these files for any moderately-sized cluster is at best inconvenient.
Work is in progress to [enable Kubernetes to use the CNI by default](https://github.com/kubernetes/kubernetes/pull/18795/files).
As that work matures, such manual CNI config munging will become unnecessary
for primary use cases. For early adopters, an initial example shows one way to
[automatically generate these CNI configurations](https://gist.github.com/yifan-gu/fbb911db83d785915543)
for rkt.
### Local cluster
To use rkt as the container runtime, we need to supply `--container-runtime=rkt` and `--rkt-path=$PATH_TO_RKT_BINARY` to kubelet. Additionally we can provide `--rkt-stage1-image` flag
as well to select which [stage1 image](https://github.com/coreos/rkt/blob/master/Documentation/running-lkvm-stage1.md) we want to use.
To use rkt as the container runtime, we need to supply the following flags to kubelet:
If you are using the [hack/local-up-cluster.sh](https://releases.k8s.io/{{page.githubbranch}}/hack/local-up-cluster.sh) script to launch the local cluster, then you can edit the environment variable `CONTAINER_RUNTIME`, `RKT_PATH` and `RKT_STAGE1_IMAGE` to
- `--container-runtime=rkt` chooses the container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.
- `--rkt-path=$PATH_TO_RKT_BINARY` sets the path of rkt binary. Leave empty to use the first rkt in $PATH.
- `--rkt-stage1-image` sets the path of the stage1 image. Local paths and http/https URLs are supported. Leave empty to use the 'stage1.aci' located in the same directory as the rkt binary.
If you are using the [hack/local-up-cluster.sh](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/hack/local-up-cluster.sh) script to launch the local cluster, then you can edit the environment variable `CONTAINER_RUNTIME`, `RKT_PATH` and `RKT_STAGE1_IMAGE` to
set these flags:
```shell
@ -40,21 +118,21 @@ To use rkt as the container runtime for your CoreOS cluster on GCE, you need to
```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_GCE_NODE_IMAGE=<image_id>
$ export KUBE_GCE_NODE_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
$ export KUBE_RKT_VERSION=0.15.0
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
$ cluster/kube-up.sh
```
Note that we are still working on making all the containerized master components run smoothly in rkt. Until that is done, we are not able to run the master node with rkt yet.
@ -96,16 +174,19 @@ See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
### Different UX with rkt container runtime
rkt and Docker have very different designs, and the ACI and Docker image formats differ as well. Users may notice differences in behavior when switching from one to the other. More information can be found [here](/docs/getting-started-guides/rkt/notes/).
### Debugging
Here are severals tips for you when you run into any issues.
Here are several tips for when you run into issues.
##### Check logs
By default, the log verbose level is 2. In order to see more logs related to rkt, we can set the verbose level to 4.
For local cluster, we can set the environment variable: `LOG_LEVEL=4`.
If the cluster is using salt, we can edit the [logging.sls](https://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/pillar/logging.sls) in the saltbase.
If the cluster is using salt, we can edit the [logging.sls](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster/saltbase/pillar/logging.sls) in the saltbase.
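For example, for a local cluster (a sketch combining the flags mentioned above):

```shell
# Launch the local cluster with the rkt runtime and verbose logging.
LOG_LEVEL=4 CONTAINER_RUNTIME=rkt hack/local-up-cluster.sh
```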
##### Check rkt pod status

View File

@ -0,0 +1,99 @@
---
---
# Notes on Different UX with rkt container runtime
### Doesn't support ENTRYPOINT + CMD feature
To run a Docker image, rkt will convert it into [App Container Image (ACI) format](https://github.com/appc/spec/blob/master/SPEC.md) first.
However, during the conversion, the `ENTRYPOINT` and `CMD` are concatenated to construct the ACI's `Exec` field.
This means that after the conversion, we cannot replace only the `ENTRYPOINT` or only the `CMD` without affecting the other.
For now, we recommend that users specify the **executable path** in `Command` and the **arguments** in `Args`.
(Specifying the **executable path + arguments** entirely in `Command` or entirely in `Args` has the same effect.)
For example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
The above pod yaml file is valid as it's not specifying `Command` or `Args`, so the default `ENTRYPOINT` and `CMD` of the image will be used.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - /bin/sleep
    - "1000"
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - /bin/sleep
    args:
    - "1000"
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sleep
    - "1000"
```
All the three examples above are valid as they contain both the executable path and the arguments.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - "1000"
```
The last example is invalid, as we cannot override the image's `CMD` alone.

View File

@ -3,7 +3,7 @@
This guide is for people who want to craft a custom Kubernetes cluster. If you
can find an existing Getting Started Guide that meets your needs on [this
list](/docs/getting-started-guides/README/), then we recommend using it, as you will be able to benefit
list](/docs/getting-started-guides/), then we recommend using it, as you will be able to benefit
from the experience of others. However, if you have specific IaaS, networking,
configuration management, or operating system requirements not met by any of
those guides, then this guide will provide an outline of the steps you need to
@ -170,9 +170,9 @@ You have several choices for Kubernetes images:
command like `docker images`
For etcd, you can:
- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.0.12`
- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.0`
- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.2.1`
- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.1`
- Use etcd binary included in your OS distro.
- Build your own image
- You can do: `cd kubernetes/cluster/images/etcd; make`
@ -208,14 +208,14 @@ You need to prepare several certs:
- The kubelets optionally need certs to identify themselves as clients of the master, and when
serving its own API over HTTPS.
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs:
- See function `create-certs` in `cluster/gce/util.sh`
- See also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh` and
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs.
- see function `create-certs` in `cluster/gce/util.sh`
- see also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh` and
`cluster/saltbase/salt/generate-cert/make-cert.sh`
You will end up with the following files (we will use these variables later on):
You will end up with the following files (we will use these variables later on)
- `CA_CERT`
- put in on node where apiserver runs, in e.g. `/srv/kubernetes/ca.crt`.
- `MASTER_CERT`
@ -356,7 +356,7 @@ The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/
[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The
minimum version required to match rkt v0.5.6 is
[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903).
[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
[rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking.md) is also required
for rkt networking support. You can start rkt metadata service by using command like
@ -378,7 +378,7 @@ Arguments to consider:
- Otherwise, if taking the firewall-based security approach
- `--api-servers=http://$MASTER_IP`
- `--config=/etc/kubernetes/manifests`
- `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Addons](#starting-addons).)
- `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).)
- `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses.
- `--docker-root=`
- `--root-dir=`
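Put together, a kubelet invocation for the firewall-based approach might look like this sketch (addresses and values are illustrative):

```shell
kubelet \
  --api-servers=http://$MASTER_IP \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local
```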
@ -411,35 +411,34 @@ this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
because of how this is used later.
Recommended, automatic approach:
- Recommended, automatic approach:
1. Set `--configure-cbr0=true` option in kubelet init script and restart kubelet service. Kubelet will configure cbr0 automatically.
It will wait to do this until the node controller has set Node.Spec.PodCIDR. Since you have not set up the apiserver and node controller
yet, the bridge will not be set up immediately.
- Alternate, manual approach:
Alternate, manual approach:
1. Set `--configure-cbr0=false` on kubelet and restart.
1. Create a bridge
- e.g. `brctl addbr cbr0`.
1. Set appropriate MTU
- `ip link set dev cbr0 mtu 1460` (NOTE: the actual value of MTU will depend on your network environment)
1. Add the clusters network to the bridge (docker will go on other side of bridge).
- e.g. `ip addr add $NODE_X_BRIDGE_ADDR dev eth0`
- `brctl addbr cbr0`.
1. Set appropriate MTU. NOTE: the actual value of MTU will depend on your network environment
- `ip link set dev cbr0 mtu 1460`
1. Add the node's network to the bridge (docker will go on other side of bridge).
- `ip addr add $NODE_X_BRIDGE_ADDR dev cbr0`
1. Turn it on
- e.g. `ip link set dev cbr0 up`
- `ip link set dev cbr0 up`
If you have turned off Docker's IP masquerading to allow pods to talk to each
other, then you may need to do masquerading just for destination IPs outside
the cluster network. For example:
```shell
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE \! -d ${CLUSTER_SUBNET}
iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE
```
This will rewrite the source address from
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
[connection tracking](http://www.iptables.info/en/connection-state)
[connection tracking](http://www.iptables.info/en/connection-state.html)
will ensure that responses destined to the node still reach
the pod.
@ -786,17 +785,29 @@ If you have selected the `--register-node=true` option for kubelets, they will n
You should soon be able to see all your nodes by running the `kubectl get nodes` command.
Otherwise, you will need to manually create node objects.
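If you do need to register nodes manually, a minimal sketch (node name and label are illustrative) is:

```shell
# Register a node object by hand; the scheduler will then consider it.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    kubernetes.io/hostname: node-1
EOF
```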
### Logging
### Starting Cluster Services
**TODO** talk about starting Logging.
You will want to complete your Kubernetes cluster by adding cluster-wide
services. These are sometimes called *addons*, and [an overview
of their purpose is in the admin guide](/docs/admin/cluster-components/#addons).
### Monitoring
Notes for setting up each cluster service are given below:
**TODO** talk about starting Monitoring.
### DNS
**TODO** talk about starting DNS.
* Cluster DNS:
* required for many kubernetes examples
* [Setup instructions](http://releases.k8s.io/release-1.2/cluster/addons/dns/)
* [Admin Guide](../admin/dns.md)
* Cluster-level Logging
* Multiple implementations with different storage backends and UIs.
* [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/release-1.2/cluster/addons/fluentd-elasticsearch/)
* [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/release-1.2/cluster/addons/fluentd-gcp/).
* Both require running fluentd on each node.
* [User Guide](../user-guide/logging.md)
* Container Resource Monitoring
* [Setup instructions](http://releases.k8s.io/release-1.2/cluster/addons/cluster-monitoring/)
* GUI
* [Setup instructions](http://releases.k8s.io/release-1.2/cluster/addons/kube-ui/)
## Troubleshooting

View File

@ -1,273 +1,465 @@
---
---
This document describes how to deploy Kubernetes on Ubuntu bare metal nodes with Calico Networking plugin. See [projectcalico.org](http://projectcalico.org) for more information on what Calico is, and [the calicoctl github](https://github.com/projectcalico/calico-docker) for more information on the command-line tool, `calicoctl`.
This document describes how to deploy Kubernetes with Calico networking from scratch on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
This guide will set up a simple Kubernetes cluster with a master and two nodes. We will start the following processes with systemd:
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
On the Master:
This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We'll run Calico's etcd cluster on the master and install the Calico daemon on the master and nodes.
- `etcd`
- `kube-apiserver`
- `kube-controller-manager`
- `kube-scheduler`
- `calico-node`
## Prerequisites and Assumptions
On each Node:
- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively, as do a number of other Linux distributions.
- All machines should have Docker >= 1.7.0 installed.
- To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
- All machines should have connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
- This guide assumes that none of the hosts have been configured with any Kubernetes or Calico software.
- This guide will set up a secure, TLS-authenticated API server.
- `kube-proxy`
- `kube-kubelet`
- `calico-node`
## Set up the master
## Prerequisites
### Configure TLS
1. This guide uses `systemd` and thus uses Ubuntu 15.04 which supports systemd natively.
2. All machines should have the latest docker stable version installed. At the time of writing, that is Docker 1.7.0.
- To install docker, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
3. All hosts should be able to communicate with each other, as well as the internet, to download the necessary files.
4. This demo assumes that none of the hosts have been configured with any Kubernetes or Calico software yet.
The master requires the root CA public key, `ca.pem`; the apiserver certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`.
## Setup Master
1. Create the file `openssl.cnf` with the following contents.
First, get the sample configurations for this tutorial
```conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = ${MASTER_IPV4}
```
```shell
wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz
tar -xvf master.tar.gz
```
> Replace ${MASTER_IPV4} with the Master's IP address on which the Kubernetes API will be accessible.
### Setup environment variables for systemd services on Master
2. Generate the necessary TLS assets.
Many of the sample systemd services provided rely on environment variables on a per-node basis. Here we'll edit those environment variables and move them into place.
```shell
# Generate the root CA.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
1.) Copy the network-environment-template from the `master` directory for editing.
# Generate the API server keypair.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
```
```shell
cp calico-kubernetes-ubuntu-demo-master/master/network-environment-template network-environment
```
3. You should now have the following three files: `ca.pem`, `apiserver.pem`, and `apiserver-key.pem`. Send the three files to your master host (using `scp` for example).
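For example (user and host are illustrative):

```shell
$ scp ca.pem apiserver.pem apiserver-key.pem user@<MASTER_IPV4>:
```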
4. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
2.) Edit `network-environment` to represent your current host's settings.
```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```
3.) Move the `network-environment` into `/etc`
### Install Kubernetes on the Master
```shell
sudo mv -f network-environment /etc
```
We'll use the `kubelet` to bootstrap the Kubernetes master.
### Install Kubernetes on Master
1. Download and install the `kubelet` and `kubectl` binaries:
1.) Build & Install Kubernetes binaries
```shell
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet /usr/bin/kubectl
```
```shell
# Get the Kubernetes Source
wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.3/kubernetes.tar.gz
2. Install the `kubelet` systemd unit file and start the `kubelet`:
# Untar it
tar -xf kubernetes.tar.gz
tar -xf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
kubernetes/cluster/ubuntu/build.sh
```shell
# Install the unit file
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubelet.service
# Add binaries to /usr/bin
sudo cp -f binaries/master/* /usr/bin
sudo cp -f binaries/kubectl /usr/bin
```
# Enable the unit file so that it runs on boot
sudo systemctl enable /etc/systemd/kubelet.service
2.) Install the sample systemd processes settings for launching kubernetes services
# Start the kubelet service
sudo systemctl start kubelet.service
```
```shell
sudo cp -f calico-kubernetes-ubuntu-demo-master/master/*.service /etc/systemd
sudo systemctl enable /etc/systemd/etcd.service
sudo systemctl enable /etc/systemd/kube-apiserver.service
sudo systemctl enable /etc/systemd/kube-controller-manager.service
sudo systemctl enable /etc/systemd/kube-scheduler.service
```
3. Download and install the master manifest file, which will start the Kubernetes master services automatically:
3.) Launch the processes.
```shell
sudo mkdir -p /etc/kubernetes/manifests
sudo wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest
```
```shell
sudo systemctl start etcd.service
sudo systemctl start kube-apiserver.service
sudo systemctl start kube-controller-manager.service
sudo systemctl start kube-scheduler.service
```
4. Check the progress by running `docker ps`. After a while, you should see the `etcd`, `apiserver`, `controller-manager`, `scheduler`, and `kube-proxy` containers running.
### Install Calico on Master
> Note: it may take some time for all the containers to start. Don't worry if `docker ps` doesn't show any containers for a while or if some containers start before others.
In order to allow the master to route to pods on our nodes, we will launch the calico-node daemon on our master. This will allow it to learn routes over BGP from the other calico-node daemons in the cluster. The docker daemon should already be running before calico is started.
### Install Calico's etcd on the master
```shell
# Install the calicoctl binary, which will be used to launch calico
wget https://github.com/projectcalico/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x calicoctl
sudo cp -f calicoctl /usr/bin
Calico needs its own etcd cluster to store its state. In this guide we install a single-node cluster on the master server.
# Install and start the calico service
sudo cp -f calico-kubernetes-ubuntu-demo-master/master/calico-node.service /etc/systemd
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
> Note: In a production deployment we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd for simplicity.
> Note: calico-node may take a few minutes on first boot while it downloads the calico-node docker image.
1. Download the template manifest file:
## Setup Nodes
```
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
```
Perform these steps **once on each node**, ensuring you appropriately set the environment variables on each node.
2. Replace all instances of `<MASTER_IPV4>` in the `calico-etcd.manifest` file with your master's IP address.
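For example, with `sed` (a sketch; assumes `MASTER_IPV4` holds your master's address):

```shell
$ sed -i "s/<MASTER_IPV4>/${MASTER_IPV4}/g" calico-etcd.manifest
```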
### Setup environment variables for systemd services on the Node
3. Then, move the file to the `/etc/kubernetes/manifests` directory:
1.) Get the sample configurations for this tutorial
```shell
sudo mv -f calico-etcd.manifest /etc/kubernetes/manifests
```
```shell
wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz
tar -xvf master.tar.gz
```
### Install Calico on the master
2.) Copy the network-environment-template from the `node` directory
We need to install Calico on the master. This allows the master to route packets to the pods on other nodes.
```shell
cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment
```
1. Install the `calicoctl` tool:
3.) Edit `network-environment` to represent your current host's settings.
```shell
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
```
4.) Move `network-environment` into `/etc`
2. Prefetch the calico/node container (this ensures that the Calico service starts immediately when we enable it):
```shell
sudo mv -f network-environment /etc
```
```
sudo docker pull calico/node:v0.15.0
```
### Configure Docker on the Node
3. Download the `network-environment` template from the `calico-kubernetes` repository:
#### Create the veth
```
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
```
Instead of using docker's default interface (docker0), we will configure a new one to use the desired IP ranges.
4. Edit `network-environment` to represent this node's settings:
```shell
sudo apt-get install -y bridge-utils
sudo brctl addbr cbr0
sudo ifconfig cbr0 up
sudo ifconfig cbr0 <IP>/24
```
- Replace `<KUBERNETES_MASTER>` with the IP address of the master. This should be the source IP address used to reach the Kubernetes worker nodes.
> Replace \<IP\> with the subnet for this host's containers. Example topology:
5. Move `network-environment` into `/etc`:
Node | cbr0 IP
------- | -------------
node-1 | 192.168.1.1/24
node-2 | 192.168.2.1/24
node-X | 192.168.X.1/24
```shell
sudo mv -f network-environment /etc
```
#### Start docker on cbr0
6. Install, enable, and start the `calico-node` service:
The Docker daemon must be started and told to use the already configured cbr0 instead of the usual docker0, with IP masquerading and modification of iptables disabled.
```shell
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
1.) Edit the ubuntu-15.04 docker.service for systemd at: `/lib/systemd/system/docker.service`
## Set up the nodes
2.) Find the line that reads `ExecStart=/usr/bin/docker -d -H fd://` and append the following flags: `--bridge=cbr0 --iptables=false --ip-masq=false`
The following steps should be run on each Kubernetes node.
3.) Reload systemctl and restart docker.
### Configure TLS
```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`. We've already generated
`ca.pem` for use on the Master. The worker public/private keypair should be generated for each Kubernetes node.
### Install Calico on the Node
1. Create the file `worker-openssl.cnf` with the following contents.
1.) Install Calico
```conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
```
```shell
# Get the calicoctl binary
wget https://github.com/projectcalico/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x calicoctl
sudo cp -f calicoctl /usr/bin
2. Generate the necessary TLS assets for this worker. This relies on the worker's IP address, and the `ca.pem` file generated earlier in the guide.
# Start calico on this node
sudo cp calico-kubernetes-ubuntu-demo-master/node/calico-node.service /etc/systemd
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
```shell
# Export this worker's IP address.
export WORKER_IP=<WORKER_IPV4>
```
> The calico-node service will automatically get the kubernetes-calico plugin binary and install it on the host system.
```shell
# Generate keys.
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
```
2.) Use calicoctl to add an IP pool. We must specify the IP and port that the master's etcd is listening on.
**NOTE: This step only needs to be performed once per Kubernetes deployment, as it covers all the nodes' IP ranges.**
3. Send the three files (`ca.pem`, `worker.pem`, and `worker-key.pem`) to the host (using scp, for example).
```shell
ETCD_AUTHORITY=<MASTER_IP>:4001 calicoctl pool add 192.168.0.0/16
```
4. Move the files to the `/etc/kubernetes/ssl` folder with the appropriate permissions:
```
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem
# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
```
### Configure the kubelet worker
1. With your certs in place, create a kubeconfig for worker authentication in `/etc/kubernetes/worker-kubeconfig.yaml`; replace `<KUBERNETES_MASTER>` with the IP address of the master:
```
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://<KUBERNETES_MASTER>:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
```
### Install Calico on the node
On your compute nodes, it is important that you install Calico before Kubernetes. We'll install Calico using the provided `calico-node.service` systemd unit file:
1. Install the `calicoctl` binary:
```shell
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
```
2. Fetch the calico/node container:
```shell
sudo docker pull calico/node:v0.15.0
```
3. Download the `network-environment` template from the `calico-cni` repository:
```shell
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/network-environment-template
```
4. Edit `network-environment` to represent this node's settings:
- Replace `<DEFAULT_IPV4>` with the IP address of the node.
- Replace `<KUBERNETES_MASTER>` with the IP or hostname of the master.
5. Move `network-environment` into `/etc`:
```shell
sudo mv -f network-environment /etc
```
6. Install the `calico-node` service:
```shell
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
7. Install the Calico CNI plugins:
```shell
sudo mkdir -p /opt/cni/bin/
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam
sudo chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
```
8. Create a CNI network configuration file, which tells Kubernetes to create a network named `calico-k8s-network` and to use the calico plugins for that network. Create file `/etc/cni/net.d/10-calico.conf` with the following contents, replacing `<KUBERNETES_MASTER>` with the IP of the master (this file should be the same on each node):
```shell
# Make the directory structure.
mkdir -p /etc/cni/net.d
# Make the network configuration file
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
  "name": "calico-k8s-network",
  "type": "calico",
  "etcd_authority": "<KUBERNETES_MASTER>:6666",
  "log_level": "info",
  "ipam": {
    "type": "calico-ipam"
  }
}
EOF
```
Since this is the only network we create, it will be used by default by the kubelet.
9. Verify that Calico started correctly:
```shell
calicoctl status
```
should show that Felix (Calico's per-node agent) is running, and there should be a BGP status line for each other node that you've configured and the master. The "Info" column should show "Established":
```
$ calicoctl status
calico-node container is running. Status: Up 15 hours
Running felix version 1.3.0rc5
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| Peer address | Peer type | State | Since | Info |
+---------------+-------------------+-------+----------+-------------+
| 172.18.203.41 | node-to-node mesh | up | 17:32:26 | Established |
| 172.18.203.42 | node-to-node mesh | up | 17:32:25 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
```
If the "Info" column shows "Active" or some other value then Calico is having difficulty connecting to the other host. Check the IP address of the peer is correct and check that Calico is using the correct local IP address (set in the `network-environment` file above).
### Install Kubernetes on the Node
1.) Build & Install Kubernetes binaries
1. Download and Install the kubelet binary:
```shell
# Get the Kubernetes Source
wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.3/kubernetes.tar.gz
```shell
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet
```
# Untar it
tar -xf kubernetes.tar.gz
tar -xf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
kubernetes/cluster/ubuntu/build.sh
2. Install the `kubelet` systemd unit file:
# Add binaries to /usr/bin
sudo cp -f binaries/minion/* /usr/bin
```shell
# Download the unit file.
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kubelet.service
# Get the iptables-based kube-proxy recommended for this demo
wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy
sudo cp kube-proxy /usr/bin/
sudo chmod +x /usr/bin/kube-proxy
```
# Enable and start the unit files so that they run on boot
sudo systemctl enable /etc/systemd/kubelet.service
sudo systemctl start kubelet.service
```
2.) Install and launch the sample systemd processes settings for launching Kubernetes services
3. Download the `kube-proxy` manifest:
```shell
sudo cp calico-kubernetes-ubuntu-demo-master/node/kube-proxy.service /etc/systemd/
sudo cp calico-kubernetes-ubuntu-demo-master/node/kube-kubelet.service /etc/systemd/
sudo systemctl enable /etc/systemd/kube-proxy.service
sudo systemctl enable /etc/systemd/kube-kubelet.service
sudo systemctl start kube-proxy.service
sudo systemctl start kube-kubelet.service
```
```shell
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kube-proxy.manifest
```
> *You may want to check their status afterwards to ensure everything is running.*
4. In that file, replace `<KUBERNETES_MASTER>` with your master's IP. Then move it into place:
```shell
sudo mkdir -p /etc/kubernetes/manifests/
sudo mv kube-proxy.manifest /etc/kubernetes/manifests/
```
## Configure kubectl remote access
To administer your cluster from a separate host (e.g. your laptop), you will need the root CA generated earlier, as well as an admin public/private keypair (`ca.pem`, `admin.pem`, `admin-key.pem`). Run the following steps on the machine which you will use to control your cluster.
1. Download the kubectl binary.
```shell
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
sudo chmod +x /usr/bin/kubectl
```
2. Generate the admin public/private keypair.
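A sketch of this step, mirroring the worker keypair generation above (a client-only cert needs no subjectAltName, so no config file is required):

```shell
# Generate the admin keypair and sign it with the root CA.
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
```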
3. Export the necessary variables, substituting in correct values for your machine.
```shell
# Export the appropriate paths.
export CA_CERT_PATH=<PATH_TO_CA_PEM>
export ADMIN_CERT_PATH=<PATH_TO_ADMIN_PEM>
export ADMIN_KEY_PATH=<PATH_TO_ADMIN_KEY_PEM>
# Export the Master's IP address.
export MASTER_IPV4=<MASTER_IPV4>
```
4. Configure your host `kubectl` with the admin credentials:
```shell
kubectl config set-cluster calico-cluster --server=https://${MASTER_IPV4} --certificate-authority=${CA_CERT_PATH}
kubectl config set-credentials calico-admin --certificate-authority=${CA_CERT_PATH} --client-key=${ADMIN_KEY_PATH} --client-certificate=${ADMIN_CERT_PATH}
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
Check your work with `kubectl get nodes`, which should succeed and display the nodes.
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. For more on DNS service discovery, check [here](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns).
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. This step makes use of the kubectl configuration made above.
The config repository for this guide comes with manifest files to start the DNS addon. To install DNS, do the following on your Master node.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```
Replace `<MASTER_IP>` in `calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml` with your Master's IP address. Then, create `skydns-rc.yaml` and `skydns-svc.yaml` using `kubectl create -f <FILE>`.
## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) to set up other services on your cluster.
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.

Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:

```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```
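To sanity-check outbound connectivity from a container, one approach (a sketch; `nat-test` is a hypothetical name) is:
```shell
# Start a throwaway busybox pod, find its generated name, then ping out from it.
kubectl run nat-test --image=busybox -- sleep 3600
kubectl get pods | grep nat-test
kubectl exec <nat-test-pod-name> -- ping -c 3 8.8.8.8
```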
### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).

View File

@ -6,6 +6,12 @@ in the given examples. You can scale to **any number of nodes** by changing some
The original idea was heavily inspired by @jainvipin's ubuntu single node
work, which has been merged into this document.
The scripting referenced here can be used to deploy Kubernetes with
networking based either on Flannel or on a CNI plugin that you supply.
This document is focused on the Flannel case. See
`kubernetes/cluster/ubuntu/config-default.sh` for remarks on how to
use a CNI plugin instead.
The [Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
* TOC
@ -14,71 +20,82 @@ work, which has been merge into this document.
## Prerequisites
1. Each node has Docker version 1.2+ and `bridge-utils` installed to manipulate the Linux bridge.
2. All machines can communicate with each other. The master node needs to be connected to the
Internet to download the necessary files, while worker nodes do not.
3. This guide is tested on Ubuntu 14.04 LTS 64-bit server; it does not work with
Ubuntu 15, which uses systemd instead of upstart.
4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.4; it may work with higher versions.
5. All the remote servers can be logged into without a password by using SSH key authentication, as sketched below.
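A minimal sketch for item 5, assuming OpenSSH (run once from the machine driving the deployment, repeating `ssh-copy-id` for every node):
```shell
ssh-keygen -t rsa -b 4096
ssh-copy-id vcap@10.10.103.250
```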
## Starting a Cluster
### Set up working directory
Clone the kubernetes github repo locally:
```shell
$ git clone https://github.com/kubernetes/kubernetes.git
```
#### Configure and start the Kubernetes cluster

The startup process will first download all the required binaries automatically.
By default the etcd version is 2.2.1, the flannel version is 0.5.5, and the k8s version is 1.1.4.
You can customize the etcd, flannel, and k8s versions by changing the corresponding variables
`ETCD_VERSION`, `FLANNEL_VERSION` and `KUBE_VERSION`, like the following:
```shell
$ export KUBE_VERSION=1.0.5
$ export FLANNEL_VERSION=0.5.0
$ export ETCD_VERSION=2.2.0
```
**Note**
For users who want to bring up a cluster with k8s version v1.1.1, `controller manager` may fail to start
due to [a known issue](https://github.com/kubernetes/kubernetes/issues/17109). You can bring it
up manually by using the following command on the remote master server. Note that
you should do this only after `api-server` is up. Moreover, this issue is fixed in v1.1.2 and later.
```shell
$ sudo service kube-controller-manager start
```
Note that we use flannel here to set up the overlay network, yet it's optional. You can actually build the k8s
cluster natively, or use flannel, Open vSwitch, or any other SDN tool you like.
An example cluster is listed below:
```shell
| IP Address | Role |
|-------------|----------|
|10.10.103.223| node |
|10.10.103.162| node |
|10.10.103.250| both master and node|
```
First configure the cluster information in `cluster/ubuntu/config-default.sh`; the following is a simple sample.
```shell
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export role="ai i i"
export NUM_NODES=${NUM_NODES:-3}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
```
The first variable `nodes` defines all your cluster nodes, with the master node first,
separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`.

Then the `role` variable defines the role of each machine above, in the same order: "ai" means the machine
acts as both master and node, "a" stands for master, and "i" stands for node.
The `NUM_NODES` variable defines the total number of nodes.
The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure
you define a valid private IP range here, because some IaaS providers may reserve private IPs.
@ -95,29 +112,32 @@ that conflicts with your own private network range.
The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network,
and should not conflict with the above `SERVICE_CLUSTER_IP_RANGE`.
You can optionally provide additional Flannel network configuration
through `FLANNEL_OTHER_NET_CONFIG`, as explained in `cluster/ubuntu/config-default.sh`.
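For example, a hypothetical sketch (check the remarks in `cluster/ubuntu/config-default.sh` for the exact format expected before relying on this) that asks flannel for the vxlan backend:
```shell
# Assumed format: a JSON fragment spliced into flannel's network config.
export FLANNEL_OTHER_NET_CONFIG=', "Backend": {"Type": "vxlan"}'
```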
**Note:** When deploying, the master needs to be connected to the Internet to download the necessary files.
If your machines are located in a private network that needs a proxy to reach the Internet,
you can set the config `PROXY_SETTING` in cluster/ubuntu/config-default.sh such as:
PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"
After all the above variables have been set correctly, use the following command in the `cluster/` directory to
bring up the whole cluster.
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
```
The scripts automatically copy binaries and config files to all the machines via `scp` and start the kubernetes
services on them. The only thing you need to do is to type the sudo password when prompted.
```shell
Deploying node on machine 10.10.103.223
...
[sudo] password to start node:
```
If everything works correctly, you will see the following message from the console, indicating the k8s cluster is up.
```shell
Cluster validation succeeded
@ -125,7 +145,7 @@ Cluster validation succeeded
### Test it out
You can use the `kubectl` command to check if the newly created cluster is working correctly.

The `kubectl` binary is under the `cluster/ubuntu/binaries` directory.
You can add it to your PATH, then use the commands below smoothly.
@ -139,7 +159,7 @@ NAME LABELS STATUS
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
You can also run the Kubernetes [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to build a redis backend cluster.
### Deploy addons
@ -185,77 +205,79 @@ We are working on these features which we'd like to let everybody know:
to eliminate OS-distro differences.
2. Tearing Down scripts: clear and re-create the whole stack by one click.
### Troubleshooting
Generally, what this approach does is quite simple:
1. Download and copy binaries and configuration files to the proper directories on every node.
2. Configure `etcd` for the master node using IPs based on input from the user.
3. Create and start the flannel network for worker nodes.
So if you encounter a problem, check the etcd configuration of the master node first. Please try:
1. Check `/var/log/upstart/etcd.log` for suspicious etcd logs.
2. You may find the following commands useful; the former brings down the cluster, while the latter starts it again.

```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
```
3. You can also customize your own settings in `/etc/default/{component_name}` and restart it via
`$ sudo service {component_name} restart`.
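For example, a sketch (the file layout and `KUBELET_OPTS` variable are assumptions; inspect the file on your node first):
```shell
# Append a flag to the kubelet options, then restart the service.
sudo sed -i 's|KUBELET_OPTS="|KUBELET_OPTS="--max-pods=60 |' /etc/default/kubelet
sudo service kubelet restart
```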
## Upgrading a Cluster
If you already have a kubernetes cluster and want to upgrade to a new version,
you can use the following command in the `cluster/` directory to update the whole cluster
or a specified node to a new version.
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>
```
It can be done for all components (by default), the master (`-m`), or a specified node (`-n`).
Upgrading a single node is currently experimental.

If the version is not specified, the script will try to use local binaries. You should ensure all
the binaries are well prepared in the expected directory path `cluster/ubuntu/binaries`.
```shell
$ tree cluster/ubuntu/binaries
binaries/
├── kubectl
├── master
│   ├── etcd
│   ├── etcdctl
│   ├── flanneld
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
└── minion
├── flanneld
├── kubelet
└── kube-proxy
```
You can use the following command to get help.
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h
```
Here are some examples:
* upgrade master to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -m 1.0.5`
* upgrade node `vcap@10.10.103.223` to version 1.0.5 : `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -n 10.10.103.223 1.0.5`
* upgrade master and all nodes to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh 1.0.5`
The script will not delete any resources of your cluster; it just replaces the binaries.
### Test it out
You can use the `kubectl` command to check if the newly upgraded kubernetes cluster is working correctly.
To make sure the version of the upgraded cluster is what you expect, you will find these commands helpful.
* upgrade all components or master: `$ kubectl version`. Check the *Server Version*.
* upgrade node `vcap@10.10.103.223`: `$ ssh -t vcap@10.10.103.223 'cd /opt/bin && sudo ./kubelet --version'`

View File

@ -8,9 +8,9 @@ Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
### Prerequisites
1. Install the latest version (>= 1.7.4) of vagrant from http://www.vagrantup.com/downloads.html
2. Install one of:
1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
@ -36,11 +36,11 @@ export KUBERNETES_PROVIDER=vagrant
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space).
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
@ -54,14 +54,14 @@ To access the master or any node:
```shell
vagrant ssh master
vagrant ssh node-1
```
If you are running more than one node, you can access the others by:
```shell
vagrant ssh node-2
vagrant ssh node-3
```
Each node in the cluster installs the docker daemon and the kubelet.
@ -88,7 +88,7 @@ To view the service status and/or logs on the kubernetes-master:
To view the services on any of the nodes:
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh node-1
[vagrant@kubernetes-master ~] $ sudo su
[root@kubernetes-master ~] $ systemctl status kubelet
@ -205,8 +205,8 @@ my-nginx-xql4j 0/1 Pending 0 10s
You need to wait for the provisioning to complete; you can monitor the nodes by doing:
```shell
$ vagrant ssh node-1 -c 'sudo docker images'
kubernetes-node-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
@ -216,8 +216,8 @@ kubernetes-minion-1:
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```shell
$ vagrant ssh node-1 -c 'sudo docker ps'
kubernetes-node-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
@ -236,7 +236,10 @@ my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.0.0.1 <none> 80/TCP run=my-nginx 1h
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
my-nginx my-nginx nginx run=my-nginx 3 1m
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
@ -266,6 +269,43 @@ export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I am getting timeouts when trying to curl the master from my host!
During provisioning of the cluster, you may see the following message:
```shell
Validating node-1
.............
Waiting for each node to be registered with cloud provider
error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
```
Some users have reported that VPNs may prevent traffic from being routed from the host machine into the virtual machine network.
To debug, first verify that the master is binding to the proper IP address:
```
$ vagrant ssh master
$ ifconfig | grep eth1 -C 2
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.245.1.2 netmask
255.255.255.0 broadcast 10.245.1.255
```
Then verify that your host machine has a network connection to a bridge that can serve that address:
```sh
$ ifconfig | grep 10.245.1 -C 2
vboxnet5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.245.1.1 netmask 255.255.255.0 broadcast 10.245.1.255
inet6 fe80::800:27ff:fe00:5 prefixlen 64 scopeid 0x20<link>
ether 0a:00:27:00:00:05 txqueuelen 1000 (Ethernet)
```
If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider.
If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
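One common fix (a sketch): remove the stale file so the cluster scripts can regenerate credentials for the cluster you are talking to.
```shell
rm ~/.kubernetes_vagrant_auth
```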
@ -297,14 +337,14 @@ To set up a vagrant cluster for hacking, follow the [vagrant developer guide](ht
#### I have brought Vagrant up but the nodes cannot validate!
Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
#### I want to change the number of nodes!
You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1 like so:
```shell
export NUM_NODES=1
```
#### I want my VMs to have more memory!
@ -320,7 +360,7 @@ If you need more granular control, you can set the amount of memory for the mast
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_NODE_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
@ -329,8 +369,8 @@ export KUBERNETES_MINION_MEMORY=2048
#### I want vagrant to sync folders via nfs!
You can ensure that vagrant uses nfs to sync folders with virtual machines by setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. nfs is faster than virtualbox or vmware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring nfs on the host. This setting will have no effect on the libvirt provider, which uses nfs by default. For example:
```shell
export KUBERNETES_VAGRANT_USE_NFS=true
```

View File

@ -12,13 +12,13 @@ convenient).
### Prerequisites
1. You need administrator credentials to an ESXi machine or vCenter instance.
2. You must have Go (see [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/development.md#go-versions) for supported versions) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
```shell
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
@ -31,10 +31,10 @@ go get github.com/vmware/govmomi/govc
### Setup
Download a prebuilt Debian 8.2 VMDK that we'll use as a base image:
```shell
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2016-01-08/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
@ -42,7 +42,10 @@ gzip -d kube.vmdk.gz
Import this VMDK into your vSphere datastore:
```shell
export GOVC_URL='hostname' # hostname of the vc
export GOVC_USERNAME='username' # username for logging into the vsphere.
export GOVC_PASSWORD='password' # password for the above username
export GOVC_NETWORK='Network Name' # Name of the network the vms should join. Many times it could be "VM Network"
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
@ -62,12 +65,21 @@ parameters. The guest login for the image that you imported is `kube:kube`.
### Starting a cluster
Now, let's continue with deploying Kubernetes.
This process takes about 20-30 minutes, depending on your network.
#### From extracted binary release
```shell
cd kubernetes
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```
#### Build from source
```shell
cd kubernetes
make release
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```
Refer to the top level README and the getting started guide for Google Compute
@ -81,4 +93,4 @@ deployment works just as any other one!
The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You
can log into any VM as the `kube` user to poke around and figure out what is
going on (find yourself authorized with your SSH key, or use the password
`kube` otherwise).

View File

@ -0,0 +1,117 @@
# ConfigMap example
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started guides](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the ConfigMap
A ConfigMap contains a set of named strings.
Use the [`examples/configmap/configmap.yaml`](configmap.yaml) file to create a ConfigMap:
```console
$ kubectl create -f docs/user-guide/configmap/configmap.yaml
```
You can use `kubectl` to see information about the ConfigMap:
```console
$ kubectl get configmap
NAME DATA
test-configmap    2
$ kubectl describe configmap test-configmap
Name: test-configmap
Labels: <none>
Annotations: <none>
Data
====
data-1: 7 bytes
data-2: 7 bytes
```
View the values of the keys with `kubectl get`:
```console
$ kubectl get configmaps test-configmap -o yaml
apiVersion: v1
data:
data-1: value-1
data-2: value-2
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T20:28:50Z
name: test-configmap
namespace: default
resourceVersion: "1090"
selfLink: /api/v1/namespaces/default/configmaps/test-configmap
uid: 384bd365-d67e-11e5-8cd0-68f728db1985
```
## Step Two: Create a pod that consumes a configMap in environment variables
Use the [`examples/configmap/env-pod.yaml`](env-pod.yaml) file to create a Pod that consumes the
ConfigMap in environment variables.
```console
$ kubectl create -f docs/user-guide/configmap/env-pod.yaml
```
This pod runs the `env` command to display the environment of the container:
```console
$ kubectl logs config-env-test-pod
KUBE_CONFIG_1=value-1
KUBE_CONFIG_2=value-2
```
## Step Three: Create a pod that sets the command line using ConfigMap
Use the [`examples/configmap/command-pod.yaml`](command-pod.yaml) file to create a Pod with a container
whose command is injected with the values of a ConfigMap.
```console
$ kubectl create -f docs/user-guide/configmap/command-pod.yaml
```
This pod runs an `echo` command to display the values:
```console
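$ kubectl logs config-cmd-test-pod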
value-1 value-2
```
## Step Four: Create a pod that consumes a configMap in a volume
Pods can also consume ConfigMaps in volumes. Use the [`examples/configmap/volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consumes the ConfigMap in a volume.
```console
$ kubectl create -f docs/user-guide/configmap/volume-pod.yaml
```
This pod runs a `cat` command to print the value of one of the keys in the volume:
```console
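$ kubectl logs config-volume-test-pod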
value-1
```

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: config-cmd-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "echo $(KUBE_CONFIG_1) $(KUBE_CONFIG_2)" ]
env:
- name: KUBE_CONFIG_1
valueFrom:
configMapKeyRef:
name: test-configmap
key: data-1
- name: KUBE_CONFIG_2
valueFrom:
configMapKeyRef:
name: test-configmap
key: data-2
restartPolicy: Never

View File

@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: test-configmap
data:
data-1: value-1
data-2: value-2

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: config-env-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: KUBE_CONFIG_1
valueFrom:
configMapKeyRef:
name: test-configmap
key: data-1
- name: KUBE_CONFIG_2
valueFrom:
configMapKeyRef:
name: test-configmap
key: data-2
restartPolicy: Never

View File

@ -0,0 +1,7 @@
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30

View File

@ -0,0 +1,4 @@
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice

View File

@ -0,0 +1,2 @@
maxmemory 2mb
maxmemory-policy allkeys-lru

View File

@ -0,0 +1,30 @@
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: config-volume-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "cat /etc/config/path/to/special-key" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: test-configmap
items:
- key: data-1
path: path/to/special-key
restartPolicy: Never

View File

@ -0,0 +1,21 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
labels:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
name: nginx
template:
metadata:
labels:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80

View File

@ -7,7 +7,7 @@ spec:
scaleRef:
kind: ReplicationController
name: php-apache
subresource: scale
minReplicas: 1
maxReplicas: 10
cpuUtilization:

View File

@ -1,3 +1,17 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM php:5-apache
ADD index.php /var/www/html/index.php

View File

@ -13,6 +13,9 @@ spec:
httpGet:
path: /healthz
port: 8080
httpHeaders:
- name: X-Custom-Header
value: Awesome
initialDelaySeconds: 15
timeoutSeconds: 1
name: liveness

View File

@ -1,3 +1,17 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM scratch
ADD server /server

View File

@ -1,3 +1,17 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
all: push
server: server.go

View File

@ -1,10 +1,24 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Makefile for launching synthetic logging sources (any platform)
# and for reporting the forwarding rules for the
# Elasticsearch and Kibana pods for the GCE platform.
# For examples of how to observe the ingested logs please
# see the appropriate getting started guide e.g.
# Google Cloud Logging: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/logging.md
# With Elasticsearch and Kibana logging: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/logging-elasticsearch.md
.PHONY: up down logger-up logger-down logger10-up logger10-down

View File

@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
name: with-labels
annotations:
scheduler.alpha.kubernetes.io/affinity: >
{
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/e2e-az-name",
"operator": "In",
"values": ["e2e-az1", "e2e-az2"]
}
]
}
]
}
}
}
another-annotation-key: another-annotation-value
spec:
containers:
- name: with-labels
image: gcr.io/google_containers/pause:2.0

View File

@ -12,7 +12,7 @@ spec:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/var/www/html"
- mountPath: "/usr/share/nginx/html"
name: mypd
volumes:
- name: mypd

View File

@ -10,4 +10,4 @@ spec:
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data01"
path: "/somepath/data01"

View File

@ -10,5 +10,5 @@ spec:
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data02"
path: "/somepath/data02"
persistentVolumeReclaimPolicy: Recycle

View File

@ -8,5 +8,5 @@ spec:
accessModes:
- ReadWriteOnce
nfs:
path: /tmp
path: /somepath
server: 172.17.0.2

View File

@ -0,0 +1,42 @@
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
name: frontend
# these labels can be applied automatically
# from the labels in the pod template if not set
# labels:
# app: guestbook
# tier: frontend
spec:
# this replicas value is default
# modify it according to your case
replicas: 3
# selector can be applied automatically
# from the labels in the pod template if not set
# selector:
# matchLabels:
# app: guestbook
# tier: frontend
template:
metadata:
labels:
app: guestbook
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# If your cluster config does not include a dns service, then to
# instead access environment variables to find service host
# info, comment out the 'value: dns' line above, and uncomment the
# line below.
# value: env
ports:
- containerPort: 80

View File

@ -0,0 +1,44 @@
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
name: redis-slave
# these labels can be applied automatically
# from the labels in the pod template if not set
# labels:
# app: redis
# role: slave
# tier: backend
spec:
# this replicas value is default
# modify it according to your case
replicas: 2
# selector can be applied automatically
# from the labels in the pod template if not set
# selector:
# app: guestbook
# role: slave
# tier: backend
template:
metadata:
labels:
app: redis
role: slave
tier: backend
spec:
containers:
- name: slave
image: gcr.io/google_samples/gb-redisslave:v1
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# If your cluster config does not include a dns service, then to
# instead access an environment variable to find the master
# service's host, comment out the 'value: dns' line above, and
# uncomment the line below.
# value: env
ports:
- containerPort: 6379

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: MY_SECRET_DATA
valueFrom:
secretKeyRef:
name: test-secret
key: data-1
restartPolicy: Never

View File

@ -1,4 +1,4 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.