Merge branch 'master' into bcamp

pull/1203/head
steveperry-53 2016-09-10 22:51:01 -07:00
commit e2da95dafc
6 changed files with 12 additions and 326 deletions

View File

@@ -195,8 +195,6 @@ toc:
path: /docs/getting-started-guides/openstack-heat/
- title: CoreOS on Multinode Cluster
path: /docs/getting-started-guides/coreos/coreos_multinode_cluster/
- title: Fedora With Calico Networking
path: /docs/getting-started-guides/fedora/fedora-calico/
- title: rkt
section:
- title: Running Kubernetes with rkt

View File

@@ -65,7 +65,7 @@ Each of these options are overridable by `export`ing the values before running t
The first step in the process is to initialize the master node.
Clone the `kube-deploy` repo, and run [master.sh](master.sh) on the master machine _with root_:
Clone the `kube-deploy` repo, and run `master.sh` on the master machine _with root_:
```shell
$ git clone https://github.com/kubernetes/kube-deploy
@@ -82,7 +82,7 @@ Lastly, it launches `kubelet` in the main docker daemon, and the `kubelet` in tu
Once your master is up and running you can add one or more workers on different machines.
Clone the `kube-deploy` repo, and run [worker.sh](worker.sh) on the worker machine _with root_:
Clone the `kube-deploy` repo, and run `worker.sh` on the worker machine _with root_:
```shell
$ git clone https://github.com/kubernetes/kube-deploy

View File

@@ -1,313 +0,0 @@
---
assignees:
- caesarxuchao
---
This guide will walk you through the process of getting a Kubernetes Fedora cluster running on Digital Ocean, with networking powered by Calico.
It will cover the installation and configuration of the following systemd processes on the following hosts:
Kubernetes Master:
- `kube-apiserver`
- `kube-controller-manager`
- `kube-scheduler`
- `etcd`
- `docker`
- `calico-node`
Kubernetes Node:
- `kubelet`
- `kube-proxy`
- `docker`
- `calico-node`
For this demo, we will be setting up one Master and one Node with the following information:
| Hostname | IP |
|-------------|-------------|
| kube-master |10.134.251.56|
| kube-node-1 |10.134.251.55|
This guide is scalable to multiple nodes provided you [configure interface-cbr0 with its own subnet on each Node](#configure-the-virtual-interface---cbr0)
and [add an entry to /etc/hosts for each host](#setup-communication-between-hosts).
Be sure to replace the IP addresses and hostnames used in this guide with those from your own setup.
* TOC
{:toc}
## Prerequisites
You need two or more Fedora 22 droplets on Digital Ocean with [Private Networking](https://www.digitalocean.com/community/tutorials/how-to-set-up-and-use-digitalocean-private-networking) enabled.
## Setup Communication Between Hosts
Digital Ocean private networking configures a private network on eth1 for each host. To simplify communication between the hosts, we will add entries to /etc/hosts
so that all hosts in the cluster can resolve one another's hostnames to this interface. **It is important that the hostnames resolve to this interface instead of eth0, as
all Kubernetes and Calico services will be running on it.**
```shell
echo "10.134.251.56 kube-master" >> /etc/hosts
echo "10.134.251.55 kube-node-1" >> /etc/hosts
```
> Make sure that communication works between kube-master and each kube-node by using a utility such as ping.
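For example, a quick check from kube-master, using the hostnames configured above:
```shell
# Confirm that the node resolves and replies over the private network
ping -c 3 kube-node-1
```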
## Setup Master
### Install etcd
* Both Calico and Kubernetes use etcd as their datastore. We will run etcd on Master and point all Kubernetes and Calico services at it.
```shell
yum -y install etcd
```
* Edit `/etc/etcd/etcd.conf`
```conf
ETCD_LISTEN_CLIENT_URLS="http://kube-master:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://kube-master:4001"
```
### Install Kubernetes
* Run the following command on Master to install the latest Kubernetes (as well as docker):
```shell
yum -y install kubernetes
```
* Edit `/etc/kubernetes/config`
```conf
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```
* Edit `/etc/kubernetes/apiserver`
```conf
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"
# Remove ServiceAccount from this line to run without API Tokens
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
```
* Create `/var/run/kubernetes` on the master:
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```shell
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICE
systemctl enable $SERVICE
systemctl status $SERVICE
done
```
### Install Calico
Next, we'll launch Calico on Master to allow communication between Pods and any services running on the Master.
* Install calicoctl, the calico configuration tool.
```shell
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```
* Create `/etc/systemd/system/calico-node.service`
```conf
[Unit]
Description=calicoctl node
Requires=docker.service
After=docker.service
[Service]
User=root
Environment="ETCD_AUTHORITY=kube-master:4001"
PermissionsStartOnly=true
ExecStartPre=/usr/bin/calicoctl checksystem --fix
ExecStart=/usr/bin/calicoctl node --ip=10.134.251.56 --detach=false
[Install]
WantedBy=multi-user.target
```
> Be sure to substitute `--ip=10.134.251.56` with your Master's eth1 IP address.
* Start Calico
```shell
systemctl enable calico-node.service
systemctl start calico-node.service
```
> Starting Calico for the first time may take a few minutes while the calico-node docker image is downloaded.
## Setup Node
### Configure the Virtual Interface - cbr0
By default, docker creates a virtual interface called `docker0` and runs on it. This interface is automatically assigned the address range 172.17.42.1/16.
In order to set our own address range, we will create a new virtual interface called `cbr0` and then start docker on it.
* Add a virtual interface by creating `/etc/sysconfig/network-scripts/ifcfg-cbr0`:
```conf
DEVICE=cbr0
TYPE=Bridge
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
```
>**Note for Multi-Node Clusters:** Each node should be assigned an IP address on a unique subnet. In this example, node-1 is using 192.168.1.1/24,
so node-2 should be assigned another pool on the 192.168.x.0/24 subnet, e.g. 192.168.2.1/24.
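For example, on a hypothetical second node, the same file could be created on its own subnet:
```shell
# On node-2 (hypothetical): same bridge definition, but on 192.168.2.1/24
cat > /etc/sysconfig/network-scripts/ifcfg-cbr0 <<'EOF'
DEVICE=cbr0
TYPE=Bridge
IPADDR=192.168.2.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
EOF
```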
* Ensure that your system has bridge-utils installed (e.g. `yum -y install bridge-utils`). Then, restart the networking daemon to activate the new interface:
```shell
systemctl restart network.service
```
### Install Docker
* Install Docker
```shell
yum -y install docker
```
* Configure docker to run on `cbr0` by editing `/etc/sysconfig/docker-network`:
```conf
DOCKER_NETWORK_OPTIONS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
* Start docker
```shell
systemctl start docker
```
### Install Calico
* Install calicoctl, the calico configuration tool.
```shell
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```
* Create `/etc/systemd/system/calico-node.service`
```conf
[Unit]
Description=calicoctl node
Requires=docker.service
After=docker.service
[Service]
User=root
Environment="ETCD_AUTHORITY=kube-master:4001"
PermissionsStartOnly=true
ExecStartPre=/usr/bin/calicoctl checksystem --fix
ExecStart=/usr/bin/calicoctl node --ip=10.134.251.55 --detach=false --kubernetes
[Install]
WantedBy=multi-user.target
```
> Note: You must replace the IP address with your node's eth1 IP address!
* Start Calico
```shell
systemctl enable calico-node.service
systemctl start calico-node.service
```
* Configure the IP Address Pool
Most Kubernetes application deployments will require communication between Pods and the kube-apiserver on the Master. On a standard Digital
Ocean private network, requests sent from Pods to the kube-apiserver will never receive a response, because the networking fabric drops response packets
destined for any 192.168.0.0/16 address. To resolve this, have calicoctl add a masquerade rule to all outgoing traffic on the node:
```shell
ETCD_AUTHORITY=kube-master:4001 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```
### Install Kubernetes
* First, install Kubernetes.
```shell
yum -y install kubernetes
```
* Edit `/etc/kubernetes/config`
```conf
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```
* Edit `/etc/kubernetes/kubelet`
We'll pass in an extra parameter, `--network-plugin=calico`, to tell the kubelet to use the Calico networking plugin. Additionally, we'll add two
environment variables that will be used by the Calico networking plugin.
```shell
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
# KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube-master:8080"
# Add your own!
KUBELET_ARGS="--network-plugin=calico"
# The following are variables which the kubelet will pass to the calico-networking plugin
ETCD_AUTHORITY="kube-master:4001"
KUBE_API_ROOT="http://kube-master:8080/api/v1"
```
* Start Kubernetes on the node.
```shell
for SERVICE in kube-proxy kubelet; do
systemctl restart $SERVICE
systemctl enable $SERVICE
systemctl status $SERVICE
done
```
## Check Running Cluster
The cluster should be running! Check that your nodes are reporting as such:
```shell
kubectl get nodes
NAME LABELS STATUS
kube-node-1 kubernetes.io/hostname=kube-node-1 Ready
```

View File

@@ -83,7 +83,7 @@ First configure the cluster information in cluster/ubuntu/config-default.sh, fol
```shell
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export role="ai i i"
export roles="ai i i"
export NUM_NODES=${NUM_NODES:-3}
@@ -95,7 +95,7 @@ export FLANNEL_NET=172.16.0.0/16
The first variable `nodes` defines all your cluster nodes; the master node comes first, and the entries are
separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3> `
Then the `role` variable defines the role of above machine in the same order, "ai" stands for machine
Then the `roles` variable defines the roles of the above machines in the same order, where "ai" stands for a machine that
acts as both master and node, "a" stands for master, "i" stands for node.
The `NUM_NODES` variable defines the total number of nodes.

View File

@@ -120,13 +120,14 @@ all running pods. Example:
alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "10"}]}'
```
In this case if there are 4 pods running and each of them reports qps metric to be equal to 15 HPA will start 2 additional pods so there will be 6 pods in total. If there are multiple metrics passed in the annotation or CPU is configured as well then HPA will use the biggest
number of replicas that comes from the calculations.
In this case, if there are four pods running and each pod reports a QPS metric of 15 or higher, horizontal pod autoscaling will start two additional pods (for a total of six pods running).
If you specify multiple metrics in your annotation or if you set a target CPU utilization, horizontal pod autoscaling will scale according to the metric that requires the highest number of replicas.
If you do not specify a target for CPU utilization, Kubernetes defaults to an 80% utilization threshold for horizontal pod autoscaling.
If you want to ensure that horizontal pod autoscaling calculates the number of required replicas based only on custom metrics, you should set the CPU utilization target to a very large value (such as 100000%). As this level of CPU utilization isn't possible, horizontal pod autoscaling will calculate based only on the custom metrics (and min/max limits).
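As a rough sketch of that arithmetic, assuming the usual ratio-based formula (desired replicas = ceil(current replicas * current metric value / target metric value)):
```shell
# ceil(4 pods * 15 qps / 10 qps target) = 6 desired replicas
# (integer ceiling written as (a + t - 1) / t)
echo $(( (4 * 15 + 10 - 1) / 10 ))   # prints 6
```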
At this moment even if target CPU utilization is not specified a default of 80% will be used.
To calculate number of desired replicas based only on custom metrics CPU utilization
target should be set to a very large value (e.g. 100000%). Then CPU-related logic
will want only 1 replica, leaving the decision about higher replica count to cusom metrics (and min/max limits).
## Further reading

View File

@@ -25,7 +25,7 @@ If this fails with an "invalid command" error, you're likely using an older vers
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label.
You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label.
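For example (assuming you attached a label such as `disktype=ssd` in the previous step):
```shell
# The new label should appear in the LABELS column for that node
kubectl get nodes --show-labels
```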
### Step Two: Add a nodeSelector field to your pod configuration