From fa29fedd3b0ab1d9c9266b4185b6f7692a980d14 Mon Sep 17 00:00:00 2001
From: jzhoucliqr
Date: Sat, 20 Aug 2016 23:09:06 -0700
Subject: [PATCH 1/7] ubuntu provider environment variable 'roles' instead of 'role'

---
 docs/getting-started-guides/ubuntu.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting-started-guides/ubuntu.md b/docs/getting-started-guides/ubuntu.md
index cfa7554d07..4322b8a5d5 100644
--- a/docs/getting-started-guides/ubuntu.md
+++ b/docs/getting-started-guides/ubuntu.md
@@ -83,7 +83,7 @@ First configure the cluster information in cluster/ubuntu/config-default.sh, fol
 
 ```shell
 export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
-export role="ai i i"
+export roles="ai i i"
 
 export NUM_NODES=${NUM_NODES:-3}
 
@@ -95,7 +95,7 @@ export FLANNEL_NET=172.16.0.0/16
 
 The first variable `nodes` defines all your cluster nodes, master node comes first and separated with blank space like ` `
 
-Then the `role` variable defines the role of above machine in the same order, "ai" stands for machine
+Then the `roles` variable defines the roles of above machine in the same order, "ai" stands for machine
 acts as both master and node, "a" stands for master, "i" stands for node.
 
 The `NUM_NODES` variable defines the total number of nodes.

From 99dfe0ee2d91fe4a8e04fa97d2f574387c7e8305 Mon Sep 17 00:00:00 2001
From: Eric Khun
Date: Fri, 26 Aug 2016 16:16:07 +0200
Subject: [PATCH 2/7] Update command to show labels on nodes

---
 docs/user-guide/node-selection/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/node-selection/index.md b/docs/user-guide/node-selection/index.md
index 1ae9d9fdef..74831ace0c 100644
--- a/docs/user-guide/node-selection/index.md
+++ b/docs/user-guide/node-selection/index.md
@@ -25,7 +25,7 @@ If this fails with an "invalid command" error, you're likely using an older vers
 
 Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
 
-You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label.
+You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label.
 
 ### Step Two: Add a nodeSelector field to your pod configuration
 
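For example, the label workflow this change documents can be sanity-checked end to end; the node name and label below are only placeholders, so substitute one of your own nodes:

```shell
# Attach a label to a node, then confirm it appears in the label listing.
kubectl label nodes kube-node-1 disktype=ssd
kubectl get nodes --show-labels | grep disktype=ssd
```
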
From 7f785f2ecf054d9e5228cb76b1a41364f3d332e6 Mon Sep 17 00:00:00 2001
From: Casey Davenport
Date: Tue, 30 Aug 2016 17:32:37 -0700
Subject: [PATCH 3/7] Remove outdated Calico / Fedora guide.

---
 _data/guides.yml                              |   2 -
 .../fedora/fedora-calico.md                   | 313 ------------------
 2 files changed, 315 deletions(-)
 delete mode 100644 docs/getting-started-guides/fedora/fedora-calico.md

diff --git a/_data/guides.yml b/_data/guides.yml
index 3f076b7af9..ca61f07dc0 100644
--- a/_data/guides.yml
+++ b/_data/guides.yml
@@ -195,8 +195,6 @@ toc:
     path: /docs/getting-started-guides/openstack-heat/
   - title: CoreOS on Multinode Cluster
     path: /docs/getting-started-guides/coreos/coreos_multinode_cluster/
-  - title: Fedora With Calico Networking
-    path: /docs/getting-started-guides/fedora/fedora-calico/
   - title: rkt
     section:
     - title: Running Kubernetes with rkt

diff --git a/docs/getting-started-guides/fedora/fedora-calico.md b/docs/getting-started-guides/fedora/fedora-calico.md
deleted file mode 100644
index c9c029e229..0000000000
--- a/docs/getting-started-guides/fedora/fedora-calico.md
+++ /dev/null
@@ -1,313 +0,0 @@
----
-assignees:
-- caesarxuchao
-
----
-
-This guide will walk you through the process of getting a Kubernetes Fedora cluster running on Digital Ocean with networking powered by Calico networking.
-It will cover the installation and configuration of the following systemd processes on the following hosts:
-
-Kubernetes Master:
-
-- `kube-apiserver`
-- `kube-controller-manager`
-- `kube-scheduler`
-- `etcd`
-- `docker`
-- `calico-node`
-
-Kubernetes Node:
-
-- `kubelet`
-- `kube-proxy`
-- `docker`
-- `calico-node`
-
-For this demo, we will be setting up one Master and one Node with the following information:
-
-| Hostname    | IP          |
-|-------------|-------------|
-| kube-master |10.134.251.56|
-| kube-node-1 |10.134.251.55|
-
-This guide is scalable to multiple nodes provided you [configure interface-cbr0 with its own subnet on each Node](#configure-the-virtual-interface---cbr0)
-and [add an entry to /etc/hosts for each host](#setup-communication-between-hosts).
-
-Ensure you substitute the IP Addresses and Hostnames used in this guide with ones in your own setup.
-
-* TOC
-{:toc}
-
-## Prerequisites
-
-You need two or more Fedora 22 droplets on Digital Ocean with [Private Networking](https://www.digitalocean.com/community/tutorials/how-to-set-up-and-use-digitalocean-private-networking) enabled.
-
-## Setup Communication Between Hosts
-
-Digital Ocean private networking configures a private network on eth1 for each host. To simplify communication between the hosts, we will add an entry to /etc/hosts
-so that all hosts in the cluster can hostname-resolve one another to this interface. **It is important that the hostname resolves to this interface instead of eth0, as
-all Kubernetes and Calico services will be running on it.**
-
-```shell
-echo "10.134.251.56 kube-master" >> /etc/hosts
-echo "10.134.251.55 kube-node-1" >> /etc/hosts
-```
-
-> Make sure that communication works between kube-master and each kube-node by using a utility such as ping.
-
-## Setup Master
-
-### Install etcd
-
-* Both Calico and Kubernetes use etcd as their datastore. We will run etcd on Master and point all Kubernetes and Calico services at it.
-
-```shell
-yum -y install etcd
-```
-
-* Edit `/etc/etcd/etcd.conf`
-
-```conf
-ETCD_LISTEN_CLIENT_URLS="http://kube-master:4001"
-
-ETCD_ADVERTISE_CLIENT_URLS="http://kube-master:4001"
-```
-
-### Install Kubernetes
-
-* Run the following command on Master to install the latest Kubernetes (as well as docker):

-```shell
-yum -y install kubernetes
-```
-
-* Edit `/etc/kubernetes/config `
-
-```conf
-# How the controller-manager, scheduler, and proxy find the apiserver
-KUBE_MASTER="--master=http://kube-master:8080"
-```
-
-* Edit `/etc/kubernetes/apiserver`
-
-```conf
-# The address on the local server to listen to.
-KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
-
-KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"
-
-# Remove ServiceAccount from this line to run without API Tokens
-KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
-```
-
-* Create /var/run/kubernetes on master:
-
-```shell
-mkdir /var/run/kubernetes
-chown kube:kube /var/run/kubernetes
-chmod 750 /var/run/kubernetes
-```
-
-* Start the appropriate services on master:
-
-```shell
-for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
-  systemctl restart $SERVICE
-  systemctl enable $SERVICE
-  systemctl status $SERVICE
-done
-```
-
-### Install Calico
-
-Next, we'll launch Calico on Master to allow communication between Pods and any services running on the Master.
-* Install calicoctl, the calico configuration tool.
-
-```shell
-wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
-chmod +x ./calicoctl
-sudo mv ./calicoctl /usr/bin
-```
-
-* Create `/etc/systemd/system/calico-node.service`
-
-```conf
-[Unit]
-Description=calicoctl node
-Requires=docker.service
-After=docker.service
-
-[Service]
-User=root
-Environment="ETCD_AUTHORITY=kube-master:4001"
-PermissionsStartOnly=true
-ExecStartPre=/usr/bin/calicoctl checksystem --fix
-ExecStart=/usr/bin/calicoctl node --ip=10.134.251.56 --detach=false
-
-[Install]
-WantedBy=multi-user.target
-```
-
->Be sure to substitute `--ip=10.134.251.56` with your Master's eth1 IP Address.
-
-* Start Calico
-
-```shell
-systemctl enable calico-node.service
-systemctl start calico-node.service
-```
-
->Starting calico for the first time may take a few minutes as the calico-node docker image is downloaded.
-
-## Setup Node
-
-### Configure the Virtual Interface - cbr0
-
-By default, docker will create and run on a virtual interface called `docker0`. This interface is automatically assigned the address range 172.17.42.1/16.
-In order to set our own address range, we will create a new virtual interface called `cbr0` and then start docker on it.
-
-* Add a virtual interface by creating `/etc/sysconfig/network-scripts/ifcfg-cbr0`:
-
-```conf
-DEVICE=cbr0
-TYPE=Bridge
-IPADDR=192.168.1.1
-NETMASK=255.255.255.0
-ONBOOT=yes
-BOOTPROTO=static
-```
-
->**Note for Multi-Node Clusters:** Each node should be assigned an IP address on a unique subnet. In this example, node-1 is using 192.168.1.1/24,
-so node-2 should be assigned another pool on the 192.168.x.0/24 subnet, e.g. 192.168.2.1/24.
-
-* Ensure that your system has bridge-utils installed.
-Then, restart the networking daemon to activate the new interface
-
-```shell
-systemctl restart network.service
-```
-
-### Install Docker
-
-* Install Docker
-
-```shell
-yum -y install docker
-```
-
-* Configure docker to run on `cbr0` by editing `/etc/sysconfig/docker-network`:
-
-```conf
-DOCKER_NETWORK_OPTIONS="--bridge=cbr0 --iptables=false --ip-masq=false"
-```
-
-* Start docker
-
-```shell
-systemctl start docker
-```
-
-### Install Calico
-
-* Install calicoctl, the calico configuration tool.
-
-```shell
-wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
-chmod +x ./calicoctl
-sudo mv ./calicoctl /usr/bin
-```
-
-* Create `/etc/systemd/system/calico-node.service`
-
-```conf
-[Unit]
-Description=calicoctl node
-Requires=docker.service
-After=docker.service
-
-[Service]
-User=root
-Environment="ETCD_AUTHORITY=kube-master:4001"
-PermissionsStartOnly=true
-ExecStartPre=/usr/bin/calicoctl checksystem --fix
-ExecStart=/usr/bin/calicoctl node --ip=10.134.251.55 --detach=false --kubernetes
-
-[Install]
-WantedBy=multi-user.target
-```
-
-> Note: You must replace the IP address with your node's eth1 IP Address!
-
-* Start Calico
-
-```shell
-systemctl enable calico-node.service
-systemctl start calico-node.service
-```
-
-* Configure the IP Address Pool
-
-  Most Kubernetes application deployments will require communication between Pods and the kube-apiserver on Master. On a standard Digital
-Ocean Private Network, requests sent from Pods to the kube-apiserver will not be returned as the networking fabric will drop response packets
-destined for any 192.168.0.0/16 address. To resolve this, you can have calicoctl add a masquerade rule to all outgoing traffic on the node:
-
-```shell
-ETCD_AUTHORITY=kube-master:4001 calicoctl pool add 192.168.0.0/16 --nat-outgoing
-```
-
-### Install Kubernetes
-
-* First, install Kubernetes.
-
-```shell
-yum -y install kubernetes
-```
-
-* Edit `/etc/kubernetes/config`
-
-```conf
-# How the controller-manager, scheduler, and proxy find the apiserver
-KUBE_MASTER="--master=http://kube-master:8080"
-```
-
-* Edit `/etc/kubernetes/kubelet`
-
-  We'll pass in an extra parameter - `--network-plugin=calico` to tell the Kubelet to use the Calico networking plugin. Additionally, we'll add two
-environment variables that will be used by the Calico networking plugin.
-
-```shell
-# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
-KUBELET_ADDRESS="--address=0.0.0.0"
-
-# You may leave this blank to use the actual hostname
-# KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
-
-# location of the api-server
-KUBELET_API_SERVER="--api-servers=http://kube-master:8080"
-
-# Add your own!
-KUBELET_ARGS="--network-plugin=calico"
-
-# The following are variables which the kubelet will pass to the calico-networking plugin
-ETCD_AUTHORITY="kube-master:4001"
-KUBE_API_ROOT="http://kube-master:8080/api/v1"
-```
-
-* Start Kubernetes on the node.
-
-```shell
-for SERVICE in kube-proxy kubelet; do
-  systemctl restart $SERVICE
-  systemctl enable $SERVICE
-  systemctl status $SERVICE
-done
-```
-
-## Check Running Cluster
-
-The cluster should be running!
-Check that your nodes are reporting as such:
-
-```shell
-kubectl get nodes
-NAME          LABELS                              STATUS
-kube-node-1   kubernetes.io/hostname=kube-node-1  Ready
-```
\ No newline at end of file

From 30539b11f932cae7c649479aff5e2092486afe86 Mon Sep 17 00:00:00 2001
From: Evgeny L
Date: Fri, 2 Sep 2016 17:58:51 +0300
Subject: [PATCH 4/7] Remove broken links to master.sh and worker.sh in docker-multinode

---
 docs/getting-started-guides/docker-multinode.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting-started-guides/docker-multinode.md b/docs/getting-started-guides/docker-multinode.md
index 5826a1bb20..14e475ee10 100644
--- a/docs/getting-started-guides/docker-multinode.md
+++ b/docs/getting-started-guides/docker-multinode.md
@@ -65,7 +65,7 @@ Each of these options are overridable by `export`ing the values before running t
 
 The first step in the process is to initialize the master node.
 
-Clone the `kube-deploy` repo, and run [master.sh](master.sh) on the master machine _with root_:
+Clone the `kube-deploy` repo, and run `master.sh` on the master machine _with root_:
 
 ```shell
 $ git clone https://github.com/kubernetes/kube-deploy
@@ -82,7 +82,7 @@ Lastly, it launches `kubelet` in the main docker daemon, and the `kubelet` in tu
 
 Once your master is up and running you can add one or more workers on different machines.
 
-Clone the `kube-deploy` repo, and run [worker.sh](worker.sh) on the worker machine _with root_:
+Clone the `kube-deploy` repo, and run `worker.sh` on the worker machine _with root_:
 
 ```shell
 $ git clone https://github.com/kubernetes/kube-deploy
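For example, the full flow the guide describes looks roughly like the sketch below; the master IP is a placeholder, and `MASTER_IP` follows the guide's convention of `export`ing options before running the scripts:

```shell
# On the master machine (run as root): bootstrap etcd, flannel, and the kubelet.
git clone https://github.com/kubernetes/kube-deploy
cd kube-deploy/docker-multinode
./master.sh

# On each worker machine (run as root): point the script at the master first.
export MASTER_IP=10.134.251.56
./worker.sh
```
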
From c734060e6f0d188a89aab16c5a320179bbdb53cf Mon Sep 17 00:00:00 2001
From: Hyunchel Kim
Date: Wed, 31 Aug 2016 14:45:10 -0500
Subject: [PATCH 5/7] Add missing commas, fix typos

---
 docs/user-guide/horizontal-pod-autoscaling/index.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/user-guide/horizontal-pod-autoscaling/index.md b/docs/user-guide/horizontal-pod-autoscaling/index.md
index 181c1ba5f7..abd6d8314d 100644
--- a/docs/user-guide/horizontal-pod-autoscaling/index.md
+++ b/docs/user-guide/horizontal-pod-autoscaling/index.md
@@ -120,13 +120,13 @@ all running pods. Example:
 alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "10"}]}'
 ```
 
-In this case if there are 4 pods running and each of them reports qps metric to be equal to 15 HPA will start 2 additional pods so there will be 6 pods in total. If there are multiple metrics passed in the annotation or CPU is configured as well then HPA will use the biggest
+In this case, if there are 4 pods running and each of them reports qps metric to be equal to 15, HPA will start 2 additional pods so there will be 6 pods in total. If there are multiple metrics passed in the annotation or CPU is configured as well then HPA will use the biggest
 number of replicas that comes from the calculations.
 
-At this moment even if target CPU utilization is not specified a default of 80% will be used.
-To calculate number of desired replicas based only on custom metrics CPU utilization
-target should be set to a very large value (e.g. 100000%). Then CPU-related logic
-will want only 1 replica, leaving the decision about higher replica count to cusom metrics (and min/max limits).
+At this moment, even if CPU utilization target is not specified, a default of 80% will be used.
+To calculate number of desired replicas based only on custom metrics, CPU utilization
+target should be set to a very large value (e.g. 100000%).
+Then CPU utilization target will unlikely be reached, leaving the decision on desired number of replicas to the custom metrics (and min/max limits).
 
 ## Further reading

From ceb5189d488d09cb4f14665db100eb4c6654bf4e Mon Sep 17 00:00:00 2001
From: Hyunchel Kim
Date: Thu, 1 Sep 2016 08:51:08 -0500
Subject: [PATCH 6/7] Update sentences based on reviewed comments

---
 docs/user-guide/horizontal-pod-autoscaling/index.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/docs/user-guide/horizontal-pod-autoscaling/index.md b/docs/user-guide/horizontal-pod-autoscaling/index.md
index abd6d8314d..a789a9c5fc 100644
--- a/docs/user-guide/horizontal-pod-autoscaling/index.md
+++ b/docs/user-guide/horizontal-pod-autoscaling/index.md
@@ -120,13 +120,14 @@ all running pods. Example:
 alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "10"}]}'
 ```
 
-In this case, if there are 4 pods running and each of them reports qps metric to be equal to 15, HPA will start 2 additional pods so there will be 6 pods in total. If there are multiple metrics passed in the annotation or CPU is configured as well then HPA will use the biggest
-number of replicas that comes from the calculations.
+In this case, if there are four pods running and each pods reports a QPS metric of 15 or higher, horizontal pod autoscaling will start two additional pods (for a total of six pods running).
+
+If you specify multiple metrics in your annotation or if you set a target CPU utilization, horizontal pod autoscaling will scale according to the metric that requires the highest number of replicas.
+
+If you do not specify a target for CPU utilization, Kubernetes defaults to an 80% utilization threshold for horizontal pod autoscaling.
+
+If you want to ensure that horizontal pod autoscaling calculates the number of required replicas based only on custom metrics, you should set the CPU utilization target to a very large value (such as 100000%). As this level of CPU utilization isn't possible, horizontal pod autoscaling will calculate based only on the custom metrics (and min/max limits).
 
-At this moment, even if CPU utilization target is not specified, a default of 80% will be used.
-To calculate number of desired replicas based only on custom metrics, CPU utilization
-target should be set to a very large value (e.g. 100000%).
-Then CPU utilization target will unlikely be reached, leaving the decision on desired number of replicas to the custom metrics (and min/max limits).
 
 ## Further reading

From 820fa73edff8648c0671d1a1185ce9ff71b7c8b1 Mon Sep 17 00:00:00 2001
From: Hyunchel Kim
Date: Tue, 6 Sep 2016 08:44:42 -0500
Subject: [PATCH 7/7] Fix typo

---
 docs/user-guide/horizontal-pod-autoscaling/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/horizontal-pod-autoscaling/index.md b/docs/user-guide/horizontal-pod-autoscaling/index.md
index a789a9c5fc..fbdd11360a 100644
--- a/docs/user-guide/horizontal-pod-autoscaling/index.md
+++ b/docs/user-guide/horizontal-pod-autoscaling/index.md
@@ -120,7 +120,7 @@ all running pods. Example:
 alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "10"}]}'
 ```
 
-In this case, if there are four pods running and each pods reports a QPS metric of 15 or higher, horizontal pod autoscaling will start two additional pods (for a total of six pods running).
+In this case, if there are four pods running and each pod reports a QPS metric of 15 or higher, horizontal pod autoscaling will start two additional pods (for a total of six pods running).
 
 If you specify multiple metrics in your annotation or if you set a target CPU utilization, horizontal pod autoscaling will scale according to the metric that requires the highest number of replicas.
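To make the final wording concrete, an autoscaler driven only by the custom `qps` metric might look like the sketch below; the object names are hypothetical, the annotation format is the alpha form shown in the hunks above, and the CPU target is deliberately set to the unreachably large value the text recommends:

```shell
# Sketch: an HPA that scales on the alpha custom-metrics annotation alone.
kubectl create -f - <<'EOF'
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
  annotations:
    alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "10"}]}'
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  # Effectively unreachable, so the custom metric alone drives scaling.
  targetCPUUtilizationPercentage: 100000
EOF
```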