Update fedora_manual_config.md

pull/552/head
DrDePhobia 2016-05-22 14:48:26 -07:00
parent e38623ac4c
commit 4052573aae
1 changed file with 61 additions and 51 deletions


---
---

* TOC
{:toc}
## Prerequisites
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy.
Hosts:
```conf
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
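For convenience, the example addresses can be kept in shell variables so later commands can interpolate them; a small sketch (the variable names are this guide's invention, not part of any Kubernetes configuration):

```shell
# Example addresses from the table above, held in shell variables.
FED_MASTER_IP=192.168.121.9
FED_NODE_IP=192.168.121.65
echo "master=$FED_MASTER_IP node=$FED_NODE_IP"
```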
**Prepare the hosts:**
* Install Kubernetes on all hosts (fed-{master,node}); this will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
```shell
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```shell
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
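The same append can be rehearsed against a scratch file before touching /etc/hosts; a minimal sketch using the guide's example addresses (the temporary file is this sketch's invention):

```shell
# Rehearse the append against a temporary file instead of /etc/hosts.
hosts_file=$(mktemp)
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> "$hosts_file"
# Both entries should now be present.
grep -c 'fed-' "$hosts_file"   # → 2
```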
* Edit /etc/kubernetes/config, which will be the same on all hosts (master and node), to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
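These /etc/kubernetes files are plain KEY="value" environment files that the systemd units read, so their effect can be previewed by sourcing a copy in a shell; a sketch (the /tmp path is illustrative, and only two of the keys are shown):

```shell
# Write a throwaway copy of the config and source it the way systemd would.
cat > /tmp/kube-config-preview <<'EOF'
KUBE_MASTER="--master=http://fed-master:8080"
KUBE_ALLOW_PRIV="--allow-privileged=false"
EOF
. /tmp/kube-config-preview
echo "master flag: $KUBE_MASTER"
echo "priv flag:   $KUBE_ALLOW_PRIV"
```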
* Disable the firewall on both the master and the node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not in use anywhere else; they do not need to be routed or assigned to anything.
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
```
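For sizing the unused block: a /16 such as 10.254.0.0/16 reserves 2^(32-16) = 65,536 cluster IPs for services, far more than most clusters need. A quick arithmetic check in the shell:

```shell
# Number of addresses covered by a /16 service network.
prefix=16
addresses=$(( 1 << (32 - prefix) ))
echo "$addresses service IPs in 10.254.0.0/$prefix"   # → 65536 service IPs in 10.254.0.0/16
```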
* Edit /etc/etcd/etcd.conf so that etcd listens on all available IPs instead of only 127.0.0.1; otherwise, you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.4.6, which used 4001 and 7001).
```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
* Create /var/run/kubernetes on master:
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
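Mode 750 grants the kube user full access, the kube group read and traverse access, and nothing to others. This can be illustrated on a scratch directory (the directory here is a throwaway, not /var/run/kubernetes itself):

```shell
# Show the symbolic meaning of mode 750 on a throwaway directory.
scratch=$(mktemp -d)
chmod 750 "$scratch"
stat -c '%a %A' "$scratch"   # → 750 drwxr-x---
```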
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
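The loop restarts each unit, marks it to start at boot, and prints its status. The same pattern can be dry-run with echo standing in for systemctl to see the iteration order before touching real services:

```shell
# Dry run: echo replaces systemctl, so nothing is actually restarted.
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    echo "would restart and enable $SERVICES"
done
```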
* Addition of nodes:
* Create the following node.json file on the Kubernetes master node:

```json
{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "fed-node",
        "labels": { "name": "fed-node-label" }
    },
    "spec": {
        "externalID": "fed-node"
    }
}
```
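Before feeding the manifest to kubectl, it can be sanity-checked as JSON; a sketch that writes a copy to a hypothetical /tmp path and parses it with Python's json.tool:

```shell
# Write a copy of the node manifest and confirm it parses as JSON.
cat > /tmp/node-check.json <<'EOF'
{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "fed-node",
        "labels": { "name": "fed-node-label" }
    },
    "spec": {
        "externalID": "fed-node"
    }
}
EOF
python3 -m json.tool /tmp/node-check.json > /dev/null && echo "node-check.json: valid JSON"
```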
Now create a node object internally in your Kubernetes cluster by running:
```shell
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME       LABELS                STATUS
fed-node   name=fed-node-label   Unknown
```
Please note that the above only creates a representation of the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
reachable from the Kubernetes master node.

**Configure the Kubernetes services on the node.** The following steps show how to configure the kubelet on a Kubernetes node (fed-node).
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
###
# Kubernetes kubelet (node) config
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"

# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure that the cluster can now see fed-node on fed-master and that its status has changed to _Ready_.
```shell
kubectl get nodes
NAME       LABELS                STATUS
fed-node   name=fed-node-label   Ready
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (please do not actually run it; it is shown for information only):
```shell
kubectl delete -f ./node.json
```
*You should be finished!*

**The cluster should be running! Launch a test pod.**

You should have a functional cluster; check out [101](/docs/user-guide/walkthrough/)!
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.