commit
157ab0d316
|
@ -76,8 +76,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
|
|||
|
||||
- [AWS + CoreOS](/docs/getting-started-guides/coreos)
|
||||
- [GCE + CoreOS](/docs/getting-started-guides/coreos)
|
||||
- [AWS + Ubuntu](/docs/getting-started-guides/juju)
|
||||
- [Joyent + Ubuntu](/docs/getting-started-guides/juju)
|
||||
- [AWS/GCE/Rackspace/Joyent + Ubuntu](/docs/getting-started-guides/ubuntu/automated)
|
||||
- [Rackspace + CoreOS](/docs/getting-started-guides/rackspace)
|
||||
|
||||
#### On-Premises VMs
|
||||
|
@ -86,7 +85,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
|
|||
- [CloudStack](/docs/getting-started-guides/cloudstack) (uses Ansible, CoreOS and flannel)
|
||||
- [VMware vSphere](/docs/getting-started-guides/vsphere) (uses Debian)
- [VMware Photon Controller](/docs/getting-started-guides/photon-controller) (uses Debian)
- [Juju](/docs/getting-started-guides/juju) (uses Juju, Ubuntu and flannel)
- [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/automated) (uses Juju, Ubuntu and flannel)
- [VMware](/docs/getting-started-guides/coreos) (uses CoreOS and flannel)
- [libvirt CoreOS](/docs/getting-started-guides/libvirt-coreos) (uses CoreOS)
|
||||
- [oVirt](/docs/getting-started-guides/ovirt)
|
||||
|
@ -101,7 +100,8 @@ These solutions are combinations of cloud provider and OS not covered by the abo
|
|||
- [Fedora single node](/docs/getting-started-guides/fedora/fedora_manual_config)
|
||||
- [Fedora multi node](/docs/getting-started-guides/fedora/flannel_multi_node_cluster)
|
||||
- [CentOS](/docs/getting-started-guides/centos/centos_manual_config)
|
||||
- [Ubuntu](/docs/getting-started-guides/ubuntu)
|
||||
- [Bare Metal with Ubuntu](/docs/getting-started-guides/ubuntu/automated)
|
||||
- [Ubuntu Manual](/docs/getting-started-guides/ubuntu/manual)
|
||||
- [Docker Multi Node](/docs/getting-started-guides/docker-multinode)
|
||||
- [CoreOS](/docs/getting-started-guides/coreos)
|
||||
|
||||
|
@ -147,9 +147,11 @@ CloudStack | Ansible | CoreOS | flannel | [docs](/docs/gettin
|
|||
VMware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin))
VMware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | | Community ([@alainroy](https://github.com/alainroy))
|
||||
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
|
||||
AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
|
||||
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
|
||||
Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
|
||||
AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
VMware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
|
||||
AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws) | | Community ([@justinsb](https://github.com/justinsb))
|
||||
AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
|
||||
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
|
||||
|
@ -162,7 +164,7 @@ any | any | any | any | [docs](/docs/gettin
|
|||
|
||||
*Note*: The above table is ordered by version tested/used in notes, followed by support level.
|
||||
|
||||
Definition of columns:
|
||||
Definition of columns
|
||||
|
||||
- **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on.
|
||||
- **OS** is the base operating system of the nodes.
|
||||
|
|
|
@ -1,325 +0,0 @@
|
|||
---
|
||||
assignees:
|
||||
- caesarxuchao
|
||||
- erictune
|
||||
|
||||
---
|
||||
|
||||
[Juju](https://jujucharms.com/docs/2.0/about-juju) encapsulates the
|
||||
operational knowledge of provisioning, installing, and securing a Kubernetes
|
||||
cluster into one step. Juju allows you to deploy a Kubernetes cluster on
|
||||
different cloud providers with a consistent, repeatable user experience.
|
||||
Once deployed the cluster can easily scale up with one command.
|
||||
|
||||
The Juju Kubernetes work is curated by a dedicated team of community members;
let us know how we are doing. If you find any problems, please open an
|
||||
[issue on the kubernetes project](https://github.com/kubernetes/kubernetes/issues)
|
||||
and tag the issue with "juju" so we can find them.
|
||||
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
> Note: If you're running kube-up on Ubuntu - all of the dependencies
|
||||
> will be handled for you. You may safely skip to the section:
|
||||
> [Launch a Kubernetes Cluster](#launch-a-kubernetes-cluster)
|
||||
|
||||
### On Ubuntu
|
||||
|
||||
[Install the Juju client](https://jujucharms.com/docs/2.0/getting-started-general)
|
||||
|
||||
> This documentation focuses on the Juju 2.0 release, which will be
|
||||
> promoted to stable during the April 2016 release cycle.
|
||||
|
||||
To paraphrase, on your local Ubuntu system:
|
||||
|
||||
```shell
|
||||
sudo add-apt-repository ppa:juju/devel
|
||||
sudo apt-get update
|
||||
sudo apt-get install juju
|
||||
```
|
||||
|
||||
If you are using another distro/platform, please consult the
|
||||
[getting started guide](https://jujucharms.com/docs/2.0/getting-started-general)
|
||||
to install the Juju dependencies for your platform.
|
||||
|
||||
### With Docker
|
||||
|
||||
If you prefer the isolation of Docker, you can run the Juju client in a
|
||||
container. Create a local directory to store the Juju configuration, then
|
||||
volume mount the container:
|
||||
|
||||
```shell
|
||||
mkdir -p $HOME/.local/share/juju
|
||||
docker run --rm -ti \
|
||||
-v $HOME/.local/share/juju:/home/ubuntu/.local/share/juju \
|
||||
jujusolutions/charmbox:devel
|
||||
```
|
||||
|
||||
> While this is a common target, the charmbox flavors of images are
|
||||
> unofficial, and should be treated as experimental. If you encounter any issues
|
||||
> turning up the Kubernetes cluster with charmbox, please file a bug on the
|
||||
> [charmbox issue tracker](https://github.com/juju-solutions/charmbox/issues).
|
||||
|
||||
### Configure Juju to your favorite cloud provider
|
||||
|
||||
At this point you have access to the Juju client. Before you can deploy a
|
||||
cluster you have to configure Juju with the
|
||||
[cloud credentials](https://jujucharms.com/docs/2.0/credentials) for each
|
||||
cloud provider you would like to use.
|
||||
|
||||
Juju [supports a wide variety of public clouds](#cloud-compatibility). To set
up the credentials for your chosen cloud, see the
[cloud setup page](https://jujucharms.com/docs/devel/getting-started-general#2.-choose-a-cloud).
|
||||
|
||||
After configuration is complete, test your setup with a `juju bootstrap`
command (`juju bootstrap $controllername $cloudtype`). Once bootstrap succeeds, you are ready to launch
the Kubernetes cluster.
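
For example, following the form above with illustrative values (any controller name, and a cloud you have configured credentials for):

```shell
juju bootstrap my-controller aws
```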
|
||||
|
||||
## Launch a Kubernetes cluster
|
||||
|
||||
You can deploy a Kubernetes cluster with Juju from the `kubernetes` directory of
|
||||
the [kubernetes github project](https://github.com/kubernetes/kubernetes.git).
|
||||
Clone the repository on your local system. Export the `KUBERNETES_PROVIDER`
|
||||
environment variable before bringing up the cluster.
|
||||
|
||||
```shell
|
||||
cd kubernetes
|
||||
export KUBERNETES_PROVIDER=juju
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
If this is your first time running the `kube-up.sh` script, it will attempt to
|
||||
install the required dependencies to get started with Juju.
|
||||
|
||||
The script will deploy two Kubernetes nodes and one etcd unit, and will network
|
||||
the units so containers on different hosts can communicate with each other.
|
||||
|
||||
## Exploring the cluster
|
||||
|
||||
The `juju status` command provides information about each unit in the cluster:
|
||||
|
||||
```shell
|
||||
$ juju status
|
||||
MODEL CONTROLLER CLOUD/REGION VERSION
|
||||
default windows azure/centralus 2.0-beta13
|
||||
|
||||
APP VERSION STATUS EXPOSED ORIGIN CHARM REV OS
|
||||
etcd active false jujucharms etcd 3 ubuntu
|
||||
kubernetes active true jujucharms kubernetes 5 ubuntu
|
||||
|
||||
RELATION PROVIDES CONSUMES TYPE
|
||||
cluster etcd etcd peer
|
||||
etcd etcd kubernetes regular
|
||||
certificates kubernetes kubernetes peer
|
||||
|
||||
UNIT WORKLOAD AGENT MACHINE PORTS PUBLIC-ADDRESS MESSAGE
|
||||
etcd/0 active idle 0 2379/tcp 13.67.217.11 (leader) cluster is healthy
|
||||
kubernetes/0 active idle 1 8088/tcp 13.67.219.76 Kubernetes running.
|
||||
kubernetes/1 active idle 2 6443/tcp 13.67.219.182 (master) Kubernetes running.
|
||||
|
||||
MACHINE STATE DNS INS-ID SERIES AZ
|
||||
0 started 13.67.217.11 machine-0 trusty
|
||||
1 started 13.67.219.76 machine-1 trusty
|
||||
2 started 13.67.219.182 machine-2 trusty
|
||||
```
|
||||
|
||||
## Run some containers!
|
||||
|
||||
The `kubectl` binary, the TLS certificates, and the configuration are
all available on the Kubernetes master unit. Fetch the kubectl package so you
can run commands against the new Kubernetes cluster.
|
||||
|
||||
Use the `juju status` command to figure out which unit is the master. In the
|
||||
example above the "kubernetes/1" unit is the master. Use the `juju scp`
|
||||
command to copy the file from the unit:
|
||||
|
||||
```shell
|
||||
juju scp kubernetes/1:kubectl_package.tar.gz .
|
||||
tar xvfz kubectl_package.tar.gz
|
||||
./kubectl --kubeconfig kubeconfig get pods
|
||||
```
|
||||
|
||||
If you are not on a Linux amd64 host system, you will need to find or build a
|
||||
kubectl binary package for your architecture.
|
||||
|
||||
Copy the `kubeconfig` file to the home directory so you don't have to specify
|
||||
it on the command line each time. The default location is
|
||||
`${HOME}/.kube/config`.
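
For example, a minimal sketch assuming the `kubeconfig` file extracted above is in the current directory:

```shell
mkdir -p ${HOME}/.kube
cp kubeconfig ${HOME}/.kube/config
# kubectl now finds the configuration automatically; no --kubeconfig flag is needed.
```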
|
||||
|
||||
No pods will be available before starting a container:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
NAME      READY     STATUS    RESTARTS   AGE
|
||||
|
||||
kubectl get replicationcontrollers
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
```
|
||||
|
||||
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
"metadata": {
|
||||
"name": "hello",
|
||||
"labels": {
|
||||
"name": "hello",
|
||||
"environment": "testing"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"containers": [{
|
||||
"name": "hello",
|
||||
"image": "quay.io/kelseyhightower/hello",
|
||||
"ports": [{
|
||||
"containerPort": 80,
|
||||
"hostPort": 80
|
||||
}]
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Create the pod with kubectl:
|
||||
|
||||
```shell
|
||||
kubectl create -f pod.json
|
||||
```
|
||||
|
||||
Get info on the pod:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
To test the hello app, we need to locate which node is hosting
|
||||
the container. We can use the `juju run` and `juju status` commands to find
|
||||
our hello app.
|
||||
|
||||
Exit out of our ssh session and run:
|
||||
|
||||
```shell
|
||||
juju run --unit kubernetes/0 "docker ps -n=1"
|
||||
...
|
||||
juju run --unit kubernetes/1 "docker ps -n=1"
|
||||
CONTAINER ID    IMAGE                                  COMMAND   CREATED             STATUS             PORTS   NAMES
02beb61339d8    quay.io/kelseyhightower/hello:latest   /hello    About an hour ago   Up About an hour           k8s_hello....
|
||||
```
|
||||
|
||||
We see "kubernetes/1" has our container, expose the kubernetes charm and open
|
||||
port 80:
|
||||
|
||||
```shell
|
||||
juju run --unit kubernetes/1 "open-port 80"
|
||||
juju expose kubernetes
|
||||
sudo apt-get install curl
|
||||
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
|
||||
```
|
||||
|
||||
Finally delete the pod:
|
||||
|
||||
```shell
|
||||
juju ssh kubernetes/0
|
||||
kubectl delete pods hello
|
||||
```
|
||||
|
||||
## Scale up cluster
|
||||
|
||||
Want larger Kubernetes nodes? It is easy to request different sizes of cloud
resources from Juju by using **constraints**. You can increase the amount of
CPU or memory (RAM) in any of the systems requested by Juju. This allows you
to fine-tune the Kubernetes cluster to fit your workload. Use flags on the
bootstrap command or a separate `juju constraints` command. See the
[Juju documentation on machine constraints](https://jujucharms.com/docs/2.0/charms-constraints)
for details.
|
||||
|
||||
## Scale out cluster
|
||||
|
||||
Need more workers? Juju makes it easy to add units of a charm:
|
||||
|
||||
```shell
|
||||
juju add-unit kubernetes
|
||||
```
|
||||
|
||||
Or multiple units at one time:
|
||||
|
||||
```shell
|
||||
juju add-unit -n3 kubernetes
|
||||
```
|
||||
|
||||
You can also scale the etcd charm for more fault tolerant key/value storage:
|
||||
|
||||
```shell
|
||||
juju add-unit -n2 etcd
|
||||
```
|
||||
|
||||
## Tear down cluster
|
||||
|
||||
We recommend that you use the `kube-down.sh` script when you are done using
|
||||
the cluster, as it properly tears down the cloud resources and removes some of the
|
||||
build directories.
|
||||
|
||||
```shell
|
||||
./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Alternatively, if you want to stop the servers, you can destroy the Juju model or the
|
||||
controller. Use the `juju switch` command to get the current controller name:
|
||||
|
||||
```shell
|
||||
juju switch
|
||||
juju destroy-controller $controllername --destroy-all-models
|
||||
```
|
||||
|
||||
## More Info
|
||||
|
||||
Juju works with charms and bundles to deploy solutions. The code that stands up
|
||||
a Kubernetes cluster lives in the charm code. The charm is built using
|
||||
a layered approach to keep the code smaller and more focused on the operations
|
||||
of Kubernetes.
|
||||
|
||||
The Kubernetes layer and bundles can be found in the `kubernetes`
|
||||
project on github.com:
|
||||
|
||||
- [Bundle location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/bundles)
|
||||
- [Kubernetes charm layer location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes)
|
||||
- [More about Juju](https://jujucharms.com)
|
||||
|
||||
|
||||
### Cloud compatibility
|
||||
|
||||
Juju is cloud agnostic and gives you a consistent experience across different
|
||||
cloud providers. Juju supports a variety of public cloud providers: [Amazon Web Service](https://jujucharms.com/docs/2.0/help-aws),
|
||||
[Microsoft Azure](https://jujucharms.com/docs/2.0/help-azure),
|
||||
[Google Compute Engine](https://jujucharms.com/docs/2.0/help-google),
|
||||
[Joyent](https://jujucharms.com/docs/2.0/help-joyent),
|
||||
[Rackspace](https://jujucharms.com/docs/2.0/help-rackspace), any
|
||||
[OpenStack cloud](https://jujucharms.com/docs/2.0/clouds#specifying-additional-clouds),
|
||||
and
|
||||
[VMware vSphere](https://jujucharms.com/docs/2.0/config-vmware).
|
||||
|
||||
If you do not see your favorite cloud provider listed, many clouds with ssh
|
||||
access can be configured for
|
||||
[manual provisioning](https://jujucharms.com/docs/2.0/clouds-manual).
|
||||
|
||||
To change to a different cloud, use the `juju switch` command, set
up the credentials for that cloud provider, and continue to use the `kube-up.sh`
script.
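
For example, a sketch assuming you have already bootstrapped a second controller for the target cloud (the controller name below is illustrative):

```shell
juju list-controllers           # see which controllers are available
juju switch my-gce-controller   # switch to the controller for the other cloud
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
```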
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
Amazon Web Services (AWS) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
OpenStack | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
Google Compute Engine (GCE) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
|
@ -0,0 +1,288 @@
|
|||
---
|
||||
assignees:
|
||||
- caesarxuchao
|
||||
- erictune
|
||||
|
||||
---
|
||||
|
||||
Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/), a pure upstream distribution of Kubernetes designed for production usage. Out of the box it comes with the following components on 12 machines:
|
||||
|
||||
- Kubernetes (automated deployment, operations, and scaling)
|
||||
- Three node Kubernetes cluster with one master and two worker nodes.
|
||||
- TLS used for communication between units for security.
|
||||
- Flannel Software Defined Network (SDN) plugin
|
||||
- A load balancer for HA kubernetes-master (Experimental)
|
||||
- Optional Ingress Controller (on worker)
|
||||
- Optional Dashboard addon (on master) including Heapster for cluster monitoring
|
||||
- EasyRSA
|
||||
    - Performs the role of a certificate authority serving self-signed certificates
|
||||
to the requesting units of the cluster.
|
||||
- Etcd (distributed key value store)
|
||||
- Three unit cluster for reliability.
|
||||
- Elastic stack
|
||||
- Two units for ElasticSearch
|
||||
  - One unit for a Kibana dashboard
|
||||
- Beats on every Kubernetes and Etcd unit:
|
||||
- Filebeat for forwarding logs to ElasticSearch
|
||||
- Topbeat for inserting server monitoring data to ElasticSearch
|
||||
|
||||
|
||||
The Juju Kubernetes work is curated by a dedicated team of community members;
let us know how we are doing. If you find any problems, please open an
|
||||
[issue on our tracker](https://github.com/juju-solutions/bundle-canonical-kubernetes)
|
||||
so we can find them.
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A working [Juju client](https://jujucharms.com/docs/2.0/getting-started-general); this does not have to be a Linux machine and can also be Windows or OS X.
|
||||
- A [supported cloud](#cloud-compatibility).
|
||||
|
||||
### On Ubuntu
|
||||
|
||||
On your local Ubuntu system:
|
||||
|
||||
```shell
|
||||
sudo add-apt-repository ppa:juju/stable
|
||||
sudo apt-get update
|
||||
sudo apt-get install juju
|
||||
```
|
||||
|
||||
If you are using another distro/platform, please consult the
|
||||
[getting started guide](https://jujucharms.com/docs/2.0/getting-started-general)
|
||||
to install the Juju dependencies for your platform.
|
||||
|
||||
### Configure Juju to your favorite cloud provider
|
||||
|
||||
Deployment of the cluster is [supported on a wide variety of public clouds](#cloud-compatibility), private OpenStack clouds, or raw bare metal clusters.
|
||||
|
||||
After deciding which cloud to deploy to, follow the [cloud setup page](https://jujucharms.com/docs/devel/getting-started-general#2.-choose-a-cloud) to configure deploying to that cloud.
|
||||
|
||||
Load your [cloud credentials](https://jujucharms.com/docs/2.0/credentials) for each
|
||||
cloud provider you would like to use.
|
||||
|
||||
In this example we add credentials for AWS:
|
||||
|
||||
```shell
|
||||
juju add-credential aws
|
||||
credential name: my_credentials
|
||||
select auth-type [userpass, oauth, etc]: userpass
|
||||
enter username: jorge
|
||||
enter password: *******
|
||||
```
|
||||
|
||||
You can also just auto load credentials for popular clouds with the `juju autoload-credentials` command, which will auto import your credentials from the default files and environment variables for each cloud.
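
For example:

```shell
# Scans the default credential files and environment variables for known clouds
# and offers to import what it finds.
juju autoload-credentials
```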
|
||||
|
||||
Next we need to bootstrap a controller to manage the cluster. You need to define the cloud you want to bootstrap on, the region, and then any name for your controller node:
|
||||
|
||||
```shell
|
||||
juju update-clouds # This command ensures all the latest regions are up to date on your client
|
||||
juju bootstrap aws/us-east-2
|
||||
```
|
||||
Or, as another example, this time on Azure:
|
||||
|
||||
```shell
|
||||
juju bootstrap azure/centralus
|
||||
```
|
||||
|
||||
You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.0/controllers) for more information.
|
||||
|
||||
Note that each controller can host multiple Kubernetes clusters in a given cloud or region.
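
For example, a sketch of hosting a second cluster on the same controller by adding another model (the model name is illustrative):

```shell
juju add-model k8s-staging        # creates a new model and switches to it
juju deploy canonical-kubernetes  # deploy another cluster into it (see the next section)
```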
|
||||
|
||||
## Launch a Kubernetes cluster
|
||||
|
||||
The following command will deploy the initial 12-node starter cluster. The speed of execution depends heavily on the performance of the cloud you're deploying to:
|
||||
|
||||
```shell
|
||||
juju deploy canonical-kubernetes
|
||||
```
|
||||
|
||||
After this command executes, we need to wait for the cloud to provision the instances and for all the automated deployment tasks to complete.
|
||||
|
||||
## Monitor deployment
|
||||
|
||||
The `juju status` command provides information about each unit in the cluster. We recommend using the `watch -c juju status --color` command to get a real-time view of the cluster as it deploys. When all the states are green and "Idle", the cluster is ready to go.
|
||||
|
||||
|
||||
```shell
|
||||
$ juju status
|
||||
Model Controller Cloud/Region Version
|
||||
default aws-us-east-2 aws/us-east-2 2.0.1
|
||||
|
||||
App Version Status Scale Charm Store Rev OS Notes
|
||||
easyrsa 3.0.1 active 1 easyrsa jujucharms 3 ubuntu
|
||||
elasticsearch active 2 elasticsearch jujucharms 19 ubuntu
|
||||
etcd 2.2.5 active 3 etcd jujucharms 14 ubuntu
|
||||
filebeat active 4 filebeat jujucharms 5 ubuntu
|
||||
flannel 0.6.1 maintenance 4 flannel jujucharms 5 ubuntu
|
||||
kibana active 1 kibana jujucharms 15 ubuntu
|
||||
kubeapi-load-balancer 1.10.0 active 1 kubeapi-load-balancer jujucharms 3 ubuntu exposed
|
||||
kubernetes-master 1.4.5 active 1 kubernetes-master jujucharms 6 ubuntu
|
||||
kubernetes-worker 1.4.5 active 3 kubernetes-worker jujucharms 8 ubuntu exposed
|
||||
topbeat active 3 topbeat jujucharms 5 ubuntu
|
||||
|
||||
Unit Workload Agent Machine Public address Ports Message
|
||||
easyrsa/0* active idle 0 52.15.95.92 Certificate Authority connected.
|
||||
elasticsearch/0* active idle 1 52.15.67.111 9200/tcp Ready
|
||||
elasticsearch/1 active idle 2 52.15.109.132 9200/tcp Ready
|
||||
etcd/0 active idle 3 52.15.79.127 2379/tcp Healthy with 3 known peers.
|
||||
etcd/1* active idle 4 52.15.111.66 2379/tcp Healthy with 3 known peers. (leader)
|
||||
etcd/2 active idle 5 52.15.144.25 2379/tcp Healthy with 3 known peers.
|
||||
kibana/0* active idle 6 52.15.57.157 80/tcp,9200/tcp ready
|
||||
kubeapi-load-balancer/0* active idle 7 52.15.84.179 443/tcp Loadbalancer ready.
|
||||
kubernetes-master/0* active idle 8 52.15.106.225 6443/tcp Kubernetes master services ready.
|
||||
filebeat/3 active idle 52.15.106.225 Filebeat ready.
|
||||
flannel/3 maintenance idle 52.15.106.225 Installing flannel.
|
||||
kubernetes-worker/0* active idle 9 52.15.153.246 Kubernetes worker running.
|
||||
filebeat/2 active idle 52.15.153.246 Filebeat ready.
|
||||
flannel/2 active idle 52.15.153.246 Flannel subnet 10.1.53.1/24
|
||||
topbeat/2 active idle 52.15.153.246 Topbeat ready.
|
||||
kubernetes-worker/1 active idle 10 52.15.52.103 Kubernetes worker running.
|
||||
filebeat/0* active idle 52.15.52.103 Filebeat ready.
|
||||
flannel/0* active idle 52.15.52.103 Flannel subnet 10.1.31.1/24
|
||||
topbeat/0* active idle 52.15.52.103 Topbeat ready.
|
||||
kubernetes-worker/2 active idle 11 52.15.104.181 Kubernetes worker running.
|
||||
filebeat/1 active idle 52.15.104.181 Filebeat ready.
|
||||
flannel/1 active idle 52.15.104.181 Flannel subnet 10.1.83.1/24
|
||||
topbeat/1 active idle 52.15.104.181 Topbeat ready.
|
||||
|
||||
Machine State DNS Inst id Series AZ
|
||||
0 started 52.15.95.92 i-06e66414008eca61c xenial us-east-2c
|
||||
1 started 52.15.67.111 i-050cbd7eb35fa0fe6 trusty us-east-2a
|
||||
2 started 52.15.109.132 i-069196660db07c2f6 trusty us-east-2b
|
||||
3 started 52.15.79.127 i-0038186d2c5103739 xenial us-east-2b
|
||||
4 started 52.15.111.66 i-0ac66c86a8ec93b18 xenial us-east-2a
|
||||
5 started 52.15.144.25 i-078cfe79313d598c9 xenial us-east-2c
|
||||
6 started 52.15.57.157 i-09fd16d9328105ec0 trusty us-east-2a
|
||||
7 started 52.15.84.179 i-00fd70321a51b658b xenial us-east-2c
|
||||
8 started 52.15.106.225 i-0109a5fc942c53ed7 xenial us-east-2b
|
||||
9 started 52.15.153.246 i-0ab63e34959cace8d xenial us-east-2b
|
||||
10 started 52.15.52.103 i-0108a8cc0978954b5 xenial us-east-2a
|
||||
11 started 52.15.104.181 i-0f5562571c649f0f2 xenial us-east-2c
|
||||
```
|
||||
|
||||
## Interacting with the cluster
|
||||
|
||||
After the cluster is deployed, you may assume control over the cluster from any kubernetes-master or kubernetes-worker node.
|
||||
|
||||
First we need to download the credentials and client application to your local workstation:
|
||||
|
||||
Create the kubectl config directory.
|
||||
|
||||
```shell
|
||||
mkdir -p ~/.kube
|
||||
```
|
||||
Copy the kubeconfig file to the default location.
|
||||
|
||||
```shell
|
||||
juju scp kubernetes-master/0:config ~/.kube/config
|
||||
```
|
||||
|
||||
Fetch a binary for the architecture you have deployed. If your client is a
|
||||
different architecture, you will need to get the appropriate `kubectl` binary
|
||||
through other means.
|
||||
|
||||
```shell
|
||||
juju scp kubernetes-master/0:kubectl ./kubectl
|
||||
```
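
If your workstation's architecture differs from the deployed nodes (as noted above), one alternative is to download a matching official release build directly; the version and platform below are illustrative, so match the version reported by `juju status`:

```shell
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.5/bin/darwin/amd64/kubectl
chmod +x kubectl
```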
|
||||
|
||||
Query the cluster.
|
||||
|
||||
```shell
|
||||
./kubectl cluster-info
|
||||
Kubernetes master is running at https://52.15.104.227:443
|
||||
Heapster is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/heapster
|
||||
KubeDNS is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
|
||||
Grafana is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
|
||||
InfluxDB is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
|
||||
```
|
||||
|
||||
Congratulations, you've now set up a Kubernetes cluster!
|
||||
|
||||
## Scale up cluster
|
||||
|
||||
Want larger Kubernetes nodes? It is easy to request different sizes of cloud
resources from Juju by using **constraints**. You can increase the amount of
CPU or memory (RAM) in any of the systems requested by Juju. This allows you
to fine-tune the Kubernetes cluster to fit your workload. Use flags on the
bootstrap command or a separate `juju constraints` command. See the
[Juju documentation on machine constraints](https://jujucharms.com/docs/2.0/charms-constraints)
for details.
|
||||
|
||||
## Scale out cluster
|
||||
|
||||
Need more workers? We just add more units:
|
||||
|
||||
```shell
|
||||
juju add-unit kubernetes-worker
|
||||
```
|
||||
|
||||
Or multiple units at one time:
|
||||
|
||||
```shell
|
||||
juju add-unit -n3 kubernetes-worker
|
||||
```
|
||||
You can also ask for specific instance types or other machine-specific constraints. See the [constraints documentation](https://jujucharms.com/docs/stable/reference-constraints) for more information. Here are some examples; note that generic constraints such as `cores` and `mem` are more portable between clouds. In this case we'll ask for a specific instance type from AWS:
|
||||
|
||||
```shell
|
||||
juju set-constraints kubernetes-worker instance-type=c4.large
|
||||
juju add-unit kubernetes-worker
|
||||
```
|
||||
|
||||
You can also scale the etcd charm for more fault tolerant key/value storage:
|
||||
|
||||
```shell
|
||||
juju add-unit -n3 etcd
|
||||
```
|
||||
It is strongly recommended to run an odd number of units for quorum.
|
||||
|
||||
## Tear down cluster
|
||||
|
||||
If you want to stop the servers, you can destroy the Juju model or the
|
||||
controller. Use the `juju switch` command to get the current controller name:
|
||||
|
||||
```shell
|
||||
juju switch
|
||||
juju destroy-controller $controllername --destroy-all-models
|
||||
```
|
||||
This will shut down and terminate all running instances on that cloud.
|
||||
|
||||
## More Info
|
||||
|
||||
We stand up Kubernetes with open-source operations, or operations as code, known as charms. These charms are assembled from layers, which keeps the code smaller and more focused on the operations of just Kubernetes and its components.
|
||||
|
||||
The Kubernetes layer and bundles can be found in the `kubernetes`
|
||||
project on github.com:
|
||||
|
||||
- [Bundle location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/bundles)
|
||||
- [Kubernetes charm layer location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes)
|
||||
- [Canonical Kubernetes home](https://jujucharms.com/canonical-kubernetes/)
|
||||
|
||||
Feature requests, bug reports, pull requests or any feedback would be much appreciated.
|
||||
|
||||
### Cloud compatibility
|
||||
|
||||
This deployment methodology is continually tested on the following clouds:
|
||||
|
||||
[Amazon Web Service](https://jujucharms.com/docs/2.0/help-aws),
|
||||
[Microsoft Azure](https://jujucharms.com/docs/2.0/help-azure),
|
||||
[Google Compute Engine](https://jujucharms.com/docs/2.0/help-google),
|
||||
[Joyent](https://jujucharms.com/docs/2.0/help-joyent),
|
||||
[Rackspace](https://jujucharms.com/docs/2.0/help-rackspace), any
|
||||
[OpenStack cloud](https://jujucharms.com/docs/2.0/clouds#specifying-additional-clouds),
|
||||
and
|
||||
[VMware vSphere](https://jujucharms.com/docs/2.0/config-vmware).
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
Amazon Web Services (AWS) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
OpenStack | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
Google Compute Engine (GCE) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
|
@ -0,0 +1,484 @@
|
|||
---
|
||||
|
||||
---
|
||||
|
||||
This document describes how to deploy Kubernetes with Calico networking from scratch on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
|
||||
|
||||
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
|
||||
|
||||
This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We'll run Calico's etcd cluster on the master and install the Calico daemon on the master and nodes.
|
||||
|
||||
## Prerequisites and Assumptions
|
||||
|
||||
- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively, as do a number of other Linux distributions.
|
||||
- All machines should have Docker >= 1.7.0 installed.
|
||||
- To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
|
||||
- All machines should have connectivity to each other and the internet.
|
||||
- This guide assumes a DHCP server on your network to assign server IPs.
|
||||
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
|
||||
- This guide assumes that none of the hosts have been configured with any Kubernetes or Calico software.
|
||||
- This guide will set up a secure, TLS-authenticated API server.
|
||||
|
||||
## Set up the master
|
||||
|
||||
### Configure TLS
|
||||
|
||||
The master requires the root CA public key, `ca.pem`; the apiserver certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`.
|
||||
|
||||
1. Create the file `openssl.cnf` with the following contents.
|
||||
|
||||
```conf
|
||||
[req]
|
||||
req_extensions = v3_req
|
||||
distinguished_name = req_distinguished_name
|
||||
[req_distinguished_name]
|
||||
[ v3_req ]
|
||||
basicConstraints = CA:FALSE
|
||||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
DNS.1 = kubernetes
|
||||
DNS.2 = kubernetes.default
|
||||
IP.1 = 10.100.0.1
|
||||
IP.2 = ${MASTER_IPV4}
|
||||
```
|
||||
|
||||
> Replace ${MASTER_IPV4} with the Master's IP address on which the Kubernetes API will be accessible.
|
||||
|
||||
2. Generate the necessary TLS assets.
|
||||
|
||||
```shell
|
||||
# Generate the root CA.
|
||||
openssl genrsa -out ca-key.pem 2048
|
||||
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
|
||||
|
||||
# Generate the API server keypair.
|
||||
openssl genrsa -out apiserver-key.pem 2048
|
||||
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
|
||||
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
|
||||
```
|
||||
|
||||
3. You should now have the following three files: `ca.pem`, `apiserver.pem`, and `apiserver-key.pem`. Send the three files to your master host (using `scp` for example).
|
||||
4. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
|
||||
|
||||
```shell
|
||||
# Move keys
|
||||
sudo mkdir -p /etc/kubernetes/ssl/
|
||||
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
|
||||
|
||||
# Set permissions
|
||||
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
|
||||
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
|
||||
```
|
||||
|
||||
### Install Calico's etcd on the master
|
||||
|
||||
Calico needs its own etcd cluster to store its state. In this guide we install a single-node cluster on the master server.
|
||||
|
||||
> Note: In a production deployment, we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd node for simplicity.
|
||||
|
||||
1. Download the template manifest file:
|
||||
|
||||
```shell
|
||||
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
|
||||
```
|
||||
|
||||
2. Replace all instances of `<MASTER_IPV4>` in the `calico-etcd.manifest` file with your master's IP address.
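
For example, one way to apply this edit with `sed` (the address below is illustrative; use your master's actual IP):

```shell
export MASTER_IPV4=172.18.203.40
sed -i "s/<MASTER_IPV4>/${MASTER_IPV4}/g" calico-etcd.manifest
```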
|
||||
|
||||
3. Then, move the file to the `/etc/kubernetes/manifests` directory. This will not have any effect until we later run the kubelet, but Calico seems to tolerate the lack of its etcd in the interim.
|
||||
|
||||
```shell
|
||||
sudo mv -f calico-etcd.manifest /etc/kubernetes/manifests
|
||||
```
|
||||
|
||||
### Install Calico on the master
|
||||
|
||||
We need to install Calico on the master. This allows the master to route packets to the pods on other nodes.
|
||||
|
||||
1. Install the `calicoctl` tool:
|
||||
|
||||
```shell
|
||||
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
|
||||
chmod +x calicoctl
|
||||
sudo mv calicoctl /usr/bin
|
||||
```
|
||||
|
||||
2. Prefetch the calico/node container (this ensures that the Calico service starts immediately when we enable it):
|
||||
|
||||
```shell
|
||||
sudo docker pull calico/node:v0.15.0
|
||||
```
|
||||
|
||||
3. Download the `network-environment` template from the `calico-kubernetes` repository:
|
||||
|
||||
```shell
|
||||
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
|
||||
```
|
||||
|
||||
4. Edit `network-environment` to represent this node's settings:
|
||||
|
||||
- Replace `<KUBERNETES_MASTER>` with the IP address of the master. This should be the source IP address used to reach the Kubernetes worker nodes.
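
For example, assuming the template contains the literal `<KUBERNETES_MASTER>` placeholder (the address below is illustrative):

```shell
sed -i "s/<KUBERNETES_MASTER>/172.18.203.40/g" network-environment
```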
|
||||
|
||||
5. Move `network-environment` into `/etc`:
|
||||
|
||||
```shell
|
||||
sudo mv -f network-environment /etc
|
||||
```
|
||||
|
||||
6. Install, enable, and start the `calico-node` service:
|
||||
|
||||
```shell
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
|
||||
sudo systemctl enable /etc/systemd/calico-node.service
|
||||
sudo systemctl start calico-node.service
|
||||
```
|
||||
|
||||
### Install Kubernetes on the Master
|
||||
|
||||
We'll use the `kubelet` to bootstrap the Kubernetes master.
|
||||
|
||||
1. Download and install the `kubelet` and `kubectl` binaries:
|
||||
|
||||
```shell
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
|
||||
sudo chmod +x /usr/bin/kubelet /usr/bin/kubectl
|
||||
```
|
||||
|
||||
2. Install the `kubelet` systemd unit file and start the `kubelet`:
|
||||
|
||||
```shell
|
||||
# Install the unit file
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubelet.service
|
||||
|
||||
# Enable the unit file so that it runs on boot
|
||||
sudo systemctl enable /etc/systemd/kubelet.service
|
||||
|
||||
# Start the kubelet service
|
||||
sudo systemctl start kubelet.service
|
||||
```
|
||||
|
||||
3. Download and install the master manifest file, which will start the Kubernetes master services automatically:
|
||||
|
||||
```shell
|
||||
sudo mkdir -p /etc/kubernetes/manifests
|
||||
sudo wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest
|
||||
```
|
||||
|
||||
4. Check the progress by running `docker ps`. After a while, you should see the `etcd`, `apiserver`, `controller-manager`, `scheduler`, and `kube-proxy` containers running.
|
||||
|
||||
> Note: it may take some time for all the containers to start. Don't worry if `docker ps` doesn't show any containers for a while or if some containers start before others.
|
||||
|
||||
## Set up the nodes
|
||||
|
||||
The following steps should be run on each Kubernetes node.
|
||||
|
||||
### Configure TLS
|
||||
|
||||
Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`. We've already generated
|
||||
`ca.pem` and `ca-key.pem` for use on the Master. The worker public/private keypair should be generated for each Kubernetes node.
|
||||
|
||||
1. Create the file `worker-openssl.cnf` with the following contents.
|
||||
|
||||
```conf
|
||||
[req]
|
||||
req_extensions = v3_req
|
||||
distinguished_name = req_distinguished_name
|
||||
[req_distinguished_name]
|
||||
[ v3_req ]
|
||||
basicConstraints = CA:FALSE
|
||||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
IP.1 = $ENV::WORKER_IP
|
||||
```
|
||||
|
||||
2. Generate the necessary TLS assets for this worker. This relies on the worker's IP address, and the `ca.pem` and `ca-key.pem` files generated earlier in the guide.
|
||||
|
||||
```shell
|
||||
# Export this worker's IP address.
|
||||
export WORKER_IP=<WORKER_IPV4>
|
||||
```
|
||||
|
||||
```shell
|
||||
# Generate keys.
|
||||
openssl genrsa -out worker-key.pem 2048
|
||||
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
|
||||
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
|
||||
```
|
||||
|
||||
3. Send the three files (`ca.pem`, `worker.pem`, and `worker-key.pem`) to the host (using scp, for example).
|
||||
|
||||
4. Move the files to the `/etc/kubernetes/ssl` folder with the appropriate permissions:
|
||||
|
||||
```shell
|
||||
# Move keys
|
||||
sudo mkdir -p /etc/kubernetes/ssl/
|
||||
sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem
|
||||
|
||||
# Set permissions
|
||||
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
|
||||
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
|
||||
```
|
||||
|
||||
### Configure the kubelet worker
|
||||
|
||||
1. With your certs in place, create a kubeconfig for worker authentication in `/etc/kubernetes/worker-kubeconfig.yaml`; replace `<KUBERNETES_MASTER>` with the IP address of the master:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
clusters:
|
||||
- name: local
|
||||
cluster:
|
||||
server: https://<KUBERNETES_MASTER>:443
|
||||
certificate-authority: /etc/kubernetes/ssl/ca.pem
|
||||
users:
|
||||
- name: kubelet
|
||||
user:
|
||||
client-certificate: /etc/kubernetes/ssl/worker.pem
|
||||
client-key: /etc/kubernetes/ssl/worker-key.pem
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local
|
||||
user: kubelet
|
||||
name: kubelet-context
|
||||
current-context: kubelet-context
|
||||
```
|
||||
|
||||
### Install Calico on the node
|
||||
|
||||
On your compute nodes, it is important that you install Calico before Kubernetes. We'll install Calico using the provided `calico-node.service` systemd unit file:
|
||||
|
||||
1. Install the `calicoctl` binary:
|
||||
|
||||
```shell
|
||||
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
|
||||
chmod +x calicoctl
|
||||
sudo mv calicoctl /usr/bin
|
||||
```
|
||||
|
||||
2. Fetch the calico/node container:
|
||||
|
||||
```shell
|
||||
sudo docker pull calico/node:v0.15.0
|
||||
```
|
||||
|
||||
3. Download the `network-environment` template from the `calico-cni` repository:
|
||||
|
||||
```shell
|
||||
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/network-environment-template
|
||||
```
|
||||
|
||||
4. Edit `network-environment` to represent this node's settings:
|
||||
|
||||
- Replace `<DEFAULT_IPV4>` with the IP address of the node.
|
||||
- Replace `<KUBERNETES_MASTER>` with the IP or hostname of the master.
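
For example, assuming the template contains the literal placeholders named above (both addresses below are illustrative):

```shell
sed -i "s/<DEFAULT_IPV4>/172.18.203.41/g" network-environment      # this node's own IP
sed -i "s/<KUBERNETES_MASTER>/172.18.203.40/g" network-environment # the master's IP or hostname
```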
|
||||
|
||||
5. Move `network-environment` into `/etc`:
|
||||
|
||||
```shell
|
||||
sudo mv -f network-environment /etc
|
||||
```
|
||||
|
||||
6. Install the `calico-node` service:
|
||||
|
||||
```shell
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
|
||||
sudo systemctl enable /etc/systemd/calico-node.service
|
||||
sudo systemctl start calico-node.service
|
||||
```
|
||||
|
||||
7. Install the Calico CNI plugins:
|
||||
|
||||
```shell
|
||||
sudo mkdir -p /opt/cni/bin/
|
||||
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
|
||||
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam
|
||||
sudo chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
|
||||
```
|
||||
|
||||
8. Create a CNI network configuration file, which tells Kubernetes to create a network named `calico-k8s-network` and to use the calico plugins for that network. Create file `/etc/cni/net.d/10-calico.conf` with the following contents, replacing `<KUBERNETES_MASTER>` with the IP of the master (this file should be the same on each node):
|
||||
|
||||
```shell
|
||||
# Make the directory structure.
|
||||
mkdir -p /etc/cni/net.d
|
||||
|
||||
# Make the network configuration file
|
||||
cat >/etc/cni/net.d/10-calico.conf <<EOF
|
||||
{
|
||||
"name": "calico-k8s-network",
|
||||
"type": "calico",
|
||||
"etcd_authority": "<KUBERNETES_MASTER>:6666",
|
||||
"log_level": "info",
|
||||
"ipam": {
|
||||
"type": "calico-ipam"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
Since this is the only network we create, it will be used by default by the kubelet.
|
||||
|
||||
9. Verify that Calico started correctly:
|
||||
|
||||
```shell
|
||||
calicoctl status
|
||||
```
|
||||
|
||||
should show that Felix (Calico's per-node agent) is running, and there should be a BGP status line for each other node that you've configured and for the master. The "Info" column should show "Established":
|
||||
|
||||
```
|
||||
$ calicoctl status
|
||||
calico-node container is running. Status: Up 15 hours
|
||||
Running felix version 1.3.0rc5
|
||||
|
||||
IPv4 BGP status
|
||||
+---------------+-------------------+-------+----------+-------------+
|
||||
| Peer address | Peer type | State | Since | Info |
|
||||
+---------------+-------------------+-------+----------+-------------+
|
||||
| 172.18.203.41 | node-to-node mesh | up | 17:32:26 | Established |
|
||||
| 172.18.203.42 | node-to-node mesh | up | 17:32:25 | Established |
|
||||
+---------------+-------------------+-------+----------+-------------+
|
||||
|
||||
IPv6 BGP status
|
||||
+--------------+-----------+-------+-------+------+
|
||||
| Peer address | Peer type | State | Since | Info |
|
||||
+--------------+-----------+-------+-------+------+
|
||||
+--------------+-----------+-------+-------+------+
|
||||
```
|
||||
|
||||
If the "Info" column shows "Active" or some other value then Calico is having difficulty connecting to the other host. Check the IP address of the peer is correct and check that Calico is using the correct local IP address (set in the `network-environment` file above).
|
||||
|
||||
### Install Kubernetes on the Node
|
||||
|
||||
1. Download and Install the kubelet binary:
|
||||
|
||||
```shell
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
|
||||
sudo chmod +x /usr/bin/kubelet
|
||||
```
|
||||
|
||||
2. Install the `kubelet` systemd unit file:
|
||||
|
||||
```shell
|
||||
# Download the unit file.
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kubelet.service
|
||||
|
||||
# Enable and start the unit files so that they run on boot
|
||||
sudo systemctl enable /etc/systemd/kubelet.service
|
||||
sudo systemctl start kubelet.service
|
||||
```
|
||||
|
||||
3. Download the `kube-proxy` manifest:
|
||||
|
||||
```shell
|
||||
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kube-proxy.manifest
|
||||
```
|
||||
|
||||
4. In that file, replace `<KUBERNETES_MASTER>` with your master's IP. Then move it into place:
|
||||
|
||||
```shell
|
||||
sudo mkdir -p /etc/kubernetes/manifests/
|
||||
sudo mv kube-proxy.manifest /etc/kubernetes/manifests/
|
||||
```
|
||||
|
||||
## Configure kubectl remote access
|
||||
|
||||
To administer your cluster from a separate host (e.g. your laptop), you will need the root CA generated earlier, as well as an admin public/private keypair (`ca.pem`, `admin.pem`, `admin-key.pem`). Run the following steps on the machine that you will use to control your cluster.
|
||||
|
||||
1. Download the kubectl binary.
|
||||
|
||||
```shell
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
|
||||
sudo chmod +x /usr/bin/kubectl
|
||||
```
|
||||
|
||||
2. Generate the admin public/private keypair.
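
A minimal sketch of this step, mirroring the CA and worker keypair commands used earlier in this guide; it assumes `ca.pem` and `ca-key.pem` from the TLS section are available on this machine, and the CN is illustrative:

```shell
# Generate an admin client key and a certificate signed by the cluster root CA.
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
```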
|
||||
|
||||
3. Export the necessary variables, substituting in correct values for your machine.
|
||||
|
||||
```shell
|
||||
# Export the appropriate paths.
|
||||
export CA_CERT_PATH=<PATH_TO_CA_PEM>
|
||||
export ADMIN_CERT_PATH=<PATH_TO_ADMIN_PEM>
|
||||
export ADMIN_KEY_PATH=<PATH_TO_ADMIN_KEY_PEM>
|
||||
|
||||
# Export the Master's IP address.
|
||||
export MASTER_IPV4=<MASTER_IPV4>
|
||||
```
|
||||
|
||||
4. Configure your host `kubectl` with the admin credentials:
|
||||
|
||||
```shell
|
||||
kubectl config set-cluster calico-cluster --server=https://${MASTER_IPV4} --certificate-authority=${CA_CERT_PATH}
|
||||
kubectl config set-credentials calico-admin --certificate-authority=${CA_CERT_PATH} --client-key=${ADMIN_KEY_PATH} --client-certificate=${ADMIN_CERT_PATH}
|
||||
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
|
||||
kubectl config use-context calico
|
||||
```
|
||||
|
||||
Check your work with `kubectl get nodes`, which should succeed and display the nodes.
|
||||
|
||||
## Install the DNS Addon
|
||||
|
||||
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. This step makes use of the kubectl configuration made above.
|
||||
|
||||
```shell
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
|
||||
```
|
||||
|
||||
## Install the Kubernetes UI Addon (Optional)
|
||||
|
||||
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
|
||||
|
||||
```shell
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
|
||||
```
|
||||
|
||||
Note: The Kubernetes UI addon is deprecated and has been replaced with the Kubernetes Dashboard. You can install it by running:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
|
||||
```
|
||||
|
||||
You can find the docs at [Kubernetes Dashboard](https://github.com/kubernetes/dashboard)
|
||||
|
||||
## Launch other Services With Calico-Kubernetes
|
||||
|
||||
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.
|
||||
|
||||
## Connectivity to outside the cluster
|
||||
|
||||
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
|
||||
|
||||
### NAT on the nodes
|
||||
|
||||
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
|
||||
|
||||
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
|
||||
|
||||
```shell
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
|
||||
```
|
||||
|
||||
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
|
||||
|
||||
```shell
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
|
||||
```
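
For example, a concrete invocation for the default pool (the master address is illustrative):

```shell
ETCD_AUTHORITY=172.18.203.40:6666 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```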
|
||||
|
||||
### NAT at the border router
|
||||
|
||||
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
|
||||
|
||||
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
Bare-metal | custom | Ubuntu | Calico | [docs](/docs/getting-started-guides/ubuntu-calico) | | Community ([@djosborne](https://github.com/djosborne))
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
||||
|