Restructure the Ubuntu Getting Started Section.

- This supersedes PR #1770
 - This breaks up the section from one huge installation page into:
   - A new index page.
   - New support grid with commercial and community support options.
   - Move to lifecycle-based pages based on operational tasks.
     - Add support for local deployments via LXD.
       - Added screenshots to images/docs/ubuntu
     - Add backup page.
     - Add decommissioning page.
     - Add a glossary of terms page.
     - Rewritten installation page.
     - Add logging page.
     - Add monitoring page.
     - Add networking page, flannel only for now, calico in progress.
     - Add a scaling page.
     - Add a security page.
     - Add a storage page.
     - Add a troubleshooting page.
     - Add an upgrade page.
     - Add a cluster validation page.
     - Add new ubuntu content to the index page.
       - Add Ubuntu to _data TOC.
     - Add warning about choosing ipv6 with LXD, thanks Ed Baluf.
     - Template-ize all the pages per jaredb's review.
pull/1932/head
Jorge O. Castro 2016-11-30 16:59:07 -05:00
parent 586656c027
commit ec5d9b0ec6
25 changed files with 1293 additions and 100 deletions

View File

@ -125,7 +125,7 @@ toc:
- title: Custom Cloud Solutions
section:
- docs/getting-started-guides/coreos/index.md
- /docs/getting-started-guides/juju/
- docs/getting-started-guides/ubuntu/index.md
- docs/getting-started-guides/rackspace.md
- title: On-Premise VMs
section:
@ -133,7 +133,6 @@ toc:
- docs/getting-started-guides/cloudstack.md
- docs/getting-started-guides/vsphere.md
- docs/getting-started-guides/photon-controller.md
- /docs/getting-started-guides/juju/
- docs/getting-started-guides/dcos.md
- docs/getting-started-guides/libvirt-coreos.md
- docs/getting-started-guides/ovirt.md
@ -152,10 +151,25 @@ toc:
- docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
- docs/getting-started-guides/centos/centos_manual_config.md
- docs/getting-started-guides/coreos/index.md
- docs/getting-started-guides/ubuntu/index.md
- title: Ubuntu
section:
- docs/getting-started-guides/ubuntu/automated.md
- docs/getting-started-guides/ubuntu/index.md
- docs/getting-started-guides/ubuntu/validation.md
- docs/getting-started-guides/ubuntu/backups.md
- docs/getting-started-guides/ubuntu/upgrades.md
- docs/getting-started-guides/ubuntu/scaling.md
- docs/getting-started-guides/ubuntu/installation.md
- docs/getting-started-guides/ubuntu/monitoring.md
- docs/getting-started-guides/ubuntu/networking.md
- docs/getting-started-guides/ubuntu/security.md
- docs/getting-started-guides/ubuntu/storage.md
- docs/getting-started-guides/ubuntu/troubleshooting.md
- docs/getting-started-guides/ubuntu/decommissioning.md
- docs/getting-started-guides/ubuntu/calico.md
- docs/getting-started-guides/ubuntu/glossary.md
- docs/getting-started-guides/ubuntu/local.md
- docs/getting-started-guides/ubuntu/logging.md
- docs/getting-started-guides/ubuntu/manual.md
- docs/getting-started-guides/windows/index.md
- docs/admin/node-conformance.md

View File

@ -31,6 +31,7 @@ a Kubernetes cluster from scratch.
Use the [Minikube getting started guide](/docs/getting-started-guides/minikube/) to try it out.
[Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local) - Ubuntu supports a 9-instance deployment on localhost via LXD.
### Hosted Solutions
@ -81,7 +82,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [AWS + CoreOS](/docs/getting-started-guides/coreos)
- [GCE + CoreOS](/docs/getting-started-guides/coreos)
- [AWS/GCE/Rackspace/Joyent + Ubuntu](/docs/getting-started-guides/ubuntu/automated)
- [Ubuntu + AWS/GCE/Rackspace/Joyent](/docs/getting-started-guides/ubuntu/)
- [Rackspace + CoreOS](/docs/getting-started-guides/rackspace)
#### On-Premises VMs
@ -90,7 +91,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [CloudStack](/docs/getting-started-guides/cloudstack) (uses Ansible, CoreOS and flannel)
- [Vmware vSphere](/docs/getting-started-guides/vsphere) (uses Debian)
- [Vmware Photon Controller](/docs/getting-started-guides/photon-controller) (uses Debian)
- [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/automated) (uses Juju, Ubuntu and flannel)
- [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
- [Vmware](/docs/getting-started-guides/coreos) (uses CoreOS and flannel)
- [libvirt-coreos.md](/docs/getting-started-guides/libvirt-coreos) (uses CoreOS)
- [oVirt](/docs/getting-started-guides/ovirt)
@ -105,7 +106,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo
- [Fedora single node](/docs/getting-started-guides/fedora/fedora_manual_config)
- [Fedora multi node](/docs/getting-started-guides/fedora/flannel_multi_node_cluster)
- [Centos](/docs/getting-started-guides/centos/centos_manual_config)
- [Bare Metal with Ubuntu](/docs/getting-started-guides/ubuntu/automated)
- [Bare Metal with Ubuntu](/docs/getting-started-guides/ubuntu/)
- [Ubuntu Manual](/docs/getting-started-guides/ubuntu/manual)
- [Docker Multi Node](/docs/getting-started-guides/docker-multinode)
- [CoreOS](/docs/getting-started-guides/coreos)
@ -152,11 +153,11 @@ CloudStack | Ansible | CoreOS | flannel | [docs](/docs/gettin
Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | Community ([@imkin](https://github.com/imkin))
Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | Community ([@alainroy](https://github.com/alainroy))
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | Community ([@coolsvap](https://github.com/coolsvap))
AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/automated) | [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck]*(https://github.com/chuckbutler) )
AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws) | Community ([@justinsb](https://github.com/justinsb))
AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops) | Community ([@justinsb](https://github.com/justinsb))
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
@ -166,7 +167,6 @@ OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs]
Rackspace | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/rackspace) | Community ([@doublerr](https://github.com/doublerr))
any | any | any | any | [docs](/docs/getting-started-guides/scratch) | Community ([@erictune](https://github.com/erictune))
*Note*: The above table is ordered by version test/used in notes followed by support level.
Definition of columns

View File

@ -0,0 +1,142 @@
---
title: Backups
---
{% capture overview %}
This page shows you how to back up and restore data from the different deployed services in a given cluster.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
## Exporting cluster data
Exporting of cluster data is not supported at this time.
## Restoring cluster data
Importing of cluster data is not supported at this time.
{% capture steps %}
## Exporting etcd data
Migrating etcd is a fairly easy task.
Step 1: Snapshot your existing cluster. This is encapsulated in the `snapshot`
action.
```
juju run-action etcd/0 snapshot
```
Results:
```
Action queued with id: b46d5d6f-5625-4320-8cda-b611c6ae580c
```
Step 2: Check the status of the action so you can grab the snapshot and verify
the sum. The copy.cmd result output is a copy/paste command for you to download
the exact snapshot that you just created.
Download the snapshot archive from the unit that created the snapshot and verify
the sha256 sum
```
juju show-action-output b46d5d6f-5625-4320-8cda-b611c6ae580c
```
Results:
```
results:
copy:
cmd: juju scp etcd/0:/home/ubuntu/etcd-snapshots/etcd-snapshot-2016-11-09-02.41.47.tar.gz
.
snapshot:
path: /home/ubuntu/etcd-snapshots/etcd-snapshot-2016-11-09-02.41.47.tar.gz
sha256: 1dea04627812397c51ee87e313433f3102f617a9cab1d1b79698323f6459953d
size: 68K
status: completed
```
Copy the snapshot to the local disk and then check the sha256sum.
```
juju scp etcd/0:/home/ubuntu/etcd-snapshots/etcd-snapshot-2016-11-09-02.41.47.tar.gz .
sha256sum etcd-snapshot-2016-11-09-02.41.47.tar.gz
```
Step 3: Deploy the new cluster leader, and attach the snapshot:
```
juju deploy etcd new-etcd --resource snapshot=./etcd-snapshot-2016-11-09-02.41.47.tar.gz
```
Step 4: Re-initialize the master with the data from the resource we just attached
in step 3.
```
juju run-action new-etcd/0 restore
```
## Restoring etcd data
Allows the operator to restore the data from a cluster-data snapshot. This
comes with caveats and a very specific path to restore a cluster:
the cluster must be in a state of only having a single member, so it is best to
deploy a new cluster using the etcd charm without adding any additional units.
```
juju deploy etcd new-etcd
```
> The above code snippet will deploy a single unit of etcd, as 'new-etcd'
```
juju run-action etcd/0 restore target=/mnt/etcd-backups
```
Once the restore action has completed, evaluate the cluster health. If the unit
is healthy, you may resume scaling the application to meet your needs.
- **param** target: destination directory to save the existing data.
- **param** skip-backup: Don't back up any existing data.
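For example, a restore that overrides both parameters might look like this (a sketch; the target path is only an example):

```
juju run-action new-etcd/0 restore target=/mnt/etcd-backups skip-backup=true
```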
## Snapshot etcd data
Allows the operator to snapshot a running cluster's data for use in cloning,
backing up, or migrating etcd clusters.

```
juju run-action etcd/0 snapshot target=/mnt/etcd-backups
```
- **param** target: destination directory to save the resulting snapshot archive.
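As with the export workflow above, the id returned when the snapshot action is queued can be fed to `juju show-action-output` to retrieve the archive path and checksum (substitute your own action id):

```
juju show-action-output b46d5d6f-5625-4320-8cda-b611c6ae580c
```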
{% endcapture %}
{% capture discussion %}
# Known Limitations
#### Loss of PKI warning
If you destroy the leader - identified by the `*` next to the unit number in the status output -
all TLS PKI will be lost. No PKI migration occurs outside
of the units requesting and registering the certificates.
> Important: Mismanaging this configuration will result in locking yourself
> out of the cluster, and can potentially break existing deployments in very
> strange ways relating to x509 validation of certificates, which affects both
> servers and clients.
#### Restoring from snapshot on a scaled cluster
Restoring from a snapshot on a scaled cluster will result in a broken cluster.
Etcd performs clustering during unit turn-up, and state is stored in Etcd itself.
During the snapshot restore phase, a new cluster ID is initialized, and peers
are dropped from the snapshot state to enable snapshot restoration. Please
follow the migration instructions above in the restore action description.
{% endcapture %}
{% include templates/task.md %}

View File

@ -2,12 +2,15 @@
title: Deploying Kubernetes with Calico Networking on Ubuntu
---
{% capture overview %}
This document describes how to deploy Kubernetes with Calico networking from scratch on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We'll run Calico's etcd cluster on the master and install the Calico daemon on the master and nodes.
{% endcapture %}
{% capture prerequisites %}
## Prerequisites and Assumptions
- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively as do a number of other Linux distributions.
@ -18,7 +21,9 @@ This guide will set up a simple Kubernetes cluster with a single Kubernetes mast
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
- This guide assumes that none of the hosts have been configured with any Kubernetes or Calico software.
- This guide will set up a secure, TLS-authenticated API server.
{% endcapture %}
{% capture steps %}
## Set up the master
### Configure TLS
@ -472,6 +477,8 @@ ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
{% endcapture %}
## Support Level
@ -482,3 +489,5 @@ Bare-metal | custom | Ubuntu | Calico | [docs](/docs/gettin
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
{% include templates/task.md %}

View File

@ -0,0 +1,74 @@
---
title: Decommissioning
---
{% capture overview %}
This page shows you how to properly decommission a cluster.
{% endcapture %}
Warning: By the time you've reached this step you should have backed up your workloads and pertinent data; this section covers the complete destruction of a cluster.
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
{% capture steps %}
It is recommended to deploy individual Kubernetes clusters in their own models, so that there is a clean separation between environments. To remove a cluster, first find out which model it is in with `juju list-models`. The controller reserves an `admin` model for itself. If you have chosen to not name your model it might show up as `default`.
```
$ juju list-models
Controller: aws-us-east-2
Model Cloud/Region Status Machines Cores Access Last connection
controller aws/us-east-2 available 1 2 admin just now
my-kubernetes-cluster* aws/us-east-2 available 12 22 admin 2 minutes ago
```
You can then destroy the model, which will in turn destroy the cluster inside of it:

```
$ juju destroy-model my-kubernetes-cluster
WARNING! This command will destroy the "my-kubernetes-cluster" model.
This includes all machines, applications, data and other resources.
Continue [y/N]? y
Destroying model
Waiting on model to be removed, 12 machine(s), 10 application(s)...
Waiting on model to be removed, 12 machine(s), 9 application(s)...
Waiting on model to be removed, 12 machine(s), 8 application(s)...
Waiting on model to be removed, 12 machine(s), 7 application(s)...
Waiting on model to be removed, 12 machine(s)...
Waiting on model to be removed...
$
```
This will destroy and decommission all nodes. You can confirm all nodes are destroyed by running `juju status`.
If you're using a public cloud this will terminate the instances. If you're on bare metal using MAAS this will release the nodes, optionally wipe the disk, power off the machines, and return them to the available pool of machines to deploy from.
## Cleaning up the Controller
If you're not using the controller for anything else, you will also need to remove the controller instance:
```
$ juju list-controllers
Use --refresh flag with this command to see the latest information.
Controller Model User Access Cloud/Region Models Machines HA Version
aws-us-east-2* - admin superuser aws/us-east-2 2 1 none 2.0.1
$ juju destroy-controller aws-us-east-2
WARNING! This command will destroy the "aws-us-east-2" controller.
This includes all machines, applications, data and other resources.
Continue? (y/N):y
Destroying controller
Waiting for hosted model resources to be reclaimed
All hosted models reclaimed, cleaning up controller machines
$
```
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,28 @@
---
title: Glossary and Terminology
---
{% capture overview %}
This page explains some of the terminology used in deploying Kubernetes with Juju.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
{% capture steps %}
controller - The management node of a cloud environment. Typically you have one controller per cloud region, or more in HA environments. The controller is responsible for managing all subsequent models in a given environment. It contains the Juju API server and its underlying database.
model - A collection of charms and their relationships that define a deployment. This includes machines and units. A controller can host multiple models. It is recommended to separate Kubernetes clusters into individual models for management and isolation reasons.
charm - The definition of a service, including its metadata, dependencies with other services, required packages, and application management logic. It contains all the operational knowledge of deploying a Kubernetes cluster. Included charm examples are `kubernetes-core`, `easy-rsa`, `kibana`, and `etcd`.
unit - A given instance of a service. These may or may not use up a whole machine, and may be colocated on the same machine. For example, you might have `kubernetes-worker`, `filebeat`, and `topbeat` units running on a single machine, but they are three distinct units of different services.
machine - A physical node; these can be either bare metal nodes or virtual machines provided by a cloud.
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,53 @@
---
title: Kubernetes on Ubuntu
---
{% capture overview %}
There are multiple ways to run a Kubernetes cluster with Ubuntu. These pages explain how to deploy Kubernetes on Ubuntu on multiple public and private clouds, as well as bare metal.
{% endcapture %}
{% capture body %}
## Official Ubuntu Guides
- [The Canonical Distribution of Kubernetes](/docs/getting-started-guides/ubuntu)
Supports AWS, GCE, Azure, Joyent, OpenStack, Bare Metal and local workstation deployment.
### Operational Guides
- [Installation](/docs/getting-started-guides/ubuntu/installation)
- [Validation](/docs/getting-started-guides/ubuntu/validation)
- [Backups](/docs/getting-started-guides/ubuntu/backups)
- [Upgrades](/docs/getting-started-guides/ubuntu/upgrades)
- [Scaling](/docs/getting-started-guides/ubuntu/scaling)
- [Logging](/docs/getting-started-guides/ubuntu/logging)
- [Monitoring](/docs/getting-started-guides/ubuntu/monitoring)
- [Networking](/docs/getting-started-guides/ubuntu/networking)
- [Security](/docs/getting-started-guides/ubuntu/security)
- [Storage](/docs/getting-started-guides/ubuntu/storage)
- [Troubleshooting](/docs/getting-started-guides/ubuntu/troubleshooting)
- [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning)
- [Glossary](/docs/getting-started-guides/ubuntu/glossary)
## Developer Guides
- [Local development using LXD](/docs/getting-started-guides/ubuntu/local)
## Community Ubuntu Guides
- [Manual Installation](/docs/getting-started-guides/ubuntu/manual)
- [Calico Configuration](/docs/getting-started-guides/ubuntu/calico)
Please feel free to submit guides to this section.
## Where to find us
We can normally be found in the following Slack channels:
- [sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
- [sig-cluster-ops](https://kubernetes.slack.com/messages/sig-cluster-ops/)
and we monitor the Kubernetes mailing lists.
{% endcapture %}
{% include templates/concept.md %}

View File

@ -5,7 +5,11 @@ assignees:
title: Setting up Kubernetes with Juju
---
Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/), a pure upstream distribution of Kubernetes designed for production usage. Out of the box it comes with the following components on 12 machines:
{% capture overview %}
Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://www.ubuntu.com/cloud/kubernetes), a pure upstream distribution of Kubernetes designed for production usage. This page shows you how to deploy a cluster.
{% endcapture %}
Out of the box it comes with the following components on 9 machines:
- Kubernetes (automated deployment, operations, and scaling)
- Three node Kubernetes cluster with one master and two worker nodes.
@ -17,46 +21,30 @@ Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://jujuc
- EasyRSA
- Performs the role of a certificate authority serving self signed certificates
to the requesting units of the cluster.
- Etcd (distributed key value store)
- ETCD (distributed key value store)
- Three unit cluster for reliability.
- Elastic stack
- Two units for ElasticSearch
- One unit for a Kibana dashboard
- Beats on every Kubernetes and Etcd units:
- Filebeat for forwarding logs to ElasticSearch
- Topbeat for inserting server monitoring data to ElasticSearch
The Juju Kubernetes work is curated by a dedicated team of community members,
let us know how we are doing. If you find any problems please open an
[issue on our tracker](https://github.com/juju-solutions/bundle-canonical-kubernetes)
so we can find them.
* TOC
{:toc}
{% capture prerequisites %}
## Prerequisites
- A working [Juju client](https://jujucharms.com/docs/2.0/getting-started-general); this does not have to be a Linux machine, as it can also be Windows or OS X.
- A [supported cloud](#cloud-compatibility).
- Bare Metal deployments are supported via [MAAS](http://maas.io). Refer to the [MAAS documentation](http://maas.io/docs/) for configuration instructions.
- OpenStack deployments are currently only tested on Icehouse and newer.
- Network access to the following domains
- *.jujucharms.com
- gcr.io
- Access to an Ubuntu mirror (public or private)
### On Ubuntu
On your local Ubuntu system:
### Configure Juju to use your cloud provider
```shell
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju
```
If you are using another distro/platform - please consult the
[getting started guide](https://jujucharms.com/docs/2.0/getting-started-general)
to install the Juju dependencies for your platform.
### Configure Juju to your favorite cloud provider
Deployment of the cluster is [supported on a wide variety of public clouds](#cloud-compatibility), private OpenStack clouds, or raw bare metal clusters.
Deployment of the cluster is [supported on a wide variety of public clouds](#cloud-compatibility), private OpenStack clouds, or raw bare metal clusters. Bare metal deployments are supported via [MAAS](http://maas.io/).
After deciding which cloud to deploy to, follow the [cloud setup page](https://jujucharms.com/docs/devel/getting-started-general#2.-choose-a-cloud) to configure deploying to that cloud.
@ -65,7 +53,7 @@ cloud provider you would like to use.
In this example
```shell
```
juju add-credential aws
credential name: my_credentials
select auth-type [userpass, oauth, etc]: userpass
@ -77,47 +65,48 @@ You can also just auto load credentials for popular clouds with the `juju autolo
Next we need to bootstrap a controller to manage the cluster. You need to define the cloud you want to bootstrap on, the region, and then any name for your controller node:
```shell
```
juju update-clouds # This command ensures all the latest regions are up to date on your client
juju bootstrap aws/us-east-2
```
or, another example, this time on Azure:
```shell
```
juju bootstrap azure/centralus
```
You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.0/controllers) for more information.
Note that each controller can host multiple Kubernetes clusters in a given cloud or region.
Note that each controller can host multiple Kubernetes clusters in a given cloud or region.
{% endcapture %}
{% capture steps %}
## Launch a Kubernetes cluster
The following command will deploy the initial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but
The following command will deploy the initial 9-node starter cluster. The speed of execution is very dependent on the performance of the cloud you're deploying to:
```shell
```
juju deploy canonical-kubernetes
```
After this command executes we need to wait for the cloud to return back instances and for all the automated deployment tasks to execute.
After this command executes the cloud will then launch instances and begin the deployment process.
## Monitor deployment
The `juju status` command provides information about each unit in the cluster. We recommend using the `watch -c juju status --color` command to get a real-time view of the cluster as it deploys. When all the states are green and "Idle", the cluster is ready to go.
The `juju status` command provides information about each unit in the cluster. Use the `watch -c juju status --color` command to get a real-time view of the cluster as it deploys. When all the states are green and "Idle", the cluster is ready to be used:
```shell
$ juju status
juju status
Output:
```
Model Controller Cloud/Region Version
default aws-us-east-2 aws/us-east-2 2.0.1
App Version Status Scale Charm Store Rev OS Notes
easyrsa 3.0.1 active 1 easyrsa jujucharms 3 ubuntu
elasticsearch active 2 elasticsearch jujucharms 19 ubuntu
etcd 2.2.5 active 3 etcd jujucharms 14 ubuntu
filebeat active 4 filebeat jujucharms 5 ubuntu
flannel 0.6.1 maintenance 4 flannel jujucharms 5 ubuntu
kibana active 1 kibana jujucharms 15 ubuntu
kubeapi-load-balancer 1.10.0 active 1 kubeapi-load-balancer jujucharms 3 ubuntu exposed
kubernetes-master 1.4.5 active 1 kubernetes-master jujucharms 6 ubuntu
kubernetes-worker 1.4.5 active 3 kubernetes-worker jujucharms 8 ubuntu exposed
@ -125,37 +114,24 @@ topbeat active 3 topbeat jujuc
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 52.15.95.92 Certificate Authority connected.
elasticsearch/0* active idle 1 52.15.67.111 9200/tcp Ready
elasticsearch/1 active idle 2 52.15.109.132 9200/tcp Ready
etcd/0 active idle 3 52.15.79.127 2379/tcp Healthy with 3 known peers.
etcd/1* active idle 4 52.15.111.66 2379/tcp Healthy with 3 known peers. (leader)
etcd/2 active idle 5 52.15.144.25 2379/tcp Healthy with 3 known peers.
kibana/0* active idle 6 52.15.57.157 80/tcp,9200/tcp ready
kubeapi-load-balancer/0* active idle 7 52.15.84.179 443/tcp Loadbalancer ready.
kubernetes-master/0* active idle 8 52.15.106.225 6443/tcp Kubernetes master services ready.
filebeat/3 active idle 52.15.106.225 Filebeat ready.
flannel/3 maintenance idle 52.15.106.225 Installing flannel.
flannel/3 active idle 52.15.106.225 Flannel subnet 10.1.48.1/24
kubernetes-worker/0* active idle 9 52.15.153.246 Kubernetes worker running.
filebeat/2 active idle 52.15.153.246 Filebeat ready.
flannel/2 active idle 52.15.153.246 Flannel subnet 10.1.53.1/24
topbeat/2 active idle 52.15.153.246 Topbeat ready.
kubernetes-worker/1 active idle 10 52.15.52.103 Kubernetes worker running.
filebeat/0* active idle 52.15.52.103 Filebeat ready.
flannel/0* active idle 52.15.52.103 Flannel subnet 10.1.31.1/24
topbeat/0* active idle 52.15.52.103 Topbeat ready.
kubernetes-worker/2 active idle 11 52.15.104.181 Kubernetes worker running.
filebeat/1 active idle 52.15.104.181 Filebeat ready.
flannel/1 active idle 52.15.104.181 Flannel subnet 10.1.83.1/24
topbeat/1 active idle 52.15.104.181 Topbeat ready.
Machine State DNS Inst id Series AZ
0 started 52.15.95.92 i-06e66414008eca61c xenial us-east-2c
1 started 52.15.67.111 i-050cbd7eb35fa0fe6 trusty us-east-2a
2 started 52.15.109.132 i-069196660db07c2f6 trusty us-east-2b
3 started 52.15.79.127 i-0038186d2c5103739 xenial us-east-2b
4 started 52.15.111.66 i-0ac66c86a8ec93b18 xenial us-east-2a
5 started 52.15.144.25 i-078cfe79313d598c9 xenial us-east-2c
6 started 52.15.57.157 i-09fd16d9328105ec0 trusty us-east-2a
7 started 52.15.84.179 i-00fd70321a51b658b xenial us-east-2c
8 started 52.15.106.225 i-0109a5fc942c53ed7 xenial us-east-2b
9 started 52.15.153.246 i-0ab63e34959cace8d xenial us-east-2b
@ -167,16 +143,16 @@ Machine State DNS Inst id Series AZ
After the cluster is deployed you may assume control over the cluster from any kubernetes-master, or kubernetes-worker node.
First we need to download the credentials and client application to your local workstation:
First you need to download the credentials and client application to your local workstation:
Create the kubectl config directory.
```shell
```
mkdir -p ~/.kube
```
Copy the kubeconfig file to the default location.
```shell
```
juju scp kubernetes-master/0:config ~/.kube/config
```
@ -184,14 +160,17 @@ Fetch a binary for the architecture you have deployed. If your client is a
different architecture you will need to get the appropriate `kubectl` binary
through other means.
```shell
```
juju scp kubernetes-master/0:kubectl ./kubectl
```
Query the cluster.
Query the cluster:
```shell
./kubectl cluster-info
./kubectl cluster-info
Output:
```
Kubernetes master is running at https://52.15.104.227:443
Heapster is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://52.15.104.227:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
@ -247,11 +226,13 @@ controller. Use the `juju switch` command to get the current controller name:
juju switch
juju destroy-controller $controllername --destroy-all-models
```
This will shutdown and terminate all running instances on that cloud.
This will shutdown and terminate all running instances on that cloud.
{% endcapture %}
{% capture discussion %}
## More Info
We stand up Kubernetes with open-source operations, or operations as code, known as charms. These charms are assembled from layers which keeps the code smaller and more focused on the operations of just Kubernetes and its components.
The Ubuntu Kubernetes deployment uses open-source operations, or operations as code, known as charms. These charms are assembled from layers which keeps the code smaller and more focused on the operations of just Kubernetes and its components.
The Kubernetes layer and bundles can be found in the `kubernetes`
project on github.com:
@ -260,29 +241,31 @@ project on github.com:
- [Kubernetes charm layer location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes)
- [Canonical Kubernetes home](https://jujucharms.com/canonical-kubernetes/)
Feature requests, bug reports, pull requests or any feedback would be much appreciated.
### Cloud compatibility
This deployment methodology is continually tested on the following clouds:
[Amazon Web Service](https://jujucharms.com/docs/2.0/help-aws),
[Microsoft Azure](https://jujucharms.com/docs/2.0/help-azure),
[Google Compute Engine](https://jujucharms.com/docs/2.0/help-google),
[Joyent](https://jujucharms.com/docs/2.0/help-joyent),
[Rackspace](https://jujucharms.com/docs/2.0/help-rackspace), any
[OpenStack cloud](https://jujucharms.com/docs/2.0/clouds#specifying-additional-clouds),
and
[Vmware vSphere](https://jujucharms.com/docs/2.0/config-vmware).
Feature requests, bug reports, pull requests or any feedback would be much appreciated.
{% endcapture %}
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Amazon Web Services (AWS) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
OpenStack | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Google Compute Engine (GCE) | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) )
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
{% include templates/task.md %}

View File

@ -0,0 +1,100 @@
---
title: Local Kubernetes development with LXD
---
{% capture overview %}
## Overview
Running Kubernetes locally has obvious development advantages, such as lower cost and faster iteration than constantly deploying and tearing down clusters on a public cloud. Ideally a Kubernetes developer can spawn all the instances locally and test code as they commit. This page will show you how to deploy a cluster on a local machine.
{% endcapture %}
The purpose of using [LXD](https://linuxcontainers.org/lxd/) on a local machine is to emulate the same deployment that a user would use in a cloud or bare metal. Each node is treated as a machine, with the same characteristics as production.
{% capture prerequisites %}
## Prerequisites
In order to simplify local deployment this method leverages the [Conjure Up tool](http://conjure-up.io/).
This will provide a pseudo-graphical set up in a terminal that is simple enough for developers to use without having to learn the complexities of operating Kubernetes. This will enable new developers to get started with a working cluster.
{% endcapture %}
{% capture steps %}
## Getting Started
First, you need to configure LXD to be able to host a large number of containers. To do this we need to update the [kernel parameters for inotify](https://github.com/lxc/lxd/blob/master/doc/production-setup.md#etcsysctlconf).
On your system, open up `/etc/sysctl.conf` (as root) and add the following lines:
```
fs.inotify.max_user_instances = 1048576
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```
_Note: This step may become unnecessary in the future_
Next, apply those kernel parameters (you should see the above options echoed back out to you):
```
sudo sysctl -p
```
Now you're ready to install conjure-up and deploy Kubernetes.
```
sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
```
Note: During this setup phase conjure-up will ask you to "Setup an ipv6 subnet" with LXD; ensure you answer NO. IPv6 with Juju/LXD is currently unsupported.
### Walkthrough
Initiate the installation with:
```
conjure-up kubernetes
```
For this walkthrough we are going to create a new controller, select the `localhost` Cloud type:
![Select Cloud](/images/docs/ubuntu/00-select-cloud.png)
Deploy the applications:
![Deploy Applications](/images/docs/ubuntu/01-deploy.png)
Wait for Juju bootstrap to finish:
![Bootstrap](/images/docs/ubuntu/02-bootstrap.png)
Wait for our Applications to be fully deployed:
![Waiting](/images/docs/ubuntu/03-waiting.png)
Run the final post processing steps to automatically configure your Kubernetes environment:
![Postprocessing](/images/docs/ubuntu/04-postprocessing.png)
Review the final summary screen:
![Final Summary](/images/docs/ubuntu/05-final-summary.png)
### Accessing the Cluster
You can access your Kubernetes cluster by running the following:
```
~/kubectl --kubeconfig=~/.kube/config
```

Or if you've already run this once it'll create a new config file as shown in the summary screen:

```
~/kubectl --kubeconfig=~/.kube/config.conjure-up
```
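From there you can point kubectl at whichever config file was produced and run ordinary commands against the local cluster, for example (a sketch):

```
~/kubectl --kubeconfig=~/.kube/config.conjure-up get nodes
```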
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,20 @@
---
title: Logging
---
{% capture overview %}
This page will explain how logging works within a Juju deployed cluster.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
## Agent Logging
The `juju debug-log` command will show all of the consolidated logs of all the Juju agents running on each node of the cluster. This can be useful for finding out why a specific node hasn't deployed or is in an error state. These agent logs are located in `/var/lib/juju/agents` on every node.
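For example, the output can be narrowed to a single unit's agent, which is often enough to diagnose a failed deployment (a sketch; the exact `--include` entity format may vary between Juju versions):

```
juju debug-log --replay --include unit-kubernetes-worker-0
```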
See the [Juju documentation](https://jujucharms.com/docs/stable/troubleshooting-logs) for more information.

View File

@ -4,10 +4,12 @@ assignees:
title: Manually Deploying Kubernetes on Ubuntu Nodes
---
{% capture overview %}
This document describes how to deploy Kubernetes on Ubuntu nodes, with 1 master and 3 nodes involved
in the given examples. You can scale to **any number of nodes** by changing some settings with ease.
The original idea was heavily inspired by @jainvipin's Ubuntu single-node
work, which has been merged into this document.
{% endcapture %}
The scripting referenced here can be used to deploy Kubernetes with
networking based either on Flannel or on a CNI plugin that you supply.
@ -17,9 +19,7 @@ use a CNI plugin instead.
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
* TOC
{:toc}
{% capture prerequisites %}
## Prerequisites
1. The nodes have installed docker version 1.2+ and bridge-utils to manipulate linux bridge.
@ -30,8 +30,9 @@ Ubuntu 15 which uses systemd instead of upstart.
4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.2.0, may work with higher versions.
5. All the remote servers can be ssh logged in without a password by using key authentication.
6. The remote user on all machines is using /bin/bash as its login shell, and has sudo access.
{% endcapture %}
{% capture steps %}
## Starting a Cluster
### Set up working directory
@ -290,7 +291,9 @@ You can use the `kubectl` command to check if the newly upgraded Kubernetes clus
To make sure the version of the upgraded cluster is what you expect, you will find these commands helpful.
* upgrade all components or master: `$ kubectl version`. Check the *Server Version*.
* upgrade node `vcap@10.10.102.223`: `$ ssh -t vcap@10.10.102.223 'cd /opt/bin && sudo ./kubelet --version'`*
* upgrade node `vcap@10.10.102.223`: `$ ssh -t vcap@10.10.102.223 'cd /opt/bin && sudo ./kubelet --version'`*
{% endcapture %}
## Support Level
@ -301,3 +304,5 @@ Bare-metal | custom | Ubuntu | flannel | [docs](/docs/gettin
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
{% include templates/task.md %}

View File

@ -0,0 +1,135 @@
---
title: Monitoring
---
{% capture overview %}
This page shows how to connect various monitoring and logging solutions to a Juju deployed cluster.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
{% capture steps %}
## Connecting Datadog
Datadog is a SaaS offering which includes support for a range of integrations, including Kubernetes and etcd. While the solution is SaaS/commercial, it includes a free tier which is supported with the following method. To deploy a full Kubernetes stack with Datadog out of the box, run: `juju deploy canonical-kubernetes-datadog`
### Installation of Datadog
To start, deploy the latest version of Datadog from the Charm Store:
```
juju deploy datadog
```
Configure Datadog with your `api-key`, found in the Datadog dashboard. Replace `XXXX` with your API key.
```
juju configure datadog api-key=XXXX
```
Finally, attach `datadog` to all applications you wish to monitor. For example, kubernetes-master, kubernetes-worker, and etcd:
```
juju add-relation datadog kubernetes-worker
juju add-relation datadog kubernetes-master
juju add-relation datadog etcd
```
## Connecting Elastic stack
The Elastic stack, formerly known as the "ELK" stack, refers to Elasticsearch and the suite of tools that facilitate log aggregation, monitoring, and dashboarding. To deploy a full Kubernetes stack with Elastic out of the box, run: `juju deploy canonical-kubernetes-elastic`
### New install of ElasticSearch
To start, deploy the latest versions of ElasticSearch, Kibana, Filebeat, and Topbeat from the Charm Store. This can be done in one command:
```
juju deploy beats-core
```
However, if you wish to customize the deployment, or proceed manually, the following commands can be issued:
```
juju deploy elasticsearch
juju deploy kibana
juju deploy filebeat
juju deploy topbeat
juju add-relation elasticsearch kibana
juju add-relation elasticsearch topbeat
juju add-relation elasticsearch filebeat
```
Finally, connect filebeat and topbeat to all applications you wish to monitor. For example, kubernetes-master and kubernetes-worker:
```
juju add-relation kubernetes-master topbeat
juju add-relation kubernetes-master filebeat
juju add-relation kubernetes-worker topbeat
juju add-relation kubernetes-worker filebeat
```
### Existing ElasticSearch cluster
In the event an ElasticSearch cluster already exists, the following can be used to connect to and leverage it instead of creating a new, separate cluster. First deploy the two beats, filebeat and topbeat:
```
juju deploy filebeat
juju deploy topbeat
```
Configure both filebeat and topbeat to connect to your ElasticSearch cluster, replacing `255.255.255.255` with the IP address in your setup.
```
juju configure filebeat elasticsearch=255.255.255.255
juju configure topbeat elasticsearch=255.255.255.255
```
Follow the instructions above to connect topbeat and filebeat to the applications you wish to monitor.
## Connecting Nagios
Nagios utilizes the Nagios Remote Plugin Executor (NRPE) as an agent on each node to derive machine-level details of the health of the machines and their applications.
### New install of Nagios
To start, deploy the latest version of the Nagios and NRPE charms from the store:
```
juju deploy nagios
juju deploy nrpe
```
Connect Nagios to NRPE
```
juju add-relation nagios nrpe
```
Finally, add NRPE to all applications deployed that you wish to monitor, for example `kubernetes-master`, `kubernetes-worker`, `etcd`, `easyrsa`, and `kubeapi-load-balancer`.
```
juju add-relation nrpe kubernetes-master
juju add-relation nrpe kubernetes-worker
juju add-relation nrpe etcd
juju add-relation nrpe easyrsa
juju add-relation nrpe kubeapi-load-balancer
```
### Existing install of Nagios
If you already have an existing Nagios installation, the `nrpe-external-master` charm can be used instead. This will allow you to supply configuration options that map your existing external Nagios installation to NRPE. Replace `255.255.255.255` with the IP address of the Nagios instance.
```
juju deploy nrpe-external-master
juju configure nrpe-external-master nagios_master=255.255.255.255
```
Once configured, connect nrpe-external-master as outlined above.
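A sketch of those relations, mirroring the list in the new-install section above:

```
juju add-relation nrpe-external-master kubernetes-master
juju add-relation nrpe-external-master kubernetes-worker
juju add-relation nrpe-external-master etcd
```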
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,53 @@
---
title: Networking
---
{% capture overview %}
This page shows how the various network components of a cluster work, and how to configure them.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
Kubernetes supports the [Container Network Interface (CNI)](https://github.com/containernetworking/cni).
This is a network plugin architecture that allows you to use whatever
Kubernetes-friendly SDN you want. Currently this means support for Flannel.
{% capture steps %}
# Flannel
## Usage
The flannel charm is a
[subordinate](https://jujucharms.com/docs/stable/authors-subordinate-services).
This charm will require a principal charm that implements the `kubernetes-cni`
interface in order to properly deploy.
```
juju deploy flannel
juju deploy etcd
juju deploy kubernetes-master
juju add-relation flannel kubernetes-master
juju add-relation flannel etcd
```
## Configuration
**iface** The interface to configure the flannel SDN binding. If this value is
an empty string or undefined, the code will attempt to find the default network
adapter, similar to the following command:
```bash
$ route | grep default | head -n 1 | awk {'print $8'}
```
**cidr** The network range to configure the flannel SDN to declare when
establishing networking setup with etcd. Ensure this network range is not active
on layers 2/3 you're deploying to, as it will cause collisions and odd behavior
if care is not taken when selecting a good CIDR range to assign to flannel. It's
also good practice to ensure you allot yourself a large enough IP range to support
how large your cluster will potentially scale. Class A IP ranges with /24 are
a good option.
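As a sketch (with a recent Juju client; the interface name and CIDR shown are only example values), both options can be set on the deployed flannel application:

```
juju config flannel iface=eth1 cidr=10.1.0.0/16
```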
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,79 @@
---
title: Scaling
---
{% capture overview %}
This page shows how to horizontally scale master and worker nodes on a cluster.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
Any of the applications can be scaled out post-deployment. The charms
update the status messages with progress, so it is recommended to run:
```
watch -c juju status --color
```
{% capture steps %}
## Kubernetes masters
The provided Kubernetes master nodes act as a control plane for the cluster. The deployment has been designed so that these nodes can be scaled independently of worker nodes to allow for more operational flexibility. To scale a master node up, simply execute:
```
juju add-unit kubernetes-master
```
This will add another master node to the control plane. See the [building high-availability clusters](http://kubernetes.io/docs/admin/high-availability) section of the documentation for more information.
## Kubernetes workers
The kubernetes-worker nodes are the load-bearing units of a Kubernetes cluster.
By default pods are automatically spread throughout the kubernetes-worker units
that you have deployed.
To add more kubernetes-worker units to the cluster:
```
juju add-unit kubernetes-worker
```
or specify machine constraints to create larger nodes:
```
juju set-constraints kubernetes-worker "cpu-cores=8 mem=32G"
juju add-unit kubernetes-worker
```
Refer to the
[machine constraints documentation](https://jujucharms.com/docs/stable/charms-constraints)
for other machine constraints that might be useful for the kubernetes-worker units.
## etcd
Etcd is used as a key-value store for the Kubernetes cluster. The bundle
defaults to one instance in this cluster.
For quorum reasons it is recommended to keep an odd number of etcd nodes: 3, 5, 7, or 9 nodes, depending on your cluster size. The CoreOS etcd documentation has a chart for the
[optimal cluster size](https://coreos.com/etcd/docs/latest/admin_guide.html#optimal-cluster-size)
to determine fault tolerance.
To add an etcd unit:
```
juju add-unit etcd
```
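The `-n` flag adds several units in one step, for example growing a single-unit etcd to the recommended three (a sketch):

```
juju add-unit etcd -n 2
```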
Shrinking of an etcd cluster after growth is not recommended.
## Juju Controller
A single node, called the controller node, is responsible for coordinating with all the Juju agents on each machine that manage Kubernetes. For production deployments it is recommended to enable HA of the controller node:

```
juju enable-ha
```

Enabling HA results in 3 controller nodes, which should be sufficient for most use cases. 5 and 7 controller nodes are also supported for extra large deployments.
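For extra large deployments you can request a specific controller count up front (a sketch; the count must be an odd number):

```
juju enable-ha -n 5
```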
Refer to the [Juju HA controller documentation](https://jujucharms.com/docs/2.0/controllers-ha) for more information.
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,33 @@
---
title: Security Considerations
---
{% capture overview %}
This page explains the security considerations of a deployed cluster and production recommendations.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
By default all connections between the provided nodes are secured via TLS by easyrsa, including the etcd cluster.
## Implementation
The TLS and easyrsa implementations use the following [layers](https://jujucharms.com/docs/2.0/developer-layers).
[layer-tls-client](https://github.com/juju-solutions/layer-tls-client)
[layer-easyrsa](https://github.com/juju-solutions/layer-easyrsa)
{% capture steps %}
## Limiting ssh access
By default the administrator can ssh to any deployed node in a cluster. You can mass-disable ssh access to the cluster nodes by issuing the following command:

```
juju model-config proxy-ssh=true
```
Note: The Juju controller node will still have open ssh access in your cloud, and will be used as a jump host in this case.
Refer to the [model management](https://jujucharms.com/docs/2.0/models) page in the Juju documentation for instructions on how to manage ssh keys.
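For example, a sketch of the key management commands referenced above (the key material is a placeholder):

```
juju ssh-keys                                   # list keys allowed on the current model
juju add-ssh-key "ssh-rsa AAAAB3... user@host"  # grant access for an additional key
```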
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,91 @@
---
title: Storage
---
{% capture overview %}
This page explains how to install and configure persistent storage on a cluster.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
{% capture steps %}
## Ceph Persistent Volumes
The Canonical Distribution of Kubernetes allows you to connect with durable
storage devices such as [Ceph](http://ceph.com). When paired with the
[Juju Storage](https://jujucharms.com/docs/2.0/charms-storage) feature you
can add durable storage easily and across clouds.
Deploy a minimum of three ceph-mon and three ceph-osd units.
```
juju deploy cs:ceph-mon -n 3
juju deploy cs:ceph-osd -n 3
```
Relate the units together:
```
juju add-relation ceph-mon ceph-osd
```
List the storage pools available to Juju for your cloud:
```
juju storage-pools
```
Output:
```
Name Provider Attrs
ebs ebs
ebs-ssd ebs volume-type=ssd
loop loop
rootfs rootfs
tmpfs tmpfs
```
> **Note**: This listing is for the Amazon Web Services public cloud.
> Different clouds may have different pool names.
Add a storage pool to the ceph-osd charm by NAME,SIZE,COUNT:
```
juju add-storage ceph-osd/0 osd-devices=ebs,10G,1
juju add-storage ceph-osd/1 osd-devices=ebs,10G,1
juju add-storage ceph-osd/2 osd-devices=ebs,10G,1
```
Next relate the storage cluster with the Kubernetes cluster:
```
juju add-relation kubernetes-master ceph-mon
```
We are now ready to enlist
[Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)
in Kubernetes which our workloads can consume via Persistent Volume (PV) claims.
```
juju run-action kubernetes-master/0 create-rbd-pv name=test size=50
```
This example created a "test" Rados Block Device (RBD) with a size of 50 MB.

Use watch on your Kubernetes cluster like the following, and you should see the PV
become enlisted and be marked as available:

```
watch kubectl get pv
```
Output:
```
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
test 50M RWO Available 10s
```
To consume these Persistent Volumes, your pods will need an associated
Persistent Volume Claim, which is outside the scope of this page. See the
[Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)
documentation for more information.
{% endcapture %}
{% include templates/task.md %}

View File

@ -0,0 +1,116 @@
---
title: Troubleshooting
---
{% capture overview %}
This page highlights how to troubleshoot the deployment of a Kubernetes cluster; it will not cover debugging of workloads inside Kubernetes.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
{% capture steps %}
## Understanding Cluster Status
Using `juju status` can give you some insight as to what's happening in a cluster:
```
Model  Controller  Cloud/Region   Version
kubes  work-multi  aws/us-east-2  2.0.2.1

App                Version  Status  Scale  Charm              Store       Rev  OS      Notes
easyrsa            3.0.1    active  1      easyrsa            jujucharms  3    ubuntu
etcd               2.2.5    active  1      etcd               jujucharms  17   ubuntu
flannel            0.6.1    active  2      flannel            jujucharms  6    ubuntu
kubernetes-master  1.4.5    active  1      kubernetes-master  jujucharms  8    ubuntu  exposed
kubernetes-worker  1.4.5    active  1      kubernetes-worker  jujucharms  11   ubuntu  exposed

Unit                  Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*            active    idle   0/lxd/0  10.0.0.55                       Certificate Authority connected.
etcd/0*               active    idle   0        52.15.47.228    2379/tcp        Healthy with 1 known peers.
kubernetes-master/0*  active    idle   0        52.15.47.228    6443/tcp        Kubernetes master services ready.
  flannel/1           active    idle            52.15.47.228                    Flannel subnet 10.1.75.1/24
kubernetes-worker/0*  active    idle   1        52.15.177.233   80/tcp,443/tcp  Kubernetes worker running.
  flannel/0*          active    idle            52.15.177.233                   Flannel subnet 10.1.63.1/24

Machine  State    DNS            Inst id              Series  AZ
0        started  52.15.47.228   i-0bb211a18be691473  xenial  us-east-2a
0/lxd/0  started  10.0.0.55      juju-153b74-0-lxd-0  xenial
1        started  52.15.177.233  i-0502d7de733be31bb  xenial  us-east-2b
```
In this example we can glean some information. The `Workload` column shows the status of a given service. The `Message` section shows the health of a given service in the cluster. During deployment and maintenance these workload statuses will update to reflect what a given node is doing. For example, the workload may say `maintenance` while the message describes this maintenance as `Installing docker`.

During normal operation the Workload should read `active`, the Agent column (which reflects what the Juju agent is doing) should read `idle`, and the messages will either say `Ready` or another descriptive term. `juju status --color` will also return all green results when a cluster's deployment is healthy.

Status can become unwieldy for large clusters, so it is recommended to check status on individual services. For example, to check the status of only the workers:

```
juju status kubernetes-worker
```

or just on the etcd cluster:

```
juju status etcd
```
Errors will have an obvious message, and will return a red result when used with `juju status --color`. Nodes that come up in this manner should be investigated.
## SSHing to units

You can SSH to individual units easily with the following convention, `juju ssh <servicename>/<unit#>`:

```
juju ssh kubernetes-worker/3
```

This will automatically SSH you to the 3rd worker unit.

```
juju ssh easyrsa/0
```

This will automatically SSH you to the easyrsa unit.
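For quick, one-off commands you may not need a full SSH session at all; `juju run` executes a command on a unit, or across an application, and returns the output. The commands below are only examples:

```
# Run a single command on one unit.
juju run --unit kubernetes-worker/0 "uptime"

# Run the same command across every unit of an application.
juju run --application kubernetes-worker "df -h /"
```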
## Collecting Debug information
Sometimes it is useful to collect all the information from a node to share with a developer so problems can be identified. This section covers how to use the debug action to collect this information. The debug action is only supported on `kubernetes-worker` nodes.

```
juju run-action kubernetes-worker/0 debug
```

Which returns:
```
Action queued with id: 4b26e339-7366-4dc7-80ed-255ac0377020
```
This produces a .tar.gz file which you can retrieve:

```
juju show-action-output 4b26e339-7366-4dc7-80ed-255ac0377020
```

This will give you the path for the debug results:
```
results:
command: juju scp debug-test/0:/home/ubuntu/debug-20161110151539.tar.gz .
path: /home/ubuntu/debug-20161110151539.tar.gz
status: completed
timing:
completed: 2016-11-10 15:15:41 +0000 UTC
enqueued: 2016-11-10 15:15:38 +0000 UTC
started: 2016-11-10 15:15:40 +0000 UTC
```
You can now copy the results to your local machine:

```
juju scp kubernetes-worker/0:/home/ubuntu/debug-20161110151539.tar.gz .
```
The archive includes basic information such as systemctl status, Juju logs,
charm unit data, etc. Additional application-specific information may be
included as well.
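Once the archive is on your local machine, it can be unpacked and browsed with standard tools; the filename below matches the example above:

```
# Unpack the debug archive into its own directory and inspect its contents.
mkdir debug-results
tar -xzf debug-20161110151539.tar.gz -C debug-results
ls -R debug-results
```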
## Common Problems
### etcd
### Kubernetes
By default there is no log aggregation of the Kubernetes nodes; each node logs locally. It is recommended to deploy the Elastic Stack for log aggregation if you desire centralized logging.
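Until such a stack is in place, logs can still be inspected per node over SSH. A minimal sketch, assuming the node's services are managed by systemd (the exact unit names depend on how the charms installed the services):

```
# Check the container runtime logs on a worker.
juju ssh kubernetes-worker/0 "sudo journalctl -u docker.service --since today"

# Follow the full system journal while reproducing a problem.
juju ssh kubernetes-worker/0 "sudo journalctl -f"
```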
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,97 @@
---
title: Upgrades
---
{% capture overview %}
This page will outline how to manage and execute a Kubernetes upgrade.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
## Assumptions
You should always back up all your data before attempting an upgrade. Don't forget to include the workload inside your cluster! Refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups).
{% endcapture %}
{% capture steps %}
## Preparing for an Upgrade
See if upgrades are available. The Kubernetes charms are updated bi-monthly and mentioned in the Kubernetes release notes. Important operational considerations and changes in behaviour are always documented in the release notes.

You can use `juju status` to see if an upgrade is available. There may be an upgrade available for Kubernetes, for etcd, or for both.
## Upgrade etcd

Backing up etcd requires an export and a snapshot; refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups) to create a snapshot. After the snapshot, upgrade the etcd service with:

```
juju upgrade-charm etcd
```
This will handle upgrades between minor versions of etcd. Major upgrades from etcd 2.x to 3.x are currently unsupported. Upgrade viability will be investigated when etcd 3.0 is finalized.
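After the charm upgrade settles, the `Version` column of `juju status` should reflect the new etcd release:

```
juju status etcd
```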
## Upgrade Kubernetes

### Master Upgrades

First you need to upgrade the masters:

```
juju upgrade-charm kubernetes-master
```

> **Note**: Always upgrade the masters before the workers.
### Worker Upgrades

Two methods of upgrading workers are supported: [Blue/Green deployment](http://martinfowler.com/bliki/BlueGreenDeployment.html) and upgrade-in-place. Both methods are provided for operational flexibility, and both are supported and tested. Blue/Green requires more hardware up front than an in-place upgrade, but it is a safer upgrade route.
### Blue/Green Upgrade

Assume an existing deployment where the workers are named `kubernetes-alpha`.

Deploy new worker(s):

```
juju deploy kubernetes-worker kubernetes-beta
```

Pause the old workers so your workload migrates:

```
juju run-action kubernetes-alpha/# pause
```

Verify old workloads have migrated with:

```
kubectl get pods -o wide
```

Tear down old workers with:

```
juju remove-application kubernetes-alpha
```
### In-place worker upgrade

```
juju upgrade-charm kubernetes-worker
```
## Verify upgrade

`kubectl version` should return the newer version.

It is recommended to rerun a [cluster validation](/docs/getting-started-guides/ubuntu/validation) to ensure that the cluster upgrade has successfully completed.
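For example, after the worker upgrades complete you might run:

```
# The server version reported here should match the upgraded release.
kubectl version

# Re-run the end-to-end suite against the upgraded cluster (see the validation page).
juju run-action kubernetes-e2e/0 test
```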
## Upgrade Flannel

Upgrading flannel can be done at any time; it is independent of Kubernetes upgrades. Be advised that networking is interrupted during the upgrade. You can initiate a flannel upgrade with:

```
juju upgrade-charm flannel
```
## Upgrade easyrsa

Upgrading easyrsa can be done at any time; it is independent of Kubernetes upgrades. Upgrading easyrsa should result in zero downtime as it is not a running service:

```
juju upgrade-charm easyrsa
```
## Rolling back etcd
At this time rolling back etcd is unsupported.
## Rolling back Kubernetes
At this time rolling back Kubernetes is unsupported.
{% endcapture %}
{% include templates/task.md %}


@ -0,0 +1,161 @@
---
title: Validation
---
{% capture overview %}
This page will outline how to ensure that a Juju deployed Kubernetes cluster has stood up correctly and is ready to accept workloads.
{% endcapture %}
{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}
{% capture steps %}
## End to End Testing
End-to-end (e2e) tests for Kubernetes provide a mechanism to test end-to-end
behavior of the system, and are the last signal to ensure end user operations
match developer specifications. Although unit and integration tests provide a
good signal, in a distributed system like Kubernetes it is not uncommon that a
minor change may pass all unit and integration tests, but cause unforeseen
changes at the system level.

The primary objectives of the e2e tests are to ensure a consistent and reliable
behavior of the Kubernetes code base, and to catch hard-to-test bugs before
users do, when unit and integration tests are insufficient.
### Usage
To deploy the end-to-end test suite, you need to relate the `kubernetes-e2e` charm to your existing kubernetes-master nodes and easyrsa:
```
juju deploy kubernetes-e2e
juju add-relation kubernetes-e2e kubernetes-master
juju add-relation kubernetes-e2e easyrsa
```
Once the relations have settled, run `juju status` until the workload status reads `Ready to test.`; you may then kick off an end-to-end validation test.
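For example, one convenient (though entirely optional) way to poll for this is:

```
# Refresh the status of the e2e application every two seconds, preserving colors.
watch -c juju status --color kubernetes-e2e
```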
### Running the e2e test
The e2e test is encapsulated as an action to ensure consistent runs of the
end-to-end test. The defaults are sensible for most deployments.

```
juju run-action kubernetes-e2e/0 test
```
### Tuning the e2e test
The e2e test is configurable. By default it will focus on or skip the declared
conformance tests in a cloud agnostic way. Default behaviors are configurable.
This allows the operator to test only a subset of the conformance tests, or to
test more behaviors not enabled by default. You can see all tunable options on
the charm by inspecting the schema output of the actions:
```
juju actions kubernetes-e2e --format=yaml --schema
```
Output:
```
test:
  description: Run end-to-end validation test suite
  properties:
    focus:
      default: \[Conformance\]
      description: Regex focus for executing the test
      type: string
    skip:
      default: \[Flaky\]
      description: Regex of tests to skip
      type: string
    timeout:
      default: 30000
      description: Timeout in nanoseconds
      type: integer
  title: test
  type: object
```
As an example, you can run a more limited set of tests for rapid validation of
a deployed cluster. The following example will skip the `Flaky`, `Slow`, and
`Feature` labeled tests:
```
juju run-action kubernetes-e2e/0 test skip='\[(Flaky|Slow|Feature:.*)\]'
```

> **Note**: the regex is escaped because of how bash handles square brackets.
To see the different types of tests the Kubernetes end-to-end charm has access
to, see the upstream documentation on the
[kinds of tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#kinds-of-tests),
and make sure you understand which subsets of the tests you are running.
### More information on end-to-end testing
Along with the descriptions above, end-to-end testing is a much larger subject
than this page can encapsulate. There is far more information in the
[end-to-end testing guide](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md).
### Evaluating end-to-end results
It is not enough to simply run the test. Result output is stored in two places:
the raw output of the e2e run is available via the `juju show-action-output`
command, and as a flat file on disk on the `kubernetes-e2e` unit that
executed the test.

> **Note**: The results will only be available once the action has completed the
test run. End-to-end testing can be quite time intensive, often taking
**more than an hour**, depending on configuration.
#### Flat file

Here's how to copy the output out as a file:

```
juju run-action kubernetes-e2e/0 test
```

Output:

```
Action queued with id: 4ceed33a-d96d-465a-8f31-20d63442e51b
```

Copy output to your local machine:

```
juju scp kubernetes-e2e/0:4ceed33a-d96d-465a-8f31-20d63442e51b.log .
```
#### Action result output

Or you can just show the output inline:

```
juju run-action kubernetes-e2e/0 test
```

Output:

```
Action queued with id: 4ceed33a-d96d-465a-8f31-20d63442e51b
```

Show the results in your terminal:

```
juju show-action-output 4ceed33a-d96d-465a-8f31-20d63442e51b
```
### Known issues
The e2e test suite assumes egress network access. It will pull container
images from `gcr.io`, so you will need to have this registry unblocked in your
firewall to successfully run the e2e tests. Alternatively, you may use the exposed
proxy settings [properly configured](https://github.com/juju-solutions/bundle-canonical-kubernetes#proxy-configuration)
on the kubernetes-worker units.
## Upgrading the e2e tests
The e2e tests are always expanding; you can see if an upgrade is available by running `juju status kubernetes-e2e`.

When an upgrade is available, upgrade your deployment:

```
juju upgrade-charm kubernetes-e2e
```
{% endcapture %}
{% include templates/task.md %}

(Six binary image files added in this commit are not shown in the diff.)