Spell fixes

parent dbedb28b51
commit f2e481abd8
@@ -80,7 +80,7 @@ The master requires the root CA public key, `ca.pem`; the apiserver certificate,
 Calico needs its own etcd cluster to store its state. In this guide we install a single-node cluster on the master server.
 
-> Note: In a production deployment we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd for simplicitly.
+> Note: In a production deployment we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd for simplicity.
 
 1. Download the template manifest file:
@@ -13,7 +13,7 @@ This page assumes you have a working Juju deployed cluster.
 {% endcapture %}
 
 {% capture steps %}
-It is recommended to deploy individual Kubernetes clusters in their own models, so that there is a clean seperation between environments. To remove a cluster first find out which model it's in with `juju list-models`. The controller reserves an `admin` model for itself. If you have chosen to not name your model it might show up as `default`.
+It is recommended to deploy individual Kubernetes clusters in their own models, so that there is a clean separation between environments. To remove a cluster first find out which model it's in with `juju list-models`. The controller reserves an `admin` model for itself. If you have chosen to not name your model it might show up as `default`.
 
 ```
 $ juju list-models
@@ -16,7 +16,7 @@ This page assumes you have a working Juju deployed cluster.
 
 controller - The management node of a cloud environment. Typically you have one controller per cloud region, or more in HA environments. The controller is responsible for managing all subsequent models in a given environment. It contains the Juju API server and its underlying database.
 
-model - A collection of charms and their relationships that define a deployment. This includes machines and units. A controller can host multiple models. It is recommended to seperate Kubernetes clusters into individual models for management and isolation reasons.
+model - A collection of charms and their relationships that define a deployment. This includes machines and units. A controller can host multiple models. It is recommended to separate Kubernetes clusters into individual models for management and isolation reasons.
 
 charm - The definition of a service, including its metadata, dependencies with other services, required packages, and application management logic. It contains all the operational knowledge of deploying a Kubernetes cluster. Included charm examples are `kubernetes-core`, `easy-rsa`, `kibana`, and `etcd`.
@@ -25,4 +25,4 @@ unit - A given instance of a service. These may or may not use up a whole machin
 machine - A physical node, these can either be bare metal nodes, or virtual machines provided by a cloud.
 
 {% endcapture %}
 
 {% include templates/task.md %}
@@ -49,7 +49,7 @@ Now you're ready to install conjure-up and deploy Kubernetes.
 ```
 
-Note: During this set up phase cojure-up will ask you to "Setup an ipv6 subnet" with LXD, ensure you answer NO. ipv6 with Juju/LXD is currently unsupported.
+Note: During this set up phase conjure-up will ask you to "Setup an ipv6 subnet" with LXD, ensure you answer NO. ipv6 with Juju/LXD is currently unsupported.
 
 ### Walkthrough
@@ -28,7 +28,7 @@ Configure Datadog with your api-key, found in the [Datadog dashboard](). Replace
 juju configure datadog api-key=XXXX
 ```
 
-Finally, attach `datadog` to all applications you wish to montior. For example, kubernetes-master, kubernetes-worker, and etcd:
+Finally, attach `datadog` to all applications you wish to monitor. For example, kubernetes-master, kubernetes-worker, and etcd:
 
 ```
 juju add-relation datadog kubernetes-worker
@@ -74,7 +74,7 @@ juju add-relation kubernetes-worker filebeat
 
 ### Existing ElasticSearch cluster
 
-In the event an ElasticSearch cluster already exists, the following can be used to connect and leverage it instead of creating a new, seprate, cluster. First deploy the two beats, filebeat and topbeat
+In the event an ElasticSearch cluster already exists, the following can be used to connect and leverage it instead of creating a new, separate, cluster. First deploy the two beats, filebeat and topbeat
 
 ```
 juju deploy filebeat
@@ -122,7 +122,7 @@ juju add-relation nrpe kubeapi-load-balancer
 
 ### Existing install of Nagios
 
-If you already have an exisiting Nagios installation, the `nrpe-external-master` charm can be used instead. This will allow you to supply configuration options that map your exisiting external Nagios installation to NRPE. Replace `255.255.255.255` with the IP address of the nagios instance.
+If you already have an existing Nagios installation, the `nrpe-external-master` charm can be used instead. This will allow you to supply configuration options that map your existing external Nagios installation to NRPE. Replace `255.255.255.255` with the IP address of the nagios instance.
 
 ```
 juju deploy nrpe-external-master
@@ -45,7 +45,7 @@ $ route | grep default | head -n 1 | awk {'print $8'}
 establishing networking setup with etcd. Ensure this network range is not active
 on layers 2/3 you're deploying to, as it will cause collisions and odd behavior
 if care is not taken when selecting a good CIDR range to assign to flannel. It's
-also good practice to ensure you alot yourself a large enough IP range to support
+also good practice to ensure you allot yourself a large enough IP range to support
 how large your cluster will potentially scale. Class A IP ranges with /24 are
 a good option.
 {% endcapture %}
@@ -83,8 +83,8 @@ test 50M RWO Available 10s
 ```
 
 To consume these Persistent Volumes, your pods will need an associated
-Persistant Volume Claim with them, and is outside the scope of this README. See the
-[Persistant Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)
+Persistent Volume Claim with them, and is outside the scope of this README. See the
+[Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)
 documentation for more information.
 {% endcapture %}
@@ -42,9 +42,9 @@ Machine State DNS Inst id Series AZ
 
 In this example we can glean some information. The `Workload` column will show the status of a given service. The `Message` section will show you the health of a given service in the cluster. During deployment and maintenance these workload statuses will update to reflect what a given node is doing. For example the workload my say `maintenance` while message will describe this maintenance as `Installing docker`.
 
-During normal oprtation the Workload should read `active`, the Agent column (which reflects what the Juju agent is doing) should read `idle`, and the messages will either say `Ready` or another descriptive term. `juju status --color` will also return all green results when a cluster's deployment is healthy.
+During normal operation the Workload should read `active`, the Agent column (which reflects what the Juju agent is doing) should read `idle`, and the messages will either say `Ready` or another descriptive term. `juju status --color` will also return all green results when a cluster's deployment is healthy.
 
-Status can become unweildly for large clusters, it is then recommended to check status on individual services, for example to check the status on the workers only:
+Status can become unwieldy for large clusters, it is then recommended to check status on individual services, for example to check the status on the workers only:
 
 juju status kubernetes-workers