---
title: "High Availability Kubernetes Clusters"
---

## Introduction

This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up, such as
the simple [Docker based single node cluster instructions](/{{page.version}}/docs/getting-started-guides/docker),
or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.

Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.

* TOC
{:toc}

## Overview
Setting up a truly reliable, highly available distributed system requires a number of steps. It is akin to
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
of these steps in detail, but a summary is given here to help guide and orient the user.

The steps involved are as follows:

* [Creating the reliable constituent nodes that collectively form our HA master implementation.](#reliable-nodes)
* [Setting up a redundant, reliable storage layer with clustered etcd.](#establishing-a-redundant-reliable-data-storage-layer)
a simple cluster set up, using etcd's built in discovery to build our cluster.

First, hit the etcd discovery service to create a new token:

```shell
curl https://discovery.etcd.io/new?size=3
```
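Every node must join the cluster using the same token, so it helps to fetch the discovery URL once and extract the token for reuse. A minimal sketch; the hard-coded URL stands in for a real response from the service and its token value is hypothetical:

```shell
# In practice this value comes from: curl https://discovery.etcd.io/new?size=3
# A hypothetical response is hard-coded here so the snippet runs offline.
DISCOVERY_URL="https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de"

# The trailing path component is the token that identifies this cluster;
# pass the same discovery URL to etcd on all three nodes.
TOKEN="${DISCOVERY_URL##*/}"
echo "token: ${TOKEN}"
```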
|
||||
|
||||
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
|
||||
|
||||
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
|
||||
|
for `${NODE_IP}` on each machine.

Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with

```shell
etcdctl member list
```

and

```shell
etcdctl cluster-health
```

You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
on a different node.
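If you want to script this validation, counting the members reported is one approach. A sketch assuming the classic one-member-per-line `etcdctl member list` output format; the member IDs and addresses below are hypothetical:

```shell
# Sample of the one-member-per-line output format; in a real script this
# would be: MEMBER_LIST=$(etcdctl member list)
MEMBER_LIST='8211f1d0f64f3269: name=node0 peerURLs=http://10.0.0.1:2380 clientURLs=http://10.0.0.1:2379
91bc3c398fb3c146: name=node1 peerURLs=http://10.0.0.2:2380 clientURLs=http://10.0.0.2:2379
fd422379fda50e48: name=node2 peerURLs=http://10.0.0.3:2380 clientURLs=http://10.0.0.3:2379'

# Each member line carries a name= field; counting them gives the cluster size.
MEMBER_COUNT=$(printf '%s\n' "${MEMBER_LIST}" | grep -c 'name=')
echo "etcd members: ${MEMBER_COUNT}"
```

For a three-node cluster you would expect the count to be 3; anything less suggests a node failed to join.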
Once you have replicated etcd set up correctly, we will also install the apiserver.

First you need to create the initial log file, so that Docker mounts a file instead of a directory:

```shell
touch /var/log/kube-apiserver.log
```

Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:

* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
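The directory and its basic auth entry can be created like so. A minimal sketch, assuming the `password,user,uid` layout for `basic_auth.csv`; the credential values are hypothetical, and a scratch directory stands in for `/srv/kubernetes/` so the snippet runs without root:

```shell
# Stand-in for /srv/kubernetes/ so this can run unprivileged; on a real
# master node the target would be /srv/kubernetes/ itself.
mkdir -p srv-kubernetes-demo

# One basic auth entry in password,user,uid form (hypothetical values).
echo 'examplepassword,admin,1' > srv-kubernetes-demo/basic_auth.csv
cat srv-kubernetes-demo/basic_auth.csv
```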
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller manager.

First, create empty log files on each node, so that Docker will mount the files rather than make new directories:

```shell
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
```

Next, set up the descriptions of the scheduler and controller manager pods on each node
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory.
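The pre-creation step above can be sanity-checked by confirming both paths exist as regular files before starting the containers. A sketch using a scratch directory in place of `/var/log` so it runs without root:

```shell
# Stand-in for /var/log so this can run unprivileged; on a real node
# the touched paths would be /var/log/kube-*.log.
mkdir -p varlog-demo
touch varlog-demo/kube-scheduler.log varlog-demo/kube-controller-manager.log

# If either path were a directory instead of a file, Docker would bind-mount
# a directory and the components could not write their logs there.
ls varlog-demo
```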
set the `--apiserver` flag to your replicated endpoint.

We indeed have an initial proof of concept tester for this, which is available [here](https://releases.k8s.io/release-1.1/examples/high-availability).

It implements the major concepts (with a few minor reductions for simplicity) of the podmaster HA implementation, alongside a quick smoke test using k8petstore.