Delete useless spaces
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
pull/3125/head^2
parent 914e779c97
commit 054d08df96

@@ -1,7 +1,7 @@
---
title: Building High-Availability Clusters
---

## Introduction

This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.

@@ -63,7 +63,6 @@ If you are using monit, you should also install the monit daemon (`apt-get install monit`)

On systemd systems, run `systemctl enable kubelet` and `systemctl enable docker`.
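
For example, on a systemd-based distribution both units can be enabled with something like the following (the unit names assume the stock kubelet and Docker packages):

```shell
# Enable the kubelet and docker units so they come back up after a reboot.
# Adjust the unit names if your distribution packages them differently.
systemctl enable kubelet
systemctl enable docker
```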

## Establishing a redundant, reliable data storage layer

The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is

@@ -97,7 +96,6 @@ Note that in `etcd.yaml` you should substitute the token URL you got above for `

and you should substitute a different name (e.g. `node-1`) for `${NODE_NAME}` and the correct IP address
for `${NODE_IP}` on each machine.
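
As a rough sketch (the node name, IP address, and destination path below are illustrative assumptions), the placeholders can be filled in with `sed` before dropping the manifest into the directory your kubelet watches for manifests:

```shell
# Hypothetical values for the first master; use a unique name and that machine's IP on each node.
NODE_NAME=node-1
NODE_IP=10.0.0.1
sed -e "s/\${NODE_NAME}/${NODE_NAME}/g" \
    -e "s/\${NODE_IP}/${NODE_IP}/g" \
    etcd.yaml > /etc/kubernetes/manifests/etcd.yaml
```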

#### Validating your cluster

Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate on master with
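
For example, the etcd v2 `etcdctl` client can report membership and health; the commands below are an illustrative check rather than necessarily the exact ones the full document uses:

```shell
# List the members the cluster knows about and check overall cluster health.
etcdctl member list
etcdctl cluster-health
```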

@@ -130,7 +128,6 @@ Regardless of how you choose to implement it, if you chose to use one of these o

to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in `/var/etcd/data`.
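
If you do use shared storage, one hypothetical layout (all paths here are illustrative) is a per-node subdirectory exposed at the path assumed above:

```shell
# On node-1: give this master its own directory on the shared volume and
# bind-mount it at the location the rest of the instructions expect.
mkdir -p /mnt/shared/etcd/node-1 /var/etcd/data
mount --bind /mnt/shared/etcd/node-1 /var/etcd/data
```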

## Replicated API Servers

Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.
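
Concretely, this again means placing a static pod manifest where the kubelet watches for them; the file name and directory below are conventional defaults and may differ in your setup:

```shell
# Put the apiserver pod manifest in the kubelet's manifest directory on each
# master; the kubelet then starts the apiserver and restarts it if it dies.
cp kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
```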

@@ -184,9 +181,9 @@ cluster state, such as the controller manager and scheduler. To achieve this re

instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in the API to perform
master election. We will use the `--leader-elect` flag for each scheduler and controller-manager; using a lease in the API ensures that only one instance of the scheduler and controller-manager is running at once.
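
As a minimal sketch, the flag is simply added to each component's command line (any other flags your configuration already uses stay as they are):

```shell
# Both components contend for a lease in the API; only the current leader acts.
kube-scheduler --leader-elect=true
kube-controller-manager --leader-elect=true
```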

The scheduler and controller-manager can be configured to talk to the API server on the same node (i.e. 127.0.0.1), or they can be configured to communicate using the load-balanced IP address of the API servers. Regardless of how they are configured, the scheduler and controller-manager will complete the leader election process mentioned above when using the `--leader-elect` flag.
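
As an illustration, the choice comes down to which endpoint the components are pointed at; `--master` is used here for brevity, and the load-balancer address is a placeholder:

```shell
# Option 1: talk to the apiserver on the same master.
kube-scheduler --leader-elect=true --master=http://127.0.0.1:8080

# Option 2: talk to a load-balanced address in front of all the apiservers
# (192.0.2.10 is a placeholder; substitute your load balancer's IP).
kube-scheduler --leader-elect=true --master=http://192.0.2.10:8080
```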

In case of a failure accessing the API server, the elected leader will not be able to renew the lease, causing a new leader to be elected. This is especially relevant when the scheduler and controller-manager access the API server via 127.0.0.1 and the API server on the same node is unavailable.

### Installing configuration files