For a more detailed explanation of the failover process and election terms, please see the full paper describing the protocol: [In Search of an Understandable Consensus Algorithm][raft-paper].
Many distributed systems are built on the Paxos protocol, but Paxos can be difficult to understand, and there are significant gaps between the protocol as described and a real-world implementation.
With these two constructs, leader election and log replication, you can build a system that can maintain state across multiple servers -- even in the event of multiple failures.
### Leader Election
The Raft protocol effectively works as a master-slave system whereby state changes are written to a single server, the leader, and distributed out to the rest of the servers in the cluster.
Raft ensures that there is only one leader at a time.
It does this by performing elections among the nodes in the cluster and requiring that a node must receive a majority of the votes in order to become leader.
For example, in a 3-node cluster a node needs 2 votes to become the leader; in a 5-node cluster, it needs 3 votes.
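As a back-of-the-envelope sketch (the `quorum` helper below is illustrative, not part of go-raft's API), the majority threshold is a simple function of cluster size:

```go
package main

import "fmt"

// quorum returns the minimum number of votes needed to win an
// election: a strict majority of the cluster.
func quorum(clusterSize int) int {
	return clusterSize/2 + 1
}

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("%d-node cluster: %d votes to become leader\n", n, quorum(n))
	}
}
```

This is one reason Raft clusters are usually sized at odd numbers: a 4-node cluster needs 3 votes, the same majority as a 5-node cluster, but tolerates fewer failures.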
### Log Replication

State is maintained through a log of commands. Each command makes a deterministic change to the state of the server that applies it.
By ensuring that this log is replicated identically across all the nodes in the cluster, we can reproduce the state at any point in the log by replaying each command sequentially.
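As a minimal sketch of this idea, assuming a hypothetical `Command` interface over a simple key-value state (none of these names are go-raft's actual types):

```go
package main

import "fmt"

// Command is a deterministic state transition; the interface and the
// key-value state here are illustrative, not go-raft's types.
type Command interface {
	Apply(state map[string]string)
}

// SetCommand assigns a value to a key in the state.
type SetCommand struct {
	Key, Value string
}

func (c SetCommand) Apply(state map[string]string) {
	state[c.Key] = c.Value
}

// replay rebuilds state by applying every log entry in order. Because
// each command is deterministic, every node that replays the same log
// arrives at the same state.
func replay(log []Command) map[string]string {
	state := make(map[string]string)
	for _, cmd := range log {
		cmd.Apply(state)
	}
	return state
}

func main() {
	log := []Command{
		SetCommand{Key: "name", Value: "raft"},
		SetCommand{Key: "role", Value: "leader"},
	}
	fmt.Println(replay(log))
}
```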
Replicating the log under normal conditions is done by sending an `AppendEntries` RPC from the leader to each of the other servers in the cluster (called Peers).
Each peer appends the entries from the leader through a 2-phase commit process, which ensures that a majority of servers in the cluster have the entries written to their logs before they are committed.
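As a rough sketch of the second phase, the leader can compute the commit point from the replication progress it tracks for each server. The `matchIndex` name follows the Raft paper rather than go-raft's internals, and this illustration omits details a full implementation needs, such as the current-term check:

```go
package main

import (
	"fmt"
	"sort"
)

// committedIndex returns the highest log index replicated on a
// majority of servers. matchIndex[i] is the highest index known to be
// stored on server i (the leader counts its own log here too).
func committedIndex(matchIndex []uint64) uint64 {
	sorted := append([]uint64(nil), matchIndex...)
	sort.Slice(sorted, func(a, b int) bool { return sorted[a] > sorted[b] })
	// The value at the majority position is held by a quorum, so every
	// entry up to it can be safely committed.
	return sorted[len(sorted)/2]
}

func main() {
	// 5 servers: two have replicated through index 7, one through 6,
	// two through 5. Index 6 is on 3 of 5 servers, so it is committed.
	fmt.Println(committedIndex([]uint64{7, 7, 6, 5, 5})) // prints 6
}
```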