This adds basic support for joining a node to an existing cluster. It
uses an RPC layer to send a join request to an existing member. The
response indicates whether the joining node should take part in the raft
cluster and what its peers should be. If raft should not be started, the
peers are the addresses of the current raft members to which it should
delegate consensus operations.
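In rough Go terms the exchange might look like the sketch below; the
type and field names (JoinRequest, JoinResponse, EnableRaft, RaftNodes)
are illustrative assumptions, not the actual wire format.

    package cluster

    // JoinRequest is sent over the RPC layer to an existing member.
    type JoinRequest struct {
        // Addr is the address the joining node advertises to the cluster.
        Addr string
    }

    // JoinResponse tells the joining node how to participate.
    type JoinResponse struct {
        // EnableRaft reports whether the joining node should start raft
        // and take part in consensus.
        EnableRaft bool

        // RaftNodes lists the raft peers: the peer set to join when
        // EnableRaft is true, or the existing raft members to delegate
        // consensus operations to when it is false.
        RaftNodes []string
    }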
To keep the meta store implementation agnostic of whether it's running
a local raft or not, a consensusStrategy type was also added.
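A minimal sketch of what that type could look like, assuming an
Apply/Leader method set that is not taken from the real code:

    package meta

    // consensusStrategy lets the meta store issue consensus operations
    // without knowing whether raft runs in-process.
    type consensusStrategy interface {
        // Apply proposes a command: a local-raft implementation applies
        // it through the in-process raft log, while a remote
        // implementation forwards it over RPC to a raft member.
        Apply(cmd []byte) error

        // Leader returns the address of the current raft leader.
        Leader() string
    }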
This adds some basic plumbing for making remote procedure calls to other
cluster members. This first implementation allows a node to contact the
raft leader and fetch a copy of the metadata. This will be used by
non-raft members to pull down the latest metadata.
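A sketch of the client side of that call; the function name, the Data
placeholder, and the gob-over-TCP encoding are assumptions for
illustration only:

    package meta

    import (
        "encoding/gob"
        "net"
        "time"
    )

    // Data stands in for the meta store snapshot (databases, retention
    // policies, and so on).
    type Data struct {
        Index uint64
    }

    // fetchMetaData dials the raft leader and pulls down a copy of the
    // metadata; a non-raft member would call this to stay current.
    func fetchMetaData(leaderAddr string) (*Data, error) {
        conn, err := net.DialTimeout("tcp", leaderAddr, 5*time.Second)
        if err != nil {
            return nil, err
        }
        defer conn.Close()

        var d Data
        if err := gob.NewDecoder(conn).Decode(&d); err != nil {
            return nil, err
        }
        return &d, nil
    }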
With this change, remote mapping no longer uses HTTP, as the HTTP ports
exposed by nodes in the cluster are not known cluster-wide. The TCP
ports exposed by the cluster service are, so this change uses that
functionality. Each RemoteMapper has its own dedicated connection pool
for each node, and remote-mapping TCP connections are in no way coupled
with query TCP connections.
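The per-node pooling could be sketched as below; poolManager and its
methods are hypothetical names, not the actual types:

    package cluster

    import (
        "net"
        "sync"
    )

    // poolManager keeps one small pool of idle TCP connections per node,
    // keyed by the node's cluster-service address, so remote-mapping
    // traffic never shares connections with query traffic.
    type poolManager struct {
        mu    sync.Mutex
        pools map[string]chan net.Conn
    }

    // conn returns an idle connection to addr, dialing a new one if the
    // pool is empty.
    func (pm *poolManager) conn(addr string) (net.Conn, error) {
        pm.mu.Lock()
        if pm.pools == nil {
            pm.pools = make(map[string]chan net.Conn)
        }
        pool, ok := pm.pools[addr]
        if !ok {
            pool = make(chan net.Conn, 4) // small dedicated pool per node
            pm.pools[addr] = pool
        }
        pm.mu.Unlock()

        select {
        case c := <-pool:
            return c, nil
        default:
            return net.Dial("tcp", addr)
        }
    }

    // putBack returns an idle connection to its pool, closing it when
    // the pool is already full.
    func (pm *poolManager) putBack(addr string, c net.Conn) {
        pm.mu.Lock()
        pool := pm.pools[addr]
        pm.mu.Unlock()
        select {
        case pool <- c:
        default:
            c.Close()
        }
    }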
The multiple checks for Mapper and Executor type -- the lack of DRYness
in this code -- meant the same checks would need to be copied anywhere
they were required. Therefore this change, as well as fixing the bug,
improves the situation a little by *asking* the Mappers what type of
Executor is required, instead of checking their concrete types. This
code is still not ideal.
Fixes #3355.
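One shape that *asking* could take, with all names illustrative rather
than the real API:

    package influxql

    // ExecutorKind enumerates the Executor implementations a Mapper can
    // require.
    type ExecutorKind int

    const (
        RawExecutorKind ExecutorKind = iota
        AggregateExecutorKind
    )

    // Mapper reports the Executor it needs, replacing the duplicated
    // concrete-type checks scattered through the planner.
    type Mapper interface {
        Open() error
        Close() error
        RequiredExecutor() ExecutorKind
    }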
With this change, the query engine code gathers information about
shards and tagsets by working with individual shards, collating the
information, and returning that to the client. It does not assume that any
particular shard is local, and accesses all shards through abstracted
Mappers, of which there are two types -- a Mapper type for Raw queries
and a second type for Aggregate queries. There are corresponding
Executors for each type of Mapper, but both types of Executors share the
same interface.
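A minimal sketch of the per-shard gathering and collation described
above; the Mapper method set here is an assumption:

    package influxql

    import "sort"

    // Mapper abstracts access to one shard, local or remote.
    type Mapper interface {
        TagSets() ([]string, error)
    }

    // collateTagSets asks each shard's Mapper for its tagsets and merges
    // them into one sorted, de-duplicated list, never assuming a shard
    // is local.
    func collateTagSets(mappers []Mapper) ([]string, error) {
        seen := make(map[string]struct{})
        var out []string
        for _, m := range mappers {
            sets, err := m.TagSets()
            if err != nil {
                return nil, err
            }
            for _, s := range sets {
                if _, ok := seen[s]; !ok {
                    seen[s] = struct{}{}
                    out = append(out, s)
                }
            }
        }
        sort.Strings(out)
        return out, nil
    }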
This commit adds a write-ahead log (WAL) to the shard. Entries are cached
in memory and periodically flushed back into the index. The WAL and
the cache are both partitioned into buckets so that flushing doesn't
stop the world for as long.
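The bucketing might look roughly like this; the partition count and all
names are illustrative:

    package wal

    import (
        "hash/fnv"
        "sync"
    )

    const partitionCount = 8 // assumed; any small fixed count works

    // partition holds one bucket of the in-memory cache; each bucket
    // locks and flushes independently.
    type partition struct {
        mu    sync.Mutex
        cache map[string][][]byte // series key -> pending entries
    }

    // Log is the shard's write-ahead log, split into buckets.
    type Log struct {
        partitions [partitionCount]*partition
    }

    func NewLog() *Log {
        l := &Log{}
        for i := range l.partitions {
            l.partitions[i] = &partition{cache: make(map[string][][]byte)}
        }
        return l
    }

    // Write appends an entry to the bucket its key hashes to; the
    // on-disk WAL append is elided here.
    func (l *Log) Write(key string, entry []byte) {
        h := fnv.New32a()
        h.Write([]byte(key))
        p := l.partitions[h.Sum32()%partitionCount]

        p.mu.Lock()
        p.cache[key] = append(p.cache[key], entry)
        p.mu.Unlock()
    }

    // flush swaps out one bucket's cache and hands it to the index, so
    // writes to the other buckets continue during the flush.
    func (p *partition) flush(writeToIndex func(map[string][][]byte)) {
        p.mu.Lock()
        snapshot := p.cache
        p.cache = make(map[string][][]byte)
        p.mu.Unlock()

        writeToIndex(snapshot)
    }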
Statements were only being normalized if a default database was included
in the query (usually via the query param 'db'). However, if no default
database was included, and none was an explicit part of the measurement
name, no database-existence check was run. This resulted in a later
panic during wildcard expansion.
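The shape of the fix, with illustrative names (Measurement here is a
pared-down stand-in for the parser's measurement node):

    package influxql

    import "fmt"

    type Measurement struct {
        Database string
        Name     string
    }

    // normalizeMeasurement fills in the default database and, crucially,
    // runs the existence check even when no default was supplied, so a
    // missing database fails here rather than panicking later during
    // wildcard expansion.
    func normalizeMeasurement(m *Measurement, defaultDB string, dbExists func(string) bool) error {
        if m.Database == "" {
            m.Database = defaultDB
        }
        if m.Database == "" {
            return fmt.Errorf("no database specified for measurement %q", m.Name)
        }
        if !dbExists(m.Database) {
            return fmt.Errorf("database not found: %s", m.Database)
        }
        return nil
    }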