* Improve the ping endpoint so that it can optionally check for leader agreement across all meta servers (a rough sketch follows this list)
* Add Ping method to the meta client
* Fix ClusterID tests
* Remove WaitForLeader from meta client and remove unnecessary references to it
* Add dir, hostname, and bind address to the top-level config, since they apply to services other than meta
* Add enabled flags to example toml for data and meta services
* Wire up add/remove raft peers and meta servers to meta service
* Update DROP SERVER to be either DROP META SERVER or DROP DATA SERVER
* Bring over statement executor from old meta package
* Start meta service client implementation
* Update meta service test to use the client
* Wire up node ID/meta server storage information
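Of these, the ping change is the most involved. A minimal sketch of how
the leader-agreement check could work, assuming an `all` query parameter
and a hypothetical `leaderOf` helper (neither is necessarily what the
service actually uses):

    package meta

    import "net/http"

    // Illustrative only: peer discovery and the leader query are stubbed.
    var metaServers = []string{"meta1:8091", "meta2:8091", "meta3:8091"}

    // leaderOf asks one meta server who it currently believes the leader is.
    func leaderOf(addr string) (string, error) {
        // stand-in for the real RPC or HTTP call to the meta server at addr
        return "meta1:8091", nil
    }

    // pingHandler returns 200 by default; with ?all=true it also verifies
    // that every meta server reports the same leader.
    func pingHandler(w http.ResponseWriter, r *http.Request) {
        if r.URL.Query().Get("all") != "" {
            leaders := map[string]bool{}
            for _, s := range metaServers {
                l, err := leaderOf(s)
                if err != nil {
                    http.Error(w, err.Error(), http.StatusServiceUnavailable)
                    return
                }
                leaders[l] = true
            }
            if len(leaders) != 1 {
                http.Error(w, "meta servers disagree on the leader", http.StatusServiceUnavailable)
                return
            }
        }
        w.WriteHeader(http.StatusOK)
    }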
Server registration and stats reporting have been removed from what was
once http://enterprise.influxdata.com. The app that lived there now
runs at http://usage.influxdata.com so that the subdomain can
eventually be repurposed. Because we also want to repurpose the
`enterprise-client` repo, it has been renamed to `usage-client`.
InfluxDB no longer needs the `registration` service, since all of
the endpoints it communicates with simply discard the data provided to
them.
Registration also involves uploading statistics and diagnostics for the
purposes of remote management, which means long-running goroutines will
be in effect. Therefore, move the code to a service model.
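A minimal sketch of the service model, assuming the usual Open/Close
shape with a closing channel and a WaitGroup (names are illustrative):

    package registration

    import (
        "sync"
        "time"
    )

    // Service wraps the long-running upload goroutine so the server can
    // open and close it like any other service.
    type Service struct {
        wg      sync.WaitGroup
        closing chan struct{}
    }

    func NewService() *Service {
        return &Service{closing: make(chan struct{})}
    }

    func (s *Service) Open() error {
        s.wg.Add(1)
        go s.run()
        return nil
    }

    func (s *Service) Close() error {
        close(s.closing)
        s.wg.Wait()
        return nil
    }

    func (s *Service) run() {
        defer s.wg.Done()
        t := time.NewTicker(time.Hour)
        defer t.Stop()
        for {
            select {
            case <-t.C:
                // upload statistics and diagnostics here
            case <-s.closing:
                return
            }
        }
    }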
Since INTO queries need absolute information about the database to
work, we need to create a loopback interface back to the cluster in
order to perform them.
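One plausible shape for that loopback, assuming a hypothetical
`PointsWriter` interface that the cluster's normal write path
satisfies (names and fields are illustrative):

    package cluster

    // Point is a minimal stand-in for a parsed data point.
    type Point struct {
        Name   string
        Tags   map[string]string
        Fields map[string]interface{}
    }

    // PointsWriter is the loopback: the query engine writes INTO
    // results through it, and the cluster's write path satisfies it.
    type PointsWriter interface {
        WritePoints(database, retentionPolicy string, points []Point) error
    }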
The server was closing by stopping the most depended-upon services
first, which caused various panics: higher-level services were still
processing tasks while the services they relied on shut down.
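The fix, in outline: close services in the reverse of the order they
were opened. A sketch, with the `Service` interface and `services`
slice assumed for illustration:

    package run

    type Service interface {
        Open() error
        Close() error
    }

    type Server struct {
        services []Service // appended in dependency order during startup
    }

    // Close stops services in the reverse of the order they were opened,
    // so higher-level services drain before their dependencies go away.
    func (s *Server) Close() error {
        for i := len(s.services) - 1; i >= 0; i-- {
            s.services[i].Close()
        }
        return nil
    }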
Fixes #3881
When starting influxd in a Docker container, the process needs to know
the host's address and port in order to create its NodeInfo correctly.
Previously, -hostname only allowed changing the hostname, and the port
was always 8088, which may not be correct when running multiple
containers on the same host.
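One way the host:port handling could look, assuming `net.SplitHostPort`
with a fallback to the old default port (a sketch, not the actual flag
parsing):

    package main

    import (
        "flag"
        "fmt"
        "net"
    )

    func main() {
        hostname := flag.String("hostname", "localhost", "host[:port] to advertise in NodeInfo")
        flag.Parse()

        host, port, err := net.SplitHostPort(*hostname)
        if err != nil {
            // no port supplied; keep the historical default
            host, port = *hostname, "8088"
        }
        fmt.Println(net.JoinHostPort(host, port))
    }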
Hostnames were always being resolved to an IP address and the IP
address was used as the host address and raft peer address. There
was no way to use an actual hostname instead of an IP address.
There is a race when stopping servers: the meta.Store closes before the
server has signaled that it is shutting down, so the reporting
goroutine repeatedly errors out in a fast loop during this window. That
creates a lot of noise in the logs.
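A sketch of the quieter loop, assuming a `closing` channel and a fixed
one-minute backoff (both are assumptions):

    package registration

    import (
        "log"
        "time"
    )

    type Service struct {
        closing chan struct{}
    }

    // sendReport is a stand-in for the real statistics upload.
    func (s *Service) sendReport() error { return nil }

    // report backs off between attempts and exits as soon as the server
    // signals shutdown, rather than retrying in a tight loop while the
    // meta store is closing.
    func (s *Service) report() {
        for {
            if err := s.sendReport(); err != nil {
                log.Printf("report failed: %s", err)
            }
            select {
            case <-time.After(time.Minute):
            case <-s.closing:
                return
            }
        }
    }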
This adds some basic plumbing for making remote procedure calls to
other cluster members. This first implementation allows a node to
contact the raft leader and fetch a copy of the metadata; non-raft
members will use it to pull down the latest metadata.
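In outline, the fetch could look like this; the RPC type byte, the
`MetaData` struct, and the gob encoding are illustrative stand-ins for
the real wire format:

    package meta

    import (
        "encoding/gob"
        "net"
    )

    const rpcFetchMetaData = 0x01 // illustrative RPC type byte

    // MetaData is a stand-in for the snapshot the leader returns.
    type MetaData struct {
        Index uint64 // raft index the snapshot was taken at
        Data  []byte // serialized meta store contents
    }

    // fetchMetaData dials the leader on the cluster TCP port, identifies
    // the RPC, and decodes the reply.
    func fetchMetaData(leaderAddr string) (*MetaData, error) {
        conn, err := net.Dial("tcp", leaderAddr)
        if err != nil {
            return nil, err
        }
        defer conn.Close()

        if _, err := conn.Write([]byte{rpcFetchMetaData}); err != nil {
            return nil, err
        }
        var md MetaData
        if err := gob.NewDecoder(conn).Decode(&md); err != nil {
            return nil, err
        }
        return &md, nil
    }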
With this change, remote mapping no longer uses HTTP, since the HTTP
ports exposed by nodes on the cluster are not known cluster-wide. The
TCP ports exposed by the cluster service are, so this change uses that
functionality. Each RemoteMapper has its own dedicated connection pool
for each node, and remote mapping TCP connections are in no way coupled
with query TCP connections.
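A sketch of the per-node pooling, with a hypothetical `connPool` type
(the real pool and its dialing details may differ):

    package cluster

    import (
        "net"
        "sync"
    )

    // connPool holds idle connections to a single node; a RemoteMapper
    // keeps one pool per node, separate from query connections.
    type connPool struct {
        mu    sync.Mutex
        addr  string
        conns []net.Conn
    }

    func (p *connPool) get() (net.Conn, error) {
        p.mu.Lock()
        defer p.mu.Unlock()
        if n := len(p.conns); n > 0 {
            c := p.conns[n-1]
            p.conns = p.conns[:n-1]
            return c, nil
        }
        return net.Dial("tcp", p.addr) // pool empty; dial the node's cluster port
    }

    func (p *connPool) put(c net.Conn) {
        p.mu.Lock()
        p.conns = append(p.conns, c)
        p.mu.Unlock()
    }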
With this change, the query engine gathers information about shards and
tagsets by working with individual shards, collating the information,
and returning it to the client. It does not assume that any particular
shard is local; it accesses all shards through abstracted Mappers, of
which there are two types: one for raw queries and one for aggregate
queries. Each Mapper type has a corresponding Executor, and both
Executors share the same interface.
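Reduced to a sketch, the shapes involved might look like this; the
exact method signatures are assumptions:

    package tsdb

    // Row is what executors emit to the client.
    type Row struct {
        Name    string
        Tags    map[string]string
        Columns []string
        Values  [][]interface{}
    }

    // Mapper works against a single shard, local or remote; the executor
    // collates chunks from many mappers.
    type Mapper interface {
        Open() error
        NextChunk() (interface{}, error)
        Close() error
    }

    // Executor is the shared interface: raw and aggregate executors both
    // satisfy it, each driving its own Mapper type.
    type Executor interface {
        Execute() <-chan *Row
    }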
This commit adds a write-ahead log to the shard. Entries are cached
in memory and periodically flushed back into the index. Both the WAL
and the cache are partitioned into buckets so that flushing doesn't
stop the world for as long.
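A sketch of the partitioning idea, with an assumed 16 buckets and FNV
key hashing (both illustrative):

    package tsdb

    import (
        "hash/fnv"
        "sync"
    )

    const partitions = 16

    // wal splits the in-memory cache into buckets by key; each bucket is
    // flushed to the index independently, so a flush only locks one slice
    // of the data at a time.
    type wal struct {
        mu     [partitions]sync.Mutex
        caches [partitions]map[string][][]byte // key -> pending entries
    }

    func newWAL() *wal {
        w := &wal{}
        for i := range w.caches {
            w.caches[i] = map[string][][]byte{}
        }
        return w
    }

    func (w *wal) write(key string, entry []byte) {
        p := partition(key)
        w.mu[p].Lock()
        w.caches[p][key] = append(w.caches[p][key], entry)
        w.mu[p].Unlock()
        // the entry would also be appended to the on-disk log (not shown)
    }

    func partition(key string) int {
        h := fnv.New32a()
        h.Write([]byte(key))
        return int(h.Sum32() % partitions)
    }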