This changes backup and restore to work for TSM. It breaks them for b1 and bz1, but since those engines are being removed, that's acceptable.
The backup runs against any specified host and can back up the metastore, a database, a specific retention policy, or a specific shard. It can also take incremental backups with the `since` flag, which will only back up TSM files that have been created since that timestamp.
The backup is safe to run online. However, shards that are still hot for writes won't be able to create new TSM files while the backup of that single shard runs. If the backup isn't too large and the write throughput isn't too high, this shouldn't be a problem since the writes will just go into the WAL cache.
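For illustration, a rough Go sketch of how an incremental pass can pick up only new TSM files; the helper name, paths, and the mod-time check are assumptions for the example, not the actual backup code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// tsmFilesSince is a hypothetical helper illustrating the idea behind the
// `since` flag: walk a shard directory and keep only TSM files whose
// modification time is at or after the given timestamp.
func tsmFilesSince(shardDir string, since time.Time) ([]string, error) {
	var files []string
	err := filepath.Walk(shardDir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() || !strings.HasSuffix(path, ".tsm") {
			return nil
		}
		// Only include files created or modified since the last backup.
		if !info.ModTime().Before(since) {
			files = append(files, path)
		}
		return nil
	})
	return files, err
}

func main() {
	since := time.Now().Add(-24 * time.Hour)
	files, err := tsmFilesSince("/var/lib/influxdb/data/mydb/autogen/1", since)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, f := range files {
		fmt.Println(f)
	}
}
```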
Closing the store did not properly return an error for in-flight
writes because the closing channel was set to nil when closed. A
nil channel is never selected in a select statement, so writes
slip past the guard checks and trigger panics.
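A minimal sketch of the guard pattern involved, assuming a simplified Store rather than the actual tsdb code: closing the channel makes the guard fire for every in-flight writer, whereas setting it to nil means the case can never be selected.

```go
// Simplified illustration of the closing-channel guard; not the actual tsdb Store.
package main

import (
	"errors"
	"fmt"
)

var ErrStoreClosed = errors.New("store is closed")

type Store struct {
	closing chan struct{}
}

func NewStore() *Store { return &Store{closing: make(chan struct{})} }

// Close signals all writers by closing the channel. Setting s.closing = nil
// here was the bug: a nil channel case is never ready, so the select below
// would never take the closed branch.
func (s *Store) Close() { close(s.closing) }

func (s *Store) Write(p []byte) error {
	select {
	case <-s.closing:
		return ErrStoreClosed
	default:
	}
	// ... perform the write ...
	return nil
}

func main() {
	s := NewStore()
	s.Close()
	fmt.Println(s.Write([]byte("point"))) // prints "store is closed"
}
```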
Start of a lower-level file inspection tool. This currently dumps
summary statistics for the shards, index and WAL that can be used to
understand the shape of the data in the local shards. This utility
operates on the shard files directly rather than through the server
and is intended more for debugging/troubleshooting.
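Roughly the kind of pass such a tool performs, sketched with an assumed data directory and a made-up output format, not the tool's actual behavior:

```go
// Assumed paths and a made-up output format; the real tool's output differs.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dataDir := "/var/lib/influxdb/data" // assumed data directory

	sizes := map[string]int64{}
	counts := map[string]int{}
	err := filepath.Walk(dataDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		shard := filepath.Dir(path) // group files by the directory that holds them
		sizes[shard] += info.Size()
		counts[shard]++
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	for shard, size := range sizes {
		fmt.Printf("%s\t%d files\t%d bytes\n", shard, counts[shard], size)
	}
}
```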
The TSDBStore was returning a nil mapper if the shard did not exist. The caller always
assumed the mapper would not be nil, causing a panic. Instead, have the mapper skip the mapping
phase if its shard reference is nil. This fixes queries against data-only nodes and against
shards that are not fully replicated in the cluster.
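A minimal sketch of that guard, with illustrative types rather than the actual tsdb Mapper API:

```go
// Illustrative types only; not the actual tsdb Mapper API.
package main

import "fmt"

type Shard struct{ ID uint64 }

type Mapper struct {
	shard *Shard // nil when the shard does not exist on this node
}

// Open skips the mapping phase entirely when there is no local shard,
// instead of letting the caller dereference a nil mapper and panic.
func (m *Mapper) Open() error {
	if m.shard == nil {
		return nil // nothing to map on this node
	}
	// ... open cursors against the local shard ...
	return nil
}

func main() {
	m := &Mapper{}               // shard reference is nil
	fmt.Println(m.Open() == nil) // true: the query continues using other nodes' shards
}
```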
Fixes #3574
* Update the store to remove the WAL directories associated with a shard or database when they are deleted.
* Fix the Store so that it creates separate WAL directories for databases and retention policies (sketched below).
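A sketch of the resulting layout and cleanup, using hypothetical helper names rather than the actual Store methods:

```go
// Hypothetical helpers sketching the layout; not the actual Store code.
// Each shard's WAL lives under <wal-dir>/<database>/<retention policy>/<shard id>,
// so deleting a shard or database is a single RemoveAll on the matching subtree.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func shardWALPath(walDir, db, rp string, shardID uint64) string {
	return filepath.Join(walDir, db, rp, strconv.FormatUint(shardID, 10))
}

// deleteShardWAL removes the WAL directory for one shard.
func deleteShardWAL(walDir, db, rp string, shardID uint64) error {
	return os.RemoveAll(shardWALPath(walDir, db, rp, shardID))
}

// deleteDatabaseWAL removes the WAL directories for every shard in a database.
func deleteDatabaseWAL(walDir, db string) error {
	return os.RemoveAll(filepath.Join(walDir, db))
}

func main() {
	fmt.Println(shardWALPath("/var/lib/influxdb/wal", "mydb", "default", 1))
	// Output: /var/lib/influxdb/wal/mydb/default/1
}
```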
* Capitalize first letter of message
* Log all services starting consistently
* Remove some extraneous log statements in meta.Store
* Log data dirs for meta, data and hinted handoff
The multiple checks for Mapper and Executor type -- the lack of DRYness
in this code -- meant the same checks had to be copied everywhere they
were needed. Therefore this change, as well as fixing the bug, improves
the situation a little by *asking* the Mappers what type of Executor is
required. This code is still not ideal.
Fixes #3355.
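A sketch of the approach, with illustrative names rather than the actual query engine types: the Mapper reports which Executor it needs, so the type check lives in one place.

```go
// Illustrative sketch only; the actual query engine types differ.
package main

import "fmt"

type ExecutorType int

const (
	RawExecutorType ExecutorType = iota
	AggregateExecutorType
)

// Mapper reports which kind of Executor it requires.
type Mapper interface {
	ExecutorType() ExecutorType
}

type RawMapper struct{}

func (RawMapper) ExecutorType() ExecutorType { return RawExecutorType }

type AggregateMapper struct{}

func (AggregateMapper) ExecutorType() ExecutorType { return AggregateExecutorType }

// newExecutor chooses the executor in one place, driven by the mapper
// itself instead of repeated type switches at every call site.
func newExecutor(m Mapper) string {
	if m.ExecutorType() == AggregateExecutorType {
		return "aggregate executor"
	}
	return "raw executor"
}

func main() {
	fmt.Println(newExecutor(RawMapper{}))       // raw executor
	fmt.Println(newExecutor(AggregateMapper{})) // aggregate executor
}
```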
With this change, the query engine code gathers information about
shards and tagsets by working with individual shards, collating the
information, and returning that to the client. It does not assume that any
particular shard is local, and accesses all shards through abstracted
Mappers, of which there are two types -- a Mapper type for Raw queries
and a second type for Aggregate queries. There are corresponding
Executors for each type of Mapper, but both types of Executors share the
same interface.
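A sketch of the shape of that abstraction, assuming simplified interfaces rather than the actual tsdb/cluster API: the code that collates results only sees the Mapper interface, regardless of whether the shard is local or remote.

```go
// Simplified interfaces for illustration; not the actual tsdb/cluster API.
package main

import "fmt"

// Mapper yields chunks of mapped data from a single shard.
type Mapper interface {
	Open() error
	NextChunk() ([]byte, error) // a nil chunk means the mapper is drained
	Close() error
}

// collate drains every mapper and gathers the chunks for the executor.
// Raw and aggregate executors can share this because both mapper types
// satisfy the same interface.
func collate(mappers []Mapper) ([][]byte, error) {
	var out [][]byte
	for _, m := range mappers {
		if err := m.Open(); err != nil {
			return nil, err
		}
		for {
			c, err := m.NextChunk()
			if err != nil {
				m.Close()
				return nil, err
			}
			if c == nil {
				break
			}
			out = append(out, c)
		}
		if err := m.Close(); err != nil {
			return nil, err
		}
	}
	return out, nil
}

// staticMapper is a stand-in for a local or remote shard mapper.
type staticMapper struct{ chunks [][]byte }

func (m *staticMapper) Open() error  { return nil }
func (m *staticMapper) Close() error { return nil }
func (m *staticMapper) NextChunk() ([]byte, error) {
	if len(m.chunks) == 0 {
		return nil, nil
	}
	c := m.chunks[0]
	m.chunks = m.chunks[1:]
	return c, nil
}

func main() {
	ms := []Mapper{&staticMapper{chunks: [][]byte{[]byte("a"), []byte("b")}}}
	chunks, _ := collate(ms)
	fmt.Println(len(chunks)) // 2
}
```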
This commit adds a write ahead log to the shard. Entries are cached
in memory and periodically flushed back into the index. The WAL and
the cache are both partitioned into buckets so that flushing doesn't
stop the world for as long.
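A toy sketch of the partitioning idea, with assumed names and sizes rather than the shard's actual WAL: each key hashes to a bucket, and a flush only snapshots one bucket while the others keep accepting writes.

```go
// Toy sketch with assumed names and sizes; not the shard's actual WAL.
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numPartitions = 5

// partition is one bucket of the in-memory cache with its own lock.
type partition struct {
	mu    sync.Mutex
	cache map[string][][]byte
}

type wal struct {
	partitions [numPartitions]*partition
}

func newWAL() *wal {
	w := &wal{}
	for i := range w.partitions {
		w.partitions[i] = &partition{cache: map[string][][]byte{}}
	}
	return w
}

// partitionFor hashes the series key to pick a bucket.
func (w *wal) partitionFor(key string) *partition {
	h := fnv.New32a()
	h.Write([]byte(key))
	return w.partitions[h.Sum32()%numPartitions]
}

// Write appends a value to the cache bucket for the key.
func (w *wal) Write(key string, value []byte) {
	p := w.partitionFor(key)
	p.mu.Lock()
	p.cache[key] = append(p.cache[key], value)
	p.mu.Unlock()
}

// FlushPartition snapshots and clears one bucket, then hands the snapshot
// to the flush function; the other buckets keep accepting writes meanwhile.
func (w *wal) FlushPartition(i int, flush func(map[string][][]byte)) {
	p := w.partitions[i]
	p.mu.Lock()
	snapshot := p.cache
	p.cache = map[string][][]byte{}
	p.mu.Unlock()
	flush(snapshot)
}

func main() {
	w := newWAL()
	w.Write("cpu,host=a", []byte("1"))
	for i := 0; i < numPartitions; i++ {
		w.FlushPartition(i, func(m map[string][][]byte) {
			if len(m) > 0 {
				fmt.Println("flushed", len(m), "keys from partition", i)
			}
		})
	}
}
```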
* Add deleteMeasurement to store and shard
* Add DropMeasurement to DatabaseIndex
* Update ErrMeasurementNotFound and ErrDatabaseNotFound to not include the first line of the stack trace.