This has a few changes in it (unfortunately). The main change is to run compactions
concurrently. While implementing this, a few query and performance bugs showed up that
are also fixed by this commit.
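As a rough sketch of the concurrency pattern, assuming a goroutine per compaction group bounded by a semaphore; `CompactionGroup` and `runCompaction` are hypothetical stand-ins for the engine's actual types:

```go
package main

import (
	"log"
	"sync"
)

// CompactionGroup is a hypothetical stand-in for a set of TSM files
// that compact together; the real engine's types differ.
type CompactionGroup []string

// runCompaction is a hypothetical placeholder for the actual compaction work.
func runCompaction(g CompactionGroup) error {
	log.Printf("compacting %d files", len(g))
	return nil
}

func main() {
	groups := []CompactionGroup{
		{"000001-01.tsm", "000002-01.tsm"},
		{"000003-01.tsm", "000004-01.tsm"},
	}

	// Bound how many compactions run at once with a buffered channel.
	sem := make(chan struct{}, 2)
	var wg sync.WaitGroup
	for _, g := range groups {
		wg.Add(1)
		go func(g CompactionGroup) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release the slot
			if err := runCompaction(g); err != nil {
				log.Printf("compaction failed: %v", err)
			}
		}(g)
	}
	wg.Wait()
}
```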
* remove rolloverTSMFileSize constant that is no longer used
* remove the maxGenerationFileCount limit since it is no longer necessary with the new compaction scheme. We no longer read WAL segments as part of the compaction, so memory is only used as we read in each individual key
* remove minFileCount and switch to a user-configurable variable
* remove the mutex from WALSegmentWriter. There's never more than one open in the WAL at a time, and it's not exported through any function, so the WAL's lock should be used. This simplifies keeping track of the last write time and removes a bunch of unnecessary locks.
* update WALSegmentWriter.Write to take the compressed bytes so that encoding and compression can occur before the call to write, while we don't hold the WAL lock (see the lock-scope sketch after this list)
* remove a bunch of unnecessary locking in WAL.writeToLog
* Add check for TSM file magic number and version (see the header-check sketch after this list)
* Remove old tsm, log, and unused cursor code
* Remove references to tsm1dev everywhere except in the inspector
* Clean up config options for compaction and snapshotting
* Remove old TSM configuration options
* Update the config.sample.toml with TSM options (an illustrative snippet follows this list)
* Update WAL compaction to force a compaction if the WAL has been cold for writes for a configurable period of time (1h by default); see the cold-check sketch after this list
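To illustrate the WALSegmentWriter.Write change above, here's a minimal sketch of moving encoding and compression outside the lock; the types are simplified and gzip is a stand-in for the WAL's actual compressor:

```go
package main

import (
	"bytes"
	"compress/gzip" // stand-in compressor; the real WAL uses a different codec
	"sync"
)

// Simplified stand-ins for the real WAL types.
type WALSegmentWriter struct {
	buf bytes.Buffer
}

// Write appends bytes that were already encoded and compressed by the
// caller, so the writer itself needs no mutex.
func (w *WALSegmentWriter) Write(compressed []byte) error {
	_, err := w.buf.Write(compressed)
	return err
}

type WAL struct {
	mu     sync.Mutex
	writer *WALSegmentWriter
}

func (l *WAL) writeToLog(entry []byte) error {
	// Do the expensive work first, while no lock is held.
	var compressed bytes.Buffer
	zw := gzip.NewWriter(&compressed)
	if _, err := zw.Write(entry); err != nil {
		return err
	}
	if err := zw.Close(); err != nil {
		return err
	}

	// Hold the WAL lock only for the actual append.
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.writer.Write(compressed.Bytes())
}

func main() {
	w := &WAL{writer: &WALSegmentWriter{}}
	_ = w.writeToLog([]byte("cpu,host=a value=1"))
}
```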
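A sketch of the magic number/version check. The 5-byte header layout and the constant values below are assumptions about the tsm1 format, not taken verbatim from this commit:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// Assumed TSM1 header layout: a 4-byte big-endian magic number followed
// by a 1-byte version. Treat these constants as assumptions.
const (
	tsmMagic   uint32 = 0x16D116D1
	tsmVersion byte   = 1
)

func checkTSMHeader(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	header := make([]byte, 5)
	if _, err := io.ReadFull(f, header); err != nil {
		return fmt.Errorf("short header: %v", err)
	}
	if magic := binary.BigEndian.Uint32(header[:4]); magic != tsmMagic {
		return fmt.Errorf("invalid magic number: 0x%08X", magic)
	}
	if header[4] != tsmVersion {
		return fmt.Errorf("unsupported version: %d", header[4])
	}
	return nil
}

func main() {
	if err := checkTSMHeader("000001-01.tsm"); err != nil {
		fmt.Println(err)
	}
}
```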
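An illustrative sample of the kind of TSM options added to config.sample.toml; the key names here are assumptions, not the exact keys from this commit:

```toml
# Illustrative sketch only; key names are assumptions.
[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"

  # Minimum number of TSM files before a compaction runs
  # (replaces the old hard-coded minFileCount).
  compact-min-file-count = 3

  # Force a full compaction if the WAL has been cold for writes
  # for this long.
  compact-full-write-cold-duration = "1h"
```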
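And a sketch of the forced cold compaction check; the names and thresholds are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// WAL is an illustrative stand-in that tracks the last write time.
type WAL struct {
	lastWriteTime time.Time
}

// ShouldCompact forces a compaction if the WAL has been cold for writes
// longer than coldDuration, even when the usual file-count threshold
// hasn't been met.
func (l *WAL) ShouldCompact(coldDuration time.Duration, fileCount, minFileCount int) bool {
	if fileCount >= minFileCount {
		return true
	}
	return time.Since(l.lastWriteTime) > coldDuration
}

func main() {
	w := &WAL{lastWriteTime: time.Now().Add(-2 * time.Hour)}
	fmt.Println(w.ShouldCompact(time.Hour, 1, 5)) // true: cold for 2h
}
```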
If a tsm file was partially written, we were not able to read the
raw block data because we panicked or exited when reading the corrupted
index. This allows us to read the raw blocks when we can.
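A sketch of the fallback behavior, with hypothetical readIndex and scanRawBlocks helpers standing in for the real index and block readers:

```go
package main

import (
	"errors"
	"fmt"
)

var errCorruptIndex = errors.New("corrupt index")

// readIndex is a hypothetical stand-in; here it pretends the index of a
// partially written file is unreadable.
func readIndex(path string) ([]string, error) {
	return nil, errCorruptIndex
}

// scanRawBlocks is a hypothetical stand-in that walks the data section
// block by block without consulting the index.
func scanRawBlocks(path string) ([]string, error) {
	return []string{"block 0", "block 1"}, nil
}

// dumpBlocks tries the index first, but falls back to a raw block scan
// instead of panicking when the index is corrupt.
func dumpBlocks(path string) ([]string, error) {
	blocks, err := readIndex(path)
	if err == nil {
		return blocks, nil
	}
	fmt.Printf("index unreadable (%v); falling back to raw block scan\n", err)
	return scanRawBlocks(path)
}

func main() {
	blocks, err := dumpBlocks("000001-01.tsm")
	fmt.Println(blocks, err)
}
```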
Avoids a panic if a series ID exists in the tsm file but not in the IDs file.
Also handles the case where we don't have an ids file and just a tsm file.
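A minimal sketch of the defensive lookup, with illustrative types; a nil map stands in for the missing ids file:

```go
package main

import "fmt"

// keyForID is an illustrative helper: look up a series key by ID,
// tolerating both a missing entry and a missing ids file (nil map),
// instead of panicking.
func keyForID(ids map[uint64]string, id uint64) (string, bool) {
	if ids == nil {
		return "", false // no ids file at all
	}
	key, ok := ids[id]
	return key, ok
}

func main() {
	ids := map[uint64]string{1: "cpu,host=a"}
	if key, ok := keyForID(ids, 2); ok {
		fmt.Println(key)
	} else {
		fmt.Println("series 2: in tsm file but not in ids file, printing raw ID")
	}
}
```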
This will read a tsm file and dump index, block and compression level info from the file.
It reads the file directly, as opposed to reading it through the tsm engine, which should
help with debugging and troubleshooting data file issues.
The implementation is not pretty but the output is very useful. In the future, we can
add data extraction, recovery and verification functionality if needed.
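A sketch of reading the file directly rather than through the engine. The layout assumed here (5-byte header, data blocks, index, and an 8-byte big-endian footer holding the index offset) is an assumption about the tsm1 format, not something this commit guarantees:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// inspect prints where the index starts and how large it is, using only
// the assumed file layout: header, blocks, index, 8-byte footer.
func inspect(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	stat, err := f.Stat()
	if err != nil {
		return err
	}

	// Read the footer: the offset where the index begins.
	footer := make([]byte, 8)
	if _, err := f.ReadAt(footer, stat.Size()-8); err != nil {
		return err
	}
	indexOff := binary.BigEndian.Uint64(footer)

	fmt.Printf("file size:   %d bytes\n", stat.Size())
	fmt.Printf("index start: %d\n", indexOff)
	fmt.Printf("index size:  %d bytes\n", uint64(stat.Size()-8)-indexOff)
	return nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: tsm-inspect <file.tsm>")
		os.Exit(1)
	}
	if err := inspect(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```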