The full compaction planner could return a plan that included only
one generation. If this happened, a full compaction would run on that
generation, producing just one generation again, and the planner would
then repeat the same plan indefinitely.
This could happen if there were two generations that were both over
the max TSM file size and the second one happened to be in level 3 or
lower.
When this situation occurs, one CPU is pegged running a full compaction
continuously and the disks become very busy rewriting the same files
over and over again. This can eventually cause disk and CPU
saturation if it occurs with more than one shard.
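A minimal sketch of the kind of guard that breaks this loop, using a
hypothetical tsmGeneration type and planFull helper rather than the
actual planner API:

    package tsm

    // tsmGeneration is a hypothetical stand-in for the planner's
    // grouping of TSM files produced by one compaction run.
    type tsmGeneration struct {
        level int   // compaction level of the generation
        size  int64 // total size in bytes across its files
    }

    // planFull returns no plan when fewer than two generations are
    // eligible: fully compacting a single generation only rewrites
    // the same data, so planning it again would loop forever.
    func planFull(gens []tsmGeneration) []tsmGeneration {
        if len(gens) < 2 {
            return nil
        }
        return gens
    }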
Fixes #7074
The logic for determining whether a series key was already in the
set of TSM series was too restrictive. It allowed only the first
field of a series to be added, leaving out all the remaining fields.
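A minimal sketch of the corrected membership check; collectKeys is a
hypothetical helper and the key layout is illustrative:

    package tsm

    // collectKeys records every full TSM key (series key plus field),
    // not the series key alone. Keying the set on the series portion
    // only was the bug: just the first field per series survived.
    func collectKeys(keys []string) map[string]struct{} {
        seen := make(map[string]struct{}, len(keys))
        for _, k := range keys {
            // k is the complete key including the field name, so every
            // field of a series gets its own entry in the set.
            seen[k] = struct{}{}
        }
        return seen
    }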
Negative timestamps are now supported. We also now refuse two
nanosecond values at the edge of the minimum time window. The first is
rejected because MinInt64 is needed for some internal comparisons in
the TSM engine, and subtracting one from the minimum time was causing
an underflow. The second is reserved as a default minimum that nobody
can write to, so we can implicitly rewrite the timestamp on aggregate
queries while still honoring an explicit timestamp if the user provides
one. Without those values being distinct, we cannot tell whether the
minimum time was user-provided or implicit.
If the default minimum time is used with an aggregate query, we rewrite
the time to be the epoch for backwards compatibility since we believe
that's more important than supporting that extra nanosecond.
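Sketched in Go, assuming the two reserved values sit at the very bottom
of the int64 range as described; the names are illustrative:

    package tsm

    import "math"

    // math.MinInt64 is reserved for internal engine comparisons
    // (subtracting one from the minimum time must not underflow), and
    // math.MinInt64 + 1 is the implicit default minimum that nobody
    // can write to, so the smallest accepted timestamp becomes:
    const minNanoTime = int64(math.MinInt64) + 2

    // validNanoTime reports whether a user-supplied timestamp is
    // inside the writable window.
    func validNanoTime(ts int64) bool {
        return ts >= minNanoTime
    }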
The path info only contained the file name, which caused tombstone
files to not be removed if there were queries running against
a file that was compacted.
This is now consistent with TSMReader.Path, which returns the
full path info.
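A small sketch of the idea; tombstonePath is a hypothetical helper
mirroring TSMReader.Path, and the .tombstone suffix is illustrative:

    package tsm

    import "path/filepath"

    // tombstonePath returns the full path of a tombstone file rather
    // than just its base name, so the purger can remove it even while
    // a query still holds the compacted TSM file open.
    func tombstonePath(dir, name string) string {
        return filepath.Join(dir, name+".tombstone")
    }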
If they were left around, re-enabling them could cause
future compactions to fail continuously. A restart of the
server would clean them up correctly, though.
If there were multiple TSM files and a delete/drop was run,
we would write each deleted series to the tombstone file once
per TSM file. This occurred because FileStore.WalkKeys walks
every key in every TSM file, which can return duplicate keys.
This issue caused tombstone files to be much larger than they should be
and also caused large memory usage during the delete.
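A minimal sketch of deduplicating keys before they are written to the
tombstone file; walkKeys stands in for FileStore.WalkKeys and its
shape here is assumed:

    package tsm

    // uniqueKeys collects each key once even though the walk visits
    // every TSM file and can yield the same key multiple times.
    func uniqueKeys(walkKeys func(fn func(key string))) []string {
        seen := make(map[string]struct{})
        var out []string
        walkKeys(func(key string) {
            if _, ok := seen[key]; ok {
                return // already recorded from another TSM file
            }
            seen[key] = struct{}{}
            out = append(out, key)
        })
        return out
    }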
This bounds memory usage when reloading a TSM file's tombstones
so that the heap does not grow exceedingly fast and stay large
after the deletes are applied.
Tombstones were read fully into memory at startup, which could consume
a lot of RAM and OOM the process if there were a lot of deleted
series and many TSM files.
This now walks the tombstone file and applies each tombstone
iteratively, which uses significantly less RAM. This may be slightly
slower in the general case, but should scale better.
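A sketch of the iterative approach; the line-per-entry format is
illustrative, not the real on-disk tombstone encoding:

    package tsm

    import (
        "bufio"
        "os"
    )

    // walkTombstones applies tombstones one entry at a time instead of
    // decoding the whole file into memory first, so the peak heap is
    // bounded by a single entry rather than the full tombstone set.
    func walkTombstones(path string, apply func(key string) error) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if err := apply(sc.Text()); err != nil {
                return err
            }
        }
        return sc.Err()
    }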
Normally, compactions do not conflict on the files they are compacting.
If the full cold threshold is set very low, it can cause conflicts where
two compactions compact the same files. The full compaction was the
only place this could happen, as its planning is greedy.
To make this safer for concurrent execution, the planner now tracks
which files are currently being compacted and prevents any new
compaction from starting if its file set overlaps.
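A minimal sketch of that bookkeeping under assumed names; the real
planner's tracking differs in detail:

    package tsm

    import "sync"

    // inProgress tracks files owned by running compactions.
    type inProgress struct {
        mu    sync.Mutex
        files map[string]struct{}
    }

    func newInProgress() *inProgress {
        return &inProgress{files: make(map[string]struct{})}
    }

    // tryAcquire reserves every file in a plan, or reserves nothing if
    // any file is already being compacted, rejecting overlapping plans.
    func (p *inProgress) tryAcquire(files []string) bool {
        p.mu.Lock()
        defer p.mu.Unlock()
        for _, f := range files {
            if _, ok := p.files[f]; ok {
                return false // overlap: another compaction owns this file
            }
        }
        for _, f := range files {
            p.files[f] = struct{}{}
        }
        return true
    }

    // release frees the files once the compaction finishes or errors.
    func (p *inProgress) release(files []string) {
        p.mu.Lock()
        defer p.mu.Unlock()
        for _, f := range files {
            delete(p.files, f)
        }
    }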
Fixes #6595
If a query is interrupted via kill query, the TSM files managed
by the file store purger would never get removed because
KeyCursor.Close was never called.
KeyCursor.Close is now always called.
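A sketch of the pattern, with keyCursor as a minimal stand-in for the
engine's KeyCursor:

    package tsm

    import "errors"

    var errQueryInterrupted = errors.New("query interrupted")

    // keyCursor is a minimal stand-in for the engine's KeyCursor.
    type keyCursor interface {
        Next() bool
        Close()
    }

    // readKey defers Close so it runs on every exit path, including a
    // kill query, letting the purger remove compacted files.
    func readKey(cur keyCursor, interrupt <-chan struct{}) error {
        defer cur.Close() // always called, even when interrupted
        for cur.Next() {
            select {
            case <-interrupt:
                return errQueryInterrupted
            default:
            }
        }
        return nil
    }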
If a query was running against a file being compacted, we closed the
file and the query would end wherever it had read up to. This could
result in queries that randomly lost data; running them again would
return the full results.
We now use a reference-counting approach and move the in-use files out
of the way in the file store, allowing queries to complete against
the old TSM files. The new files are then installed, and new queries
will use them.
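A minimal sketch of the reference-counting idea; the type and function
names are assumptions, not the actual FileStore API:

    package tsm

    import (
        "os"
        "sync/atomic"
    )

    // tsmFile pairs a file with the number of queries reading it.
    type tsmFile struct {
        path string
        refs int64
    }

    // Ref is taken by a query before reading; Unref when it finishes.
    func (f *tsmFile) Ref()   { atomic.AddInt64(&f.refs, 1) }
    func (f *tsmFile) Unref() { atomic.AddInt64(&f.refs, -1) }

    // purge deletes a replaced file only once no query still holds a
    // reference; otherwise the purger keeps the old file and retries.
    func purge(f *tsmFile) bool {
        if atomic.LoadInt64(&f.refs) > 0 {
            return false // in use: let running queries finish first
        }
        _ = os.Remove(f.path)
        return true
    }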
Fixes #5501
benchmark                        old ns/op     new ns/op     delta
BenchmarkBooleanDecoder_2048-4   9954          7846          -21.18%

benchmark                        old allocs    new allocs    delta
BenchmarkBooleanDecoder_2048-4   0             0             +0.00%

benchmark                        old bytes     new bytes     delta
BenchmarkBooleanDecoder_2048-4   0             0             +0.00%