If a delete was run for a time that does not exist, we would not
remove the series key from the slice of series to be removed from the
index.
This could be triggered by running something like "delete from cpu where
time = 0": if there was no data at time 0, the series would still
be removed from the index.
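As a rough illustration of the intended behavior (the names and types here are hypothetical, not the engine's actual API), a series should only be queued for removal from the index when the delete actually removed its remaining data:

```go
package main

import "fmt"

// deleteSeriesRange is an illustrative sketch: values are just timestamps,
// and a key is only added to the list of series to drop from the index when
// the delete removed something and nothing remains.
func deleteSeriesRange(data map[string][]int64, keys []string, min, max int64) []string {
	var removeFromIndex []string
	for _, key := range keys {
		values, ok := data[key]
		if !ok {
			continue
		}
		kept := values[:0]
		deleted := 0
		for _, ts := range values {
			if ts >= min && ts <= max {
				deleted++
				continue
			}
			kept = append(kept, ts)
		}
		data[key] = kept
		// Previously the key could remain queued for index removal even when
		// deleted == 0, e.g. "delete from cpu where time = 0" with no point
		// at time 0. Only queue it if data was removed and none is left.
		if deleted > 0 && len(kept) == 0 {
			removeFromIndex = append(removeFromIndex, key)
		}
	}
	return removeFromIndex
}

func main() {
	data := map[string][]int64{"cpu,host=a#!~#value": {10, 20}}
	// Delete at time 0, where no data exists: nothing should be queued.
	fmt.Println(deleteSeriesRange(data, []string{"cpu,host=a#!~#value"}, 0, 0)) // []
}
```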
If there were many individual deletes to a series that ended up
deleting every value in the block, and the tombstone timestamps
were not contiguous, it was possible for the TSMKeyIterator to
incorrectly return false from Next. This caused the compaction to
drop any remaining data in the file.
Normally, if all the data is deleted via tombstones, we remove the
whole key from the TSM index. In this case, we're not able to determine
that the key is fully deleted until the block is decoded and the tombstones
are applied.
This changes the TSMKeyIterator to detect this condition and continue
to the next key instead of aborting.
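A minimal sketch of that behavior, using illustrative types rather than the real TSMKeyIterator: a key whose block decodes to no values after tombstones are applied is skipped, instead of being treated as the end of iteration.

```go
package main

import "fmt"

// keyIterator is a stand-in for the real iterator. blocks maps each key to
// its decoded, tombstone-filtered values; an empty slice means every value
// in the block was tombstoned.
type keyIterator struct {
	keys   []string
	blocks map[string][]int64
	pos    int
	values []int64
}

// Next advances to the next key that still has data. Previously a fully
// tombstoned key could cause Next to return false and the compaction to drop
// the remaining keys; now such a key is skipped and iteration continues.
func (k *keyIterator) Next() bool {
	for ; k.pos < len(k.keys); k.pos++ {
		vals := k.blocks[k.keys[k.pos]]
		if len(vals) == 0 {
			continue // fully deleted via tombstones: move on, don't abort
		}
		k.values = vals
		k.pos++
		return true
	}
	return false
}

func main() {
	it := &keyIterator{
		keys: []string{"cpu", "disk", "mem"},
		blocks: map[string][]int64{
			"cpu":  {1, 2},
			"disk": {}, // every value tombstoned
			"mem":  {3},
		},
	}
	for it.Next() {
		fmt.Println(it.values) // [1 2] then [3]; "disk" is skipped, not fatal
	}
}
```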
The loop to check whether a series still exists in a TSM file was wrong
in two ways: 1) it exited early after one iteration and 2) it had an
off-by-one error that caused the wrong series to be marked as existing.
This fixes both bugs, either of which could cause the index to become
inconsistent with the data stored on disk.
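For illustration only (the real code walks the TSM index; the names here are hypothetical), the corrected logic amounts to checking every candidate series rather than stopping after one, and marking the matching index rather than a neighboring one:

```go
package main

import (
	"bytes"
	"fmt"
)

// markExisting records which of the candidate series keys are still present
// in the TSM file's keys. The buggy loop returned after the first iteration
// and indexed the wrong element of exists.
func markExisting(seriesKeys, tsmKeys [][]byte, exists []bool) {
	for _, tk := range tsmKeys {
		// Walk every candidate; do not break after the first comparison.
		for i, sk := range seriesKeys {
			if bytes.Equal(sk, tk) {
				exists[i] = true // mark index i, not i+1 or i-1
			}
		}
	}
}

func main() {
	series := [][]byte{[]byte("cpu,host=a"), []byte("cpu,host=b")}
	tsm := [][]byte{[]byte("cpu,host=b")}
	exists := make([]bool, len(series))
	markExisting(series, tsm, exists)
	fmt.Println(exists) // [false true]
}
```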
This fixes a regression in the Cache introduced in ca40c1ad3c where
not all the values in the cache entry would be removed. Previously,
calling Exclude did not require the values to be sorted. The change
in ca40c1ad3c relies on the values being sorted, so it was possible for
it to find the wrong indexes when calling FindRange and leave behind
data that should have been deleted.
Fixes #9161
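A minimal sketch of the idea, with illustrative types rather than the real cache entry or FindRange: sort the values first so the binary searches pick the correct bounds.

```go
package main

import (
	"fmt"
	"sort"
)

// entry is a stand-in for a cache entry; values are timestamps that may have
// been appended out of order.
type entry struct {
	values []int64
}

// exclude removes all values with timestamps in [min, max].
func (e *entry) exclude(min, max int64) {
	// The regression relied on sorted values without ensuring the sort;
	// with unsorted values the searches below can pick the wrong bounds
	// and leave behind data that should be deleted.
	sort.Slice(e.values, func(i, j int) bool { return e.values[i] < e.values[j] })

	lo := sort.Search(len(e.values), func(i int) bool { return e.values[i] >= min })
	hi := sort.Search(len(e.values), func(i int) bool { return e.values[i] > max })
	e.values = append(e.values[:lo], e.values[hi:]...)
}

func main() {
	e := &entry{values: []int64{30, 10, 20}}
	e.exclude(10, 20)
	fmt.Println(e.values) // [30]
}
```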
When refactoring the query engine, I thought calling
`count(distinct(value))` multiple times was disallowed, so the
refactor prevented that pattern.
It turns out this pattern is allowed: since the distinct call is
nested, it is aggregated anyway and can be combined with other
aggregates.
This removes the erroneously placed restriction.
This commit firstly ensures that a shard's size on disk is accurately
reported when using the tsi1 index, by including the on-disk size of the
tsi1 index in the calculation.
Secondly, this commit adds support for shard streaming/copying when using
the tsi1 index. Prior to this change, a tsi1 index would not be correctly
restored when streaming shards.
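A hedged sketch of the first part, with illustrative paths and helper names rather than the engine's API: the shard's reported size sums both its data files and its on-disk tsi1 index files.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// shardDiskSize walks the shard's data directory and its index directory so
// that an on-disk tsi1 index contributes to the reported size.
func shardDiskSize(dataDir, indexDir string) (int64, error) {
	var total int64
	for _, dir := range []string{dataDir, indexDir} {
		err := filepath.Walk(dir, func(_ string, info os.FileInfo, err error) error {
			if err != nil {
				return err
			}
			if !info.IsDir() {
				total += info.Size()
			}
			return nil
		})
		if err != nil {
			return 0, err
		}
	}
	return total, nil
}

func main() {
	// Illustrative paths; a real shard would supply its data and index dirs.
	size, err := shardDiskSize("data/db/rp/1", "data/db/rp/1/index")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("shard size on disk:", size)
}
```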
The Cache.ApplyEntryFn iterates keys according to the partitions
and hashed values. This can cause the deleteKeys slice to contain
unsorted keys when deleting series. The code later performs a binary
search on this slice, which can fail to detect that the series should
still exist, and the series is then removed from the index even though
it still has data.
Fixes #9116
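A minimal sketch of the fix (names are illustrative, not the actual Cache API): keys gathered by a partitioned, hash-ordered walk arrive in no particular order, so they must be sorted before any binary search over them.

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

func main() {
	// Keys as they might come back from a partitioned cache walk: unsorted.
	deleteKeys := [][]byte{
		[]byte("mem,host=b#!~#used"),
		[]byte("cpu,host=a#!~#value"),
	}

	// Without this sort, the binary search below could miss a key that is
	// present, and the series would be dropped from the index despite
	// still having data.
	sort.Slice(deleteKeys, func(i, j int) bool {
		return bytes.Compare(deleteKeys[i], deleteKeys[j]) < 0
	})

	target := []byte("mem,host=b#!~#used")
	i := sort.Search(len(deleteKeys), func(i int) bool {
		return bytes.Compare(deleteKeys[i], target) >= 0
	})
	found := i < len(deleteKeys) && bytes.Equal(deleteKeys[i], target)
	fmt.Println("found:", found) // found: true
}
```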
If the close happens when next is being called, it can result in a race
condition where the current iterator gets set to nil after the initial
check.
This also fixes the finalizer so it runs the close method in a goroutine
instead of running it by itself. This is because all finalizers run on
the same goroutine so a close that takes a long time can cause a backup
for all finalizers. This also removes the redundant call to
`runtime.SetFinalizer` from within the finalizer itself, since the
finalizer has already been cleared by the time it runs.
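A sketch of the finalizer part of this change, using a hypothetical iterator type: the finalizer launches Close in its own goroutine so a slow close cannot back up the shared finalizer goroutine, and it does not re-register itself.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// iterator is an illustrative type, not the real query iterator.
type iterator struct{ name string }

func (it *iterator) Close() error {
	time.Sleep(10 * time.Millisecond) // pretend close is slow
	fmt.Println("closed", it.name)
	return nil
}

func newIterator(name string) *iterator {
	it := &iterator{name: name}
	runtime.SetFinalizer(it, func(it *iterator) {
		// All finalizers share one goroutine, so run the slow close in a
		// new goroutine. No SetFinalizer(it, nil) is needed here: the
		// finalizer has already been cleared when it runs.
		go it.Close()
	})
	return it
}

func main() {
	newIterator("it1")                 // immediately unreachable
	runtime.GC()                       // queue the finalizer (not guaranteed)
	time.Sleep(100 * time.Millisecond) // give it time to run
}
```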