The InUse call on TSMFiles is inherently racy in the presence of
Ref calls outside of the file store mutex. In addition, we return
some TSMFiles to callers without them being Ref'd, which might allow
them to be closed out from under the caller. In practice that is
probably impossible: the only thing that gets a handle externally is
compaction, which enforces that only one handle exists at a time, so
a file is deleted only once, after the compaction is done with it.
But none of that is obvious or enforced.
Instead, always return TSMFiles with Ref already called under the
read lock, and require that no one else calls Ref. That way, a file
cannot transition to referenced if the InUse call returns false under
the write lock.
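A rough sketch of the resulting invariant, with illustrative method
bodies (not the actual implementation):

    import "sync"

    // TSMFile is reduced here to the methods that matter for the race.
    type TSMFile interface {
        Ref()
        Unref()
        InUse() bool
        Close() error
    }

    type FileStore struct {
        mu    sync.RWMutex
        dir   string
        files []TSMFile
    }

    // Acquire returns the current files with Ref already called under
    // the read lock. Callers may only Unref; they must never call Ref.
    func (s *FileStore) Acquire() []TSMFile {
        s.mu.RLock()
        defer s.mu.RUnlock()
        out := make([]TSMFile, len(s.files))
        copy(out, s.files)
        for _, f := range out {
            f.Ref()
        }
        return out
    }

    // Because Ref is only ever called with the lock held, a file that
    // InUse reports as unreferenced under the write lock cannot become
    // referenced concurrently, so it is safe to close it here.
    func (s *FileStore) closeUnused() {
        s.mu.Lock()
        defer s.mu.Unlock()
        for _, f := range s.files {
            if !f.InUse() {
                f.Close()
            }
        }
    }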
The CreateSnapshot method was racy in a number of ways in the presence
of multiple calls or compactions: it did not take references to the
TSMFiles, the temporary directory it created could have been shared
with concurrent CreateSnapshot calls, and the files slice could have
been concurrently mutated during a compaction.
Instead, under the write lock, make a local copy of the state needed
for the snapshot, including the Ref calls (a write lock is implicitly
a read lock). Then no lock at all is needed afterward.
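Continuing the sketch above (hypothetical bodies; os.MkdirTemp gives
each call its own directory, and s.dir stands in for wherever the
store keeps its data):

    func (s *FileStore) CreateSnapshot() (string, error) {
        // Copy everything needed while holding the write lock. A write
        // lock is implicitly a read lock, so calling Ref here is safe.
        s.mu.Lock()
        files := make([]TSMFile, len(s.files))
        copy(files, s.files)
        for _, f := range files {
            f.Ref()
        }
        s.mu.Unlock()
        defer func() {
            for _, f := range files {
                f.Unref()
            }
        }()

        // A unique temporary directory per call, so concurrent
        // snapshots no longer collide.
        dir, err := os.MkdirTemp(s.dir, "snapshot")
        if err != nil {
            return "", err
        }

        // ... link or copy the referenced files into dir ...
        return dir, nil
    }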
Add some comments to explain these issues at the call sites of InUse,
and document that the Files method, which returns the slice
unprotected, is only for tests.
- reduce allocations by making leaf a value type with a bool
- make longestPrefix inlineable and have no bounds checks
- delete any code for functions we don't plan to use
- operate on []byte and only copy when necessary
- inline calls to sort.Search to avoid allocations and indirections (see the sketch after this list)
- insert directly in the correct location for addEdge
- reduce allocations during copying with a buffer helper
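For example, the sort.Search inlining looks roughly like this
(hypothetical node/edge types): the closure passed to sort.Search
costs an allocation and an indirect call per comparison, while the
open-coded loop avoids both.

    type edge struct {
        label byte
        node  *node
    }

    type node struct {
        edges []edge // kept sorted by label
    }

    // searchEdge returns the first index whose label is >= label, or
    // len(n.edges) if there is none. It is equivalent to
    //   sort.Search(len(n.edges), func(i int) bool {
    //       return n.edges[i].label >= label
    //   })
    // without the closure or the indirect calls.
    func (n *node) searchEdge(label byte) int {
        lo, hi := 0, len(n.edges)
        for lo < hi {
            mid := int(uint(lo+hi) >> 1) // same midpoint as sort.Search
            if n.edges[mid].label < label {
                lo = mid + 1
            } else {
                hi = mid
            }
        }
        return lo
    }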
results:
name              old time/op    new time/op    delta
Tree_Insert-8     1.10ms ± 4%    0.73ms ± 4%    -33.54%  (p=0.000 n=10+10)
Tree_InsertNew-8  3.18ms ± 2%    1.91ms ± 6%    -39.90%  (p=0.000 n=10+10)

name              old speed      new speed       delta
Tree_Insert-8     9.12MB/s ± 4%  13.72MB/s ± 4%  +50.46%  (p=0.000 n=10+10)
Tree_InsertNew-8  3.15MB/s ± 2%   5.24MB/s ± 6%  +66.42%  (p=0.000 n=10+10)

name              old alloc/op   new alloc/op   delta
Tree_InsertNew-8  1.62MB ± 0%    1.60MB ± 0%    -1.28%  (p=0.000 n=10+9)

name              old allocs/op  new allocs/op  delta
Tree_InsertNew-8  35.0k ± 0%     15.0k ± 0%     -57.04%  (p=0.000 n=10+10)
MB/s here counts 1 byte per key inserted, so it is really millions
of keys inserted per second.
This is the start of per-series validation in the Engine write path.
It uses an in-memory radix tree to reduce memory usage; the tree is
rebuilt on demand the first time a series is written.
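A self-contained sketch of the control flow (hypothetical names, and
a map standing in for the radix tree so the sketch runs as-is):

    import (
        "errors"
        "sync"
    )

    type seriesValidator struct {
        mu   sync.Mutex
        seen map[string]struct{} // the real code uses the radix tree
    }

    func (v *seriesValidator) Validate(key []byte) error {
        v.mu.Lock()
        defer v.mu.Unlock()
        if v.seen == nil {
            // Rebuilt on demand the first time a series is written.
            v.seen = make(map[string]struct{})
        }
        if _, ok := v.seen[string(key)]; ok {
            return nil // this series has already been validated
        }
        if len(key) == 0 {
            return errors.New("invalid series key") // stand-in check
        }
        v.seen[string(key)] = struct{}{}
        return nil
    }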
Does some basic sanity checks. It's hard to be more exhaustive without
either taking a huge amount of time or being non-deterministic, but at
least this makes sure we barf in some cases.
No appreciable changes in benchmark results. This function appears to
account for less than 4% of CPU time in the write workloads, at least
in these benchmarks.
At some point, the Inmem field on the engine options became required,
but the benchmarks weren't updated.
This also uses filepath everywhere when manipulating file paths.
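For example (illustrative paths; filepath is path/filepath):

    root := "/var/lib/influxdb" // illustrative
    dir := filepath.Join(root, "data", "db0")     // not root + "/data/db0"
    wal := filepath.Join(dir, "wal", "00001.wal") // portable separators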
* Filters allow specific combinations of database, retention policy,
and shard group to be opened. This was added to reduce the start-up
time of the export tool and to limit its memory usage.
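An illustrative shape for such a filter (hypothetical names, not the
tool's actual flags or types):

    // shardFilter reports whether a shard group should be opened.
    type shardFilter func(db, rp string, shardGroupID uint64) bool

    // matchDB opens only shard groups that belong to one database.
    func matchDB(name string) shardFilter {
        return func(db, rp string, shardGroupID uint64) bool {
            return db == name
        }
    }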
* Check for errors from binary.Uvarint when reading TSI logs
* Also check that len(parsed) == len(input)
* Wrap binary.Uvarint
* Make uvarint() more generally useful/used
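A minimal sketch of such a wrapper; binary.Uvarint signals failure
through its second return value, which is easy to forget to check:

    import (
        "encoding/binary"
        "errors"
        "io"
    )

    // uvarint decodes a uvarint from b, returning the value and the
    // number of bytes read, converting binary.Uvarint's sentinel
    // return values into real errors.
    func uvarint(b []byte) (uint64, int, error) {
        v, n := binary.Uvarint(b)
        if n == 0 {
            return 0, 0, io.ErrShortBuffer // b is too small
        }
        if n < 0 {
            return 0, 0, errors.New("uvarint: value overflows a uint64")
        }
        return v, n, nil
    }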