This removes the containsSeries func, which created a map sized to the
slice of keys passed in. That approach doesn't scale well to high
cardinalities and creates a lot of garbage.
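For context, the removed pattern looked roughly like the following
sketch (names and details are illustrative, not the exact original code):

```go
// Sketch of the removed approach: materialize every key into a map
// just to answer membership checks. With millions of series keys this
// allocates a large map plus a string copy per key on every call.
func containsSeries(keys [][]byte) map[string]bool {
	m := make(map[string]bool, len(keys))
	for _, k := range keys {
		m[string(k)] = false
	}
	return m
}
```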
The query language min and max times are slightly different from the
values used in the engine. Mapping between them allows faster code
paths to be used when the whole time range is deleted.
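As a rough illustration of the idea, with hypothetical constant names
standing in for the real sentinels:

```go
package engine

import "math"

// Hypothetical sentinels used by the query language; the real
// constants differ from the engine's extremes by a small offset.
const (
	queryMinTime int64 = math.MinInt64 + 2
	queryMaxTime int64 = math.MaxInt64 - 1
)

// normalizeRange widens the query language's sentinels to the engine's
// native extremes, so a whole-range delete can be detected with a
// cheap equality check and routed to the fast path.
func normalizeRange(min, max int64) (int64, int64) {
	if min == queryMinTime {
		min = math.MinInt64
	}
	if max == queryMaxTime {
		max = math.MaxInt64
	}
	return min, max
}
```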
This is a version of DeleteRange that takes a func predicate to
determine whether a series key should be deleted. This avoids the
large slice allocations at higher cardinalities.
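One possible shape for such an API, sketched with illustrative names
rather than the exact signature:

```go
// Engine is a stand-in for the storage engine; the real type carries
// far more state than a key slice.
type Engine struct {
	seriesKeys [][]byte // placeholder for the engine's key index
}

// DeleteRangeWith removes points in [min, max] for every series key
// the predicate accepts, without materializing a filtered key slice.
func (e *Engine) DeleteRangeWith(pred func(key []byte) bool, min, max int64) error {
	for _, key := range e.seriesKeys {
		if !pred(key) {
			continue // leave series the caller doesn't want deleted
		}
		if err := e.deleteSeriesRange(key, min, max); err != nil {
			return err
		}
	}
	return nil
}

// deleteSeriesRange is a hypothetical helper; the real implementation
// removes data from TSM files and the WAL.
func (e *Engine) deleteSeriesRange(key []byte, min, max int64) error {
	return nil
}
```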
This adds a new v4 tombstone format that extends the v3 format by
allowing multiple batches of tombstones to be written without having
to re-read all the existing tombstones. It uses gzip multi-stream
support to append multiple v3-style payloads together, forming the v4
format.
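A minimal sketch of the append-and-read flow, assuming hypothetical
function names; Go's compress/gzip reads concatenated streams
transparently, which is what makes the append-only layout work:

```go
package tombstone

import (
	"compress/gzip"
	"io"
	"os"
)

// appendBatch appends one new gzip stream to the tombstone file. The
// existing streams are never re-read or rewritten, which is the point
// of the v4 layout.
func appendBatch(path string, entries []byte) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	gz := gzip.NewWriter(f)
	if _, err := gz.Write(entries); err != nil {
		return err
	}
	return gz.Close() // ends this stream; the file is now a valid multi-stream gzip
}

// readAll decodes every batch in the file. compress/gzip decodes
// concatenated streams by default, so the caller sees one logical stream.
func readAll(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		return nil, err
	}
	defer gz.Close()
	return io.ReadAll(gz)
}
```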
The previous SHA was taken from a revision on a devel branch that I
thought would remain in the tree after it was merged. That revision
was rebased away and the logger API was changed. This updates the
usage of the logger and adds a simple package for constructing the
base logger.
The 1.0 version of zap changed the format of the default console
logger, so this change adopts the new logger instead of attempting to
retain backwards compatibility with the old format.
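A minimal sketch of such a base-logger package, assuming the post-1.0
zap API; the actual package may configure encoders and levels
differently:

```go
package logger

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// New builds the base console logger on top of zap 1.0's console
// encoder, rather than reimplementing the pre-1.0 output format.
func New(w zapcore.WriteSyncer) *zap.Logger {
	cfg := zap.NewDevelopmentEncoderConfig()
	core := zapcore.NewCore(zapcore.NewConsoleEncoder(cfg), w, zapcore.InfoLevel)
	return zap.New(core)
}
```

A caller can pass `os.Stderr` directly, since `*os.File` already
satisfies `zapcore.WriteSyncer`.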
The integration test was intended to use a temporary directory for the
files it created, but the environment variable was misnamed:
`INFLUXDB_WAL_DIR` should be `INFLUXDB_DATA_WAL_DIR`.
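A hedged sketch of the corrected wiring in a test (the test name and
structure here are illustrative):

```go
package tests

import "testing"

// Illustrative test; the real integration test wires up a full server.
func TestWALUsesTempDir(t *testing.T) {
	dir := t.TempDir()
	// Set the correctly spelled variable; the misspelled
	// INFLUXDB_WAL_DIR was ignored, so WAL files escaped the temp dir.
	t.Setenv("INFLUXDB_DATA_WAL_DIR", dir)
	// ... launch the server and assert its WAL files land under dir ...
}
```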
There was a very small window in which it was possible to deadlock
while closing the Store. When closing, the Store waited on its
WaitGroup while holding a `Lock`. Naturally, every other goroutine had
to be in a position to call `Done` on the `WaitGroup` before the
`Wait` call in `Close` would return.

For the goroutine running the `monitorShards` method, that was not
always possible. Specifically, if the `monitorShards` goroutine jumped
into the `t.C` case just as the `Close()` goroutine was acquiring the
`Lock`, then the `monitorShards` goroutine would be unable to acquire
the `RLock`. Since it would also be unable to progress around its loop
to reach the `s.closing` case, it could never call `Done` on the
`WaitGroup`, and we would have a deadlock.
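A distilled sketch of the interleaving, with the types trimmed down
from the description above; the real Store carries much more state:

```go
package store

import (
	"sync"
	"time"
)

type Store struct {
	mu      sync.RWMutex
	wg      sync.WaitGroup
	closing chan struct{}
}

func (s *Store) monitorShards() {
	defer s.wg.Done()
	t := time.NewTicker(10 * time.Second)
	defer t.Stop()
	for {
		select {
		case <-s.closing:
			return
		case <-t.C:
			// If Close() grabs mu.Lock() right after this case is
			// chosen, this RLock blocks forever and Done is never called.
			s.mu.RLock()
			// ... inspect shards ...
			s.mu.RUnlock()
		}
	}
}

func (s *Store) Close() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	close(s.closing)
	s.wg.Wait() // waits while still holding the lock: the deadlock window
	return nil
}
```

One conventional fix is to signal `closing` and release the lock
before calling `Wait`, so a goroutine blocked on the `RLock` can
proceed, observe `closing`, and call `Done`.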
This was identified during an AppVeyor CI run, though I was unable to
reproduce this locally.