Unify logic around compaction execution in a single place.
Also report the error stats that we track. Previously they were not
emitted in the stats output.
If a delete took a long time to process while writes to the
shard were occurring, it was possible for the cache to fill up
and for writes to be rejected. This occurred because we disabled
all compactions while writing the tombstone file, to prevent deleted
data from reappearing after a compaction completed.
Instead, we now disable only the level compactions and allow snapshot
compactions to continue. Snapshots already handle deleted data
via the cache and WAL.
Fixes #7161
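Roughly, the intended sequencing looks like the sketch below. The engine type and helper names are hypothetical stand-ins, not the actual tsm1 API; only the ordering matters.

```go
package tsm1sketch

// Engine is a stand-in for the TSM engine; the method names below are
// hypothetical and only illustrate the sequencing described above.
type Engine struct{ /* cache, wal, compactor elided */ }

func (e *Engine) disableLevelCompactions()            {}
func (e *Engine) enableLevelCompactions()             {}
func (e *Engine) writeTombstones(keys [][]byte) error { return nil }

// DeleteSeries pauses only level compactions while the tombstone file is
// written; snapshot compactions keep running, so the cache continues to
// drain to TSM files and writes are not rejected.
func (e *Engine) DeleteSeries(keys [][]byte) error {
	e.disableLevelCompactions()
	defer e.enableLevelCompactions()
	return e.writeTombstones(keys)
}
```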
Previously, calling derivative with a non-duration second argument was
allowed during parsing but would panic during execution due to a failed
type conversion. This change ensures the second argument is a duration
literal.
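A minimal sketch of the parse-time check, assuming the influxql AST types and the old in-repo import path; the validation function itself and its wiring are illustrative.

```go
package validatesketch

import (
	"fmt"

	"github.com/influxdata/influxdb/influxql"
)

// checkDerivative rejects a derivative call whose optional second argument
// is not a duration literal, so the statement fails at parse/validation time
// instead of panicking during execution.
func checkDerivative(c *influxql.Call) error {
	if len(c.Args) == 2 {
		if _, ok := c.Args[1].(*influxql.DurationLiteral); !ok {
			return fmt.Errorf("second argument to %s must be a duration, got %T", c.Name, c.Args[1])
		}
	}
	return nil
}
```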
If you pipe in a file to the `influx` CLI, it will not try to open the
interactive line reader, but instead just send the contents of the
entire file to the server.
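A minimal sketch of the usual way to detect piped input in Go; the actual `influx` CLI wiring around this check is omitted.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// stdin is interactive only when it is a character device; a pipe or
	// redirected file is not.
	fi, err := os.Stdin.Stat()
	if err == nil && fi.Mode()&os.ModeCharDevice == 0 {
		// Piped input: read it all and send it to the server in one go.
		data, err := ioutil.ReadAll(os.Stdin)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("would send %d bytes of statements to the server\n", len(data))
		return
	}
	// Terminal: start the interactive line reader as before.
	fmt.Println("interactive mode")
}
```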
Instead of assigning a boolean value of true to the filter expressions
when there was no meaningful expression, this drops a boolean expression
of true from the filter expressions so we don't have to perform a map
assignment. This reduces allocations and assignments when a
`WHERE` clause contains only tag comparisons and no field comparisons.
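An illustrative sketch of the optimization, not the actual code path: when the condition reduces to the literal `true` there is nothing left to filter on, so the map assignment is skipped entirely. The helper name is hypothetical; only the AST types come from the influxql package.

```go
package filtersketch

import "github.com/influxdata/influxdb/influxql"

// setFilter stores an expression for a series ID, but drops a bare `true`
// since it carries no filtering information.
func setFilter(filters map[uint64]influxql.Expr, id uint64, expr influxql.Expr) {
	if b, ok := expr.(*influxql.BooleanLiteral); ok && b.Val {
		delete(filters, id)
		return
	}
	filters[id] = expr
}
```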
The functionality works the same as wildcards, but you can now
specify a regular expression.
One limitation is that you can't specify whether you want to select
only fields or only tags. Since the regex can be adjusted to suit the
user's needs, I don't think this is currently an issue.
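For example, a query like the one below (the measurement and the regex are hypothetical) selects every field or tag whose name matches the pattern, just as a wildcard would select all of them:

```
SELECT /cpu/ FROM "system" WHERE time > now() - 1h
```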
stddev on a string field would always return an empty string, and a
standard deviation is meaningless for strings anyway. This removes that
functionality so string fields don't automatically get picked up when
using a wildcard.
This changes the behavior of the max-series-per-database and
max-values-per-tag limits to drop points that would exceed the limits
and allow the remaining points to be written. Previously, the whole
batch would fail and return a 500 error to the client.
This now writes the allowed points and returns a `partial write`
error indicating that some of the points were dropped, how many were
dropped, and one of the problem measurements and tag sets.
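The behavior is roughly the sketch below; `PartialWriteError` and the helper are illustrative stand-ins for the real types, and only the max-series check is shown.

```go
package writesketch

import "fmt"

// PartialWriteError reports that the batch was written only partially.
type PartialWriteError struct {
	Reason  string // one offending measurement/tag set, for the client
	Dropped int    // how many points were not written
}

func (e PartialWriteError) Error() string {
	return fmt.Sprintf("partial write: %s dropped=%d", e.Reason, e.Dropped)
}

// filterSeries drops series keys that would exceed maxSeries, keeps the rest,
// and returns a PartialWriteError instead of failing the whole batch.
func filterSeries(keys []string, existing map[string]struct{}, maxSeries int) ([]string, error) {
	var kept []string
	pwe := PartialWriteError{}
	for _, key := range keys {
		if _, ok := existing[key]; !ok && len(existing) >= maxSeries {
			pwe.Dropped++
			if pwe.Reason == "" {
				pwe.Reason = fmt.Sprintf("max-series-per-database limit exceeded: %s", key)
			}
			continue
		}
		existing[key] = struct{}{}
		kept = append(kept, key)
	}
	if pwe.Dropped > 0 {
		return kept, pwe
	}
	return kept, nil
}
```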
On my machine with about 20 shards, it would take 10+ seconds to shut
down InfluxDB with SIGINT. After this change, it shuts down nearly
instantly.
(*tsdb.Store).Close was shutting down each of its shards sequentially.
Each shard's engine would signal to its compaction goroutines to quit,
and because each compaction goroutine has a hardcoded 1-second sleep in
between checks, waiting for the goroutines would often block for up to a
second.
This change closes all of the TSDB store's shards in parallel. This
means it's possible that multiple shard Close calls could return errors
at once, but we still only return the first error, consistent with the
previous behavior. That being said, the return value of (*tsdb.Store).Close
is ignored in (*cmd/influxd/run.Server).Close anyway.
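The pattern is roughly the sketch below; `Store` and `Shard` are simplified stand-ins for the tsdb types, and only the concurrency structure is the point.

```go
package storesketch

import "sync"

// Shard stands in for tsdb.Shard.
type Shard struct{ /* ... */ }

func (s *Shard) Close() error { return nil }

// Store stands in for tsdb.Store.
type Store struct {
	shards map[uint64]*Shard
}

// Close shuts every shard down in parallel instead of one at a time, and
// still returns only the first error, matching the previous behavior.
func (s *Store) Close() error {
	var wg sync.WaitGroup
	errC := make(chan error, len(s.shards))

	for _, sh := range s.shards {
		wg.Add(1)
		go func(sh *Shard) {
			defer wg.Done()
			errC <- sh.Close()
		}(sh)
	}
	wg.Wait()
	close(errC)

	var first error
	for err := range errC {
		if first == nil && err != nil {
			first = err
		}
	}
	return first
}
```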