* Add AuthorizeDatabase API to QueryAuthorizer to verify a user has
appropriate access to the specified database (see the sketch after
this list)
* Update serverFluxQuery handler to require a meta.User when auth is
enabled
* Update Flux createFromSource and createBucketsSource dependencies to
require an Authorizer when auth is enabled in configuration
* Update createFromSource to verify read permissions for each bucket
specified in a Flux query
* Update BucketsDecoder, which implements the buckets() Flux function,
to return only buckets that the user has read or write permissions for
* Add unit tests verifying that authentication is required for Flux
HTTP requests when auth is enabled in configuration
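For illustration, a minimal sketch of what an `AuthorizeDatabase`-style
check can look like. The types, fields, and signature below are
hypothetical stand-ins, not the actual `QueryAuthorizer`/`meta.User`
API:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the real meta.User and query authorizer.
type User struct {
	Name     string
	ReadDBs  map[string]bool
	WriteDBs map[string]bool
}

type QueryAuthorizer struct{}

// AuthorizeDatabase verifies that the user has the requested level of
// access to the named database, returning an error otherwise.
func (QueryAuthorizer) AuthorizeDatabase(u *User, db string, write bool) error {
	if u == nil {
		return errors.New("no user provided: authentication required")
	}
	if write {
		if u.WriteDBs[db] {
			return nil
		}
	} else if u.ReadDBs[db] || u.WriteDBs[db] {
		return nil
	}
	return fmt.Errorf("user %q is not authorized to access database %q", u.Name, db)
}

func main() {
	auth := QueryAuthorizer{}
	u := &User{Name: "alice", ReadDBs: map[string]bool{"telegraf": true}}
	fmt.Println(auth.AuthorizeDatabase(u, "telegraf", false)) // <nil>
	fmt.Println(auth.AuthorizeDatabase(u, "secret", false))   // not authorized
}
```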
* gen-init initializes a database based on the provided CLI spec
* gen-exec generates the data for the target database based on the same
CLI spec as gen-init
The query authorizer was not being properly passed to subqueries, so
unauthorized reads were not rejected when a subquery was the one
reading the value. Similarly, the max series limit was not being
propagated down into subqueries either.
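A toy sketch of the propagation pattern, using hypothetical types
rather than the actual influxdb internals:

```go
package main

import "fmt"

// Authorizer decides whether a series read is permitted.
type Authorizer interface {
	AuthorizeSeriesRead(measurement string) bool
}

type allowOnly struct{ name string }

func (a allowOnly) AuthorizeSeriesRead(m string) bool { return m == a.name }

// Options carries per-query settings that must reach every level.
type Options struct {
	Auth       Authorizer
	MaxSeriesN int
}

type Query struct {
	Measurement string
	Subquery    *Query // e.g. SELECT ... FROM (SELECT ... FROM inner)
}

func execute(q *Query, opt Options) error {
	if q.Subquery != nil {
		// The fix, in spirit: forward opt (authorizer and series limit)
		// unchanged. If a fresh, empty Options were built here instead,
		// the inner read would never be checked.
		return execute(q.Subquery, opt)
	}
	if opt.Auth != nil && !opt.Auth.AuthorizeSeriesRead(q.Measurement) {
		return fmt.Errorf("not authorized to read %q", q.Measurement)
	}
	return nil
}

func main() {
	q := &Query{Subquery: &Query{Measurement: "secret"}}
	fmt.Println(execute(q, Options{Auth: allowOnly{"public"}, MaxSeriesN: 1000}))
}
```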
When a NaN value was computed, it would be written back incorrectly as
a string type instead of being omitted. This happened rarely, when
`stddev()` was computed over a single value, and only when it was done
on a new shard.
This correctly drops the value. The reason it wasn't dropped before is
that NaN values are represented as a `(*float64)(nil)`, which does not
compare equal to `nil`, so the writeback system treated it as a
non-nil point while the writer encoded it as a string.
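For context, a minimal Go program showing the typed-nil comparison
that misled the writeback system:

```go
package main

import "fmt"

func main() {
	var fp *float64        // (*float64)(nil)
	var v interface{} = fp // interface value with type *float64, value nil
	fmt.Println(fp == nil) // true
	fmt.Println(v == nil)  // false: the interface carries a type, so it is non-nil
}
```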
In addition, this fixes the point writer to report the
number of points actually written rather than the number of points
desired to be written. Previously, if there was an error writing a point
for some reason, the point would be silently dropped, but still recorded
as a point that had been written. Now it reports the number of points
that were written and omits the ones that were dropped.
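A hedged sketch of the reporting fix, with illustrative types: count
only points that were actually written, and surface dropped points
through the error rather than the count.

```go
package main

import (
	"errors"
	"fmt"
)

type point struct{ ok bool }

// writePoints returns the number of points actually written. Dropped
// points are no longer counted as written.
func writePoints(points []point) (written int, err error) {
	for _, p := range points {
		if !p.ok {
			err = errors.New("partial write: some points dropped")
			continue
		}
		written++
	}
	return written, err
}

func main() {
	n, err := writePoints([]point{{ok: true}, {ok: false}, {ok: true}})
	fmt.Println(n, err) // 2, not 3
}
```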
This commit limits the number of files that can be compacted in
a single group when forcing a full compaction or when a shard
becomes cold. This prevents too many files from being compacted
at the same time.
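A minimal sketch of the grouping cap, assuming a hypothetical limit of
8 files per group (the real limit lives inside the tsm1 compaction
planner):

```go
package main

import "fmt"

const maxFilesPerGroup = 8 // assumed cap, for illustration only

// limitGroups splits a candidate file list into groups no larger than
// the cap, so a forced full compaction runs in bounded-size steps.
func limitGroups(files []string) [][]string {
	var groups [][]string
	for len(files) > 0 {
		n := len(files)
		if n > maxFilesPerGroup {
			n = maxFilesPerGroup
		}
		groups = append(groups, files[:n])
		files = files[n:]
	}
	return groups
}

func main() {
	files := make([]string, 20)
	for i := range files {
		files[i] = fmt.Sprintf("%03d.tsm", i)
	}
	for _, g := range limitGroups(files) {
		fmt.Println(len(g), "files")
	}
}
```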
Before this, if you deleted everything with `delete where true`,
for example, you would be left with all of your measurements in
the fields index. That would cause ghost fields to reappear if
someone reinserted points into the measurement.
This fixes that by making the innermost delete code check whether
the measurement was removed from the index and, if so, clean it
up out of the fields index as well.
Additionally, it fixes bugs in that cleanup code: if you had
measurements named "m1" and "m10", then when iterating over the
cache or file store, "m1" would also match "m10" because only the
prefix was checked. The fix also checks the character immediately
after the measurement name, requiring it to be either a comma
(tags follow) or the first character of the field separator.
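A hedged sketch of that boundary check, assuming tsm1-style series
keys of the form `name,tag=value#!~#field` (the `#!~#` field
separator is an assumption here, used for illustration):

```go
package main

import (
	"bytes"
	"fmt"
)

const keyFieldSeparator = "#!~#" // assumed separator between series key and field

// matchesMeasurement reports whether the series key belongs to the
// measurement. Prefix alone is not enough: "m1" is a prefix of
// "m10,host=a...", so the byte after the name must be ',' (tags follow)
// or the first byte of the field separator (no tags).
func matchesMeasurement(key, name []byte) bool {
	if !bytes.HasPrefix(key, name) {
		return false
	}
	rest := key[len(name):]
	return len(rest) > 0 && (rest[0] == ',' || rest[0] == keyFieldSeparator[0])
}

func main() {
	fmt.Println(matchesMeasurement([]byte("m1,host=a#!~#v"), []byte("m1")))  // true
	fmt.Println(matchesMeasurement([]byte("m10,host=a#!~#v"), []byte("m1"))) // false
}
```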
There are some problematic races that occur when deletes happen
against writes to the same points at the same time. This change
introduces guards and an epoch based system to coordinate these
modifications.
A guard matches a point based on the time, measurement name, and
some conditions loaded from an influxql expression. The intent
is to be as precise as possible without allowing any false
negatives: if a point would be deleted, the guard must match it.
We are allowed to match more points than necessary, at the cost
of slowing down writes.
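A minimal sketch of guard matching under these rules, with
hypothetical fields (the real guard derives its condition from an
influxql expression):

```go
package main

import "fmt"

// guard errs on the side of matching (no false negatives): an empty
// name matches any measurement and a nil condition matches any tags.
type guard struct {
	name     string
	min, max int64
	cond     func(tags map[string]string) bool
}

func (g *guard) matches(name string, ts int64, tags map[string]string) bool {
	if g.name != "" && g.name != name {
		return false
	}
	if ts < g.min || ts > g.max {
		return false
	}
	return g.cond == nil || g.cond(tags)
}

func main() {
	g := &guard{name: "cpu", min: 0, max: 100}
	fmt.Println(g.matches("cpu", 50, nil)) // true: potentially conflicting write
	fmt.Println(g.matches("mem", 50, nil)) // false: disjoint, write proceeds
}
```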
The epoch based system keeps track of outstanding writes and
deletes and their associated guards. When a delete operation
is going to start, it waits until all current writes are
done, and installs its guard, blocking all future writes that
contain points that may conflict with the delete. This allows
writes to disjoint points to proceed uncontended, and the
implementation is optimized for the common case of few outstanding
deletes. For example, when there are no deletes, a write just has
to take a mutex, bump a counter, and
compare a value against zero. The epoch trackers are per shard,
so that different shards never have to contend with one another.
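A simplified sketch of such a tracker, assuming stripped-down
semantics (the real implementation also handles guard removal and
per-point conflict checks):

```go
package main

import (
	"fmt"
	"sync"
)

type Guard struct {
	Measurement string // real guards also match on time range and condition
}

// EpochTracker coordinates writes and deletes for one shard.
type EpochTracker struct {
	mu      sync.Mutex
	writes  int // outstanding writes
	guards  []*Guard
	drained *sync.Cond
}

func NewEpochTracker() *EpochTracker {
	t := &EpochTracker{}
	t.drained = sync.NewCond(&t.mu)
	return t
}

// StartWrite registers an outstanding write and returns the guards the
// caller must check its points against. With no deletes outstanding,
// this is just: take a mutex, bump a counter, compare a slice to empty.
func (t *EpochTracker) StartWrite() []*Guard {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.writes++
	return t.guards
}

func (t *EpochTracker) EndWrite() {
	t.mu.Lock()
	t.writes--
	if t.writes == 0 {
		t.drained.Broadcast()
	}
	t.mu.Unlock()
}

// WaitDelete blocks until all current writes finish, then installs g so
// that conflicting future writes can be detected and blocked.
func (t *EpochTracker) WaitDelete(g *Guard) {
	t.mu.Lock()
	for t.writes > 0 {
		t.drained.Wait()
	}
	t.guards = append(t.guards, g)
	t.mu.Unlock()
}

func main() {
	t := NewEpochTracker()
	guards := t.StartWrite()
	fmt.Println("guards at write time:", len(guards))
	t.EndWrite()
	t.WaitDelete(&Guard{Measurement: "cpu"})
	fmt.Println("delete guard installed")
}
```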
TSI1 and inmem indexes have different properties during deletes.
Specifically, inmem shares a global index across all shards, whereas
every tsi1 index is scoped to a single shard. Deleting a series may
drop the last reference to it across all shards, necessitating a
removal from the series file. Since the inmem index is shared across
all shards, removing the series from that index when it's removed
from the series file is sufficient.
However, in the case of a mixed index database, if the last shard
is a TSI1 shard, the other inmem indexes are not available when we
discover that it was the last reference to the series. This ends
up leaving the series in the inmem index without a series id in
the series file, causing all sorts of misbehavior.
Rather than continue curling ourselves into a ball to try to fix
this unsupported mode, give a helpful error message telling the
user that they must run their database in a non-mixed index mode
to allow deletes.
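A hypothetical sketch of that check; the names are illustrative, not
the actual tsdb internals:

```go
package main

import (
	"errors"
	"fmt"
)

// checkDeleteSupported refuses the delete when shards use more than one
// index type, rather than risk leaving stale inmem index entries.
func checkDeleteSupported(shardIndexTypes []string) error {
	types := map[string]struct{}{}
	for _, t := range shardIndexTypes {
		types[t] = struct{}{}
	}
	if len(types) > 1 {
		return errors.New("cannot delete series in a database with mixed " +
			"inmem/tsi1 indexes; use a single index type to allow deletes")
	}
	return nil
}

func main() {
	fmt.Println(checkDeleteSupported([]string{"tsi1", "tsi1"}))  // <nil>
	fmt.Println(checkDeleteSupported([]string{"inmem", "tsi1"})) // error
}
```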