This has various benefits:
- Users embedding InfluxDB within other Go programs can specify a different logger / prefix easily.
- More consistent with code used elsewhere in InfluxDB (e.g. services, other `run.Server.*` fields, etc).
- This is also more efficient: `executeQuery` no longer allocates a new `*log.Logger` each time it is called (see the sketch below).
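A minimal sketch of the pattern (the type and field names here are illustrative, not the exact InfluxDB ones):

```
package main

import (
	"log"
	"os"
)

// QueryExecutor keeps a reusable logger as a field instead of calling
// log.New inside executeQuery on every invocation.
type QueryExecutor struct {
	Logger *log.Logger
}

func NewQueryExecutor() *QueryExecutor {
	return &QueryExecutor{Logger: log.New(os.Stderr, "[query] ", log.LstdFlags)}
}

func (e *QueryExecutor) executeQuery(q string) {
	e.Logger.Printf("executing: %s", q) // no per-call allocation
}

func main() {
	e := NewQueryExecutor()
	// A program embedding InfluxDB can swap in its own logger/prefix:
	e.Logger = log.New(os.Stdout, "[myapp] ", log.Lmicroseconds)
	e.executeQuery("SELECT * FROM cpu")
}
```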
This also switches the remaining iterators to be lazy so they can return
errors properly. They needed to be converted to lazy initialization
anyway, which has the side effect of making it much easier for us to
propagate the underlying error during initialization.
Updated the Emitter to return errors when it cannot read properly from
the iterators.
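A rough sketch of the lazy-initialization shape (hypothetical names): the iterator defers opening its input until the first read, so an initialization error surfaces through the normal error path instead of being lost.

```
package main

import (
	"errors"
	"fmt"
)

// FloatIterator is a minimal stand-in for the influxql iterator interface.
type FloatIterator interface {
	Next() (float64, error)
}

// lazyIterator defers initialization until the first call to Next.
type lazyIterator struct {
	open  func() (FloatIterator, error)
	input FloatIterator
	err   error
}

func (itr *lazyIterator) Next() (float64, error) {
	if itr.err != nil {
		return 0, itr.err
	}
	if itr.input == nil {
		if itr.input, itr.err = itr.open(); itr.err != nil {
			return 0, itr.err
		}
	}
	return itr.input.Next()
}

func main() {
	itr := &lazyIterator{open: func() (FloatIterator, error) {
		return nil, errors.New("shard unavailable")
	}}
	if _, err := itr.Next(); err != nil {
		fmt.Println("emitter sees:", err) // error propagated, not swallowed
	}
}
```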
The deprecation message is now attached to a new attribute returned with
the results. This message can then be read by clients to warn a user
about upcoming changes to the query engine.
The `influx` client has already been modified to read this message and
print it out for every format except CSV.
The first warning is a deprecation message about removing `IF NOT
EXISTS` from `CREATE DATABASE`.
The message will also be printed to the server log.
Fixes #5707.
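A sketch of the shape of such a result; the exact field names and JSON layout in InfluxDB may differ.

```
package main

import (
	"encoding/json"
	"os"
)

// Message is an advisory attached to a query result; clients decide how
// to surface it (the influx CLI prints it for every format except CSV).
type Message struct {
	Level string `json:"level"`
	Text  string `json:"text"`
}

// Result carries rows plus any messages (rows omitted here for brevity).
type Result struct {
	Messages []*Message `json:"messages,omitempty"`
}

func main() {
	r := Result{Messages: []*Message{{
		Level: "warning",
		Text:  "IF NOT EXISTS is deprecated and will be removed", // illustrative wording
	}}}
	json.NewEncoder(os.Stdout).Encode(r)
}
```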
If a non-existent shard was requested as part of the iterator,
the influxd server would panic. Now it handles this situation by
skipping the offending shard and returning properly when none could be
found.
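Sketched in miniature (stand-in types, not the real tsdb API): look up each requested shard and skip missing ones.

```
package main

import "fmt"

// Shard and Store are minimal stand-ins for the tsdb types.
type Shard struct{ ID uint64 }

type Store struct{ shards map[uint64]*Shard }

// Shard returns nil when the shard doesn't exist.
func (s *Store) Shard(id uint64) *Shard { return s.shards[id] }

// shardsForIterator collects the requested shards, skipping any that no
// longer exist instead of panicking on a nil shard.
func (s *Store) shardsForIterator(ids []uint64) []*Shard {
	var out []*Shard
	for _, id := range ids {
		if sh := s.Shard(id); sh != nil {
			out = append(out, sh)
		}
	}
	return out // may be empty; the caller returns cleanly when none found
}

func main() {
	s := &Store{shards: map[uint64]*Shard{1: {ID: 1}}}
	fmt.Println(len(s.shardsForIterator([]uint64{1, 2, 3}))) // -> 1
}
```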
The tsdb package can't have a dependency on the meta package, so it
takes a slice of uint64 shard IDs. The clustering implementation needs
the full ShardInfo to know the shard owners, though, so clustering uses
a different implementation.
The `*tsdb.Store` type gets wrapped in the cluster package so it can
implement the `IteratorCreator` function without having a dependency on
the meta package.
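A sketch of the layering with illustrative names: the tsdb side accepts plain shard IDs, and only the cluster-side wrapper knows about ShardInfo and owners.

```
package main

import "fmt"

// IteratorCreator is a stand-in for the influxql interface.
type IteratorCreator interface{ Describe() string }

// tsdb side: takes plain shard IDs, so no dependency on the meta package.
type TSDBStore struct{}

type localIC struct{ ids []uint64 }

func (ic *localIC) Describe() string { return fmt.Sprintf("%d local shards", len(ic.ids)) }

func (s *TSDBStore) IteratorCreator(shardIDs []uint64) IteratorCreator {
	return &localIC{ids: shardIDs}
}

// meta side: the full shard descriptor, including owners.
type ShardInfo struct {
	ID     uint64
	Owners []uint64
}

// cluster side: wraps *TSDBStore so only this package sees ShardInfo;
// the owners would be used to route requests to remote nodes.
type ClusterStore struct{ Store *TSDBStore }

func (c *ClusterStore) IteratorCreator(shards []ShardInfo) IteratorCreator {
	ids := make([]uint64, 0, len(shards))
	for _, si := range shards {
		ids = append(ids, si.ID)
	}
	return c.Store.IteratorCreator(ids)
}

func main() {
	c := &ClusterStore{Store: &TSDBStore{}}
	fmt.Println(c.IteratorCreator([]ShardInfo{{ID: 1}, {ID: 2}}).Describe())
}
```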
The QueryExecutor had a lot of dead code made obsolete by the query
engine refactor; that code has now been removed. The TSDBStore interface
has also been cleaned up so we can have multiple implementations of it
(such as local and remote versions).
A StatementExecutor interface has been created for adding custom
functionality to the QueryExecutor that may not be available in the open
source version. The QueryExecutor delegates all statement execution to
the StatementExecutor and only keeps track of housekeeping itself.
Implementing additional queries is as simple as wrapping
the cluster.StatementExecutor struct or replacing it with something
completely different.
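A sketch of the split with illustrative signatures: the QueryExecutor handles housekeeping and delegates each statement, so a wrapper around the statement executor can add behavior.

```
package main

import "fmt"

// StatementExecutor executes one statement; implementations can be
// wrapped or replaced wholesale.
type StatementExecutor interface {
	ExecuteStatement(stmt string) error
}

// QueryExecutor keeps only housekeeping and delegates execution.
type QueryExecutor struct {
	StatementExecutor StatementExecutor
}

func (e *QueryExecutor) ExecuteQuery(stmts []string) error {
	for _, stmt := range stmts {
		// ...housekeeping: register the task, enforce limits, etc...
		if err := e.StatementExecutor.ExecuteStatement(stmt); err != nil {
			return err
		}
	}
	return nil
}

// baseExecutor stands in for cluster.StatementExecutor.
type baseExecutor struct{}

func (baseExecutor) ExecuteStatement(stmt string) error {
	fmt.Println("executed:", stmt)
	return nil
}

// auditExecutor shows the wrapping approach: extra behavior, same interface.
type auditExecutor struct{ inner StatementExecutor }

func (a auditExecutor) ExecuteStatement(stmt string) error {
	fmt.Println("audit:", stmt)
	return a.inner.ExecuteStatement(stmt)
}

func main() {
	e := &QueryExecutor{StatementExecutor: auditExecutor{inner: baseExecutor{}}}
	e.ExecuteQuery([]string{"CREATE DATABASE db0"})
}
```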
The PointsWriter in the QueryExecutor has been changed to a simple
interface that exposes the one method needed by the query executor.
This allows different PointsWriter implementations to be used by
the QueryExecutor. It has also been moved into the StatementExecutor
instead.
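The narrowed interface might look like this sketch (the method name and argument types are assumptions):

```
package main

import "fmt"

// Point is a stand-in for a models.Point.
type Point struct {
	Name  string
	Value float64
}

// PointsWriter is the one-method interface the statement executor needs;
// local, clustered, or test implementations can all satisfy it.
type PointsWriter interface {
	WritePoints(database, retentionPolicy string, points []Point) error
}

// logWriter is a trivial implementation for illustration.
type logWriter struct{}

func (logWriter) WritePoints(db, rp string, points []Point) error {
	fmt.Printf("wrote %d points to %s.%s\n", len(points), db, rp)
	return nil
}

func main() {
	var w PointsWriter = logWriter{}
	w.WritePoints("db0", "autogen", []Point{{Name: "cpu", Value: 0.5}})
}
```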
The TSDBStore interface has now been modified to contain the code for
creating an IteratorCreator. This allows the underlying TSDBStore to
implement different ways of accessing the underlying shards (such as
batch requests) rather than always accessing each shard individually.
Remove the `SHOW SERVERS` handling. This isn't a valid command in the
open source version of InfluxDB anymore.
The QueryManager interface is now built into the QueryExecutor and is no
longer necessary. The StatementExecutor/QueryExecutor split makes it
much easier to build task management into the QueryExecutor rather than
keeping it in a separate struct.
If the http.CloseNotifier didn't go off for some reason (and it's not
guaranteed to go off just because the HTTP connection is closed), the
query wouldn't get correctly recycled when chunked output was requested.
The query id in the query executor was also not being set correctly.
This seems to have been an oversight when merging the point limit
monitor.
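A hedged illustration of the defensive pattern, not the exact handler code: rely on the handler returning, not only on CloseNotifier, to release the query.

```
package main

import (
	"fmt"
	"net/http"
)

func serveQuery(w http.ResponseWriter, r *http.Request) {
	done := make(chan struct{})
	defer close(done) // always recycles the query, chunked or not

	// CloseNotifier may never fire even if the connection is closed, so
	// it is only an early-exit optimization, not the cleanup mechanism.
	if notifier, ok := w.(http.CloseNotifier); ok {
		go func() {
			select {
			case <-notifier.CloseNotify():
				// client went away: interrupt the running query here
			case <-done:
			}
		}()
	}
	fmt.Fprintln(w, "results...")
}

func main() {
	http.HandleFunc("/query", serveQuery)
	// http.ListenAndServe(":8086", nil)
}
```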
A bigger refactor of these functions is needed to support #3290, but
this will work for the more common case that someone uses double quotes
instead of single quotes when surrounding a time literal.
Fixes #3932.
Allows configuration of the shard group duration at database creation, and
at retention policy create/alter time.
Query examples:
```
CREATE DATABASE testdb WITH DURATION 90d SHARD DURATION 30m NAME rp_testdb
CREATE RETENTION POLICY rp_testdb2 ON testdb DURATION INF REPLICATION 1 SHARD DURATION 30m
ALTER RETENTION POLICY rp_testdb2 ON testdb SHARD DURATION 1h
```
This can be useful for long-duration retention policies with lots of data,
where splitting into smaller shards relieves memory pressure.
This commit adds a configurable limit to the number of series that
can be returned from a `SELECT` statement. The limit is checked
immediately after planning and is determined by the use of iterator
stats.
Fixes#6076
This commit adds an `IteratorStats` that holds aggregate
iterator processing information. A method is also added to
`Iterator` to return the stats:
```
Stats() influxql.IteratorStats
```
The remote iterators will also emit their stats in the point
stream upon first connection, on a given interval, and then
finally once the last point has been sent.
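A sketch tying the stats to the series limit (the field names `SeriesN`/`PointN` are assumptions): stats are available right after planning, so the limit check can run before any points are read.

```
package main

import (
	"errors"
	"fmt"
)

// IteratorStats aggregates processing info; field names are illustrative.
type IteratorStats struct {
	SeriesN int // number of series the iterator will touch
	PointN  int // number of points read so far
}

// Iterator is a stand-in with the new Stats method.
type Iterator interface {
	Stats() IteratorStats
}

type seriesIterator struct{ stats IteratorStats }

func (itr *seriesIterator) Stats() IteratorStats { return itr.stats }

// checkSeriesLimit runs immediately after planning: if the planned
// iterators cover too many series, abort before reading any points.
func checkSeriesLimit(itrs []Iterator, max int) error {
	var n int
	for _, itr := range itrs {
		n += itr.Stats().SeriesN
	}
	if max > 0 && n > max {
		return errors.New("max select series count exceeded")
	}
	return nil
}

func main() {
	itrs := []Iterator{&seriesIterator{stats: IteratorStats{SeriesN: 12000}}}
	fmt.Println(checkSeriesLimit(itrs, 10000)) // -> max select series count exceeded
}
```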
Interrupt handling is spread across both `IteratorCreators` and the
iterators themselves. Part of the interrupt must be
handled inside of the engine so it stops trying to emit points when an
interrupt is found, and another part of the interrupt has to happen when
combining the iterators so it doesn't just start reading the next shard.
While this allows a query to be killed, it doesn't really do anything
yet since the interrupt happens only after the first row gets emitted
(the entire first series).
This section of code will likely have to be refactored to make this work
since we need a way to interrupt a currently running iterator.
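One common shape for this, sketched with illustrative names: the iterator polls an interrupt channel every so many points so a kill can take effect mid-series.

```
package main

import "fmt"

// floatInterruptIterator stops emitting once the closing channel fires.
// Checking the channel on every point is costly, so it polls every 256
// points (the stride here is illustrative).
type floatInterruptIterator struct {
	next    func() (float64, bool)
	closing <-chan struct{}
	count   int
}

func (itr *floatInterruptIterator) Next() (float64, bool) {
	if itr.count&0xFF == 0 {
		select {
		case <-itr.closing:
			return 0, false // interrupted: stop instead of reading the next shard
		default:
		}
	}
	itr.count++
	return itr.next()
}

func main() {
	closing := make(chan struct{})
	close(closing) // simulate an immediate kill
	itr := &floatInterruptIterator{
		next:    func() (float64, bool) { return 1.0, true },
		closing: closing,
	}
	_, ok := itr.Next()
	fmt.Println("got point:", ok) // -> false
}
```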
The currently running queries can be listed with the command
`SHOW QUERIES`; it displays the commands currently being run, the
database they were run against, and how long they have been running.
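A sketch of the bookkeeping behind this (names are illustrative): the executor itself tracks each query's text, database, and start time, which is exactly what `SHOW QUERIES` reports.

```
package main

import (
	"fmt"
	"sync"
	"time"
)

// QueryTask holds what SHOW QUERIES displays per running query.
type QueryTask struct {
	Query     string
	Database  string
	StartTime time.Time
}

// QueryExecutor tracks its own tasks; no separate QueryManager needed.
type QueryExecutor struct {
	mu      sync.RWMutex
	nextID  uint64
	queries map[uint64]*QueryTask
}

func (e *QueryExecutor) attachQuery(q, db string) uint64 {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.nextID++
	e.queries[e.nextID] = &QueryTask{Query: q, Database: db, StartTime: time.Now()}
	return e.nextID
}

// showQueries lists each task: id, query text, database, elapsed time.
func (e *QueryExecutor) showQueries() {
	e.mu.RLock()
	defer e.mu.RUnlock()
	for id, t := range e.queries {
		fmt.Printf("%d\t%q\t%s\t%s\n", id, t.Query, t.Database, time.Since(t.StartTime))
	}
}

func main() {
	e := &QueryExecutor{queries: map[uint64]*QueryTask{}}
	e.attachQuery("SELECT mean(value) FROM cpu", "telegraf")
	e.showQueries()
}
```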
This commit moves the `tsdb.Store.ExpandSources()` function onto
the `influxql.IteratorCreator` and provides support for issuing
source expansion across a cluster.
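A sketch of the idea with simplified types: ExpandSources sits next to iterator creation, so a local implementation matches regexes against its own measurements while a cluster implementation can fan the call out and merge results.

```
package main

import (
	"fmt"
	"regexp"
)

// Source is a simplified influxql source: a concrete measurement name,
// or a regex to expand into one or more of them.
type Source struct {
	Name  string
	Regex *regexp.Regexp
}

// IteratorCreator (abridged): source expansion now lives on this interface.
type IteratorCreator interface {
	ExpandSources(sources []Source) ([]Source, error)
}

// localStore expands regexes against the measurements it holds.
type localStore struct{ measurements []string }

func (s *localStore) ExpandSources(sources []Source) ([]Source, error) {
	var out []Source
	for _, src := range sources {
		if src.Regex == nil {
			out = append(out, src)
			continue
		}
		for _, m := range s.measurements {
			if src.Regex.MatchString(m) {
				out = append(out, Source{Name: m})
			}
		}
	}
	return out, nil
}

func main() {
	var ic IteratorCreator = &localStore{measurements: []string{"cpu", "cpu_load", "mem"}}
	expanded, _ := ic.ExpandSources([]Source{{Regex: regexp.MustCompile(`^cpu`)}})
	for _, s := range expanded {
		fmt.Println(s.Name) // cpu, cpu_load
	}
}
```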
There was a race where a remote write could arrive before a meta
client cache update arrived. When this happened, the receiving node
would drop the write because it could not determine what database
and retention policy the shard it was supposed to create belonged
to.
This change sends the db and rp along with the write so that the
receiving node does not need to consult the meta store. It also lets us
skip sending writes for shards that no longer exist, instead of always
sending them and filling the receiving node's logs with dropped write
requests. This second situation can occur when shards are deleted and
some nodes still have writes queued in hinted handoff for those shards.
Fixes #5610.
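The change amounts to widening the write request, roughly like this sketch (the field names are assumptions):

```
package main

import "fmt"

// WriteShardRequest now carries the database and retention policy so the
// receiving node can create a missing shard without consulting the meta
// store (avoiding the cache-update race).
type WriteShardRequest struct {
	ShardID         uint64
	Database        string
	RetentionPolicy string
	Points          [][]byte // serialized points
}

func handleWrite(req WriteShardRequest, haveShard func(uint64) bool) error {
	if !haveShard(req.ShardID) {
		// Enough context arrives with the write itself to create the shard.
		fmt.Printf("creating shard %d in %s.%s\n",
			req.ShardID, req.Database, req.RetentionPolicy)
	}
	return nil
}

func main() {
	req := WriteShardRequest{ShardID: 7, Database: "db0", RetentionPolicy: "autogen"}
	handleWrite(req, func(uint64) bool { return false })
}
```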
The RPC handler for remote queries would attempt to reuse a closed
connection for certain commands that didn't use pooling. Those RPC
commands have been fixed so they no longer try to reuse the connection.
When creating an iterator, if there are no points to return, the points
decoder would hit an EOF that it didn't catch and would return that
error back to the client that made the request. It now properly returns
no points by using a `nilFloatIterator` if there are no points to
return.
This fixes remote execution when a cluster has nothing to return.
Fixes #5612, #5573 and #5518.
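A sketch of the fix's shape, assuming a float iterator interface: return a valid, empty iterator instead of letting the decoder's io.EOF escape to the client.

```
package main

import "fmt"

// FloatPoint and FloatIterator are minimal stand-ins for influxql types.
type FloatPoint struct{ Value float64 }

type FloatIterator interface {
	Next() *FloatPoint
	Close() error
}

// nilFloatIterator is a valid iterator with no points, used instead of
// surfacing the decoder's EOF as an error to the client.
type nilFloatIterator struct{}

func (*nilFloatIterator) Next() *FloatPoint { return nil }
func (*nilFloatIterator) Close() error      { return nil }

func main() {
	var itr FloatIterator = &nilFloatIterator{}
	n := 0
	for p := itr.Next(); p != nil; p = itr.Next() {
		n++
	}
	fmt.Println("points:", n, "error: none") // empty result, no spurious EOF
}
```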
Using the MetaExecuter, queries that need to run on both data nodes
and optionally the meta store will be executed across all data nodes
in the cluster.
Fixes #5653 and #5394.
Previously, dropping retention policies did not propagate to local TSDB
shards; the retention policies would just be removed from the
Meta Store.
This PR ensures that data associated with retention policies is
removed when the retention policy is dropped.
Also, it cleans up a couple of other methods in `tsdb`, including
removing the requirement to provide (redundant) shard IDs when deleting
databases.