Fixes #6211.
In Go, packages with the same name, e.g. internal, do not clash with
each other when they are in different parts of the project. With
protobufs, however, definitions will clash if they share the same
package name.
This commit renames the influxql protobuf package to `influxql` to
avoid a clash with a message definition in another protobuf package
named `internal`. Go package aliases allow us to continue referring to
the internal Go package as `internal` rather than `influxql`.
The QueryExecutor had a lot of dead code made obsolete by the query
engine refactor; that code has now been removed. The TSDBStore
interface has also been cleaned up so we can have multiple
implementations of it (such as a local and a remote version).
A StatementExecutor interface has been created for adding custom
functionality to the QueryExecutor that may not be available in the
open source version. The QueryExecutor delegates all statement
execution to the StatementExecutor and only keeps track of
housekeeping. Implementing additional queries is as simple as wrapping
the cluster.StatementExecutor struct or replacing it with something
completely different.
The PointsWriter in the QueryExecutor has been changed to a simple
interface that exposes the one method needed by the query executor.
This allows different PointsWriter implementations to be used by the
QueryExecutor. It has also been moved into the StatementExecutor
instead.
The TSDBStore interface has now been modified to contain the code for
creating an IteratorCreator. This is so the underlying TSDBStore can
implement different ways of accessing the underlying shards rather than
always having to access each shard individually (such as batch
requests).
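To make the shape of this split concrete, here is a rough sketch;
every type, method, and signature below is an illustrative stand-in
rather than the actual influxdb code:

```go
// Package sketch: illustrative only, not the real influxdb API.
package sketch

// Statement stands in for a parsed influxql.Statement.
type Statement interface{ String() string }

// Point stands in for a parsed data point.
type Point struct {
	Name   string
	Fields map[string]interface{}
}

// IteratorCreator stands in for influxql.IteratorCreator.
type IteratorCreator interface{}

// StatementExecutor owns all statement execution; wrapping or replacing
// it is how additional statements can be supported without touching
// QueryExecutor.
type StatementExecutor interface {
	ExecuteStatement(stmt Statement, results chan<- error) error
}

// PointsWriter is reduced to the single write method the statement
// executor needs, so local and remote writers can be swapped in.
type PointsWriter interface {
	WritePointsInto(database, retentionPolicy string, points []Point) error
}

// TSDBStore hides how shards are accessed; handing back an iterator
// creator lets an implementation batch across shards instead of touching
// each shard individually.
type TSDBStore interface {
	ShardIteratorCreator(shardIDs []uint64) (IteratorCreator, error)
}

// QueryExecutor keeps only the housekeeping (task tracking, timeouts)
// and delegates every statement to the StatementExecutor.
type QueryExecutor struct {
	StatementExecutor StatementExecutor
}
```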
Remove the `SHOW SERVERS` handling. This isn't a valid command in the
open source version of InfluxDB anymore.
The QueryManager interface's functionality is now built into
QueryExecutor, so the separate interface is no longer necessary. The
StatementExecutor and QueryExecutor split allows task management to be
built into QueryExecutor much more easily rather than keeping it as a
separate struct.
A bigger refactor of these functions is needed to support #3290, but
this will work for the more common case where someone uses double
quotes instead of single quotes around a time literal.
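As a rough illustration of the idea (simplified, not the parser's
actual code; the accepted layouts here are assumptions), a double-quoted
string compared against the time column can be retried as a time
literal:

```go
// Package sketch: illustrative only.
package sketch

import "time"

// stringToTimeLiteral tries to interpret a quoted string as a time
// literal, returning false if none of the assumed layouts match.
func stringToTimeLiteral(s string) (time.Time, bool) {
	layouts := []string{time.RFC3339, "2006-01-02 15:04:05", "2006-01-02"}
	for _, layout := range layouts {
		if t, err := time.Parse(layout, s); err == nil {
			return t, true
		}
	}
	return time.Time{}, false
}
```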
Fixes #3932.
The simple moving average will gradually emit points instead of waiting
until the end. This should apply to derivative and difference in the
future too.
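A minimal, self-contained sketch of the incremental approach
(illustrative only; the real iterator works on the engine's point
types):

```go
// Package sketch: illustrative only.
package sketch

// movingAverage emits a simple moving average over a window of n values,
// producing an output for each input once the window has filled rather
// than buffering the entire series. n must be >= 1.
func movingAverage(in <-chan float64, n int) <-chan float64 {
	out := make(chan float64)
	go func() {
		defer close(out)
		window := make([]float64, 0, n)
		sum := 0.0
		for v := range in {
			window = append(window, v)
			sum += v
			if len(window) > n {
				sum -= window[0]
				window = window[1:]
			}
			if len(window) == n {
				out <- sum / float64(n)
			}
		}
	}()
	return out
}
```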
Fixes #6112.
The difference function is implemented very similarly to how derivative
is implemented. It is an aggregate function that acts over the entire
aggregate. This function will also have the same problems that
derivative has with getting values from the previous interval or point.
This will be fixed separately as part of #5943.
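A hedged sketch of the per-point calculation (simplified stand-in
types, not the actual iterator code):

```go
// Package sketch: illustrative only.
package sketch

type point struct {
	Time  int64
	Value float64
}

// difference mirrors the derivative pattern: each output point is the
// current value minus the previous one, so at least two input points are
// needed before anything is emitted.
func difference(points []point) []point {
	if len(points) < 2 {
		return nil
	}
	out := make([]point, 0, len(points)-1)
	for i := 1; i < len(points); i++ {
		out = append(out, point{
			Time:  points[i].Time,
			Value: points[i].Value - points[i-1].Value,
		})
	}
	return out
}
```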
Fixes #1825.
Allows configuration of the shard group duration at database creation
time, and at retention policy create/alter time.
Query examples:
```
CREATE DATABASE testdb WITH DURATION 90d SHARD DURATION 30m NAME rp_testdb
CREATE RETENTION POLICY rp_testdb2 ON testdb DURATION INF REPLICATION 1 SHARD DURATION 30m
ALTER RETENTION POLICY rp_testdb2 ON testdb SHARD DURATION 1h
```
This can be useful for long-duration retention policies with lots of
data, where splitting into smaller shards can relieve memory pressure.
While this allows a query to be killed, it doesn't do much yet, since
the interrupt only takes effect after the first row (the entire first
series) has been emitted.
This section of code will likely have to be refactored to make this
work properly, since we need a way to interrupt a currently running
iterator.
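One possible shape for such an interrupt, sketched with simplified
stand-in types (this is not the engine's implementation):

```go
// Package sketch: illustrative only.
package sketch

type floatPoint struct {
	Time  int64
	Value float64
}

// interruptIterator wraps another iterator's Next function and stops
// emitting points once the closing channel is closed (i.e. the query was
// killed).
type interruptIterator struct {
	input   func() (*floatPoint, bool)
	closing <-chan struct{}
}

func (itr *interruptIterator) Next() (*floatPoint, bool) {
	select {
	case <-itr.closing:
		return nil, false // query was killed; stop early
	default:
		return itr.input()
	}
}
```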
The currently running queries can be listed with the command
`SHOW QUERIES`, which displays the commands currently being run, the
database they were run against, and how long they have been running.
Numbers in a query without a decimal point will now be emitted as
integers instead and parsed as an IntegerLiteral. This ensures we keep
the original context that a query was issued with and lets us behave
more like programming languages typically do when it comes to floats
and ints.
This adds functionality for dealing with integers being promoted to
floats in the various places where math is used.
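A self-contained sketch of the promotion rule (the types and helper
below are illustrative, not the parser's representation): integer
operands stay integers, but if either side of a binary math expression
is a float, the other side is promoted.

```go
// Package sketch: illustrative only.
package sketch

// number models a parsed numeric literal that is either an int64 or a
// float64.
type number struct {
	isFloat bool
	i       int64
	f       float64
}

func (n number) asFloat() float64 {
	if n.isFloat {
		return n.f
	}
	return float64(n.i)
}

// add keeps integer math when both operands are integers and promotes to
// float math as soon as either operand is a float.
func add(a, b number) number {
	if a.isFloat || b.isFloat {
		return number{isFloat: true, f: a.asFloat() + b.asFloat()}
	}
	return number{i: a.i + b.i}
}
```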
Fixes #5744 and #5629.
Internal system series start with an underscore prefix, but
restricting that prefix would break users who already use an underscore
prefix in their series names.
Fixes #5870.
This commit moves the `tsdb.Store.ExpandSources()` function onto
the `influxql.IteratorCreator` and provides support for issuing
source expansion across a cluster.
Also fixes derivative calls with an aggregate function so that they
require a GROUP BY interval. A call without a GROUP BY interval doesn't
make sense, as it will never return anything since it will always have
only one point.
Fixes #5968.
`SHOW TAG VALUES` output has been modified to print the measurement name
for every measurement and to return the output in two columns: key and
value. An example output might be:
> SHOW TAG VALUES WITH KEY IN (host, region)
name: cpu
---------
key value
host server01
region useast
name: mem
---------
key value
host server02
region useast
`measurementsByExpr` has been taught how to handle reserved keys (ones
with an underscore at the beginning) to allow reusing that function and
skipping over expressions that don't matter to the call.
Fixes #5593.
The dimensions slice in `RewriteWildcards` gets emptied by an earlier
section of the code, and a later loop then iterates over that empty
slice to append it to the list of dimensions.
That makes the loop dead code that can never be hit.
Also improve the efficiency of this method by not creating a new slice
when there are no wildcards. We already check at the beginning of the
function if there is a wildcard out of necessity. There's no point in
making a new slice and copying the contents if we know that there will
be no wildcards to expand.
It also improves memory efficiency by assuming that if a wildcard
exists, there is only one and the pre-allocated slice can take advantage
of that. If there are multiple wildcards, then a new slice will have to
be created in the middle of the loop to raise the capacity.
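The allocation pattern described above looks roughly like this (a
simplified sketch with assumed names, not the actual `RewriteWildcards`
code):

```go
// Package sketch: illustrative only.
package sketch

// expand replaces each "*" entry in fields with the given expansions. The
// fast path returns the original slice untouched when no wildcard exists,
// and the capacity assumes a single wildcard, so the slice only re-grows
// if a second wildcard appears.
func expand(fields, expansions []string) []string {
	hasWildcard := false
	for _, f := range fields {
		if f == "*" {
			hasWildcard = true
			break
		}
	}
	if !hasWildcard {
		return fields
	}

	out := make([]string, 0, len(fields)-1+len(expansions))
	for _, f := range fields {
		if f == "*" {
			out = append(out, expansions...) // re-grows on extra wildcards
			continue
		}
		out = append(out, f)
	}
	return out
}
```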
When a wildcard is specified for the fields but not the dimensions,
the dimensions get added to the list of fields as part of
`RewriteWildcards()`.
But when a dimension was given with no wildcard, the dimension didn't
get removed from the wildcard expansion in the fields section. This
teaches the rewriter to exclude explicitly listed dimensions from being
expanded as fields. Now, for a measurement with one tag named host and
a field named value, this statement:
SELECT * FROM cpu GROUP BY host
Would expand to this:
SELECT value FROM cpu GROUP BY host
Instead of this:
SELECT host, value FROM cpu GROUP BY host
If you want the latter behavior, you can include it like this:
SELECT host, * FROM cpu GROUP BY host
Fixes #5770.
The name of the column will be composed of every measurement located
inside the math expression, in the order they are encountered within
the expression.
Also handle `*influxql.ParenExpr` in the function
`(*influxql.Field).Name()`
Fixes #5730.
Aux iterators now ask the iterator creator what series will be returned
and determine which aux fields to create based on the results.
The `tsdb.Shards` struct also creates a call iterator around the
iterators returned from each shard.
Out of a list of iterators, an overarching iterator type is chosen and
only iterators of that type are returned for the merge iterator. If a
type can be cast to another type, an extra cast iterator is created to
handle that casting.
The only supported cast is from integers to floats.
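A stripped-down sketch of the cast idea (the iterator interfaces here
are simplified stand-ins, not the influxql types): an integer iterator
is wrapped so its points come out as floats and can merge with float
iterators.

```go
// Package sketch: illustrative only.
package sketch

type integerPoint struct {
	Time  int64
	Value int64
}

type floatPoint struct {
	Time  int64
	Value float64
}

type integerIterator interface{ Next() (*integerPoint, bool) }
type floatIterator interface{ Next() (*floatPoint, bool) }

// integerCastIterator adapts an integer iterator to the float iterator
// interface by converting each point's value.
type integerCastIterator struct{ input integerIterator }

var _ floatIterator = (*integerCastIterator)(nil)

func (itr *integerCastIterator) Next() (*floatPoint, bool) {
	p, ok := itr.input.Next()
	if !ok {
		return nil, false
	}
	return &floatPoint{Time: p.Time, Value: float64(p.Value)}, true
}
```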
Previously if you issued a CQ with a resample interval higher than the
query interval, such as the following:
CREATE CONTINUOUS QUERY cq ON db
RESAMPLE EVERY 4m
BEGIN
SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(2m)
END
This resulted in strange behavior: the FOR value defaulted to the
GROUP BY interval, and the minimum time that had to pass before a CQ
ran was the resample interval, so the appropriate intervals wouldn't be
run even if you set the resample duration to a higher value.
This tweaks the CQ runner to set the minimum interval before a bucket
becomes capable of running to the lower of the query interval or the
resample interval instead of always using the resample interval.
It also sets the default resample duration to be the higher value of the
query interval or the resample interval so the above query gets a
default of 4m instead of 2m and will execute 2 queries every 4 minutes.
If you manually set the resample duration to a lower value than the
resample interval, the old behavior will still happen and should be
considered an error.
This also makes the parser return an error when a continuous query is
created with a resample duration below the resample interval or query
interval (whichever is higher).
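The resulting timing rule can be summarized with a small sketch
(function and parameter names are assumptions): a bucket may run after
the lower of the GROUP BY interval and the RESAMPLE EVERY interval, and
the default resample duration is the higher of the two.

```go
// Package sketch: illustrative only.
package sketch

import "time"

// runInterval is the minimum time before a bucket becomes capable of
// running: the lower of the GROUP BY interval and RESAMPLE EVERY
// (a zero EVERY means it was not specified).
func runInterval(groupBy, every time.Duration) time.Duration {
	if every > 0 && every < groupBy {
		return every
	}
	return groupBy
}

// resampleFor is the duration an interval participates in resampling: the
// explicit FOR value if given, otherwise the higher of the GROUP BY
// interval and RESAMPLE EVERY.
func resampleFor(groupBy, every, forDur time.Duration) time.Duration {
	if forDur > 0 {
		return forDur
	}
	if every > groupBy {
		return every
	}
	return groupBy
}
```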
Fixes #5286.
* Add dir, hostname, and bind address to top level config since it applies to services other than meta
* Add enabled flags to example toml for data and meta services
* Wire up add/remove raft peers and meta servers to meta service
* Update DROP SERVER to be either DROP META SERVER or DROP DATA SERVER
* Bring over statement executor from old meta package
* Start meta service client implementation
* Update meta service test to use the client
* Wire up node ID/meta server storage information
This makes the following syntax possible:
CREATE CONTINUOUS QUERY mycq ON mydb
RESAMPLE EVERY 1m FOR 1h
BEGIN
SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(5m)
END
The RESAMPLE option customizes how often an interval will be sampled and
the duration. The interval is customized with EVERY. Any intervals
within the resampling duration on a multiple of the resample interval
will be updated with the new results from the query.
The duration is customized with FOR. This determines how long an
interval will participate in resampling.
Both options are optional. If RESAMPLE is in the syntax, at least one of
the two needs to be given. The default for both is the interval of the
continuous query.
The service also improves tracking of the last run time and the logic
for deciding when a query for an interval should be run. When
determining the oldest interval to run for a query, the continuous
query service works out what the optimal time to perform the next
query would have been based on the last run time. It then uses this
time, together with the resample duration, to determine the oldest
interval that should be run, and resamples all intervals between that
time and the current time, rather than potentially forgetting about
the last run in an interval if the continuous query service gets
delayed for some reason.
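A small sketch of that calculation under assumed names (the real
service tracks more state than this):

```go
// Package sketch: illustrative only.
package sketch

import "time"

// oldestInterval picks the oldest interval to recompute: the interval
// that follows the last run, clamped so it never reaches back further
// than the resample FOR duration.
func oldestInterval(now, lastRun time.Time, interval, forDur time.Duration) time.Time {
	next := lastRun.Truncate(interval).Add(interval) // interval after the last run
	earliest := now.Add(-forDur).Truncate(interval)  // limit set by FOR
	if next.Before(earliest) {
		return earliest
	}
	return next
}
```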
This removes the previous config options for customizing continuous
queries, since they are no longer relevant, and adds a new option for
customizing the run interval. The run interval determines how often the
continuous query service polls to see whether it should execute a
query. This option defaults to 1s, but can be set to 1m if the least
common factor of all continuous queries' intervals is a higher value
(like 1m).