Additional changes:
* fix(query/stdlib): update rewrite rules for schema mutation
The schema mutator was wrapped in a dual-implementation spec, so the
rewrite rules were type-asserting on the wrong type.
This rule reorders group and window so the planner switches from
`ReadGroup` to `ReadWindowAggregate` when the intent is to aggregate a
grouped window. It then adds a group node that groups by the given
columns along with the start and stop columns, and re-performs the
aggregate. This is more performant than performing the group first.
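As a rough sketch (bucket, measurement, and tag names here are
hypothetical), a query of this shape is affected by the reordering:
    from(bucket: "example-bucket")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
        |> group(columns: ["host"])
        |> aggregateWindow(every: 1m, fn: mean)
Instead of grouping first via `ReadGroup` and windowing afterwards, the
planner can use `ReadWindowAggregate` and then group by `host`, `_start`,
and `_stop` before re-applying `mean()`.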
Annotate the context with feature flags when handling Flux queries in
influxdb. The Flux end-to-end tests take advantage of this with a custom
flagger that can set overrides based on the test case about to run,
allowing features to be enabled per test.
This enables a new rule that pushes down the full `aggregateWindow`
query, including the `duplicate` and `window(every: inf)` calls that
recombine the tables. When the full rule is used, the output is not
split into one table per window but remains a single table. The start
or stop column is renamed to `_time`, and `_start` and `_stop` become
the boundaries of the query.
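For reference, `aggregateWindow(every: 1m, fn: count)` expands to
roughly the following pipeline in the stdlib (column names come from the
`timeSrc`/`timeDst` defaults):
    |> window(every: 1m)
    |> count()
    |> duplicate(column: "_stop", as: "_time")
    |> window(every: inf)
The new rule can push this entire sequence down rather than only the
`window |> count` portion.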
* feat: flags for pushing down new aggregates
* refactor: grouped aggregate rewrite rules
The storage operation ReadGroup aggregates per series on the storage
side. The planner will rewrite grouped aggregate queries to call
ReadGroup, which will perform a partial aggregation, followed by
another operation that will perform the rest of the aggregation on
the compute side.
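As an illustration (bucket and tag names are hypothetical), a grouped
aggregate such as:
    from(bucket: "example-bucket")
        |> range(start: -1h)
        |> group(columns: ["host"])
        |> sum()
can be rewritten so that `ReadGroup` returns per-series partial sums
from storage and a compute-side node (e.g. another `sum()`) combines
them into the final value for each group.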
* feat: storage capabilities for grouped aggregates
* fix: changes from review
* feat: group read operation name should include aggregate
This implements `createEmpty` for the window table reader and allows
this table read function to be used when the flag is specified. The
`createEmpty` flag from the original window call is passed down into
the storage read function.
This also fixes the window table reader so it properly creates
individual tables for each window. Previously, it was constructing one
table for an entire series instead of one table per window.
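A query exercising this path might look like the following (names are
illustrative):
    from(bucket: "example-bucket")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "cpu")
        |> aggregateWindow(every: 1m, fn: mean, createEmpty: true)
With the flag passed through, windows that contain no points still
produce a row with a null `_value` instead of being dropped.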
Tests have been added to verify three edge-case behaviors. The first is
the normal read operation where all values are present. The second is
when `createEmpty` is specified so null values may be created. The third
uses truncated boundaries to ensure that storage is still read and that
the start and stop timestamps are correctly truncated.
Added a (disabled) planner rule that matches:
ReadGroupPhys -> { count }
It uses the same physical spec node for group to implement the aggregate. The
rule requires (see the example query after this list):
* the pushDownGroupAggregateCount feature flag enabled
* no existing aggregate present in the ReadGroup
* use of the "_value" column only
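A query matching this rule, assuming the feature flag is enabled
(bucket and tag names are hypothetical), would look like:
    from(bucket: "example-bucket")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "cpu")
        |> group(columns: ["host"])
        |> count()
Since `count()` defaults to the `_value` column and no aggregate has
been pushed into the ReadGroup yet, the count can be folded into the
ReadGroupPhys node itself.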
The column reader passed to the callback of `flux.Table.Do` is
automatically released. The callback should never release it manually.
Doing so causes a double release, which erroneously frees the table
while it might still be referenced by another transformation.
In particular, this affected the following:
tables
|> yield()
|> to()
This is because this pattern produces a buffered table with two
references and passes it to both `yield()` and `to()`, since `yield()`
is a pseudo-node that doesn't really exist. The real graph looks more
like:
tables |> yield()
tables |> to()
The double release in `yield()` would release the `to()`
transformation's copy of the column readers, so `to()` would then be
invoked with an invalid column reader.
The e2e test driver in influxdb runs the tests twice to work around the
fact that there is no way to force ordering between the write to storage
and the read back. When the json.Marshal call became mandatory, it was
added to the first run but not the second.
Added a (disabled and feature-flagged) planner rule that matches:
ReadRange -> window -> { min, max, mean, count, sum }
The rule requires (see the example query after this list):
* the pushDownWindowAggregate{Count,Rest} feature flags enabled
* having WindowAggregateCapability
(which StorageReader does not currently have)
* use of "_value" columns only
* window.period == window.every
* window.every.months == 0
* window.every is positive
* window.offset == 0
* standard time columns
* createEmpty is false
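Under those conditions, a query such as the following (illustrative
names) would match the rule:
    from(bucket: "example-bucket")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
        |> window(every: 1m)
        |> mean()
Here `window.period` defaults to `every`, the offset is zero, the
standard time columns are used, and `createEmpty` defaults to false, so
the `window |> mean` pair can be replaced by a window aggregate read
once the storage reader reports WindowAggregateCapability.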