Static initialization is not desirable in the main binaries, as it forces all
code paths to initialize, but it is still useful in tests. It allows static
initialization to be performed once for all tests and eliminates the need to
always add the FluxInit call. Added a fluxinit/static package that calls
fluxinit.FluxInit() to replace the builtin package. This hides the nature of
the initialization and makes it clear that it is mandatory initialization code
being called.
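A minimal sketch of what such a package can look like; the import path here is illustrative rather than the exact one in the repository:

```
// Package static performs Flux initialization as a side effect of being
// imported, so a test only needs a blank import instead of calling
// fluxinit.FluxInit() itself.
package static

import "github.com/influxdata/influxdb/flux/fluxinit" // hypothetical import path

func init() {
	fluxinit.FluxInit()
}
```

A test file then pulls in the initialization with a blank import of the static package.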
It appears that the double write caused by using to() inside a separate
execution environment (experimental.chain) causes the Flux e2e tests to behave
unpredictably when coupled with the 1.x storage engine. Removing the second
write by using two passes, one to write to the db and another to run the
test, eliminates the flakiness. Verified by running the e2e tests in parallel,
eight at a time, for 12 hours without observing any flakiness. Before the fix,
the flakiness would take approximately 30 minutes on average to exhibit.
This commit also removes universe/to_time from the skipped tests because it was
added to that list when this flakiness was discovered.
This is a backport of #14262 to the 1.x storage engine. The 1.x storage
engine is now the primary engine for open source, so when we switched we
regressed to the old behavior.
This also fixes `go generate` for the tsm1 package by running `tmpl`
with `go run` instead of assuming a correct version is already installed in
the PATH.
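For example, the `go:generate` directives change along these lines (the tool's module path and the template file names are illustrative):

```
// Before: relies on a tmpl binary already installed in the PATH.
//go:generate tmpl -data=@iterator.gen.go.tmpldata iterator.gen.go.tmpl

// After: resolves and runs the tool through the module, so a pinned version
// is used even when nothing is installed locally.
//go:generate go run github.com/benbjohnson/tmpl -data=@iterator.gen.go.tmpldata iterator.gen.go.tmpl
```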
This is required to keep system resource usage low when running
the Flux end-to-end tests, which create a bucket for each test. A
bucket creates at least 17 files after the first write:
* 8 for the `_series` segment files
* 8 for the `index` log files
* 1 for the `wal`
Enables the min and max aggregates for the ReadGroupAggregate pushdown behind a feature flag.
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
Force the writing of data and running of the test to happen sequentially. As
the results come out, collect them and report an error only if the diff results
are not empty.
Additional changes:
* fix(query/stdlib): update rewrite rules for schema mutation
The schema mutator was wrapped in a dual implementation spec so the
rewrite rules were type asserting on the wrong type.
This rule reorders group and window so the planner will switch from using
`ReadGroup` to using `ReadWindowAggregate` when the intent is to
aggregate a grouped window. It then adds a group node that groups by
the given columns plus the start and stop columns and performs the
aggregate again. This is more performant than performing the group first.
Annotate the context with feature flags when handling Flux queries in influxdb.
The Flux end-to-end tests take advantage of this by using a custom flagger that
can set overrides based on the test case that is about to be run, allowing us
to enable features in the end-to-end tests.
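A minimal sketch of such a per-test flagger, using illustrative names and signatures rather than the exact influxdb feature-flag API:

```
package e2e

import "context"

// testFlagger supplies per-test feature flag overrides; the e2e driver swaps
// the override map before each test case runs.
type testFlagger struct {
	overrides map[string]interface{}
}

// Flags returns the overrides for the current test. The returned map is used
// to annotate the query context, so any feature gate checked while handling
// the request sees these values.
func (f *testFlagger) Flags(ctx context.Context) (map[string]interface{}, error) {
	return f.overrides, nil
}
```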
This enables a new rule that will push down the full `aggregateWindow`
query, including the `duplicate` and `window(every: inf)` calls that recombine
the tables. When the full rule is used, the table is not split into
tables for each window and instead is retained as a single table. The
start or stop column is renamed to `_time`, and `_start` and `_stop` will
be the boundaries of the query.
* feat: flags for pushing down new aggregates
* refactor: grouped aggregate rewrite rules
The storage operation ReadGroup aggregates per series on the storage
side. The planner will rewrite grouped aggregate queries to call
ReadGroup, which will perform a partial aggregation, followed by
another operation that performs the rest of the aggregation on
the compute side (a small sketch of this split follows this list).
* feat: storage capabilities for grouped aggregates
* fix: changes from review
* feat: group read operation name should include aggregate
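To illustrate the split described in the grouped aggregate rewrite above, here is a minimal sketch of the compute-side step for a pushed-down count; the per-series partial counts stand in for what ReadGroup would return:

```
// combinePartialCounts finishes a pushed-down count: storage (ReadGroup) has
// already produced one partial count per series, and the query engine sums
// those partials to complete the aggregation on the compute side.
func combinePartialCounts(partials []int64) int64 {
	var total int64
	for _, c := range partials {
		total += c
	}
	return total
}
```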
This implements create empty for the window table reader and allows this
table read function to be used when it is specified. It will pass down
the create empty flag from the original window call into the storage
read function.
This also fixes the window table reader so it properly creates
individual tables for each window. Previously, it was constructing one
table for an entire series instead of one table per window.
Tests have been added to verify three edge case behaviors. The first is
the normal read operation where all values are present. The second is
when create empty is specified so null values may be created. The third
is with truncated boundaries to ensure that storage is read from and the
start and stop timestamps get correctly truncated.
Added a (disabled) planner rule that matches:
ReadGroupPhys -> { count }
It uses the same physical spec node for group to implement the aggregate. The
rule requires:
* the pushDownGroupAggregateCount feature flag enabled
* no existing aggregate present in the ReadGroup
* use of the "_value" column only
The e2e test driver in influxdb runs the tests twice to get past the fact that there
is no way to force an order between the write to storage and the read back. When
the json.Marshal call became mandatory, it was added to the first run but not
the second.
Added a (disabled and feature-flagged) planner rule that matches:
ReadRange -> window -> { min, max, mean, count, sum }
The rule requires:
* the pushDownWindowAggregate{Count,Rest} feature flags enabled
* having WindowAggregateCapability
(which StorageReader does not currently have)
* use of "_value" columns only
* window.period == window.every
* window.every.months == 0
* window.every is positive
* window.offset == 0
* standard time columns
* createEmpty is false
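A sketch of the window-related checks in the list above, using an illustrative struct rather than the real planner types (the month component of `every` is modeled as a separate field, since Flux durations carry months separately):

```
package rules

import "time"

// windowSpec is an illustrative stand-in for the window description the
// planner rule inspects.
type windowSpec struct {
	every, period, offset time.Duration
	everyMonths           int64
	createEmpty           bool
}

// windowEligibleForPushdown mirrors the window conditions listed above.
func windowEligibleForPushdown(w windowSpec) bool {
	return w.everyMonths == 0 && // window.every.months == 0
		w.every > 0 && // window.every is positive
		w.period == w.every && // window.period == window.every
		w.offset == 0 && // window.offset == 0
		!w.createEmpty // createEmpty is false
}
```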
This modifies the read window aggregate interfaces to future-proof them
if and when we add additional capabilities to the method. Previously,
the interface was all or nothing: if we modified the RPC call itself, we
would have to make a new interface to denote the change to the Go code.
This changes the interface so a `WindowAggregateCapability` now exists.
This way, we can modify the struct to include things like:
```
type WindowAggregateCapability struct {
	WindowPeriodCapability  bool
	MeanAggregateCapability bool
}
```
This way we can learn if the RPC call itself supports some specific
option. If the first iteration doesn't support a mean aggregate or the
mean aggregate is only supported by single server implementations, the
window aggregate can tell the caller that it won't be able to compute
the mean aggregate.
Since it fills in a struct with these capabilities, the struct can
safely introduce new values. If a downstream consumer wants to take
advantage of that functionality, then all interfaces in the chain have
to be updated to consume the upstream capabilities.
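For example, a downstream consumer deciding whether to push a mean aggregate down might check the struct like this (a sketch building on the struct above; how the capability is obtained from the reader is not shown):

```
// canPushDownMean inspects the capability struct rather than relying on the
// interface type alone, so new capability fields can be added safely.
func canPushDownMean(caps *WindowAggregateCapability) bool {
	return caps != nil && caps.MeanAggregateCapability
}
```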
The `ReadWindowAggregateSource` will invoke the `ReadWindowAggregate`
method on the `influxdb.Reader` and return the table. It is implemented
using the same common methods that are used for the other sources.
Added an interface for an additional storage capability. This interface
will allow for checking if the reader supports the window aggregate call
and another method for invoking the call if it does.
This is implemented using a single interface. If the reader implements
the interface, it indicates that the client is capable of reading the
response. The `HasXXX` method is intended to check whether the store supports
the operation. This method also takes a context because it may need to make
a remote call or wait for one.
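A minimal sketch of the shape of such an interface, with illustrative names and placeholder spec/result types standing in for the real ones:

```
package storage

import "context"

// ReadWindowAggregateSpec and ResultSet are placeholders for the real spec
// and result types.
type ReadWindowAggregateSpec struct{}
type ResultSet interface{}

// WindowAggregateReader is an illustrative capability interface. A reader
// that implements it advertises that it can issue the window aggregate call
// and read its response.
type WindowAggregateReader interface {
	// HasWindowAggregateCapability reports whether the underlying store
	// supports the window aggregate call. It takes a context because
	// answering may require a remote call, or waiting for one.
	HasWindowAggregateCapability(ctx context.Context) bool

	// ReadWindowAggregate invokes the call when the capability is present.
	ReadWindowAggregate(ctx context.Context, spec ReadWindowAggregateSpec) (ResultSet, error)
}
```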
This updates the semantic graph usage to accommodate the change to the
semantic graph that removed the ambiguity of the body, so now it is
always a block instead of being a block or an expression.
The storage filters are modified to use the predicates directly so we do
not have to pass `semantic.FunctionExpression` around. Instead, since
simple expressions are all that are supported anyway, we transform
suitable function expressions into predicates as part of the push down
rule and this simplifies the influxdb reader code.
This also moves the storage predicate conversion code into the standard
library package as it is the only location that uses this code now that
the predicate conversion is done as part of the push down rule.
This refactor was prompted by another refactor of the
`semantic.FunctionExpression` that would cause it to always contain a
`semantic.Block`. Since the push down filter needs the expressions so it can
combine them, this refactor allows us to avoid constructing a combined
filter inside of blocks, which gives us better type safety.
The `buckets()` and `v1.databases()` functions have been updated to
support their remote counterparts that were added to flux. These
functions now behave the same as the `from()` call: they default to
the current organization when run against the server and use the
remote versions from the repl.
Algorithm W will return a semantic graph where every function block
always uses a block and a return statement. This is in contrast to the
Go code, which would have the semantic graph be an expression or a block.
The push down code would not introspect blocks, which meant that any
function expression produced by Algorithm W would never be pushed down.
This fixes it so the code will now extract the semantic expression from
inside of a block if there is exactly one statement and that statement is
a return statement.
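A minimal sketch of that extraction, assuming the `semantic.Block` and `semantic.ReturnStatement` node shapes from the Flux semantic package:

```
package stdlib

import "github.com/influxdata/flux/semantic"

// expressionFromBlock unwraps a single-statement block of the form
// `{ return <expr> }` so the push down rule can inspect the expression.
func expressionFromBlock(block *semantic.Block) (semantic.Expression, bool) {
	if len(block.Body) != 1 {
		return nil, false
	}
	ret, ok := block.Body[0].(*semantic.ReturnStatement)
	if !ok {
		return nil, false
	}
	return ret.Argument, true
}
```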
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>