Static initialization is not desirable in the main binaries because it forces all
code paths to initialize, but it is still useful in tests. It allows static
initialization to be performed once for all tests and eliminates the need to
add the FluxInit call everywhere. Added a fluxinit/static package that calls
fluxinit.FluxInit() to replace the builtin package. This hides the nature of
the initialization and makes it clear that mandatory initialization code is
being called.
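As a minimal sketch (the import path is an assumption; the real fluxinit/static package may differ in detail), such a package only needs an init function, so a blank import is enough to perform the mandatory setup once per process:

```go
// Sketch of a fluxinit/static-style package: importing it for side effects
// runs the mandatory Flux initialization exactly once.
// The fluxinit import path here is an assumption.
package static

import "github.com/influxdata/flux/fluxinit"

func init() {
	fluxinit.FluxInit()
}
```

Test files would then pull it in with a blank import such as `import _ "github.com/influxdata/flux/fluxinit/static"` (path assumed) instead of calling FluxInit in every test.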
It appears that the double write caused by using to() inside a separate
execution environment (experimental.chain) causes the Flux e2e tests to behave
unpredictably when coupled with the 1.x storage engine. Removing the second
write by using two passes, one to write to the db and another to run the
test, eliminates the flakiness. Verified by running the e2e tests with 8-way
parallelism for 12 hours without observing any flakiness. Before the fix, the
flakiness would appear after approximately 30 minutes on average.
This commit also removes universe/to_time from the skipped tests because it was
added to that list when this flakiness was discovered.
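To make the two-pass idea concrete, here is a hedged sketch of the structure (all types and names are hypothetical, not the real e2e harness): the first pass only writes the input data, and the second pass runs the test body, so the data is written exactly once and never from inside a separate execution environment.

```go
package e2e

import "context"

// Hypothetical types; the real harness differs. This only illustrates the
// two-pass structure described above.
type TestCase struct {
	WriteScript string // Flux that writes the input data (uses to())
	TestScript  string // Flux that reads the data back and checks the result
}

// runFluxFn stands in for executing a Flux script against the test server.
type runFluxFn func(ctx context.Context, script string) error

// runTwoPass writes the input data first, then runs the test, so the write
// happens exactly once.
func runTwoPass(ctx context.Context, run runFluxFn, tc TestCase) error {
	if err := run(ctx, tc.WriteScript); err != nil { // pass 1: write to the db
		return err
	}
	return run(ctx, tc.TestScript) // pass 2: run the test
}
```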
This is a backport of #14262 to the 1.x storage engine. The 1.x storage
engine is now the primary engine for open source, so when we switched we
regressed to the old behavior.
This also fixes `go generate` for the tsm1 package by running `tmpl` with
`go run` instead of assuming the correct binary is installed in the PATH.
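For illustration, the `go:generate` directives change shape roughly like this (the tool's import path and the template file names are assumptions for the example):

```go
// Before: relies on whatever tmpl binary happens to be on the PATH.
//go:generate tmpl -data=@iterator.gen.go.tmpldata iterator.gen.go.tmpl

// After: go run resolves and builds the module-pinned version of the tool.
//go:generate go run github.com/benbjohnson/tmpl -data=@iterator.gen.go.tmpldata iterator.gen.go.tmpl
```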
This is required to keep system resource usage low when running
the Flux end-to-end tests, which create a bucket for each test. A
bucket creates at least 17 files after the first write:
* 8 for the `_series` segment files
* 8 for the `index` log files
* 1 for the `wal`
Callers can now specify that a key must be present in the query response
metadata before LoggingProxyQueryService logs the query. This will be used in
the gateway to log the query only when the connection to queryd fails.
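The following is only an illustrative sketch of the behavior (the type, field, and key names are hypothetical, not the real LoggingProxyQueryService API): when a required key is configured, the query is logged only if that key appears in the response metadata.

```go
package main

import "fmt"

// Hypothetical stand-ins to illustrate "require a metadata key before
// logging"; the real service wraps a proxy query service and a logger.
type conditionalQueryLogger struct {
	requireKey string             // if non-empty, log only when this key is present
	logFn      func(query string) // destination for log entries
}

func (l *conditionalQueryLogger) log(query string, metadata map[string][]string) {
	if l.requireKey != "" {
		if _, ok := metadata[l.requireKey]; !ok {
			return // required key absent: skip logging this query
		}
	}
	l.logFn(query)
}

func main() {
	l := &conditionalQueryLogger{
		requireKey: "queryd-error", // hypothetical key set by the gateway on failure
		logFn:      func(q string) { fmt.Println("logged query:", q) },
	}

	// Not logged: the required key is missing from the metadata.
	l.log(`from(bucket: "b") |> range(start: -1h)`, nil)

	// Logged: the gateway added the key because the queryd connection failed.
	l.log(`buckets()`, map[string][]string{"queryd-error": {"connection refused"}})
}
```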
Enables the min and max aggregates for the ReadGroupAggregate pushdown behind a feature flag.
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
The `buckets()` command used a bucket lookup that wrapped the
`FindBuckets` API, but it did not use the pagination aspect of this API
correctly. When the underlying implementation was changed to a version
that correctly implemented pagination, the `buckets()` command broke.
Since it was the query code that used the API incorrectly, rather than a
regression in the `FindBuckets` implementation, this fixes the usage to
paginate correctly.
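A hedged sketch of what correct pagination over a FindBuckets-style service looks like (the interface and FindOptions fields follow what the influxdb package is expected to expose, but treat the details as assumptions):

```go
package bucketlookup

import (
	"context"

	platform "github.com/influxdata/influxdb/v2"
)

// allBuckets pages through FindBuckets until a short page signals the end,
// instead of assuming a single call returns every bucket.
func allBuckets(ctx context.Context, svc platform.BucketService, filter platform.BucketFilter) ([]*platform.Bucket, error) {
	const limit = 20
	opt := platform.FindOptions{Limit: limit}

	var out []*platform.Bucket
	for {
		page, _, err := svc.FindBuckets(ctx, filter, opt)
		if err != nil {
			return nil, err
		}
		out = append(out, page...)
		if len(page) < limit {
			return out, nil // short page: nothing left to fetch
		}
		opt.Offset += len(page)
	}
}
```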