This updates the flux integration to use the `flux/array` package rather
than directly using the `arrow/array` package.
Flux recently switched to wrapping the array types from arrow and
creating its own array package to be used for table columns instead of
directly referencing the arrow package. This lets us keep a consistent
interface while potentially changing internal implementations without
affecting downstream consumers. Most recently, the `*array.String` type
gained its own optimizations for certain array patterns.
This change updates the flux integration to use the new API.
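As a rough illustration of the wrapping pattern (the names below are
hypothetical and do not match the real `flux/array` types), consumers
depend on a wrapper type so the arrow-backed representation can change
underneath them:

```go
package colexample

import (
	"github.com/apache/arrow/go/arrow/array"
	"github.com/apache/arrow/go/arrow/memory"
)

// String sketches a column type that wraps an arrow string array.
// Downstream code depends on this wrapper, so the backing representation
// (a plain arrow array, a constant-value optimization, etc.) can change
// without touching consumers.
type String struct {
	data *array.String
}

func (s *String) Len() int           { return s.data.Len() }
func (s *String) IsNull(i int) bool  { return s.data.IsNull(i) }
func (s *String) Value(i int) string { return s.data.Value(i) }

// NewString builds a wrapped string column from raw values.
func NewString(vs []string) *String {
	b := array.NewStringBuilder(memory.NewGoAllocator())
	defer b.Release()
	for _, v := range vs {
		b.Append(v)
	}
	return &String{data: b.NewStringArray()}
}
```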
* feat(query): enable min/max pushdown
* fix(query): fix the group last pushdown to use descending cursors
* test(storage): add read group test with no agg
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
The tables produced by `storage/flux` didn't previously pass our table
tests. The `Empty()` call is supposed to return false if the table was
ever not empty, but reading the table or calling `Done()` caused these
table implementations to report that they were always empty. This
confuses the CSV encoder, which then believes it has just emitted an
empty table.
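As a sketch of the required semantics (a hypothetical wrapper, not the
actual `storage/flux` tables), `Empty()` has to latch once any rows have
been observed:

```go
package tableexample

import "github.com/influxdata/flux"

// emptyTracking wraps a flux.Table and remembers whether the table ever
// produced rows, so Empty() stays accurate after the table has been read
// or Done() has been called.
type emptyTracking struct {
	flux.Table
	returnedRows bool
}

func (t *emptyTracking) Do(f func(flux.ColReader) error) error {
	return t.Table.Do(func(cr flux.ColReader) error {
		if cr.Len() > 0 {
			t.returnedRows = true
		}
		return f(cr)
	})
}

// Empty reports false if the table was ever not empty, even after the
// table has been fully consumed.
func (t *emptyTracking) Empty() bool {
	if t.returnedRows {
		return false
	}
	return t.Table.Empty()
}
```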
The table tests for valid table implementations treat this as an error.
This change introduces a simple test for `ReadFilter` and also runs the
table tests on the filter iterator.
This implements create empty for the window table reader and allows this
table read function to be used when that option is specified. The create
empty flag from the original window call is passed down into the storage
read function.
This also fixes the window table reader so it properly creates
individual tables for each window. Previously, it was constructing one
table for an entire series instead of one table per window.
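The following sketch shows the intended shape of that behavior. The
names and types are illustrative only; the real reader works on storage
cursors and flux tables rather than plain slices:

```go
package windowexample

// bounds is a sketch of a single window's [start, stop) time range.
type bounds struct {
	Start, Stop int64
}

// windows truncates start down to a window boundary and emits one
// bounds value per window, so the reader can build one table per
// window rather than one table per series.
func windows(start, stop, every int64) []bounds {
	var out []bounds
	// Truncate the first window so its start lands on a boundary,
	// handling negative timestamps as well.
	first := start - ((start%every)+every)%every
	for t := first; t < stop; t += every {
		out = append(out, bounds{Start: t, Stop: t + every})
	}
	return out
}

// tablesForSeries sketches per-window table construction. When
// createEmpty is true, windows with no points still produce a (here,
// empty) table; the real reader fills such windows with null values.
func tablesForSeries(times []int64, start, stop, every int64, createEmpty bool) [][]int64 {
	var tables [][]int64
	for _, w := range windows(start, stop, every) {
		var rows []int64
		for _, ts := range times {
			if ts >= w.Start && ts < w.Stop {
				rows = append(rows, ts)
			}
		}
		if len(rows) == 0 && !createEmpty {
			continue
		}
		tables = append(tables, rows)
	}
	return tables
}
```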
Tests have been added to verify three edge case behaviors. The first is
the normal read operation where all values are present. The second is
when create empty is specified so null values may be created. The third
is with truncated boundaries, to ensure that storage is still read and
that the start and stop timestamps are correctly truncated.
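A table-driven test against the hypothetical sketch above could cover
the same three cases (this is illustrative, not the test added in this
change):

```go
package windowexample

import (
	"reflect"
	"testing"
)

func TestTablesForSeries(t *testing.T) {
	t.Run("all values present", func(t *testing.T) {
		got := tablesForSeries([]int64{5, 15, 25, 35}, 0, 40, 10, false)
		want := [][]int64{{5}, {15}, {25}, {35}}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("unexpected tables: %v", got)
		}
	})

	t.Run("create empty produces tables for empty windows", func(t *testing.T) {
		got := tablesForSeries([]int64{5, 35}, 0, 40, 10, true)
		if len(got) != 4 {
			t.Fatalf("expected 4 tables, got %d", len(got))
		}
	})

	t.Run("truncated boundaries", func(t *testing.T) {
		// A start of 7 truncates down to the window boundary at 0, and a
		// stop of 27 is covered by a window ending at 30.
		got := windows(7, 27, 10)
		want := []bounds{{0, 10}, {10, 20}, {20, 30}}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("unexpected windows: %v", got)
		}
	})
}
```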