The `databases()` function was moved into the `influxdata/influxdb/v1`
package, so the transpiler's existing call to it no longer worked. The
`percentile()` call was changed to `quantile()` and its argument was
renamed to `q`. This also updates the median implementation to call
`median()` directly: since we now produce ASTs rather than the spec, the
`median()` definition is available to us.
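For illustration, here is a sketch of the flux these calls now produce;
the pipelines are trimmed and the bucket name is a placeholder, so only
the highlighted calls reflect the actual change:

```go
// Illustrative flux fragments showing the renamed calls. The surrounding
// pipelines are trimmed and "db0/autogen" is a placeholder bucket.
const (
	// databases() now requires importing the v1 package.
	showDatabases = `import "influxdata/influxdb/v1"
v1.databases()`

	// percentile(value, 90) becomes quantile with q in [0, 1].
	percentile90 = `from(bucket: "db0/autogen") |> quantile(q: 0.9)`

	// median(value) can call flux's median() definition directly.
	medianOfValue = `from(bucket: "db0/autogen") |> median()`
)
```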
This refactors everything to generate and use a flux AST when
transpiling an influxql query. It also updates the spectests so they are
written in flux source rather than hand-built ASTs; the harness parses
that source and compares the resulting ASTs.
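As a rough sketch of what a spectest looks like under the new approach
(the `spectest` type and its field names here are assumptions, not the
real harness):

```go
// Hypothetical shape of a spectest: the expectation is flux source, and
// both sides are parsed into ASTs before being compared.
type spectest struct {
	influxql string // the query to transpile
	flux     string // expected flux source, parsed to an AST for comparison
}

var meanTest = spectest{
	influxql: `SELECT mean(value) FROM db0..cpu WHERE time >= now() - 1h`,
	flux: `from(bucket: "db0/autogen")
	|> range(start: -1h)
	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "value")
	|> mean()`,
}
```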
We reorganized the functions in flux to have the structure:
/functions
/inputs
/transformations
/outputs
This PR catches platform up to the new package layout.
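In practice this mostly means updating import paths in platform; a
hedged sketch, assuming the packages live under the flux module (the
exact paths are inferred from the layout above):

```go
// Blank imports to register the builtins from their new homes; the
// module paths are assumptions based on the directory layout above.
import (
	_ "github.com/influxdata/flux/functions/inputs"          // from(), ...
	_ "github.com/influxdata/flux/functions/outputs"         // to_http, to_kafka, ...
	_ "github.com/influxdata/flux/functions/transformations" // filter(), map(), ...
)
```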
As a separate refactoring issue, we should discuss:
  - from(bucket: ) should migrate from flux --> platform
  - to_http and to_kafka should migrate from platform --> flux
When the query is an aggregate type, the transpiler will normalize the
`_time` column by dropping any existing `_time` column and duplicating
`_start` into a new one.
This works for selectors because they never normalized their `_time`
column. The aggregates did normalize `_time`, but we have decided to
remove that behavior and have aggregates not set a `_time` column at
all.
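A sketch of the normalization step as it would appear in the generated
flux, trimmed to the relevant calls:

```go
// Hedged sketch: for aggregates the transpiler appends something like
// this to the pipeline, dropping any existing _time column and copying
// _start into a new one.
const normalizeTime = `
	|> drop(columns: ["_time"])
	|> duplicate(column: "_start", as: "_time")`
```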
The transpiler compilation tests no longer allow `skip` to be specified.
Instead, a test must return an error message that starts with
`unimplemented`, and the remainder of the message is used as the skip
reason. This makes it easier to identify which tests are failing in the
transpiler. Under the previous method, a test could be marked as skip
while the transpiler returned the wrong error, because the test did not
differentiate between an unimplemented error message and an incorrect
one.
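A minimal sketch of how the harness can apply this convention, assuming
a hypothetical `transpile` entry point and the standard `strings` and
`testing` packages:

```go
// Hedged sketch of the skip convention; transpile stands in for the
// real transpiler entry point.
func runSpec(t *testing.T, q string) {
	pkg, err := transpile(q)
	if err != nil {
		// An "unimplemented" prefix marks a known gap: skip with the
		// rest of the message as the reason. Anything else is a failure.
		if strings.HasPrefix(err.Error(), "unimplemented") {
			t.Skip(err.Error())
		}
		t.Fatal(err)
	}
	_ = pkg
}
```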
There are a few changes to how the transpiler works. The first is that
the streams are now abstracted behind a `cursor` interface. The
interface keeps track of which AST nodes (such as variables or function
calls) are represented by the data inside the stream and how to access
the underlying data. This makes it easier to build a generic interface
for operations like join and map, and it will make it easier in the
future to reuse the map operation's code for filters when we implement
conditions.
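A hypothetical shape for that interface (the method set and the
`ast`/`influxql` package references are assumptions for illustration):

```go
// Hedged sketch of a cursor: it knows which influxql expressions its
// stream carries and which flux AST expression produces the stream.
type cursor interface {
	// Expr returns the flux AST expression that yields this stream.
	Expr() ast.Expression
	// Keys lists the influxql expressions represented in the stream.
	Keys() []influxql.Expr
	// Value reports the column that holds the data for expr, and
	// whether this cursor produces it at all.
	Value(expr influxql.Expr) (string, bool)
}
```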
This also follows the methods in the transpiler README and takes
advantage of the updates to the ifql language. The transpiler now groups
the relevant cursors into a cursor group, performs any necessary joins,
and lets us continue building on this as we flesh out more of the
transpiler and the language.
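For example, a query selecting two fields might produce two cursors that
get grouped and joined; a hedged sketch of the generated flux (the
variable names and join columns are illustrative):

```go
// Hypothetical output for SELECT max(usage_user), min(usage_system)
// FROM cpu, trimmed: one cursor per field, then a join.
const joinedFields = `
val1 = from(bucket: "db0/autogen")
	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
	|> max()
val2 = from(bucket: "db0/autogen")
	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
	|> min()
join(tables: {val1: val1, val2: val2}, on: ["_time"])`
```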
The cursor interface also removes the need for a symbol table mapping
generated names to their locations: that information is kept within the
incoming cursor rather than in a separate data structure.
It also splits the transpiler into more files so the code relevant to
each stage is easier to find.