Rename the root package from platform to influxdb. I did this with a
dumb editor macro, so some comments changed too.
In the interest of minimizing risk, every file importing the root
package now aliases it to "platform" so that no changes beyond the
imports were necessary in those files.
Lastly, replace the old platform module with the local path /dev/null
so that nobody can accidentally reintroduce a platform dependency while
migrating platform code to influxdb.
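For example, a file that used to import the platform root package only needs its import path updated (a minimal sketch; `platform.ID` stands in for whatever identifiers a given file already references):

```go
package example

// Only the import path changes; the alias keeps the old "platform"
// identifier so the rest of the file compiles unmodified.
import (
	platform "github.com/influxdata/influxdb"
)

// Existing references such as platform.ID keep working.
var _ platform.ID
```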
The package contains all of the transpiler specs and allows them to be
split across multiple files instead of keeping every test in the same
file. The specs are all Go code, so they are type checked at compile
time rather than loaded as JSON from disk.
Additionally, to make things easier for developers, each test reports
the exact file and line where it was created. So rather than hunting
for the file a test is located in, you will get something nice like the
following:
--- FAIL: TestTranspiler/SELECT_count(value)_FROM_db0..cpu_WHERE_host_=_'server01' (0.00s)
testing.go:51: aggregates_with_condition.go:16: unexpected error: unimplemented function: "count"
As can be seen, the failing test is in the
`aggregates_with_condition.go` file at line 16, which is where the test
was created by the `AggregateTest` function, and the relevant spec can
be found in that same file.
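One way to capture the registration site in Go (a sketch, not necessarily the exact helper used by the package) is to record the caller's file and line when the spec is constructed:

```go
package spectests

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// sourceLocation records the file and line of the code that registered
// a spec, so failures can point back at the registration site.
func sourceLocation() string {
	// Skip two frames: this helper and the registration function
	// (e.g. AggregateTest) that calls it.
	_, file, line, ok := runtime.Caller(2)
	if !ok {
		return "unknown"
	}
	return fmt.Sprintf("%s:%d", filepath.Base(file), line)
}
```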
The transpiler compilation tests no longer allow skip to be specified.
Instead, a test must return an error message that starts with
`unimplemented`, and the rest of the message is used as the skip
reason. This makes it easier to identify the failing tests in the
transpiler. With the previous method, a test could be marked as skip
while the transpiler returned the wrong error message, because the
framework did not differentiate between an unimplemented error message
and an incorrect one.
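The check in the harness amounts to something like this (a sketch with an assumed helper name):

```go
package spectests

import (
	"strings"
	"testing"
)

// checkTranspileError converts "unimplemented ..." errors into skips so
// unfinished features are visible without failing the build, while any
// other error remains a real failure.
func checkTranspileError(t *testing.T, err error) {
	t.Helper()
	if err == nil {
		return
	}
	if strings.HasPrefix(err.Error(), "unimplemented") {
		t.Skip(err.Error())
	}
	t.Fatalf("unexpected error: %s", err)
}
```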
The transpiler now supports basic windowing. Window offsets are not
supported yet.
For windowing, we use the window function to split the points, perform
the aggregate/selector operation, and then merge the points back into a
single window so they end up in the same table they started in. This is
now reflected in the spec and the code.
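To illustrate the shape of the operation, here is a toy sketch in plain Go, not the transpiler's actual code: points are bucketed into windows, each bucket is aggregated, and the per-window results are merged back into one ordered series:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type point struct {
	t time.Time
	v float64
}

// windowCounts buckets points into fixed intervals, counts each bucket,
// and merges the per-window results back into a single series.
func windowCounts(points []point, every time.Duration) []point {
	counts := make(map[time.Time]float64)
	for _, p := range points {
		counts[p.t.Truncate(every)]++ // assign the point to its window
	}
	out := make([]point, 0, len(counts))
	for start, n := range counts {
		out = append(out, point{t: start, v: n})
	}
	// Collapse the windows back into one ordered series.
	sort.Slice(out, func(i, j int) bool { return out[i].t.Before(out[j].t) })
	return out
}

func main() {
	base := time.Date(2018, 5, 1, 0, 0, 0, 0, time.UTC)
	pts := []point{
		{base, 1},
		{base.Add(30 * time.Second), 1},
		{base.Add(90 * time.Second), 1},
	}
	fmt.Println(windowCounts(pts, time.Minute)) // two windows: counts 2 and 1
}
```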
The compiler tests from github.com/influxdata/influxdb/query have been
moved over to the influxql transpiler in platform. The framework has
been updated to include a skip option so that all of the tests can be
present even though not all of them succeed yet. If a test marked as
skipped starts succeeding, that also causes an error; this prevents us
from making progress on the transpiler without unmarking the test, so
progress is always recorded.
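A sketch of how that check might look (hypothetical `Fixture` type and helper names):

```go
package spectests

import "testing"

// Fixture is a migrated compiler test; Skip holds the reason the test
// is currently expected to fail, or "" if it must pass.
type Fixture struct {
	Skip string
	Run  func() error
}

// runFixture treats a skipped test that unexpectedly succeeds as an
// error, so the skip marker must be removed once the feature works.
func runFixture(t *testing.T, fixture Fixture) {
	err := fixture.Run()
	if fixture.Skip != "" {
		if err == nil {
			t.Fatal("test was expected to fail, but it succeeded; remove the skip")
		}
		t.Skip(fixture.Skip)
	}
	if err != nil {
		t.Fatalf("unexpected error: %s", err)
	}
}
```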
This extra flexibility makes it easier for the transpiler to generate a
specification, since the map step can focus on generating only the
columns related to fields. In particular, it makes it easier to
implement wildcards for tags because the tags get passed along with the
partition key.
The spec says to use the `_time` column for the time in the output, but
we were mapping `r._time` to `time` and using the `time` variable. This
modifies the encoder to read the `_time` column and rename it to `time`
for the column name.
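The remapping amounts to something like the following (a sketch; the real encoder works over the table's column metadata rather than bare strings):

```go
package influxql

// columnNames renames the `_time` column to `time` for the response,
// leaving all other labels untouched.
func columnNames(labels []string) []string {
	out := make([]string, len(labels))
	for i, label := range labels {
		if label == "_time" {
			label = "time"
		}
		out[i] = label
	}
	return out
}
```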
The transpiler should use a bucket for the `from()` call instead of the
database parameter, which will likely be deprecated. The bucket it will
read data from is `db/rp`, and if the retention policy isn't specified,
`autogen` is used as the default.
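A sketch of that rule (hypothetical helper name):

```go
package influxql

// bucketName builds the bucket for from() as "db/rp", defaulting the
// retention policy to autogen when none is specified.
func bucketName(db, rp string) string {
	if rp == "" {
		rp = "autogen"
	}
	return db + "/" + rp
}
```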
There are a few changes to how the transpiler works. The first is that
the streams are now abstracted behind a `cursor` interface. The
interface keeps track of which AST nodes (such as variables or function
calls) are represented by the data inside the stream and how to access
the underlying data. This makes it easier to build a generic interface
for operations like join and map. It also makes it easier, in the
future, to reuse the map code for a filter so we can implement
conditions.
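A sketch of what such an interface might look like (method names and import paths assumed):

```go
package influxql

import (
	"github.com/influxdata/influxql"       // InfluxQL AST (assumed path)
	"github.com/influxdata/platform/query" // query spec (assumed path)
)

// cursor wraps a stream in the generated spec and remembers which
// InfluxQL AST nodes its columns represent.
type cursor interface {
	// ID reports the operation in the spec that produces this stream.
	ID() query.OperationID

	// Keys lists the AST expressions (variables, function calls)
	// reachable from this stream.
	Keys() []influxql.Expr

	// Value maps an AST expression to the column holding its data.
	Value(expr influxql.Expr) (string, bool)
}
```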
This also follows the methods in the transpiler README and takes
advantage of the updates to the ifql language. It groups the relevant
cursors into a cursor group, performs any necessary joins, and lets us
continue building on this as we flesh out more parts of the transpiler
and the language.
The cursor interface also means we no longer have to keep a symbol
table mapping the generated names to their locations, because that
information is kept within the incoming cursor rather than in a
separate data structure.
It also splits the transpiler into more files so it is easier to find
the code relevant to each stage.
This fixes the encoder so that it encodes the response correctly as a
JSON blob using the outputs of the transpiler. The transpiler has also
been modified to pass through the correct values so that both the map
function and the aggregate function are constructed correctly.
This removes the group function temporarily because it does not seem to
be working.
The transpiler now yields each statement using the statement id so the
result encoder can properly order the results and encode the statement
id. This behavior is now in the transpiler spec.
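A sketch of the naming convention (hypothetical helpers): the yield for each statement is named after the statement's position in the query, and the encoder can parse that name back into an id for ordering:

```go
package influxql

import "strconv"

// yieldName names a statement's terminal yield after its position in
// the query, so results can be matched back to their statements.
func yieldName(statementID int) string {
	return strconv.Itoa(statementID)
}

// statementID recovers the originating statement from a result name,
// letting the encoder order results and emit the statement id.
func statementID(resultName string) (int, error) {
	return strconv.Atoi(resultName)
}
```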