* Add initial rename specification to SPEC.md
* Tweak language in spec to be more explicit
* Update spec to be in accordance with final design decisions
This is so Chronograf doesn't have to import the builtin package, which
finalizes builtin registration.
Also clarify that the builtin package should only be imported from main
or test packages.
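A minimal sketch of the intended pattern (the import path here is assumed, not taken from this text): a main package imports the builtin package only for its registration side effect.

```go
package main

import (
	// Imported for its side effects only: this registers the builtin
	// functions and finalizes builtin registration. The path is
	// illustrative.
	_ "github.com/influxdata/platform/query/builtin"
)

func main() {
	// ...start the server; builtins are already registered...
}
```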
The now time is stamped by the influxql transpiler and used inside of the actual query. It is more accurate to take the timestamp we have created and send it as part of the spec to queryd than to force ourselves to ensure absolute times exist everywhere.
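A minimal sketch of the idea, using an illustrative stand-in for the real spec type:

```go
package main

import (
	"fmt"
	"time"
)

// Spec mirrors the relevant part of a query spec (illustrative, not the
// real type).
type Spec struct {
	Now time.Time // resolved once by the transpiler
}

func main() {
	// Stamp "now" at transpile time and ship it with the spec, rather
	// than rewriting every relative time into an absolute one.
	spec := Spec{Now: time.Now().UTC()}
	fmt.Println("relative times are evaluated against", spec.Now)
}
```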
If a query is canceled while it is being enqueued, we now stop attempting to add it to the new-queries queue and return the error reported by the context. This allows the HTTP server to cancel a running query when the client disconnects for whatever reason, without continuing to process the canceled query.
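This is the standard Go context pattern; a sketch with hypothetical queue and query types:

```go
package control

import "context"

// Query is a placeholder for the real query type.
type Query struct{}

// enqueue adds q to the queue unless ctx is canceled first, in which
// case it returns the context's error instead of blocking.
func enqueue(ctx context.Context, queue chan<- *Query, q *Query) error {
	select {
	case queue <- q:
		return nil
	case <-ctx.Done():
		// The client disconnected or the request timed out; stop
		// trying to process this query.
		return ctx.Err()
	}
}
```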
Introduces the Statisticser interface, which ResultIterators may implement.
The HTTP implementation uses HTTP trailers to preserve the statistics, so we do not need every encoder and decoder to support statistics.
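A sketch of the shape this implies (the interface name comes from the text; the method signature and the fields are assumptions):

```go
package query

// Statistics holds metadata about a query's execution (fields omitted
// here; think durations, concurrency, memory high-water marks).
type Statistics struct{}

// Statisticser is optionally implemented by ResultIterators that can
// report statistics once iteration has finished.
type Statisticser interface {
	Statistics() Statistics
}
```

On the HTTP side, Go's `net/http` trailer mechanism fits this naturally: declare the trailer key in the `Trailer` header before writing the body, then set the statistics value on the header map after the results have been streamed.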
These tests were part of a PR that was open when the rename was made.
The changes were not rebased before merging, so we did not discover the
failures until after the merge.
Previous versions only supported the first parameter to `time()`, which sets the window size. This version supports the second parameter, which shifts the window offset by a fixed amount from the epoch.
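Illustrative queries (measurement and field names invented) showing both forms:

```go
package main

import "fmt"

func main() {
	// Previously supported: window size only.
	fmt.Println(`SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)`)
	// Newly supported: a second argument that shifts every window
	// 5m away from the epoch-aligned boundaries.
	fmt.Println(`SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m, 5m)`)
}
```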
* feat(query/influxql): add regex support to transpiler
Also added a test case to querytest; see the example after this list.
* Add raw_with_condition test to transpiler unit tests
* Add unit tests for regex conditions on raw queries
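For example (names invented), a raw query with a regex condition like the following should now transpile, with the `=~` match becoming an equivalent filter in the generated spec:

```go
package main

import "fmt"

func main() {
	// A raw (non-aggregate) query whose WHERE clause uses a regex
	// match on a tag.
	fmt.Println(`SELECT value FROM db0..cpu WHERE host =~ /^server/`)
}
```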
The package contains all of the transpiler specs and allows them to be
put into different files instead of keeping all of the tests in the same
file. They are all Go code so they are type checked rather than being
loaded as JSON from disk.
Additionally, to make it easier for developers, the tests report the
exact file and line where the test was created. So rather than hunting
for the file a test is located in, you will get something nice like the
following:
```
--- FAIL: TestTranspiler/SELECT_count(value)_FROM_db0..cpu_WHERE_host_=_'server01' (0.00s)
    testing.go:51: aggregates_with_condition.go:16: unexpected error: unimplemented function: "count"
```
As can be seen, the test that failed can be found in the
`aggregates_with_condition.go` file at line 16 which is where the test
was created by the `AggregateTest` function and the relevant spec can be
found in that same file.
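One way to achieve this, sketched here with a hypothetical helper, is to capture the construction site with `runtime.Caller` when a test case is registered and prefix any failure with it:

```go
package spectests

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// sourceLocation reports the file and line two frames up the stack,
// i.e. where the test case was constructed, formatted like
// "aggregates_with_condition.go:16".
func sourceLocation() string {
	_, file, line, ok := runtime.Caller(2)
	if !ok {
		return "unknown"
	}
	return fmt.Sprintf("%s:%d", filepath.Base(file), line)
}
```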
The transpiler compilation tests no longer allow skip to be specified.
Instead, the transpiler must return an error message that starts with
`unimplemented`, and the reason is then used as the skip message. This
makes it easier to identify the failing tests in the transpiler. With
the previous method, a test could be marked as skip while the transpiler
returned the wrong error, because the test did not differentiate between
an unimplemented error message and an incorrect one.
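The convention can be enforced with something like this sketch (helper name and surroundings assumed):

```go
package spectests

import (
	"strings"
	"testing"
)

// checkErr applies the convention: an error beginning with
// "unimplemented" skips the test with the message as the reason; any
// other error fails it outright.
func checkErr(t *testing.T, err error) {
	if err == nil {
		return
	}
	if strings.HasPrefix(err.Error(), "unimplemented") {
		t.Skip(err.Error())
	}
	t.Fatalf("unexpected error: %s", err)
}
```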
The transpiler now supports basic windowing. Window offsets are not yet
supported at all.
For windowing, we use the window function to split the points, perform
the aggregate/selector operation, and then put them back into the same
window so they end up in the same table they were originally located in.
This is now reflected in the spec and the code.
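The generated pipeline looks roughly like the following; the Flux is shown as a Go string for readability, and the exact operations the transpiler emits may differ:

```go
package main

import "fmt"

func main() {
	// window() splits points into 10m windows, mean() aggregates each
	// window, and the final window(every: inf) merges rows back into
	// the original table.
	fmt.Println(`from(bucket: "db0/autogen")
	|> range(start: -1h)
	|> window(every: 10m)
	|> mean()
	|> window(every: inf)`)
}
```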
The compiler tests from github.com/influxdata/influxdb/query have been
moved over to the influxql transpiler in platform. The framework has
been updated to include a skip option so that all of the tests can be
present without all of them having to succeed at the moment. If a test
that is marked as skipped starts succeeding, that also causes an error,
which prevents us from doing work on the transpiler without unmarking
tests that no longer need to be skipped (so progress is always
recorded).
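A sketch of that guard (struct and field names assumed, not taken from the actual framework):

```go
package spectests

import "testing"

// testCase pairs a test body with a skip marker (illustrative).
type testCase struct {
	skip bool
	run  func() error
}

func runTest(t *testing.T, tc testCase) {
	err := tc.run()
	if tc.skip {
		if err == nil {
			t.Fatal("test marked as skipped but passed; remove the skip marker")
		}
		t.Skip(err.Error())
	}
	if err != nil {
		t.Fatalf("unexpected error: %s", err)
	}
}
```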
This extra flexibility makes it easier for the transpiler to generate a
specification, since the map step can focus on generating only the
columns related to fields. In particular, it makes it easier to
implement wildcards for tags, because the tags are passed along with
the partition key.
The spec says to use the `_time` column for the time in the output, but
we were mapping `r._time` to `time` and using the `time` variable. This
modifies the encoder to use the `_time` column and rename it to `time`
for the column name.
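In the encoder this is just a rename at output time; a minimal sketch (helper name hypothetical):

```go
package csv

// outputName maps internal column labels to the names used in the
// encoded output: `_time` is written as `time`, everything else keeps
// its label.
func outputName(label string) string {
	if label == "_time" {
		return "time"
	}
	return label
}
```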
This commit adds the option statement as a recognized node in
the semantic graph of a Flux query.
The Flux interpreter must visit option statement nodes, and the
interpreter now parses option statements.
Semantically, variable declarations should be statements.
Also update the option statement test to validate proper scope, and
only test the option statement.
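For reference, a Flux option statement looks like the following (the specific option is illustrative, shown here as a Go string):

```go
package main

import "fmt"

func main() {
	// The interpreter parses the option statement and records it as a
	// node in the semantic graph, binding `task` in the proper scope.
	fmt.Println(`option task = {name: "mean", every: 1h}
from(bucket: "db/rp") |> range(start: -task.every) |> mean()`)
}
```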
It was effectively a copy and paste of platform.ID, so change it to a
type alias. Once the known references to the query/id package are
updated to platform.ID, we'll delete the package.
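The alias amounts to this (import path assumed):

```go
package id

import "github.com/influxdata/platform"

// ID is now an alias of platform.ID rather than a copy. The query/id
// package will be deleted once known references are updated.
type ID = platform.ID
```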
The transpiler should use a bucket for the `from()` call instead of the
database parameter which will likely be deprecated. The bucket that it
will read data from is `db/rp` and, if the retention policy isn't
specified, `autogen` will be used as the default.
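A minimal sketch of the derivation (function name hypothetical):

```go
package influxql

// bucketName joins a database and retention policy into the bucket the
// transpiled from() call reads, defaulting the retention policy to
// "autogen" when it is not specified.
func bucketName(db, rp string) string {
	if rp == "" {
		rp = "autogen"
	}
	return db + "/" + rp
}
```

So `SELECT value FROM db0..cpu` reads from `from(bucket: "db0/autogen")`.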
There are a few changes to how the transpiler works. The first is that
the streams are now abstracted behind a `cursor` interface. The
interface keeps track of which AST nodes (like variables or function
calls) are represented by the data inside of the stream and how to
access the underlying data. This makes it easier to build a generic
interface for things like the join and map operations. It also makes it
easier to, in the future, reuse the map operation's code for a filter
so we can implement conditions.
This also follows the methods in the transpiler README and takes
advantage of the updates to the ifql language. This means it will group
the relevant cursors into a cursor group, perform any necessary joins,
and allow us to continue building on this as we flesh out more parts of
the transpiler and the language.
The cursor interface makes it so we no longer have to keep a symbol
table mapping the generated names to the locations because that is all
kept within the incoming cursor rather than as a separate data
structure.
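A hedged sketch of what such an interface can look like; the method set here is inferred from the description above, not copied from the source:

```go
package influxql

import (
	"github.com/influxdata/influxql"
	"github.com/influxdata/platform/query"
)

// cursor tracks a stream in the generated spec: which operation
// produces it, which AST expressions its data represents, and the
// column name each expression maps to.
type cursor interface {
	// ID is the operation that produces this stream.
	ID() query.OperationID
	// Keys lists the AST expressions represented in the stream.
	Keys() []influxql.Expr
	// Value reports the column name for expr, if present.
	Value(expr influxql.Expr) (string, bool)
}
```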
It also splits the transpiler into more files so it is easier to find
the relevant code for each stage of the transpiler.