The `buckets()` and `v1.databases()` functions have been updated to
support the remote counterparts that were added to flux. Like
`from()`, these functions now default to the current organization when
run against the server and use the remote versions when run from the
repl.
Algorithm W returns a semantic graph in which every function body is
always a block containing a return statement. This is in contrast to
the Go code, where the semantic graph could have either an expression
or a block as a function body. The push down code did not introspect
blocks, which meant that any function expression produced by
Algorithm W would never be pushed down.
This fixes it so the code now extracts the semantic expression from
inside a block when the block contains exactly one statement and that
statement is a return statement.
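As a sketch of that extraction (the node types here are hypothetical
stand-ins, not the actual libflux semantic types):

    package main

    import "fmt"

    // Hypothetical stand-ins for semantic graph nodes; the real types
    // live in the semantic package and differ in detail.
    type Expression interface{ isExpr() }

    type Statement interface{ isStmt() }

    type ReturnStatement struct{ Argument Expression }

    func (*ReturnStatement) isStmt() {}

    type Block struct{ Body []Statement }

    type Identifier struct{ Name string }

    func (Identifier) isExpr() {}

    // extractExpr pulls the expression out of a block when the block
    // has exactly one statement and that statement is a return
    // statement; otherwise the block cannot be reduced to a single
    // expression.
    func extractExpr(b *Block) (Expression, bool) {
    	if len(b.Body) != 1 {
    		return nil, false
    	}
    	ret, ok := b.Body[0].(*ReturnStatement)
    	if !ok {
    		return nil, false
    	}
    	return ret.Argument, true
    }

    func main() {
    	b := &Block{Body: []Statement{
    		&ReturnStatement{Argument: Identifier{Name: "r"}},
    	}}
    	if expr, ok := extractExpr(b); ok {
    		fmt.Printf("extracted: %#v\n", expr)
    	}
    }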
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
This removes the storage dependency on libflux by moving the
interfaces it implements into the `query` package, so storage can
reference the interface definitions rather than the package that
contains the implementation and the registration with the runtime.
This breaks the dependency where a storage package depends on a flux
runtime package.
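A minimal sketch of the resulting dependency direction (the package
layout and names below are illustrative only):

    package main

    import (
    	"context"
    	"fmt"
    )

    // Conceptually in the query package: interface definitions only,
    // so implementers never import a flux runtime package. These are
    // illustrative names, not the interfaces that actually moved.
    type TableIterator interface {
    	Do(f func(row string) error) error
    }

    type Reader interface {
    	Read(ctx context.Context) (TableIterator, error)
    }

    // Conceptually in the storage package: implements the Reader
    // interface while depending only on the definitions above.
    type engineReader struct{}

    func (engineReader) Read(ctx context.Context) (TableIterator, error) {
    	return rows{"cpu,host=a value=1"}, nil
    }

    type rows []string

    func (r rows) Do(f func(string) error) error {
    	for _, row := range r {
    		if err := f(row); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	var r Reader = engineReader{}
    	it, _ := r.Read(context.Background())
    	it.Do(func(row string) error { fmt.Println(row); return nil })
    }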
This updates the repl to support the new influxdb source and to use
it by default. The repl automatically sets some default variables for
the influxdb source to make it easier to use from the cli. In
particular, it sets the default organization, token, and host: the
organization is set to the one specified in the repl command, the
token is filled in with the user's installed one, and the host
defaults to localhost unless one was specified on the cli.
In addition, this replaces the http client with one that sets
insecure skip verify when the `--skip-verify` flag is used.
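The skip-verify swap amounts to something like the following (the
helper name is made up; the flag is the one from the change):

    package main

    import (
    	"crypto/tls"
    	"net/http"
    )

    // newHTTPClient sketches the swap: when --skip-verify is set, the
    // client's transport disables TLS certificate verification.
    func newHTTPClient(skipVerify bool) *http.Client {
    	if !skipVerify {
    		return http.DefaultClient
    	}
    	return &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    }

    func main() {
    	_ = newHTTPClient(true)
    }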
The storage engine isn't capable of sending back empty tables when a
series is empty. Because of this, we disable the push down and let
flux do the filtering whenever there is a filter that is configured
to keep empty tables.
This is a step towards providing a shared http client that manages
connection pooling and timeouts, and that reduces GC pressure by not
creating and collecting a client on each request. Bring on the red!
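A sketch of the kind of shared client this points at (all of the
tuning values below are illustrative, not taken from the change):

    package main

    import (
    	"net"
    	"net/http"
    	"time"
    )

    // One package-level client with pooled connections and sane
    // timeouts, reused across requests instead of allocating a new
    // client (and its transport) per request.
    var sharedClient = &http.Client{
    	Timeout: 10 * time.Second,
    	Transport: &http.Transport{
    		DialContext: (&net.Dialer{
    			Timeout:   5 * time.Second,
    			KeepAlive: 30 * time.Second,
    		}).DialContext,
    		MaxIdleConns:        100,
    		MaxIdleConnsPerHost: 10,
    		IdleConnTimeout:     90 * time.Second,
    	},
    }

    func main() {
    	resp, err := sharedClient.Get("http://localhost:8086/health")
    	if err == nil {
    		resp.Body.Close()
    	}
    }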
When `exists` was used in conjunction with any other pushed down
expression, the `exists` was not rewritten properly because the rewrite
did not descend into logical expressions.
This is now fixed so those expressions will be rewritten correctly. This
affected the following form:
filter(fn: (r) => r._measurement == "cpu" and exists r.host)
It did not affect the following:
filter(fn: (r) => r._measurement == "cpu")
|> filter(fn: (r) => exists r.host)
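The shape of the fix can be sketched as a recursive descent through
logical expressions (the node types below are hypothetical, not the
actual semantic graph types):

    package main

    import "fmt"

    type Expr interface{ fmt.Stringer }

    type Exists struct{ Tag string }

    func (e Exists) String() string { return "exists r." + e.Tag }

    type Compare struct{ Op, Key, Value string }

    func (c Compare) String() string {
    	return fmt.Sprintf("r.%s %s %q", c.Key, c.Op, c.Value)
    }

    type Logical struct {
    	Op          string // "and" or "or"
    	Left, Right Expr
    }

    func (l Logical) String() string {
    	return fmt.Sprintf("(%v %s %v)", l.Left, l.Op, l.Right)
    }

    // rewrite now recurses through logical expressions, so an
    // `exists` nested under and/or gets rewritten too; previously the
    // rewrite stopped at the top level and left it untouched.
    func rewrite(e Expr) Expr {
    	switch e := e.(type) {
    	case Logical:
    		return Logical{Op: e.Op, Left: rewrite(e.Left), Right: rewrite(e.Right)}
    	case Exists:
    		// storage represents tag existence as `tag != ""`
    		return Compare{Op: "!=", Key: e.Tag, Value: ""}
    	default:
    		return e
    	}
    }

    func main() {
    	pred := Logical{
    		Op:    "and",
    		Left:  Compare{Op: "==", Key: "_measurement", Value: "cpu"},
    		Right: Exists{Tag: "host"},
    	}
    	fmt.Println(rewrite(pred))
    	// (r._measurement == "cpu" and r.host != "")
    }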
Writes directly to a PointsWriter require that the tag key/value
pairs be sorted in lexicographically ascending order. This commit
uses new API from the `models` package to ensure this invariant is
maintained.
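As a generic illustration of the invariant, using plain standard
library sorting as a stand-in for the actual `models` helper:

    package main

    import (
    	"fmt"
    	"sort"
    )

    type Tag struct{ Key, Value string }

    // sortTags enforces the PointsWriter invariant: tag key/value
    // pairs must be in lexicographically ascending key order.
    func sortTags(tags []Tag) {
    	sort.Slice(tags, func(i, j int) bool { return tags[i].Key < tags[j].Key })
    }

    func main() {
    	tags := []Tag{{"region", "us-west"}, {"host", "a"}}
    	sortTags(tags)
    	fmt.Println(tags) // [{host a} {region us-west}]
    }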
The `v1.databases()` call did not correctly filter buckets based on
auth. Fortunately, it did not grant any improper permissions, such as
allowing a person to see buckets they had no read access to.
The error was instead that if a user did not have read access to one
of the buckets that was returned, the entire command would fail
rather than filtering out the bucket the user lacked permission for.
This changes it so that if the user gets an unauthorized error when
accessing a bucket, that bucket is filtered from the list instead of
the command failing. It also changes the error so it is marked as
`ENotFound` instead of as an internal error.
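The fixed listing behaves roughly like this (types and names are
illustrative, not the actual implementation):

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errUnauthorized = errors.New("unauthorized")

    type bucket struct{ Name string }

    // listAuthorized sketches the fix: an unauthorized error for a
    // single bucket filters that bucket out instead of failing the
    // whole listing.
    func listAuthorized(buckets []bucket, canRead func(bucket) error) ([]bucket, error) {
    	var out []bucket
    	for _, b := range buckets {
    		if err := canRead(b); err != nil {
    			if errors.Is(err, errUnauthorized) {
    				continue // skip instead of failing the whole call
    			}
    			return nil, err
    		}
    		out = append(out, b)
    	}
    	return out, nil
    }

    func main() {
    	bs := []bucket{{"telegraf"}, {"secret"}}
    	out, _ := listAuthorized(bs, func(b bucket) error {
    		if b.Name == "secret" {
    			return errUnauthorized
    		}
    		return nil
    	})
    	fmt.Println(out) // [{telegraf}]
    }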
This change makes it so that if an org or orgID is missing on a call
to the `to` function, the orgID is retrieved from the request
context. This is consistent with how `from` works.
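A sketch of the fallback (the context key and helper below are made
up for illustration):

    package main

    import (
    	"context"
    	"fmt"
    )

    type orgKey struct{}

    // resolveOrgID falls back to the orgID carried on the request
    // context when neither org nor orgID was given on the call, which
    // is what this change adds and what `from` already does.
    func resolveOrgID(ctx context.Context, org, orgID string) (string, error) {
    	if orgID != "" {
    		return orgID, nil
    	}
    	if org != "" {
    		// resolving an org name to its ID is elided in this sketch
    		return "", fmt.Errorf("name lookup not implemented here")
    	}
    	if id, ok := ctx.Value(orgKey{}).(string); ok {
    		return id, nil
    	}
    	return "", fmt.Errorf("no organization specified")
    }

    func main() {
    	ctx := context.WithValue(context.Background(), orgKey{}, "deadbeefdeadbeef")
    	id, _ := resolveOrgID(ctx, "", "")
    	fmt.Println(id)
    }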
The `exists` operator now gets pushed down to storage correctly. If
`exists` is used on a tag, it is rewritten to `tag != ""`, which is
how storage defines that a tag exists. If `not exists` is used, it is
rewritten to `tag == ""`, which is how you query storage for series
where a tag doesn't exist.
The `tag == ""` and `tag != ""` comparisons themselves are treated
differently. A `tag == ""` predicate can never evaluate to true in
the storage layer. Ideally, we would rewrite it to return nothing and
not bother querying storage at all; instead, we simply do not rewrite
this predicate, because it cannot be rewritten to make sense with
storage. `tag != ""` is the only comparison that can be passed
through as-is, because it returns the same values as `exists tag`: it
is true for every non-null value.
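The mapping can be summarized in code (illustrative, not the
planner's actual form):

    package main

    import "fmt"

    // rewriteExists maps the flux operator onto a storage predicate:
    //
    //   exists r.tag     -> r.tag != ""  (true for every non-null value)
    //   not exists r.tag -> r.tag == ""  (how storage is asked for a
    //                                     missing tag)
    func rewriteExists(tag string, negated bool) string {
    	if negated {
    		return fmt.Sprintf("r.%s == %q", tag, "")
    	}
    	return fmt.Sprintf("r.%s != %q", tag, "")
    }

    // canPassThrough reports whether a user-written comparison
    // against the empty string may be sent to storage unchanged:
    // only != can, since it matches `exists`; == can never be
    // satisfied by storage.
    func canPassThrough(op string) bool { return op == "!=" }

    func main() {
    	fmt.Println(rewriteExists("host", false)) // r.host != ""
    	fmt.Println(rewriteExists("host", true))  // r.host == ""
    	fmt.Println(canPassThrough("=="))         // false
    }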
The storage table reader will now work correctly when there are multiple
outputs. The table interface now implements the new table and column
reader interfaces and works properly with `execute.CopyTable`. The
source uses `execute.CopyTable` to buffer the table in memory when there
are multiple output transformations.
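In rough outline, the source's dispatch looks like this (all types
here are toy stand-ins for flux.Table, flux.BufferedTable, and
execute.CopyTable):

    package main

    import "fmt"

    type Table interface{ Rows() []string }

    type streamTable []string // consumable only once in the real thing

    func (t streamTable) Rows() []string { return t }

    type bufferedTable []string // safe to hand to several consumers

    func (t bufferedTable) Rows() []string { return t }

    func copyTable(t Table) bufferedTable {
    	return append(bufferedTable(nil), t.Rows()...)
    }

    // dispatch streams directly when there is a single output
    // transformation and buffers the table in memory when there are
    // several, so each can consume it independently.
    func dispatch(t Table, outputs []func(Table)) {
    	if len(outputs) == 1 {
    		outputs[0](t)
    		return
    	}
    	buf := copyTable(t)
    	for _, out := range outputs {
    		out(buf)
    	}
    }

    func main() {
    	t := streamTable{"cpu,host=a value=1"}
    	dispatch(t, []func(Table){
    		func(t Table) { fmt.Println("out1:", t.Rows()) },
    		func(t Table) { fmt.Println("out2:", t.Rows()) },
    	})
    }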
The RPC call should translate `_measurement` and `_field` to their
proper shortened byte strings when requesting the tag values.
This also fixes the planner rewrites to return the root node even
when no rewrite happened, as the planner requires this.
If a pattern is seen that matches the `v1.tagValues(...)` call, then it
will be replaced with a direct RPC call to read the tag values for the
selected tag key which should be better optimized than reading from the
storage engine tsm1 files.
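The translation itself is a small mapping; the reserved byte values
below are believed to match the `models` package constants (`\x00`
for measurement, `\xff` for field) and should be treated as an
assumption:

    package main

    import "fmt"

    // Storage keeps the measurement and field keys under reserved
    // single-byte tag keys; see the hedge above about the values.
    var shortened = map[string]string{
    	"_measurement": "\x00",
    	"_field":       "\xff",
    }

    // translateKey converts the user-facing column name to the
    // storage tag key before issuing the tag values RPC.
    func translateKey(key string) string {
    	if s, ok := shortened[key]; ok {
    		return s
    	}
    	return key
    }

    func main() {
    	fmt.Printf("%q\n", translateKey("_measurement")) // "\x00"
    	fmt.Printf("%q\n", translateKey("host"))         // "host"
    }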
If a pattern is seen that matches reading the tag keys, it will be
replaced with a direct RPC call to read the tag keys which should be
better optimized than reading from the storage engine tsm1 files.