This change allows the InfluxQL language type to be used with the
/v2/query API endpoint.
This change also introduces a way to give the transpiler an explicit
bucket name instead of using the DBRPMapping service.
Requests to the endpoint will know the bucket name directly but will
likely not have run the migration step to populate the DBRP mappings.
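For illustration, a request to the endpoint might then look something
like the following sketch. The exact field names, notably `type` and
`bucket`, are assumptions here and should be checked against the API:

    POST /api/v2/query?org=my-org
    Content-Type: application/json

    {
        "type": "influxql",
        "bucket": "telegraf/autogen",
        "query": "SELECT mean(usage_user) FROM cpu WHERE time > now() - 1h"
    }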
The storage engine isn't capable of sending back empty tables when a
series is empty. Because of this, we disable the push down and let flux
do the filtering whenever a filter is present and it specifies that the
empty tables should be kept.
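For example, the push down is now skipped for a filter like the
following (an illustrative query), since it asks to keep empty tables:

filter(fn: (r) => r._value > 0.5, onEmpty: "keep")

A filter that does not specify onEmpty: "keep" can still be pushed down
to the storage engine.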
Update CONTRIBUTING.md
Added Security Vulnerability Reporting
Updated the text to include the simple changes. This branch still needs
updating to reflect the 2.0 API, etc.
fix(contribution): updated the text for V2.
fixes #13370
Update the 'Getting the source' section
Remove the 'Cloning a fork' section
* If they have forked the repo, it should be clear how to clone the fork.
refactor
last refactor
Use # for section headings
Minor grammar edit.
Update CONTRIBUTING.md
Fix triple backticks
Backticks weren't being picked up by GitHub's Markdown renderer properly.
Fixed formatting
Made tabs and spaces consistent (went for tabs, since that's what Go
uses). Made CLI commands consistent by including $ at the start of the
line. Fixed the copy a little.
Softened the language
Fixes: https://github.com/influxdata/influxdb/pull/13370#discussion_r359393716
Softened the language a bit.
Update CONTRIBUTING.md
Co-Authored-By: Stuart Carnie <stuart.carnie@gmail.com>
chore: improve CONTRIBUTING.md
This is a step towards providing a shared http client that manages
connection pooling and timeouts, and that reduces GC pressure by not
creating and collecting a client on each request. Bring on the red!
* chore: Remove several instances of WithLogger
* chore: unexport Logger fields
* chore: unexport some more Logger fields
* chore: go fmt
chore: fix test
chore: s/logger/log
chore: fix test
chore: revert http.Handler.Handler constructor initialization
* refactor: integrate review feedback, fix all test nop loggers
* refactor: capitalize all log messages
* refactor: rename two logger to log
Signed-off-by: Lorenzo Affetti <lorenzo.affetti@gmail.com>
Signed-off-by: Julius Volz <julius.volz@gmail.com>
move to internal
update flux to v0.50
Revert "move to internal"
This reverts commit bcd4caffbd44135f1dbeac4163cb2a22a751f45a.
promtests/internal --> internal/promtests
When `exists` was used in conjunction with any other pushed down
expression, the `exists` was not rewritten properly because the rewrite
did not descend into logical expressions.
This is now fixed so those expressions will be rewritten correctly. This
affected the following form:
filter(fn: (r) => r._measurement == "cpu" and exists r.host)
It did not affect the following:
filter(fn: (r) => r._measurement == "cpu")
|> filter(fn: (r) => exists r.host)
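A sketch of the shape of the fix, using hypothetical expression types
for illustration (the real code operates on the flux semantic graph):
the rewrite has to recurse into both sides of a logical expression
instead of only inspecting the top-level node.

    type Expr interface{}

    type LogicalExpr struct{ Left, Right Expr } // e.g. `and` / `or`
    type ExistsExpr struct{ Key string }        // `exists r.<Key>`

    // rewrite descends into logical expressions so that an `exists` on
    // either side is rewritten too; previously only the top-level node
    // was inspected.
    func rewrite(n Expr) Expr {
        switch e := n.(type) {
        case *LogicalExpr:
            e.Left = rewrite(e.Left)
            e.Right = rewrite(e.Right)
            return e
        case *ExistsExpr:
            return rewriteExists(e)
        default:
            return n
        }
    }

    // rewriteExists turns a single exists node into a form the storage
    // engine can evaluate (elided here).
    func rewriteExists(e *ExistsExpr) Expr { return e }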
The controller now supports setting an initial memory limit and setting
a maximum amount of memory that the controller may use separately from
the memory quota per query and the concurrency quota.
This allows the controller to increase the concurrency quota to a larger
number while setting the maximum amount of memory to a lower amount than
would be required for all queries to use 100% of their allowable memory.
Functionally, this means that a query has a soft limit (an initial
memory byte quota that it is guaranteed), access to a shared pool in
case it uses more, and a hard limit that no query may exceed, which
prevents runaway queries from taking over the entire pool.
This change is completely backwards compatible with older
configurations, as the new options default to values that mimic the old
behavior, where a query is allocated the full amount of its memory
quota and the maximum amount of memory is the concurrency quota
multiplied by that per-query memory quota.
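To make this concrete, here is a sketch of how the limits compose. The
field names follow the description above but are assumptions; verify
them against the controller's actual Config:

    cfg := control.Config{
        ConcurrencyQuota:                100,       // up to 100 queries at once
        InitialMemoryBytesQuotaPerQuery: 1 << 20,   // each query is guaranteed 1 MiB (soft limit)
        MemoryBytesQuotaPerQuery:        64 << 20,  // no single query may exceed 64 MiB (hard limit)
        MaxMemoryBytes:                  512 << 20, // the controller as a whole stays under 512 MiB
    }

Under the old behavior, 100 concurrent queries with a 64 MiB quota each
would have required reserving 6,400 MiB; with the shared pool, the same
concurrency fits in 512 MiB as long as most queries stay near their
initial quota.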
In addition to the above, this also fixes a bug in the controller that
allowed it to run more executing queries than its concurrency quota.
This happened when the executor had finished sending the results but
the query had not yet been read and/or serialized: the executor would
be freed up and would take the next query even though the previous
query hadn't yet been finalized with `Done()`.
The QueryServiceProxyBridge did not check for errors properly because
it returned any error encountered while running the query as a read
error on the `io.Reader`. This made it impossible for the csv decoder
to tell whether the error came from the query or from reading. The csv
decoder needs to tell the difference because an error reading from the
`io.Reader` must be returned as a decoder error, while an error from
the query must be returned as-is.
Instead of adapting the csv decoder to do that, we lazily initialize
the result iterator when `More()` is called and call `Peek()` on the
reader. If no bytes can be read, we assume there was an error while
executing the query and return it as such. If we are able to read at
least one byte, we decode the stream through the csv decoder.
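A minimal sketch of that lazy initialization, with illustrative names
rather than the actual bridge code:

    import (
        "bufio"
        "io"
    )

    type ResultIterator interface {
        More() bool
        Err() error
    }

    type lazyResultIterator struct {
        reader   *bufio.Reader                           // wraps the query response body
        queryErr func() error                            // error from running the query, if any
        decode   func(io.Reader) (ResultIterator, error) // csv decoder entry point

        inner ResultIterator
        err   error
        init  bool
    }

    func (it *lazyResultIterator) More() bool {
        if !it.init {
            it.init = true
            // Peek before handing the stream to the csv decoder: if not
            // even one byte can be read, assume the query itself failed
            // and return its error as-is, not as a decoder error.
            if _, err := it.reader.Peek(1); err != nil {
                // Prefer the query's own error; fall back to the read
                // error unless the stream was simply empty.
                if it.err = it.queryErr(); it.err == nil && err != io.EOF {
                    it.err = err
                }
                return false
            }
            if it.inner, it.err = it.decode(it.reader); it.err != nil {
                return false
            }
        }
        return it.inner != nil && it.inner.More()
    }

    func (it *lazyResultIterator) Err() error { return it.err }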