Previously, we implicitly appended a newline to the response and added
one to the number of bytes transmitted to account for that extra byte.
The newline was removed at some point, but the metric was never
updated, so it kept recording an incorrect value.
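A minimal sketch of the corrected accounting, with hypothetical names
(a `responseWriter` wrapping `http.ResponseWriter` and a
`bytesTransmitted` counter), would record exactly what was written:

    // Write records exactly the number of bytes written, with no
    // adjustment for a newline that is no longer emitted.
    func (w *responseWriter) Write(data []byte) (int, error) {
        n, err := w.ResponseWriter.Write(data)
        atomic.AddInt64(&w.bytesTransmitted, int64(n))
        return n, err
    }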
The query killing functionality depends on the ResponseWriter exposing a
CloseNotify method. Since we wrap the http.ResponseWriter, the wrapping
struct did not expose that method, so the HTTP handler skipped calling
it and could never detect a closed connection.
Instead of duplicating `Flush()` and `CloseNotify()` for every response
formatter, we unify all of that in a single response writer struct and
implement formatters separately.
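As a sketch of the unified approach (names are hypothetical, not
necessarily the actual implementation), the single wrapper forwards
both methods to the underlying writer while formatters handle only
encoding:

    // responseWriter wraps http.ResponseWriter exactly once, so
    // Flush/CloseNotify are implemented in one place.
    type responseWriter struct {
        http.ResponseWriter
        formatter ResponseFormatter // hypothetical encoding interface
    }

    // CloseNotify forwards to the wrapped writer so the HTTP handler
    // can detect a closed connection and kill the query.
    func (w *responseWriter) CloseNotify() <-chan bool {
        if notifier, ok := w.ResponseWriter.(http.CloseNotifier); ok {
            return notifier.CloseNotify()
        }
        // Wrapped writer cannot notify; return a channel that never fires.
        return make(chan bool)
    }

    // Flush forwards to the wrapped writer when it supports flushing.
    func (w *responseWriter) Flush() {
        if flusher, ok := w.ResponseWriter.(http.Flusher); ok {
            flusher.Flush()
        }
    }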
This also fixes a bug where, because of buffering, the header
information for a query was not returned until some other data was sent
along with it, and another bug where the gzipResponseWriter would not
flush the actual underlying ResponseWriter.
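The gzip fix amounts to flushing both layers. A sketch, assuming a
gzipResponseWriter that holds a *gzip.Writer and embeds the underlying
http.ResponseWriter:

    // Flush flushes the gzip stream and then the underlying
    // ResponseWriter so buffered bytes actually reach the client.
    func (w *gzipResponseWriter) Flush() {
        w.Writer.Flush() // w.Writer is a *gzip.Writer
        if flusher, ok := w.ResponseWriter.(http.Flusher); ok {
            flusher.Flush()
        }
    }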
This allows a long-running query made up of many otherwise
uninterruptible statements, such as one that creates or drops many
databases, to still be interrupted between statements.
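A sketch of the idea, with hypothetical names: check the interrupt
channel between statements, since each individual statement cannot be
stopped once started.

    for _, stmt := range query.Statements {
        select {
        case <-closing:
            // The query was killed; stop before the next statement.
            return ErrQueryInterrupted
        default:
        }
        if err := e.executeStatement(stmt); err != nil {
            return err
        }
    }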
The query can be uploaded from a file by sending it as
`multipart/form-data` with the form field named `q`. An example of
using curl to execute an async query would be:
    curl -F "q=@database.iql" -F "async=true" http://localhost:8086/query
It returns a 204 No Content as long as the query is accepted (errors
detected immediately are returned, but errors from the individual
statements within the query are not). The only way to kill the query
afterwards is through the task manager.
CSV has no standard format and doesn't offer a way to separate
different sheets from each other, so we separate sheets with a blank
line; that way they can be imported into something like Excel or
LibreOffice more easily.
The number of columns for each sheet is inferred from the first row
returned by each statement, since every row within a statement should
have the same number of columns.
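A sketch of the separation logic (a hypothetical formatter, not the
exact code), emitting a blank line between statement results:

    // writeCSV writes each statement's rows as its own "sheet",
    // separated by a blank line so spreadsheets can split them.
    func writeCSV(out io.Writer, statements [][][]string) error {
        for i, rows := range statements {
            if i > 0 {
                if _, err := io.WriteString(out, "\n"); err != nil {
                    return err
                }
            }
            w := csv.NewWriter(out)
            if err := w.WriteAll(rows); err != nil {
                return err
            }
        }
        return nil
    }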
It is now possible to use mixed duration units like `1h30m`. The
duration units can be in any order as long as they are adjacent to
each other.
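For example, a query along these lines should now parse (illustrative
only):

    SELECT mean(value) FROM cpu WHERE time > now() - 1h30m GROUP BY time(1h30m)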
This required a change to the scanner: a token such as `10x` is now
scanned as a duration literal, but then fails to parse as an invalid
duration. This should not be a breaking change, since there is no
situation in which `10m10` was a valid order of tokens for the parser.
Fixes #3634.
The query executor only recorded the number of active queries and the
query duration, so it was impossible to determine how many queries were
actually executed during a given timeframe: quick queries would already
be gone by the time statistics were gathered.
This adds two new statistics to track when queries start and when
queries finish. Neither counter is ever decremented, so the number of
executed queries can be derived using `derivative()` and
`difference()`.
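A sketch with hypothetical statistic names: the start and finish
counters only ever increase, while the active gauge alone goes back
down.

    func (e *QueryExecutor) executeQuery(q *Query) error {
        atomic.AddInt64(&e.stats.ActiveQueries, 1)
        atomic.AddInt64(&e.stats.ExecutedQueries, 1) // monotonic: query started
        defer func() {
            atomic.AddInt64(&e.stats.ActiveQueries, -1)
            atomic.AddInt64(&e.stats.FinishedQueries, 1) // monotonic: query finished
        }()
        // ... run the query ...
        return nil
    }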
Instead of having the parser set the defaults, the command now sets
them, so the constants defined for those defaults are actually used.
This also lets us identify which values the user explicitly provided
and which ones we filled in with defaults.
This allows the meta client to make smarter decisions when determining
whether the user's request conflicts with what currently exists or is
compatible with it. If you just say `CREATE DATABASE mydb WITH NAME
myrp`, you don't really care what the duration of the retention policy
is and just want the default. Now we can use that information to
determine whether an existing retention policy would actually conflict
with what the user requested, rather than returning an error whenever a
default value changes, since the meta client command can communicate
intent more easily.
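One way to express this intent, sketched with hypothetical names:
pointer fields distinguish "explicitly requested" from "use the
default".

    // A nil field means the user did not specify a value, so whatever
    // the existing policy has is acceptable; non-nil must match.
    type RetentionPolicySpec struct {
        Name     string
        Duration *time.Duration // nil => accept the current default
        ReplicaN *int           // nil => accept the current default
    }

A conflict is then reported only when a non-nil field disagrees with
the existing retention policy.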
This commit fixes the `MaxSelectSeriesN` limit, which was broken by
the implementation of lazy iterators. The setting previously limited
the total number of series, but the new implementation limits the
number of series being processed concurrently.
This commit limits queries to only process one shard at a time.
However, within a shard, multiple series can still be processed in
parallel. Shard iterators are lazily instantiated during query
execution to limit the amount of memory a given query uses.
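A sketch of the resulting flow, with hypothetical helpers (`closeAll`,
`processInParallel`): iterators are built lazily per shard, checked
against the limit, and drained before the next shard is touched.

    for _, sh := range shards {
        itrs, err := sh.CreateIterators(opt) // lazily built per shard
        if err != nil {
            return err
        }
        if opt.MaxSeriesN > 0 && len(itrs) > opt.MaxSeriesN {
            closeAll(itrs)
            return fmt.Errorf("max-select-series limit exceeded: %d series", len(itrs))
        }
        // Series within this shard run in parallel; the next shard's
        // iterators are not created until these are drained.
        if err := processInParallel(itrs); err != nil {
            return err
        }
    }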