Allows configuration of the shard group duration at database creation time, and at
retention policy create/alter time.
Query examples:
```
CREATE DATABASE testdb WITH DURATION 90d SHARD DURATION 30m NAME rp_testdb
CREATE RETENTION POLICY rp_testdb2 ON testdb DURATION INF REPLICATION 1 SHARD DURATION 30m
ALTER RETENTION POLICY rp_testdb2 ON testdb SHARD DURATION 1h
```
This can be useful for long-duration retention policies with lots of data, where
splitting the data into smaller shards can relieve memory pressure.
This commit adds a configurable limit to the number of series that
can be returned from a `SELECT` statement. The limit is checked
immediately after planning and is determined from the iterator
stats.
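Roughly, the check looks like this. This is a minimal sketch; the type shapes and the `checkSeriesLimit`/`maxSelectSeriesN` names are assumptions for illustration, not the actual influxdb code:

```
package main

import "fmt"

// Minimal stand-ins for the influxql iterator types (assumed shapes).
type IteratorStats struct {
	SeriesN int // number of series the iterator will process
	PointN  int // number of points the iterator will process
}

type Iterator interface {
	Stats() IteratorStats
	Close() error
}

// checkSeriesLimit enforces a configurable cap on the number of series a
// SELECT may return, using the stats reported immediately after planning.
func checkSeriesLimit(itrs []Iterator, maxSelectSeriesN int) error {
	var seriesN int
	for _, itr := range itrs {
		seriesN += itr.Stats().SeriesN
	}
	if maxSelectSeriesN > 0 && seriesN > maxSelectSeriesN {
		for _, itr := range itrs {
			itr.Close()
		}
		return fmt.Errorf("max-select-series limit exceeded: %d series", seriesN)
	}
	return nil
}
```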
Fixes #6076
This commit adds a buffer for stats to be updated without
requiring a mutex lock/unlock on every point. The tradeoff
is that stats are not exactly precise. This works for our
use case because stats are only periodically checked.
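A minimal sketch of the idea, with assumed names (`statsBuffer`, a flush threshold of 100) rather than the actual implementation:

```
package main

import "sync"

// sharedStats holds the aggregate counters that other goroutines read,
// guarded by a mutex.
type sharedStats struct {
	mu     sync.Mutex
	pointN int64
}

// statsBuffer accumulates updates locally and only takes the lock when it
// flushes, so the shared counters may lag slightly behind the true values.
type statsBuffer struct {
	shared *sharedStats
	pointN int64
}

func (b *statsBuffer) addPoint() {
	b.pointN++
	if b.pointN >= 100 { // flush threshold: stats can be stale by up to 100 points
		b.flush()
	}
}

func (b *statsBuffer) flush() {
	b.shared.mu.Lock()
	b.shared.pointN += b.pointN
	b.shared.mu.Unlock()
	b.pointN = 0
}
```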
Multiple semicolons in a row are now allowed, and a semicolon is required to
separate commands. The query specification says the separator is required, but
a boolean error in `ParseQuery` made a single semicolon optional and multiple
semicolons an error.
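For example (assuming the `influxql` package import path of this era; the expected outputs are hedged guesses at the new behavior):

```
package main

import (
	"fmt"

	"github.com/influxdata/influxdb/influxql"
)

func main() {
	// Multiple semicolons in a row are now allowed.
	q, err := influxql.ParseQuery(`SELECT value FROM cpu;; SELECT value FROM mem`)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(len(q.Statements)) // expected: 2

	// A separating semicolon between statements is required, so this errors.
	_, err = influxql.ParseQuery(`SELECT value FROM cpu SELECT value FROM mem`)
	fmt.Println(err != nil) // expected: true
}
```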
Fixes #5728.
If an OR was used, merging filters between different expressions would not
work correctly. If one side had a set of series ids with a condition and the
other side had no series ids associated with the expression, all of the series
from the side with a condition would have the condition ignored. Instead of
defaulting a non-existent series filter to true, it should default to false,
and the evaluation of the side that does exist should determine whether the
series id is included. The AND condition already used false correctly, so it
did not have to be changed.
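In simplified boolean form, the corrected merge behaves like this. This is illustrative code with assumed names, not the actual influxdb filter merging, which operates on expressions rather than booleans:

```
package main

// mergeSeriesFilters merges per-series filters from two sides of a binary
// condition. A series absent from one side gets a default filter of false,
// so for OR the side that actually has the series decides inclusion, and
// for AND the series is excluded as before.
func mergeSeriesFilters(or bool, lhs, rhs map[uint64]bool) map[uint64]bool {
	ids := make(map[uint64]struct{})
	for id := range lhs {
		ids[id] = struct{}{}
	}
	for id := range rhs {
		ids[id] = struct{}{}
	}

	out := make(map[uint64]bool, len(ids))
	for id := range ids {
		lf := lhs[id] // false when the series is missing from this side
		rf := rhs[id]
		if or {
			out[id] = lf || rf
		} else {
			out[id] = lf && rf
		}
	}
	return out
}
```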
If a tag did not exist and `!=` or `!~` were used, the comparison would return
false even though neither a field nor a tag equaled those values. This has
been fixed to return the correct series ids and the correct condition.
Also fixed a panic that would occur when a tag made a field access
unnecessary. The filter using the field access was still created and used even
though it was unnecessary, resulting in an attempted access to an
uninitialized map.
Fixes #5152 and a bunch of other miscellaneous issues.
After reading the initial buffer, ORDER BY desc would read the next block into
the buffer but only read its first element. This happened because code copied
from the ascending cursor wasn't correctly modified to set the position to the
last element in the buffer.
The buffer size has also been lowered from 1000 to 10 to match the ascending
cursor, for better performance with limit queries.
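A sketch of the fix, with illustrative types rather than the actual tsm1 cursor:

```
package main

// descCursor walks a buffered block of values in reverse (descending) order.
type descCursor struct {
	buf []int64
	pos int
}

// refill loads the next block and positions the cursor at the LAST element.
// The original bug left pos at the first element after copying the ascending
// logic, so only one value from each new block was ever read.
func (c *descCursor) refill(block []int64) {
	c.buf = block
	c.pos = len(c.buf) - 1
}

func (c *descCursor) next() (int64, bool) {
	if len(c.buf) == 0 || c.pos < 0 {
		return 0, false
	}
	v := c.buf[c.pos]
	c.pos--
	return v, true
}
```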
Fixes #6055.
This commit adds an `IteratorStats` that holds aggregate
iterator processing information. A method is also added to
`Iterator` to return the stats:
```
Stats() influxql.IteratorStats
```
The remote iterators will also emit their stats in the point
stream upon first connection, on a given interval, and then
finally once the last point has been sent.
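A sketch of the kind of aggregate information carried and how stats from multiple inputs combine; the exact fields are assumptions here, not the actual definition:

```
package main

// IteratorStats aggregates processing information across an iterator tree.
type IteratorStats struct {
	SeriesN int // number of series processed
	PointN  int // number of points processed
}

// Add combines stats from an input iterator, e.g. when a remote iterator's
// stats arrive in the point stream and are merged into the local totals.
func (s *IteratorStats) Add(other IteratorStats) {
	s.SeriesN += other.SeriesN
	s.PointN += other.PointN
}
```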
It previously used the content type `application/x-www-form-urlencoded`. When
the HTTP write handler was changed to check an HTTP form parameter in d96eef4,
any POST request with that content type would have its body consumed by form
parsing, leaving an empty body for the write.
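This follows from how Go's `net/http` handles form values. A hedged demonstration of the effect, not the influxdb handler itself:

```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// For a POST with Content-Type application/x-www-form-urlencoded, calling
// r.FormValue parses the request body as form data and consumes it, so a
// subsequent read of r.Body returns nothing.
func handler(w http.ResponseWriter, r *http.Request) {
	_ = r.FormValue("db") // consumes the body for urlencoded POSTs
	body, _ := ioutil.ReadAll(r.Body)
	fmt.Fprintf(w, "remaining body length: %d\n", len(body)) // prints 0
}
```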
The TSM writer uses a bufio.Writer that needs to be flushed before
it's closed. If the flush fails for some reason, the error is not
handled by the defer and the compactor continues on as if all is good.
This can create files with truncated indexes or zero-length TSM files.
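The fix in miniature (illustrative, not the actual TSM writer):

```
package main

import (
	"bufio"
	"os"
)

// writeFile returns the Flush error instead of letting a deferred Close
// swallow it; a failed flush would otherwise leave a truncated index or a
// zero-length file behind while the caller carries on as if all is well.
func writeFile(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	w := bufio.NewWriter(f)
	if _, err := w.Write(data); err != nil {
		f.Close()
		return err
	}
	if err := w.Flush(); err != nil { // previously unchecked
		f.Close()
		return err
	}
	return f.Close()
}
```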
Fixes #5889
This seems to have been an oversight: all of the response writers are supposed
to implement this interface, but `gzipResponseWriter` didn't.
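The commit text doesn't name the interface; assuming it is `http.Flusher`, the fix would look roughly like this:

```
package main

import (
	"compress/gzip"
	"net/http"
)

// gzipResponseWriter wraps a response writer with gzip compression.
type gzipResponseWriter struct {
	http.ResponseWriter
	gz *gzip.Writer
}

func (w *gzipResponseWriter) Write(b []byte) (int, error) {
	return w.gz.Write(b)
}

// Flush satisfies http.Flusher by flushing the gzip stream and then the
// underlying writer, so chunked responses actually reach the client.
func (w *gzipResponseWriter) Flush() {
	w.gz.Flush()
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
}
```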
Handling of the interrupt is spread across both `IteratorCreators` and the
iterators themselves. Part of the interrupt must be handled inside the engine,
so it stops trying to emit points when an interrupt is found, and another part
must happen when combining iterators, so it doesn't just start reading the
next shard.
While this allows a query to be killed, it doesn't really do anything
yet since the interrupt happens only after the first row gets emitted
(the entire first series).
This section of code will likely have to be refactored to make this work
since we need a way to interrupt a currently running iterator.
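One way to structure the in-iterator half, as a sketch with stand-in types rather than the actual influxdb implementation:

```
package main

// FloatPoint and FloatIterator are minimal stand-ins for the influxql types.
type FloatPoint struct {
	Name  string
	Time  int64
	Value float64
}

type FloatIterator interface {
	Next() *FloatPoint
	Close() error
}

// interruptIterator wraps an input iterator and stops emitting points once
// the closing channel is closed, so a killed query stops before reading the
// next shard.
type interruptIterator struct {
	input   FloatIterator
	closing <-chan struct{}
}

func (itr *interruptIterator) Next() *FloatPoint {
	select {
	case <-itr.closing:
		return nil // interrupted: report the stream as exhausted
	default:
		return itr.input.Next()
	}
}

func (itr *interruptIterator) Close() error { return itr.input.Close() }
```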