The reducers already had a local RNG but mistakenly did not use it when
sampling points.
Because the local RNG is not protected by a mutex, there is a slight
speedup as a result of this change:
benchmark                         old ns/op    new ns/op    delta
BenchmarkSampleIterator_1k-4      418          418          +0.00%
BenchmarkSampleIterator_100k-4    434          422          -2.76%
BenchmarkSampleIterator_1M-4      449          439          -2.23%

benchmark                         old allocs   new allocs   delta
BenchmarkSampleIterator_1k-4      3            3            +0.00%
BenchmarkSampleIterator_100k-4    3            3            +0.00%
BenchmarkSampleIterator_1M-4      3            3            +0.00%

benchmark                         old bytes    new bytes    delta
BenchmarkSampleIterator_1k-4      304          304          +0.00%
BenchmarkSampleIterator_100k-4    304          304          +0.00%
BenchmarkSampleIterator_1M-4      304          304          +0.00%
The speedup would presumably increase when multiple sample iterators are
used concurrently.
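For reference, a minimal sketch of the pattern: each reducer owns a time-seeded `*rand.Rand`, so sampling never touches the global lock behind `math/rand`'s package-level functions. The reducer name and reservoir details here are illustrative, not the actual code.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type sampleReducer struct {
	n      int // reservoir size
	seen   int // incremented with each call to Aggregate
	points []float64
	rng    *rand.Rand // local RNG; the package-level rand.Intn takes a global mutex
}

func newSampleReducer(n int) *sampleReducer {
	return &sampleReducer{
		n:   n,
		rng: rand.New(rand.NewSource(time.Now().UnixNano())),
	}
}

// Aggregate performs reservoir sampling using the local RNG.
func (r *sampleReducer) Aggregate(v float64) {
	r.seen++
	if len(r.points) < r.n {
		r.points = append(r.points, v)
		return
	}
	if j := r.rng.Intn(r.seen); j < r.n {
		r.points[j] = v
	}
}

func main() {
	r := newSampleReducer(3)
	for i := 0; i < 100; i++ {
		r.Aggregate(float64(i))
	}
	fmt.Println(r.points) // a uniform sample of 3 of the 100 values
}
```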
It looks like the real import path for the project is go.uber.org/zap
instead of github.com/uber-go/zap, since the example in the project
references that path.
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
`percentile()` is supposed to be a selector and return the time of the
point, but that was only implemented when the input was a float. This
updates the integer processor to also return the time of the point
rather than the beginning of the interval.
The `partial` tag has been added to the JSON response of a series and
the result so that a client knows when more of the series or result will
be sent in a future JSON chunk.
This helps interactive clients who don't want to wait for all of the
data to know if it is done processing the current series or the current
result. Previously, the client had to guess if the next chunk would
refer to the same result or a new result and it had to match the name
and tags of the two series to know if they were the same series. Now,
the client just needs to check the `partial` field included with the
response to know if it should expect more.
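As an illustration, the chunked response shapes might look like the following Go types; the `partial` field name follows the commit text, but the structs themselves are a sketch, not the server's actual models.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Row is a sketch of one series in a chunked response.
type Row struct {
	Name    string          `json:"name"`
	Columns []string        `json:"columns"`
	Values  [][]interface{} `json:"values"`
	Partial bool            `json:"partial,omitempty"` // more of this series follows in the next chunk
}

// Result is a sketch of one result in a chunked response.
type Result struct {
	Series  []Row `json:"series"`
	Partial bool  `json:"partial,omitempty"` // more of this result follows in the next chunk
}

func main() {
	r := Result{
		Series: []Row{{
			Name:    "cpu",
			Columns: []string{"time", "value"},
			Values:  [][]interface{}{{0, 2}},
			Partial: true,
		}},
		Partial: true,
	}
	b, _ := json.Marshal(r)
	fmt.Println(string(b)) // the client checks "partial" instead of guessing
}
```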
Fixed `max-row-limit` so it counts rows instead of results and it
truncates the response when the `max-row-limit` is reached.
When the `max-row-limit` was hit, the goroutine reading from the results
channel would stop reading from the channel, but it didn't signal to the
sender that it was no longer reading from the results. This caused the
sender to continue trying to send results even though nobody would ever
read it and this created a deadlock.
Include an `AbortCh` on the `ExecutionContext` that will signal when
results are no longer desired so the sender can abort instead of
deadlocking.
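A minimal sketch of the sender side, assuming an abort channel like the `AbortCh` described above; the helper and types here are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

type Result struct{}

// send delivers a result unless the consumer has signaled it stopped reading.
func send(results chan<- *Result, r *Result, abortCh <-chan struct{}) error {
	select {
	case results <- r:
		return nil
	case <-abortCh: // reader hit max-row-limit and went away
		return errors.New("query aborted")
	}
}

func main() {
	results := make(chan *Result) // unbuffered: a send blocks until someone reads
	abortCh := make(chan struct{})
	close(abortCh) // simulate the reader giving up
	fmt.Println(send(results, &Result{}, abortCh)) // aborts instead of deadlocking
}
```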
When a query would use a grouping with two different aggregates, it was
possible for one of the aggregates to return a value from a different
series key than the second aggregate. When these series keys didn't
match, the returned grouping would be screwed up because it sorted by
time before checking for name and tags.
This did not happen when the aggregates returned values for the same
series keys because then the iterators were aligned with each other.
The parser was updated previously in #7295 and the functionality was
supposed to be there, but the wiring in the query engine for that to
happen was never written.
The previous version was showing the microseconds unit when it was
outputting nanoseconds. Now we correctly identify which sub-second unit
to use (milliseconds, microseconds, or nanoseconds) and divide the
duration by the matching factor to produce the correct output.
Also updated to use the default duration string instead of our own
custom formatters. It turns out that the string method for
`time.Duration` does the correct thing as long as we truncate the value
first.
When a limit is exceeded, we return errors and sometimes log (if appropriate)
that a limit was exceeded. The messages don't always provide an indication
as to where or how they are configured.
Instead, return the config option (which is easily searchable), as well as
the limit currently set and the value that exceeded it, when possible.
The delete and drop statements apply to the measurement within a db.
The parser allowed a db or rp to be specified and these values were
silently ignored. This could cause data loss as someone would think
they are only deleting the series within a rp, but they are actually
deleting all their data.
Instead, we return a parse error if a db or rp is specified in the
delete or drop statements. Ideally, we'd be able to respect the
db and rp, but that requires significant work in the query engine
and tsdb store to make that work.
Fixes #7053
This commit adds support for replacing regexes with non-regex conditions
when possible. Currently the following regexes are supported:
- host =~ /^foo$/ will be converted into host = 'foo'
- host !~ /^foo$/ will be converted into host != 'foo'
Note: if the regex expression contains character classes, grouping,
repetition or similar, it may not be rewritten.
For example, the condition: name =~ /^foo|bar$/ will not be rewritten.
Support for this may arrive in the future.
Regexes that can be converted into simpler expressions will be able to
take advantage of the tsdb index, making them significantly faster.
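A sketch of how such a rewrite can be detected with the standard library's `regexp/syntax` package; this illustrates the idea, it is not the engine's actual code.

```go
package main

import (
	"fmt"
	"regexp/syntax"
)

// literalEquality reports whether re is of the form ^literal$ with no
// character classes, grouping, or repetition, and returns the literal.
func literalEquality(re string) (string, bool) {
	r, err := syntax.Parse(re, syntax.Perl)
	if err != nil {
		return "", false
	}
	r = r.Simplify()
	// Expect exactly: begin-of-text, a plain literal, end-of-text.
	if r.Op != syntax.OpConcat || len(r.Sub) != 3 {
		return "", false
	}
	if r.Sub[0].Op != syntax.OpBeginText || r.Sub[2].Op != syntax.OpEndText {
		return "", false
	}
	if r.Sub[1].Op != syntax.OpLiteral {
		return "", false
	}
	return string(r.Sub[1].Rune), true
}

func main() {
	fmt.Println(literalEquality(`^foo$`))     // "foo" true  -> host = 'foo'
	fmt.Println(literalEquality(`^foo|bar$`)) // "" false    -> left as a regex
}
```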
Previously, calling derivative with a non-duration second argument was
allowed during parsing but would panic during execution due to a failed
type conversion. This change ensures the second argument is a duration
literal.
The functionality works the same as wildcards, but this time, you can
specify a regular expression.
One limitation is that you can't specify whether you only want to select
fields or tags. Since the regex can be changed to suit the person's
needs, I don't currently think this is an issue.
Strings would always return an empty string and stddev is meaningless
when it comes to strings. This removes that functionality so strings
don't automatically get picked up when using a wildcard.
The `cumulative_sum()` function can be used to sum each new point and
output the current total. For the following points:
cpu value=2 0
cpu value=4 10
cpu value=6 20
This would output the following points:
> SELECT cumulative_sum(value) FROM cpu
time    value
----    -----
0       2
10      6
20      12
As can be seen, each new point adds to the sum of the previous point and
outputs the value with the same timestamp.
The function can also be used with an aggregate like `derivative()`.
> SELECT cumulative_sum(mean(value)) FROM cpu WHERE time >= now() - 10m GROUP BY time(1m)
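A sketch of the behavior in plain Go, matching the example above; the names are illustrative, not the engine's iterator types.

```go
package main

import "fmt"

type point struct {
	Time  int64
	Value float64
}

// cumulativeSum emits one output point per input point, carrying the
// running total and keeping the input timestamp.
func cumulativeSum(in []point) []point {
	out := make([]point, 0, len(in))
	var sum float64
	for _, p := range in {
		sum += p.Value
		out = append(out, point{Time: p.Time, Value: sum})
	}
	return out
}

func main() {
	fmt.Println(cumulativeSum([]point{{0, 2}, {10, 4}, {20, 6}}))
	// [{0 2} {10 6} {20 12}]
}
```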
First Pass at implementing sample
Add sample iterators for all types
Remove size from sample struct
Fix off by one error when generating random number
Add benchmarks for sample iterator
Add test and associated fixes for off by one error
Add test for sample function
Remove NumericLiteral from sample function call
Make clear that the counter is incremented with each call
Rename IsRandom to AllSamplesSeen
Add a rng for each reducer that is created
The default rng that comes with math/rand has a global lock. To avoid
having to worry about any contention on the lock, each reducer now has
its own time seeded rng.
Add sample function to changelog
Clean up template for fill average
Change fill(average) to fill(linear)
Update average to linear in influxql spec
Add Integer Tests and associated fixes
Update CHANGELOG for fill(linear)
Manual use of system queries could result in a user using the query
incorrectly. Rather than check to make sure the query was used correctly,
we're just going to prevent users from using those sources so they can't
use them incorrectly.
The vet checks for some files did not pass for go 1.7. As part of a
preliminary start to making go 1.7 work with this software, go vet
should pass.
Also updated the gogo/protobuf dependency which fixed the code generator
to work with go 1.7 too. Ran `go generate` on the entire repository to
ensure every file was up to date.
The derivative() call would panic if it received two points at the same
time because it tried to divide by zero. The derivative call now skips
past these points. To avoid these points being skipped, use `GROUP BY *`
so that each series is kept separate.
The difference() call has also been modified to skip past these points.
Even though difference doesn't divide by the time, difference is
supposed to perform the same as derivative, but without dividing by the
time.
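A simplified sketch of the skip logic, assuming nanosecond timestamps; the names are illustrative.

```go
package main

import "fmt"

type point struct {
	Time  int64 // nanoseconds
	Value float64
}

// derivative skips pairs of points with identical timestamps instead of
// dividing by a zero elapsed time. A sketch only; the real call also
// honors the unit argument and interval handling.
func derivative(in []point, unit int64) []float64 {
	var out []float64
	for i := 1; i < len(in); i++ {
		elapsed := in[i].Time - in[i-1].Time
		if elapsed == 0 {
			continue // same-time points: skip rather than divide by zero
		}
		out = append(out, (in[i].Value-in[i-1].Value)/float64(elapsed)*float64(unit))
	}
	return out
}

func main() {
	// The two points at t=0 are skipped; only the t=0 -> t=10 pair emits.
	fmt.Println(derivative([]point{{0, 1}, {0, 5}, {10, 11}}, 1)) // [0.6]
}
```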
Return an error when we encounter the same option twice in ALTER
RETENTION POLICY and remove the `maxNumOptions` number from the parsing
loop. The `maxNumOptions` number would need to be modified if another
option was added to the parsing loop and it didn't correctly prevent
duplicate options from being reported as an error anyway.
Normalize all of the SHOW commands so they allow both using ON to
specify the database and using the default database. Some commands would
require one and some would require the other and it was confusing when
using the query language.
Affected commands:
* SHOW RETENTION POLICIES
* SHOW MEASUREMENTS
* SHOW SERIES
* SHOW TAG KEYS
* SHOW TAG VALUES
* SHOW FIELD KEYS
When attempting to reduce the WHERE clause, the time literals had not
been converted from string literals yet. This adds the functionality to
have it handle the same time math when the time literal is still a
string literal.
The dollar sign would sometimes be accepted as whitespace if it was
immediately followed by a reserved keyword or an invalid character. Such
tokens are now read properly as bound parameters rather than having the
dollar sign ignored.
Instead of having the parser set the defaults, the command will set the
defaults so that the constants for that are actually used. This way we
can also identify which things the user provided and which ones we are
filling with default values.
This allows the meta client to be able to make smarter decisions when
determining if the user requested a conflict or if the requested
capabilities match with what is currently available. If you just say
`CREATE DATABASE WITH NAME myrp`, the user doesn't really care what the
duration of the retention policy is and just wants to use the default.
Now, we can use that information to determine if an existing retention
policy would conflict with what the user requested rather than returning
an error if a default value ever gets changed since the meta client
command can communicate intent more easily.
We added `SHARD DURATION` as an extra option, but forgot to increase the
maximum number of allowable options from 3 to 4. So if 4 options were
used, the last one was ignored. This was commonly `DEFAULT`, but it
could have been any of the options.
Negative timestamps are now supported. We also now refuse the two
nanosecond timestamps at the edge of the minimum time window. The first
is rejected because MinInt64 is needed for some internal comparisons in
the TSM engine, and subtracting one from the minimum time caused an
underflow. The second is reserved so we can have one minimum time that
signifies the default minimum that nobody can write to (so we can
implicitly rewrite the timestamp on aggregate queries) while still using
the explicit timestamp if the user gives it to us. We aren't able to
tell the difference between a user-provided timestamp and an implicit
one without those values being different.
If the default minimum time is used with an aggregate query, we rewrite
the time to be the epoch for backwards compatibility since we believe
that's more important than supporting that extra nanosecond.
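A sketch of the resulting sentinel layout; the constant names are assumptions, but the values follow the reasoning above.

```go
package main

import (
	"fmt"
	"math"
)

const (
	internalMin = math.MinInt64     // reserved: internal TSM comparisons subtract 1
	defaultMin  = math.MinInt64 + 1 // reserved: "no explicit minimum given" marker
	minUserTime = math.MinInt64 + 2 // smallest timestamp a user may actually write
)

func main() {
	fmt.Println(int64(internalMin), int64(defaultMin), int64(minUserTime))
}
```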
This allows a long series of uninterruptible statements to still be
interrupted for a long running query that might do something like create
or drop many databases.
It is now possible to use a mixed duration unit like `1h30m`. The
duration units can be in whatever order as long as they are connected to
each other.
There is a change to the scanner. A token such as `10x` will be scanned
as a duration literal, but will then fail to parse as an invalid
duration. This should not be a breaking change as there is no situation
where `10m10` was a valid order of tokens for the parser.
Fixes #3634.
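A rough sketch of a parser that accepts connected duration units; it handles single-character units only for brevity (the real scanner also supports multi-character units such as `ms`).

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"time"
)

// parseDuration accepts connected units such as "1h30m".
func parseDuration(s string) (time.Duration, error) {
	units := map[byte]time.Duration{
		's': time.Second, 'm': time.Minute, 'h': time.Hour,
		'd': 24 * time.Hour, 'w': 7 * 24 * time.Hour,
	}
	var total time.Duration
	for i := 0; i < len(s); {
		j := i
		for j < len(s) && s[j] >= '0' && s[j] <= '9' {
			j++
		}
		if j == i || j == len(s) {
			return 0, errors.New("invalid duration") // e.g. "10m10": trailing digits
		}
		n, err := strconv.ParseInt(s[i:j], 10, 64)
		if err != nil {
			return 0, err
		}
		unit, ok := units[s[j]]
		if !ok {
			return 0, errors.New("invalid duration unit") // e.g. "10x"
		}
		total += time.Duration(n) * unit
		i = j + 1
	}
	return total, nil
}

func main() {
	fmt.Println(parseDuration("1h30m")) // 1h30m0s <nil>
	fmt.Println(parseDuration("10x"))   // 0s invalid duration unit
}
```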
The query executor would only store the number of active queries and the
query duration so it was impossible to determine how many queries were
actually executed during that timeframe because quick queries would be
gone before the call to gather statistics was made.
This adds two new statistics to track when queries start and when
queries finish. These counters are never decremented, so the number of
executed queries can be obtained using `derivative()` and
`difference()`.
This commit fixes the `MaxSelectSeriesN` limit which was broken by
the implementation of lazy iterators. The setting previously limited
the total number of series but the new implementation limits the
concurrent number of series being processed.
This commit limits queries to only process one shard at a time.
However, within a shard, multiple series can still be processed in
parallel. Shard iterators are lazily instantiated during query
execution to limit the amount of memory a given query uses.
The previous parseFill would try to parse an expression and only unscan
one token when it failed. This caused it to not put back the correct
number of tokens with some expressions.
Now it has been modified to check for the fill ident ahead of time and
then use ParseExpr() to parse the call. If the expression fails to parse
into a call, it returns an error instead of trying to continue with an
invalid parser state.
Fixes #6543.
The `SHOW MEASUREMENTS` and `SHOW TAG VALUES` statements cannot go through the
query engine to get the speed they need. They also only need access to
the database index and do not need access to specific shards. This
removes the query rewriting that was done to turn these two queries into
a select statement and reimplements them inside of the coordinator as an
interface on the TSDBStore.
Truncate the time interval output of the monitor service to be on even
time intervals rather than on every minute based on the start time. This
normalizes the output from the monitor service.
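The standard library already supports this kind of anchoring; a sketch with an assumed 10s interval.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Fire on even wall-clock boundaries instead of boundaries relative
	// to the service's start time.
	interval := 10 * time.Second
	now := time.Now()
	next := now.Truncate(interval).Add(interval) // next even boundary
	fmt.Println("first tick in", next.Sub(now))
}
```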
Previously, it encoded the text representation of the regex literal
which included the surrounding slashes used in the query language. The
binary encoding should only include the exact string used to create the
regular expression.
The tsdb package had a substantial amount of dead code related to the
old query engine still in there. It is no longer used, so it was removed
since it was left unmaintained. There is likely still more code that is
the same, but wasn't found as part of this code cleanup.
influxql has dead code show up because of the code generation so it is
not included in this pruning.
The highest time represented by a nanosecond needs to be used for an
exclusive range, so the maximum time needs to be one less than the
possible maximum number of nanoseconds representable by an int64 so that
we don't lose a point at that one time.
This previously worked in the open source version because the timestamp
used for finding a shard would be truncated by the retention policy, so
the lookup time didn't run into this edge case since it never rested on
the truncation boundary. Since that point didn't really belong in that
shard group and was placed there by mistake, it's best to fix this bug
so the timestamp used to create the shard group is capable of
retrieving it.
This adds support for using regex expressions in SHOW TAG VALUES when
selecting the key. It also supports the `!=` operator for the
comparison. Now you can do any of the following:
SHOW TAG VALUES WITH KEY != "region"
SHOW TAG VALUES WITH KEY =~ /region/
SHOW TAG VALUES WITH KEY !~ /region/
It also adds a new SetLiteral AST node that will potentially be used to
allow set operations for other comparisons in the future.
Fixes #4532.
The task manager now acts as its own statement executor so that a custom
statement executor can perform custom actions for KillQueryStatement and
ShowQueriesStatement.
This allows us to add additional options to ExecuteQuery without
creating parameter bloat.
Removing the unused Series structs. Their necessity was removed by a
previous commit, but the structs were not removed yet.
Add another type of interrupt iterator that monitors the interrupt
channel and calls `Close()` on the iterator when the interrupt happens.
It will primarily be used for asynchronously closing the ReaderIterator,
but it will only close the read side of the connection properly. More
work needs to be done to allow closing the write side efficiently.
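A sketch of the close-on-interrupt pattern; the iterator type here is a stand-in, not the actual ReaderIterator.

```go
package main

import "fmt"

type iterator interface {
	Close() error
}

// closeOnInterrupt waits for the interrupt channel and closes the iterator,
// unblocking a Next() that is stuck reading (for example, from the network).
func closeOnInterrupt(itr iterator, interrupt <-chan struct{}, done chan<- struct{}) {
	go func() {
		<-interrupt
		itr.Close() // unblocks a blocked read on the iterator
		close(done)
	}()
}

type readerIterator struct{}

func (readerIterator) Close() error {
	fmt.Println("read side closed")
	return nil
}

func main() {
	interrupt, done := make(chan struct{}), make(chan struct{})
	closeOnInterrupt(readerIterator{}, interrupt, done)
	close(interrupt) // the query was killed
	<-done
}
```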
The current code would compare every string literal it encountered and
try to coerce it to a time literal if it _looked_ like a date/time
string. The only time the TimeLiteral was used was when comparing to the
'time' value in a WHERE clause. This change defers the string parsing
until we attempt to compare 'time' to a string, at which point we know
we need a TimeLiteral, and not just an ordinary string.
Fixes #6727
The default retention policy name is changed to "autogen" instead of
"default", since "default" ends up being ambiguous: when we tell a user
to check the default retention policy, it is uncertain whether we are
referring to the retention policy marked as default (which can be
changed) or the retention policy named "default".
Now the automatically generated retention policy name is "autogen".
The default retention policy is now also configurable through the
configuration file so an administrator can customize what they think
should be the default.
Fixes #3733.
A query's String method is called multiple times per query. This commit
ensures all calls to query.String share use of a strings.NewReplacer.
This approximately halves the number of allocations for the benchmarked
query.
If you use a statement like this:
SELECT value FROM one..cpu, two..cpu
It will access both the `one` and `two` databases as if you had selected
the `cpu` measurement twice for both of them. Updated the `tsdb.Shard`
create iterator function to filter out any sources that do not apply to
that shard so this duplication doesn't happen.
Fixes #6701.
The parser can be passed a map of keys to literal values to be replaced
into the query. Parameters are preceded by a dollar sign (`$`). If a
parameter key is missing, an error is thrown by the parser.
Fixes #2926.
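For illustration, binding parameters at parse time might look roughly like this; the `NewParser`/`SetParams`/`ParseQuery` calls reflect the influxql parser API as I understand it from this era, so treat the exact signatures as an assumption.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/influxdata/influxdb/influxql"
)

func main() {
	p := influxql.NewParser(strings.NewReader(`SELECT value FROM cpu WHERE value > $min`))
	// Assumed API: a map of parameter keys to literal values; a missing
	// key causes the parser to return an error.
	p.SetParams(map[string]interface{}{"min": 3.0})
	q, err := p.ParseQuery()
	fmt.Println(q, err) // the $min placeholder is bound to the literal 3
}
```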
Casting syntax is done with the PostgreSQL syntax `field1::float` to
specify which type should be used when selecting a field. You can also
do `field1::field` or `tag1::tag` to specify that a field or tag should
be selected.
This makes it possible to select a tag when a field key and a tag key
conflict with each other in a measurement. It also means it's possible
to choose a field with a specific type if multiple shards disagree. If
no types are given, the same ordering for how a type is chosen is used
to determine which type to return.
The FieldDimensions method has been updated to return the data type for
the fields that get returned. The SeriesKeys function has also been
removed since it is no longer needed. SeriesKeys was originally used for
the fill iterator, but then expanded to be used by auxiliary iterators
for determining the channel iterator types. The fill iterator doesn't
need it anymore and the auxiliary types are better served by
FieldDimensions implementing that functionality, so SeriesKeys is no
longer needed.
Fixes #6519.
encodeTags would encode the tags by outputting every key followed by
every value in alphabetical order. decodeTags would try to read this in
an old format that printed tags in key/value order.
This fix updates decodeTags to read the same format that encodeTags outputs.
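A sketch of the layout, using a printable separator instead of the binary encoding.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// encodeTags writes all keys in sorted order, then all values in the same
// order; decodeTags must read the same layout back.
func encodeTags(tags map[string]string) string {
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, 2*len(keys))
	for _, k := range keys {
		parts = append(parts, k) // every key first...
	}
	for _, k := range keys {
		parts = append(parts, tags[k]) // ...then every value
	}
	return strings.Join(parts, "|")
}

func main() {
	fmt.Println(encodeTags(map[string]string{"host": "a", "region": "west"}))
	// host|region|a|west (not the key/value interleaving host|a|region|west)
}
```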
This commit moves the `CallIterator` to wrap the individual series
instead of wrapping a shard. This allows individual points to be
aggregated before being merged.
This will cause a small increase in memory usage per series but
it shows a 20% decrease in query time when there are a moderate
number of points per series.
This commit changes the `SeriesIterator` to process one measurement
at a time and uses a `floatFastDedupeIterator` to avoid point
encoding during deduplication.
If a shard is empty for a specific field and the field type is something
other than a float, a nil iterator would get returned from one of the
empty shards and cause the combined iterators to be cast to the float
type and all other iterator types to be discarded (or for integers, to
be cast).
This is rare since most aggregates don't accept strings or booleans, but
for queries like:
SELECT distinct(string) FROM mydata
It would result in nothing getting returned if one of the shards didn't
have a value for `string`.
This change modifies the query engine to return nil for the shards
instead of a fake iterator and then to only use the fake iterator if the
final aggregate iterator is nil (meaning that no iterators could be
constructed for the field from any shard).
Fixes #6495.
An offset of `time(1m, now())` will anchor the offset to the current
time of the query. The default offset is `0s` which is the current
default anyway.
This fixes#2074 by making time zone offset support unnecessary. Time
comparisons can use timezones inside of the time clause and the offset
needed for non-hour timezone differences can be used as part of the
offset argument.
Also removes the functions `HasSimpleCount()` and `HasCountDistinct()`
as they are no longer useful. They had a small role in validation that
has now been moved into `validateAggregates()`.
Fixes #6472.
The time of the point will be returned with a selector when there is no
group by interval and when there is only one selector. Any other
conditions will return the start time of the interval.
Fixes #5890.
In order to follow REST a bit more carefully, all write operations
should go through a POST in the future. We still allow read operations
through either GET or POST (similar to the Graphite /render endpoint),
but write operations will trigger a returned warning as part of the JSON
response and will eventually return an error.
Also updates the Golang client libraries to always use POST instead of
GET.
Fixes #6290.
The interrupt iterator currently introduces a non-trivial amount of
overhead to queries by checking for interrupts every 256 points.
This commit adjusts that check to every 5000 points.
There are also several places where nested field access has been
adjusted to minimize field lookups.
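A sketch of the cheaper check: poll the interrupt channel non-blockingly once every N points instead of on every point. The names and the channel-based loop are illustrative.

```go
package main

import "fmt"

const pointsPerInterruptCheck = 5000 // was 256; checking less often cuts overhead

// drain consumes points, polling the interrupt channel only once per
// pointsPerInterruptCheck points.
func drain(points <-chan float64, interrupt <-chan struct{}) (n int) {
	for range points {
		n++
		if n%pointsPerInterruptCheck == 0 {
			select {
			case <-interrupt:
				return n // stop early: the query was killed
			default: // common path: no channel wait at all
			}
		}
	}
	return n
}

func main() {
	points := make(chan float64, 10)
	for i := 0; i < 10; i++ {
		points <- float64(i)
	}
	close(points)
	fmt.Println(drain(points, make(chan struct{})))
}
```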
This commit removes support for `SHOW SERVERS` and `DROP SERVER`
from the `influxql` package. It also removes extraneous cluster
testing code from `cmd/influxd/run`.
Fixes #6465
The derivative function had an arbitrary limitation that would cause it
to set the value to zero if the previous value was after the next value.
This caused all `ORDER BY desc` queries with `derivative()` to always
return zero values.
Fixes #4675.
Sanitizing is now done through pattern matching rather than parsing the
query and replacing the password in the query. This prevents
accidentally redacting the wrong part of a query and revealing what the
password is through association.
Fixes #3883.
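A sketch of redacting by pattern rather than by parsing; the regular expression here is illustrative, not the server's actual pattern.

```go
package main

import (
	"fmt"
	"regexp"
)

// Match only the password literal itself, so nothing else in the query
// can be redacted by accident.
var sanitize = regexp.MustCompile(`(?i)(WITH\s+PASSWORD\s+)('[^']*')`)

func main() {
	q := `CREATE USER admin WITH PASSWORD 'hunter2' WITH ALL PRIVILEGES`
	fmt.Println(sanitize.ReplaceAllString(q, `${1}[REDACTED]`))
	// CREATE USER admin WITH PASSWORD [REDACTED] WITH ALL PRIVILEGES
}
```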
This removes the previous restrictions that kept derivative as only
capable of being used in a single field and only at the top level.
This lets users determine how they want to use derivative more freely
and opens up the possibility of also using math between derivatives.
This may open up some problems when it comes to math between derivatives
as timestamps may not match correctly. That is likely a problem related
to any binary math to begin with though and can probably be ignored by
the derivatives. I'm also not sure it makes sense to perform any math
between a derivative and a difference or perform math between a
derivative and a mean.
Fixes #6118.
This has various benefits:
- Users embedding InfluxDB within other Go programs can specify a different logger / prefix easily.
- More consistent with code used elsewhere in InfluxDB (e.g. services, other `run.Server.*` fields, etc).
- This is also more efficient, because it means `executeQuery` no longer allocates a single `*log.Logger` each time it is called.
The series keys within a tag set were previously not sorted which would
cause the output to be non-deterministic. This sorts the output series
by their keys so it has a consistent output especially when using
limits.
Fixes #3166.
This also switches the remaining iterators to be lazy so they can return
errors properly. They needed to be converted to lazy initialization
anyway, which has the side effect of making it much easier for us to
propagate the underlying error during initialization.
Updated the Emitter to return errors when it cannot read properly from
the iterators.
The optional sections of the command consumed the semicolon token and
didn't put it back for the outer loop. The code shouldn't explicitly
check for a semicolon or EOF anyway, so these checks were removed and
the token gets unscanned if it doesn't match the optional token that the
parser is looking for.
Fixes #6398.
For aggregate queries, derivatives will now alter the start time to one
interval behind and will use that interval to find the derivative of the
first point instead of giving no value for that interval. Null values
will still be discarded so if the interval before the one you are
querying is null, then it will be discarded like if it were in the
middle of the query. You can use `fill(0)` to fill in these values.
This does not apply to raw queries yet.
Also modified the derivative and difference aggregates to use the stream
iterator instead of the reduce slice iterator for space efficiency.
Fixes #3247. Contributes to #5943.
The deprecated message is now attached to a new attribute returned with
the results. This message can then be read by clients to warn a user
about upcoming changes to the query engine.
The `influx` client has already been modified to read this message and
print it out for every format except CSV.
The first warning message is a deprecated message about removing `IF NOT
EXISTS` from `CREATE DATABASE`.
The message will also be printed to the server log.
Fixes #5707.
This commit changes the channel iterators to use a double buffer
to reduce allocations. The caller of `Iterator.Next()` must copy
out the point before calling `Next()` again.
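A sketch of the double-buffer idea; the types are illustrative and the real channel iterators carry more state.

```go
package main

import "fmt"

type FloatPoint struct {
	Time  int64
	Value float64
}

// bufChannelIterator alternates between two preallocated points instead of
// allocating a new one per Next(); callers must copy a point out before
// calling Next() again.
type bufChannelIterator struct {
	ch  <-chan FloatPoint
	buf [2]FloatPoint
	i   int
}

func (itr *bufChannelIterator) Next() *FloatPoint {
	p, ok := <-itr.ch
	if !ok {
		return nil
	}
	buf := &itr.buf[itr.i]
	*buf = p
	itr.i ^= 1 // flip buffers so no allocation happens per point
	return buf
}

func main() {
	ch := make(chan FloatPoint, 2)
	ch <- FloatPoint{0, 1}
	ch <- FloatPoint{10, 2}
	close(ch)
	itr := &bufChannelIterator{ch: ch}
	for p := itr.Next(); p != nil; p = itr.Next() {
		fmt.Println(*p) // copy the point out; the buffer will be reused
	}
}
```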
The `percentile()` call previously did not validate that the first
argument was a variable reference and that would let an invalid query
slip by that would panic the query engine.
Added checking for this case and also included test cases for the other
calls that require a variable reference as the first argument.
Fixes #6379.
Change distinct so it uses a custom reducer that keeps internal state
instead of requiring all of the points to be kept as a slice in memory.
Fixes #6261.
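A sketch of a set-backed reducer, assuming float input; the names are illustrative.

```go
package main

import "fmt"

// distinctReducer keeps a set of seen values instead of buffering every
// point in a slice.
type distinctReducer struct {
	seen map[float64]struct{}
}

func newDistinctReducer() *distinctReducer {
	return &distinctReducer{seen: make(map[float64]struct{})}
}

func (r *distinctReducer) Aggregate(v float64) {
	r.seen[v] = struct{}{} // state grows per distinct value, not per point
}

func (r *distinctReducer) Emit() []float64 {
	out := make([]float64, 0, len(r.seen))
	for v := range r.seen {
		out = append(out, v)
	}
	return out
}

func main() {
	r := newDistinctReducer()
	for _, v := range []float64{1, 2, 2, 3, 1} {
		r.Aggregate(v)
	}
	fmt.Println(r.Emit()) // map order is random; the engine sorts on emit
}
```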
This commit makes a number of performance improvements to
reduce allocations during query execution. Several objects
and buffers are now reused across the components to avoid
allocations.
Previously a simple `count(value)` query across 1M points
would require 26,000+ allocations. After the changes in
this commit that number has been reduced to 88.
If a point had no tags at all and was asked for the subset of tags with
at least one key, it would return a new set of tags that was completely
empty. In contrast, if the point had any tags at all, it would return a
set of tags with the tag value being an empty string. This led to
a point with no tags being treated differently than a point with at
least one tag.
Fixing this so the tag value will always be an empty string for
consistency. A missing tag should always be empty.
The following query was fixed previously:
SELECT 'value' FROM cpu
This ended up hitting the `buildExprIterator()` code path and was
handled properly. But this query:
SELECT 'value', value FROM cpu
This took a different code path that triggered a panic instead of an
error condition. That code path has now been modified to return an error
instead of panicking.
Fixes #6248.
Fixes #6211.
In Go, packages with the same name (e.g., internal) do not clash with
each other when they live in different parts of the project. With
protobufs, however, definitions will clash if they share the same
package name.
This commit renames the influxql protobuf package to `influxql` to
avoid a clash with a message definition in another protobuf package
called internal. Go package aliases allow us to continue to refer to the
internal package as `internal` rather than `influxql`.
The QueryExecutor had a lot of dead code made obsolete by the query
engine refactor that has now been removed. The TSDBStore interface has
also been cleaned up so we can have multiple implementations of this
(such as a local and remote version).
A StatementExecutor interface has been created for adding custom
functionality to the QueryExecutor that may not be available in the open
source version. The QueryExecutor delegates all statement execution to
the StatementExecutor and only keeps track of housekeeping.
Implementing additional queries is as simple as wrapping
the cluster.StatementExecutor struct or replacing it with something
completely different.
The PointsWriter in the QueryExecutor has been changed to a simple
interface that implements the one method needed by the query executor.
This is to allow different PointsWriter implementations to be used by
the QueryExecutor. It has also been moved into the StatementExecutor
instead.
The TSDBStore interface has now been modified to contain the code for
creating an IteratorCreator. This is so the underlying TSDBStore can
implement different ways of accessing the underlying shards rather than
always having to access each shard individually (such as batch
requests).
Remove the show servers handling. This isn't a valid command in the open
source version of InfluxDB anymore.
The QueryManager interface is now built into QueryExecutor and is no
longer necessary. The StatementExecutor and QueryExecutor split allows
task management to much more easily be built into QueryExecutor rather
than as a separate struct.
A bigger refactor of these functions is needed to support #3290, but
this will work for the more common case that someone uses double quotes
instead of single quotes when surrounding a time literal.
Fixes #3932.
This commit sets the `MergeIterator.init` flag after initialization.
Previously this would generate a new heap on every call to `Next()`
which caused some aggregate queries to slow by ~10,000%.
The MergeIterator creation function would call `peek()` on the iterator
to initialize the heap. Since this function can sometimes take a long
time (such as a huge aggregate query on a shard), the
`influxql.Select()` wouldn't return until the query had already been
completed.
The `influxql.Select()` call should be just the creation of the
iterators and shouldn't calculate anything. This is important for future
features like the point limiter that have to be initialized after the
`influxql.Select()` call.
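A sketch of the flag pattern: pay the initialization cost once, on the first `Next()`, not in the constructor and not on every call. The type here is a stand-in for the real MergeIterator.

```go
package main

import "fmt"

type mergeIterator struct {
	init   bool
	inputs []int // stand-in for the per-input iterators feeding the heap
}

func (itr *mergeIterator) Next() (int, bool) {
	if !itr.init {
		// Expensive: peek every input to seed the heap. Deferring this out
		// of the constructor keeps influxql.Select() fast, and the flag
		// keeps it from running again on every Next().
		fmt.Println("heap initialized")
		itr.init = true
	}
	if len(itr.inputs) == 0 {
		return 0, false
	}
	v := itr.inputs[0]
	itr.inputs = itr.inputs[1:]
	return v, true
}

func main() {
	itr := &mergeIterator{inputs: []int{1, 2, 3}}
	for v, ok := itr.Next(); ok; v, ok = itr.Next() {
		fmt.Println(v) // "heap initialized" prints only once
	}
}
```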
The simple moving average will gradually emit points instead of waiting
until the end. This should apply to derivative and difference in the
future too.
Fixes #6112.
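A sketch of a streaming window, assuming a fixed window size n; the names are illustrative.

```go
package main

import "fmt"

// movingAverage emits a point as soon as the window is full rather than
// buffering the whole series first.
type movingAverage struct {
	n   int
	buf []float64 // ring buffer of the last n values
	i   int
	sum float64
}

func newMovingAverage(n int) *movingAverage {
	return &movingAverage{n: n, buf: make([]float64, 0, n)}
}

// push returns the current average and true once n points have been seen.
func (m *movingAverage) push(v float64) (float64, bool) {
	if len(m.buf) < m.n {
		m.buf = append(m.buf, v)
		m.sum += v
		if len(m.buf) < m.n {
			return 0, false // window not yet full
		}
		return m.sum / float64(m.n), true
	}
	m.sum += v - m.buf[m.i] // replace the oldest value in O(1)
	m.buf[m.i] = v
	m.i = (m.i + 1) % m.n
	return m.sum / float64(m.n), true
}

func main() {
	ma := newMovingAverage(2)
	for _, v := range []float64{2, 4, 6} {
		if avg, ok := ma.push(v); ok {
			fmt.Println(avg) // 3, then 5, emitted as points arrive
		}
	}
}
```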
Related to #6140, but won't actually fix that problem. It will correctly
stop new queries from being started during shutdown and will send the
interrupt signal to queries during shutdown.
Since the interrupt signal is asynchronous, there isn't currently a way
to wait for the queries to complete themselves before shutting down the
engine.
The difference function is implemented very similarly to derivative. It
is an aggregate function that acts over the entire aggregate. This
function will also have the same problems that
derivative has with getting values from the previous interval or point.
This will be fixed separately as part of #5943.
Fixes #1825.
Allows configuration of shard group duration at database creation, and retention
policy create/alter time.
Query examples:
```
CREATE DATABASE testdb WITH DURATION 90d SHARD DURATION 30m NAME rp_testdb
CREATE RETENTION POLICY rp_testdb2 ON testdb DURATION INF REPLICATION 1 SHARD DURATION 30m
ALTER RETENTION POLICY rp_testdb2 ON testdb SHARD DURATION 1h
```
This can be useful with long duration retention policies with lots of data, where
you can split into smaller shards to relieve memory pressure.