The MergeIterator creation function would call `peek()` on the iterator
to initialize the heap. Since this function can sometimes take a long
time (such as for a huge aggregate query on a shard), `influxql.Select()`
wouldn't return until the query had already completed.
The `influxql.Select()` call should only create the iterators and shouldn't
calculate anything. This is important for future
features like the point limiter that have to be initialized after the
`influxql.Select()` call.
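As a rough illustration of the change, here is a minimal sketch (not the actual MergeIterator code, and with stand-in types) of deferring the expensive peek/heap work from the constructor to the first `Next()` call:

```go
// Sketch only: the real MergeIterator manages a heap of input iterators;
// here a stand-in init function represents the expensive peek() work.
package main

import (
	"fmt"
	"sync"
)

type mergeIterator struct {
	init func() []int64 // stand-in for peeking every input to build the heap
	once sync.Once      // guards the one-time initialization
	heap []int64
}

// Next runs the initialization lazily instead of in the constructor.
func (itr *mergeIterator) Next() (int64, bool) {
	itr.once.Do(func() { itr.heap = itr.init() })
	if len(itr.heap) == 0 {
		return 0, false
	}
	v := itr.heap[0]
	itr.heap = itr.heap[1:]
	return v, true
}

func main() {
	// Creating the iterator is cheap; the expensive work runs on first Next().
	itr := &mergeIterator{init: func() []int64 { return []int64{1, 2, 3} }}
	fmt.Println(itr.Next()) // 1 true
}
```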
The simple moving average will gradually emit points instead of waiting
until the end. This should apply to derivative and difference in the
future too.
Fixes #6112.
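A hedged sketch of the incremental emission: each incoming value can produce an output point as soon as the window is full, instead of buffering the whole series. The names below are illustrative, not the influxql implementation:

```go
package main

import "fmt"

// movingAverage keeps only the current window, so output can be emitted
// gradually as input arrives.
type movingAverage struct {
	n      int
	window []float64
	sum    float64
}

// push adds a value and reports the average once n values are buffered.
func (m *movingAverage) push(v float64) (avg float64, ok bool) {
	m.window = append(m.window, v)
	m.sum += v
	if len(m.window) > m.n {
		m.sum -= m.window[0]
		m.window = m.window[1:]
	}
	if len(m.window) < m.n {
		return 0, false
	}
	return m.sum / float64(m.n), true
}

func main() {
	ma := &movingAverage{n: 3}
	for _, v := range []float64{1, 2, 3, 4, 5} {
		if avg, ok := ma.push(v); ok {
			fmt.Println(avg) // emits 2, then 3, then 4, as points arrive
		}
	}
}
```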
This commit adds an `IteratorStats` that holds aggregate
iterator processing information. A method is also added to
`Iterator` to return the stats:
Stats() influxql.IteratorStats
The remote iterators will also emit their stats in the point
stream upon first connection, on a given interval, and then
finally once the last point has been sent.
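A minimal sketch of what such aggregate stats might look like; the field names below are illustrative assumptions rather than the exact `IteratorStats` definition:

```go
package main

import "fmt"

// IteratorStats holds aggregate processing counters for an iterator.
// Field names here are assumptions for illustration.
type IteratorStats struct {
	SeriesN int // number of series processed
	PointN  int // number of points processed
}

// Add combines stats from another iterator, e.g. when merging remote shards.
func (s *IteratorStats) Add(other IteratorStats) {
	s.SeriesN += other.SeriesN
	s.PointN += other.PointN
}

func main() {
	var total IteratorStats
	total.Add(IteratorStats{SeriesN: 2, PointN: 100})
	total.Add(IteratorStats{SeriesN: 3, PointN: 250})
	fmt.Printf("%+v\n", total) // {SeriesN:5 PointN:350}
}
```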
Handling of the interrupt is spread out into both `IteratorCreators` and
the iterators themselves. Part of the interrupt must be
handled inside of the engine so it stops trying to emit points when an
interrupt is found and another part of the interrupt has to happen when
combining the iterators so it doesn't just start reading the next shard.
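A hedged sketch of the iterator-side half of this: checking an interrupt signal before emitting each point so reading stops once the query is cancelled. The types here are stand-ins, not the engine code:

```go
package main

import "fmt"

type floatPoint struct{ value float64 }

type interruptIterator struct {
	input   []floatPoint
	closing <-chan struct{} // signalled when the query is interrupted
	i       int
}

// Next returns the next point, or false once interrupted or exhausted.
func (itr *interruptIterator) Next() (floatPoint, bool) {
	select {
	case <-itr.closing:
		return floatPoint{}, false // interrupted: stop emitting points
	default:
	}
	if itr.i >= len(itr.input) {
		return floatPoint{}, false
	}
	p := itr.input[itr.i]
	itr.i++
	return p, true
}

func main() {
	closing := make(chan struct{})
	itr := &interruptIterator{input: []floatPoint{{1}, {2}}, closing: closing}
	close(closing) // simulate an interrupt
	_, ok := itr.Next()
	fmt.Println(ok) // false
}
```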
This commit moves the `tsdb.Store.ExpandSources()` function onto
the `influxql.IteratorCreator` and provides support for issuing
source expansion across a cluster.
Now the AuxIterator will know when it is backgrounded so that it can
stop reading from the primary iterator when all of the child iterators
have been closed.
The primary input iterator for an aux iterator would continue trying to
send points to a closed channel even after an aux iterator had already
been closed.
This changes the aux iterators to use `sync.Cond` and lower-level
synchronization primitives instead of channels for handling buffered input/output.
Fixes #5974.
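A hedged sketch of the idea: a small single-slot buffer coordinated with `sync.Cond`, which (unlike sending on a closed channel) can observe a closed flag instead of panicking. This is illustrative rather than the aux iterator code:

```go
package main

import (
	"fmt"
	"sync"
)

// buf is a single-slot buffer guarded by a mutex and condition variable.
type buf struct {
	mu     sync.Mutex
	cond   *sync.Cond
	v      float64
	full   bool
	closed bool
}

func newBuf() *buf {
	b := &buf{}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// put blocks until there is room or the buffer is closed; unlike a channel
// send, it returns false instead of panicking after close.
func (b *buf) put(v float64) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	for b.full && !b.closed {
		b.cond.Wait()
	}
	if b.closed {
		return false
	}
	b.v, b.full = v, true
	b.cond.Broadcast()
	return true
}

// get blocks until a value is available or the buffer is closed.
func (b *buf) get() (float64, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for !b.full && !b.closed {
		b.cond.Wait()
	}
	if !b.full {
		return 0, false
	}
	v := b.v
	b.full = false
	b.cond.Broadcast()
	return v, true
}

// close wakes any blocked producer or consumer.
func (b *buf) close() {
	b.mu.Lock()
	b.closed = true
	b.cond.Broadcast()
	b.mu.Unlock()
}

func main() {
	b := newBuf()
	go func() { b.put(42); b.close() }()
	fmt.Println(b.get()) // 42 true
}
```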
Previously the call iterator would normalize the time to the interval
for all calls. This meant that when `first()` or `last()` was called
with no group by interval, the value would be found for each shard, the
time would be normalized, and then it would try to find the value across
the shards (but without any time data, as that had already been eliminated).
This removes part of the time logic from the call iterators and makes a
new iterator `IntervalIterator` to normalize the times as they come out
of the underlying iterator.
Fixes #5890.
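A minimal sketch of an interval-normalizing wrapper in this spirit; the types and names are illustrative, not the actual `IntervalIterator`:

```go
package main

import (
	"fmt"
	"time"
)

type floatPoint struct {
	Time  int64
	Value float64
}

// intervalIterator truncates each point's time to its interval boundary
// after the underlying iterator has already chosen the value.
type intervalIterator struct {
	input    []floatPoint
	interval time.Duration
	i        int
}

func (itr *intervalIterator) Next() (floatPoint, bool) {
	if itr.i >= len(itr.input) {
		return floatPoint{}, false
	}
	p := itr.input[itr.i]
	itr.i++
	p.Time -= p.Time % int64(itr.interval) // normalize to start of interval
	return p, true
}

func main() {
	itr := &intervalIterator{
		input:    []floatPoint{{Time: int64(65 * time.Second), Value: 2}},
		interval: time.Minute,
	}
	p, _ := itr.Next()
	fmt.Println(time.Duration(p.Time)) // 1m0s
}
```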
All three of these iterators are supposed to support all four types of
iterators, but the implementation was never done for string or boolean.
Fixes #5886.
This refactor is primarily to support Kapacitor. Kapacitor doesn't care
about the iterators and mostly keeps the points it handles in memory.
The iterator interface is more than Kapacitor cares about.
This commit refactors and opens up the internals of aggregating and
reducing incoming points so it can be used by an outside library with
the same code. It also makes the iterators used by the call iterators
publicly usable with new functionality.
Reducers are split into two methods which are separate interfaces that
can be combined for dealing with casting between different types. The
Aggregator interfaces accept points into the aggregator and retain any
internal state they need. The Emitter interface will then create a point
from that aggregated state which can be fed to the iterator. The
Emitters do not fill in the name or tags of the point, as that is expected
to be done by the caller aggregating the points. While the Emitters do
sometimes fill in the time, that value will also be overwritten by the
iterator. Filling in the time is to allow a future version that will
allow returning the point time instead of just the interval time.
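A hedged sketch of the split, using mean as the example; the interface and type names below are illustrative and may not match the exported ones exactly:

```go
package main

import "fmt"

type FloatPoint struct {
	Name  string
	Time  int64
	Value float64
}

// FloatPointAggregator accepts points and retains whatever state it needs.
type FloatPointAggregator interface {
	AggregateFloat(p *FloatPoint)
}

// FloatPointEmitter creates points from the aggregated state; the caller is
// expected to fill in the name and tags afterwards.
type FloatPointEmitter interface {
	Emit() []FloatPoint
}

// FloatMeanReducer implements both interfaces for mean().
type FloatMeanReducer struct {
	sum   float64
	count int
}

func (r *FloatMeanReducer) AggregateFloat(p *FloatPoint) {
	r.sum += p.Value
	r.count++
}

func (r *FloatMeanReducer) Emit() []FloatPoint {
	return []FloatPoint{{Value: r.sum / float64(r.count)}}
}

func main() {
	r := &FloatMeanReducer{}
	for _, v := range []float64{2, 4, 9} {
		r.AggregateFloat(&FloatPoint{Value: v})
	}
	fmt.Println(r.Emit()[0].Value) // 5
}
```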
The limit iterator would short circuit if there were no dimensions and
all points had been read. It also needs to consider that multiple
sources will require reading the entire iterator too, so the short
circuit now also requires that there be only a single source.
Fixes #5871.
A new attribute has been added to points to track how many points were
used to calculate that point. This is particularly useful for finding
the mean, as we can then split the mean calculation into two phases: one at
the shard level and a second across the shards.
This optimization is now used so we don't have to hold so many points in
memory while calculating the mean.
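A worked sketch of the two-phase mean under that scheme: each shard reports a partial mean plus the number of points behind it, and the second phase combines them by weight. The field names are illustrative:

```go
package main

import "fmt"

type partial struct {
	Mean       float64
	Aggregated int // number of raw points behind this partial result
}

// combine merges per-shard partial means into the overall mean without
// holding the raw points in memory.
func combine(parts []partial) float64 {
	var sum float64
	var n int
	for _, p := range parts {
		sum += p.Mean * float64(p.Aggregated)
		n += p.Aggregated
	}
	return sum / float64(n)
}

func main() {
	// shard A saw [2, 4] (mean 3), shard B saw [9] (mean 9)
	fmt.Println(combine([]partial{{3, 2}, {9, 1}})) // (2+4+9)/3 = 5
}
```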
Querying an integer field with a fill value will cause a cast error
because the underlying type of the fill value is a float64 rather than an int64. Add a
function that will coerce the value to the correct type.
It may be more appropriate in the future to have the fill iterator read
the underlying iterator and cast to the appropriate type rather than
coerce the fill value to the correct type, but this solution works for
our current scenario well.
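A hedged sketch of that kind of coercion function, assuming the fill value arrives as a `float64` while the iterator emits integers; the names are illustrative:

```go
package main

import "fmt"

// castToType coerces a parsed fill value into the type the iterator emits.
// The type names used as keys here are assumptions for illustration.
func castToType(v interface{}, typ string) interface{} {
	switch typ {
	case "integer":
		if f, ok := v.(float64); ok {
			return int64(f)
		}
	case "float":
		if i, ok := v.(int64); ok {
			return float64(i)
		}
	}
	return v
}

func main() {
	fill := castToType(float64(0), "integer")
	fmt.Printf("%T %v\n", fill, fill) // int64 0
}
```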
The additional locks shouldn't be necessary due to how the code is used,
but should prevent any potential data races in case we accidentally do
something bad.
The AuxIterator streams points to the underlying iterators. When it
started automatically, race conditions occurred between the stream
closing the iterators and creating iterators from the AuxIterator.
Aux iterators now ask the iterator creator what series will be returned
and determine which aux fields to create based on the results.
The `tsdb.Shards` struct also creates a call iterator around the
iterators returned from each shard.
Fill requires an additional function for IteratorCreator to retrieve the
series that will be returned from the iterator. When fill is required
for an aggregate, the IteratorCreator will be asked what series will be
returned by the created iterator.
When multiple sources are used, emit all points for a certain source
(like cpu) before another source (like mem) regardless of which window
they are in. If the sources are the same, then sort by window.
Continue to ignore tags since we don't need to sort nicely by tags with
a `MergeIterator`, only a `SortedMergeIterator`.
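A small sketch of the ordering described above, comparing by source name first and window time second while ignoring tags; the types are stand-ins:

```go
package main

import (
	"fmt"
	"sort"
)

type point struct {
	Name string // source measurement, e.g. "cpu" or "mem"
	Time int64  // window start
}

// sortPoints orders by source first, then by window within a source.
func sortPoints(pts []point) {
	sort.Slice(pts, func(i, j int) bool {
		if pts[i].Name != pts[j].Name {
			return pts[i].Name < pts[j].Name
		}
		return pts[i].Time < pts[j].Time
	})
}

func main() {
	pts := []point{{"mem", 0}, {"cpu", 10}, {"cpu", 0}}
	sortPoints(pts)
	fmt.Println(pts) // [{cpu 0} {cpu 10} {mem 0}]
}
```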
Previously reduce iterators just separated points by tags. If you had
identical tags but different names, it would group those together so you
could have these two points:
    cpu value=1
    mem value=2
When you performed a `mean(value)` call and included both cpu and mem as
sources, it would return one mem series with a value of 1.5 instead of
two separate series.
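A minimal sketch of keying the grouping by measurement name as well as tags, so the cpu and mem points above stay in separate series; the key format is illustrative:

```go
package main

import "fmt"

type point struct {
	Name  string
	Tags  string // serialized tag set
	Value float64
}

// groupKey includes the measurement name so cpu and mem with identical tags
// do not collapse into one series.
func groupKey(p point) string {
	return p.Name + "\x00" + p.Tags
}

func main() {
	groups := map[string][]float64{}
	for _, p := range []point{{"cpu", "", 1}, {"mem", "", 2}} {
		groups[groupKey(p)] = append(groups[groupKey(p)], p.Value)
	}
	fmt.Println(len(groups)) // 2 groups, not one averaged series
}
```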
Out of a list of iterators, an overarching iterator type is chosen and
only iterators of that type are returned for the merge iterator. If a
type can be cast to another type, an extra cast iterator is created to
handle that casting.
The only supported cast is from integers to floats.
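A hedged sketch of such a cast iterator for the integer-to-float case; the types are illustrative stand-ins for the real iterator interfaces:

```go
package main

import "fmt"

type integerPoint struct{ Value int64 }
type floatPoint struct{ Value float64 }

type integerIterator struct {
	points []integerPoint
	i      int
}

func (itr *integerIterator) Next() (integerPoint, bool) {
	if itr.i >= len(itr.points) {
		return integerPoint{}, false
	}
	p := itr.points[itr.i]
	itr.i++
	return p, true
}

// floatCastIterator converts each integer point into a float point so the
// merge iterator can treat all inputs as floats.
type floatCastIterator struct{ input *integerIterator }

func (itr *floatCastIterator) Next() (floatPoint, bool) {
	p, ok := itr.input.Next()
	if !ok {
		return floatPoint{}, false
	}
	return floatPoint{Value: float64(p.Value)}, true
}

func main() {
	itr := &floatCastIterator{input: &integerIterator{points: []integerPoint{{3}}}}
	p, _ := itr.Next()
	fmt.Printf("%T %v\n", p.Value, p.Value) // float64 3
}
```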
`last()` would always return the last output of the iterator (which isn't
necessarily the point with the latest time, due to how the merge iterator
works) and `first()` would always return the first output of the iterator
(wrong for the same reason).
Now the time is kept by the reduce function and the times are wiped as
part of the reduce iterator after the value has been found.
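A hedged sketch of the new behavior for `last()`: the reducer keeps the candidate's real timestamp while comparing across inputs, and the time is wiped only after the winner is chosen. Names are illustrative:

```go
package main

import "fmt"

type floatPoint struct {
	Time  int64
	Value float64
}

type lastReducer struct {
	curr floatPoint
	ok   bool
}

// reduce keeps the point with the greatest timestamp, not merely the last
// one delivered by the merge iterator.
func (r *lastReducer) reduce(p floatPoint) {
	if !r.ok || p.Time > r.curr.Time {
		r.curr, r.ok = p, true
	}
}

// emit returns the winner with its time wiped, as the reduce iterator does
// once the value has been found.
func (r *lastReducer) emit() floatPoint {
	p := r.curr
	p.Time = 0
	return p
}

func main() {
	r := &lastReducer{}
	// points may arrive out of time order when merging shards
	for _, p := range []floatPoint{{Time: 20, Value: 7}, {Time: 10, Value: 3}} {
		r.reduce(p)
	}
	fmt.Println(r.emit().Value) // 7, the value at the latest time
}
```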