This refactor is primarily to support Kapacitor. Kapacitor doesn't care
about the iterators and mostly keeps the points it handles in memory;
the iterator interface is more than Kapacitor needs.
This commit refactors and opens up the internals of aggregating and
reducing incoming points so the same code can be used by an outside
library. It also makes the iterators used by the call iterators
publicly usable with new functionality.
Reducers are split into two separate interfaces that can be combined to
deal with casting between different types. The Aggregator interfaces
accept points into the aggregator and retain any internal state they
need. The Emitter interface then creates a point from that aggregated
state, which can be fed to the iterator. The Emitters do not fill in
the name or tags of the point; that is expected to be done by the
caller performing the aggregation. While the Emitters do sometimes fill
in the time, that value will also be overwritten by the iterator. The
time is filled in to allow a future version to return the point time
instead of just the interval time.
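
A minimal sketch of the split, assuming a simplified FloatPoint type;
the interface and method names here are illustrative rather than the
exact ones in the package:

    package aggregate

    // FloatPoint is a simplified stand-in for the query engine's point type.
    type FloatPoint struct {
        Name  string
        Tags  map[string]string
        Time  int64
        Value float64
    }

    // FloatPointAggregator accepts points and retains whatever internal
    // state it needs (a running sum, count, minimum, and so on).
    type FloatPointAggregator interface {
        AggregateFloat(p *FloatPoint)
    }

    // FloatPointEmitter builds points from the aggregated state. The name
    // and tags are left for the caller to fill in, and any time it sets
    // will be overwritten by the iterator.
    type FloatPointEmitter interface {
        Emit() []FloatPoint
    }

    // A reducer is simply both behaviours combined.
    type FloatPointReducer interface {
        FloatPointAggregator
        FloatPointEmitter
    }

    // FloatSumReducer is a tiny example implementing both interfaces.
    type FloatSumReducer struct {
        sum float64
    }

    func (r *FloatSumReducer) AggregateFloat(p *FloatPoint) { r.sum += p.Value }

    func (r *FloatSumReducer) Emit() []FloatPoint {
        return []FloatPoint{{Value: r.sum}}
    }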
Go 1.4.3 was a security release that also created a strange edge case
where connections were not kept alive and reused when Close() was
called on the Body of the request. Close() hasn't been required on the
Body of a request for some time, so there is no harm in not calling it
anymore.
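
A minimal sketch of what dropping the call looks like, assuming a
server-side handler (handleWrite and the surrounding details are
hypothetical, not the actual handler code):

    package handlers

    import (
        "io/ioutil"
        "net/http"
    )

    // handleWrite is a hypothetical handler illustrating the change: the
    // explicit Close on the request body is simply dropped. The net/http
    // server closes the request body itself once the handler returns.
    func handleWrite(w http.ResponseWriter, r *http.Request) {
        // Previously:
        //   defer r.Body.Close()
        // Under Go 1.4.3 that extra Close could prevent the connection
        // from being kept alive and reused, and it was never required,
        // so it is no longer called.
        body, err := ioutil.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        _ = body // process the write payload here
        w.WriteHeader(http.StatusNoContent)
    }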
There was a race where a remote write could arrive before a meta
client cache update arrived. When this happened, the receiving node
would drop the write because it could not determine what database
and retention policy the shard it was supposed to create belonged
to.
This change sends the db and rp along with the write so that the
receiving node does not need to consult the meta store. It also allows
us to not send writes for shards that no longer exist, instead of
always sending them and having the receiving node's logs fill up with
dropped write requests. This second situation can occur when shards are
deleted and some nodes still have writes queued in hinted handoff for
those shards.
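
Roughly, the shape of the change to the write request (a sketch, not
the actual wire format; field names are assumptions):

    package cluster

    // WriteShardRequest carries the database and retention policy along
    // with the points so the receiving node can create the shard without
    // consulting the meta store.
    type WriteShardRequest struct {
        ShardID         uint64
        Database        string
        RetentionPolicy string
        Points          [][]byte // encoded points
    }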
Fixes #5610
The limit iterator would short circuit if there were no dimensions and
all points had been read. It also needs to account for multiple
sources, which require reading the entire iterator, so the short
circuit now applies only when there is a single source.
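
A minimal sketch of the corrected condition (type and field names are
assumptions, not the actual implementation):

    package query

    // IteratorOptions is a trimmed-down stand-in for the real options type.
    type IteratorOptions struct {
        Dimensions []string
        Sources    []string
    }

    type limitIterator struct {
        opt IteratorOptions
    }

    // canShortCircuit reports whether the iterator may stop as soon as the
    // limit is reached: only when there is nothing to group by and a single
    // source, because multiple sources still require draining all input.
    func (itr *limitIterator) canShortCircuit() bool {
        return len(itr.opt.Dimensions) == 0 && len(itr.opt.Sources) == 1
    }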
Fixes #5871.
- Removed a lot of unused code
- Consolidated types
- Improved allocations for converting b1 shards
- Eliminated allocations when sorting cursors
- Eliminated allocations from removing NaN and Infinity values;
  they are now removed by the cursors
- Separated out the stats from the conversion tracker
- Removed allocations from shard reader buffers
- Improved logic for shard reader Next()/Read()
... by extracting the db/rp from the given path.
Now that the code has "standardized" on extracting db/rp this way, the
ShardLocation struct is no longer necessary and thus has been removed.
We're back to the previous style of passing the path and walPath to
NewShard.
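
A minimal sketch of the extraction, assuming the standard
<data-dir>/<database>/<retention-policy>/<shard-id> layout (the helper
name is hypothetical):

    package tsdb

    import "path/filepath"

    // shardDatabaseAndRP recovers the database and retention policy from
    // the shard's own path instead of carrying them in a separate struct.
    func shardDatabaseAndRP(path string) (db, rp string) {
        rp = filepath.Base(filepath.Dir(path))               // <retention-policy>
        db = filepath.Base(filepath.Dir(filepath.Dir(path))) // <database>
        return db, rp
    }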
The current Go compiler at the tip of the Go master branch (1d5001af)
has a modified implementation of testing/quick's Check function that
now generates nil slices as test data. (See:
https://gophers.slack.com/archives/general/p14567053570110.) The
existing tests expect round-tripping to work in this case, but it does
not. So, in these cases we change the expectation to reflect the actual
behaviour.
This needs to be checked for reasonableness.
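
The adjusted expectation looks roughly like this (encode and decode are
hypothetical stand-ins for the codec under test, not the real code):

    package codec

    import (
        "reflect"
        "testing"
        "testing/quick"
    )

    // encode and decode are hypothetical stand-ins for the codec under test.
    func encode(in []float64) []float64 { return append([]float64{}, in...) }
    func decode(in []float64) []float64 { return in }

    func TestRoundTrip(t *testing.T) {
        f := func(in []float64) bool {
            out := decode(encode(in))
            // testing/quick may now hand us a nil slice; the codec gives
            // back an empty slice in that case, so accept either form
            // rather than requiring an exact round trip.
            if len(in) == 0 {
                return len(out) == 0
            }
            return reflect.DeepEqual(in, out)
        }
        if err := quick.Check(f, nil); err != nil {
            t.Fatal(err)
        }
    }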
The RPC handler for remote queries would attempt to reuse a closed
connection for certain commands that didn't use pooling. The RPC
commands that close the connection have been fixed so they no longer
try to reuse it.
When creating an iterator, if there are no points to return, the points
decoder would hit an EOF that it didn't catch and would return that
error back to the client that made the request. It now properly returns
no points by using a `nilFloatIterator` in that case.
This fixes remote execution when a cluster has nothing to return.
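
A sketch of the shape of the fix; `nilFloatIterator` comes from the
change itself, while the other names and the use of a JSON decoder are
simplifications:

    package remote

    import (
        "encoding/json"
        "io"
    )

    // FloatPoint and FloatIterator are simplified stand-ins for the
    // query engine's types.
    type FloatPoint struct {
        Time  int64
        Value float64
    }

    type FloatIterator interface {
        Next() (*FloatPoint, error)
        Close() error
    }

    // nilFloatIterator satisfies FloatIterator but never yields a point.
    type nilFloatIterator struct{}

    func (*nilFloatIterator) Next() (*FloatPoint, error) { return nil, nil }
    func (*nilFloatIterator) Close() error               { return nil }

    // sliceFloatIterator yields a fixed set of already-decoded points.
    type sliceFloatIterator struct{ points []FloatPoint }

    func (itr *sliceFloatIterator) Next() (*FloatPoint, error) {
        if len(itr.points) == 0 {
            return nil, nil
        }
        p := itr.points[0]
        itr.points = itr.points[1:]
        return &p, nil
    }

    func (itr *sliceFloatIterator) Close() error { return nil }

    // decodeIterator shows the shape of the fix: an immediate io.EOF from
    // the decoder means "no points", not an error for the client.
    func decodeIterator(dec *json.Decoder) (FloatIterator, error) {
        var points []FloatPoint
        for {
            var p FloatPoint
            if err := dec.Decode(&p); err == io.EOF {
                break
            } else if err != nil {
                return nil, err
            }
            points = append(points, p)
        }
        if len(points) == 0 {
            return &nilFloatIterator{}, nil
        }
        return &sliceFloatIterator{points: points}, nil
    }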
This commit updates tsdb.Shard to contain a ShardConfig and updates
tsdb.Store to directly reference a map of tsdb.Shard rather than the
previous tsdb.shardLocation abstraction.
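
In rough outline (field names other than ShardConfig itself are
assumptions):

    package tsdb

    // ShardConfig holds the identifying configuration for a shard.
    type ShardConfig struct {
        ID              uint64
        Database        string
        RetentionPolicy string
    }

    // Shard owns its config directly.
    type Shard struct {
        config  ShardConfig
        path    string
        walPath string
    }

    // Store maps shard IDs straight to shards; no intermediate
    // shardLocation type is needed.
    type Store struct {
        shards map[uint64]*Shard
    }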
Previously, the for loop at the end of the method assumed that all entries
had been deduplicated, including the entry discovered in the snapshot.
However, this wasn't actually true. With this change, we make it true.
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
Consider the write sequence: 6,1,snapshot,7,2.
The hot cache gets deduplicated, so it becomes 2,7.
Now consider the test: is 1 >= 2? This is false, so needSort is not set
to true.
The problem is the implicit assumption that the snapshot is always
sorted by the time that merged() runs, but this may not be true.
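
A minimal illustration of the failure with everything stripped down to
timestamps (the names are simplified, not the actual cache code):

    package main

    import "fmt"

    // needsSort mirrors the shape of the existing check: it only compares
    // the snapshot's last time against the hot cache's first time, so an
    // unsorted snapshot slips through undetected.
    func needsSort(snapshot, hot []int64) bool {
        if len(snapshot) == 0 || len(hot) == 0 {
            return false
        }
        return snapshot[len(snapshot)-1] >= hot[0]
    }

    func main() {
        snapshot := []int64{6, 1} // written before the snapshot, never re-sorted
        hot := []int64{2, 7}      // hot cache after deduplication
        // 1 >= 2 is false, so no sort is scheduled, yet the merged
        // sequence 6, 1, 2, 7 is out of order.
        fmt.Println(needsSort(snapshot, hot)) // false
    }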
Signed-off-by: Jon Seymour <jon@wildducktheories.com>