The TSDBStore was returning a nil mapper if the shard did not exist. The caller always
assumed the mapper would not be nil, causing a panic. Instead, have the mapper skip the
mapping phase if its shard reference is nil. This fixes queries against data-only nodes
and against shards that are not fully replicated in the cluster.
Fixes #3574
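A minimal sketch of the guard described above. The names (shardMapper, Open, NextChunk) are hypothetical stand-ins, not the actual tsdb package API:

```go
// Illustrative only: the type and method names below are stand-ins,
// not the real tsdb package API.
package main

import "fmt"

type shard struct{ id uint64 }

type shardMapper struct {
	shard *shard // nil if this node does not hold the shard
}

// Open skips the mapping phase entirely when the shard reference is
// nil, rather than letting a caller dereference a nil mapper.
func (m *shardMapper) Open() error {
	if m.shard == nil {
		return nil // nothing to map on this node
	}
	// ... open cursors against m.shard ...
	return nil
}

// NextChunk reports no data when there is no underlying shard.
func (m *shardMapper) NextChunk() (interface{}, error) {
	if m.shard == nil {
		return nil, nil // mapping phase skipped
	}
	// ... drain the next chunk of points ...
	return nil, nil
}

func main() {
	m := &shardMapper{shard: nil} // shard does not exist on this node
	if err := m.Open(); err != nil {
		panic(err)
	}
	chunk, _ := m.NextChunk()
	fmt.Println("chunk:", chunk) // prints "chunk: <nil>" with no panic
}
```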
If we've already seeked a cursor, then we can be sure there is no
data between its current position and the point that was seeked to.
Take advantage of this fact to seek only when it would yield a value
different from the last.
In addition, only initialize the points heap when doing a raw query. For
aggregate queries it is reinitialized on every time bucket, so there is no
need to seek through all the cursors up front.
For a synthetic database containing entries for only a tiny slice of
time, this cut the query
`select mean(value) from cpu where time > now - 2h group by time(1h)`
from 112 seconds to 30 seconds.
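A rough sketch of the seek-avoidance idea, assuming a hypothetical cursor type and non-decreasing seek targets (as produced by successive time buckets); the real tsdb cursors differ:

```go
package main

import "fmt"

type cursor struct {
	data    map[int64]float64
	lastKey int64 // timestamp of the last value a seek returned
	seeked  bool
}

// seekTo performs a real seek only when the target could yield a
// value different from the last one. If an earlier seek landed on
// lastKey, there is no data between that seek's target and lastKey,
// so any new target at or before lastKey can reuse the prior result.
func (c *cursor) seekTo(target int64) (int64, float64, bool) {
	if c.seeked && target <= c.lastKey {
		return c.lastKey, c.data[c.lastKey], true // skip the seek
	}
	// A linear scan stands in for the storage engine's real seek.
	for ts := target; ts < target+100; ts++ {
		if v, ok := c.data[ts]; ok {
			c.lastKey, c.seeked = ts, true
			return ts, v, true
		}
	}
	return 0, 0, false
}

func main() {
	c := &cursor{data: map[int64]float64{50: 1.5}}
	fmt.Println(c.seekTo(10)) // real seek: lands on ts=50
	fmt.Println(c.seekTo(20)) // 20 <= 50, so no seek is performed
}
```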
Since we already have a pointsHeapItem, reuse it instead of allocating
a new one. This cuts the allocated memory of a one-million-point
aggregate query from 4881.97MB to 4139.86MB.
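A sketch of the reuse pattern with container/heap; pointsHeapItem and pointsHeap here are simplified stand-ins for the mapper's internals:

```go
package main

import (
	"container/heap"
	"fmt"
)

type pointsHeapItem struct {
	time  int64
	value float64
}

type pointsHeap []*pointsHeapItem

func (h pointsHeap) Len() int            { return len(h) }
func (h pointsHeap) Less(i, j int) bool  { return h[i].time < h[j].time }
func (h pointsHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *pointsHeap) Push(x interface{}) { *h = append(*h, x.(*pointsHeapItem)) }
func (h *pointsHeap) Pop() interface{} {
	old := *h
	item := old[len(old)-1]
	*h = old[:len(old)-1]
	return item
}

func main() {
	h := &pointsHeap{
		{time: 2, value: 20},
		{time: 1, value: 10},
	}
	heap.Init(h)

	// Pop the earliest item, then recycle that same allocation for
	// the cursor's next point instead of allocating a fresh item.
	item := heap.Pop(h).(*pointsHeapItem)
	fmt.Println("popped:", item.time, item.value)

	item.time, item.value = 3, 30 // overwrite in place
	heap.Push(h, item)            // push the recycled item back
	fmt.Println("heap size:", h.Len())
}
```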
CPU profiling shows that computing the tagset-based key of each point is a significant CPU cost. These keys don't actually change for a given cursor, so precompute them once at mapper-open time, and then use those values as points are drained from the cursor.
Before this change, the cursor's tag key was computed for every point, which involved marshalling tags.
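A minimal sketch of the precomputation; the names here are illustrative, not the actual tsdb types:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// marshalTags builds a deterministic key from a tag map. Paying this
// cost once per point is what the change removes.
func marshalTags(tags map[string]string) string {
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var sb strings.Builder
	for _, k := range keys {
		sb.WriteString(k)
		sb.WriteByte('=')
		sb.WriteString(tags[k])
		sb.WriteByte(',')
	}
	return sb.String()
}

type tagSetCursor struct {
	tags map[string]string
	key  string // precomputed once when the mapper opens
}

func openCursor(tags map[string]string) *tagSetCursor {
	// The tagset never changes for a given cursor, so compute the
	// key once here; every drained point then reuses it.
	return &tagSetCursor{tags: tags, key: marshalTags(tags)}
}

func main() {
	c := openCursor(map[string]string{"host": "serverA", "region": "us-west"})
	for i := 0; i < 3; i++ {
		// Draining points: the key is a plain field read, not a marshal.
		fmt.Println("point", i, "key:", c.key)
	}
}
```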
This change moves tracking of the next timestamps and values to simple
slices, as performance measurements showed that Peek() on TagSet cursors
was a huge performance drain. There is much more that can be done here,
but with this in place, query performance has been restored to 0.9.1
levels.
This change also uses -1 to indicate that no value is available for a
given timestamp.
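An illustrative sketch of the slice-based buffering and the -1 sentinel, assuming a much-simplified cursor; the real TagSet cursor code differs:

```go
package main

import "fmt"

type tagSetCursor struct {
	cursors  [][]int64 // stand-in for the underlying shard cursors
	pos      []int
	nextTime []int64 // buffered next timestamp per cursor; -1 = none
}

// buffer fills the timestamp slot for cursor i, so the hot path reads
// a slice element instead of calling Peek() on the cursor.
func (t *tagSetCursor) buffer(i int) {
	if t.pos[i] >= len(t.cursors[i]) {
		t.nextTime[i] = -1 // cursor exhausted: no value available
		return
	}
	t.nextTime[i] = t.cursors[i][t.pos[i]]
	t.pos[i]++
}

// next returns the smallest buffered timestamp, or -1 when every
// cursor is exhausted.
func (t *tagSetCursor) next() int64 {
	min, minIdx := int64(-1), -1
	for i, ts := range t.nextTime {
		if ts != -1 && (min == -1 || ts < min) {
			min, minIdx = ts, i
		}
	}
	if minIdx >= 0 {
		t.buffer(minIdx) // refill the slot we just consumed
	}
	return min
}

func main() {
	t := &tagSetCursor{
		cursors:  [][]int64{{1, 4}, {2, 3}},
		pos:      []int{0, 0},
		nextTime: []int64{-1, -1},
	}
	t.buffer(0)
	t.buffer(1)
	for ts := t.next(); ts != -1; ts = t.next() {
		fmt.Println("ts:", ts) // prints 1, 2, 3, 4 in order
	}
}
```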
With this change, remote mapping no longer uses HTTP, as the HTTP ports
exposed by nodes in the cluster are not known cluster-wide. The TCP
ports exposed by the cluster service are, so this change uses that
functionality instead. Each RemoteMapper has its own dedicated connection
pool for each node, and remote-mapping TCP connections are in no way
coupled with query TCP connections.
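A rough sketch of a per-node connection pool of the kind described; the pool type and its API are assumptions, not the actual cluster service code:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// connPool hands out TCP connections to a single node's cluster
// service port, reusing idle connections instead of redialing.
type connPool struct {
	addr string
	mu   sync.Mutex
	idle []net.Conn
}

func (p *connPool) get() (net.Conn, error) {
	p.mu.Lock()
	if n := len(p.idle); n > 0 {
		c := p.idle[n-1]
		p.idle = p.idle[:n-1]
		p.mu.Unlock()
		return c, nil
	}
	p.mu.Unlock()
	return net.Dial("tcp", p.addr)
}

func (p *connPool) put(c net.Conn) {
	p.mu.Lock()
	p.idle = append(p.idle, c)
	p.mu.Unlock()
}

// remoteMapper keeps one pool per node, keyed by the node's cluster
// service TCP address, kept separate from query connections.
type remoteMapper struct {
	pools map[string]*connPool
}

func (m *remoteMapper) pool(addr string) *connPool {
	if p, ok := m.pools[addr]; ok {
		return p
	}
	p := &connPool{addr: addr}
	m.pools[addr] = p
	return p
}

func main() {
	m := &remoteMapper{pools: make(map[string]*connPool)}
	p := m.pool("10.0.0.2:8088") // example cluster-service address
	fmt.Println("pool for node:", p.addr)
}
```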