Point was accessed from multiple goroutines and there was a race on the internal
cachedFields and cachedName fields. Accessing these fields is unnecessary work, as it
requires the point to be unmarshaled into Go types and then remarshaled back into protobuf
types. Instead, just send the line-protocol version already available on the point via
the protobuf. This avoids accessing these cached fields and eliminates some extra work.
Possible fix for #4069
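A minimal sketch of the idea, using hypothetical type names (the real point and request types differ): the point's existing line-protocol string is placed directly into the protobuf request, so nothing ever reads the racy cached fields.

```go
package example

// Point stands in for the real point type; String is assumed to return the
// point's line-protocol representation, which is already available.
type Point interface {
	String() string
}

// WriteShardRequest stands in for the protobuf write request; Points carries
// the line-protocol text for each point.
type WriteShardRequest struct {
	ShardID uint64
	Points  []string
}

// AddPoints copies each point's line protocol straight into the request, so
// the lazily populated cachedFields/cachedName are never touched and no
// unmarshal/remarshal round trip happens.
func (r *WriteShardRequest) AddPoints(points []Point) {
	for _, p := range points {
		r.Points = append(r.Points, p.String())
	}
}
```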
Immediately return once the required number of writes have completed;
otherwise, requests running with relaxed consistency levels (e.g. any
or one) would be blocked unexpectedly, for instance, waiting for dead
nodes to respond.
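A hedged sketch of the early-return behavior; the channel shape and helper name below are illustrative, not the coordinator's actual API.

```go
package example

// waitForRequired blocks only until `required` acknowledgements (as dictated
// by the consistency level) have arrived, instead of waiting for every
// replica, so a write at consistency "any" or "one" is not held up by dead
// nodes.
func waitForRequired(results <-chan error, total, required int) error {
	var ok, failed int
	for i := 0; i < total; i++ {
		err := <-results
		if err != nil {
			failed++
			// Enough replicas have failed that `required` can no longer be
			// reached: give up immediately rather than waiting out timeouts.
			if failed > total-required {
				return err
			}
			continue
		}
		ok++
		// The consistency level is satisfied: return without waiting for
		// the remaining (possibly dead) nodes.
		if ok >= required {
			return nil
		}
	}
	return nil
}
```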
This commit converts meta.ShardInfo.OwnerIDs from a slice of IDs
to a slice of objects. This is to support adding per-node statuses for a
shard. For example, a node may have a shard assigned to it but still
be copying that shard and therefore not yet ready to serve data for it.
The old `OwnerIDs` field is marked as deprecated; however, the code
still supports loading from older protobuf-encoded data.
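A rough sketch of the new shape; the field and type names here approximate, but may not exactly match, the meta package.

```go
package example

// ShardOwner wraps a node ID so that per-owner status (for example, a node
// that is still copying the shard) can be added without another change to
// the wire format.
type ShardOwner struct {
	NodeID uint64
}

// ShardInfo keeps the deprecated OwnerIDs alongside Owners so data encoded
// by older versions can still be loaded.
type ShardInfo struct {
	ID       uint64
	Owners   []ShardOwner
	OwnerIDs []uint64 // Deprecated: read only when decoding old data.
}

// upgradeOwners converts older data that populated only OwnerIDs into the
// new Owners form after decoding.
func (si *ShardInfo) upgradeOwners() {
	if len(si.Owners) == 0 && len(si.OwnerIDs) > 0 {
		for _, id := range si.OwnerIDs {
			si.Owners = append(si.Owners, ShardOwner{NodeID: id})
		}
	}
}
```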
Running SHOW MEASUREMENTS in a partially replicated cluster produces inconsistent
results because of the connection pooling. When running remote meta-data queries,
the cluster service ends up keeping the map-shard request open but still checks the connection
back into the pool. This causes inconsistent results because data left over from the previous request
interferes with the new request.
This change removes the connection pool, which fixes the issue. It also has the side effect of fixing
a node's pooled connections going bad when that node restarts. For example, in a 3-node cluster
that has been responding to queries correctly, restarting one node would cause all the others to fail
to query that node indefinitely. This is now fixed as well.
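Illustrative only (the helper and its signature are made up, not the cluster service's API): with the pool removed, each remote meta-data query dials a fresh connection and closes it once the response has been read, so a half-consumed or stale connection can never be handed to the next request.

```go
package example

import (
	"io"
	"net"
	"time"
)

// queryRemoteNode dials a new TCP connection for every request and never
// returns it to a pool, so state left over from a previous map-shard request
// cannot leak into this one.
func queryRemoteNode(addr string, req []byte) ([]byte, error) {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	if _, err := conn.Write(req); err != nil {
		return nil, err
	}
	return io.ReadAll(conn)
}
```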
* Capitalize first letter of message
* Log all services starting consistently
* Remove some extraneous log statements in meta.Store
* Log data dirs for meta, data and hinted handoff
With this change, remote mapping no longer uses HTTP, as the HTTP ports
exposed by nodes in the cluster are not known cluster-wide. The TCP
ports exposed by the cluster service are, so this change uses that
functionality. Each RemoteMapper has its own dedicated connection pool
for each node, and remote mapping TCP connections are in no way coupled
with query TCP connections.
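A sketch of the per-node pooling idea; the RemoteMapper fields and methods shown here are assumptions for illustration, not the actual type.

```go
package example

import (
	"net"
	"sync"
)

// RemoteMapper keeps its own idle connections, keyed by node ID and dialed
// against the cluster service's TCP port. These pools are private to the
// mapper, so remote-mapping connections are never shared with query
// connections.
type RemoteMapper struct {
	mu    sync.Mutex
	dial  func(nodeID uint64) (net.Conn, error)
	pools map[uint64][]net.Conn
}

// conn returns an idle connection to the node if one is available, otherwise
// it dials a new one through the cluster service.
func (m *RemoteMapper) conn(nodeID uint64) (net.Conn, error) {
	m.mu.Lock()
	if idle := m.pools[nodeID]; len(idle) > 0 {
		c := idle[len(idle)-1]
		m.pools[nodeID] = idle[:len(idle)-1]
		m.mu.Unlock()
		return c, nil
	}
	m.mu.Unlock()
	return m.dial(nodeID)
}

// release puts a healthy connection back into that node's pool for reuse.
func (m *RemoteMapper) release(nodeID uint64, c net.Conn) {
	m.mu.Lock()
	if m.pools == nil {
		m.pools = make(map[uint64][]net.Conn)
	}
	m.pools[nodeID] = append(m.pools[nodeID], c)
	m.mu.Unlock()
}
```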
With this change, the query engine code gathers information about
shards and tagsets by working with individual shards, collating the
information, and returning that to the client. It does not assume that any
particular shard is local, and accesses all shards through abstracted
Mappers, of which there are two types -- a Mapper type for Raw queries
and a second type for Aggregate queries. There are corresponding
Executors for each type of Mapper, but both types of Executors share the
same interface.
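A rough sketch of the shapes described above; the interface and method names are illustrative rather than the engine's exact API.

```go
package example

// Row stands in for whatever result type is streamed back to the client.
type Row struct {
	Name    string
	Tags    map[string]string
	Columns []string
	Values  [][]interface{}
}

// Mapper hides whether a shard is local or remote; the engine opens one per
// shard and drains it chunk by chunk. Raw and Aggregate queries each have
// their own Mapper implementation.
type Mapper interface {
	Open() error
	NextChunk() (interface{}, error)
	Close() error
}

// Executor collates the chunks from all of a statement's Mappers and streams
// rows to the client. A RawExecutor and an AggregateExecutor would both
// satisfy this single interface.
type Executor interface {
	Execute() <-chan *Row
}
```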