There was a race where a remote write could arrive at a node before
that node's meta client cache was updated. When this happened, the
receiving node would drop the write because it could not determine
which database and retention policy the shard it was supposed to
create belonged to.
This change sends the database and retention policy along with the
write so that the receiving node does not need to consult the meta
store. It also lets us skip sending writes for shards that no longer
exist, instead of always sending them and filling the receiving
node's logs with dropped write requests. This second situation can
occur when shards are deleted while some nodes still have writes
queued in hinted handoff for those shards.
Fixes #5610
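
A minimal sketch of the receiving side, assuming a hypothetical
WriteShardRequest shape and shardStore interface (the actual types
and wire format in the cluster package may differ):

    package cluster

    // shardStore abstracts the local TSDB store for this sketch.
    type shardStore interface {
        CreateShard(database, retentionPolicy string, shardID uint64) error
        WriteToShard(shardID uint64, points [][]byte) error
    }

    // WriteShardRequest is a hypothetical request shape. Database and
    // RetentionPolicy travel with the write so the receiver never has
    // to consult the meta store.
    type WriteShardRequest struct {
        ShardID         uint64
        Database        string
        RetentionPolicy string
        Points          [][]byte // serialized points
    }

    type Service struct {
        store shardStore
    }

    // processWriteShardRequest creates the shard locally from the
    // db/rp carried in the request, avoiding the race with the meta
    // client cache.
    func (s *Service) processWriteShardRequest(req *WriteShardRequest) error {
        if err := s.store.CreateShard(req.Database, req.RetentionPolicy, req.ShardID); err != nil {
            return err
        }
        return s.store.WriteToShard(req.ShardID, req.Points)
    }

Because the request itself carries the database and retention policy,
the shard can be created even when the meta client cache has not yet
caught up.
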
Under highly concurrent write load, the coordinating node would
create a connection to any other node that is part of the replica
group. Since each connection can be expensive, OOM situations could
occur because there was no bound on the number of new connections
that would be created. If writes on a remote node were slow,
connections could pile up and exacerbate the problem.
This switches the pool to be bounded, with a blocking checkout that
has a timeout. If a connection is available, it's returned
immediately. If the pool still has room for more connections, it
creates one as needed. Otherwise, the call blocks until a connection
becomes available or the timeout expires. In the case of a timeout,
the error is propagated back up to the PointsWriter, which determines
what to return to the client.
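
A minimal sketch of the bounded checkout, using a channel-backed pool
(names and structure are illustrative, not the actual pool
implementation):

    package cluster

    import (
        "errors"
        "net"
        "sync"
        "time"
    )

    // boundedPool keeps idle connections in a buffered channel and
    // caps the total number of connections ever created at max.
    type boundedPool struct {
        mu      sync.Mutex
        conns   chan net.Conn            // idle connections ready for reuse
        factory func() (net.Conn, error) // dials a new connection
        total   int                      // connections created so far
        max     int                      // hard upper bound
    }

    func newBoundedPool(max int, factory func() (net.Conn, error)) *boundedPool {
        return &boundedPool{
            conns:   make(chan net.Conn, max), // capacity max: checkin never blocks
            factory: factory,
            max:     max,
        }
    }

    // checkout returns an idle connection immediately if one exists,
    // dials a new one if the pool is under its bound, and otherwise
    // blocks until a connection is checked in or the timeout expires.
    func (p *boundedPool) checkout(timeout time.Duration) (net.Conn, error) {
        // Fast path: reuse an idle connection.
        select {
        case c := <-p.conns:
            return c, nil
        default:
        }

        // Room left under the bound: dial a new connection.
        p.mu.Lock()
        if p.total < p.max {
            p.total++
            p.mu.Unlock()
            c, err := p.factory()
            if err != nil {
                p.mu.Lock()
                p.total-- // release the slot on dial failure
                p.mu.Unlock()
            }
            return c, err
        }
        p.mu.Unlock()

        // At the bound: wait for a checkin or time out. The timeout
        // error is what gets propagated back to the PointsWriter.
        select {
        case c := <-p.conns:
            return c, nil
        case <-time.After(timeout):
            return nil, errors.New("timed out waiting for free connection")
        }
    }

    // checkin returns a connection to the idle set.
    func (p *boundedPool) checkin(c net.Conn) {
        p.conns <- c
    }

Buffering the channel at the bound means checkin never blocks, and
the mutex guards only the creation count, so the fast path stays
cheap.
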
NaN float values are not supported by the existing engine or the tsm1
engine. This changes NewPoint to return an error if a field value
contains NaN. It also allows us to validate fields to prevent other
unsupported types from sneaking in through input plugins.
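
A minimal sketch of the validation, assuming a hypothetical
validateFields helper that NewPoint would call before constructing
the point (the real models package may structure this differently):

    package models

    import (
        "fmt"
        "math"
    )

    // validateFields rejects NaN float values and any field type the
    // engines do not support.
    func validateFields(fields map[string]interface{}) error {
        for name, value := range fields {
            switch v := value.(type) {
            case float64:
                if math.IsNaN(v) {
                    return fmt.Errorf("NaN is an unsupported value for field %s", name)
                }
            case int64, string, bool:
                // supported field types
            default:
                return fmt.Errorf("unsupported field type %T for field %s", v, name)
            }
        }
        return nil
    }

Rejecting the point in NewPoint pushes the error back to the caller
instead of letting the unsupported value reach an engine.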