This reduces some of the lock contention when writing to the cache.
When a new entry is created, it avoids an allocation. It also skips
the check for whether the values need sorting when we already know
they do.
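As a rough sketch of both ideas (the types and names here are
illustrative, not the actual cache code), the new entry takes
ownership of the incoming slice, and a needSort flag set at write
time lets later writes skip the ordering check:

package cache

// Value is an illustrative stand-in for a cached point.
type Value struct {
    UnixNano int64
    Val      float64
}

// entry holds the cached values for one series key.
type entry struct {
    values   []Value
    needSort bool // set once; later writes skip the ordering check
}

// newEntry takes ownership of the caller's slice instead of copying
// it, avoiding an extra allocation on the first write to a key.
func newEntry(vs []Value) *entry {
    return &entry{values: vs}
}

// add appends values, checking time ordering only while the entry is
// not already known to need a sort.
func (e *entry) add(vs []Value) {
    if !e.needSort && len(e.values) > 0 && len(vs) > 0 &&
        vs[0].UnixNano < e.values[len(e.values)-1].UnixNano {
        e.needSort = true
    }
    e.values = append(e.values, vs...)
}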
Parsing the line protocol again on the receiving side of the remote
write consumes a lot of CPU. This uses a different marshaling format
that is much faster to parse, since the point was already parsed on
the write side.
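As a rough sketch of the approach, with a hypothetical wirePoint
struct and encoding/gob standing in for the actual wire format:

package cluster

import (
    "bytes"
    "encoding/gob"
)

// wirePoint is a hypothetical already-parsed representation of a point.
type wirePoint struct {
    Name   string
    Tags   map[string]string
    Fields map[string]float64
    Time   int64
}

// marshalPoints re-encodes points that were already parsed on the
// write side, instead of shipping raw line protocol text.
func marshalPoints(pts []wirePoint) ([]byte, error) {
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(pts); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}

// unmarshalPoints decodes straight into structs on the receiving
// side, skipping the line protocol parser entirely.
func unmarshalPoints(b []byte) ([]wirePoint, error) {
    var pts []wirePoint
    err := gob.NewDecoder(bytes.NewReader(b)).Decode(&pts)
    return pts, err
}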
The history file is cleared before WriteHistory is called after each
command/exit() to prevent exponential file growth.
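A minimal sketch of the fix, assuming the CLI's history handling uses
a liner-style WriteHistory(io.Writer) (github.com/peterh/liner is
assumed here): opening the file with O_TRUNC clears the old contents
before the history is rewritten:

package cli

import (
    "os"

    "github.com/peterh/liner"
)

// writeHistory truncates the history file before rewriting it, so the
// file holds exactly one copy of the session history rather than
// growing on every command.
func writeHistory(l *liner.State, path string) error {
    f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0640)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = l.WriteHistory(f)
    return err
}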
This commit addresses issue #5436; please see the PR for a full explanation.
Under highly concurrent write load, the coordinating node would
create a connection to any other node that is part of the replica
group. Since each connection can be expensive, OOM situations could
occur because there was no bound on the number of new connections
that would be created. If writes on a remote node were slow, connections
could pile up and exacerbate the problem.
This switches the pool to be bounded, with a blocking checkout that
has a timeout. If a connection is available, it's returned immediately.
If the pool still has room for more connections, it will create one if needed.
Otherwise, the call will block until a connection becomes available or
the timeout expires. In the case of a timeout, it is propagated back up
to the PointsWriter, which determines what to return to the client.
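A minimal sketch of these checkout semantics using buffered channels;
the names and types here are illustrative, not the actual pool
implementation:

package cluster

import (
    "errors"
    "net"
    "time"
)

type pool struct {
    conns chan net.Conn // idle connections ready for reuse
    slots chan struct{} // one token per connection the pool may open
    dial  func() (net.Conn, error)
}

func newPool(size int, dial func() (net.Conn, error)) *pool {
    return &pool{
        conns: make(chan net.Conn, size),
        slots: make(chan struct{}, size),
        dial:  dial,
    }
}

// checkout returns an idle connection immediately if one is available,
// dials a new one if the pool is under its bound, and otherwise blocks
// until a connection is checked back in or the timeout expires.
func (p *pool) checkout(timeout time.Duration) (net.Conn, error) {
    select {
    case c := <-p.conns:
        return c, nil
    default:
    }
    select {
    case c := <-p.conns:
        return c, nil
    case p.slots <- struct{}{}:
        c, err := p.dial()
        if err != nil {
            <-p.slots // a failed dial frees its slot
            return nil, err
        }
        return c, nil
    case <-time.After(timeout):
        return nil, errors.New("timed out waiting for free connection")
    }
}

// checkin returns a connection to the pool for reuse.
func (p *pool) checkin(c net.Conn) {
    p.conns <- c
}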
Writing the snapshot would deduplicate the snapshot points
while still holding the engine write-lock. This can be expensive
under high load and cause writes to back up and OOM the server.
Instead, grab the snapshot under the lock and dedup it after releasing
the lock.
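A minimal sketch of the reordering, with illustrative types, where
the lock covers only the swap:

package engine

import (
    "sort"
    "sync"
)

// Values is an illustrative stand-in for the snapshot's point slice.
type Values []struct {
    T int64
    V float64
}

// Deduplicate sorts by time and drops duplicate timestamps.
func (vs Values) Deduplicate() Values {
    sort.Slice(vs, func(i, j int) bool { return vs[i].T < vs[j].T })
    out := vs[:0]
    for i, v := range vs {
        if i == 0 || v.T != out[len(out)-1].T {
            out = append(out, v)
        }
    }
    return out
}

// Cache holds in-memory points behind the engine write-lock.
type Cache struct {
    mu     sync.Mutex
    values Values
}

// Snapshot swaps the values out while holding the lock, then runs the
// expensive deduplication only after the lock is released, so writers
// are not blocked for the duration of the dedup.
func (c *Cache) Snapshot() Values {
    c.mu.Lock()
    snapshot := c.values
    c.values = nil
    c.mu.Unlock()

    return snapshot.Deduplicate()
}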
Possible fix for #5442
Possible fix for #5437. meta.Client.RetentionPolicy acquired a read-lock and
then called Database, which called data(), which acquired a read-lock again.
If a write lock was taken between these two read-locks (likely by Authenticate),
the write-lock would block, and the second read-lock would also block,
causing a deadlock.
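A minimal sketch of the hazard and the fix, with illustrative
stand-ins for the meta types:

package meta

import "sync"

type RetentionPolicyInfo struct{ Name string }
type DatabaseInfo struct{ Policies map[string]*RetentionPolicyInfo }
type Data struct{ Databases map[string]*DatabaseInfo }

type client struct {
    mu        sync.RWMutex
    cacheData *Data
}

// data returns the cached state; callers must already hold c.mu.
// Before the fix, data() took c.mu.RLock() itself, so RetentionPolicy
// -> Database -> data acquired the read-lock twice. Go's sync.RWMutex
// blocks new readers once a writer is waiting, so a Lock() arriving
// between the two RLocks left all three goroutines stuck.
func (c *client) data() *Data { return c.cacheData }

// RetentionPolicy now locks exactly once and reads the snapshot
// directly rather than re-entering through another locking method.
func (c *client) RetentionPolicy(db, name string) *RetentionPolicyInfo {
    c.mu.RLock()
    defer c.mu.RUnlock()
    dbi, ok := c.data().Databases[db]
    if !ok {
        return nil
    }
    return dbi.Policies[name]
}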
Previously, if you issued a CQ with a resample interval higher than the
query interval, such as the following:
CREATE CONTINUOUS QUERY cq ON db
RESAMPLE EVERY 4m
BEGIN
SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(2m)
END
This would result in strange behavior because the FOR value defaulted to
the GROUP BY interval, and the minimum time that had to pass before a CQ
ran was the resample interval, so the CQ would not run at the appropriate
intervals even if you set the resample duration to a higher value.
This tweaks the CQ runner so the minimum interval before a bucket
becomes capable of running is the lower of the query interval and the
resample interval, instead of always being the resample interval.
It also sets the default resample duration to the higher of the
query interval and the resample interval, so the above query gets a
default of 4m instead of 2m and will execute 2 queries every 4 minutes.
If you manually set the resample duration to a lower value than the
resample interval, the old behavior will still happen and should be
considered an error.
This also makes it an error, returned by the parser, to create a
continuous query with a resample duration below the resample interval
or query interval (whichever is higher).
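A minimal sketch of the interval rules described above; the function
names are illustrative, not the actual CQ service code:

package cq

import (
    "fmt"
    "time"
)

// runInterval is the minimum time before a bucket may run: the lower
// of the GROUP BY interval and the RESAMPLE EVERY interval.
func runInterval(groupBy, every time.Duration) time.Duration {
    if every > 0 && every < groupBy {
        return every
    }
    return groupBy
}

// defaultResampleFor is the default FOR duration: the higher of the
// GROUP BY interval and the RESAMPLE EVERY interval. For the example
// above this yields 4m, so each run covers two 2m intervals.
func defaultResampleFor(groupBy, every time.Duration) time.Duration {
    if every > groupBy {
        return every
    }
    return groupBy
}

// validateResample mirrors the parser check: an explicit FOR below the
// higher of the two intervals cannot cover a full run and is rejected.
func validateResample(groupBy, every, forDur time.Duration) error {
    if min := defaultResampleFor(groupBy, every); forDur > 0 && forDur < min {
        return fmt.Errorf("FOR duration must be at least %s", min)
    }
    return nil
}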
Fixes #5286.