Profiling of writes showed that point.Fields() and point.Name() were called
repeatedly in PointsWriter and the Shard. These calls are somewhat expensive
when writing large batches, so we now cache their results to avoid wasting CPU cycles.
Using influx_stress with default settings:

Before:
    Wrote 10000000 points at average rate of 202570
    Average response time: 235.450355ms

After:
    Wrote 10000000 points at average rate of 246120
    Average response time: 182.881008ms
We now redact the credentials in the logger, so the function implemented
by the deleted lines is redundant.
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
Previously, password redaction occurred only inside the authentication
handler. The authentication handler is not on the request path for
OPTIONS requests and, in any case, would not be invoked because the
CORS handler returns early on OPTIONS requests.
Now, the response logger explicitly replaces any occurrence of the 'p'
parameter in the query string with '[REDACTED]' prior to logging the
response.
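A minimal sketch of the redaction step, assuming a small helper in the logging
path; the function name is illustrative:

    import "net/http"

    // redactPassword rewrites the request's query string so that any 'p'
    // (password) parameter is replaced with '[REDACTED]' before the URL
    // is written to the response log.
    func redactPassword(r *http.Request) {
        if q := r.URL.Query(); q.Get("p") != "" {
            q.Set("p", "[REDACTED]")
            r.URL.RawQuery = q.Encode()
        }
    }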
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
If no chunking is requested by the user, the coordinating node buffers all
results in RAM before emitting a single result. However, buffering did not
merge results for rows that contained data for the same series. This change
fixes that.
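A sketch of the merge, using a simplified stand-in for the row type returned
to clients; the helper names are illustrative:

    import "sort"

    // Row is a simplified stand-in for the row type returned to clients.
    type Row struct {
        Name   string
        Tags   map[string]string
        Values [][]interface{}
    }

    // seriesKey builds a deterministic identity for a row from its
    // measurement name and sorted tag set.
    func seriesKey(r *Row) string {
        keys := make([]string, 0, len(r.Tags))
        for k := range r.Tags {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        key := r.Name
        for _, k := range keys {
            key += "," + k + "=" + r.Tags[k]
        }
        return key
    }

    // mergeRows collapses buffered rows that belong to the same series
    // into a single row instead of emitting duplicates.
    func mergeRows(rows []*Row) []*Row {
        index := map[string]*Row{}
        var merged []*Row
        for _, r := range rows {
            if existing, ok := index[seriesKey(r)]; ok {
                existing.Values = append(existing.Values, r.Values...)
                continue
            }
            index[seriesKey(r)] = r
            merged = append(merged, r)
        }
        return merged
    }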
Fixes issue #3242.
If the hinted handoff segment is corrupt, the size read could be
invalid, and attempting to create a slice using that size causes
a panic. Ideally, we'd have a checksum on the segment record, but
for now just return an error when the size is larger than the
segment file.
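A sketch of the guard, assuming an 8-byte big-endian length prefix on each
record; the framing details here are illustrative:

    import (
        "encoding/binary"
        "fmt"
        "io"
        "os"
    )

    // readRecord reads one length-prefixed record from the segment file.
    // If the encoded size is larger than the segment itself, the record
    // is corrupt: return an error rather than allocating a bogus slice.
    func readRecord(f *os.File) ([]byte, error) {
        var hdr [8]byte
        if _, err := io.ReadFull(f, hdr[:]); err != nil {
            return nil, err
        }
        sz := binary.BigEndian.Uint64(hdr[:])

        fi, err := f.Stat()
        if err != nil {
            return nil, err
        }
        if sz > uint64(fi.Size()) {
            return nil, fmt.Errorf("record size %d exceeds segment size %d: segment corrupt", sz, fi.Size())
        }

        buf := make([]byte, sz)
        if _, err := io.ReadFull(f, buf); err != nil {
            return nil, err
        }
        return buf, nil
    }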
Fixes #3687
* Capitalize first letter of message
* Log all services starting consistently
* Remove some extraneous log statements in meta.Store
* Log data dirs for meta, data and hinted handoff
When a short write has occurred, we do not have enough bytes to determine
the size of the payload. This is a corrupted record that we should drop.
Instead of panicking, log the error and advance the queue, since the error
at this location is currently unrecoverable.
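A sketch of the handling, with an illustrative queue type; advance() stands in
for the existing code that moves past the bad segment:

    import (
        "encoding/binary"
        "io"
        "log"
    )

    type queue struct {
        segment io.Reader
        logger  *log.Logger
    }

    func (q *queue) advance() {
        // move to the next segment; details omitted in this sketch
    }

    // nextRecord reads the size header for the next record. A short read
    // means the record is truncated: log it and advance the queue rather
    // than panicking, since the data cannot be recovered.
    func (q *queue) nextRecord() ([]byte, bool) {
        hdr := make([]byte, 8)
        if n, err := io.ReadFull(q.segment, hdr); err != nil {
            q.logger.Printf("dropping corrupt record: short read (%d bytes): %v", n, err)
            q.advance()
            return nil, false
        }
        sz := binary.BigEndian.Uint64(hdr)
        payload := make([]byte, sz)
        if _, err := io.ReadFull(q.segment, payload); err != nil {
            q.logger.Printf("dropping corrupt record: %v", err)
            q.advance()
            return nil, false
        }
        return payload, true
    }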
Fixes #3436
With this change, remote mapping no longer uses HTTP, since the HTTP ports
exposed by nodes in the cluster are not known cluster-wide. The TCP
ports exposed by the cluster service are, so this change uses that
functionality instead. Each RemoteMapper has its own dedicated connection pool
for each node, and remote mapping TCP connections are in no way coupled
with query TCP connections.
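A sketch of the per-node pooling described above, with a minimal channel-based
pool standing in for the real pool implementation:

    import (
        "net"
        "sync"
    )

    // RemoteMapper keeps a dedicated pool of TCP connections per node,
    // keyed by the node's cluster-service address, so mapper traffic is
    // never mixed with query connections.
    type RemoteMapper struct {
        mu    sync.Mutex
        pools map[string]chan net.Conn
    }

    func NewRemoteMapper() *RemoteMapper {
        return &RemoteMapper{pools: map[string]chan net.Conn{}}
    }

    func (m *RemoteMapper) conn(addr string) (net.Conn, error) {
        m.mu.Lock()
        p, ok := m.pools[addr]
        if !ok {
            p = make(chan net.Conn, 4) // small idle pool per node
            m.pools[addr] = p
        }
        m.mu.Unlock()

        select {
        case c := <-p: // reuse an idle connection if one is available
            return c, nil
        default:
            return net.Dial("tcp", addr) // otherwise dial the cluster service port
        }
    }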
With this change, the query engine gathers information about shards and
tagsets by working with individual shards, collating the information, and
returning it to the client. It does not assume that any particular shard is
local; all shards are accessed through abstracted Mappers, of which there
are two types: a Mapper type for Raw queries and a second type for Aggregate
queries. There are corresponding Executors for each type of Mapper, and both
types of Executors share the same interface.
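A sketch of the shape of this abstraction; the interface and type names are
illustrative rather than the exact ones in the query engine:

    // Mapper abstracts access to a single shard, whether local or remote.
    type Mapper interface {
        Open() error
        TagSets() []string
        NextChunk() (interface{}, error)
        Close() error
    }

    // Row is a simplified stand-in for a result row sent to the client.
    type Row struct {
        Name   string
        Tags   map[string]string
        Values [][]interface{}
    }

    // Executor drives a set of Mappers and collates their output. Raw and
    // Aggregate queries have different executors behind the same interface.
    type Executor interface {
        Execute() <-chan *Row
    }

    // RawExecutor and AggregateExecutor both satisfy Executor; their
    // Execute implementations are omitted in this sketch.
    type RawExecutor struct{ Mappers []Mapper }
    type AggregateExecutor struct{ Mappers []Mapper }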
Previously there was a batcher per connection, and each batcher was
flushed when its connection closed. This had little effect when multiple
clients connected and disconnected frequently, since each disconnect flushed
the batch immediately. It also did not help UDP traffic.
Instead, there is now a shared batcher for the service so that multiple
connections will not cause frequent flushes.
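A sketch of the shared batcher, using simplified types; the real services
batch parsed points rather than raw lines:

    import (
        "bufio"
        "net"
        "time"
    )

    // Service owns a single batcher; every accepted connection feeds the
    // same channel, so short-lived connections no longer flush tiny
    // batches on close.
    type Service struct {
        in chan string
    }

    func NewService(batchSize int, timeout time.Duration) *Service {
        s := &Service{in: make(chan string)}
        go s.batchLoop(batchSize, timeout)
        return s
    }

    func (s *Service) handleConn(c net.Conn) {
        defer c.Close()
        scan := bufio.NewScanner(c)
        for scan.Scan() {
            s.in <- scan.Text() // the shared batcher decides when to flush
        }
    }

    // batchLoop flushes when the batch is full or the timeout expires,
    // independent of any connection's lifetime.
    func (s *Service) batchLoop(size int, timeout time.Duration) {
        batch := make([]string, 0, size)
        t := time.NewTicker(timeout)
        defer t.Stop()
        for {
            select {
            case line := <-s.in:
                batch = append(batch, line)
                if len(batch) >= size {
                    s.flush(batch)
                    batch = batch[:0]
                }
            case <-t.C:
                if len(batch) > 0 {
                    s.flush(batch)
                    batch = batch[:0]
                }
            }
        }
    }

    func (s *Service) flush(batch []string) {
        // write the batch of points to the database; omitted in this sketch
    }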
Provides a little more flexibility in controlling the parsed
metric names for metrics like:
servers.localhost.cpu.cpu0.user
Previously, you could only use a single field like "cpu" or "user", or a
wildcard to match "cpu.cpu0.user". You can now pull out "cpu" and "user"
and join them together in the metric name using a custom separator
character, which defaults to ".".
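A sketch of the joining logic; the function and its parameters are
illustrative, not the plugin's exact API:

    import "strings"

    // applyTemplate maps each element of a dotted metric name either to a
    // tag or to part of the measurement name. Multiple "measurement"
    // elements are joined with the configured separator (default ".").
    func applyTemplate(metric, template, separator string) (string, map[string]string) {
        parts := strings.Split(metric, ".")
        fields := strings.Split(template, ".")

        var name []string
        tags := map[string]string{}
        for i, f := range fields {
            if i >= len(parts) {
                break
            }
            switch f {
            case "measurement":
                name = append(name, parts[i])
            case "":
                // skip this element of the metric
            default:
                tags[f] = parts[i]
            }
        }
        return strings.Join(name, separator), tags
    }

For "servers.localhost.cpu.cpu0.user" with the illustrative template
".host.measurement..measurement", this yields the metric name "cpu.user"
and the tag host=localhost.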
This adds a sorted search tree for matching filters to a template
more efficiently. Each filter is split on "." and each element is
added to the tree. Patterns with matching prefixes are added under
the same subtree.
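A sketch of the tree node and insertion; the structure is simplified (a map
rather than a sorted slice of children):

    // node is one element of the filter search tree. Each filter is split
    // on "." and inserted element by element, so filters sharing a prefix
    // share a subtree.
    type node struct {
        value    string
        template string           // template to use when a filter terminates here
        children map[string]*node // next filter element (or "*") -> subtree
    }

    func (n *node) insert(elements []string, template string) {
        if len(elements) == 0 {
            n.template = template
            return
        }
        if n.children == nil {
            n.children = map[string]*node{}
        }
        child, ok := n.children[elements[0]]
        if !ok {
            child = &node{value: elements[0]}
            n.children[elements[0]] = child
        }
        child.insert(elements[1:], template)
    }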
These are tags that can be added at the template level. They
will override any global tags, and any parsed tags from the metric with
the same name will override these.
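A sketch of that precedence when a point's tags are assembled; the helper is
illustrative:

    // mergeTags applies tag precedence: template-level tags override
    // global tags, and tags parsed from the metric itself override both.
    func mergeTags(global, templateTags, parsed map[string]string) map[string]string {
        out := map[string]string{}
        for k, v := range global {
            out[k] = v
        }
        for k, v := range templateTags {
            out[k] = v
        }
        for k, v := range parsed {
            out[k] = v
        }
        return out
    }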
This filter implementation is fairly naive and won't scale well
to large numbers of templates and filters. It will be replaced
with a trie-based approach in the future.