Benchmark improvements with this change:
benchmark                                 old ns/op    new ns/op    delta
BenchmarkExportTSMFloats_100s_250vps-4    23206480     10279106     -55.71%
BenchmarkExportTSMInts_100s_250vps-4      17995000     5762310      -67.98%
BenchmarkExportTSMBools_100s_250vps-4     17067605     4235467      -75.18%
BenchmarkExportTSMStrings_100s_250vps-4   54846997     34682568     -36.76%
BenchmarkExportWALFloats_100s_250vps-4    23459937     10436297     -55.51%
BenchmarkExportWALInts_100s_250vps-4      18747150     6236062      -66.74%
BenchmarkExportWALBools_100s_250vps-4     17988273     4814358      -73.24%
BenchmarkExportWALStrings_100s_250vps-4   59700802     35815739     -40.01%

benchmark                                 old allocs   new allocs   delta
BenchmarkExportTSMFloats_100s_250vps-4    201442       51738        -74.32%
BenchmarkExportTSMInts_100s_250vps-4      201442       51728        -74.32%
BenchmarkExportTSMBools_100s_250vps-4     201441       51638        -74.37%
BenchmarkExportTSMStrings_100s_250vps-4   404092       201584       -50.11%
BenchmarkExportWALFloats_100s_250vps-4    250322       75627        -69.79%
BenchmarkExportWALInts_100s_250vps-4      250323       75617        -69.79%
BenchmarkExportWALBools_100s_250vps-4     250321       75527        -69.83%
BenchmarkExportWALStrings_100s_250vps-4   452868       225291       -50.25%

benchmark                                 old bytes    new bytes    delta
BenchmarkExportTSMFloats_100s_250vps-4    5170539      2351789      -54.52%
BenchmarkExportTSMInts_100s_250vps-4      5143189      2331276      -54.67%
BenchmarkExportTSMBools_100s_250vps-4     3724951      2143780      -42.45%
BenchmarkExportTSMStrings_100s_250vps-4   17131400     10796281     -36.98%
BenchmarkExportWALFloats_100s_250vps-4    4487868      1468109      -67.29%
BenchmarkExportWALInts_100s_250vps-4      4458395      1452359      -67.42%
BenchmarkExportWALBools_100s_250vps-4     2838719      1258755      -55.66%
BenchmarkExportWALStrings_100s_250vps-4   16787201     10010700     -40.37%
Also, after improving those benchmarks, I did a time-filtered export of
a 450MB TSM file to a 21GB plain-text output, with and without wrapping
the output in a `bufio.Writer`.
Without buffering, it took about 263s; with buffering, it took about
60s, a delta of about -77%.
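A minimal sketch of the buffering approach, assuming the export loop writes line protocol to an `io.Writer`; the function, file name, and sample line are illustrative, not the actual export code:

```go
package main

import (
	"bufio"
	"io"
	"os"
)

// writeExport is an illustrative stand-in for the export loop: it emits many
// small writes, so wrapping the destination in a bufio.Writer batches the
// underlying syscalls instead of issuing one per line.
func writeExport(w io.Writer) error {
	bw := bufio.NewWriter(w)
	for i := 0; i < 1000; i++ {
		if _, err := io.WriteString(bw, "cpu,host=server01 value=0.5 1463683075000000000\n"); err != nil {
			return err
		}
	}
	// Flush is required, or the tail of the output stays in the buffer.
	return bw.Flush()
}

func main() {
	f, err := os.Create("export.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := writeExport(f); err != nil {
		panic(err)
	}
}
```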
It looks like the canonical import path for the project is go.uber.org/zap
rather than github.com/uber-go/zap, since the example in the project
references that path.
This was needed when we were on Go 1.4 but hasn't been needed since Go
1.5. It was kept because we weren't sure at the time whether we would have
to roll back to an older version of Go, and keeping it meant we wouldn't
forget to re-add it.
Now that we are on Go 1.7 and Go 1.4 is deprecated, there is no going back,
so we might as well remove this so people can set GOMAXPROCS to a custom
value through the environment variable.
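Nothing new is needed in the code for this; a tiny sketch showing that the runtime now picks up the environment variable (the `GOMAXPROCS=2 influxd` invocation in the comment is only an example):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Since Go 1.5 the runtime defaults GOMAXPROCS to the number of CPUs and
	// honors the GOMAXPROCS environment variable, e.g.:
	//   GOMAXPROCS=2 influxd
	// Passing 0 reads the current setting without changing it.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```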
The logging library has been switched to uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of structured
fields or the new log levels. Those changes will come in future
commits.
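A minimal sketch (using the current go.uber.org/zap API, not code from this commit) of the difference between a plain message and a structured one; the messages and fields here are invented for illustration:

```go
package main

import "go.uber.org/zap"

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	// Today's statements carry only a message, which still works as-is:
	logger.Info("Starting retention policy enforcement service")

	// A future commit could attach structured fields instead of formatting
	// values into the message string:
	logger.Info("Shard opened",
		zap.Uint64("id", 42),
		zap.String("path", "/var/lib/influxdb/data/mydb/autogen/1"),
	)
}
```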
`percentile()` is supposed to be a selector and return the time of the
point, but that was only fixed when the input was a float. This updates
the integer processor to also return the time of the point rather than
the beginning of the interval.
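A sketch of the selector behavior, assuming a nearest-rank percentile; the point type and helper are illustrative, not the actual query engine code. The key detail is that the selected point itself is returned, so its own timestamp is preserved instead of the interval start:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// integerPoint is an illustrative stand-in for the query engine's point type.
type integerPoint struct {
	Time  int64
	Value int64
}

// selectPercentile returns the chosen point itself, so callers see the
// point's own timestamp rather than the beginning of the interval.
func selectPercentile(points []integerPoint, percentile float64) integerPoint {
	sorted := append([]integerPoint(nil), points...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Value < sorted[j].Value })
	i := int(math.Floor(float64(len(sorted))*percentile/100.0+0.5)) - 1
	if i < 0 {
		i = 0
	}
	return sorted[i]
}

func main() {
	pts := []integerPoint{{Time: 10, Value: 3}, {Time: 20, Value: 9}, {Time: 30, Value: 6}}
	fmt.Println(selectPercentile(pts, 90)) // {20 9}: the point's time, not the interval start
}
```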
This is a no-op on platforms with the Unix path separator.
On Windows, paths are converted to forward slashes before being added to the archive and converted back to backslashes during restore.
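A minimal sketch of the conversion using the standard library helpers; the file name is made up:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// During backup on Windows, names are normalized to forward slashes
	// before they go into the archive; on Unix this is a no-op because the
	// separator is already '/'.
	archiveName := filepath.ToSlash(`data\mydb\autogen\1\000000001-000000001.tsm`)
	fmt.Println(archiveName) // on Windows: data/mydb/autogen/1/000000001-000000001.tsm

	// During restore, names are converted back to the platform separator
	// (backslashes on Windows, unchanged elsewhere).
	localName := filepath.FromSlash("data/mydb/autogen/1/000000001-000000001.tsm")
	fmt.Println(localName)
}
```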
The `partial` tag has been added to the JSON response for both a series and
a result so that a client knows when more of that series or result will
be sent in a future JSON chunk.
This helps interactive clients that don't want to wait for all of the
data to know whether the server is done processing the current series or
the current result. Previously, the client had to guess whether the next
chunk would refer to the same result or a new one, and it had to match the
name and tags of two series to know whether they were the same series. Now,
the client just needs to check the `partial` field included with the
response to know whether it should expect more.
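A sketch of how a client might model the chunked response; the struct layout mirrors the JSON keys described above but is illustrative, not taken from a client library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type series struct {
	Name    string            `json:"name"`
	Tags    map[string]string `json:"tags,omitempty"`
	Columns []string          `json:"columns"`
	Values  [][]interface{}   `json:"values"`
	Partial bool              `json:"partial"` // true: more values for this series arrive in a later chunk
}

type result struct {
	Series  []series `json:"series"`
	Partial bool     `json:"partial"` // true: more series for this result arrive in a later chunk
}

type response struct {
	Results []result `json:"results"`
}

func main() {
	chunk := []byte(`{"results":[{"series":[{"name":"cpu","columns":["time","value"],"values":[[1,0.5]],"partial":true}],"partial":true}]}`)
	var r response
	if err := json.Unmarshal(chunk, &r); err != nil {
		panic(err)
	}
	fmt.Println("result partial:", r.Results[0].Partial, "series partial:", r.Results[0].Series[0].Partial)
}
```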
Fixed `max-row-limit` so it counts rows instead of results and it
truncates the response when the `max-row-limit` is reached.
There are two new keys in the configuration file:
- security-level: "none", "sign", or "encrypt".
- auth-file: The location of the user/password file.
Please see the collectd network doc for more details.
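A hedged example of how the new keys might appear in the collectd section of the configuration file; the section header, surrounding keys, and values are illustrative and may differ by version:

```toml
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  # New in this release:
  security-level = "encrypt"              # "none", "sign", or "encrypt"
  auth-file = "/etc/collectd/auth_file"   # location of the user/password file
```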
When a query used a grouping with two different aggregates, it was
possible for one of the aggregates to return a value from a different
series key than the other. When the series keys didn't match, the
returned grouping was incorrect because results were sorted by time
before being checked for name and tags.
This did not happen when the aggregates returned values for the same
series keys, because then the iterators were aligned with each other.
When a limit is exceeded, we return errors and sometimes log (if appropriate)
that a limit was exceeded. The messages don't always indicate where or how
the limit is configured.
Instead, include the config option (which is easy to search for) as well as
the configured limit and, when possible, the value that exceeded it.
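A sketch of the style of message described; the helper and the option name are examples, not taken from the code base:

```go
package main

import "fmt"

// limitError builds an error that names the config option (easy to grep for
// in the config file and docs), the configured limit, and the value that
// exceeded it.
func limitError(option string, limit, got int) error {
	return fmt.Errorf("%s limit exceeded: (%d/%d)", option, got, limit)
}

func main() {
	fmt.Println(limitError("max-select-series", 1000, 1042))
	// max-select-series limit exceeded: (1042/1000)
}
```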
The admin console would dynamically discover the version from the
InfluxDB server, but for patch releases it included the patch number in
the link to the documentation, and that wasn't a valid link.
Truncate the version so the documentation URL is correct, since we only
publish documentation for `major.minor`.
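The truncation itself is simple; a sketch in Go (the admin console is actually JavaScript, and the URL below is only an example of a `major.minor` docs path):

```go
package main

import (
	"fmt"
	"strings"
)

// truncateToMinor drops everything after major.minor, so a patch release
// such as "1.0.2" links to the "1.0" documentation.
func truncateToMinor(version string) string {
	parts := strings.SplitN(version, ".", 3)
	if len(parts) < 2 {
		return version
	}
	return parts[0] + "." + parts[1]
}

func main() {
	fmt.Println("https://docs.influxdata.com/influxdb/v" + truncateToMinor("1.0.2") + "/")
}
```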
Changes the default time boundaries for raw queries so that they range
until the end of time. Aggregate queries continue to have `now()` as their
default end time.
The information in the usage string for the `help` command about
how to get more information about a specific command was incorrect:
"influxd help [command]" does not work, but "influxd [command] --help"
does.