Benchmark improvements with this change:
benchmark                                 old ns/op   new ns/op   delta
BenchmarkExportTSMFloats_100s_250vps-4     23206480    10279106   -55.71%
BenchmarkExportTSMInts_100s_250vps-4       17995000     5762310   -67.98%
BenchmarkExportTSMBools_100s_250vps-4      17067605     4235467   -75.18%
BenchmarkExportTSMStrings_100s_250vps-4    54846997    34682568   -36.76%
BenchmarkExportWALFloats_100s_250vps-4     23459937    10436297   -55.51%
BenchmarkExportWALInts_100s_250vps-4       18747150     6236062   -66.74%
BenchmarkExportWALBools_100s_250vps-4      17988273     4814358   -73.24%
BenchmarkExportWALStrings_100s_250vps-4    59700802    35815739   -40.01%

benchmark                                old allocs  new allocs     delta
BenchmarkExportTSMFloats_100s_250vps-4       201442       51738   -74.32%
BenchmarkExportTSMInts_100s_250vps-4         201442       51728   -74.32%
BenchmarkExportTSMBools_100s_250vps-4        201441       51638   -74.37%
BenchmarkExportTSMStrings_100s_250vps-4      404092      201584   -50.11%
BenchmarkExportWALFloats_100s_250vps-4       250322       75627   -69.79%
BenchmarkExportWALInts_100s_250vps-4         250323       75617   -69.79%
BenchmarkExportWALBools_100s_250vps-4        250321       75527   -69.83%
BenchmarkExportWALStrings_100s_250vps-4      452868      225291   -50.25%

benchmark                                 old bytes   new bytes     delta
BenchmarkExportTSMFloats_100s_250vps-4      5170539     2351789   -54.52%
BenchmarkExportTSMInts_100s_250vps-4        5143189     2331276   -54.67%
BenchmarkExportTSMBools_100s_250vps-4       3724951     2143780   -42.45%
BenchmarkExportTSMStrings_100s_250vps-4    17131400    10796281   -36.98%
BenchmarkExportWALFloats_100s_250vps-4      4487868     1468109   -67.29%
BenchmarkExportWALInts_100s_250vps-4        4458395     1452359   -67.42%
BenchmarkExportWALBools_100s_250vps-4       2838719     1258755   -55.66%
BenchmarkExportWALStrings_100s_250vps-4    16787201    10010700   -40.37%
Also, after improving those benchmarks, I ran a time-filtered export of a
450MB TSM file to 21GB of plain-text output, with and without wrapping the
output in a bufio.Writer.
Without buffering it took about 263s, and with buffering about 60s, for a
delta of about -77%.
The reducers already had a local RNG but mistakenly did not use it when
sampling points.
Because the local RNG, unlike the global math/rand source, is not
protected by a mutex, there is a slight speedup as a result of this change:
benchmark                        old ns/op   new ns/op    delta
BenchmarkSampleIterator_1k-4           418         418   +0.00%
BenchmarkSampleIterator_100k-4         434         422   -2.76%
BenchmarkSampleIterator_1M-4           449         439   -2.23%

benchmark                       old allocs  new allocs    delta
BenchmarkSampleIterator_1k-4             3           3   +0.00%
BenchmarkSampleIterator_100k-4           3           3   +0.00%
BenchmarkSampleIterator_1M-4             3           3   +0.00%

benchmark                        old bytes   new bytes    delta
BenchmarkSampleIterator_1k-4           304         304   +0.00%
BenchmarkSampleIterator_100k-4         304         304   +0.00%
BenchmarkSampleIterator_1M-4           304         304   +0.00%
The speedup would presumably increase when multiple sample iterators are
used concurrently.
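For illustration, here is a sketch of the pattern with hypothetical names;
a reducer that samples through its own *rand.Rand never touches the mutex
that guards the global math/rand source:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // sampleReducer is a hypothetical stand-in for a sampling reducer that
    // owns its RNG instead of calling the package-level rand functions,
    // which serialize on a global lock.
    type sampleReducer struct {
        rng    *rand.Rand
        points []float64
        seen   int
    }

    func newSampleReducer(size int) *sampleReducer {
        return &sampleReducer{
            rng:    rand.New(rand.NewSource(time.Now().UnixNano())),
            points: make([]float64, 0, size),
        }
    }

    // offer performs standard reservoir sampling using the local source.
    func (r *sampleReducer) offer(v float64) {
        r.seen++
        if len(r.points) < cap(r.points) {
            r.points = append(r.points, v)
            return
        }
        // Note r.rng.Intn rather than rand.Intn: no lock is taken here.
        if j := r.rng.Intn(r.seen); j < len(r.points) {
            r.points[j] = v
        }
    }

    func main() {
        r := newSampleReducer(2)
        for i := 0; i < 10; i++ {
            r.offer(float64(i))
        }
        fmt.Println(r.points)
    }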
It looks like the canonical import path for the project is go.uber.org/zap
rather than github.com/uber-go/zap, since the example in the project
references that path.
Currently, whenever a snapshot occurs the Cache is reset, so many
allocations are repeated as the same type of data is re-added to
the Cache.
This commit allows the stores to keep track of the number of values
within an entry, and use that size as a hint when the same entry needs
to be recreated after a snapshot.
To avoid hints persisting over a long period of time, they are deleted
after every snapshot and rebuilt using only the most recent entries.
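A sketch of the idea, with hypothetical type and field names rather than
the actual Cache code:

    package main

    import "fmt"

    type entry struct {
        values []float64
    }

    type store struct {
        entries   map[string]*entry
        sizeHints map[string]int
    }

    // snapshot swaps out the current entries and rebuilds the hints from
    // the most recent entries only, so stale hints do not persist across
    // multiple snapshots.
    func (s *store) snapshot() map[string]*entry {
        old := s.entries
        s.sizeHints = make(map[string]int, len(old))
        for k, e := range old {
            s.sizeHints[k] = len(e.values)
        }
        s.entries = make(map[string]*entry, len(old))
        return old
    }

    // write recreates an entry after a snapshot, using the recorded size
    // as a capacity hint so the value slice does not grow repeatedly.
    func (s *store) write(key string, v float64) {
        e := s.entries[key]
        if e == nil {
            e = &entry{values: make([]float64, 0, s.sizeHints[key])}
            s.entries[key] = e
        }
        e.values = append(e.values, v)
    }

    func main() {
        s := &store{entries: map[string]*entry{}}
        for i := 0; i < 1000; i++ {
            s.write("cpu", float64(i))
        }
        s.snapshot()
        s.write("cpu", 1)
        fmt.Println(cap(s.entries["cpu"].values)) // 1000: sized from the hint
    }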
This was needed when we were on Go 1.4 but hasn't been needed since Go
1.5. It was kept because, at the time, we weren't sure whether we would
have to roll back to an older version of Go, and keeping it meant we
wouldn't forget to re-add it.
Now that we are on Go 1.7 and Go 1.4 is deprecated, there is no going
back, so we might as well remove this so people can set GOMAXPROCS to a
custom value via the environment variable.
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
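As an illustration of the structured style (the message and field names
below are invented for this example and use the current go.uber.org/zap
API, not code from this commit):

    package main

    import "go.uber.org/zap"

    func main() {
        logger, err := zap.NewProduction()
        if err != nil {
            panic(err)
        }
        defer logger.Sync()

        // Fields are typed key/value pairs rather than being formatted
        // into the message string, which is what makes the output
        // structured and machine-parseable.
        logger.Info("compaction finished",
            zap.String("engine", "tsm1"),
            zap.Int("tsm_files", 4),
        )
    }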
Deduplicate is called from various places in the engine and can create
a lot of garbage. It first creates a map and adds each value to it in
order (1st alloc). It then creates a new slice (2nd alloc) and appends
everything from the map to that slice.
Finally, it sorts the new slice (3rd alloc).
This switches the algorithm to use a stable sort and to reuse the
existing slice, avoiding those allocations.
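A minimal sketch of the new approach (the Value type and names are
hypothetical, not the engine's actual code):

    package main

    import (
        "fmt"
        "sort"
    )

    // Value is a hypothetical stand-in for the engine's timestamped value.
    type Value struct {
        UnixNano int64
        Data     float64
    }

    type byTime []Value

    func (a byTime) Len() int           { return len(a) }
    func (a byTime) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    func (a byTime) Less(i, j int) bool { return a[i].UnixNano < a[j].UnixNano }

    // Deduplicate stable-sorts a in place and collapses runs of equal
    // timestamps, keeping the most recently appended value. It reuses the
    // existing backing array, so no map or second slice is allocated.
    func Deduplicate(a []Value) []Value {
        if len(a) <= 1 {
            return a
        }
        // A stable sort preserves insertion order among equal timestamps,
        // so the last-written value for a timestamp ends its run.
        sort.Stable(byTime(a))

        out := a[:1]
        for _, v := range a[1:] {
            if v.UnixNano == out[len(out)-1].UnixNano {
                out[len(out)-1] = v // newer value wins
                continue
            }
            out = append(out, v)
        }
        return out
    }

    func main() {
        vs := []Value{{3, 1}, {1, 2}, {3, 9}, {2, 4}}
        fmt.Println(Deduplicate(vs)) // [{1 2} {2 4} {3 9}]
    }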
The URL must have a scheme of udp, http, or https, and a port number.
CREATE SUBSCRIPTION will fail if there are invalid destinations.
Additionally, Service.createSubscription fails when invalid destinations
are detected.
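A sketch of the check (the function name is hypothetical; it uses
url.Parse and url.URL.Port from the standard library):

    package main

    import (
        "fmt"
        "net/url"
    )

    // validateDestination is a hypothetical version of the check applied
    // to each subscription destination.
    func validateDestination(dest string) error {
        u, err := url.Parse(dest)
        if err != nil {
            return fmt.Errorf("invalid destination %q: %v", dest, err)
        }
        switch u.Scheme {
        case "udp", "http", "https":
            // allowed schemes
        default:
            return fmt.Errorf("invalid destination %q: scheme must be udp, http, or https", dest)
        }
        if u.Port() == "" {
            return fmt.Errorf("invalid destination %q: missing port number", dest)
        }
        return nil
    }

    func main() {
        fmt.Println(validateDestination("udp://example.com:9999")) // <nil>
        fmt.Println(validateDestination("ftp://example.com:21"))   // scheme error
        fmt.Println(validateDestination("http://example.com"))     // port error
    }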
Fixes #7615