Commit Graph

1036 Commits (70817350b7e66938bc5dbd8a482264394f375b69)

Author SHA1 Message Date
Jason Wilder 70817350b7 Ensure temp index files are cleaned up on error 2017-10-03 10:48:14 -06:00
Jason Wilder a5afaf7499 Fix cache mem size not including key size 2017-10-03 10:48:14 -06:00
Jason Wilder ae821f4e2d Rework compaction scheduling
This changes the compaction scheduling to better utilize the cores that
are free.  Previously, each level was planned in its own goroutine and
would kick off a number of compaction groups.  The problem with this
model was that if there were 4 groups and 3 completed quickly, planning
for that level would be blocked until the last group finished.  If the
compactions at the prior level were running more quickly, a large
backlog could accumulate.

This now moves the planning to a single goroutine that plans each level in
succession and starts as many groups as it can.  When one group finishes,
the planning will start the next group for the level.
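
A minimal sketch of that scheme, assuming hypothetical plan/compact helpers in place of the real tsm1 planner:

```
package sketch

import "time"

// CompactionGroup stands in for a set of TSM files compacted together.
type CompactionGroup []string

func plan(level int) []CompactionGroup { return nil } // hypothetical planner
func compact(g CompactionGroup)        {}             // hypothetical worker

// runPlanner plans each level in succession and starts as many groups
// as the semaphore allows.  A finished group frees its slot right away,
// so one slow group no longer blocks planning for its whole level.
func runPlanner(maxConcurrency int) {
	sem := make(chan struct{}, maxConcurrency)
	for {
		for level := 1; level <= 4; level++ {
			for _, g := range plan(level) {
				sem <- struct{}{} // reserve a slot
				go func(g CompactionGroup) {
					defer func() { <-sem }()
					compact(g)
				}(g)
			}
		}
		time.Sleep(time.Second) // idle briefly, then re-plan
	}
}
```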
2017-10-03 10:48:13 -06:00
Jason Wilder f668b0cc3f Only use O_SYNC for tsm file writing
Doing this for the WAL reduces throughput quite a bit.
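
In practice this comes down to the open flags on the TSM writer, roughly (a sketch, not the actual tsm1 code):

```
package sketch

import "os"

// openTSMFile opens a TSM file with O_SYNC so each write is durable
// when it returns.  The WAL skips the flag: paying a full sync on
// every WAL write costs too much throughput.
func openTSMFile(path string) (*os.File, error) {
	return os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_SYNC, 0666)
}
```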
2017-10-03 10:48:13 -06:00
Jason Wilder 1610ae5727 Don't return tsm files part of a compaction plan 2017-10-03 10:48:13 -06:00
Joe LeGasse 1443b22379 auth: add series auth to 'show tag values' 2017-09-27 20:01:18 -04:00
Edd Robinson e0cba4477c Merge pull request #8885 from influxdata/er-entry-race
Fix race on Cache entry
2017-09-27 18:42:45 +01:00
Edd Robinson d0b81c1e6c Fix race on Cache entry 2017-09-27 18:10:23 +01:00
Edd Robinson a1b67160f6 Use math/bits in encoder 2017-09-26 12:51:08 +01:00
Jason Wilder 122a74c692 Use synchronous IO for wal and tsm writing
The fsyncs due to large writes when writing to TSM files and the
WAL can eventually cause large pauses.  Since we already buffer
writes, using synchronous IO reduces fsync latency by ensuring
the individual writes hit disk.  This spreads the latency
across multiple writes more evenly.
2017-09-25 12:44:57 -06:00
Jason Wilder 5774b44a4c Remove MADV_RANDOM
This was inadvertently added when merging the solaris and unix
mmap files.  This causes large delays due to major page faults.
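
The removed advice amounted to a call like the following (sketch using golang.org/x/sys/unix):

```
package sketch

import "golang.org/x/sys/unix"

// adviseRandom shows the kind of call that was removed.  MADV_RANDOM
// turns off readahead for the mapping, so sequential scans of a TSM
// file take a major page fault on nearly every page instead of being
// prefetched.
func adviseRandom(data []byte) error {
	return unix.Madvise(data, unix.MADV_RANDOM)
}
```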
2017-09-25 10:25:06 -06:00
Jason Wilder 94aba64b88 Re-use index entries slice when writing TSM index 2017-09-21 12:48:16 -06:00
Jason Wilder db204f3eb7 Default concurrent compactions to 50% of available cores 2017-09-21 12:48:11 -06:00
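
That default presumably works out to something like the following (sketch; the floor of one is an assumption):

```
package sketch

import "runtime"

// defaultCompactionConcurrency is 50% of the available cores, with a
// floor of one so single-core machines still compact.
func defaultCompactionConcurrency() int {
	n := runtime.GOMAXPROCS(0) / 2
	if n < 1 {
		n = 1
	}
	return n
}
```
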
Jason Wilder deef0c5649 Fix 32bit alignment 2017-09-20 10:00:20 -06:00
Jason Wilder 61ca1243c7 Increase index disk writer buffer 2017-09-20 09:05:30 -06:00
Jason Wilder 796de3dcea Reduce encoder pool checkout contention
With higher cardinalities, the encoder pools were becoming a bottleneck.
This changes the snapshot compactions to check out one encoder of each
type and re-use it while writing the snapshots, as opposed to repeatedly
checking it out and back in.
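
A sketch of the pattern, with a stand-in Encoder type (the real engine keeps one pool per value type):

```
package sketch

import "sync"

// Encoder stands in for the per-type value encoders.
type Encoder struct{ buf []byte }

func (e *Encoder) Reset()          { e.buf = e.buf[:0] }
func (e *Encoder) Write(v float64) {}

var encoderPool = sync.Pool{New: func() interface{} { return &Encoder{} }}

// writeSnapshot checks an encoder out once and re-uses it for every
// block, rather than a Get/Put pair per block.  The per-block pool
// traffic was the contention point at high cardinality.
func writeSnapshot(blocks [][]float64) {
	enc := encoderPool.Get().(*Encoder)
	defer encoderPool.Put(enc)
	for _, block := range blocks {
		enc.Reset()
		for _, v := range block {
			enc.Write(v)
		}
		// encoded block would be flushed to the TSM writer here
	}
}
```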
2017-09-19 15:27:26 -06:00
Jason Wilder 391a6288c6 Write parallel snapshot for higher cardinalities 2017-09-19 15:27:26 -06:00
Jason Wilder 0d52b060df Skip onFileStoreReplace with tsi 2017-09-19 15:27:25 -06:00
Jason Wilder 4fe81aeee6 Remove manual Gosched from compactions
At higher cardinalities, this dramatically reduces compaction throughput.
2017-09-19 15:27:25 -06:00
Jason Wilder 31e785d676 Don't deduplicate a single value 2017-09-19 15:27:25 -06:00
Jason Wilder 2ca9ccee1f Reset snapshot cache outside of write lock 2017-09-19 15:27:25 -06:00
Jason Wilder ddeba2c86b Split large snapshots and write concurrently 2017-09-19 15:27:25 -06:00
Jason Wilder 9ee305f6f5 Periodically re-allocate cache store
This periodically re-allocates the cache store to avoid memory
fragmentation and the gradual slowdown of the store after repeated
deletes and inserts into the map.
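
Go maps never shrink their bucket arrays, so shedding that overhead means copying the live entries into a fresh map (sketch):

```
package sketch

type entry struct{ values []float64 } // stand-in for a cache entry

// reallocStore copies live entries into a freshly allocated map,
// releasing buckets accumulated through repeated deletes and inserts.
func reallocStore(old map[string]*entry) map[string]*entry {
	fresh := make(map[string]*entry, len(old))
	for k, v := range old {
		fresh[k] = v
	}
	return fresh
}
```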
2017-09-19 15:27:25 -06:00
Jason Wilder 2885b9b310 Remove entrySizeHints map
There is a lot of overhead for calculating the hints for larger
cardinalities.  This slows down resetting the partitions in the ring.
2017-09-19 15:27:25 -06:00
Jason Wilder 4124a8ed97 Simplify cache ring
The continuum slice is not needed since the number of partitions
doesn't change.  This removes the slice to make the mapping simpler.
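
With a fixed partition count, the mapping reduces to a plain hash-mod (sketch; the partition count of 16 is an assumption):

```
package sketch

import "hash/fnv"

const partitions = 16 // fixed at creation, so no continuum is needed

type ring struct {
	parts [partitions]map[string][]float64
}

// partition maps a series key straight to its partition.  Since the
// count never changes, hash-mod replaces the continuum slice.
func (r *ring) partition(key []byte) int {
	h := fnv.New32a()
	h.Write(key)
	return int(h.Sum32() % partitions)
}
```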
2017-09-19 15:27:25 -06:00
Stuart Carnie ed7bc9d825 fix FindValues panic for empty array 2017-09-19 14:23:32 -07:00
Stuart Carnie 92756ec0ad Reduce allocations, improve readEntries performance by simplifying loop
* callers of ReadEntries and Key API can cache allocated slice
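
The enabling API shape is append-style slice reuse, roughly (hypothetical signature, not the actual reader method):

```
package sketch

// IndexEntry stands in for a TSM index entry.
type IndexEntry struct{ MinTime, MaxTime int64 }

// ReadEntries decodes the entries for key into dst and returns it.
// A caller iterating many keys can pass the same slice back in and
// avoid an allocation per key:
//
//	var buf []IndexEntry
//	for _, k := range keys {
//		buf = ReadEntries(k, buf[:0])
//	}
func ReadEntries(key []byte, dst []IndexEntry) []IndexEntry {
	// entries for key would be decoded and appended to dst here
	return dst
}
```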
2017-09-19 11:57:10 -07:00
Stuart Carnie baa05de3f8 add benchmarks 2017-09-19 11:47:48 -07:00
Stuart Carnie cfc6a1cd9f implement optimization for Include function
```
benchmark                                            old ns/op     new ns/op     delta
BenchmarkIntegerValues_IncludeNone_1000-8            651           6.69          -98.97%
BenchmarkIntegerValues_IncludeMiddleHalf_1000-8      1131          114           -89.92%
BenchmarkIntegerValues_IncludeFirst_1000-8           638           33.9          -94.69%
BenchmarkIntegerValues_IncludeLast_1000-8            1269          32.2          -97.46%
BenchmarkIntegerValues_IncludeNone_10000-8           7751          6.76          -99.91%
BenchmarkIntegerValues_IncludeMiddleHalf_10000-8     11582         1378          -88.10%
BenchmarkIntegerValues_IncludeFirst_10000-8          7911          43.8          -99.45%
BenchmarkIntegerValues_IncludeLast_10000-8           12442         38.4          -99.69%
```

(cherry picked from commit fb93ad5)
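
Speedups of this magnitude are consistent with replacing a per-element filter with binary searches plus one in-place copy; a sketch over a hypothetical Values type:

```
package sketch

import "sort"

// Value stands in for the generated <type>Value types.
type Value struct {
	UnixNano int64
	V        float64
}

type Values []Value

// Include keeps only values with min <= time <= max.  Two binary
// searches locate the boundaries and one copy compacts the slice in
// place, instead of testing every element.
func (a Values) Include(min, max int64) Values {
	lo := sort.Search(len(a), func(i int) bool { return a[i].UnixNano >= min })
	hi := sort.Search(len(a), func(i int) bool { return a[i].UnixNano > max })
	return a[:copy(a, a[lo:hi])]
}
```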
2017-09-19 09:53:28 -07:00
Stuart Carnie ca40c1ad3c <type>Values.Exclude function uses binary search and copy builtin
```
± benchcmp old.txt new.txt
benchmark                                            old ns/op     new ns/op     delta
BenchmarkIntegerValues_ExcludeNone_1000-8            1285          7.34          -99.43%
BenchmarkIntegerValues_ExcludeMiddleHalf_1000-8      1258          148           -88.24%
BenchmarkIntegerValues_ExcludeFirst_1000-8           1268          7.51          -99.41%
BenchmarkIntegerValues_ExcludeLast_1000-8            1125          27.7          -97.54%
BenchmarkIntegerValues_ExcludeNone_10000-8           12665         7.31          -99.94%
BenchmarkIntegerValues_ExcludeMiddleHalf_10000-8     12039         976           -91.89%
BenchmarkIntegerValues_ExcludeFirst_10000-8          12663         7.29          -99.94%
BenchmarkIntegerValues_ExcludeLast_10000-8           10990         34.9          -99.68%
```

(cherry picked from commit d7a3c23)
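
The shape of such an Exclude, over the same hypothetical sorted Values type as in the Include sketch above:

```
package sketch

import "sort"

type Value struct {
	UnixNano int64
	V        float64
}

type Values []Value

// Exclude drops values with min <= time <= max: two binary searches
// find the doomed range, then the copy builtin closes the gap in place.
func (a Values) Exclude(min, max int64) Values {
	lo := sort.Search(len(a), func(i int) bool { return a[i].UnixNano >= min })
	hi := sort.Search(len(a), func(i int) bool { return a[i].UnixNano > max })
	return a[:lo+copy(a[lo:], a[hi:])]
}
```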
2017-09-19 09:53:26 -07:00
Jason Wilder 31646aae3a Release mmap pages when shard is cold
This instructs the kernel that it can release memory used by mmap'd
TSM files when they are not actively being used.  If the mappings are
used again, the kernel will fault the pages back in.  On Linux, this
causes RES memory to drop immediately when run.
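
Presumably via madvise, along these lines (sketch; MADV_DONTNEED as the specific advice is an assumption):

```
package sketch

import "golang.org/x/sys/unix"

// releasePages tells the kernel the mapping's pages may be reclaimed.
// RES drops immediately on Linux; a later access to the mapping simply
// faults the pages back in from the TSM file.
func releasePages(data []byte) error {
	return unix.Madvise(data, unix.MADV_DONTNEED)
}
```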
2017-09-18 11:51:51 -06:00
Jason Wilder 7d467c2047 Fix windows unmapping of anonymous index slice 2017-09-12 10:30:10 -06:00
Jason Wilder b4b3c159cc Fixup rebase 2017-09-11 17:04:10 -06:00
Jason Wilder 26f92ce6ac Remove commented out code 2017-09-11 15:30:05 -06:00
Jason Wilder 820856347c Don't use disk temp file for snapshots 2017-09-11 15:29:26 -06:00
Jason Wilder 4ed9c75896 Fix unmapping anonymous memory slice 2017-09-11 15:29:26 -06:00
Jason Wilder 97f7857715 Remove mutex on TSMWriter
This isn't used by more than one goroutine, so locks are unnecessary.
2017-09-11 15:29:26 -06:00
Jason Wilder a93a5e9bdf Include the size of the key in the cache size 2017-09-11 15:29:26 -06:00
Jason Wilder 7388eb9499 Use disk when writing TSM index 2017-09-11 15:29:25 -06:00
Jason Wilder d3e832b462 Use offheap memory for indirect index offsets slice 2017-09-11 15:29:25 -06:00
Jason Wilder 91eb9de341 Use existing TSMReader from file store during compactions
Compactions would create their own TSMReaders for simplicity. With
very high cardinality compactions, creating the reader and indirectIndex
can start to use a significant amount of memory.

This changes the compactions to use a reader that is already allocated
and managed by the FileStore.
2017-09-11 15:29:25 -06:00
Jason Wilder 739ecd2ebd Fix a compaction planning bug
There was a race where the returned plan was for files that had just
been compacted, so the compaction would immediately abort.
2017-09-11 15:26:25 -06:00
Jason Wilder bc4fb0ea10 Sort index entries if necessary
These are already sorted during compaction, so switch to sorting lazily
to avoid the CPU and allocations.  The sort would only occur when using
the writer directly.
2017-09-11 15:26:25 -06:00
Jason Wilder f18dec6a4a Use sorted slice for writing TSM index
The directIndex used by the TSMWriter maintained a map of series keys
to index entries.  When the index is written to the TSM file, the keys
are sorted and then written out in order.

The reason for this is that directIndex used to be the only index,
and it was optimized more for reading.  Reading has since been replaced
by the indirectIndex, so the map of keys ends up wasting space.

During compactions, the series keys (and index entries) are already sorted
so this change uses the sorting to avoid the map and sort when writing the
index.  This reduces allocations and CPU usage quite a bit for larger cardinality
TSM files.
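
A sketch of the resulting shape: keys live in an append-only slice that compactions fill in order, and the sort only runs for direct writer use (names are illustrative):

```
package sketch

import (
	"bytes"
	"sort"
)

// directIndex keeps keys in an append-only slice instead of a map.
type directIndex struct {
	keys [][]byte // compactions append these already sorted
}

func (d *directIndex) Add(key []byte) {
	d.keys = append(d.keys, key)
}

// sortIfNeeded is a cheap linear is-sorted check followed by a sort
// only when the writer was fed unsorted keys (direct use).
func (d *directIndex) sortIfNeeded() {
	less := func(i, j int) bool { return bytes.Compare(d.keys[i], d.keys[j]) < 0 }
	if !sort.SliceIsSorted(d.keys, less) {
		sort.Slice(d.keys, less)
	}
}
```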
2017-09-11 15:26:24 -06:00
Jason Wilder 2a0d7935d7 Switch level 3 compactions to use fast compaction strategy
This leaves the slower compactions that create full blocks to the
full compaction only.  This helps reduce CPU usage and memory while
shards are hot, but slightly increases disk usage (reduced compression).
2017-09-11 15:26:24 -06:00
Jason Wilder 94e229ff59 Merge branch 'master' into jw-drop-series 2017-09-08 15:34:32 -06:00
Jason Wilder 78922f9821 Set rc to nil when closing WALSegmentReader 2017-09-08 14:55:02 -06:00
Jason Wilder b9b648e2a0 Dynamically allocate cache store
The cache store can be memory intensive with many shards.  This
lazily allocates it when needed and frees it when the cache is
empty and cold.
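
A sketch of the lazy allocation with a simplified store type:

```
package sketch

import "sync"

// lazyCache allocates its store on first write and frees it when the
// cache is empty and cold, so idle shards don't pin store memory.
type lazyCache struct {
	mu    sync.Mutex
	store map[string][]float64
}

func (c *lazyCache) Write(key string, vals []float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.store == nil {
		c.store = make(map[string][]float64) // allocated on demand
	}
	c.store[key] = append(c.store[key], vals...)
}

// FreeIfCold drops the store once it is empty; the next write
// re-allocates it.
func (c *lazyCache) FreeIfCold() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.store) == 0 {
		c.store = nil
	}
}
```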
2017-09-07 16:35:08 -06:00
Jason Wilder 5581f8b4ae Re-use WALSegmentReaders at startup 2017-09-07 12:56:17 -06:00
Jason Wilder e39276b96f Skip reading 0 byte wal segments 2017-09-07 12:24:54 -06:00