Commit Graph

1441 Commits (b17f27a5d987585b2277d435797247e45c2898a0)

Author SHA1 Message Date
Gianluca Arbezzano 30621ca9ec chore(tsm1): skip WriteSnapshot during backup if snapshotter is busy
When an InfluxDB database is very busy writing new points, the backup
process can fail because it cannot write a new snapshot.

The error is: `operation timed out with error: create snapshot: snapshot in progress`.

This happens because InfluxDB snapshots the cache almost continuously,
driven by the high number of points being ingested.

This PR skips the snapshot if the `snapshotter` does not become available
after three attempts when a backup is requested.

The backup won't contain the data in the cache or WAL.
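
A minimal Go sketch of the retry-then-skip behaviour described above; the names (`snapshotter`, `ErrSnapshotInProgress`, the attempt count and wait) are illustrative assumptions, not the engine's actual API:

```
package sketch

import (
	"errors"
	"log"
	"time"
)

// ErrSnapshotInProgress stands in for the engine's "snapshot in progress" error.
var ErrSnapshotInProgress = errors.New("snapshot in progress")

// snapshotter is a hypothetical interface for the component that writes cache snapshots.
type snapshotter interface {
	WriteSnapshot() error
}

// writeSnapshotForBackup tries to snapshot the cache a few times, then gives up so
// the backup can proceed without the cache/WAL contents.
func writeSnapshotForBackup(s snapshotter, attempts int, wait time.Duration) error {
	for i := 0; i < attempts; i++ {
		err := s.WriteSnapshot()
		if err == nil {
			return nil
		}
		if !errors.Is(err, ErrSnapshotInProgress) {
			return err // a real failure, not just contention
		}
		time.Sleep(wait)
	}
	log.Println("snapshotter busy; backup will not include cache/WAL data")
	return nil
}
```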

Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
2020-02-04 20:09:50 +01:00
Edd Robinson 22798fa290 fix(storage): ensure all block data returned
This commit prevents multiple blocks for the same series key having
values truncated when they are being read into an empty buffer.

The current cursor reader code has an optimisation that incorrectly
assumes the incoming array will be limited to 1,000 values (the maximum
block size), but arrays can contain values from multiple matching
blocks.
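
As a rough sketch of the fix (the types and sizes are illustrative, not the cursor reader's real ones), the destination buffer has to be allowed to grow past a single block's worth of values:

```
package sketch

// value is a simplified stand-in for a decoded point.
type value struct {
	unixNano int64
	v        float64
}

// maxBlockSize is the largest number of values in a single TSM block.
const maxBlockSize = 1000

// mergeBlocks appends the values of every block matching one series key. The
// destination is allowed to grow beyond maxBlockSize; capping it at one block's
// worth of values is what truncated data when several blocks matched.
func mergeBlocks(blocks [][]value, dst []value) []value {
	for _, b := range blocks {
		dst = append(dst, b...)
	}
	return dst
}
```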
2020-01-21 18:00:11 +00:00
Sean Brickley fe55d728f0 fix(tsm1): Compaction log error 2020-01-13 20:05:05 -05:00
tmgordeeva f1d26652e9
fix(storage): skip TSM files with block read errors (#15885)
* fix(storage): skip TSM files with block read errors

When we find a bad TSM file during compaction, propagate the error up and move
the bad file aside. The engine will disregard the file so the next compaction
will not hit the same error.
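
A sketch of the "move the bad file aside" step; the `.bad` suffix and the function name are assumptions for illustration:

```
package sketch

import (
	"fmt"
	"os"
)

// moveBadFileAside renames a TSM file that produced block read errors during
// compaction so later compactions no longer consider it.
func moveBadFileAside(path string, readErr error) error {
	bad := path + ".bad"
	if err := os.Rename(path, bad); err != nil {
		return fmt.Errorf("move corrupt tsm file %s aside: %w (read error: %v)", path, err, readErr)
	}
	return nil
}
```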
2019-12-13 15:05:39 -08:00
Edd Robinson f7f19c5904 refactor(storage): add tombstone extension 2019-11-17 14:43:38 +00:00
David Norton 102fcd671b fix(tsm1): make Digest() safe for concurrent use
This change adds a lock around digest creation so that it is safe for
concurrent calls. Prior to this change, calls from multiple goroutines
resulted in "Digest aborted, problem renaming tmp digest" errors.
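
The shape of the fix is a plain mutex around digest creation; this sketch uses hypothetical field and method names:

```
package sketch

import "sync"

// digester serialises digest creation so concurrent callers no longer race on
// renaming the temporary digest file.
type digester struct {
	mu sync.Mutex
}

func (d *digester) Digest() error {
	d.mu.Lock()
	defer d.mu.Unlock()
	return d.writeDigestFile() // only one goroutine renames the tmp digest at a time
}

func (d *digester) writeDigestFile() error {
	// ... create tmp digest, write contents, rename into place ...
	return nil
}
```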
2019-11-12 18:02:41 -05:00
elbehery a4bb1083f2 fix(storage): Renaming corrupt data files fails
fixes #14107
2019-10-28 17:32:58 +01:00
maxunt 021dd3114d
Merge pull request #14245 from influxdata/mu-tsm-clean-1.8-14058
Clean tmp tsm files when replace fails
2019-08-08 17:02:01 -07:00
Max U c6c0a5d3b1 change log level from info to error 2019-07-08 14:52:53 -04:00
Mark Rushakoff 2dfafe822c Remove stray fmt.Println in tsm1.StringArrayEncodeAll
It was introduced in #13699.

Updates #14265.
2019-07-05 15:58:53 -07:00
Max U 9091d72ba7 initial commit for 1.8 2019-07-02 13:18:20 -04:00
Edd Robinson 43e144a923 fix(storage): ensure WAL size correctly set on startup 2019-06-28 16:20:45 +01:00
Edd Robinson ecff62b9e4 test(storage): add test for reproducing #14229 2019-06-28 16:18:32 +01:00
Stuart Carnie a0f7c15b08
chore: Fix constant for 32-bit architecture 2019-06-07 11:00:13 -07:00
Stuart Carnie 86734e7fcd
fix(storage): Don't panic when length of source slice is too large
StringArrayEncodeAll will panic if the total length of strings
contained in the src slice is > 0xffffffff. This change adds a unit
test to replicate the issue and an associated fix to return an error.

This also raises an issue that compactions will be unable to make
progress under the following condition:

* multiple string blocks are to be merged to a single block and
* the total length of all strings exceeds the maximum block size that
  snappy will encode (0xffffffff)

The observable effect of this is errors in the logs indicating a
compaction failure.

Fixes #13687
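
The guard amounts to summing the source lengths before handing the block to snappy and returning an error instead of panicking; a sketch under that assumption:

```
package sketch

import (
	"errors"
	"math"
)

var errBlockTooLarge = errors.New("block too large for snappy encoding")

// checkStringBlockSize fails gracefully when the strings in a block are
// collectively larger than snappy can encode (0xffffffff bytes).
func checkStringBlockSize(src []string) error {
	var total uint64
	for _, s := range src {
		total += uint64(len(s))
		if total > math.MaxUint32 {
			return errBlockTooLarge
		}
	}
	return nil
}
```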
2019-06-04 17:05:01 -07:00
Stuart Carnie f1e1164e96
fix(storage): Don't panic when length of source slice is too large
StringArrayEncodeAll will panic if the total length of strings
contained in the src slice is > 0xffffffff. This change adds a unit
test to replicate the issue and an associated fix to return an error.

This also raises an issue that compactions will be unable to make
progress under the following condition:

* multiple string blocks are to be merged to a single block and
* the total length of all strings exceeds the maximum block size that
  snappy will encode (0xffffffff)

The observable effect of this is errors in the logs indicating a
compaction failure.

Fixes #13687
2019-05-30 08:23:58 -07:00
Ben Johnson aa3dfc0662
Merge pull request #11791 from influxdata/bj-revert-limit-full-compaction-1.8
Revert "Limit force-full and cold compaction size."
2019-02-11 12:50:55 -07:00
Ben Johnson 198f6fde38
Fix deleteSeriesRange() race condition. 2019-02-11 11:29:09 -07:00
Ben Johnson 2dd913d71b
Revert "Limit force-full and cold compaction size."
This reverts commit 40db64d0b9.
2019-02-11 11:07:44 -07:00
Edd Robinson 05e7def600
Merge pull request #10332 from ludweeg/ludweeg/unslice
Simplify s[:] to s where s is a slice
2019-02-11 10:24:43 +00:00
Jonathan A. Sternberg 2811cde76d
Merge pull request #10414 from seebs/seebs/valuerReuse
reuse ValuerEval objects
2019-02-06 09:14:15 -06:00
Seebs 5525240de3 reuse ValuerEval objects
Scanner objects and iterators often need a ValuerEval. This
object is created, often with a function call, and has at
least one interface in it, so it allocates storage. Then it's
dropped again right away. The only part of it that might be
subject to change is usually a map. While the map's contents
change over time, the actual map doesn't change for the
lifetime of the object.

So, in both iterators and scanners, stash the ValuerEval
and continue reusing it. On a query returning a fair number
of data points, this produces a small (<5% in practice)
improvement in observed performance, visible as a significant
reduction in time spent in runtime (mallocgc, newobject,
etcetera).

The performance improvement isn't big, but it's reasonably
easy to evaluate it and establish that it's a safe change
to make.
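
The pattern, reduced to a sketch with stand-in types rather than the real Valuer/ValuerEval: build the evaluator once and keep it on the scanner or iterator.

```
package sketch

// valuer and valuerEval are simplified stand-ins for the real types.
type valuer interface {
	Value(name string) (interface{}, bool)
}

type valuerEval struct {
	v valuer
}

func (e *valuerEval) eval(name string) interface{} {
	val, _ := e.v.Value(name)
	return val
}

// scanner keeps one valuerEval for its whole lifetime instead of allocating a
// fresh one for every data point it emits.
type scanner struct {
	eval valuerEval
}

func newScanner(v valuer) *scanner {
	return &scanner{eval: valuerEval{v: v}}
}

func (s *scanner) scanOne(field string) interface{} {
	// The map behind v may change contents over time, but the map value itself is
	// stable for the scanner's lifetime, so reuse is safe.
	return s.eval.eval(field)
}
```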

Signed-off-by: seebs <seebs@seebs.net>
2019-02-05 15:10:23 -06:00
Ben Johnson def9589584
Merge pull request #10522 from hahnjo/fix-compaction-cache-snapshots
Fix compaction logic on infrequent cache snapshots
2019-02-04 08:34:03 -07:00
Ben Johnson 4083ae01e3
Merge branch '1.8' into hpb-no-series-rebuild-on-delete-when-series-still-in-cache 2019-02-04 08:32:04 -07:00
Edd Robinson 3a81921bb0
Merge pull request #10505 from hpbieker/hpb-no-series-rebuild-on-delete-without-overlap-timerange
Do not rebuild series index on delete for series not overlapping in time
2019-02-04 03:22:00 -08:00
Ben Wells e9bada090f Fix misspelling identified by misspell 2019-02-03 20:27:43 +00:00
Ben Johnson 0c6d77d952
Merge pull request #9944 from michaelyou/hotfix-hashring-mod
Hash ring's hash mod
2019-02-01 12:54:15 -08:00
Edd Robinson 348dac1672 Add repro test case for UTF-8 issue 2018-12-19 14:38:31 +00:00
Ben Johnson 40db64d0b9
Limit force-full and cold compaction size.
This commit limits the number of files that can be compacted in
a single group when forcing a full compaction or when a shard
becomes cold. This is to prevent too many files being compacted
at the same time.
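
A sketch of the limiting step: split the candidate file list into bounded groups before compacting. `maxFiles` is an illustrative knob, not the engine's actual default.

```
package sketch

// limitGroups splits a candidate compaction group into chunks of at most maxFiles
// files, so a forced full or cold compaction does not pull every TSM file into a
// single enormous compaction.
func limitGroups(files []string, maxFiles int) [][]string {
	if len(files) == 0 {
		return nil
	}
	if maxFiles <= 0 {
		return [][]string{files} // no limit configured
	}
	var groups [][]string
	for len(files) > maxFiles {
		groups = append(groups, files[:maxFiles])
		files = files[maxFiles:]
	}
	return append(groups, files)
}
```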
2018-12-05 10:18:56 -07:00
Stuart Carnie 39a3d2335e chore(flux): Update to Flux 0.7.1
Resolve breaking API changes
2018-11-30 10:38:56 -07:00
Jeff Wendling 0a2f6191a6 tsdb: clean up fields index for every kind of delete
Before this, if you deleted everything with `delete where true`
for example, then you would be left with all of your measurements
in the fields index. That would cause ghost fields to reappear
if someone reinserted to the measurement.

This fixes that by making the deepest delete code check whether the
measurement was removed from the index and, if so, clean it up out of
the fields index.

Additionally, it fixes bugs in that cleanup code where a measurement
like "m1" would match "m10" when iterating over the cache or file
store, because only the prefix was checked. The cleanup now also checks
that the character immediately after the measurement name is either a
comma (meaning tags follow) or the first character of the field
separator.
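
A sketch of that suffix check; `fieldSep` stands in for tsm1's key/field separator byte:

```
package sketch

import "bytes"

// keyMatchesMeasurement reports whether a series key belongs to the measurement.
// A plain prefix check is not enough ("m1" is a prefix of "m10"), so the byte
// immediately after the measurement name must start the tags (',') or the field
// separator.
func keyMatchesMeasurement(seriesKey, name []byte, fieldSep byte) bool {
	if !bytes.HasPrefix(seriesKey, name) || len(seriesKey) == len(name) {
		return false
	}
	next := seriesKey[len(name)]
	return next == ',' || next == fieldSep
}
```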
2018-11-27 16:12:06 -07:00
Jonas Hahnfeld 217772752d Fix compaction logic on infrequent cache snapshots
This change fixes #10511, which manifests when a shard is considered cold
faster than its cache is snapshotted. This can happen when the WAL is enabled,
because the code previously only considered the last modification time of
compacted tsm1 files. Engine.LastModified() now also takes the WAL into
account when necessary.
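
The essence of the fix, as a sketch over values the engine already tracks:

```
package sketch

import "time"

// lastModified returns the later of the newest TSM file modification time and the
// newest WAL segment modification time, so a shard with recent WAL writes but old
// TSM files is not mistaken for cold.
func lastModified(tsmLastMod, walLastMod time.Time) time.Time {
	if walLastMod.After(tsmLastMod) {
		return walLastMod
	}
	return tsmLastMod
}
```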
2018-11-26 22:05:37 +01:00
Hans Petter Bieker 4670b8d65e Removed file that should not have been added. 2018-11-20 16:39:27 +01:00
Hans Petter Bieker 1d5463e0a0 Do not rebuild series index on delete for series not overlapping in time. 2018-11-20 16:24:13 +01:00
Hans Petter Bieker 926f78d832 Do not rebuild series index on delete when the series still exists in the cache. 2018-11-20 10:34:59 +01:00
Stuart Carnie c3d7f3de2b fix: Allow compactor to make progress if v.MaxTime() != entry.MaxTime 2018-11-14 09:13:13 -07:00
Stuart Carnie 5d083887a5 chore: Compactor test which replicates issue #10465
Due to an encoding bug with simple8b, it is possible that the
MaxTime for a TSM index entry does not match the last encoded timestamp.
2018-11-14 09:13:13 -07:00
Jonathan A. Sternberg a16096cbc4
Merge pull request #9943 from michaelyou/hotfix-typo
Fix some typos and a misplaced comment
2018-11-05 12:36:05 -06:00
Edd Robinson be662a5853 Fix TSM index maxtime modification 2018-10-29 15:44:31 +00:00
David Norton 3ad44c0ff4 error if manifest is read/written more than once
This change makes the shard digest writer and reader return an error if
the manifest is written or read more than once.
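
A sketch of the write-once guard (the reader side is symmetric); the type and field names are illustrative:

```
package sketch

import "errors"

var errManifestAlreadyWritten = errors.New("digest: manifest already written")

// manifestWriter refuses a second WriteManifest call.
type manifestWriter struct {
	wroteManifest bool
}

func (w *manifestWriter) WriteManifest(manifest []byte) error {
	if w.wroteManifest {
		return errManifestAlreadyWritten
	}
	w.wroteManifest = true
	// ... encode and write the manifest ...
	return nil
}
```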
2018-10-22 14:42:05 -04:00
David Norton 3d01051dfc make digest reader skip manifest if needed
This change makes the digest reader read and discard the manifest if
needed. Not all readers of a digest are interested in the manifest.

This change also makes it a requirement for the writer to write a
manifest because it is a non-optional part of a digest file.
2018-10-22 13:14:35 -04:00
Stuart Carnie 0734f6fe21 feat(tsm1): Improve performance of Gorilla float block decoding
```
name                        old time/op   new time/op    delta
FloatArrayDecodeAll/1-8      45.9ns ± 1%    13.8ns ± 1%   -70.00%  (p=0.000 n=9+9)
FloatArrayDecodeAll/55-8      686ns ± 0%     232ns ± 1%   -66.10%  (p=0.000 n=9+8)
FloatArrayDecodeAll/550-8    5.78µs ± 0%    2.22µs ± 1%   -61.61%  (p=0.000 n=9+9)
FloatArrayDecodeAll/1000-8   10.2µs ± 2%     4.0µs ± 5%   -60.47%  (p=0.000 n=10+10)

name                        old speed     new speed      delta
FloatArrayDecodeAll/1-8     414MB/s ± 1%  1383MB/s ± 1%  +233.76%  (p=0.000 n=9+9)
FloatArrayDecodeAll/55-8    144MB/s ± 0%   424MB/s ± 1%  +194.19%  (p=0.000 n=9+9)
FloatArrayDecodeAll/550-8   133MB/s ± 0%   346MB/s ± 1%  +160.09%  (p=0.000 n=9+10)
FloatArrayDecodeAll/1000-8  135MB/s ± 2%   340MB/s ± 5%  +153.03%  (p=0.000 n=10+10)
```
2018-10-16 17:28:36 -07:00
Stuart Carnie 4dccba29c3 chore(tsm1): go fmt file 2018-10-16 17:07:19 -07:00
Ben Johnson a989b01356
Merge pull request #10249 from hpbieker/hpb-delete-from-prevent-rebuild-series
Prevent DELETE FROM from rebuilding series files for shards where nothing is deleted
2018-10-16 14:53:09 -06:00
Edd Robinson 5054d6fae4 Address PR feedback 2018-10-16 13:37:49 +01:00
Stuart Carnie a792fbbdfa fix(encoding): Improve array string encoding perf a little more
Encode the compressed data at the start of the internal buffer. This ensures
the returned slice maintains the entire capacity and is available for
subsequent use.

When we pool / reuse string buffers, this will help considerably.

Improvements over previous commit:

```
name                        old time/op    new time/op    delta
EncodeStrings/10/batch-8       542ns ± 1%     355ns ± 2%   -34.53%  (p=0.008 n=5+5)
EncodeStrings/100/batch-8     5.29µs ± 1%    3.58µs ± 2%   -32.20%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8    48.6µs ± 0%    36.2µs ± 2%   -25.40%  (p=0.008 n=5+5)

name                        old alloc/op   new alloc/op   delta
EncodeStrings/10/batch-8        704B ± 0%        0B       -100.00%  (p=0.008 n=5+5)
EncodeStrings/100/batch-8     9.47kB ± 0%    0.00kB       -100.00%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8    90.1kB ± 0%     0.0kB       -100.00%  (p=0.008 n=5+5)

name                        old allocs/op  new allocs/op  delta
EncodeStrings/10/batch-8        0.00           0.00           ~     (all equal)
EncodeStrings/100/batch-8       1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8      1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)
```
2018-10-16 12:08:12 +01:00
Stuart Carnie 964bc3c19e fix(encoding): Improve simple8b another 6%; fix inconsequential bug
simple8b encodes deltas[1:], thus deltas[0] >= simple8b.MaxValue is
invalid.

Also changed the loop calculating deltas, RLE and max to be similar to
the batch timestamp encoder, for greater consistency.

Improvements over previous commit:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    1.50µs ± 1%    1.48µs ± 1%  -1.40%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    6.10µs ± 0%    5.69µs ± 2%  -6.58%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    1.50µs ± 1%    1.49µs ± 0%  -1.21%  (p=0.008 n=5+5)
```

Improvements overall:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    2.04µs ± 0%    1.48µs ± 1%  -27.25%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    8.80µs ± 2%    5.69µs ± 2%  -35.29%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    2.03µs ± 1%    1.49µs ± 0%  -26.93%  (p=0.008 n=5+5)
```
2018-10-16 12:08:12 +01:00
Stuart Carnie 43f96a6ddf feat(encoding): Improve timestamp encoding
Timestamp improvements prior to any improvements to simple8b

```
name                               old time/op    new time/op    delta
EncodeTimestamps/1000_seq/batch-8    2.64µs ± 1%    1.36µs ± 1%  -48.25%  (p=0.008 n=5+5)
EncodeTimestamps/1000_ran/batch-8    64.0µs ± 1%    32.2µs ± 1%  -49.64%  (p=0.008 n=5+5)
EncodeTimestamps/1000_dup/batch-8    9.32µs ± 0%    1.30µs ± 1%  -86.06%  (p=0.008 n=5+5)
```
2018-10-16 12:08:12 +01:00
Stuart Carnie e9531b7830 feat(encoding): Improve integer and simple8b encoding performance
simple8b EncodeAll improvements:

```
name                     old time/op  new time/op  delta
EncodeAll/1_bit-8        28.5µs ± 1%  28.6µs ± 1%     ~     (p=0.133 n=9+10)
EncodeAll/2_bits-8       28.9µs ± 2%  28.7µs ± 0%     ~     (p=0.068 n=10+8)
EncodeAll/3_bits-8       29.3µs ± 1%  28.8µs ± 0%   -1.70%  (p=0.000 n=10+10)
EncodeAll/4_bits-8       29.6µs ± 1%  29.1µs ± 1%   -1.85%  (p=0.000 n=10+10)
EncodeAll/5_bits-8       30.6µs ± 1%  29.8µs ± 2%   -2.70%  (p=0.000 n=10+10)
EncodeAll/6_bits-8       31.3µs ± 1%  30.0µs ± 1%   -4.08%  (p=0.000 n=9+9)
EncodeAll/7_bits-8       32.6µs ± 1%  30.8µs ± 0%   -5.49%  (p=0.000 n=9+9)
EncodeAll/8_bits-8       33.6µs ± 2%  31.0µs ± 1%   -7.77%  (p=0.000 n=10+9)
EncodeAll/10_bits-8      34.9µs ± 0%  31.9µs ± 2%   -8.55%  (p=0.000 n=9+10)
EncodeAll/12_bits-8      36.8µs ± 1%  32.6µs ± 1%  -11.35%  (p=0.000 n=9+10)
EncodeAll/15_bits-8      39.8µs ± 1%  34.1µs ± 2%  -14.40%  (p=0.000 n=10+10)
EncodeAll/20_bits-8      45.2µs ± 3%  36.2µs ± 1%  -19.97%  (p=0.000 n=10+9)
EncodeAll/30_bits-8      55.0µs ± 0%  40.9µs ± 1%  -25.62%  (p=0.000 n=9+9)
EncodeAll/60_bits-8      86.2µs ± 1%  55.2µs ± 1%  -35.92%  (p=0.000 n=10+10)
EncodeAll/combination-8   582µs ± 2%   502µs ± 1%  -13.80%  (p=0.000 n=9+9)
```

EncodeIntegers:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    2.04µs ± 0%    1.50µs ± 1%  -26.22%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    8.80µs ± 2%    6.10µs ± 0%  -30.73%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    2.03µs ± 1%    1.50µs ± 1%  -26.04%  (p=0.008 n=5+5)
```

EncodeTimestamps (ran is improved due to simple8b improvements)

```
name                               old time/op    new time/op    delta
EncodeTimestamps/1000_seq/batch-8    2.64µs ± 1%    2.65µs ± 2%     ~     (p=0.310 n=5+5)
EncodeTimestamps/1000_ran/batch-8    64.0µs ± 1%    33.8µs ± 1%  -47.23%  (p=0.008 n=5+5)
EncodeTimestamps/1000_dup/batch-8    9.32µs ± 0%    9.28µs ± 1%     ~     (p=0.087 n=5+5)
```
2018-10-16 12:08:12 +01:00
Edd Robinson 91d0a8c3d2 Fix index bug in float encoder 2018-10-16 12:08:12 +01:00
Edd Robinson 09da18c08e Add TSM batch key iterator
The batch focussed TSM key iterator iterates TSM blocks, decoding and
merging blocks where appropriate using the batch focussed
approaches.
2018-10-16 12:08:12 +01:00
Edd Robinson 51233b71a5 Add batch block encoders 2018-10-16 12:05:52 +01:00
Edd Robinson 592127e411 Batch oriented unsigned encoder 2018-10-16 12:05:52 +01:00
Edd Robinson a7a70a920e Batch oriented boolean encoders
This commit adds a tsm1 function for encoding a batch of booleans into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders using
randomly generated sets of booleans.
2018-10-16 12:05:52 +01:00
Jeff Wendling a4d4ef6999 Improvements to batch float encoder
- Inlined the closure to avoid a function call.
- Changed append(b, make([]byte, 8)...) to inline the make call.
- Check for NaN once at the end assuming NaN is infrequent.
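
A rough illustration of the last two points (not the encoder's actual code): grow the buffer without a temporary slice, and test for NaN in a single pass rather than inside the hot encoding loop.

```
package sketch

import "math"

// appendEightZeroBytes grows the buffer by one 8-byte word without allocating a
// temporary slice, in the spirit of replacing append(b, make([]byte, 8)...).
func appendEightZeroBytes(b []byte) []byte {
	return append(b, 0, 0, 0, 0, 0, 0, 0, 0)
}

// containsNaN checks once, after the fact, on the assumption that NaN is rare.
func containsNaN(values []float64) bool {
	for _, v := range values {
		if math.IsNaN(v) {
			return true
		}
	}
	return false
}
```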

New performance delta comparing the current iterators to the new batch
function:

name                   old time/op    new time/op    delta
EncodeFloats/10_seq      1.32µs ± 2%    0.17µs ± 2%  -87.39%  (p=0.000 n=10+10)
EncodeFloats/10_ran      2.09µs ± 1%    0.15µs ± 0%  -92.97%  (p=0.000 n=10+9)
EncodeFloats/100_seq     8.37µs ± 2%    1.28µs ± 2%  -84.74%  (p=0.000 n=10+10)
EncodeFloats/100_ran     19.1µs ± 1%     1.3µs ± 1%  -93.08%  (p=0.000 n=9+9)
EncodeFloats/1000_seq    60.4µs ± 1%    12.6µs ± 0%  -79.13%  (p=0.000 n=9+7)
EncodeFloats/1000_ran     212µs ± 1%      12µs ± 1%  -94.53%  (p=0.000 n=9+8)

name                   old alloc/op   new alloc/op   delta
EncodeFloats/10_seq       0.00B          0.00B          ~     (all equal)
EncodeFloats/10_ran       0.00B          0.00B          ~     (all equal)
EncodeFloats/100_seq      0.00B          0.00B          ~     (all equal)
EncodeFloats/100_ran      0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_ran     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
EncodeFloats/10_seq        0.00           0.00          ~     (all equal)
EncodeFloats/10_ran        0.00           0.00          ~     (all equal)
EncodeFloats/100_seq       0.00           0.00          ~     (all equal)
EncodeFloats/100_ran       0.00           0.00          ~     (all equal)
EncodeFloats/1000_seq      0.00           0.00          ~     (all equal)
EncodeFloats/1000_ran      0.00           0.00          ~     (all equal)
2018-10-16 12:05:52 +01:00
Edd Robinson ee607f9288 Batch oriented string encoders
This commit adds a tsm1 function for encoding a batch of strings into a
provided buffer. The new function also shares the buffer between the
input data and the snappy encoded output, reducing allocations.
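
A sketch of the shared-buffer idea using the snappy package; the length-prefix layout here is simplified for illustration and is not the tsm1 string block format.

```
package sketch

import "github.com/golang/snappy"

// encodeStrings packs src into one buffer and snappy-compresses it, using the tail
// of the same backing array as snappy's destination so the raw bytes and the
// compressed output share a single allocation.
func encodeStrings(src []string, buf []byte) []byte {
	data := buf[:0]
	for _, s := range src {
		data = append(data, byte(len(s))) // one-byte length prefix, sketch only
		data = append(data, s...)
	}
	need := len(data) + snappy.MaxEncodedLen(len(data))
	if cap(data) < need {
		grown := make([]byte, len(data), need)
		copy(grown, data)
		data = grown
	}
	dst := data[len(data):cap(data)]
	return snappy.Encode(dst, data)
}
```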

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders using
randomly generated strings.

name                old time/op    new time/op    delta
EncodeStrings/10      2.14µs ± 4%    1.42µs ± 4%   -33.56%  (p=0.000 n=10+10)
EncodeStrings/100     12.7µs ± 3%    10.9µs ± 2%   -14.46%  (p=0.000 n=10+10)
EncodeStrings/1000     132µs ± 2%     114µs ± 2%   -13.88%  (p=0.000 n=10+9)

name                old alloc/op   new alloc/op   delta
EncodeStrings/10        657B ± 0%      704B ± 0%    +7.15%  (p=0.000 n=10+10)
EncodeStrings/100     6.14kB ± 0%    9.47kB ± 0%   +54.14%  (p=0.000 n=10+10)
EncodeStrings/1000    61.4kB ± 0%    90.1kB ± 0%   +46.66%  (p=0.000 n=10+10)

name                old allocs/op  new allocs/op  delta
EncodeStrings/10        3.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeStrings/100       3.00 ± 0%      1.00 ± 0%   -66.67%  (p=0.000 n=10+10)
EncodeStrings/1000      3.00 ± 0%      1.00 ± 0%   -66.67%  (p=0.000 n=10+10)
2018-10-16 12:05:52 +01:00
Edd Robinson d1b7e02483 Batch oriented timestamp encoders
This commit adds a tsm1 function for encoding a batch of timestamps into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice, a randomly generated input slice and a
duplicate slice. All slices are sorted.

name                       old time/op    new time/op    delta
EncodeTimestamps/10_seq       153ns ± 2%     104ns ± 2%  -31.62%  (p=0.000 n=9+10)
EncodeTimestamps/10_ran       191ns ± 2%     142ns ± 0%  -25.73%  (p=0.000 n=10+9)
EncodeTimestamps/10_dup       114ns ± 1%      68ns ± 4%  -39.77%  (p=0.000 n=8+10)
EncodeTimestamps/100_seq      704ns ± 2%     321ns ± 2%  -54.44%  (p=0.000 n=9+9)
EncodeTimestamps/100_ran     7.27µs ± 4%    7.01µs ± 2%   -3.59%  (p=0.000 n=10+10)
EncodeTimestamps/100_dup      756ns ± 3%     396ns ± 2%  -47.57%  (p=0.000 n=10+10)
EncodeTimestamps/1000_seq    6.32µs ± 1%    2.46µs ± 2%  -61.01%  (p=0.000 n=8+10)
EncodeTimestamps/1000_ran     108µs ± 0%      68µs ± 3%  -37.57%  (p=0.000 n=8+10)
EncodeTimestamps/1000_dup    7.26µs ± 1%    3.64µs ± 1%  -49.80%  (p=0.000 n=10+8)

name                       old alloc/op   new alloc/op   delta
EncodeTimestamps/10_seq       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/10_ran       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/10_dup       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_seq      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_ran      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_dup      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_ran     0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_dup     0.00B          0.00B          ~     (all equal)

name                       old allocs/op  new allocs/op  delta
EncodeTimestamps/10_seq        0.00           0.00          ~     (all equal)
EncodeTimestamps/10_ran        0.00           0.00          ~     (all equal)
EncodeTimestamps/10_dup        0.00           0.00          ~     (all equal)
EncodeTimestamps/100_seq       0.00           0.00          ~     (all equal)
EncodeTimestamps/100_ran       0.00           0.00          ~     (all equal)
EncodeTimestamps/100_dup       0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_seq      0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_ran      0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_dup      0.00           0.00          ~     (all equal)
2018-10-16 12:05:52 +01:00
Edd Robinson de5ca4a108 Batch oriented int encoders
This commit adds a tsm1 function for encoding a batch of ints into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice, a randomly generated input slice and a
duplicate slice:

name                     old time/op    new time/op    delta
EncodeIntegers/10_seq       144ns ± 2%      41ns ± 1%   -71.46%  (p=0.000 n=10+10)
EncodeIntegers/10_ran       304ns ± 7%     140ns ± 2%   -53.99%  (p=0.000 n=10+10)
EncodeIntegers/10_dup       147ns ± 4%      41ns ± 2%   -72.14%  (p=0.000 n=10+9)
EncodeIntegers/100_seq      483ns ± 7%     208ns ± 1%   -56.98%  (p=0.000 n=10+9)
EncodeIntegers/100_ran     1.64µs ± 7%    1.01µs ± 1%   -38.42%  (p=0.000 n=9+9)
EncodeIntegers/100_dup      484ns ±14%     210ns ± 2%   -56.63%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq    3.11µs ± 2%    1.81µs ± 2%   -41.68%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran    16.9µs ±10%    11.0µs ± 2%   -34.58%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup    3.05µs ± 3%    1.81µs ± 2%   -40.71%  (p=0.000 n=10+8)

name                     old alloc/op   new alloc/op   delta
EncodeIntegers/10_seq       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_ran       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_dup       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_seq      32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_ran       128B ± 0%        0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_dup      32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq     32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran    1.15kB ± 0%    0.00kB       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup     32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)

name                     old allocs/op  new allocs/op  delta
EncodeIntegers/10_seq        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_ran        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_dup        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_seq       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_ran       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_dup       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
2018-10-16 12:05:52 +01:00
Edd Robinson 6b52231a37 Batch oriented float encoders
This commit adds a tsm1 function for encoding a batch of floats into a
buffer. Further, it replaces the `bitstream` library used in the
existing encoders (and all the current decoders) with inlined bit
expressions within the encoder, significantly reducing the function call
overhead for larger batches.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice and a randomly generated input slice.

name                   old time/op    new time/op    delta
EncodeFloats/10_seq      1.14µs ± 3%    0.24µs ± 3%  -78.94%  (p=0.000 n=10+10)
EncodeFloats/10_ran      1.69µs ± 2%    0.21µs ± 3%  -87.43%  (p=0.000 n=10+10)
EncodeFloats/100_seq     7.07µs ± 1%    1.72µs ± 1%  -75.62%  (p=0.000 n=7+9)
EncodeFloats/100_ran     15.8µs ± 4%     1.8µs ± 1%  -88.60%  (p=0.000 n=10+9)
EncodeFloats/1000_seq    50.2µs ± 3%    16.2µs ± 2%  -67.66%  (p=0.000 n=10+10)
EncodeFloats/1000_ran     174µs ± 2%      16µs ± 2%  -90.77%  (p=0.000 n=10+10)

name                   old alloc/op   new alloc/op   delta
EncodeFloats/10_seq       0.00B          0.00B          ~     (all equal)
EncodeFloats/10_ran       0.00B          0.00B          ~     (all equal)
EncodeFloats/100_seq      0.00B          0.00B          ~     (all equal)
EncodeFloats/100_ran      0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_ran     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
EncodeFloats/10_seq        0.00           0.00          ~     (all equal)
EncodeFloats/10_ran        0.00           0.00          ~     (all equal)
EncodeFloats/100_seq       0.00           0.00          ~     (all equal)
EncodeFloats/100_ran       0.00           0.00          ~     (all equal)
EncodeFloats/1000_seq      0.00           0.00          ~     (all equal)
EncodeFloats/1000_ran      0.00           0.00          ~     (all equal)
2018-10-16 12:05:52 +01:00
Edd Robinson 9ecadd1a9c Rename time batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson c1d82fccf0 Rename unsigned batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson 536e7bb62f Rename string batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson d819378ad3 Rename boolean batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson f81e75f4c1 Rename integer batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson 12bb8881be Rename float batch decoders 2018-10-16 12:05:52 +01:00
Ben Johnson 844b7ef9bf
Merge pull request #10299 from influxdata/bj-tsm1-panic-fix
Fix TSM1 panic on reader error.
2018-10-10 08:12:17 -06:00
ludweeg 5622355526 Simplify s[:] to s where s is a slice 2018-10-04 17:10:21 +03:00
Edd Robinson d649d5928b Cleanup failed TSM snapshot
If there was an error after the cache has been snapshotted to one or
more TSM files, but before the cache and WAL are cleaned up, then the
cache would be repeatedly snapshotted, generating duplicate level 1 TSM
files.

This commit attempts to clean those files up by removing the temporary
TSM file(s). The snapshot will be retried.
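
A sketch of the cleanup step; the `*.tsm.tmp` pattern mirrors tsm1's temporary-file convention but should be treated as illustrative:

```
package sketch

import (
	"os"
	"path/filepath"
)

// removeTmpTSMFiles deletes temporary files left behind by a failed cache snapshot
// so the retried snapshot does not accumulate duplicate level 1 TSM data.
func removeTmpTSMFiles(dir string) error {
	matches, err := filepath.Glob(filepath.Join(dir, "*.tsm.tmp"))
	if err != nil {
		return err
	}
	for _, m := range matches {
		if err := os.Remove(m); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}
```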
2018-10-03 16:34:54 +01:00
Ben Johnson da2dfa495e
Fix TSM1 panic on reader error.
This commit fixes an error check so that a `nil` TSM reader does
not cause a panic.
2018-09-24 08:54:28 -06:00
linxGnu 1dde9a1e12 Update test case 2018-09-14 14:09:24 -07:00
Stuart Carnie a940ebd45a fix(tsm1): Fix FloatBatchDecodeAll to return empty slice and no error
FloatBatchDecodeAll behaves the same as the iterator-based float
decoder, returning an empty slice and no error when passed a buffer
with no encoded float values.

Fixes #10270
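
The fixed behaviour in sketch form (decoding of non-empty buffers elided; names illustrative):

```
package sketch

// decodeAllFloats returns an empty slice and no error for a buffer containing no
// encoded values, matching the iterator-based decoder's behaviour.
func decodeAllFloats(b []byte, dst []float64) ([]float64, error) {
	if len(b) == 0 {
		return dst[:0], nil
	}
	// ... decode the Gorilla-compressed values into dst ...
	return dst, nil
}
```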
2018-09-10 13:59:47 -07:00
Hans Petter Bieker de3a2d657d Fixed indentation. 2018-08-31 11:01:45 +02:00
Hans Petter Bieker 28f5fb4ea5 Prevent rebuilding of series files for shards where nothing is deleted. 2018-08-31 10:51:38 +02:00
Jeff Wendling 4e62c3f795 fix(tsm1): return boolean array iterator for booleans
booleans are still not strings.
2018-08-27 11:32:15 -06:00
Stuart Carnie 2f4fcd8255 chore: Remove BatchCursor references 2018-08-24 11:56:04 -07:00
David Norton 05d979d6b1
Merge pull request #10215 from influxdata/dn-snappy-digests
Switch digests to use snappy compression
2018-08-23 13:17:28 -04:00
David Norton 2f6a1fc03b switch digests to use snappy compression 2018-08-23 13:02:12 -04:00
Stuart Carnie e685556c81 fix(tsm1): Fix panic when calling Close twice on a descending cursor. 2018-08-22 13:49:59 -07:00
Jeff Wendling 6150bc1eea
Merge pull request #10217 from influxdata/jmw-cursor-iterator-fix
fix(tsm1): return boolean iterator for booleans
2018-08-21 14:31:38 -06:00
Jeff Wendling d258361fb4 fix(tsm1): return boolean iterator for booleans 2018-08-21 13:51:35 -06:00
Edd Robinson dece5b847f Refactor index names 2018-08-21 14:32:30 +01:00
Edd Robinson 035b26cadd Refactor DropSeriesGlobal 2018-08-20 16:37:55 +01:00
Stuart Carnie 12f7f45707 feat(tsdb): Add CursorType to enable selection of batch cursors 2018-08-10 06:39:14 -07:00
Edd Robinson 3bcb8ad9b2
Merge pull request #10161 from influxdata/er-tidy
Simplify loops
2018-08-06 15:24:49 +01:00
Edd Robinson 9eece563b1 Simplify loops 2018-08-05 15:16:33 +01:00
David Norton 50bbf11299 add digest manifest 2018-08-03 15:17:08 -04:00
Edd Robinson 996bb9bfa6 Wire in mmap advise hint to TSMReader 2018-08-03 16:27:39 +01:00
Stuart Carnie d977c0ac24 fix(tsdb): Fix existing Prometheus tests based on batch cursors 2018-07-16 08:55:37 -07:00
Stuart Carnie 497fc42779 pr(tsdb): Feedback items from megacheck
* batch cursors and cursorIterator will be removed in a follow up
  PR using Arrow array data structures
2018-07-16 08:55:37 -07:00
Stuart Carnie 910d0fe5e6 feat(tsm1): ArrayCursor interfaces and implementations
Array cursors are enabled for storage RPC calls

tsm1:

* Implemented cursors that utilize Array decoders

storage:

* Abstractions to easily switch to Array cursors
2018-07-16 08:55:37 -07:00
Stuart Carnie 3632df77a6 feat(tsm1): Add Read<type>ArrayBlock APIs to FileStore
* introduced tmpl from Arrow, which allows existing templates to be
  reused with additional command-line properties to control output.
* duplicated suite of ReadFloatBlock tests for ReadFloatArrayBlock
    * only the float data type is tested as the Read APIs are generated
      from a single template.
2018-07-16 08:55:37 -07:00
Stuart Carnie 790639d728 feat(tsm1): Add Read<Type>ArrayBlock APIs to TSMReader and mmapAccessor 2018-07-16 08:55:37 -07:00
Stuart Carnie 0841c51d93 pr(tsdb): Feedback items from PR review 2018-07-13 11:42:02 -07:00
Stuart Carnie 9cd31520ec feat(tsm1): Implement APIs to decode TSM data into array data structures
* These APIs will be used by `TSMReader` and `KeyCursor` types via new
  APIs, using similar naming convention (Array)
2018-07-13 11:42:02 -07:00
Stuart Carnie b3e53ae2dc feat(tsm1): New APIs to decode an entire buffer of data
* APIs decode an entire byte slice of encoded data into the provided
  `dst` slice
* APIs are stateless and in almost all cases avoid any allocations
* Intended to be used by future batch-oriented TSM block decode APIs
* duplicated tests from original iterator-based APIs
2018-07-13 11:42:02 -07:00
Stuart Carnie 06257822c2 fix(tsm1): Reset vals to ensure Include is correctly tested 2018-07-13 11:42:02 -07:00
Stuart Carnie 7948a8e217 chore(tsm1): Add benchmarks for existing typed decoders
These benchmarks will be implemented in batched decoders to compare
performance.
2018-07-13 11:42:02 -07:00
Jeff Wendling 1c0e49e002 tsm1: ensure all written tsm files are fsynced
We were asserting to an *os.File in order to call Sync, but in some
cases the file handle has been wrapped, for example with limiting.
Instead, assert to minimal interfaces for the functionality we need,
and add some robustness to the code that creates the writers by using
a stronger interface with a Sync method.

fixes #9991
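
A sketch of the minimal-interface assertion; `syncer` and `syncIfPossible` are illustrative names:

```
package sketch

import "io"

// syncer is the minimal capability the writer actually needs from its file handle.
// Wrapped handles (for example rate-limited writers) can satisfy it by forwarding
// to the underlying *os.File.
type syncer interface {
	Sync() error
}

// syncIfPossible replaces the old "assert to *os.File" pattern: it asks only for a
// Sync method, so wrapped handles are still fsynced instead of being skipped.
func syncIfPossible(w io.Writer) error {
	if s, ok := w.(syncer); ok {
		return s.Sync()
	}
	return nil
}
```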
2018-06-25 11:36:22 -06:00
michaelyou 88ccbe43b3 Fix some typos and a misplaced comment 2018-06-21 10:46:10 +08:00
David Norton b4fd65baf1 add digest logging 2018-06-15 16:55:59 -04:00