Commit Graph

2657 Commits (3f3b7b5160a882ba513943fa0a72908a21ea6c50)

Author SHA1 Message Date
Stuart Carnie 86734e7fcd
fix(storage): Don't panic when length of source slice is too large
StringArrayEncodeAll will panic if the total length of strings
contained in the src slice is > 0xffffffff. This change adds a unit
test to replicate the issue and an associated fix to return an error.

This also raises an issue that compactions will be unable to make
progress under the following condition:

* multiple string blocks are to be merged to a single block and
* the total length of all strings exceeds the maximum block size that
  snappy will encode (0xffffffff)

The observable effect of this is errors in the logs indicating a
compaction failure.

Fixes #13687
2019-06-04 17:05:01 -07:00
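The guard described in this fix can be sketched as a length check ahead of compression. A minimal, hypothetical sketch (constant and function names assumed, not the actual tsm1 implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// maxStringBlockSize mirrors the snappy limit mentioned in the commit
// message (0xffffffff); the constant name is assumed for this sketch.
const maxStringBlockSize = 0xffffffff

var errStringBlockTooLarge = errors.New("string block too large to encode")

// stringArrayEncodeAll is a hypothetical stand-in for the tsm1 encoder:
// it validates the total payload size and returns an error instead of
// panicking when the limit is exceeded.
func stringArrayEncodeAll(src []string, buf []byte) ([]byte, error) {
	var total uint64
	for _, s := range src {
		total += uint64(len(s))
	}
	if total > maxStringBlockSize {
		return nil, errStringBlockTooLarge
	}
	// ... actual encoding (snappy compression of the packed strings) ...
	return buf, nil
}

func main() {
	_, err := stringArrayEncodeAll([]string{"a", "b"}, nil)
	fmt.Println(err) // <nil> for small inputs
}
```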
Stuart Carnie f1e1164e96
fix(storage): Don't panic when length of source slice is too large
StringArrayEncodeAll will panic if the total length of strings
contained in the src slice is > 0xffffffff. This change adds a unit
test to replicate the issue and an associated fix to return an error.

This also raises an issue that compactions will be unable to make
progress under the following condition:

* multiple string blocks are to be merged to a single block and
* the total length of all strings exceeds the maximum block size that
  snappy will encode (0xffffffff)

The observable effect of this is errors in the logs indicating a
compaction failure.

Fixes #13687
2019-05-30 08:23:58 -07:00
Jacob Marble 12a52a0321
fix(series file): Sync series segment after truncate (#13836) 2019-05-09 08:29:25 -07:00
Jonathan A. Sternberg 3406e8c7ff
Merge pull request #13153 from influxdata/deps/upgrade-flux
Upgrade flux to the latest version and remove the platform dependency
2019-04-04 11:41:27 -05:00
Jonathan A. Sternberg 31501c9dcf
Upgrade flux to the latest version and remove the platform dependency
This integrates the influxdb 1.x series to the latest version of Flux
and updates the code to use it. It also removes the dependency on
platform and copies the necessary code from storage into the 1.x series
so the dependency is unneeded.

The flux functions specific to 1.x have been moved to the same structure
that flux changed to with having a `stdlib` directory instead of a
`functions` directory. It also adds a `databases()` function that
returns the databases from the meta client.
2019-04-04 10:55:09 -05:00
Ben Johnson 2edbc907a1
Add nil check for tagKeyValueEntry.setIDs()
Previously it was possible to set IDs on a `nil` entry, which would
in turn cause a panic. If this panic was recovered by the server
then it would result in a mutex in the `inmem` index staying locked
indefinitely.
2019-04-02 10:04:39 -06:00
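A minimal sketch of the defensive pattern this fix describes; the `tagKeyValueEntry` shape here is illustrative rather than the actual inmem type:

```go
package main

import "fmt"

// tagKeyValueEntry is an illustrative stand-in for the inmem index type.
type tagKeyValueEntry struct {
	ids []uint64
}

// setIDs returns early on a nil receiver instead of panicking, so a
// recovered panic can no longer leave the caller's mutex locked.
func (e *tagKeyValueEntry) setIDs(ids []uint64) {
	if e == nil {
		return
	}
	e.ids = ids
}

func main() {
	var e *tagKeyValueEntry  // nil entry
	e.setIDs([]uint64{1, 2}) // safe no-op
	fmt.Println("no panic")
}
```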
Jeff Wendling 8e56b3ff1e
Merge pull request #11970 from influxdata/jmw-shard-epoch-races
Fix some more shard epoch races
2019-02-19 10:40:09 -07:00
Edd Robinson 67cfa21732
Merge pull request #10541 from gpomykala/10540
Fix open/close race in SeriesFile
2019-02-19 16:53:25 +00:00
Jeff Wendling 1d9ce868e2 Fix some more shard epoch races
We're not allowed to access the s.epochs map without holding the
mutex against shard creation and deletion, so create a copy of
all of the epoch trackers we will need while we hold the mutex.
2019-02-19 08:59:13 -07:00
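The copy-under-lock pattern from this commit might look roughly like the following sketch (type and field names are assumptions):

```go
package main

import (
	"fmt"
	"sync"
)

type epochTracker struct{ id uint64 }

type store struct {
	mu     sync.RWMutex
	epochs map[uint64]*epochTracker
}

// epochsForShards copies the trackers we need while holding the lock,
// so later use of the copies never races with shard creation/deletion.
func (s *store) epochsForShards(ids []uint64) map[uint64]*epochTracker {
	s.mu.RLock()
	defer s.mu.RUnlock()

	out := make(map[uint64]*epochTracker, len(ids))
	for _, id := range ids {
		if t, ok := s.epochs[id]; ok {
			out[id] = t
		}
	}
	return out
}

func main() {
	s := &store{epochs: map[uint64]*epochTracker{1: {id: 1}}}
	fmt.Println(len(s.epochsForShards([]uint64{1, 2})))
}
```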
Ben Johnson c61db43dc2
Update tagKeyValue mutex to write lock.
This commit changes the read lock to a write lock when calling the
`ids()` function because `ids()` can mutate the underlying series
ids slice.
2019-02-15 09:29:48 -07:00
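A sketch of why the write lock is needed here: the accessor sorts, and therefore mutates, the shared slice. The types below are illustrative, not the actual tsdb types:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// tagKeyValue is an illustrative stand-in: ids() may sort (mutate) the
// backing slice, so callers must hold the write lock, not the read lock.
type tagKeyValue struct {
	mu  sync.RWMutex
	raw []uint64
}

func (t *tagKeyValue) ids() []uint64 {
	t.mu.Lock() // write lock: the sort below mutates shared state
	defer t.mu.Unlock()
	sort.Slice(t.raw, func(i, j int) bool { return t.raw[i] < t.raw[j] })
	return t.raw
}

func main() {
	t := &tagKeyValue{raw: []uint64{3, 1, 2}}
	fmt.Println(t.ids())
}
```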
Jeff Wendling 40d2b70376 Ensure that cached series id sets are Go heap backed 2019-02-12 15:20:24 -07:00
Ben Johnson 6e5226437a
Convert TagValueSeriesIDCache to use string fields.
This commit changes `name`, `key`, and `value` to from `[]byte`
to `string`.
2019-02-12 14:22:40 -07:00
Ben Johnson aa3dfc0662
Merge pull request #11791 from influxdata/bj-revert-limit-full-compaction-1.8
Revert "Limit force-full and cold compaction size."
2019-02-11 12:50:55 -07:00
Ben Johnson b87605f521
Fix shard epoch race. 2019-02-11 12:15:46 -07:00
Ben Johnson 198f6fde38
Fix deleteSeriesRange() race condition. 2019-02-11 11:29:09 -07:00
Ben Johnson 2dd913d71b
Revert "Limit force-full and cold compaction size."
This reverts commit 40db64d0b9.
2019-02-11 11:07:44 -07:00
Edd Robinson 05e7def600
Merge pull request #10332 from ludweeg/ludweeg/unslice
Simplify s[:] to s where s is a slice
2019-02-11 10:24:43 +00:00
Grzegorz Pomykala 8448cf4a9c build fixed 2019-02-06 16:27:59 +01:00
Jonathan A. Sternberg 2811cde76d
Merge pull request #10414 from seebs/seebs/valuerReuse
reuse ValuerEval objects
2019-02-06 09:14:15 -06:00
Grzegorz Pomykala fb3c837de9 code review suggestions applied 2019-02-06 09:10:51 +01:00
Seebs 5525240de3 reuse ValuerEval objects
Scanner objects and iterators often need a ValuerEval. This
object is created, often with a function call, and has at
least one interface in it, so it allocates storage. Then it's
dropped again right away. The only part of it that might be
subject to change is usually a map. While the map's contents
change over time, the actual map doesn't change for the
lifetime of the object.

So, in both iterators and scanners, stash the ValuerEval
and continue reusing it. On a query returning a fair number
of data points, this produces a small (<5% in practice)
improvement in observed performance, visible as a significant
reduction in time spent in runtime (mallocgc, newobject,
etcetera).

The performance improvement isn't big, but it's reasonably
easy to evaluate it and establish that it's a safe change
to make.

Signed-off-by: seebs <seebs@seebs.net>
2019-02-05 15:10:23 -06:00
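The stash-and-reuse idea can be illustrated with a small sketch; `valuerEval` below is a stand-in for influxql's ValuerEval, not the real type:

```go
package main

import "fmt"

// valuerEval is an illustrative stand-in for influxql.ValuerEval: it is
// cheap to reuse because only the map contents change between points.
type valuerEval struct {
	values map[string]interface{}
}

func (v *valuerEval) eval(field string) interface{} { return v.values[field] }

type scanner struct {
	// Stash one evaluator for the scanner's lifetime instead of
	// allocating a fresh one (and its interface header) per point.
	valuer valuerEval
}

func newScanner() *scanner {
	return &scanner{valuer: valuerEval{values: make(map[string]interface{})}}
}

func (s *scanner) scanPoint(raw map[string]interface{}) interface{} {
	for k, v := range raw {
		s.valuer.values[k] = v // update contents; the map itself is reused
	}
	return s.valuer.eval("value")
}

func main() {
	s := newScanner()
	fmt.Println(s.scanPoint(map[string]interface{}{"value": 42}))
}
```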
Ben Johnson def9589584
Merge pull request #10522 from hahnjo/fix-compaction-cache-snapshots
Fix compaction logic on infrequent cache snapshots
2019-02-04 08:34:03 -07:00
Ben Johnson 4083ae01e3
Merge branch '1.8' into hpb-no-series-rebuild-on-delete-when-series-still-in-cache 2019-02-04 08:32:04 -07:00
Edd Robinson 3a81921bb0
Merge pull request #10505 from hpbieker/hpb-no-series-rebuild-on-delete-without-overlap-timerange
Do not rebuild series index on delete for series not overlapping in time
2019-02-04 03:22:00 -08:00
Ben Wells e9bada090f Fix misspelling identified by misspell 2019-02-03 20:27:43 +00:00
Ben Johnson 0c6d77d952
Merge pull request #9944 from michaelyou/hotfix-hashring-mod
Hash ring's hash mod
2019-02-01 12:54:15 -08:00
Edd Robinson 301ab71ba0 Remove copy-on-write when caching bitmaps
In the case of caching TSI bitmaps belonging to immutable .tsi files,
the underlying bitset data can be mmapped. It is possible, though rare,
for this data to be unmapped (e.g., via a TSI compaction) but for the
cached bitmap to be subsequently read. This leads to a segfault.

This only happens when copy-on-write is set to true on the roaring
bitmap, because in that case only the internal pointers are cloned.

This change will reduce TSI cache performance by around 10%, which
typically amounts to only a few microseconds.
2019-01-25 18:02:48 +00:00
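A sketch of the resulting caching behaviour, assuming the RoaringBitmap/roaring package and a hypothetical cache map; the point is that `Clone()` deep-copies the containers rather than sharing pointers into mmapped data:

```go
package main

import (
	"fmt"

	"github.com/RoaringBitmap/roaring"
)

// cache is a hypothetical stand-in for the TSI bitset cache.
var cache = map[string]*roaring.Bitmap{}

// cacheBitmap stores a deep copy of a bitmap, so the cached copy never
// references mmapped storage that a TSI compaction may later unmap.
func cacheBitmap(key string, b *roaring.Bitmap) {
	cache[key] = b.Clone() // full copy, not copy-on-write pointer sharing
}

func main() {
	b := roaring.New()
	b.Add(42)
	cacheBitmap("cpu", b)
	fmt.Println(cache["cpu"].Contains(42))
}
```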
Edd Robinson efdddbb31a Allow TSI bitset cache size to be configured
This commit adds a config option to the tsdb Config allowing the size of
the bitset cached in the TSI index to be specified.

Setting the cache size to 0 will disable the cache.
2019-01-24 17:41:45 +00:00
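How such an option might be wired up, as a rough sketch; the config field name is assumed for illustration and may not match the real tsdb.Config field:

```go
package main

import "fmt"

// Config sketches how the option could be threaded through; the field
// and toml names here are illustrative only.
type Config struct {
	TSIBitsetCacheSize uint64 `toml:"tsi-bitset-cache-size"`
}

// newCache returns nil when the configured size is 0, which disables
// the cache entirely, as described in the commit message.
func newCache(size uint64) interface{} {
	if size == 0 {
		return nil
	}
	return make(map[string][]byte, size)
}

func main() {
	cfg := Config{TSIBitsetCacheSize: 0}
	fmt.Println(newCache(cfg.TSIBitsetCacheSize) == nil) // true: disabled
}
```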
Edd Robinson e20541d2ba Expose functional option for setting TSI cache size 2019-01-23 17:15:48 +00:00
Edd Robinson 3a055a6107 Fix cardinality estimation error
This commit fixes an error in the TSI index with estimating the
cardinality of series recently added and then removed.
2019-01-10 17:46:30 +00:00
Edd Robinson 77fe5a9a62 Treat fields and measurements as raw bytes 2018-12-19 14:38:50 +00:00
Edd Robinson 348dac1672 Add repro test case for UTF-8 issue 2018-12-19 14:38:31 +00:00
Ben Johnson b88d852c54
Merge pull request #10536 from influxdata/bj-limit-full-compaction
Limit force-full and cold compaction size.
2018-12-17 10:44:10 -07:00
Tanya Gordeeva 0a39786ea7 tsdb: mixed shard tests
Specifically tests around the global index for fields with mixed shard types.
2018-12-13 08:31:49 -08:00
Grzegorz Pomykala be97f36e66 trace on SeriesFile.Open() failure 2018-12-06 15:58:05 +01:00
Grzegorz Pomykala fbfcfa0b31 reproduction for #10540 2018-12-06 14:36:44 +01:00
Grzegorz Pomykala a346109198 do not acquire a lock upon closing a SeriesFile when called from the Open() method 2018-12-06 11:59:03 +01:00
Ben Johnson 40db64d0b9
Limit force-full and cold compaction size.
This commit limits the number of files that can be compacted in
a single group when forcing a full compaction or when a shard
becomes cold. This is to prevent too many files being compacted
at the same time.
2018-12-05 10:18:56 -07:00
Stuart Carnie 39a3d2335e chore(flux): Update to Flux 0.7.1
Resolve breaking API changes
2018-11-30 10:38:56 -07:00
Jeff Wendling 9f0cd683b9
Merge pull request #10516 from influxdata/jmw-conflict-concurrency
tsdb: conflict based concurrency resolution
2018-11-29 14:14:24 -07:00
Ben Johnson cd1e1ca755
Merge pull request #10525 from influxdata/bj-warn-series-file
Skip and warn series files in retention policy directory.
2018-11-28 11:42:29 -07:00
Jeff Wendling cca97bf9b9
Merge pull request #10517 from influxdata/jmw-always-cleanup-fields-index
tsdb: clean up fields index for every kind of delete
2018-11-28 11:33:34 -07:00
Ben Johnson 298eddb82c
Skip and warn series files in retention policy directory. 2018-11-28 11:20:18 -07:00
Jeff Wendling 259f3fe6e5 tsdb: consider measurement drops per shard on inmem 2018-11-27 16:59:17 -07:00
Jeff Wendling 0a2f6191a6 tsdb: clean up fields index for every kind of delete
Before this, if you deleted everything with `delete where true`
for example, then you would be left with all of your measurements
in the fields index. That would cause ghost fields to reappear
if someone reinserted into the measurement.

This fixes that by making the deepest delete code check whether the
measurement was removed from the index and, if so, clean it up
out of the fields index.

Additionally, it fixes bugs in that cleanup code where, if you had
measurements like "m1" and "m10", iterating over the cache or file
store would let "m1" match "m10" because only the prefix was checked.
The fix also checks that the character right after the measurement
name is either a comma (because tags follow) or the first character
of the field separator.
2018-11-27 16:12:06 -07:00
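The boundary check described above could look roughly like this sketch; the separator constant and helper name are assumptions:

```go
package main

import (
	"bytes"
	"fmt"
)

// keyFieldSeparator separates the series key from the field name inside
// a TSM key; the exact constant value is assumed for this sketch.
const keyFieldSeparator = "#!~#"

// hasMeasurement reports whether key belongs to the given measurement,
// requiring the byte after the name to be a tag separator (',') or the
// start of the field separator, so "m1" no longer matches "m10,...".
func hasMeasurement(key, name []byte) bool {
	if !bytes.HasPrefix(key, name) {
		return false
	}
	if len(key) == len(name) {
		return false
	}
	next := key[len(name)]
	return next == ',' || next == keyFieldSeparator[0]
}

func main() {
	fmt.Println(hasMeasurement([]byte("m10,host=a#!~#value"), []byte("m1"))) // false
	fmt.Println(hasMeasurement([]byte("m1,host=a#!~#value"), []byte("m1")))  // true
}
```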
Jonas Hahnfeld 217772752d Fix compaction logic on infrequent cache snapshots
This change fixes #10511, which manifests when a shard is considered cold
faster than its cache is snapshotted. This can happen when the WAL is enabled,
because previously the code only considered the last modification of
compacted tsm1 files. Now Engine.LastModified() also takes the WAL
into account when necessary.
2018-11-26 22:05:37 +01:00
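The essence of the fix, as a sketch (not the actual Engine.LastModified implementation): take the newer of the TSM and WAL modification times when the WAL is enabled:

```go
package main

import (
	"fmt"
	"time"
)

// lastModified returns the newest modification time relevant to shard
// coldness: the WAL must be considered when it is enabled, otherwise a
// shard can look cold before its cache has been snapshotted.
func lastModified(tsm time.Time, walEnabled bool, wal time.Time) time.Time {
	if walEnabled && wal.After(tsm) {
		return wal
	}
	return tsm
}

func main() {
	tsm := time.Now().Add(-2 * time.Hour)
	wal := time.Now().Add(-5 * time.Minute)
	fmt.Println(lastModified(tsm, true, wal).Equal(wal)) // true
}
```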
Jeff Wendling 4cad51a604 tsdb: conflict based concurrency resolution
There are some problematic races that occur when deletes happen
against writes to the same points at the same time. This change
introduces guards and an epoch based system to coordinate these
modifications.

A guard matches a point based on the time, measurement name, and
some conditions loaded from an influxql expression. The intent
is to be as precise as possible without allowing any false
negatives: if a point would be deleted, the guard must match it.
We are allowed to match more points than necessary, at the cost
of slowing down writes.

The epoch based system keeps track of outstanding writes and
deletes and their associated guards. When a delete operation
is going to start, it waits until all current writes are
done, and installs its guard, blocking all future writes that
contain points that may conflict with the delete. This allows
writes to disjoint points to proceed uncontended, and the
implementation is optimized for assuming there are few
outstanding deletes. For example, in the case that there are no
deletes, a write just has to take a mutex, bump a counter, and
compare a value against zero. The epoch trackers are per shard,
so that different shards never have to contend with one another.
2018-11-21 19:19:53 -07:00
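A highly simplified sketch of the per-shard tracker's write fast path described above; names and fields are assumptions, and the real implementation additionally installs guards and waits for outstanding writes:

```go
package main

import (
	"fmt"
	"sync"
)

// epochTracker is a simplified sketch: a write takes a mutex, bumps a
// counter, and checks whether any delete guards are installed.
type epochTracker struct {
	mu      sync.Mutex
	epoch   uint64
	writes  int64
	deletes int64 // number of outstanding delete guards
}

// startWrite is the fast path: with no outstanding deletes the write
// proceeds uncontended and no guard checks are needed.
func (t *epochTracker) startWrite() (guarded bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.epoch++
	t.writes++
	return t.deletes > 0
}

func (t *epochTracker) endWrite() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.writes--
}

func main() {
	t := &epochTracker{}
	guarded := t.startWrite()
	defer t.endWrite()
	fmt.Println("must check guards:", guarded) // false: no deletes pending
}
```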
Jeff Wendling 030adf4bd5 tsdb: don't allow deletes to a database in mixed index mode
TSI1 and inmem indexes have different properties during deletes.
Specifically, inmem shares a global index across all shards, where
every tsi1 index is contained to a specific shard. When deleting
a series, it may cause the last reference to the series across all
shards to be dropped, necessitating a removal from the series file.
Since the inmem index shares the index across all shards, removing
the series when it's removed from the series file is sufficient.

However, in the case of a mixed index database, if the last shard
is a TSI1 shard, the other inmem indexes are not available when we
discover that it was the last reference to the series. This ends
up leaving the series in the inmem index without a series id in
the series file, causing all sorts of misbehavior.

Rather than continue curling ourselves into a ball to try to fix
this unsupported mode, give a helpful error message to the user
that they must run their database in a non-mixed index mode to
allow deletes.
2018-11-21 18:18:38 -07:00
Hans Petter Bieker 4670b8d65e Removed file that should not have been added. 2018-11-20 16:39:27 +01:00
Hans Petter Bieker 1d5463e0a0 Do not rebuild series index on delete for series not overlapping in time. 2018-11-20 16:24:13 +01:00
Hans Petter Bieker 926f78d832 Do not rebuild series index on delete when the series still exists in the cache. 2018-11-20 10:34:59 +01:00
Stuart Carnie c3d7f3de2b fix: Allow compactor to make progress if v.MaxTime() != entry.MaxTime 2018-11-14 09:13:13 -07:00
Stuart Carnie 5d083887a5 chore: Compactor test which replicates issue #10465
Due to an encoding bug with simple8b, it is possible that the
MaxTime for a TSM index entry does not match the last encoded timestamp.
2018-11-14 09:13:13 -07:00
Jonathan A. Sternberg a16096cbc4
Merge pull request #9943 from michaelyou/hotfix-typo
Fix some typos and a misplaced comment
2018-11-05 12:36:05 -06:00
Tanya Gordeeva 8b8421049e tsdb: benchmark for many fields 2018-11-02 18:49:28 -07:00
Tanya Gordeeva 7c9ff60413 tsdb/shard: reduce measurement field copying
Removes cloning of measurement fields on writes; instead, measurement field
sets are atomically swapped out when fields are added (with the new overhead
of copying existing fields whenever a new one is added).
2018-11-02 18:49:17 -07:00
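A sketch of the copy-on-add swap described above, using sync/atomic; the type shape is an assumption, not the actual tsdb fields index:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// measurementFields sketches the copy-on-add approach: readers load the
// current field map atomically without copying, while adding a field
// copies the map once and swaps the new version in. The map value type
// (a field type code) is an assumption.
type measurementFields struct {
	mu     sync.Mutex   // serializes writers only
	fields atomic.Value // holds map[string]int
}

func newMeasurementFields() *measurementFields {
	mf := &measurementFields{}
	mf.fields.Store(map[string]int{})
	return mf
}

// field is the hot read path: no locking, no copying.
func (mf *measurementFields) field(name string) (int, bool) {
	m := mf.fields.Load().(map[string]int)
	typ, ok := m[name]
	return typ, ok
}

// createField pays the copy cost only when a new field is added.
func (mf *measurementFields) createField(name string, typ int) {
	mf.mu.Lock()
	defer mf.mu.Unlock()
	old := mf.fields.Load().(map[string]int)
	next := make(map[string]int, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[name] = typ
	mf.fields.Store(next)
}

func main() {
	mf := newMeasurementFields()
	mf.createField("value", 1)
	typ, ok := mf.field("value")
	fmt.Println(typ, ok)
}
```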
Tanya Gordeeva f13a1293f2 tsdb/shard_test: add comparative benchmarks for measurement cardinalities
Reuses some existing benchmarks, but ensures that we write equal numbers of
points for comparison.
2018-11-02 18:49:17 -07:00
Edd Robinson be662a5853 Fix TSM index maxtime modification 2018-10-29 15:44:31 +00:00
Edd Robinson 0f67d8f294
Merge pull request #10387 from influxdata/er-index-vars
Add shards' index types to /debug/vars
2018-10-29 10:12:05 +00:00
Edd Robinson 42827219f3
Merge pull request #10423 from influxdata/er-nil-shard
Fix panic in IndexSet
2018-10-26 19:05:08 +01:00
Jeff Wendling 5c2d36225d fix(tsdb): copy measurement names when expression is provided
We already make copies when no expression is provided, because
the backing slices may go away if the shard they came from is
closed. This fixes the other spot where some backing slices
would be returned.
2018-10-26 11:25:25 -06:00
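The copying itself is straightforward; a sketch of the pattern (helper name assumed):

```go
package main

import "fmt"

// copyNames clones each measurement name so the result stays valid even
// if the shard (and its backing storage) is closed later.
func copyNames(names [][]byte) [][]byte {
	out := make([][]byte, len(names))
	for i, n := range names {
		out[i] = append([]byte(nil), n...) // detach from the backing slice
	}
	return out
}

func main() {
	src := [][]byte{[]byte("cpu"), []byte("mem")}
	fmt.Println(string(copyNames(src)[0]))
}
```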
Edd Robinson cade59e253 Fix panic in IndexSet
This commit fixes a panic where a concurrent removal of a shard and meta
query could cause a `nil` index to be added to the `IndexSet`.
2018-10-26 18:23:54 +01:00
David Norton 3ad44c0ff4 error if manifest is read/written more than once
This change makes the shard digest writer and reader return an error if
the manifest is written or read more than once.
2018-10-22 14:42:05 -04:00
David Norton 3d01051dfc make digest reader skip manifest if needed
This change makes the digest reader read and discard the manifest if
needed. Not all readers of a digest are interested in the manifest.

This change also makes it a requirement for the writer to write a
manifest because it is a non-optional part of a digest file.
2018-10-22 13:14:35 -04:00
Edd Robinson 9b4cf1e39c Add the shard index type to /debug/vars
This commit adds an `indexType` key to the shard sections of the
`/debug/vars` endpoint, as well as the `_internal` shard statistics.

The tag will be reported as `"indexType": "inmem"` or `"indexType":
"tsi1"`.
2018-10-18 13:46:12 +01:00
Stuart Carnie 9520b8d956 fix(tsdb): Fix race calling filterShards outside a lock
Move filterShards inside the lock, as it enumerates the shards map,
which can result in a data race when the map is written concurrently.
2018-10-17 14:14:53 -07:00
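A sketch of the fixed locking discipline; the store and shard types here are placeholders for the real tsdb types:

```go
package main

import (
	"fmt"
	"sync"
)

type shard struct{ id uint64 }

type store struct {
	mu     sync.RWMutex
	shards map[uint64]*shard
}

// filteredShards enumerates the shards map while the read lock is held,
// mirroring the fix: iterating the map outside the lock races with
// concurrent writes to it.
func (s *store) filteredShards(keep func(*shard) bool) []*shard {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var out []*shard
	for _, sh := range s.shards {
		if keep(sh) {
			out = append(out, sh)
		}
	}
	return out
}

func main() {
	s := &store{shards: map[uint64]*shard{1: {id: 1}, 2: {id: 2}}}
	fmt.Println(len(s.filteredShards(func(sh *shard) bool { return sh.id == 1 })))
}
```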
Stuart Carnie 0734f6fe21 feat(tsm1): Improve performance of Gorilla float block decoding
```
name                        old time/op   new time/op    delta
FloatArrayDecodeAll/1-8      45.9ns ± 1%    13.8ns ± 1%   -70.00%  (p=0.000 n=9+9)
FloatArrayDecodeAll/55-8      686ns ± 0%     232ns ± 1%   -66.10%  (p=0.000 n=9+8)
FloatArrayDecodeAll/550-8    5.78µs ± 0%    2.22µs ± 1%   -61.61%  (p=0.000 n=9+9)
FloatArrayDecodeAll/1000-8   10.2µs ± 2%     4.0µs ± 5%   -60.47%  (p=0.000 n=10+10)

name                        old speed     new speed      delta
FloatArrayDecodeAll/1-8     414MB/s ± 1%  1383MB/s ± 1%  +233.76%  (p=0.000 n=9+9)
FloatArrayDecodeAll/55-8    144MB/s ± 0%   424MB/s ± 1%  +194.19%  (p=0.000 n=9+9)
FloatArrayDecodeAll/550-8   133MB/s ± 0%   346MB/s ± 1%  +160.09%  (p=0.000 n=9+10)
FloatArrayDecodeAll/1000-8  135MB/s ± 2%   340MB/s ± 5%  +153.03%  (p=0.000 n=10+10)
```
2018-10-16 17:28:36 -07:00
Stuart Carnie 4dccba29c3 chore(tsm1): go fmt file 2018-10-16 17:07:19 -07:00
Ben Johnson a989b01356
Merge pull request #10249 from hpbieker/hpb-delete-from-prevent-rebuild-series
Prevent DELETE FROM to rebuild series files for shards where nothing is deleted
2018-10-16 14:53:09 -06:00
Edd Robinson 5054d6fae4 Address PR feedback 2018-10-16 13:37:49 +01:00
Stuart Carnie a792fbbdfa fix(encoding): Improve array string encoding perf a little more
Encode the compressed data at the start of the internal buffer. This ensures
the returned slice maintains the entire capacity and is available for
subsequent use.

When we pool / reuse string buffers, this will help considerably.

Improvements over previous commit:

```
name                        old time/op    new time/op    delta
EncodeStrings/10/batch-8       542ns ± 1%     355ns ± 2%   -34.53%  (p=0.008 n=5+5)
EncodeStrings/100/batch-8     5.29µs ± 1%    3.58µs ± 2%   -32.20%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8    48.6µs ± 0%    36.2µs ± 2%   -25.40%  (p=0.008 n=5+5)

name                        old alloc/op   new alloc/op   delta
EncodeStrings/10/batch-8        704B ± 0%        0B       -100.00%  (p=0.008 n=5+5)
EncodeStrings/100/batch-8     9.47kB ± 0%    0.00kB       -100.00%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8    90.1kB ± 0%     0.0kB       -100.00%  (p=0.008 n=5+5)

name                        old allocs/op  new allocs/op  delta
EncodeStrings/10/batch-8        0.00           0.00           ~     (all equal)
EncodeStrings/100/batch-8       1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)
EncodeStrings/1000/batch-8      1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)
```
2018-10-16 12:08:12 +01:00
Stuart Carnie 964bc3c19e fix(encoding): Improve simple8b another 6%; fix inconsequential bug
simple8b encodes deltas[1:], thus deltas[0] >= simple8b.MaxValue is
invalid.

Also changed the loop calculating deltas, RLE and max to be similar to
the batch timestamp code, for greater consistency.

Improvements over previous commit:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    1.50µs ± 1%    1.48µs ± 1%  -1.40%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    6.10µs ± 0%    5.69µs ± 2%  -6.58%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    1.50µs ± 1%    1.49µs ± 0%  -1.21%  (p=0.008 n=5+5)
```

Improvements overall:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    2.04µs ± 0%    1.48µs ± 1%  -27.25%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    8.80µs ± 2%    5.69µs ± 2%  -35.29%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    2.03µs ± 1%    1.49µs ± 0%  -26.93%  (p=0.008 n=5+5)
```
2018-10-16 12:08:12 +01:00
Stuart Carnie 43f96a6ddf feat(encoding): Improve timestamp encoding
Timestamp improvements prior to any improvements to simple8b

```
name                               old time/op    new time/op    delta
EncodeTimestamps/1000_seq/batch-8    2.64µs ± 1%    1.36µs ± 1%  -48.25%  (p=0.008 n=5+5)
EncodeTimestamps/1000_ran/batch-8    64.0µs ± 1%    32.2µs ± 1%  -49.64%  (p=0.008 n=5+5)
EncodeTimestamps/1000_dup/batch-8    9.32µs ± 0%    1.30µs ± 1%  -86.06%  (p=0.008 n=5+5)
```
2018-10-16 12:08:12 +01:00
Stuart Carnie e9531b7830 feat(encoding): Improve integer and simple8b encoding performance
simple8b EncodeAll improvements:

```
name                     old time/op  new time/op  delta
EncodeAll/1_bit-8        28.5µs ± 1%  28.6µs ± 1%     ~     (p=0.133 n=9+10)
EncodeAll/2_bits-8       28.9µs ± 2%  28.7µs ± 0%     ~     (p=0.068 n=10+8)
EncodeAll/3_bits-8       29.3µs ± 1%  28.8µs ± 0%   -1.70%  (p=0.000 n=10+10)
EncodeAll/4_bits-8       29.6µs ± 1%  29.1µs ± 1%   -1.85%  (p=0.000 n=10+10)
EncodeAll/5_bits-8       30.6µs ± 1%  29.8µs ± 2%   -2.70%  (p=0.000 n=10+10)
EncodeAll/6_bits-8       31.3µs ± 1%  30.0µs ± 1%   -4.08%  (p=0.000 n=9+9)
EncodeAll/7_bits-8       32.6µs ± 1%  30.8µs ± 0%   -5.49%  (p=0.000 n=9+9)
EncodeAll/8_bits-8       33.6µs ± 2%  31.0µs ± 1%   -7.77%  (p=0.000 n=10+9)
EncodeAll/10_bits-8      34.9µs ± 0%  31.9µs ± 2%   -8.55%  (p=0.000 n=9+10)
EncodeAll/12_bits-8      36.8µs ± 1%  32.6µs ± 1%  -11.35%  (p=0.000 n=9+10)
EncodeAll/15_bits-8      39.8µs ± 1%  34.1µs ± 2%  -14.40%  (p=0.000 n=10+10)
EncodeAll/20_bits-8      45.2µs ± 3%  36.2µs ± 1%  -19.97%  (p=0.000 n=10+9)
EncodeAll/30_bits-8      55.0µs ± 0%  40.9µs ± 1%  -25.62%  (p=0.000 n=9+9)
EncodeAll/60_bits-8      86.2µs ± 1%  55.2µs ± 1%  -35.92%  (p=0.000 n=10+10)
EncodeAll/combination-8   582µs ± 2%   502µs ± 1%  -13.80%  (p=0.000 n=9+9)
```

EncodeIntegers:

```
name                             old time/op    new time/op    delta
EncodeIntegers/1000_seq/batch-8    2.04µs ± 0%    1.50µs ± 1%  -26.22%  (p=0.008 n=5+5)
EncodeIntegers/1000_ran/batch-8    8.80µs ± 2%    6.10µs ± 0%  -30.73%  (p=0.008 n=5+5)
EncodeIntegers/1000_dup/batch-8    2.03µs ± 1%    1.50µs ± 1%  -26.04%  (p=0.008 n=5+5)
```

EncodeTimestamps (ran is improved due to simple8b improvements)

```
name                               old time/op    new time/op    delta
EncodeTimestamps/1000_seq/batch-8    2.64µs ± 1%    2.65µs ± 2%     ~     (p=0.310 n=5+5)
EncodeTimestamps/1000_ran/batch-8    64.0µs ± 1%    33.8µs ± 1%  -47.23%  (p=0.008 n=5+5)
EncodeTimestamps/1000_dup/batch-8    9.32µs ± 0%    9.28µs ± 1%     ~     (p=0.087 n=5+5)
```
2018-10-16 12:08:12 +01:00
Edd Robinson 91d0a8c3d2 Fix index bug in float encoder 2018-10-16 12:08:12 +01:00
Edd Robinson 09da18c08e Add TSM batch key iterator
The batch-focussed TSM key iterator iterates TSM blocks, decoding and
merging blocks where appropriate using the batch-focussed approaches.
2018-10-16 12:08:12 +01:00
Edd Robinson 51233b71a5 Add batch block encoders 2018-10-16 12:05:52 +01:00
Edd Robinson 592127e411 Batch oriented unsigned encoder 2018-10-16 12:05:52 +01:00
Edd Robinson a7a70a920e Batch oriented boolean encoders
This commit adds a tsm1 function for encoding a batch of booleans into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders using
randomly generated sets of booleans.
2018-10-16 12:05:52 +01:00
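A bare-bones sketch of what a batch boolean encoder does, packing one bit per value into the provided buffer; the real tsm1 block additionally carries header and count information:

```go
package main

import "fmt"

// booleanArrayEncode packs a batch of booleans one bit per value into
// buf; a simplified illustration of a batch-oriented boolean encoder.
func booleanArrayEncode(src []bool, buf []byte) []byte {
	buf = buf[:0]
	n := (len(src) + 7) / 8
	for i := 0; i < n; i++ {
		buf = append(buf, 0)
	}
	for i, v := range src {
		if v {
			buf[i/8] |= 128 >> uint(i%8)
		}
	}
	return buf
}

func main() {
	fmt.Printf("%08b\n", booleanArrayEncode([]bool{true, false, true}, nil))
}
```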
Jeff Wendling a4d4ef6999 Improvements to batch float encoder
- Inlined the closure to avoid a function call.
- Changed append(b, make([]byte, 8)...) to inline the make call.
- Checked for NaN once at the end, assuming NaN is infrequent.

New performance delta comparing the current iterators to the new batch
function:

name                   old time/op    new time/op    delta
EncodeFloats/10_seq      1.32µs ± 2%    0.17µs ± 2%  -87.39%  (p=0.000 n=10+10)
EncodeFloats/10_ran      2.09µs ± 1%    0.15µs ± 0%  -92.97%  (p=0.000 n=10+9)
EncodeFloats/100_seq     8.37µs ± 2%    1.28µs ± 2%  -84.74%  (p=0.000 n=10+10)
EncodeFloats/100_ran     19.1µs ± 1%     1.3µs ± 1%  -93.08%  (p=0.000 n=9+9)
EncodeFloats/1000_seq    60.4µs ± 1%    12.6µs ± 0%  -79.13%  (p=0.000 n=9+7)
EncodeFloats/1000_ran     212µs ± 1%      12µs ± 1%  -94.53%  (p=0.000 n=9+8)

name                   old alloc/op   new alloc/op   delta
EncodeFloats/10_seq       0.00B          0.00B          ~     (all equal)
EncodeFloats/10_ran       0.00B          0.00B          ~     (all equal)
EncodeFloats/100_seq      0.00B          0.00B          ~     (all equal)
EncodeFloats/100_ran      0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_ran     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
EncodeFloats/10_seq        0.00           0.00          ~     (all equal)
EncodeFloats/10_ran        0.00           0.00          ~     (all equal)
EncodeFloats/100_seq       0.00           0.00          ~     (all equal)
EncodeFloats/100_ran       0.00           0.00          ~     (all equal)
EncodeFloats/1000_seq      0.00           0.00          ~     (all equal)
EncodeFloats/1000_ran      0.00           0.00          ~     (all equal)
2018-10-16 12:05:52 +01:00
Edd Robinson ee607f9288 Batch oriented string encoders
This commit adds a tsm1 function for encoding a batch of strings into a
provided buffer. The new function also shares the buffer between the
input data and the snappy encoded output, reducing allocations.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders using
randomly generated strings.

name                old time/op    new time/op    delta
EncodeStrings/10      2.14µs ± 4%    1.42µs ± 4%   -33.56%  (p=0.000 n=10+10)
EncodeStrings/100     12.7µs ± 3%    10.9µs ± 2%   -14.46%  (p=0.000 n=10+10)
EncodeStrings/1000     132µs ± 2%     114µs ± 2%   -13.88%  (p=0.000 n=10+9)

name                old alloc/op   new alloc/op   delta
EncodeStrings/10        657B ± 0%      704B ± 0%    +7.15%  (p=0.000 n=10+10)
EncodeStrings/100     6.14kB ± 0%    9.47kB ± 0%   +54.14%  (p=0.000 n=10+10)
EncodeStrings/1000    61.4kB ± 0%    90.1kB ± 0%   +46.66%  (p=0.000 n=10+10)

name                old allocs/op  new allocs/op  delta
EncodeStrings/10        3.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeStrings/100       3.00 ± 0%      1.00 ± 0%   -66.67%  (p=0.000 n=10+10)
EncodeStrings/1000      3.00 ± 0%      1.00 ± 0%   -66.67%  (p=0.000 n=10+10)
2018-10-16 12:05:52 +01:00
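A simplified sketch of the approach: pack the strings and snappy-compress the packed form. The layout below is illustrative only and does not reproduce the actual tsm1 string block format:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/golang/snappy"
)

// stringArrayEncode packs the strings as varint length + bytes and then
// snappy-compresses the packed form; reusing the caller's buffer for the
// packed input is the idea behind "sharing the buffer".
func stringArrayEncode(src []string, buf []byte) []byte {
	packed := buf[:0]
	var tmp [binary.MaxVarintLen64]byte
	for _, s := range src {
		n := binary.PutUvarint(tmp[:], uint64(len(s)))
		packed = append(packed, tmp[:n]...)
		packed = append(packed, s...)
	}
	// snappy.Encode writes into a destination sized by MaxEncodedLen.
	dst := make([]byte, snappy.MaxEncodedLen(len(packed)))
	return snappy.Encode(dst, packed)
}

func main() {
	out := stringArrayEncode([]string{"cpu", "mem"}, nil)
	fmt.Println(len(out) > 0)
}
```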
Edd Robinson d1b7e02483 Batch oriented timestamp encoders
This commit adds a tsm1 function for encoding a batch of timestamps into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice, a randomly generated input slice and a
duplicate slice. All slices are sorted.

name                       old time/op    new time/op    delta
EncodeTimestamps/10_seq       153ns ± 2%     104ns ± 2%  -31.62%  (p=0.000 n=9+10)
EncodeTimestamps/10_ran       191ns ± 2%     142ns ± 0%  -25.73%  (p=0.000 n=10+9)
EncodeTimestamps/10_dup       114ns ± 1%      68ns ± 4%  -39.77%  (p=0.000 n=8+10)
EncodeTimestamps/100_seq      704ns ± 2%     321ns ± 2%  -54.44%  (p=0.000 n=9+9)
EncodeTimestamps/100_ran     7.27µs ± 4%    7.01µs ± 2%   -3.59%  (p=0.000 n=10+10)
EncodeTimestamps/100_dup      756ns ± 3%     396ns ± 2%  -47.57%  (p=0.000 n=10+10)
EncodeTimestamps/1000_seq    6.32µs ± 1%    2.46µs ± 2%  -61.01%  (p=0.000 n=8+10)
EncodeTimestamps/1000_ran     108µs ± 0%      68µs ± 3%  -37.57%  (p=0.000 n=8+10)
EncodeTimestamps/1000_dup    7.26µs ± 1%    3.64µs ± 1%  -49.80%  (p=0.000 n=10+8)

name                       old alloc/op   new alloc/op   delta
EncodeTimestamps/10_seq       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/10_ran       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/10_dup       0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_seq      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_ran      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/100_dup      0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_ran     0.00B          0.00B          ~     (all equal)
EncodeTimestamps/1000_dup     0.00B          0.00B          ~     (all equal)

name                       old allocs/op  new allocs/op  delta
EncodeTimestamps/10_seq        0.00           0.00          ~     (all equal)
EncodeTimestamps/10_ran        0.00           0.00          ~     (all equal)
EncodeTimestamps/10_dup        0.00           0.00          ~     (all equal)
EncodeTimestamps/100_seq       0.00           0.00          ~     (all equal)
EncodeTimestamps/100_ran       0.00           0.00          ~     (all equal)
EncodeTimestamps/100_dup       0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_seq      0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_ran      0.00           0.00          ~     (all equal)
EncodeTimestamps/1000_dup      0.00           0.00          ~     (all equal)
2018-10-16 12:05:52 +01:00
Edd Robinson de5ca4a108 Batch oriented int encoders
This commit adds a tsm1 function for encoding a batch of ints into a
provided buffer.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice, a randomly generated input slice and a
duplicate slice:

name                     old time/op    new time/op    delta
EncodeIntegers/10_seq       144ns ± 2%      41ns ± 1%   -71.46%  (p=0.000 n=10+10)
EncodeIntegers/10_ran       304ns ± 7%     140ns ± 2%   -53.99%  (p=0.000 n=10+10)
EncodeIntegers/10_dup       147ns ± 4%      41ns ± 2%   -72.14%  (p=0.000 n=10+9)
EncodeIntegers/100_seq      483ns ± 7%     208ns ± 1%   -56.98%  (p=0.000 n=10+9)
EncodeIntegers/100_ran     1.64µs ± 7%    1.01µs ± 1%   -38.42%  (p=0.000 n=9+9)
EncodeIntegers/100_dup      484ns ±14%     210ns ± 2%   -56.63%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq    3.11µs ± 2%    1.81µs ± 2%   -41.68%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran    16.9µs ±10%    11.0µs ± 2%   -34.58%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup    3.05µs ± 3%    1.81µs ± 2%   -40.71%  (p=0.000 n=10+8)

name                     old alloc/op   new alloc/op   delta
EncodeIntegers/10_seq       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_ran       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_dup       32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_seq      32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_ran       128B ± 0%        0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_dup      32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq     32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran    1.15kB ± 0%    0.00kB       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup     32.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)

name                     old allocs/op  new allocs/op  delta
EncodeIntegers/10_seq        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_ran        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/10_dup        1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_seq       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_ran       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/100_dup       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_seq      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_ran      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
EncodeIntegers/1000_dup      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
2018-10-16 12:05:52 +01:00
Edd Robinson 6b52231a37 Batch oriented float encoders
This commit adds a tsm1 function for encoding a batch of floats into a
buffer. Further, it replaces the `bitstream` library used in the
existing encoders (and all the current decoders) with inlined bit
expressions within the encoder, significantly reducing the function call
overhead for larger batches.

The following benchmarks compare the performance of the existing
iterator based encoders, and the new batch oriented encoders. They look
at a sequential input slice and a randomly generated input slice.

name                   old time/op    new time/op    delta
EncodeFloats/10_seq      1.14µs ± 3%    0.24µs ± 3%  -78.94%  (p=0.000 n=10+10)
EncodeFloats/10_ran      1.69µs ± 2%    0.21µs ± 3%  -87.43%  (p=0.000 n=10+10)
EncodeFloats/100_seq     7.07µs ± 1%    1.72µs ± 1%  -75.62%  (p=0.000 n=7+9)
EncodeFloats/100_ran     15.8µs ± 4%     1.8µs ± 1%  -88.60%  (p=0.000 n=10+9)
EncodeFloats/1000_seq    50.2µs ± 3%    16.2µs ± 2%  -67.66%  (p=0.000 n=10+10)
EncodeFloats/1000_ran     174µs ± 2%      16µs ± 2%  -90.77%  (p=0.000 n=10+10)

name                   old alloc/op   new alloc/op   delta
EncodeFloats/10_seq       0.00B          0.00B          ~     (all equal)
EncodeFloats/10_ran       0.00B          0.00B          ~     (all equal)
EncodeFloats/100_seq      0.00B          0.00B          ~     (all equal)
EncodeFloats/100_ran      0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_seq     0.00B          0.00B          ~     (all equal)
EncodeFloats/1000_ran     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
EncodeFloats/10_seq        0.00           0.00          ~     (all equal)
EncodeFloats/10_ran        0.00           0.00          ~     (all equal)
EncodeFloats/100_seq       0.00           0.00          ~     (all equal)
EncodeFloats/100_ran       0.00           0.00          ~     (all equal)
EncodeFloats/1000_seq      0.00           0.00          ~     (all equal)
EncodeFloats/1000_ran      0.00           0.00          ~     (all equal)
2018-10-16 12:05:52 +01:00
Edd Robinson 9ecadd1a9c Rename time batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson c1d82fccf0 Rename unsigned batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson 536e7bb62f Rename string batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson d819378ad3 Rename boolean batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson f81e75f4c1 Rename integer batch decoders 2018-10-16 12:05:52 +01:00
Edd Robinson 12bb8881be Rename float batch decoders 2018-10-16 12:05:52 +01:00
Jonathan A. Sternberg af8bf99256
Do not panic when a series id iterator is nil 2018-10-11 15:16:59 -05:00
Jeff Wendling 69dc031a75 Use platform for most of the read service code
This commit deletes most of the code to service reads from influxdb
and pulls it in from platform instead.

Of note, the models.Tag and models.Tags types are now aliases to the
platform models.Tag and models.Tags types. Additionally, many types
in the tsdb package relating to cursors are also aliases to the same
types in the platform cursors package.

This updates the platform and flux repos to the current master in the
Gopkg.lock.
2018-10-10 11:20:25 -06:00
Ben Johnson 844b7ef9bf
Merge pull request #10299 from influxdata/bj-tsm1-panic-fix
Fix TSM1 panic on reader error.
2018-10-10 08:12:17 -06:00
Edd Robinson ee61ed3dca
Merge pull request #10327 from influxdata/er-duplicate-tsm
Cleanup failed TSM snapshots
2018-10-09 17:08:04 +01:00
Ben Johnson 1580f90be4
Merge pull request #10339 from influxdata/bj-fix-series-index-tombstone
Fix series file tombstoning.
2018-10-08 08:35:05 -06:00
Ben Johnson 2cb97146f0
Fix series file tombstoning.
This commit fixes an issue with the series file compaction process
where tombstones are lost after compaction and series existence
checks are not correct. This commit also fixes some smaller flushing
issues within the series file that mainly related to testing.
2018-10-05 08:23:25 -06:00
ludweeg 5622355526 Simplify s[:] to s where s is a slice 2018-10-04 17:10:21 +03:00
Edd Robinson d649d5928b Cleanup failed TSM snapshot
If an error occurred after the cache had been snapshotted to one or
more TSM files, but before the cache and WAL were cleaned up, then the
cache would be repeatedly snapshotted, generating duplicate level 1 TSM
files.

This commit attempts to clean those files up by removing the temporary
TSM file(s). The snapshot will be retried.
2018-10-03 16:34:54 +01:00
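A sketch of the cleanup step, assuming temporary TSM files carry a ".tmp" suffix (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// removeTmpTSMFiles deletes leftover temporary TSM files so a failed
// snapshot can be retried without accumulating duplicate level 1 files.
func removeTmpTSMFiles(dir string) error {
	matches, err := filepath.Glob(filepath.Join(dir, "*.tsm.tmp"))
	if err != nil {
		return err
	}
	for _, f := range matches {
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(removeTmpTSMFiles(os.TempDir()))
}
```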
Ben Johnson bdcbad3fc9
Fix append of possible nil iterator.
This commit updates an iterator list to ignore `nil` iterators.
Adding a `nil` iterator caused `SeriesIterators.Close()` to panic.
2018-10-02 13:19:21 -06:00
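The fix amounts to filtering out `nil` before appending; a small sketch of the pattern:

```go
package main

import "fmt"

type seriesIDIterator interface{ Next() (uint64, bool) }

// appendIterator ignores nil iterators so a later Close() over the
// slice cannot panic on a nil element.
func appendIterator(itrs []seriesIDIterator, itr seriesIDIterator) []seriesIDIterator {
	if itr == nil {
		return itrs
	}
	return append(itrs, itr)
}

func main() {
	var itrs []seriesIDIterator
	itrs = appendIterator(itrs, nil)
	fmt.Println(len(itrs)) // 0: the nil iterator was skipped
}
```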
Ben Johnson 0d777ad423
Fix tsi1 sketch locking. 2018-09-26 17:01:47 -06:00