Commit Graph

1631 Commits (40ec85aacd07069e4815c33d051d646411d0076e)

Author SHA1 Message Date
Edd Robinson 05bc4dec00
Refactor 2017-01-05 09:50:23 -07:00
Edd Robinson c535e3899a
Remove in-memory index from Shard and Store 2017-01-05 09:47:09 -07:00
Edd Robinson 2171d9471b
Initialise index in shards 2017-01-05 09:42:48 -07:00
Ben Johnson 57d0556174
Fix 32-bit issues. 2017-01-05 09:34:37 -07:00
Ben Johnson 41f2babe66
Minor TSI index benchmark refactor 2017-01-05 09:34:37 -07:00
Ben Johnson ac9c6a0207
Add TSI index benchmark. 2017-01-05 09:34:37 -07:00
Ben Johnson 8d40ceb00c
TSI1 Index 2017-01-05 09:34:36 -07:00
Ben Johnson 9b62df23d2
Add MeasurementBlock. 2017-01-05 09:34:36 -07:00
Ben Johnson 3240af07e0
Fix RHH packing. 2017-01-05 09:34:36 -07:00
Ben Johnson e25d61e4bd
TagSet writer & reader. 2017-01-05 09:34:36 -07:00
Ben Johnson 4eeb81ef38
Add SeriesList tombstoning. 2017-01-05 09:34:36 -07:00
Ben Johnson 2c34b24f5c
Implemented SeriesList 2017-01-05 09:34:36 -07:00
Ben Johnson 6523675c20
Implemented RHH hash map. 2017-01-05 09:34:35 -07:00
Mark Rushakoff 6a94d200c8 Merge remote-tracking branch 'influx/master' into mr-godoc 2017-01-04 13:27:36 -08:00
Mark Rushakoff 89a587e865 Use one atomic operation in (*Cache).decreaseSize
The previous implementation was susceptible to a race condition (of
correctness) since c.decreaseSize is called without a lock in
(*Cache).WriteMulti.

There were already tests which asserted the correctness of the result of
decreaseSize, so no tests were added or modified.
2017-01-04 13:13:31 -08:00
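As an illustration of the single-atomic-operation approach described in the commit above, here is a minimal Go sketch; the cache type, field, and method names are simplified stand-ins rather than the actual tsm1 code:

package main

import (
	"fmt"
	"sync/atomic"
)

// cache is a simplified stand-in for the tsm1 cache; only the size counter matters here.
type cache struct {
	size uint64
}

// increaseSize adds delta to the tracked size with a single atomic operation.
func (c *cache) increaseSize(delta uint64) {
	atomic.AddUint64(&c.size, delta)
}

// decreaseSize subtracts delta using the idiom documented in sync/atomic:
// AddUint64(&x, ^uint64(delta-1)) is equivalent to x -= delta. No lock is
// needed, so callers such as a WriteMulti-style path stay race free.
func (c *cache) decreaseSize(delta uint64) {
	atomic.AddUint64(&c.size, ^uint64(delta-1))
}

func main() {
	c := &cache{}
	c.increaseSize(100)
	c.decreaseSize(40)
	fmt.Println(atomic.LoadUint64(&c.size)) // 60
}
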
Cory LaNou 3c518f8927
panicking is bad -> error returns are good 2017-01-03 14:28:29 -06:00
Mark Rushakoff 07b87f2630 Miscellaneous lint cleanup 2017-01-03 09:47:32 -08:00
Mark Rushakoff 41415cf2fb Update godoc for tsm1 package 2017-01-02 07:30:18 -08:00
Mark Rushakoff 4a774eb600 Update godoc for the tsdb package 2016-12-30 21:12:37 -08:00
Gustav Westling 26b33307ae
Resolved PR comments on test files 2016-12-30 11:42:38 +01:00
Gustav Westling 56d98325da
Removed ineffective assignments, and added checks for errors that previously were not checked 2016-12-29 20:26:15 +01:00
Jason Wilder 2468347ffb Fix comment 2016-12-19 14:17:49 -07:00
Jason Wilder 326557e539 Fix race in partition.reset 2016-12-19 14:17:01 -07:00
Jason Wilder e91e45d71c Fix panic in cache benchmark 2016-12-19 14:17:01 -07:00
Jason Wilder 0b6b9ea1cb Use atomics for cache.snapshotSize stat 2016-12-19 14:17:01 -07:00
Jason Wilder 637a67ea35 Reduce lock contention on measurementFields 2016-12-19 14:17:01 -07:00
Jason Wilder b7c1e625b0 Move needSort tracking to Deduplicate
This eliminates some *UnixNano() calls and also simplifies the cache
logic so that it does not need to worry about whether entries are
sorted.
2016-12-19 14:17:01 -07:00
Jason Wilder dea87703cd Reduce UnixNano pointer call 2016-12-19 14:17:01 -07:00
Mark Rushakoff 722b6345fe Fix unchecked error in templated Read${TYPE}Block 2016-12-19 09:31:26 -08:00
Jonathan A. Sternberg ec57108520 Use proper uber-go/zap import path
It looks like the real import path to the project is go.uber.org/zap
instead of github.com/uber-go/zap since the example in the project
references that path.
2016-12-15 08:54:14 -06:00
Edd Robinson ec27c57127 Further optimisations and a race fix 2016-12-14 18:23:36 +00:00
Edd Robinson 05ec6ad9ad Add to index safely 2016-12-14 18:23:36 +00:00
Edd Robinson d78ca1a0f3 Fix some races 2016-12-14 18:23:36 +00:00
Edd Robinson d2923c7bf9 Add hints as to how to pre-allocate entry values
Currently, whenever a snapshot occurs the Cache is reset and so many
allocations are repeated, as the same type of data is re-added to
the Cache.

This commit allows the stores to keep track of the number of values
within an entry, and use that size as a hint when the same entry needs
to be recreated after a snapshot.

To avoid hints persisting over a long period of time they are deleted
after every snapshot, and rebuilt using the most recent entries only.
2016-12-14 18:23:36 +00:00
Edd Robinson f2b5c7f5be Reduce contention when adding entries 2016-12-14 18:23:36 +00:00
Edd Robinson 98f0392ca6 Update size using atomic 2016-12-14 18:23:36 +00:00
Edd Robinson 66edb32182 Sharded Cache using a hash ring 2016-12-14 18:23:36 +00:00
Edd Robinson d3e6d4e7ca Add benchmarks 2016-12-14 18:21:50 +00:00
Jonathan A. Sternberg 21502a39e8 Switch logging to use structured logging everywhere
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
2016-12-14 10:45:15 -06:00
gunnaraasen 78b1a0e771 Add stats on dropped measurements and series; Fixes #7697 2016-12-13 15:17:31 -08:00
Jason Wilder 4f28c90b54 Optimize Value.Deduplicate
Deduplicate is called from various places in the engine and can cause
a lot of garbage to get created.  It first creates a map and then
adds each value to the map in order (1st alloc).  It then creates a
new slice (2nd alloc) and appends everything from the map to the slice.
Finally, it sorts the new slice (3rd alloc).

This switches the algorithm to use stable sorting and reusing the existing
slice to avoid allocations.
2016-12-08 21:10:56 -07:00
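A minimal Go sketch of the in-place deduplication approach described above (stable sort, then compact the same slice); the value type and field names are illustrative, and sort.SliceStable is used for brevity rather than the exact sort interface the engine uses:

package main

import (
	"fmt"
	"sort"
)

// value is an illustrative stand-in for a tsm1 value: a timestamp plus a payload.
type value struct {
	unixNano int64
	v        float64
}

// deduplicate stable-sorts the values by time and compacts the same slice in
// place, so no map and no second slice are allocated; for duplicate timestamps
// the later entry wins because the sort is stable.
func deduplicate(a []value) []value {
	if len(a) <= 1 {
		return a
	}
	sort.SliceStable(a, func(i, j int) bool { return a[i].unixNano < a[j].unixNano })
	out := a[:1]
	for _, v := range a[1:] {
		if v.unixNano == out[len(out)-1].unixNano {
			out[len(out)-1] = v // overwrite the earlier duplicate
			continue
		}
		out = append(out, v)
	}
	return out
}

func main() {
	vals := []value{{3, 1}, {1, 2}, {3, 9}, {2, 4}}
	fmt.Println(deduplicate(vals)) // [{1 2} {2 4} {3 9}]
}
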
Hrvoje Marjanovic 9483b8b409 gofmt 2016-12-03 22:06:38 +01:00
Hrvoje Marjanovic 6ed708e3fd Reduce pool size, change WAL writers default
A big pool can lead to huge memory usage under certain loads.

See #7640 for detailed discussion.
2016-12-02 18:45:43 +01:00
Allen Petersen 31129ab0e9 Use slash separator for filenames in tar archives
NO-OP on platforms with unix path separator.
On Windows paths get converted to slashes before adding to archive and back to backslashes during restore.
2016-11-29 09:44:08 -08:00
Jason Wilder 27d157763a Merge pull request #7651 from influxdata/jw-shard-last-modified
Expose Shard.LastModified
2016-11-23 10:19:26 -07:00
Jason Wilder e8a28cfbab Expose Shard.LastModified
This returns the LastModified time of the shard.  The LastModified
time is the wall time when a change to the shard's state occurred.
It uses the WAL or FileStore to determine the max mod time.
2016-11-23 10:04:07 -07:00
Edd Robinson b83b8df32f Merge pull request #7635 from influxdata/er-msg
Fix incorrect error message
2016-11-23 13:58:33 +00:00
Edd Robinson 9e9719749f Sprinkle some golint 2016-11-17 16:31:38 +00:00
Edd Robinson 28ba8ced74 Fixes #7625 2016-11-17 16:31:36 +00:00
Jason Wilder 3a5a01181b Switch all Value types from pointers 2016-11-15 16:13:55 -07:00
Jason Wilder bf17074f58 Avoid allocation when counting tag keys
A new sorted slice was created by the monitor func every 10s.  The
tag keys don't need to be sorted, so this avoids the allocation of the
slice and one during sorting.
2016-11-15 16:13:55 -07:00
Jason Wilder 0ee58c208a Switch time.Sleep to time.Ticker
Avoids an allocation when calling time.Sleep
2016-11-15 16:13:55 -07:00
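A small Go sketch of the loop shape this change implies: create one time.Ticker up front and select on it, instead of sleeping on every pass (which the commit notes caused an allocation); the work inside the loop is a placeholder:

package main

import (
	"fmt"
	"time"
)

// run uses one Ticker created up front instead of calling time.Sleep on every
// pass through the loop, which is the change the commit above describes.
func run(done <-chan struct{}) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			fmt.Println("periodic work, e.g. a compaction check")
		case <-done:
			return
		}
	}
}

func main() {
	done := make(chan struct{})
	go run(done)
	time.Sleep(350 * time.Millisecond)
	close(done)
}
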
Jason Wilder 73b8f52ca0 Cache results of findGenerations
This allocates quite a bit and it's called multiple times per
second per shard.  The generations don't change until a compaction
has occurred, so most of the time it is re-calculating the same thing
and creating garbage.
2016-11-15 16:13:55 -07:00
Jason Wilder 0b6f5441b9 Add config option to messages when limits exceeded
When a limit is exceeded, we return errors and sometimes log (if appropriate)
that a limit was exceeded.  The messages don't always provide an indication
as to where or how they are configured.

Instead, return the config option (which is easy to search for) as well as the limit
currently set and the value that exceeded it when possible.
2016-10-28 14:54:45 -06:00
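A hedged sketch of the kind of limit error this commit describes, where the message names the config option and includes both the configured limit and the offending value; the function name and exact wording are illustrative, not the actual influxdb error text (max-series-per-database is one of the options mentioned elsewhere in this log):

package main

import "fmt"

// errMaxSeriesPerDatabaseExceeded builds a limit error that names the config
// option so the message is easy to search for, and includes both the
// configured limit and the offending value. The function name and exact
// wording are illustrative.
func errMaxSeriesPerDatabaseExceeded(limit, got int) error {
	return fmt.Errorf("max-series-per-database limit exceeded: (%d/%d)", got, limit)
}

func main() {
	fmt.Println(errMaxSeriesPerDatabaseExceeded(1000000, 1000001))
	// max-series-per-database limit exceeded: (1000001/1000000)
}
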
Jason Wilder b1ceb5e66d Add cache write OK, Dropped, Error stats
Adds a new dropped stat, and fixes the OK and error stats which were not
actually being collected and stored.
2016-10-28 12:15:50 -06:00
Jason Wilder 873189e0c2 Fix panic: interface conversion: tsm1.Value is *tsm1.FloatValue, not *tsm1.StringValue
If concurrent writes to the same shard occur, it's possible for different types to
be added to the cache for the same series.  The way the measurementFields map on the
shard is updated is racy in this scenario which would normally prevent this from occurring.
When this occurs, the snapshot compaction panics because it can't encode different types
in the same series.

To prevent this, we have the cache return an error if a different type is added to existing
values in the cache.

Fixes #7498
2016-10-28 12:15:50 -06:00
Jason Wilder e388912b6c Fix race in findGenerations
The file store stats slice is re-used which causes the race below:

WARNING: DATA RACE
Write at 0x00c42007e140 by goroutine 43:
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*FileStore).Stats()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/file_store.go:511 +0x22e
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).findGenerations()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/compact.go:461 +0x6f
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).PlanLevel()

Previous read at 0x00c42007e140 by goroutine 40:
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).findGenerations()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/compact.go:463 +0x13d
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).PlanOptimize()
2016-10-28 12:15:49 -06:00
Jason Wilder 96c9fb3648 Actually update the defaults for TSM
7510 updated the defaults in the sample config, but did not update
the code.  This updates the code defaults for the config options that changed.
2016-10-26 09:49:25 -06:00
Steven Hartland 3f16197243 Improve tsm1 cache performance
Reduce the cache lock contention by widening the cache lock scope in WriteMulti. While this sounds counter-intuitive, previously it was:
* 1 x Read Lock to read the size
* 1 x Read Lock per values
* 1 x Write Lock per values on race
* 1 x Write Lock to update the size

We now have:
* 1 x Write Lock

This also reduces contention on the entries' Values lock, as we have the global cache lock.

Move the calculation of the added size before taking the lock as it takes time and doesn't need the lock.

This also fixes a race in WriteMulti due to the lock not being held across the entire operation, which could cause the cache size to have an invalid value if Snapshot has been run in between the addition of the values and the size update.

Fix the cache benchmarks, which were benchmarking the creation of the cache rather than its operation, and add a parallel test for a more real-world scenario; this could still be improved.

Add a fast-path newEntryValues for the new-entry case which avoids taking the values lock and all the other calculations.

Drop the lock before performing the sort in Cache.Keys().
2016-10-25 15:24:51 -06:00
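A simplified Go sketch of the widened-lock approach described above: compute the added size before locking, then perform all store updates and the size update under one write lock; the cache type here is a stand-in, not the real tsm1 cache:

package main

import (
	"fmt"
	"sync"
)

// cache is a simplified stand-in: a single mutex guards the store and size.
type cache struct {
	mu    sync.Mutex
	size  uint64
	store map[string][]float64
}

// writeMulti computes the added size before locking (it needs no protection),
// then does all store updates and the size update under one write lock
// instead of many short read/write lock acquisitions.
func (c *cache) writeMulti(values map[string][]float64) {
	var added uint64
	for _, vals := range values {
		added += uint64(len(vals)) * 8
	}

	c.mu.Lock()
	for key, vals := range values {
		c.store[key] = append(c.store[key], vals...)
	}
	c.size += added
	c.mu.Unlock()
}

func main() {
	c := &cache{store: make(map[string][]float64)}
	c.writeMulti(map[string][]float64{"cpu,host=a": {1, 2, 3}})
	fmt.Println(c.size) // 24
}
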
Jonathan A. Sternberg a515aeda39 Optimize first/last when no group by interval is present
The `first()` and `last()` functions' response time would increase linearly
with the number of points even though it seems like it shouldn't. This
optimization greatly reduces the amount of time to return a response
when no `GROUP BY time(...)` clause is present in a query.
2016-10-25 09:57:31 -05:00
Jason Wilder 686d1a7ba4 Remove unused config options 2016-10-24 15:32:38 -06:00
Edd Robinson 0ee093f1fb Memoize output of FileStore.Stats 2016-10-24 10:23:20 -06:00
Jonathan A. Sternberg 3681bc8a43 Filter out series within shards that do not have data for that series
Previously, we would return a full tag set for every shard and the tag
set would include all series that existed in the database index
including series that didn't physically exist within that shard. This
led to the tag sets returned being incredibly huge when we had high
cardinality but sparse data. Since the data was sparse, it was
unexpected that it would cause such a large strain on the system by most
people.

Now we filter out the series ids that are not assigned to the current
shard when computing a tag set for that shard. This lowers the memory
usage for high cardinality sparse data drastically and allows queries on
those to complete successfully.

This does not resolve issues for high cardinality data in every shard
that is also spread out over a long series of time. That situation isn't
nearly as common as the above situation though.
2016-10-20 14:15:34 -05:00
Jason Wilder 2e473e9518 Fix panic in AppendSeriesKeyByID
Calling this function with a series ID that does not exist in
the measurement causes a panic.

Fixes #7334
2016-10-19 11:07:19 -06:00
Jason Wilder b50d9558cf Merge pull request #7479 from influxdata/jw-clean-err
Skip cleanup if dir does not exist
2016-10-18 15:49:09 -06:00
Jason Wilder f30b00c24f Skip cleanup if dir does not exist 2016-10-18 15:33:39 -06:00
Mark Rushakoff 377c40f122 Add stats for active compactions
Unify logic around compaction execution to a single place.

Also report on the error stats that we track. Previously they were not
emitted in the stats output.
2016-10-18 14:12:21 -07:00
Joe LeGasse de9c743004 TSM: update comments for disabling level compactions 2016-10-18 14:14:59 -06:00
Joe LeGasse eda8f70372 TSM: Handle concurrent deletes for compaction 2016-10-18 14:14:59 -06:00
Jason Wilder 47b8049e48 Update comment 2016-10-18 14:14:53 -06:00
Jason Wilder ed7975874f Rename Enabled -> Enable 2016-10-18 12:22:00 -06:00
Jason Wilder f254b4f3ae Allow snapshot compactions during deletes
If a delete takes a long time to process while writes to the
shard are occurring, it was possible for the cache to fill up
and writes to be rejected.  This occurred because we disabled
all compactions while writing tombstone file to prevent deleted
data from re-appearing after a compaction completed.

Instead, we only disable the level compactions and allow snapshot
compactions to continue.  Snapshots already handle deleted data
with the cache and wal.

Fixes #7161
2016-10-18 12:14:51 -06:00
Jonathan A. Sternberg 41e4e73d4e Reduce map allocations when computing the TagSets of a measurement
Instead of assigning a boolean value of true to the filter expressions
when there was no meaningful expression, this drops a boolean expression
of true from the filter expressions so we don't have to perform a map
assignment. This allows us to reduce allocations and assignments when a
`WHERE` clause only contains tag comparisons and no field comparisons.
2016-10-17 12:13:19 -05:00
Jason Wilder a5f871d62c Rework monitoring to avoid allocations 2016-10-10 11:42:15 -06:00
Jason Wilder bbecb3f03d Drop points that would exceed limits
This changes the behavior of the max-series-per-database and
max-values-per-tag limits to drop points that would exceed the limits
and allow the remaining points to be written.  Previously, the whole
batch would fail and return a 500 error to the client.

This will now write the allowed points and return a `partial write`
error indicating some of the points were dropped, how many were
dropped, and one of the problem measurements and tags.
2016-10-10 11:42:15 -06:00
Jason Wilder 8fce6bba48 Add tag value cardinality limit 2016-10-10 11:42:15 -06:00
Mark Rushakoff 5ae8cf8312 Speed up shutdown
On my machine with about 20 shards, it would take 10+ seconds to shut
down InfluxDB with SIGINT. After this change, it shuts down nearly
instantly.

(*tsdb.Store).Close was shutting down each of its shards sequentially.
Each shard's engine would signal to its compaction goroutines to quit,
and because each compaction goroutine has a hardcoded 1-second sleep in
between checks, waiting for the goroutines would often block for up to a
second.

This change closes all of the TSDB store's shards in parallel. This
means it's possible that multiple close values could error at once, but
we're still only returning the first error, consistent with previous
behavior. That being said, the return value of (*tsdb.Store).Close is
ignored in (*cmd/influxd/run.Server).Close anyway.
2016-10-10 09:18:47 -07:00
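A minimal Go sketch of closing shards in parallel while returning only the first error, as described above; the shard type and error are placeholders:

package main

import (
	"errors"
	"fmt"
	"sync"
)

// closer is a stand-in for a shard: anything with a Close method.
type closer interface{ Close() error }

type shard struct{ id int }

func (s *shard) Close() error {
	if s.id == 2 {
		return errors.New("shard 2 failed to close")
	}
	return nil
}

// closeAll closes every shard in its own goroutine and returns only the first
// error seen, mirroring the behaviour described in the commit above.
func closeAll(shards []closer) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for _, s := range shards {
		wg.Add(1)
		go func(s closer) {
			defer wg.Done()
			if err := s.Close(); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(s)
	}
	wg.Wait()
	return firstErr
}

func main() {
	fmt.Println(closeAll([]closer{&shard{1}, &shard{2}, &shard{3}}))
}
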
Jason Wilder 798fa0a9f8 Return error with unknown field type
This will just panic when trying to snapshot the value because EmptyValue
can't be written to TSM files.
2016-10-03 16:30:21 -06:00
Jason Wilder 125f106956 Pre-size the values map when writing points 2016-10-03 16:30:21 -06:00
Joe LeGasse 743946fafb models: Add FieldIterator type
The FieldIterator is used to scan over the fields of a point, providing
information, and delaying parsing/decoding the value until it is needed.
This change uses this new type to avoid the allocation of a map for the
fields which is then thrown away as soon as the points get converted
into columns within the datastore.
2016-10-03 16:30:21 -06:00
Jason Wilder 20f1fb3f7f Replace gotos with anonymous functions 2016-10-03 12:08:53 -06:00
Jason Wilder 750c8b3932 Reduce lock contention in cache.Values
The cache read lock was held for the whole duration of the call when it
only needs to be held at the beginning since entries have their
own locks.
2016-10-03 10:21:54 -06:00
Jason Wilder 1b462312a9 Re-use decoder pools
The decoders were held onto each iterator to avoid creating them all
the time.  Some of them use quite a bit of memory so they can
be expensive to create when querying across many series.

Instead, move them to a re-usable pool where we create the minimum that
could actively be in use.  This reduces garbage as well as makes the iterators
less expensive to create.
2016-10-03 10:21:54 -06:00
Jason Wilder f727effd7f Merge pull request #7385 from influxdata/jw-query-allocs
Reduce query planning allocations
2016-10-03 09:08:36 -06:00
Jason Wilder a15a416eaa Fix decoding RLE integer blocks with negative deltas
Integer blocks that were run length encoded could produce the wrong
value when read back out because the deltas were not zig zag decoded
before scaling the final value.  If the deltas were negative, as would
be seen in a counter that decrements by a constant value, the results
would be random with some negative and positive values.

Fixes #7391
2016-10-02 23:51:29 -06:00
Jason Wilder 68dd312bb1 Reduce allocations when calculating tagsets
The TagSets function was creating a lot of intermediate maps and
slices to calculate the sorted tag sets.  It first creates a map
to group tag sets with their series, it then created an equally
sized slice of the tag keys and sorted them.  Finally, it created
a new slice and added the tag sets in the original map by the ordering
of the sorted keys.  It was also recreating the tags map multiple times,
creating extra garbage in the loop.

This simplifies the code to create one map for grouping and then add
the distinct sets to a slice which is then sorted.  It also fixes the
multiple tag maps getting created.
2016-09-29 16:02:29 -06:00
Mark Rushakoff 97c2f6f5c1 Add walPath tag to shard stats
Without the WAL path as a tag, the diskBytes field looked like it was
reporting the size of the data directory incorrectly.

Fixes #7382.
2016-09-29 10:19:11 -07:00
Jason Wilder dcb65865a2 Merge pull request #7376 from influxdata/jw-revert
Revert re-using byte slices during compactions
2016-09-28 08:24:35 -06:00
joelegasse 87ecd97e7b Merge pull request #7371 from influxdata/2016-09-27--rw--use-gotos-for-encoding-cleanup
Gotos to simplify uses of the new encoder pools.
2016-09-28 08:57:33 -04:00
Jason Wilder 1755f20d2a Revert re-using byte slices during compactions
This is causing a fatal error: fault panic when packing blocks.
2016-09-27 23:41:06 -06:00
Jonathan A. Sternberg e22e33d5fd Merge pull request #7374 from influxdata/merge-from-1.0.1
Merge tag 'v1.0.1'
2016-09-27 20:32:58 -05:00
Jonathan A. Sternberg 3afdf3cd94 Merge tag 'v1.0.1' 2016-09-27 17:53:33 -05:00
rw c3fc87b619 Remove dangling named return value. 2016-09-27 14:18:32 -07:00
rw fcd425c8c6 Incorporate style feedback from Joe. 2016-09-27 14:07:06 -07:00
rw 47c1c6763c Use encoder reset to save on allocs. 2016-09-27 13:31:35 -07:00
rw 9429a2f96a Gotos to simplify uses of the new encoder pools.
For maintainability.
2016-09-27 11:47:25 -07:00
Jason Wilder 5367372253 Merge pull request #7364 from influxdata/2016-09-26-fix-data-race-in-write-path
Fix data race in *tsdb.Shard write path.
2016-09-26 18:34:19 -06:00
rw f131d3cc77 Fix off-by-one error that could panic. 2016-09-26 17:03:03 -07:00
rw 3e0d3be461 Use pre-existing function. 2016-09-26 13:12:10 -07:00
rw bea010b5f3 Fix data race in *tsdb.Shard write path.
Ensure that the Shard's Index is read-locked before calculating the
count of its constituent series.
2016-09-26 12:42:35 -07:00
joelegasse a17d095aae Merge pull request #7350 from influxdata/2016-09-22-reduce-allocs-in-validate-series-and-fields
Remove a few short-lived string allocs. Thanks @rw
2016-09-26 15:01:53 -04:00
Jason Wilder 4b5d989905 Merge pull request #7335 from influxdata/jw-tsm-syscalls
Avoid stat syscall when planning compactions
2016-09-26 12:30:05 -06:00
rw 68c2212aac Shorten name of static-lifetime string var. 2016-09-26 11:26:24 -07:00
rw 02c86ea9db Remove unnecessary string constant. 2016-09-26 11:25:04 -07:00
Jason Wilder 139ef8062e Simplify encoder buffer usage 2016-09-26 12:19:16 -06:00
Jason Wilder 658149a6ff Removed commented out code 2016-09-26 12:19:15 -06:00
Jason Wilder 7f96d78b79 Make encoder re-usable
This allows encoders to be re-used and maintained in a pool to
avoid allocating new ones on every compaction and write of an encoded
block.  The pool used is not a sync.Pool to ensure that the encoders
will not be garbage collected.
2016-09-26 12:19:15 -06:00
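A small Go sketch of a fixed-capacity, channel-backed pool of the kind described above (unlike sync.Pool, its contents are never garbage collected); the encoder type and Reset behaviour are simplified stand-ins:

package main

import "fmt"

// encoder is an illustrative stand-in for a tsm1 block encoder that can be reset.
type encoder struct{ buf []byte }

func (e *encoder) Reset() { e.buf = e.buf[:0] }

// pool is a fixed-capacity pool backed by a buffered channel. Unlike
// sync.Pool, items placed here are never reclaimed by the garbage collector.
type pool struct{ ch chan *encoder }

func newPool(n int) *pool {
	p := &pool{ch: make(chan *encoder, n)}
	for i := 0; i < n; i++ {
		p.ch <- &encoder{}
	}
	return p
}

// get blocks until an encoder is free, bounding how many are live at once.
func (p *pool) get() *encoder { return <-p.ch }

// put resets the encoder and returns it for re-use.
func (p *pool) put(e *encoder) {
	e.Reset()
	p.ch <- e
}

func main() {
	p := newPool(2)
	e := p.get()
	e.buf = append(e.buf, 1, 2, 3)
	p.put(e)
	fmt.Println(len(p.get().buf)) // 0
}
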
Jason Wilder 0401527093 Pre-allocate cache store and entries
These were not sized so they always had to be grown causing
garbage to be created.
2016-09-26 12:19:15 -06:00
Jason Wilder 730ceeea46 Re-used allocated byte slices during compactions 2016-09-26 12:19:15 -06:00
Jason Wilder 6671ef00f0 Reduce allocations in idsForExpr 2016-09-26 08:36:59 -06:00
Jason Wilder c2cfd63091 Avoid stat syscall when planning compactions
When the planner runs, it needs to determine if any files have tombstones.
The code to determine if a tombstone existed involved stating the .tombstone
file.  Since the planner runs very frequently when there are many shards, this
causes a lot of system calls that are unnecessary.

Instead, cache the results of the stats calls and only refresh them when we
haven't checked at least once or we write new tombstone data.

This also caches the results of the TSMReader.Stats call to avoid creating
garbage.
2016-09-24 15:53:28 -06:00
rw b86885c5cd Remove a few short-lived string allocs.
(*tsdb.Shard).validateSeriesAndFields uses fewer string allocs in some
hot spots.
2016-09-22 17:55:57 -07:00
Jason Wilder 39ade11944 Unload index before closing shard
When deleting a shard, the shard is locked and then removed from the
index.  Removal from the index can be slow if there are a lot of
series.  During this time, the shard is still expected to exist by
the meta store and tsdb store so stats collections, queries and writes
could all be run on this shard while it's locked.  This can cause everything
to lock up until the unindexing completes and the shard can be unlocked.

Fixes #7226
2016-09-22 11:16:45 -06:00
Jason Wilder d06b28992d Unload index before closing shard
When deleting a shard, the shard is locked and then removed from the
index.  Removal from the index can be slow if there are a lot of
series.  During this time, the shard is still expected to exist by
the meta store and tsdb store so stats collections, queries and writes
could all be run on this shard while it's locked.  This can cause everything
to lock up until the unindexing completes and the shard can be unlocked.

Fixes #7226
2016-09-16 12:01:50 -06:00
Edd Robinson ed41122ade Pre-allocate map for performance 2016-09-15 18:28:46 +01:00
Jonathan A. Sternberg 477d6231db Update source files to pass vet checks for go 1.7
The vet checks for some files did not pass for go 1.7. As part of a
preliminary start to making go 1.7 work with this software, go vet
should pass.

Also updated the gogo/protobuf dependency which fixed the code generator
to work with go 1.7 too. Ran `go generate` on the entire repository to
ensure every file was up to date.
2016-09-14 15:01:22 -05:00
Edd Robinson 2a99ef751d Emit fieldsCreated stat in shard measurement 2016-09-13 16:41:11 +01:00
Jonathan A. Sternberg 46508cb8c9 Fix engine tags in stats 2016-09-09 17:16:53 -05:00
Jason Wilder 95682faec2 Merge branch '1.0' into jw-merge-10 2016-09-08 09:00:51 -06:00
Edd Robinson 5023419adc Ensure ErrFieldTypeConflict value returned 2016-09-05 13:34:35 +01:00
Jason Wilder 1a35c0a3fc Fix neverending full compactions
The full compaction planner could return a plan that only included
one generation.  If this happened, a full compaction would run on that
generation producing just one generation again.  The planner would then
repeat the plan.

This could happen if there were two generations that were both over
the max TSM file size and the second one happened to be in level 3 or
lower.

When this situation occurs, one cpu is pegged running a full compaction
continuously and the disks become very busy basically rewriting the
same files over and over again.  This can eventually cause disk and CPU
saturation if it occurs with more than one shard.

Fixes #7074
2016-09-03 17:35:14 -06:00
Jason Wilder a6f6fda415 Fix DeleteSeries when multiple fields exists
The logic for determining whether a series key was already in
the set of TSM series was too restrictive.  It allowed only the first
field of a series to be added, leaving out all the remaining fields.
2016-08-31 20:53:10 -06:00
Jason Wilder 190537a557 Fix DeleteSeries when multiple fields exists
The logic for determining whether a series key was already in
the set of TSM series was too restrictive.  It allowed only the first
field of a series to be added, leaving out all the remaining fields.
2016-08-31 20:35:35 -06:00
Jonathan A. Sternberg dc2527ce86 Merge branch '1.0' 2016-08-31 14:45:57 -05:00
Jonathan A. Sternberg 964341eb20 Optimize queries that compare a tag value to an empty string
The behavior for querying tag values with an empty string was originally
fixed in #6283, but it also added a performance problem when the
cardinality of the tag was high. Since a call to `Union()` or `Reject()`
would happen for every series key and it would be called N times for N
cardinality, the comparisons against a blank string were unnecessarily
slow with large memory allocations.

This optimizes these queries so it doesn't use those methods anymore.
Those methods are still useful and used when combining AND and OR
clauses, but they aren't useful when finding the series ids for a single
clause. These methods were unnecessary anyway because the series ids for
the tags were unique and didn't have to be merged as a set.
2016-08-31 14:03:23 -05:00
Jonathan A. Sternberg f67558c2a7 Merge pull request #7236 from influxdata/js-7220-revert-limit-shard-concurrency
Revert "limit shard concurrency"
2016-08-29 13:41:46 -05:00
Jonathan A. Sternberg c05c7f6360 Revert "limit shard concurrency"
This reverts commit 6c7d56d4bc.
2016-08-29 12:39:52 -05:00
Jason Wilder 3d411371f2 Merge pull request #7233 from influxdata/jw-stats2
Write path stats
2016-08-29 10:15:23 -06:00
Jason Wilder d878d30d18 Fix shard write stats
* Rename *Fail to *Err for consistency with other metrics
* Use index Series count instead of separate counter
2016-08-29 09:46:11 -06:00
Jason Wilder e203323776 Add wal write success/error stats 2016-08-29 09:38:48 -06:00
Jason Wilder 83ca8c3867 Decrement cache memory stat when deleting series 2016-08-29 09:38:41 -06:00
Jason Wilder 03326f993f Add cache write success/error stats 2016-08-29 09:38:32 -06:00
Jason Wilder b31bf798f1 Fix runtime: goroutine stack exceeds 1000000000-byte limit
Fixes #7225
2016-08-29 09:26:48 -06:00
Jonathan A. Sternberg 8b234546a8 Merge pull request #7204 from influxdata/1.0
Merge 1.0 branch to master
2016-08-25 15:20:30 -05:00
Jonathan A. Sternberg 10029caf2f Support negative timestamps in the query engine
Negative timestamps are now supported. We also now refuse two
nanoseconds that are at the edge of the minimum time window. One of the
nanoseconds we do not accept is because we need MinInt64 to be used for
some internal comparisons in the TSM engine and it was causing an
underflow when we subtracted one from the minimum time. The second is so
we can have one minimum time that signifies the default minimum that
nobody can write to (so we can implicitly rewrite the timestamp on
aggregate queries) but still use the explicit timestamp if it is given
to us by the user. We aren't able to tell the difference between if the
user provided it or if it was implicit without those values being
different.

If the default minimum time is used with an aggregate query, we rewrite
the time to be the epoch for backwards compatibility since we believe
that's more important than supporting that extra nanosecond.
2016-08-25 12:52:41 -05:00
Ben Johnson a30f9b6c70 Merge pull request #7196 from benbjohnson/mmap-fix
Fix mmap dereferencing
2016-08-24 10:48:28 -06:00
Ben Johnson cc628a1097
Fix mmap dereferencing
Adds a missing dereference call to `Close()` as well as fixes
a tag copy issue.
2016-08-24 10:48:07 -06:00
Edd Robinson 6cafdbc604 Ensure we don't mutate provided statistics tags 2016-08-24 11:40:13 +01:00
Edd Robinson 90ff713f21 Fix base64 encoding issue in stats
Fixes #7177.
2016-08-22 15:21:31 +01:00
Ben Johnson 65536676a4 Merge pull request #7138 from benbjohnson/optimize-shard-open
Reduce memory allocations in index
2016-08-17 15:27:33 -06:00
Ben Johnson 8aa224b22d
reduce memory allocations in index
This commit changes the index to point to index data in the shards
instead of keeping it in-memory on the heap.
2016-08-16 14:09:00 -06:00
Jonathan A. Sternberg 6b5b24a3e3 Decrement number of measurements only once when deleting the last series from a measurement 2016-08-15 13:57:08 -05:00
Jonathan A. Sternberg 9621bee195 Drop time when used as a tag or field key
The "time" field and tags are unqueryable so we prevent those from being
written so we don't have unreadable data.
2016-08-10 10:02:01 -05:00
Ben Johnson 55b3e63ced
concurrent series limit
This commit fixes the `MaxSelectSeriesN` limit which was broken by
the implementation of lazy iterators. The setting previously limited
the total number of series but the new implementation limits the
concurrent number of series being processed.
2016-08-09 08:58:01 -06:00
Jason Wilder 0ea645642b Remove compaction assert that should not be there
This assert was not removed when the issue that caused the assert
to trigger was fixed in 0f5e994.

Fixes #7121
2016-08-08 09:59:45 -06:00
Jonathan A. Sternberg b98763a3d8 Merge pull request #7118 from influxdata/js-go-generate
go generate on every package to ensure they are generated with the correct dependency
2016-08-08 09:02:32 -05:00
David Norton 064db3c5b3 Merge pull request #7095 from influxdata/dgn-cardinality-limits
feat #6679: add series limit config setting
2016-08-05 16:34:25 -04:00
Jonathan A. Sternberg ed2f81357f go generate on every package to ensure they are generated with the correct dependency 2016-08-05 14:35:07 -05:00
Ben Johnson 6c7d56d4bc
limit shard concurrency
This commit limits queries to only process one shard at a time.
However, within a shard, multiple series can still be processed in
parallel. Shard iterators are lazily instantiated during query
execution to limit the amount of memory a given query uses.
2016-08-05 09:45:57 -06:00
Jason Wilder 19546faab3 Release cursor/iterator resources aggressively 2016-08-03 00:21:39 -06:00
Jason Wilder e8e6bc44a7 Remove defers in TSM reader read path 2016-08-02 16:39:45 -06:00
David Norton 0c4559722c feat #6679: add series limit config setting 2016-08-01 08:28:46 -04:00
Jason Wilder 5576e7fedb Simplifications 2016-07-28 20:25:37 -06:00
Jason Wilder 8367771d35 Fix go vet 2016-07-28 20:25:37 -06:00
Jason Wilder 030f1ef622 Include full path for tombstone files
The path info only contained the file name which caused tombstone
files to not be removed if there were queries running against
a file that was compacted.

This is now consistent with the TSMReader.Path which returns the
full path info.
2016-07-28 20:25:37 -06:00
Jason Wilder c3fda24cf9 Make sure all in-use files are tracked
A break caused only the first one to be tracked and all others would
leak as temp files that would not be removed until the server
restarted.
2016-07-28 20:25:37 -06:00
Jason Wilder c1a94e8861 Remove temp TSM files when disabling compactions
If they were left around, re-enabling them again could cause
future compactions to continuously fail.  A restart of the
server would clean them up correctly though.
2016-07-28 20:25:37 -06:00
Jason Wilder 602a2e80ce Ensure aux and cond cursors are closed when iterator is closed 2016-07-28 20:25:37 -06:00
Jason Wilder 5764a730d5 Prevent tombstoning series keys more than once
If there were multiple TSM files and a delete/drop was run,
we would write the deleted series to the tombstone file N
times for each file.  This occurred because FileStore.WalkKeys walks
every key in every TSM file which can return duplicate keys.

This issue caused TSM files to be much larger than they should be
and also caused large memory usage during the delete.
2016-07-28 20:25:36 -06:00
Jason Wilder ef8ecf0e90 Apply reload tombstones in batches
This keeps some memory bounds when reloading a TSM file's tombstones
so that the heap does not grow exceedingly fast and stay there
after the deletes are applied.
2016-07-28 20:25:36 -06:00
Jason Wilder 4436e65fb9 Apply deletes to TSM files concurrently 2016-07-28 20:25:36 -06:00
Jason Wilder a8c69e222a Use scanner for reading v1 tombstones
Use a bufio.Scanner to read v1 tombstones instead of reading in
the whole file and parsing it from memory.
2016-07-28 20:25:36 -06:00
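A minimal Go sketch of streaming a v1 tombstone file line-by-line with bufio.Scanner instead of reading the whole file into memory; the entry format and function names are assumptions for illustration:

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// applyV1Tombstones streams one tombstone entry (a series key) per line with a
// bufio.Scanner, so the whole file never has to be held in memory. In practice
// r would be an *os.File; a strings.Reader is used here for a runnable example.
func applyV1Tombstones(r io.Reader, apply func(key string)) error {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		if key := strings.TrimSpace(scanner.Text()); key != "" {
			apply(key)
		}
	}
	return scanner.Err()
}

func main() {
	data := "cpu,host=a\ncpu,host=b\n"
	_ = applyV1Tombstones(strings.NewReader(data), func(key string) {
		fmt.Println("tombstone:", key)
	})
}
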
Jason Wilder 7b8959f6f2 Apply tombstones iteratively at startup
Tombstone were read fully into memory at startup which could consume
a lot of RAM and OOM the process if there were a lot of deleted
series and many TSM files.

This now walks the tombstone file and iteratively applies the tombstone
which uses significantly less RAM.  This may be slightly slower in the
general case, but should scale better.
2016-07-28 20:25:36 -06:00
Jonathan A. Sternberg 86bd97f3b9 Switch SHOW MEASUREMENTS and SHOW TAG VALUES to directly access the tsdb.Store
The `SHOW MEASUREMENTS` and `SHOW TAG VALUES` cannot go through the
query engine to get the speed they need. They also only need access to
the database index and do not need access to specific shards. This
removes the query rewriting that was done to turn these two queries into
a select statement and reimplements them inside of the coordinator as an
interface on the TSDBStore.
2016-07-28 17:38:11 -05:00
Mark Rushakoff f34a7430e3 Fix length of (*DatabaseIndex).SeriesKeys()
Previously, it would return as many empty strings in the first half of
the slice as valid values at the end of the slice.
2016-07-27 16:07:39 -07:00
Jason Wilder 7c3d1aac68 Simplify purger.add logic 2016-07-26 13:02:08 -06:00
Jason Wilder cab84ae279 Prevent concurrent compactions from stepping on each other
Normally, compactions do not conflict on the files they are compacting.
If the full cold threshold is set very low, it can cause conflicts where
two compactions compact the same files.  The full compaction was the
only place this could happen as its planning is greedy.

To make this safer for concurrent execution, the compaction tracks which
files are current being compacted and prevents any new compactions from
starting if the file set overlaps.

Fixes #6595
2016-07-26 12:58:25 -06:00
Jason Wilder ded6e40d47 Remove lastPlanCheck var
This caused full compactions to not run while the server was running, but
after a restart they would run.
2016-07-26 12:58:25 -06:00
Jason Wilder 2f78c4ec83 Fix race when creating temp file
Using os.O_EXCL is safer than checking and then creating the file.
2016-07-26 12:58:25 -06:00
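A short Go sketch of the os.O_EXCL pattern mentioned above, which makes file creation fail if the file already exists instead of racing a separate existence check; the file name and permissions are illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// createTemp opens the file with os.O_EXCL so the call fails if the file
// already exists, avoiding the check-then-create race in which two goroutines
// both see the file as missing and both try to create it.
func createTemp(dir, name string) (*os.File, error) {
	return os.OpenFile(filepath.Join(dir, name), os.O_CREATE|os.O_EXCL|os.O_RDWR, 0666)
}

func main() {
	f, err := createTemp(os.TempDir(), "000001.tsm.tmp")
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println("created", f.Name())
}
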
Cory LaNou 063675b928 updates to make snappy compression tests work again 2016-07-22 14:33:20 -05:00
Cory LaNou 968d322d6d finish tsm file exporter 2016-07-21 17:20:51 -05:00
Jason Wilder fb5a143b08 Fix typos 2016-07-21 12:13:04 -06:00
Jason Wilder 13147efb24 Close underlying cursors when closing iterators
If a query is interrupted via kill query, the tsm files managed
by the file store purger would never get removeed because
KeyCursor.Close was never called.

KeyCursor.Close should always be called now.
2016-07-21 12:13:04 -06:00
Jason Wilder 822f409b31 Allow queries to complete before closing TSM files
If a query was running against a file being compacted, we close the file
and the query would end wherever it had read up to.  This could result
in queries that randomly lost data, but running them again showed the
full results.

We now use a reference counting approach and move the in-use files out
of the way in the filestore and allow the queries to complete against
the old tsm files.  The new files are installed and new queries will
use them.

Fixes #5501
2016-07-21 12:13:04 -06:00
Cory LaNou fd86670518 remove limiter from walkShards 2016-07-21 11:23:31 -05:00
Edd Robinson f37e726869 Add trace logging statements to tsdb 2016-07-21 11:14:29 +01:00
Edd Robinson 44231abcbd Add trace logger controlled via DataLoggingEnabled 2016-07-21 11:14:29 +01:00
Edd Robinson 217bd4de84 Disable trace logging by default 2016-07-21 11:14:29 +01:00
Edd Robinson 83cc580ff8 Tidy up logging 2016-07-21 11:14:29 +01:00
Mark Rushakoff 518bd3b565 Micro-optimize BooleanDecoder for 20% speedup
benchmark                          old ns/op     new ns/op     delta
BenchmarkBooleanDecoder_2048-4     9954          7846          -21.18%

benchmark                          old allocs     new allocs     delta
BenchmarkBooleanDecoder_2048-4     0              0              +0.00%

benchmark                          old bytes     new bytes     delta
BenchmarkBooleanDecoder_2048-4     0             0             +0.00%
2016-07-20 08:43:05 -07:00
Mark Rushakoff 523aea715a Protect against bounds errors in FloatDecoder 2016-07-19 15:59:27 -07:00
Mark Rushakoff e483689563 Protect against bounds errors in BooleanDecoder 2016-07-19 15:59:27 -07:00
Mark Rushakoff 35e3adc890 Protect against bounds errors in IntegerDecoder 2016-07-19 15:43:27 -07:00
Mark Rushakoff 42b35ca068 Protect against bounds errors in TimeDecoder 2016-07-19 15:43:27 -07:00
Mark Rushakoff be589a6760 Protect against bounds errors in StringDecoder 2016-07-19 15:43:27 -07:00
Mark Rushakoff 5b549ffdfe Handle bounds errors in UnpackBlock 2016-07-19 15:43:27 -07:00
Mark Rushakoff 39f12e376c Defend against some boundary errors in TSM reading 2016-07-19 15:43:27 -07:00
Mark Rushakoff 28f31b4a0c Add test cases to repro corruption panics 2016-07-19 15:36:17 -07:00
Jason Wilder c31f0c25b4 Fix duplicate series getting created
There was a race where the same series would get added to the in-memory
index for a measurement more than once.  This would result in the same
series being returned more than once during queries causing duplicate
results.  The issue was that we check for the series under the read
lock, but did not check again under the write lock where there was
a small window where the series could be added by another goroutine.

We now check for the series under the write lock.

Fixes #6946
2016-07-18 16:46:36 -06:00
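A minimal Go sketch of the check-under-read-lock, re-check-under-write-lock pattern this fix describes; the index type is a simplified stand-in for the in-memory measurement index:

package main

import (
	"fmt"
	"sync"
)

// index is a simplified in-memory measurement index; the real type holds more state.
type index struct {
	mu     sync.RWMutex
	series map[string]struct{}
}

// addSeriesIfNotExists checks under the read lock first (the cheap common
// case) and then re-checks under the write lock before inserting, closing the
// window in which another goroutine could add the same series between checks.
func (idx *index) addSeriesIfNotExists(key string) bool {
	idx.mu.RLock()
	_, ok := idx.series[key]
	idx.mu.RUnlock()
	if ok {
		return false
	}

	idx.mu.Lock()
	defer idx.mu.Unlock()
	if _, ok := idx.series[key]; ok { // re-check: may have been added concurrently
		return false
	}
	idx.series[key] = struct{}{}
	return true
}

func main() {
	idx := &index{series: make(map[string]struct{})}
	fmt.Println(idx.addSeriesIfNotExists("cpu,host=a")) // true
	fmt.Println(idx.addSeriesIfNotExists("cpu,host=a")) // false
}
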
Jason Wilder 757f31bd45 Fix panic:runtime error: invalid memory address or nil pointer dereference
github.com/influxdata/influxdb/tsdb.(*Shard).FieldDimensions(0xc820244000, 0xc821b70fb0, 0x1, 0x1, 0xc822b9cc00, 0xc822b9cc30, 0x0, 0x0)
    /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/shard.go:588 +0xa62
github.com/influxdata/influxdb/tsdb.(*shardIteratorCreator).FieldDimensions(0xc8202b6078, 0xc821b70fb0, 0x1, 0x1, 0xc822b9cbd0, 0x0, 0x0, 0x0)
    /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/shard.go:818 +0x53
github.com/influxdata/influxdb/influxql.IteratorCreators.FieldDimensions(0xc821b71250, 0x1, 0x1, 0xc821b70fb0, 0x1, 0x1, 0xc822b9cba0, 0xc822b9cbd0, 0x0, 0x0)
    /Users/jason/go/src/github.com/influxdata/influxdb/influxql/iterator.go:639 +0x15a
github.com/influxdata/influxdb/influxql.(*IteratorCreators).FieldDimensions(0xc822a32ae0, 0xc821b70fb0, 0x1, 0x1, 0x20, 0x18, 0x0, 0x0)
    <autogenerated>:163 +0xd3
2016-07-18 16:35:33 -06:00
Jonathan A. Sternberg 30efa2d922 Merge pull request #6989 from influxdata/js-6950-show-measurements-performance
Optimize SHOW MEASUREMENTS so it consults the database index directly
2016-07-18 15:23:17 -05:00
Jason Wilder b692ef4f48 Rename throttle package to limiter 2016-07-18 12:00:58 -06:00
Jonathan A. Sternberg 4121590b01 Optimize SHOW MEASUREMENTS so it consults the database index directly
SHOW MEASUREMENTS doesn't need to visit every shard in the open source
version since all of them contain the same database index.
2016-07-18 12:53:23 -05:00
Jason Wilder c2370b437b Limit in-flight wal writes/encodings
A slower disk can can cause excessive allocations to occur when
writing to the WAL because the slower encoding and compression occurs
before taking the write lock.  The encoding/compression grabs a large
byte slice from a pool and ultimately waits until it can acquire the
write lock.

This adds a throttle to limit how many inflight WAL writes can be queued
up to prevent OOMing the process with slower disks and heavy writes.
2016-07-17 23:53:12 -06:00
Jason Wilder 46fdcba6e3 Remove compaction enabled logging
Too verbose
2016-07-17 23:53:12 -06:00
Jason Wilder 2fa28ba1d3 Don't log error when compactions are aborted 2016-07-17 23:53:12 -06:00
Jason Wilder b48d88ce9e Abort running compactions when series are deleted
If a delete is issued while a compaction is running, a newly
deleted series could re-appear after the compaction completed. This
could occur if the compaction had already written the blocks for series
that were just deleted.  When the compaction completes, the newly
written tombstone files would be deleted, essentially undeleting the
series.
2016-07-17 23:53:12 -06:00
Jason Wilder cc4a668be5 Don't return statistic if engine is closed 2016-07-17 23:53:12 -06:00
Jason Wilder 6710c69aa5 Merge pull request #7015 from influxdata/jw-drop
Speed up delete/drop statements
2016-07-15 12:41:08 -06:00
Jason Wilder 21dbe7e854 Simplify throttle type 2016-07-15 12:14:25 -06:00
Jason Wilder d1556e3964 Fix missing read locks before filtering 2016-07-15 10:08:26 -06:00
Jason Wilder ff5d61d024 Speed up delete series
Reduce lock contention and process shards concurrently.
2016-07-14 17:31:34 -06:00
Jason Wilder 8f3ec3be43 Inline deleteShard
Only used by one caller now
2016-07-14 17:31:34 -06:00
Jason Wilder 78201e19d0 Refactor DeleteDatabase to use filter/walk funcs 2016-07-14 17:31:34 -06:00
Jason Wilder e0122efcf8 Speed up drop retention policy
Reduce the lock contention on tsdb.Store by taking a short lived
read-lock instead of a long write lock.  Also close shards in parallel
and drop the whole RP dir in bulk instead of each shard dir.
2016-07-14 17:31:34 -06:00
Jason Wilder 6d3d2f6fe9 Speed up drop measurement
Reduces the lock contention on the tsdb.Store by taking a short
read lock instead of a long write lock.  Also processes shards
in parallel instead of serially.
2016-07-14 17:31:29 -06:00
Jason Wilder 4254ad304c Merge pull request #6851 from influxdata/md-add-benchmarks
Add additional benchmarks for various schemas
2016-07-14 15:04:29 -06:00
Jason Wilder 0f5e994383 Fix panic in full compactions due to duplicate data in blocks
Due to a bug in compactions, it's possible some blocks may have duplicate
points stored.  If those blocks are decoded and re-compacted, an assertion
panic could trigger.

We now dedup those blocks if necessary to remove the duplicate points
and avoid the panic.
2016-07-14 11:32:36 -06:00
Jason Wilder 0264966f5c Add index optimize planning step
For larger datasets, it's possible for shards to get into a state where
many large, dense TSM files exist.  While the shard is still hot for
writes, full compactions will skip these files since they are already
fairly optimized and full compactions are expensive.  If the write volume
is large enough, the shard can accumulate lots of these files.  When
a file is in this state, its index can contain every series which
causes startup times to increase since each file must parse the full
set of series keys for every file.  If the number of series is high,
the index can be quite large, causing a large amount of disk IO at startup.

To fix this, an optimize compaction is run when a full compaction planning
step decides there is nothing to do.  The optimize compaction combines
and spreads the data and series keys across all files resulting in each
file containing the full series data for that shard and a subset of the
total set of keys in the shard.

This allows a shard to only store a series key once in the shard, reducing
storage size, and also allows a shard to only load each key once at startup.
2016-07-14 11:32:36 -06:00
Jason Wilder 5ee20e04a8 Fix compaction level planner
Large files created early in the leveled compactions could cause
a shard to get into a bad state.  This reworks the level planner
to handle those cases as well as splits large compactions up into
multiple groups to leverage more CPUs when possible.
2016-07-14 11:14:09 -06:00
Jonathan A. Sternberg 12a33fe0d3 Add stats and diagnostics to the TSM engine
Track the number of TSM files in the file store and keep engine
statistics related to the number of TSM compactions.
2016-07-07 19:35:55 -05:00
Jonathan A. Sternberg 837a9804cf Refactoring the monitor service to avoid expvar
Truncate the time interval output of the monitor service to be on even
time intervals rather than on every minute based on the start time. This
normalizes the output from the monitor service.
2016-07-07 11:13:58 -05:00
Jason Wilder 2f82d9a525 Truncate the slice when merging the caches 2016-07-05 12:12:21 -05:00
Jason Wilder 5aae28e14f Merge pull request #6922 from influxdata/jw-6829
Fix panic: runtime error: index out of range
2016-06-28 09:38:19 -06:00
Jason Wilder fdf0bac717 Fix panic: runtime error: index out of range
Fixes #6829
2016-06-27 18:50:48 -06:00
kun 77ed719bc1 delete redundant code in NewStore function 2016-06-24 17:14:00 +08:00
Michael Desa 517d8d5881 Move benchmarks beneath other NewSeries 2016-06-23 10:15:37 -07:00
Jason Wilder ca6bfac01a Fix out of order blocks returned during query
If there were blocks in later TSM files that were for overwritten
points or writes into the past, they could be returned more than
once or out of order causing the cursor values to be unsorted.

One effect of this is that graphs in Grafana would render with
the line going all over the place in spots.

This might also cause duplicate data to be returned.

Fixes #6738
2016-06-22 17:34:44 -06:00
Jonathan A. Sternberg 7bdcd669a8 Merge pull request #6879 from influxdata/js-prune-deadcode
Removing dead code from every package except influxql
2016-06-22 08:12:19 -05:00
Jonathan A. Sternberg 497db2a6d3 Removing dead code from every package except influxql
The tsdb package had a substantial amount of dead code related to the
old query engine still in there. It is no longer used, so it was removed
since it was left unmaintained. There is likely still more code that is
the same, but wasn't found as part of this code cleanup.

influxql has dead code show up because of the code generation so it is
not included in this pruning.
2016-06-20 22:41:07 -05:00
Jonathan A. Sternberg 8812bc8a93 Remove a double lock in the tsm1 index writer 2016-06-20 17:32:34 -05:00
Jonathan A. Sternberg 1d03151631 Remove FieldCodec from tsdb package
Updated `influx_inspect` to use the `FieldDimensions` method instead
(more reliable anyway). The `influx_tsm` program used its own vendored
copy of `FieldCodec` so it is not affected by this change. `FieldCodec`
was only used for the `b1` and `bz1` engines which were removed in 0.12,
but the code that created the field codec was never removed. This
limited the maximum number of fields to 255 even though that restriction
was removed with the `tsm1` engine.

Fixes #6869.
2016-06-19 21:38:43 -05:00
Jonathan A. Sternberg 6e205ce135 Set the condition cursor instead of aux iterator when creating a nil condition cursor
A copy/paste error had nil cursors destined for a condition cursor get
set to the auxiliary cursor instead. When the number of conditions
exceeded the number of auxiliary fields, this would result in a stack
trace in some situations. When the number of conditions was less than or
equal to the number of auxiliary fields, it means that an auxiliary
cursor may have been overwritten with a nil cursor accidentally and a
leak might have happened since it was never closed.

Fixes #6859.
2016-06-17 14:54:48 -05:00
Michael Desa 0c867e4b2c Fix benchmark test names
Previously the test names included an `s` for the name of a singular
component.
2016-06-16 08:45:36 -07:00
Michael Desa 9dfaa182a7 Add additional benchmarks for various schemas
Anecdotally, the relationship between memory consumption and series
cardinality was thought to be exponential. I suspect that this is false.
The intent of the added benchmarks is to verify my suspicion. Eventually
these benchmarks will run nightly to serve as a basis to evaluate
the memory performance in a controlled environment.

https://github.com/influxdata/docs.influxdata.com/issues/392
2016-06-15 14:54:14 -07:00
Ben Johnson 7d4bea7153
add node id to execution options
This commit changes the `ExecutionOptions` and `SelectOptions` to
allow a `NodeID` for specifying an exact node to query against.
2016-06-10 09:20:44 -06:00
Jason Wilder ac6addd0b5 Ensure restore doesn't write broken files
Restore would try to open the shard even if there was an error.  If there
was an error, the files written are very likely to be partially written
and they can cause the server to panic.

To prevent a shard from trying to open broken files, we now write to
a temp file and rename it to the actual name only after fully writing
and fsyncing the file.
2016-06-07 14:36:46 -06:00
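A small Go sketch of the write-temp, fsync, then rename sequence described above, so a partially written file never appears under the final name; error handling is simplified and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeFileAtomic writes data to a temporary file in the same directory,
// fsyncs it, and only then renames it to the final name, so a partially
// written file never appears under the real filename.
func writeFileAtomic(path string, data []byte) error {
	tmp := path + ".tmp"
	f, err := os.Create(tmp)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		os.Remove(tmp)
		return err
	}
	if err := f.Sync(); err != nil {
		f.Close()
		os.Remove(tmp)
		return err
	}
	if err := f.Close(); err != nil {
		os.Remove(tmp)
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	path := filepath.Join(os.TempDir(), "restore-example.tsm")
	if err := writeFileAtomic(path, []byte("shard data")); err != nil {
		fmt.Println("restore write failed:", err)
		return
	}
	defer os.Remove(path)
	fmt.Println("wrote", path)
}
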
Jonathan A. Sternberg fe3f0d0e3d Remove the DatabaseIndex method from TSDBStore interface
The TSDBStore interface needs to also allow for remote TSDBStore but the
DatabaseIndex is only for a local TSDB instance. Moved the optimized
SHOW TAG VALUES path to do a typecast to the LocalTSDBStore struct
instead of always attempting to use the optimized version.

If the TSDBStore is not local and does not have the DatabaseIndex, it
will default to using the distributed query instead.
2016-06-07 11:34:34 -05:00
Ben Johnson bf3c22689b Merge pull request #6792 from benbjohnson/show-tag-values
Optimize SHOW TAG VALUES
2016-06-06 16:00:12 -06:00
Ben Johnson 1b94cd2686
optimize SHOW TAG VALUES
This commit optimizes `SHOW TAG VALUES` so that it avoids the
`SELECT` query engine execution and iterator creation. There
are also optimizations to reduce individual memory allocations
and to reduce in-memory heap size by only operating on one
measurement at a time.

Execution time has been reduced to approximately 900ms for
500,000 rows. This is about 2µs per row. Of this time,
approximately 1µs is spent retrieving and sorting the row
and 1µs is spent encoding into JSON and writing to the
response body.
2016-06-06 15:50:53 -06:00
Jason Wilder 838a29cca8 Fix race in cache
If cache.Deduplicate is called while writes are in-flight on the cache, a data race
could occur.

WARNING: DATA RACE
Write by goroutine 15:
  runtime.mapassign1()
      /usr/local/go/src/runtime/hashmap.go:429 +0x0
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Cache).entry()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:482 +0x27e
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Cache).WriteMulti()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:207 +0x3b2
  github.com/influxdata/influxdb/tsdb/engine/tsm1.TestCache_Deduplicate_Concurrent.func1()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache_test.go:421 +0x73

Previous read by goroutine 16:
  runtime.mapiterinit()
      /usr/local/go/src/runtime/hashmap.go:607 +0x0
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Cache).Deduplicate()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:272 +0x7c
  github.com/influxdata/influxdb/tsdb/engine/tsm1.TestCache_Deduplicate_Concurrent.func2()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache_test.go:429 +0x69

Goroutine 15 (running) created at:
  github.com/influxdata/influxdb/tsdb/engine/tsm1.TestCache_Deduplicate_Concurrent()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache_test.go:423 +0x3f2
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:473 +0xdc

Goroutine 16 (finished) created at:
  github.com/influxdata/influxdb/tsdb/engine/tsm1.TestCache_Deduplicate_Concurrent()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache_test.go:431 +0x43b
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:473 +0xdc
2016-06-06 15:45:01 -06:00
Jason Wilder bc76048371 Fix panic in cache.DeleteRange
Deleting keys that did not exist in the cache could cause a panic
because the entry returned would be nil and was not checked.
2016-06-06 14:48:53 -06:00
Jason Wilder cd336095ca Merge pull request #6768 from influxdata/jw-disable-open
Allow creating shards in a disabled state
2016-06-02 08:34:51 -06:00
Jason Wilder 579923d95f Fix sporadic write failures with influx_stress
This Unlock was moved which seems to create a deadlock situation
sometimes under high write load.  This deadlock causes writes to
fail with timeouts.
2016-06-01 17:25:47 -06:00
Jason Wilder a74ea4cbf4 Allow creating shards in a disabled state
For restoring a shard, we need to be able to have the shard open,
but disabled.  It was racy to open it and then disable it separately
since writes/queries could occur in between that time.
2016-06-01 16:17:18 -06:00
Jason Wilder d0023dee5d Convert inline errors to constants 2016-05-31 10:51:54 -06:00
Jason Wilder 1ff8ecf4fb Add ability to disable shards
Disabling a shard causes all writes and queries to a shard to return
an error.  This also disables compactions for the shard.
2016-05-31 10:51:54 -06:00
Edd Robinson baf5d505e6 Merge pull request #6754 from influxdata/er-fs
Prevent ReadFloatBlock from panicking when no values
2016-05-31 16:41:29 +01:00
Edd Robinson 003c30989a Check for no values 2016-05-31 16:28:17 +01:00
rw dcec206f2e Dedup `.RUnlock` between two conditionals. 2016-05-29 10:20:58 -07:00
rw 1b160d1af0 Low-contention path for pre-existing cache entries.
This change appears to increase bulk ingestion throughput by 2x-3x in
multiprocessor environments.
2016-05-28 23:50:11 -07:00
Jason Wilder dd58101061 Merge pull request #6743 from influxdata/jw-parse-key
Optimize series key parsing on startup
2016-05-27 15:00:42 -06:00
Jason Wilder ff1447202c Reduce lock contention in Measurement.AddSeries 2016-05-27 10:30:08 -06:00
Jason Wilder 11959005f4 Switch backup to use shard.Snapshot
This switches the backup shard call to use the shard Snapshot, which
internally creates a snapshot by hardlinking all of the TSM and
tombstone files instead.  This reduces the time that the FileStore
is locked and will allow larger shards to be backed up more easily.
2016-05-27 09:30:25 -06:00
David Norton 381059a55c Merge pull request #6736 from influxdata/benchmark-write-points-allocs
Benchmarks to count allocs in WritePoints.
2016-05-27 10:13:17 -04:00
Alex Russell-Saw 7edb14bffd assign engine to shard after engine is initialized 2016-05-27 13:45:16 +01:00
Edd Robinson 6a7f9527e3 Revert d2672a3 and 1e0a4e9 2016-05-27 10:34:14 +01:00
rw 92e7fec5cf Benchmarks to count allocs in WritePoints. 2016-05-26 17:13:14 -07:00
Edd Robinson d2672a3280 Update Go version 2016-05-26 15:26:09 +01:00
Edd Robinson 1e0a4e9119 Move fields under mutex 2016-05-26 12:00:46 +01:00
Jason Wilder d6661060a3 Merge pull request #6719 from shurcooL/fix-tombstone-open-error-check
tsdb/engine/tsm1: Check os.Open error before using file.
2016-05-25 12:11:26 -06:00
Jason Wilder a77dd4fe4c Merge pull request #6725 from influxdata/jw-tsm-query
Fix pathological TSM query case
2016-05-25 11:23:38 -06:00
Jason Wilder 7d50970631 Fix continuous compaction edge case
The level planner would keep including the same TSM files to be
recompacted even if they were already quite compacted and split
across several TSM files.

Fixes #6683
2016-05-25 10:36:24 -06:00
Jason Wilder 0b481ff627 Fix pathological TSM query case
This fixes a pathological query condition caused by problematic
structuring of TSM files based on how points were written.  The
condition can occur when there are multiple TSM files and a large
number of points are written into the past.  The earlier existing
TSM files must also have points in the past and close to the present
causing their time range to eclipse the later files.

When this condition occurs, some queries can spend an excessive amount
of time merging all the overlapping blocks.

The fix was to constrain the window of overlapping blocks based on
the first one we ran into.  There was also a simple case in the Merge
where we could skip the binary search path and just append the two
inputs.
2016-05-25 09:14:17 -06:00
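The append fast path mentioned above can be sketched as follows, assuming both inputs are sorted by time; the `Value` struct and field names are simplified stand-ins:

    package tsm1

    // Value is a simplified timestamped value.
    type Value struct {
        UnixNano int64
        Float    float64
    }

    // Merge combines two time-sorted slices of values. When the inputs do not
    // overlap in time it skips the merge entirely and appends one to the other.
    func Merge(a, b []Value) []Value {
        if len(a) == 0 {
            return b
        }
        if len(b) == 0 {
            return a
        }
        // Fast path: a ends before b begins, so nothing needs interleaving.
        if a[len(a)-1].UnixNano < b[0].UnixNano {
            return append(a, b...)
        }
        // Fast path: b ends before a begins.
        if b[len(b)-1].UnixNano < a[0].UnixNano {
            return append(b, a...)
        }
        return mergeOverlapping(a, b)
    }

    // mergeOverlapping performs a standard two-way merge for overlapping ranges.
    func mergeOverlapping(a, b []Value) []Value {
        out := make([]Value, 0, len(a)+len(b))
        for len(a) > 0 && len(b) > 0 {
            if a[0].UnixNano <= b[0].UnixNano {
                out = append(out, a[0])
                a = a[1:]
            } else {
                out = append(out, b[0])
                b = b[1:]
            }
        }
        out = append(out, a...)
        return append(out, b...)
    }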
Dmitri Shuralyov c03ebf896b tsdb/engine/tsm1: Check os.Open error before using file.
os.Open is documented as:

> Open opens the named file for reading. If successful, methods on
> the returned file can be used for reading;

That suggests the file's methods should only be called if opening
was successful. The original code would defer f.Close() right after
os.Open, before ensuring that err is nil, so f.Close() would run
even if os.Open did not return successfully.

Apply https://github.com/golang/go/wiki/CodeReviewComments#indent-error-flow
suggestion to keep the normal path at minimal indentation, and indent
the error handling code instead. This improves code readability.
2016-05-24 21:08:35 -07:00
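In code, the pattern being fixed looks roughly like this (the function names are illustrative):

    package tsm1

    import "os"

    // Before: Close is deferred before the error check, so it runs even when
    // Open failed and no usable *os.File was returned.
    func readFileBad(path string) error {
        f, err := os.Open(path)
        defer f.Close()
        if err != nil {
            return err
        }
        // ... read from f ...
        return nil
    }

    // After: handle the error first and keep the normal path at minimal indentation.
    func readFileGood(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        // ... read from f ...
        return nil
    }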
Jonathan A. Sternberg 32e42b93ae Merge pull request #6705 from influxdata/js-6701-duplicate-points-with-select
Filter out sources that do not match the shard database/retention policy
2016-05-24 09:48:31 -04:00
Jonathan A. Sternberg 5e7e0bd19b Filter out sources that do not match the shard database/retention policy
If you use a statement like this:

    SELECT value FROM one..cpu, two..cpu

It will access both the `one` and `two` databases as if you had selected
the `cpu` measurement twice for both of them. Updated the `tsdb.Shard`
create iterator function to filter out any sources that do not apply to
that shard so this duplication doesn't happen.

Fixes #6701.
2016-05-23 17:05:33 -04:00
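A small sketch of the filtering idea, with `Source` as a simplified stand-in for the influxql source type and the helper name as an assumption:

    package tsdb

    // Source identifies a measurement within a database and retention policy.
    type Source struct {
        Database        string
        RetentionPolicy string
        Measurement     string
    }

    // filterShardSources keeps only the sources that belong to the shard's
    // database and retention policy so that, e.g., `one..cpu` is not read
    // from a shard that belongs to database `two`.
    func filterShardSources(shardDB, shardRP string, sources []Source) []Source {
        filtered := make([]Source, 0, len(sources))
        for _, src := range sources {
            if src.Database == shardDB && src.RetentionPolicy == shardRP {
                filtered = append(filtered, src)
            }
        }
        return filtered
    }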
Jason Wilder f48a106860 Optimized timestamp run-length decoding
Removes the up-front allocation of decoded values and returns them
as needed.
2016-05-23 14:05:25 -06:00
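Decoding run-length-encoded timestamps on demand might look like the sketch below; the encoding shape (first value, fixed delta, count) is a simplification of the actual tsm1 format:

    package tsm1

    // rleTimestampDecoder yields run-length-encoded timestamps one at a time
    // instead of materializing the whole run into a slice up front.
    type rleTimestampDecoder struct {
        next  int64
        delta int64
        n     int
    }

    func newRLETimestampDecoder(first, delta int64, count int) *rleTimestampDecoder {
        return &rleTimestampDecoder{next: first, delta: delta, n: count}
    }

    // Next reports whether another timestamp remains in the run.
    func (d *rleTimestampDecoder) Next() bool { return d.n > 0 }

    // Read returns the next timestamp and advances the decoder.
    func (d *rleTimestampDecoder) Read() int64 {
        ts := d.next
        d.next += d.delta
        d.n--
        return ts
    }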
Edd Robinson 0b2a806789 Merge pull request #6690 from influxdata/jw-shard-size
Fix panic in shard.DiskSize()
2016-05-20 15:29:53 +01:00
Edd Robinson 40732a35d0 Merge pull request #6660 from influxdata/er-vet
Fix vet issues
2016-05-20 11:12:25 +01:00
Jason Wilder d324777bfc Fix panic in shard.DiskSize()
If the wal or data dir is not accessible (possibly deleted), the
DiskSize walk funcs could panic because they did not check the
error passed in.
2016-05-19 23:19:44 -06:00
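The defensive check described above, sketched with filepath.Walk; the helper name is illustrative:

    package tsdb

    import (
        "os"
        "path/filepath"
    )

    // diskSize sums file sizes under dir. It checks the error passed to the
    // walk func before touching the FileInfo, which may be nil when the wal
    // or data dir has been deleted out from under us.
    func diskSize(dir string) (int64, error) {
        var size int64
        err := filepath.Walk(dir, func(_ string, fi os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !fi.IsDir() {
                size += fi.Size()
            }
            return nil
        })
        return size, err
    }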
Jonathan A. Sternberg 5621ccc2ce Remove limit optimization when using an aggregate
The limit optimization was put into the wrong place and caused only part
of the shard to be read when a limit was used. The optimization is
possible, but requires a bit of refactoring to the code here so the call
iterator is created per series before being handed to the limit iterator.

Fixes #6661.
2016-05-19 10:29:38 -04:00
Jason Wilder 4c089a56f4 Fix read tombstones: EOF
Due to a bug in TSM tombstone files, it was possible to create
empty tombstone files.  At startup, reading the empty tombstone file
would error out and the TSM file would not load.

Instead, treat it as an empty v1 file so the TSM file can load
correctly.

Fixes #6641
2016-05-18 23:29:25 -06:00
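One way to sketch that behaviour: treat a zero-length tombstone file as an empty v1 tombstone list instead of a read error. The function shape here is an assumption for illustration:

    package tsm1

    import "os"

    // readTombstones returns the tombstoned keys stored in path. An empty file
    // is treated as an empty v1 tombstone list so the owning TSM file still loads.
    func readTombstones(path string) ([]string, error) {
        fi, err := os.Stat(path)
        if os.IsNotExist(err) {
            return nil, nil
        } else if err != nil {
            return nil, err
        }
        if fi.Size() == 0 {
            return nil, nil // empty tombstone file: nothing has been deleted
        }
        // ... decode the tombstone entries (elided in this sketch) ...
        return nil, nil
    }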
Jason Wilder 7fb7faaaca Fix points already read from being returned more than once
If there were duplicate points in multiple blocks, we would correctly
dedup the points and mark the regions of the blocks we've read.
Unfortunately, we were not excluding the already-read points as the cursor
moved to points in the later blocks, which could cause points to be
returned twice incorrectly.

Fixes #6611
2016-05-18 17:21:10 -06:00
Jason Wilder 9f89420b4c Merge pull request #6653 from influxdata/jw-compact-fix
Compaction fixes
2016-05-18 16:10:10 -06:00
Jason Wilder 121195a865 Merge pull request #6665 from influxdata/jw-series-stats
Reload series count stat at startup
2016-05-18 15:58:15 -06:00
Edd Robinson 09dc48b847 Merge pull request #6664 from influxdata/jw-shard-size
Store shard size on disk statistic
2016-05-18 22:39:12 +01:00
Jason Wilder 209dd005c5 Merge pull request #6627 from influxdata/jw-deadlock
Fix possible deadlock when queries and delete series run concurrently
2016-05-18 15:30:37 -06:00
Jason Wilder f2bcf9d9ab Code review fixes 2016-05-18 15:25:56 -06:00
Jason Wilder d32ad26d27 Fix data not getting reloaded
The optimization to speed up shard loading had the side effect of
skipping adding series to the index that already exist.  The skipping
was in the wrong location and also skipped the shard's measurementFields
index, which is required in order to query that series in the shard.
2016-05-18 15:25:56 -06:00
Jason Wilder e859141b75 Speed up tests
Switched the max keys test to write the same int64 value repeatedly so RLE
would kick in and the file size would be smaller (84MB vs 3.8MB).

Removed the chunking test which was skipped because the code will
not downsize a block into smaller chunks now.

Skip MaxKeys tests in various environments because it needs to
write too much data to run reliably.
2016-05-18 15:25:56 -06:00
Jason Wilder eff71cbe23 Rollover to new TSM file when max blocks exceeded
Fixes #6406
2016-05-18 15:25:55 -06:00
Jason Wilder 8fda621d8b Fix memory spike when compacting overwritten points
If a large series contains a point that is overwritten, the compactor
would load the whole series into RAM during a full compaction, which
could cause very large RAM spikes and OOMs.

The change reworks the compactor to merge blocks more incrementally
similar to the fix done in #6556.

Fixes #6557
2016-05-18 15:25:55 -06:00
Jason Wilder f1ab89561a Reload series count stat at startup 2016-05-18 15:21:57 -06:00
Edd Robinson 28ad7c687b Add const for interval 2016-05-18 22:14:59 +01:00
Jason Wilder cbc551f9dc Collect shard size stats 2016-05-18 22:14:59 +01:00
Jonathan A. Sternberg 946968ba23 Fixing panic in SHOW FIELD KEYS caused by 733a17d
The list of field keys in the index may have differed from the field
keys in the actual shard. Fixing `SHOW FIELD KEYS` so it relies only on
the shard rather than the index.

Fixes #6659.
2016-05-18 14:43:50 -04:00
Edd Robinson f78e67d09c Fix concurrent map access panic 2016-05-18 17:56:50 +01:00
Edd Robinson f680ab0f0d Fix vet issues 2016-05-18 13:34:11 +01:00
Joe LeGasse af432e7d12 Fix loop variable reuse in database close
Fixes #6650
2016-05-17 11:25:39 -04:00
Jonathan A. Sternberg 42cdaf0365 Merge pull request #6529 from influxdata/js-6519-select-tag-key-specifier
Support cast syntax for selecting a specific type
2016-05-16 12:30:14 -04:00
Jonathan A. Sternberg 23f6a706bb Support cast syntax for selecting a specific type
Casting syntax is done with the PostgreSQL syntax `field1::float` to
specify which type should be used when selecting a field. You can also
do `field1::field` or `tag1::tag` to specify that a field or tag should
be selected.

This makes it possible to select a tag when a field key and a tag key
conflict with each other in a measurement. It also means it's possible
to choose a field with a specific type if multiple shards disagree. If
no types are given, the same ordering for how a type is chosen is used
to determine which type to return.

The FieldDimensions method has been updated to return the data type for
the fields that get returned. The SeriesKeys function has also been
removed since it is no longer needed. SeriesKeys was originally used for
the fill iterator, but then expanded to be used by auxiliary iterators
for determining the channel iterator types. The fill iterator doesn't
need it anymore and the auxiliary types are better served by
FieldDimensions implementing that functionality, so SeriesKeys is no
longer needed.

Fixes #6519.
2016-05-16 12:08:29 -04:00
Jason Wilder ce141eae37 Merge pull request #6637 from influxdata/jw-revert-compact
Revert "Fix memory spike when compacting overwritten points"
2016-05-16 09:46:24 -06:00
Jason Wilder 23fc9ff748 Revert "Fix memory spike when compacting overwritten points"
This reverts commit d99c5e26f6.
2016-05-16 09:30:34 -06:00
Jonathan A. Sternberg a17f3d960a SHOW TAG VALUES accepts != and !~ in WHERE clause
Fixes #6607.
2016-05-16 08:51:09 -04:00
Jason Wilder 57d4becaec Fix possible deadlock when queries and delete series run concurrently
This lock showed up in a deadlock on systems running queries and
delete series across a large dataset.  Queries should not need to
take a write lock on the tsdb.Store.
2016-05-13 17:04:12 -06:00
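A minimal sketch of the locking discipline implied above: the query path takes only a read lock on the store, leaving the write lock for operations such as delete series. `Store` and `Shard` are simplified stand-ins:

    package tsdb

    import "sync"

    type Shard struct{}

    type Store struct {
        mu     sync.RWMutex
        shards map[uint64]*Shard
    }

    // shardsForQuery looks up shards under a read lock only; taking the write
    // lock on the query path is unnecessary and widens the window for
    // deadlocks with concurrent delete-series operations.
    func (s *Store) shardsForQuery(ids []uint64) []*Shard {
        s.mu.RLock()
        defer s.mu.RUnlock()
        out := make([]*Shard, 0, len(ids))
        for _, id := range ids {
            if sh, ok := s.shards[id]; ok {
                out = append(out, sh)
            }
        }
        return out
    }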
Jason Wilder 5b6f3afefa Limit concurrent shard loading to the number of cores available 2016-05-13 15:41:32 -06:00
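Bounding shard-load concurrency to the core count can be done with a simple semaphore channel, as in this sketch; the `loadShard` callback is a placeholder:

    package tsdb

    import (
        "runtime"
        "sync"
    )

    // loadShards opens every shard path with at most GOMAXPROCS loads in flight.
    func loadShards(paths []string, loadShard func(path string) error) {
        limit := make(chan struct{}, runtime.GOMAXPROCS(0))
        var wg sync.WaitGroup
        for _, p := range paths {
            wg.Add(1)
            go func(path string) {
                defer wg.Done()
                limit <- struct{}{}        // acquire a slot
                defer func() { <-limit }() // release it
                _ = loadShard(path)        // error handling elided in this sketch
            }(p)
        }
        wg.Wait()
    }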
Jason Wilder 11871958c6 Merge pull request #6618 from influxdata/jw-shard-load
Optimize shard index loading
2016-05-13 14:16:17 -06:00
Jason Wilder 9e54adc719 Speed up drop database
Drop database was closing and deleting each shard dir individually and
serially.  It would then delete the empty database dirs.

This changes drop database to close all shards in parallel and run
one os.RemoveAll to remove everything under the db dir, which is more
efficient.

This also reworked the locking to avoid locking the tsdb.Store for
long periods of time.  That can cause queries and writes for other
databases to block as well.
2016-05-13 10:26:28 -06:00
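The reworked flow can be sketched as: close the shards concurrently, then delete the whole database directory in one call. The types and function name here are illustrative:

    package tsdb

    import (
        "os"
        "sync"
    )

    type Shard struct{}

    func (s *Shard) Close() error { return nil } // stand-in

    // dropDatabase closes all of the database's shards in parallel and then
    // removes everything under its directory with a single os.RemoveAll.
    func dropDatabase(dbDir string, shards []*Shard) error {
        var wg sync.WaitGroup
        for _, sh := range shards {
            wg.Add(1)
            go func(sh *Shard) {
                defer wg.Done()
                _ = sh.Close() // errors elided in this sketch
            }(sh)
        }
        wg.Wait()
        return os.RemoveAll(dbDir)
    }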
Jason Wilder 0dbd4893da Optimize shard index loading
On data sets with many series and potentially large series keys,
the cost of parsing the key and re-indexing can be high.

Loading the TSM keys into the index was being done repeatedly for
series that were already indexed by an earlier TSM file.  This was
wasted work and slowed down shard loading.

Parsing the key was also inefficient and allocated a new string
slice.  This was simplified to remove that allocation.
2016-05-12 14:02:42 -06:00
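Splitting a series key without allocating a slice can be as simple as the sketch below, assuming keys of the form `measurement,tag=value,...`; the exact key format and helper name are assumptions:

    package tsdb

    import "strings"

    // seriesKeyMeasurement splits a series key such as "cpu,host=a,region=west"
    // into its measurement and raw tag portion without the slice allocation
    // that strings.SplitN would incur.
    func seriesKeyMeasurement(key string) (measurement, tags string) {
        if i := strings.Index(key, ","); i >= 0 {
            return key[:i], key[i+1:]
        }
        return key, ""
    }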
Ben Johnson 7afb73aa99 Merge pull request #6598 from benbjohnson/parallelize-planning
Parallelize query planning
2016-05-12 09:00:58 -06:00
Jonathan A. Sternberg 89346bb618 Merge pull request #6600 from influxdata/0.13
Merge 0.13 release candidate back to master
2016-05-11 13:04:26 -04:00
Ben Johnson 668bae57df
parallelize query planning
This commit changes the `tsm1.Engine` to create individual series
iterators in batches so that it can be parallelized. Iterators
are combined at the end so they can be redistributed to the
parallelized merge iterator.
2016-05-11 10:38:11 -06:00
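Building the per-series iterators in batches and combining them afterwards might look like this sketch; `Iterator` and the `create` callback are stand-ins for the real influxql/tsm1 types:

    package tsm1

    import "sync"

    // Iterator is a stand-in for an influxql series iterator.
    type Iterator interface{}

    // createIterators builds one iterator per series key, spreading the work
    // across goroutines in fixed-size batches and combining the results at the end.
    func createIterators(keys []string, create func(key string) Iterator, batchSize int) []Iterator {
        out := make([]Iterator, len(keys))
        var wg sync.WaitGroup
        for start := 0; start < len(keys); start += batchSize {
            end := start + batchSize
            if end > len(keys) {
                end = len(keys)
            }
            wg.Add(1)
            go func(start, end int) {
                defer wg.Done()
                for i := start; i < end; i++ {
                    out[i] = create(keys[i]) // each goroutine writes a disjoint range
                }
            }(start, end)
        }
        wg.Wait()
        return out
    }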
Cory LaNou c32906a366 Merge pull request #6593 from influxdata/cjl-copyshard
create shard snapshot
2016-05-10 20:01:59 -05:00
Jonathan A. Sternberg 8353b0c20f Merge pull request #6592 from influxdata/js-3451-show-field-keys-with-field-type
Update SHOW FIELD KEYS to return the field type with the field key
2016-05-10 14:13:17 -04:00
Jason Wilder d8490f1170 Merge pull request #6587 from influxdata/jw-validate-fields
Fix for merge values
2016-05-10 11:56:07 -06:00
Jonathan A. Sternberg 733a17d9e9 Update SHOW FIELD KEYS to return the field type with the field key
Fixes #3451.
2016-05-10 13:16:57 -04:00
Cory LaNou f415cf89ad wip 2016-05-10 11:01:03 -05:00
Jason Wilder 9b86bfea2a Merge pull request #6582 from eleme/fix_engine_cache_size
fix cache size of engine
2016-05-10 09:01:03 -06:00
Jason Wilder 8839cabd41 Add benchmark for Merge 2016-05-10 08:39:55 -06:00