Commit Graph

1686 Commits (1975940f767a72746a3df0c6047751356e97032a)

Author SHA1 Message Date
Ben Johnson 1975940f76
intermediate compaction commit 2017-05-23 08:42:25 -06:00
Ben Johnson 79edc0979c
Add temporary debugging stats for offset lookups. 2017-05-23 08:41:31 -06:00
Ben Johnson 48a06432df
Add tsi1 bloom filter. 2017-05-23 08:41:31 -06:00
Ben Johnson f3e08c5871
Delta encode tag and measurement block series data. 2017-05-23 08:41:31 -06:00
Ben Johnson 6f58149052
Increase tsi compaction factor. 2017-05-23 08:40:26 -06:00
Stuart Carnie 5c5bea2baa move Measurement and Series to inmem package 2017-05-19 08:17:09 -07:00
Jason Wilder 9445ccbad3 Expose shard meta info on Shard 2017-05-16 11:18:02 -06:00
Stuart Carnie c863923e68 cache MarshalSize 2017-05-12 14:05:25 -06:00
Stuart Carnie 0151afe31c check size and allocate once 2017-05-12 14:05:25 -06:00
Stuart Carnie 096d6f65b4 explicit sizes 2017-05-12 14:05:24 -06:00
Jason Wilder 4d002bb370 Limit concurrent compactions within a shard
This changes full compactions within a shard to run sequentially
instead of running all the compaction groups in parallel.  Normally,
there is only 1 full compaction group to run.  At times, there could
be several which causes instability if they are all running concurrently
as they tie up a cpu for long periods of time.

Level compactions are also capped to a max of 4 concurrently running for each level
in a shard.  This prevents sudden spikes in CPU and disk usage due to a large backlog
of tsm files at a given level.
2017-05-12 14:05:24 -06:00
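The cap described in this commit is the classic counting-semaphore pattern. A minimal Go sketch under that assumption, with illustrative names rather than the actual influxdb identifiers:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// levelLimit caps how many compaction groups may run concurrently
// for a single level within a shard (4, per the commit above).
const levelLimit = 4

func main() {
	sem := make(chan struct{}, levelLimit) // counting semaphore
	var wg sync.WaitGroup

	for group := 0; group < 10; group++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			sem <- struct{}{}        // blocks once levelLimit compactions are running
			defer func() { <-sem }() // release the slot when done

			fmt.Printf("compacting group %d\n", g)
			time.Sleep(10 * time.Millisecond) // stand-in for the compaction work
		}(group)
	}
	wg.Wait()
}
```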
Jason Wilder 2cac46ebbc Convert usage of strings to []byte
Measurement name and field were converted between []byte and string
repetitively, causing lots of garbage.  This switches the code to use
[]byte in the write path.
2017-05-12 14:05:19 -06:00
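A quick sketch of why round-tripping allocates: converting []byte to string (or back) copies the data, so a hot write path that accepts strings re-copies every name that arrives as raw bytes. The function names below are hypothetical:

```go
package main

import "fmt"

// writeString forces a copy at every call site that starts
// from raw []byte wire data.
func writeString(name string) { _ = name }

// writeBytes lets the wire bytes flow through unconverted.
func writeBytes(name []byte) { _ = name }

func main() {
	raw := []byte("cpu,host=a value=1") // bytes straight off the wire

	writeString(string(raw)) // allocates: copies raw into a new string
	writeBytes(raw)          // no allocation: shares the backing array

	fmt.Println("done")
}
```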
Jason Wilder 503d41a08f Add LimitedBytePool for wal buffers
This pool was previously a pool.Bytes to avoid repetitive allocations.
It was recently switched to a sync.Pool because pool.Bytes held onto
very large buffers at times which were never released.  sync.Pool is
showing up in allocation profiles quite frequently.

This switches to a new pool that limits how many buffers are held
as well as the max size of each buffer.  This provides better bounds
on allocations.
2017-05-11 11:27:00 -06:00
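A minimal sketch of a pool bounded in both buffer count and per-buffer size, assuming a channel-backed design; the type and method names are illustrative, not the actual influxdb pool API:

```go
package main

import "fmt"

// LimitedBytes bounds both the number of pooled buffers and the
// maximum size of any buffer it retains.
type LimitedBytes struct {
	pool    chan []byte
	maxSize int
}

func NewLimitedBytes(capacity, maxSize int) *LimitedBytes {
	return &LimitedBytes{pool: make(chan []byte, capacity), maxSize: maxSize}
}

// Get returns a pooled buffer of at least sz bytes, or allocates one.
func (p *LimitedBytes) Get(sz int) []byte {
	select {
	case b := <-p.pool:
		if cap(b) >= sz {
			return b[:sz]
		}
	default:
	}
	return make([]byte, sz)
}

// Put returns a buffer to the pool, dropping it if it is too large
// or the pool is already full; this bounds retained memory.
func (p *LimitedBytes) Put(b []byte) {
	if cap(b) > p.maxSize {
		return // too large: let the GC reclaim it
	}
	select {
	case p.pool <- b:
	default: // pool full: drop the buffer
	}
}

func main() {
	p := NewLimitedBytes(2, 1<<20)
	buf := p.Get(4096)
	p.Put(buf)
	fmt.Println(len(p.pool), "buffer pooled")
}
```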
Jason Wilder e17be9f4ba Merge pull request #8377 from influxdata/jw-encoders
Speed up time encoding/decoding
2017-05-11 10:38:27 -06:00
Joe LeGasse 087d9f4670 tsm: fixed test to not require sorted backup tarball 2017-05-11 12:00:19 -04:00
Jason Wilder b150a6293c Merge pull request #8380 from influxdata/jw-wal-buffer
Use buffer writer for wal segments
2017-05-11 08:34:44 -06:00
Jason Wilder b81ac21bcb Merge pull request #8378 from influxdata/jw-snapshot-disable
Don't disable snapshots when snapshot compactions are disabled
2017-05-10 12:00:27 -06:00
Jason Wilder e102fcca9c Use buffer writer for wal segments 2017-05-10 11:42:32 -06:00
Jason Wilder 39a829c1ae Speed up time encoding/decoding
This speeds up time encoding and decoding by skipping the divisor
scaling when scaling by 1.  Since division and multiplication are expensive
CPU operations and scaling by 1 has no effect, the extra scaling just slows
encoding and decoding down.
2017-05-10 11:12:35 -06:00
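A sketch of the skip, assuming the encoder divides timestamps by a common divisor before delta encoding and multiplies on decode (function name illustrative):

```go
package main

import "fmt"

// scale divides each timestamp by div. When div == 1 the division
// would be a per-value no-op, so it is skipped entirely.
func scale(ts []int64, div int64) []int64 {
	if div == 1 {
		return ts // fast path: no divisions performed
	}
	out := make([]int64, len(ts))
	for i, t := range ts {
		out[i] = t / div
	}
	return out
}

func main() {
	ts := []int64{1000, 2000, 3000}
	fmt.Println(scale(ts, 1))    // unchanged: [1000 2000 3000]
	fmt.Println(scale(ts, 1000)) // scaled: [1 2 3]
}
```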
Jason Wilder 4e3e707abc Fix packed time encoded benchmark 2017-05-10 10:35:44 -06:00
Jason Wilder e6f31c38b5 Merge pull request #8372 from influxdata/jw-tombstone-range
Fix deletes triggering unnecessary compactions
2017-05-08 16:52:59 -06:00
Jason Wilder 29c2b1958e Fix deletes triggering unnecessary compactions
Tombstone files would be written to all TSM files even if the deleted
keys or time range did not exist in the TSM file.  This had the side
effect of causing shards to get recompacted back to the same state. If
many shards or large numbers of TSM files existed, disk usage and CPU
utilization would spike, causing issues.

This prevents tombstones from being written for TSM files that could not
possibly contain the series keys being deleted or if the deleted time
range is outside the range of the file.
2017-05-08 14:52:28 -06:00
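A sketch of the overlap check, assuming each TSM file exposes its min/max key and min/max time; the type and function are hypothetical stand-ins:

```go
package main

import "fmt"

// tsmFile stands in for the metadata a TSM reader exposes.
type tsmFile struct {
	minKey, maxKey   string
	minTime, maxTime int64
}

// needsTombstone reports whether deleting key over [minT, maxT]
// could possibly touch this file. If not, no tombstone is written,
// the file is unchanged, and no recompaction is triggered.
func needsTombstone(f tsmFile, key string, minT, maxT int64) bool {
	if key < f.minKey || key > f.maxKey {
		return false // key outside the file's key range
	}
	if maxT < f.minTime || minT > f.maxTime {
		return false // delete range outside the file's time range
	}
	return true
}

func main() {
	f := tsmFile{minKey: "cpu", maxKey: "mem", minTime: 0, maxTime: 100}
	fmt.Println(needsTombstone(f, "disk", 0, 50))   // true: overlaps
	fmt.Println(needsTombstone(f, "zzz", 0, 50))    // false: key out of range
	fmt.Println(needsTombstone(f, "cpu", 200, 300)) // false: time out of range
}
```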
Jason Wilder 9374c4f513 Reduce allocations when monitoring shards
When monitoring shards, a slice of measurements is allocated for
each shard.  With many shards and measurements, these allocations
can be large.  Since inmem shards share the same index, we only
need to do this once since the resulting slices are all the same.
This reduces memory usage when monitoring shard cardinality.
2017-05-08 13:34:40 -06:00
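A sketch of the deduplication: since inmem shards share one index, compute the slice once per distinct index pointer rather than once per shard (types are stand-ins):

```go
package main

import "fmt"

type index struct{ db string } // stand-in for an inmem index

// measurements stands in for the expensive per-shard allocation.
func measurements(idx *index) []string {
	return []string{"cpu", "mem", "disk"}
}

func main() {
	shared := &index{db: "db0"}
	shards := []*index{shared, shared, shared} // shards sharing one index

	// Allocate once per distinct index, not once per shard.
	seen := make(map[*index][]string)
	for _, idx := range shards {
		if _, ok := seen[idx]; !ok {
			seen[idx] = measurements(idx)
		}
	}
	fmt.Println(len(seen), "allocation(s) for", len(shards), "shards")
}
```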
Jason Wilder 00bdf62b83 Make sure shard is ready before returning index type
Shards can be created before they are opened and not have an index
set up yet.  This can cause a panic if IndexType is called.
2017-05-08 12:48:35 -06:00
Jason Wilder 041262af0e Fix race in shard
The engine was accessed outside of an RLock, which can cause a race when
monitoring goroutines access the shard while it's closed/closing.
2017-05-08 12:37:18 -06:00
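A sketch of the fix pattern: read the engine only while holding the shard's RLock, so a concurrent close (which takes the write lock and clears the engine) cannot race. Field and method names are stand-ins, not the actual tsdb.Shard fields:

```go
package main

import (
	"fmt"
	"sync"
)

type engine struct{} // stand-in for the shard's storage engine

type shard struct {
	mu     sync.RWMutex
	engine *engine // nil once the shard is closed
}

// DiskSize accesses the engine under the RLock, returning an error
// instead of racing if the shard is closed or closing.
func (s *shard) DiskSize() (int64, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if s.engine == nil {
		return 0, fmt.Errorf("shard is closed")
	}
	return 0, nil // stand-in for the real engine query
}

func main() {
	s := &shard{engine: &engine{}}
	fmt.Println(s.DiskSize())
}
```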
Ben Johnson 489c89bea4
Add tsi support tooling. 2017-05-08 11:00:15 -06:00
Jason Wilder c0c6ad6880 Don't disable snapshots when snapshot compactions are disabled
Snapshot compactions can be disabled independently of the snapshotting
capability.  Previously, disabling snapshot compactions also disabled
snapshots, which prevented taking backups of those shards.
2017-05-05 14:15:45 -06:00
Jason Wilder 73ddd4787b Fix race in SeriesN and CreateSeriesIfNotExists 2017-05-04 14:40:50 -06:00
Jason Wilder fc34d30038 Uses SeriesN instead of copying sketches
Avoids some extra allocations.
2017-05-04 10:12:38 -06:00
Jason Wilder bc639c5982 Make disableLevelCompactions lighter weight
Since this is called more frequently now, the cleanup func was invoked
quite a bit, which makes several syscalls per shard.  This should only
be called the first time compactions are disabled.
2017-05-04 09:56:15 -06:00
Jason Wilder 7371f1067b Fix deadlock in Index.ForEachMeasurementTagKey
Index.ForEachMeasurementTagKey held an RLock while calling the fn.
If the fn made another call into the index that acquired an RLock
after another goroutine had tried to acquire a Lock, it would deadlock.
2017-05-03 22:48:10 -06:00
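The usual fix is to copy what is needed under the RLock, release it, and only then invoke the callback, since Go's RWMutex blocks new readers once a writer is waiting. A sketch with illustrative types:

```go
package main

import (
	"fmt"
	"sync"
)

type index struct {
	mu      sync.RWMutex
	tagKeys []string
}

// ForEachMeasurementTagKey snapshots the keys and releases the RLock
// before calling fn, so fn may safely re-enter the index even while
// a writer is queued on mu.
func (i *index) ForEachMeasurementTagKey(fn func(key string) error) error {
	i.mu.RLock()
	keys := make([]string, len(i.tagKeys))
	copy(keys, i.tagKeys)
	i.mu.RUnlock() // released before calling back into user code

	for _, k := range keys {
		if err := fn(k); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	idx := &index{tagKeys: []string{"host", "region"}}
	idx.ForEachMeasurementTagKey(func(k string) error {
		fmt.Println(k)
		return nil
	})
}
```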
Jason Wilder b4ea523910 Include snapshot size in the total cache size
This was causing a shard to appear idle when in fact a snapshot compaction
was running.  If the timing was right, compactions would be disabled and
the snapshot compaction would be aborted.
2017-05-03 16:31:58 -06:00
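A sketch of the accounting change, assuming the cache tracks live bytes and in-flight snapshot bytes separately (field names are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

type cache struct {
	mu           sync.RWMutex
	size         uint64 // live cache data
	snapshotSize uint64 // data currently being snapshotted to disk
}

// Size includes in-flight snapshot data so a shard running a snapshot
// compaction is never mistaken for an idle (cold) shard.
func (c *cache) Size() uint64 {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.size + c.snapshotSize
}

func main() {
	c := &cache{size: 0, snapshotSize: 1 << 20}
	fmt.Println(c.Size() > 0) // true: the shard still looks busy
}
```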
Jason Wilder 88848a9426 Remove per shard monitor goroutine
The monitor goroutine ran for each shard and updated disk stats
as well as logged cardinality warnings.  This goroutine has been
removed by making the disk stats more lightweight and callable
directly from Statistics and moving the logging to the tsdb.Store.  The
latter allows one goroutine to handle all shards.
2017-05-03 16:31:57 -06:00
Jason Wilder f87fd7c7ed Stop background compaction goroutines when shard is cold
Each shard has a number of goroutines for compacting different levels
of TSM files.  When a shard goes cold and is fully compacted, these
goroutines are still running.

This change will stop background shard goroutines when the shard goes
cold and start them back up if new writes arrive.
2017-05-03 16:31:57 -06:00
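A sketch of the start/stop mechanics using a done channel and a WaitGroup; this is illustrative, not the actual tsdb.Shard implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type shard struct {
	mu   sync.Mutex
	done chan struct{} // closed to stop the compaction goroutines
	wg   sync.WaitGroup
}

// startCompactions launches the background compaction loop; called
// when the shard opens or when new writes arrive on a cold shard.
func (s *shard) startCompactions() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.done != nil {
		return // already running
	}
	s.done = make(chan struct{})
	s.wg.Add(1)
	go func(done chan struct{}) {
		defer s.wg.Done()
		for {
			select {
			case <-done:
				return // shard went cold: stop compacting
			case <-time.After(10 * time.Millisecond):
				// stand-in for one compaction planning/run pass
			}
		}
	}(s.done)
}

// stopCompactions signals the goroutines and waits for them to exit.
func (s *shard) stopCompactions() {
	s.mu.Lock()
	if s.done == nil {
		s.mu.Unlock()
		return
	}
	close(s.done)
	s.done = nil
	s.mu.Unlock()
	s.wg.Wait()
}

func main() {
	s := &shard{}
	s.startCompactions() // writes arrived
	s.stopCompactions()  // shard is cold and fully compacted
	fmt.Println("compactions stopped")
}
```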
Jason Wilder 3d1c0cd981 Don't return compaction plans for files already part of a plan
The compactor prevents the same file from being compacted by different
compaction runs, but it can result in confusing warnings in the logs.

This adds compaction plan tracking to the planner so that files are
only part of one plan at a given time.
2017-05-03 16:31:57 -06:00
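A sketch of the tracking: the planner keeps a set of files claimed by active plans and releases them when the compaction finishes or aborts (names illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

type planner struct {
	mu     sync.Mutex
	inPlan map[string]struct{} // TSM files claimed by an active plan
}

// Plan returns the candidate files not already part of a plan and
// claims them, so a file is in at most one plan at a given time.
func (p *planner) Plan(candidates []string) []string {
	p.mu.Lock()
	defer p.mu.Unlock()
	var group []string
	for _, f := range candidates {
		if _, claimed := p.inPlan[f]; claimed {
			continue // another plan already owns this file
		}
		p.inPlan[f] = struct{}{}
		group = append(group, f)
	}
	return group
}

// Release frees files once their compaction completes or aborts.
func (p *planner) Release(files []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, f := range files {
		delete(p.inPlan, f)
	}
}

func main() {
	p := &planner{inPlan: make(map[string]struct{})}
	fmt.Println(p.Plan([]string{"01.tsm", "02.tsm"})) // claims both
	fmt.Println(p.Plan([]string{"02.tsm", "03.tsm"})) // only 03.tsm
}
```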
Jason Wilder 8fc9853ed8 Add max-concurrent-compactions limit
This limit allows the number of concurrent level and full compactions
to be throttled.  Snapshot compactions are not affected by this limit
as they need to run continuously.

This limit can be used to control how much CPU is consumed by compactions.
The default is to limit to the number of CPUs available.
2017-05-03 16:31:57 -06:00
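A sketch of a store-wide limiter shared by every shard's level and full compactions, defaulting to the CPU count; the type here is illustrative rather than the actual influxdb limiter:

```go
package main

import (
	"fmt"
	"runtime"
)

// limiter is a counting semaphore shared across all shards.
// Snapshot compactions bypass it, since they must run continuously.
type limiter chan struct{}

func newLimiter(n int) limiter {
	if n <= 0 {
		n = runtime.NumCPU() // default: one slot per available CPU
	}
	return make(limiter, n)
}

func (l limiter) Take()    { l <- struct{}{} }
func (l limiter) Release() { <-l }

func main() {
	lim := newLimiter(0) // 0 selects the default
	lim.Take()           // acquire a compaction slot
	fmt.Println("running 1 of", cap(lim), "allowed compactions")
	lim.Release()
}
```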
Jason Wilder 80fef4af4a Enable shards after loading
Compactions are enabled as soon as the shard is opened.  This can
slow down startup or cause the system to spike in CPU usage at startup
if many shards need to be compacted.

This now delays compactions until after the shards are loaded.
2017-05-03 16:31:57 -06:00
Jason Wilder 02e22f4a00 Fix deadlock in Measurement
The lazy sorting of series caused a deadlock since it cannot take
a Lock when a caller may have already acquired an RLock.

filters should be called without any locks, as the function already
acquires locks as needed.
2017-05-03 13:49:56 -06:00
Jason Wilder 3c130cd39c Expose TSMWriter.Flush
Allows flushing the writer so we don't always need to close and
re-open the file handle.
2017-04-28 14:00:50 -06:00
Jason Wilder 141f0d71cd Update index when importing files 2017-04-28 14:00:45 -06:00
Jason Wilder a76146e34a Add Store.Import capability
This allows the contents of a backup to be imported into a shard without
requiring the whole shard to be replaced.
2017-04-28 13:30:46 -06:00
Jason Wilder 3839fe34ea Remove FileStore.Add/Remove
Can use Replace which handles files in-use and stats correctly.
2017-04-28 13:20:55 -06:00
Jason Wilder 137d0c0d09 Rename WAL.WritePoints to WAL.WriteMulti
To match Cache.WriteMulti
2017-04-28 13:20:55 -06:00
Jason Wilder 28422f2fec Use consistent receiver var name for Value types 2017-04-28 13:20:55 -06:00
Jason Wilder 1bc4936336 Export Reader.ReadBytes 2017-04-28 13:20:55 -06:00
Stuart Carnie b2d2976466 update reason messages 2017-04-28 11:21:57 -07:00
Stuart Carnie 8097e817f6 prefix partial write errors with `partial write:`
NOTE: parser errors (via http API) are also transformed into
PartialWriteError
2017-04-28 11:00:14 -07:00
Ben Johnson aa64c908d0 Merge pull request #8314 from benbjohnson/tsi-doc
Add TSI documentation
2017-04-24 10:58:31 -06:00
Ben Johnson ba7108f94e
Add TSI documentation. 2017-04-21 14:45:03 -06:00
Jason Wilder d88604f6f2 Move repetitive loop checks outside of values loop 2017-04-20 13:45:04 -06:00