Commit Graph

169 Commits (db/wait-timeout-utility)

Author SHA1 Message Date
Jason Wilder 70817350b7 Ensure temp index files are cleaned up on error 2017-10-03 10:48:14 -06:00
Jason Wilder ae821f4e2d Rework compaction scheduling
This changes the compaction scheduling to better utilize the available
cores.  Previously, each level was planned in its own goroutine
and would kick off a number of compaction groups.  The problem with this
model was that if there were 4 groups, and 3 completed quickly, the planning
would be blocked for that level until the last group finished.  If the compactions
at the prior level were running more quickly, a large backlog could accumulate.

This now moves the planning to a single goroutine that plans each level in
succession and starts as many groups as it can.  When one group finishes,
the planning will start the next group for the level.
2017-10-03 10:48:13 -06:00
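
A minimal Go sketch of this scheduling pattern, assuming a hypothetical
planLevel helper rather than the actual planner API: one goroutine plans
level after level, and a semaphore slot freed by a finishing group lets
the next group start immediately.

    package sketch

    import "sync"

    // planLevel is a hypothetical stand-in that returns the runnable
    // compaction groups planned for a level.
    func planLevel(level int) []func() { return nil }

    // planAndRun is the single planning goroutine.  A slow group in one
    // level no longer blocks planning for the other levels; it only
    // holds one semaphore slot.
    func planAndRun(levels []int, maxGroups int, stop <-chan struct{}) {
        sem := make(chan struct{}, maxGroups)
        var wg sync.WaitGroup
        for _, level := range levels {
            for _, run := range planLevel(level) {
                select {
                case <-stop:
                    wg.Wait()
                    return
                case sem <- struct{}{}: // blocks only while every worker is busy
                }
                wg.Add(1)
                go func(run func()) {
                    defer wg.Done()
                    defer func() { <-sem }() // free a slot so planning can continue
                    run()
                }(run)
            }
        }
        wg.Wait()
    }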
Jason Wilder 1610ae5727 Don't return tsm files part of a compaction plan 2017-10-03 10:48:13 -06:00
Jason Wilder 122a74c692 Use synchronous IO for WAL and TSM writing
The fsyncs due to large writes when writing to TSM files and the
WAL can eventually cause large pauses.  Since we already buffer
writes, using synchronous IO reduces fsync latency by ensuring
the individual writes hit disk.  This spreads the latency
across multiple writes more evenly.
2017-09-25 12:44:57 -06:00
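
A sketch of the synchronous-IO idea (path and permissions illustrative):
with O_SYNC, each already-buffered write is durable when it returns, so
the fsync cost is paid incrementally instead of in one large pause.

    package sketch

    import "os"

    // openSync opens a WAL or TSM file for synchronous writes.  Each
    // Write returns only after the data reaches disk, spreading the
    // fsync latency across the individual writes.
    func openSync(path string) (*os.File, error) {
        return os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_SYNC, 0666)
    }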
Jason Wilder db204f3eb7 Default concurrent compactions to 50% of available cores 2017-09-21 12:48:11 -06:00
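
A sketch of how such a default could be computed (the function name is
illustrative):

    package sketch

    import "runtime"

    // defaultConcurrentCompactions caps compactions at half the usable
    // cores, leaving headroom for writes and queries.
    func defaultConcurrentCompactions() int {
        n := runtime.GOMAXPROCS(0) / 2
        if n < 1 {
            n = 1
        }
        return n
    }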
Jason Wilder 796de3dcea Reduce encoder pool checkout contention
With higher cardinalities, the encoder pools were becoming a bottleneck.
This changes the snapshot compactions to check out one encoder of each
type and re-use it while writing the snapshots, as opposed to repeatedly
checking it out and in.
2017-09-19 15:27:26 -06:00
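
A sketch of the checkout-once pattern with sync.Pool; the encoder type
and encode callback are illustrative, not the tsm1 types.

    package sketch

    import "sync"

    type encoder struct{ buf []byte }

    var encoderPool = sync.Pool{New: func() interface{} { return &encoder{} }}

    // writeSnapshot checks one encoder out for the whole snapshot and
    // re-uses it per key, instead of a contended Get/Put pair per key.
    func writeSnapshot(keys [][]byte, encode func(*encoder, []byte)) {
        enc := encoderPool.Get().(*encoder)
        defer encoderPool.Put(enc)
        for _, k := range keys {
            encode(enc, k)
        }
    }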
Jason Wilder 391a6288c6 Write parallel snapshot for higher cardinalities 2017-09-19 15:27:26 -06:00
Jason Wilder ddeba2c86b Split large snapshots and write concurrently 2017-09-19 15:27:25 -06:00
Jason Wilder 91eb9de341 Use existing TSMReader from file store during compactions
Compactions would create their own TSMReaders for simplicity. With
very high cardinality compactions, creating the reader and indirectIndex
can start to use a significant amount of memory.

This changes the compactions to use a reader that is already allocated
and managed by the FileStore.
2017-09-11 15:29:25 -06:00
Jason Wilder a8d9eeef36 Reduce lock contention when deleting high cardinality series
Deleting high cardinality series could take a very long time, cause
write timeouts, and even deadlock the process.  This fixes these
issues by changing the approach used to clean up the indexes and
by reducing lock contention.

The prior approach deleted each series and updated every index (inmem)
during the delete.  This was very slow and kept the index locked
while items in a slice were removed one by one.  This has been changed
to mark series as deleted and then rebuild the index asynchronously, which
speeds up the process.

There was also a deadlock that could occur when deleting the field set.
Deleting the field set held a write lock, and the function it invoked under
the lock could try to take a read lock on the field set, which would then
deadlock.  This approach was also very slow and caused write timeouts.
It now uses a faster approach that checks for the existence of the measurement
in the cache and file store, which does not take write locks.
2017-09-07 11:36:02 -06:00
Jason Wilder e265d150be Fix leaking tmp file when large compaction aborted
If a large compaction was running and was aborted, it could leave
some tmp files around for files that it had fully written.  The current
active file was cleaned up, but already completed ones would not be.  This
would occur when a TSM file needed to roll over due to size.
2017-08-21 17:04:57 -06:00
Jason Wilder 61b13eb12b Fix partiallyRead logic
The partiallyRead func didn't account for the initial values and would
return true for blocks that had not been read at all.  This forced a
slower path during compactions where a block was decoded when
it could have been merged as-is without decoding.  This caused compactions
to consume more CPU and run slower at times.
2017-08-14 16:44:32 -06:00
Jason Wilder 778000435a Convert all keys from string to []byte in TSM engine
This switches all the interfaces that take a string series key to
take a []byte.  This eliminates many small allocations where we
converted between the two repeatedly.  Eventually, this change should
propagate further up the stack.
2017-07-28 11:00:50 -06:00
Jason Wilder 18a02d50d7 Interrupt in progress TSM compactions
When snapshots and compactions are disabled, the check to see if
the compaction should be aborted only occurs in between writing
TSM files.  If a large compaction is running, it might take
a while for the current file to finish writing, causing long delays.

This now interrupts compactions while iterating over the blocks to
write which allows them to abort immediately.
2017-07-27 15:58:56 -06:00
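
A sketch of per-block interruption, assuming a simplified block loop in
place of the real iterator:

    package sketch

    import "errors"

    var errCompactionAborted = errors.New("compaction aborted")

    // writeBlocks polls the interrupt channel between blocks, so a
    // disable request aborts mid-file rather than after the current
    // TSM file completes.
    func writeBlocks(blocks [][]byte, interrupt <-chan struct{}, write func([]byte) error) error {
        for _, b := range blocks {
            select {
            case <-interrupt:
                return errCompactionAborted
            default:
            }
            if err := write(b); err != nil {
                return err
            }
        }
        return nil
    }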
Stuart Carnie eec80692c4 Taught tsm1 storage engine how to read and write uint64 values
* introduced UnsignedValue type
  * leveraged existing int64 compression algorithms (RLE, Simple 8B)
* tsm and WAL can read and write UnsignedValue
* compaction is aware of UnsignedValue
* unsigned support to model, cursors and write points

NOTE: there is no support to create unsigned points, as the line
protocol has not been modified.
2017-07-24 09:03:22 -07:00
Edd Robinson 101af89987 Update CHANGELOG 2017-07-05 16:35:41 +01:00
Edd Robinson 0748d28986 Ensure tmp files cleaned up when compaction disabled 2017-07-04 20:04:23 +01:00
Jason Wilder 29e4287fd2 Prevent masking root errors when compactions are in progress
The root error when creating a tmp file while writing a snapshot
was hidden, making it difficult to determine why snapshots were
failing.
2017-05-23 12:09:36 -06:00
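
A sketch of the idea (names and message text illustrative): wrap the
root cause into the returned error rather than replacing it.

    package sketch

    import (
        "fmt"
        "os"
    )

    func createSnapshotTmp(path string) (*os.File, error) {
        f, err := os.Create(path + ".tmp")
        if err != nil {
            // Keep the root cause visible instead of masking it
            // with a generic message.
            return nil, fmt.Errorf("error creating temp file: %v", err)
        }
        return f, nil
    }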
Jason Wilder 4e582f297a Fix race in findGenerations
It was possible for findGenerations to get stuck returning
no files even when generations existed on disk.
2017-05-23 12:05:47 -06:00
Jason Wilder 1833475c09 Fix TSM tmp files leaking
TMP files could leak when compactions failed for various reasons.  They
were also being deleted inadvertently when compactions were disabled, causing
other errors to be reported in the logs.
2017-05-22 14:51:18 -06:00
Jason Wilder 4d002bb370 Limit concurrent compactions within a shard
This changes full compactions within a shard to run sequentially
instead of running all the compaction groups in parallel.  Normally,
there is only 1 full compaction group to run.  At times, there could
be several, which causes instability if they are all running concurrently,
as they tie up a CPU for long periods of time.

Level compactions are also capped to a max of 4 concurrently running for each level
in a shard.  This prevents sudden spikes in CPU and disk usage due to a large backlog
of tsm files at a given level.
2017-05-12 14:05:24 -06:00
Jason Wilder f87fd7c7ed Stop background compaction goroutines when shard is cold
Each shard has a number of goroutines for compacting different levels
of TSM files.  When a shard goes cold and is fully compacted, these
goroutines are still running.

This change will stop background shard goroutines when the shard goes
cold and start them back up if new writes arrive.
2017-05-03 16:31:57 -06:00
Jason Wilder 3d1c0cd981 Don't return compaction plans for files already part of a plan
The compactor prevents the same file from being compacted by different
compaction runs, but it can result in warning errors in the logs that
are confusing.

This adds compaction plan tracking to the planner so that files are
only part of one plan at a given time.
2017-05-03 16:31:57 -06:00
Jason Wilder 2f7d4995b4 Use typed values to avoid allocations
This switches compactions to use typed values (FloatValues) instead of the
generic Values type.  It avoids a bunch of allocations where each value
must be converted from a specific type to an interface{}.
2017-03-09 16:27:07 -07:00
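
A sketch of the difference with illustrative types: the generic form
boxes every element behind an interface (one allocation each), while
the typed slice keeps values flat in memory.

    package sketch

    // Generic form: each value sits behind an interface, so building a
    // slice of them allocates per element.
    type Value interface {
        UnixNano() int64
    }

    // Typed form: a flat slice of concrete structs; iterating and
    // merging allocate nothing per element.
    type FloatValue struct {
        T int64
        V float64
    }

    type FloatValues []FloatValue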
Jason Wilder 78b7815c49 Add block type for BlockIterator 2017-03-09 09:16:59 -07:00
Jason Wilder 675d7c9d65 Merge branch '1.2' into jw-merge12 2017-03-06 11:09:05 -07:00
Jason Wilder eab012ef61 Fix points missing after compaction
If blocks containing overlapping ranges of time were partially
recombined, it was possible for some points to get dropped
during compactions.  This occurred because the window of time of
the points we need to merge did not account for the partial blocks
created from a prior merge.

Fixes #8084
2017-03-06 10:17:11 -07:00
Jason Wilder 784a851742 Release cpu during compactions 2017-01-31 17:04:36 -07:00
Edd Robinson 292b30b82b Fix subtle bugs and remove dead code from tsdb 2017-01-17 09:47:34 -08:00
Jason Wilder 06a8fd6ca2 Simplifications and cleanup 2017-01-12 09:55:38 -07:00
Jason Wilder c433ff331f Encode snapshots concurrently
The CacheKeyIterator (used for snapshot compactions) iterated over
each key and serially encoded the values for that key as the TSM
file was written.  With many series, this can be slow and will only
use 1 CPU core even if more are available.

This changes it so that the key space is split amongst a number of
goroutines that start encoding all keys in parallel to improve
throughput.
2017-01-11 17:54:27 -07:00
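
A sketch of splitting the key space across goroutines (helper names
illustrative):

    package sketch

    import "sync"

    // encodeAll splits keys into n contiguous chunks and encodes each
    // chunk on its own goroutine instead of serially on one core.
    func encodeAll(keys [][]byte, n int, encode func([]byte)) {
        if n < 1 {
            n = 1
        }
        chunk := (len(keys) + n - 1) / n
        var wg sync.WaitGroup
        for i := 0; i < len(keys); i += chunk {
            end := i + chunk
            if end > len(keys) {
                end = len(keys)
            }
            wg.Add(1)
            go func(part [][]byte) {
                defer wg.Done()
                for _, k := range part {
                    encode(k)
                }
            }(keys[i:end])
        }
        wg.Wait()
    }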
Mark Rushakoff a135906b43 Merge pull request #7747 from influxdata/mr-lint-cleanup
Miscellaneous lint cleanup
2017-01-10 08:22:00 -08:00
Mark Rushakoff 07b87f2630 Miscellaneous lint cleanup 2017-01-03 09:47:32 -08:00
Mark Rushakoff 41415cf2fb Update godoc for tsm1 package 2017-01-02 07:30:18 -08:00
Jason Wilder 73b8f52ca0 Cache results of findGenerations
This allocates quite a bit and it's called multiple times per
second per shard.  The generations don't change until a compaction
has occurred, so most of the time it is re-calculating the same thing
and creating garbage.
2016-11-15 16:13:55 -07:00
Jason Wilder 47b8049e48 Update comment 2016-10-18 14:14:53 -06:00
Jason Wilder ed7975874f Rename Enabled -> Enable 2016-10-18 12:22:00 -06:00
Jason Wilder f254b4f3ae Allow snapshot compactions during deletes
If a delete takes a long time to process while writes to the
shard are occurring, it was possible for the cache to fill up
and writes to be rejected.  This occurred because we disabled
all compactions while writing the tombstone file to prevent deleted
data from re-appearing after a compaction completed.

Instead, we only disable the level compactions and allow snapshot
compactions to continue.  Snapshots already handle deleted data
with the cache and wal.

Fixes #7161
2016-10-18 12:14:51 -06:00
Jason Wilder 1755f20d2a Revert re-using byte slices during compactions
This was causing a "fatal error: fault" panic when packing blocks.
2016-09-27 23:41:06 -06:00
Jason Wilder 4b5d989905 Merge pull request #7335 from influxdata/jw-tsm-syscalls
Avoid stat syscall when planning compactions
2016-09-26 12:30:05 -06:00
Jason Wilder 730ceeea46 Re-used allocated byte slices during compactions 2016-09-26 12:19:15 -06:00
Jason Wilder c2cfd63091 Avoid stat syscall when planning compactions
When the planner runs, it needs to determine if any files have tombstones.
The code to determine if a tombstone existed involved calling stat on the
.tombstone file.  Since the planner runs very frequently when there are many
shards, this caused a lot of unnecessary system calls.

Instead, cache the results of the stat calls and only refresh them when we
haven't checked at least once or we write new tombstone data.

This also caches the results of the TSMReader.Stats call to avoid creating
garbage.
2016-09-24 15:53:28 -06:00
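
A sketch of the caching, with simplified fields: stat once, remember
the answer, and invalidate only when new tombstone data is written.

    package sketch

    import (
        "os"
        "sync"
    )

    type tombstoneStat struct {
        mu      sync.Mutex
        path    string
        checked bool // stat'ed since the last tombstone write?
        exists  bool
    }

    // HasTombstone answers from cache; since the planner runs very
    // frequently, this turns many stat syscalls into at most one per
    // tombstone write.
    func (t *tombstoneStat) HasTombstone() bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        if !t.checked {
            _, err := os.Stat(t.path)
            t.exists = err == nil
            t.checked = true
        }
        return t.exists
    }

    // WroteTombstone invalidates the cache after new tombstone data.
    func (t *tombstoneStat) WroteTombstone() {
        t.mu.Lock()
        t.checked = false
        t.mu.Unlock()
    }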
Jason Wilder 1a35c0a3fc Fix never-ending full compactions
The full compaction planner could return a plan that only included
one generation.  If this happened, a full compaction would run on that
generation producing just one generation again.  The planner would then
repeat the plan.

This could happen if there were two generations that were both over
the max TSM file size and the second one happened to be in level 3 or
lower.

When this situation occurs, one CPU is pegged running a full compaction
continuously and the disks become very busy basically rewriting the
same files over and over again.  This can eventually cause disk and CPU
saturation if it occurs with more than one shard.

Fixes #7074
2016-09-03 17:35:14 -06:00
Jason Wilder 0ea645642b Remove compaction assert that should not be there
This assert was not removed when the issue that caused the assert
to trigger was fixed in 0f5e994.

Fixes #7121
2016-08-08 09:59:45 -06:00
Jason Wilder 7c3d1aac68 Simplify purger.add logic 2016-07-26 13:02:08 -06:00
Jason Wilder cab84ae279 Prevent concurrent compactions from stepping on each other
Normally, compactions do not conflict on the files they are compacting.
If the full cold threshold is set very low, it can cause conflicts where
two compactions compact the same files.  The full compaction was the
only place this could happen as its planning is greedy.

To make this safer for concurrent execution, the compactor tracks which
files are currently being compacted and prevents any new compactions from
starting if the file set overlaps.

Fixes #6595
2016-07-26 12:58:25 -06:00
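
A sketch of the file-set tracking with illustrative names: a plan is
only handed out when none of its files are already being compacted.

    package sketch

    import "sync"

    type tracker struct {
        mu    sync.Mutex
        inUse map[string]struct{}
    }

    func newTracker() *tracker {
        return &tracker{inUse: make(map[string]struct{})}
    }

    // acquire reserves files for one compaction; it fails if any file
    // overlaps a compaction already in flight.
    func (t *tracker) acquire(files []string) bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        for _, f := range files {
            if _, ok := t.inUse[f]; ok {
                return false
            }
        }
        for _, f := range files {
            t.inUse[f] = struct{}{}
        }
        return true
    }

    // release frees the files when the compaction finishes or aborts.
    func (t *tracker) release(files []string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        for _, f := range files {
            delete(t.inUse, f)
        }
    }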
Jason Wilder ded6e40d47 Remove lastPlanCheck var
This var caused full compactions to not run while the server was running;
they would only run after a restart.
2016-07-26 12:58:25 -06:00
Jason Wilder 2f78c4ec83 Fix race when creating temp file
Using os.O_EXCL is safer than checking and then creating the file.
2016-07-26 12:58:25 -06:00
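
A sketch of the atomic create: with O_EXCL the open fails if the file
already exists, so there is no check-then-create window to race on.

    package sketch

    import "os"

    // createTmpFile creates the file atomically, returning an error
    // satisfying os.IsExist if another goroutine created it first.
    func createTmpFile(path string) (*os.File, error) {
        return os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0666)
    }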
Jason Wilder b48d88ce9e Abort running compactions when series are deleted
If a delete is issued while a compaction is running, a newly
deleted series could re-appear after the compaction completed.  This
could occur if the compaction had already written the blocks for series
that were just deleted.  When the compaction completed, the newly
written tombstone files would be deleted, essentially undeleting the
series.
2016-07-17 23:53:12 -06:00
Jason Wilder 0264966f5c Add index optimize planning step
For larger datasets, it's possible for shards to get into a state where
many large, dense TSM files exist.  While the shard is still hot for
writes, full compactions will skip these files since they are already
fairly optimized and full compactions are expensive.  If the write volume
is large enough, the shard can accumulate lots of these files.  When
files are in this state, each file's index can contain every series, which
increases startup times since the full set of series keys must be parsed
for every file.  If the number of series is high, the index can be quite
large, causing a large amount of disk IO at startup.

To fix this, an optimize compaction is run when a full compaction planning
step decides there is nothing to do.  The optimize compaction combines
and spreads the data and series keys across all files, resulting in each
file containing the full series data for that shard and a subset of the
total set of keys in the shard.

This allows a shard to store a series key only once, reducing storage
size, and to load each key only once at startup.
2016-07-14 11:32:36 -06:00
Jason Wilder 5ee20e04a8 Fix compaction level planner
Large files created early in the leveled compactions could cause
a shard to get into a bad state.  This reworks the level planner
to handle those cases as well as splits large compactions up into
multiple groups to leverage more CPUs when possible.
2016-07-14 11:14:09 -06:00
Jason Wilder d0023dee5d Convert inline errors to constants 2016-05-31 10:51:54 -06:00
Jason Wilder 1ff8ecf4fb Add ability to disable shards
Disabling a shard causes all writes and queries to a shard to return
an error.  This also disables compactions for the shard.
2016-05-31 10:51:54 -06:00
Jason Wilder 7d50970631 Fix continuous compaction edge case
The level planner would keep including the same TSM files to be
recompacted even if they were already quite compacted and split
across several TSM files.

Fixes #6683
2016-05-25 10:36:24 -06:00
Jason Wilder f2bcf9d9ab Code review fixes 2016-05-18 15:25:56 -06:00
Jason Wilder eff71cbe23 Rollover to new TSM file when max blocks exceeded
Fixes #6406
2016-05-18 15:25:55 -06:00
Jason Wilder 8fda621d8b Fix memory spike when compacting overwritten points
If a large series contains a point that is overwritten, the compactor
would load the whole series into RAM during a full compaction.  If
the series was large, it could cause very large RAM spikes and OOMs.

The change reworks the compactor to merge blocks more incrementally
similar to the fix done in #6556.

Fixes #6557
2016-05-18 15:25:55 -06:00
Jason Wilder 23fc9ff748 Revert "Fix memory spike when compacting overwritten points"
This reverts commit d99c5e26f6.
2016-05-16 09:30:34 -06:00
Jason Wilder d99c5e26f6 Fix memory spike when compacting overwritten points
If a large series contains a point that is overwritten, the compactor
would load the whole series into RAM during a full compaction.  If
the series was large, it could cause very large RAM spikes and OOMs.

The change reworks the compactor to merge blocks more incrementally
similar to the fix done in #6556.
2016-05-05 22:31:30 -06:00
Jason Wilder a0ac754802 Fix loading huge series into RAM when points are overwritten
In some query scenarios, if there are a lot of points on disk spread
across many blocks in TSM files and a point is overwritten near the
beginning of the shard's time range, the full series could be loaded
into RAM, triggering OOMs and huge allocations.

The issue was that the KeyCursor code that handles overwriting points
had a simple implementation that just deduped the whole series in this
case.  This falls over when the series is quite large.

Instead, the KeyCursor has been changed to only decode blocks with
updated points.  It then keeps track of which sections of the blocks
have been read so they are not re-read when later points are
decoded.

Since the points in a block are always sorted, the code was also changed
to remove the Deduplicate calls since they end up
reallocating the slice.  Instead, we do a sorted merge and re-use
the slice as much as we can.
2016-05-05 09:34:44 -06:00
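
A sketch of the sorted merge, assuming values sorted by time with the
second slice holding the newer writes: on equal timestamps the newer
value wins, so no separate Deduplicate pass (and reallocation) is needed.

    package sketch

    type value struct {
        T int64
        V float64
    }

    // merge combines two time-sorted slices in one pass.
    func merge(older, newer []value) []value {
        out := make([]value, 0, len(older)+len(newer))
        for len(older) > 0 && len(newer) > 0 {
            switch {
            case older[0].T < newer[0].T:
                out = append(out, older[0])
                older = older[1:]
            case older[0].T > newer[0].T:
                out = append(out, newer[0])
                newer = newer[1:]
            default: // overwritten point: the newer value wins
                out = append(out, newer[0])
                older, newer = older[1:], newer[1:]
            }
        }
        out = append(out, older...)
        return append(out, newer...)
    }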
Jason Wilder abcb559b09 Remove index meta data when series and measurements are gone
This removes the dropMeta param from tsdb.Store.DeleteSeries and
lets the shard determine when to remove the metadata from the index
based on what series still have data in the shard.

This uncovered a nasty bug in compactions where a fully deleted series would
prematurely end the compactions and not carry forward the rest of the data
in the TSM file.  This is now fixed as well.
2016-04-29 16:31:57 -06:00
Jason Wilder 4e353867d5 Fix first block not getting purged when deleting series 2016-04-27 17:08:00 -06:00
Jason Wilder 6042e114a1 Remove tombstoned values during compaction
This will skip blocks that are fully tombstoned as well as remove
points that have been removed within a block.
2016-04-27 13:09:53 -06:00
Jason Wilder a789e819a3 Remove NewTSMReaderWithOptions
There are two TSMIndex implementations, the directIndex and the
indirectIndex.  Originally, we only had the directIndex and later
added the indirectIndex and NewTSMReaderWithOptions in order to
allow both indexes to be used in tests and code.  This has created
a problem since we really only use the directIndex for writing and
always use the indirectIndex for reading.

This change removes the NewTSMReaderWithOptions func so that it is
no longer possible to create a TSMReader with a directIndex.  This
will allow a lot of the block reading code used by the directIndex
to be removed and simplify maintenance.  It also gives better test
coverage of the code that is actually used by the TSM engine now.
2016-04-27 13:09:52 -06:00
Seif Lotfy c6e3c87e00 Add Block checksum validation and "influx_inspect verify" tool
Fixes #5502
2016-04-19 22:33:03 +02:00
Jason Wilder a4e5446ddd Return error when TSM writer close returns one
The TSM writer uses a bufio.Writer that needs to be flushed before
it's closed.  If the flush fails for some reason, the error is not
handled by the defer and the compactor continues on as if all is good.
This can create files with truncated indexes or zero-length TSM files.

Fixes #5889
2016-03-21 15:00:36 -06:00
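
A sketch of the fix with illustrative field names: flush the buffered
writer inside Close and surface its error instead of dropping it.

    package sketch

    import (
        "bufio"
        "os"
    )

    type tsmWriter struct {
        buf *bufio.Writer
        f   *os.File
    }

    // Close flushes the bufio.Writer first; a failed flush would
    // otherwise leave a truncated index while reporting success.
    func (w *tsmWriter) Close() error {
        if err := w.buf.Flush(); err != nil {
            w.f.Close()
            return err
        }
        return w.f.Close()
    }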
Jason Wilder 8d70d65a82 Convert time.Time to int64 2016-02-25 15:15:01 -07:00
Ben Johnson 5a0d1ab7c1 rename influxdb/influxdb to influxdata/influxdb
This commit changes all the import and URL references from:

    github.com/influxdb/influxdb

to:

    github.com/influxdata/influxdb
2016-02-10 10:26:18 -07:00
Jason Wilder 756421ec4a Look for fully compacted block in addition to max size during compaction
Some data shapes would cause files to grow larger than the max size more
quickly, which resulted in them getting skipped by the full compaction planner
at times.  Some datasets that could make this happen are very large keys or
very large numbers of keys (10M).  When this happened, multiple max-sized
files would accumulate but the blocks would not be full.  When the shard went
cold for writes, these files would get recompacted down to the optimal size, but
a lot of space would be wasted in the meantime.
2016-01-07 15:18:42 -07:00
Jason Wilder b6da176a4b Fix direct index size not calculated 2015-12-23 18:01:11 -07:00
Jason Wilder f9ae8077da Allow compactions to run when files have tombstones 2015-12-23 18:01:11 -07:00
Jason Wilder a38c95ec85 Update compactions to run concurrently
This has a few changes in it (unfortunately).  The main change is to run compactions
concurrently.  While implementing this, a few query and performance bugs showed up that
are also fixed by this commit.
2015-12-23 18:01:11 -07:00
Jason Wilder 48d4156eac Fix blocks not sorted correctly when chunking 2015-12-23 18:01:11 -07:00
Jason Wilder bb2562b2ab Return CompactionGroups from planning 2015-12-23 18:01:11 -07:00
Jason Wilder 611017f4ed Add comments 2015-12-18 10:00:07 -07:00
Jason Wilder fd2a409ea3 Skip decoding blocks that are already full 2015-12-17 12:47:05 -07:00
Jason Wilder 825296ddd8 Add comments 2015-12-16 11:30:06 -07:00
Jason Wilder 3893bc60e1 Speed up TSM compactor
Just keep the current block for each iterator in the buffers.
2015-12-16 11:16:17 -07:00
Jason Wilder 00f570441b Convert TSMKeyIterator to return blocks 2015-12-16 11:16:17 -07:00
Jason Wilder 59a57d8f73 Convert CacheKeyIterator to return encoded blocks 2015-12-16 11:16:17 -07:00
Jason Wilder 0623648140 Add chunking support back to TSMKeyIterator
Was removed when MergeIterator was deleted.
2015-12-16 11:16:17 -07:00
Jason Wilder 31b97c3fe0 Add max points per block back for CacheKeyIterator
Was removed when MergeIterator was removed.
2015-12-16 11:16:16 -07:00
Jason Wilder 992aea7bd3 Merge pull request #5060 from influxdb/jw-drop-db
Cancel writing TSM files when engine closes
2015-12-08 16:16:07 -07:00
Paul Dix b192136887 Merge pull request #5058 from influxdb/pd-update-compaction-logic
Update TSM compaction logic
2015-12-08 18:14:15 -05:00
Paul Dix 27cc2ea0cc Update compact.Plan 2015-12-08 18:01:31 -05:00
Jason Wilder d7cff651d1 Cancel writing TSM files when engine closes
If the engine is closed while a compaction is going on, the close call
blocks until the goroutine exits.  This could be several minutes because
control does not return to the channel selector while there is
still data to write.
2015-12-08 15:41:53 -07:00
Paul Dix 96445a53a7 Update TSM compaction logic
* Update compaction to look at newest files of the smallest step first
* Update compaction to look at older files in larger steps if newer files don't have enough small steps to compact
* Changed the TestDefaultCompactionPlanner_CombineSequence test to reflect what's possible now. We'd only have multiple files in the same generation if all files but one were over the max allowable size.
* Clean up the logic on when full compactions are run and when planning can be skipped
2015-12-08 17:33:38 -05:00
Jason Wilder 99c313ddae Fix leaking TSM files when compacting
The files being read were not closed after the compaction ran, causing
them to leak.

Fixes #5046
2015-12-08 12:55:30 -07:00
Jason Wilder f245b44afa Set full compaction duration option on planner
Was set on engine and not planner so it was always 0.
2015-12-08 09:56:36 -07:00
Paul Dix 8096c6b845 Update TSM, address PR #5011 comments
* Moved TSM file extension to a constant
* Fixed typos
* Changed group.size() back to being a uint64 since it can have multiple files up to 4GB each.
2015-12-07 14:47:17 -05:00
Paul Dix 440a8a8a1f Change all TSM file sizes to uint32 2015-12-07 10:12:24 -05:00
Paul Dix 937233d988 Update TSM compaction planning logic
* Update Plan to do a full compaction if cold for writes
* Remove MaxFileSize as a config variable from Compactor. Should be a set constant
* Update Plan to keep track of if the last check was fully compacted so we can skip future planning calls
* Update compact min file count to 3 so that compactions run more frequently
2015-12-07 08:26:30 -05:00
Paul Dix 1bee7d1512 Update TSM, remove old version, add config
* remove rolloverTSMFileSize constant that is no longer used
* remove the maxGenerationFileCount since it is no longer a limitation that's necessary with the new compaction scheme. We no longer read WAL segments as part of the compaction so memory is only used as we read in each individual key
* remove minFileCount and switch to a user configurable variable
* remove the mutex from WALSegmentWriter. There's never more than one open in the WAL at one time and it's not exported through any function so the lock on the WAL should be used. This simplified keeping track of the last write time and removed a bunch of unnecessary locks.
* update WALSegmentWriter.Write to take the compressed bytes so that encoding and compression can occur before the call to write (while we don't hold the WAL lock)
* remove a bunch of unnecessary locking in WAL.writeToLog
* Add check for TSM file magic number and version
* Remove old tsm, log, and unused cursor code
* Remove references to tsm1dev everywhere except in the inspector
* Clean up config options for compaction and snapshotting
* Remove old TSM configuration options
* Update the config.sample.toml with TSM options
* Update WAL compact to force if it has been cold for writes for a configurable period of time (1h by default)
2015-12-06 18:50:39 -05:00
Jason Wilder 33a33e6a23 Fix 32bit int overflow of constant value 2015-12-05 13:09:18 -07:00
Jason Wilder 41b24995a7 Compaction fixes 2015-12-05 12:19:28 -07:00
Jason Wilder 6592615958 Updated compaction strategy
This changes compaction to merge sequences of files from lower generations
up into later generations.
2015-12-04 23:30:39 -07:00
Jason Wilder 357b88c439 Increment sequence of max generation when compacting files 2015-12-04 13:46:28 -07:00
Jason Wilder 52bec1f7f6 Change TSM file naming to generation-sequence.tsm 2015-12-04 11:51:33 -07:00
Jason Wilder 4b7cc6720a Merge pull request #4983 from influxdb/jw-tsm-deletes2
Implement delete series/measurement
2015-12-04 10:02:11 -07:00
Jason Wilder c54a3da0ca Implement delete series/measurement 2015-12-04 09:10:26 -07:00