Commit Graph

216 Commits (3318c94a2f47783a6c77a1f66035bcbd90d5679b)

Author SHA1 Message Date
Ben Johnson 493c1ed0d1
inmem tests passing. 2017-12-05 10:49:58 -07:00
Ben Johnson f5f85d65f9
Fixing more tests. 2017-12-04 10:29:04 -07:00
Ben Johnson f1cf55ca99
Merge branch 'er-tsi-index-part' of https://github.com/influxdata/influxdb into er-tsi-index-part 2017-11-30 05:45:40 -07:00
Ben Johnson ca09f18e65
intermediate: tsdb compile 2017-11-29 11:20:18 -07:00
Edd Robinson 6dbb070ce9 Fix race on sfiles in Store 2017-11-27 15:41:16 +00:00
Ben Johnson fc966a1b67
Add series file backup/restore. 2017-11-22 08:55:54 -07:00
Ben Johnson ede3fcf98e
intermediate 2017-11-15 16:09:25 -07:00
Ben Johnson ba4c9e0317
Merge remote-tracking branch 'upstream/master' into er-tsi-index-part 2017-11-14 16:14:13 -07:00
Jason Wilder aee395d3bd Make DeleteSeriesRange take SeriesIterator 2017-11-13 09:02:10 -07:00
Jason Wilder f893beb6d8 Use MeasurementSeriesKeysByExprIterator for deletes 2017-11-13 09:02:10 -07:00
Jonathan A. Sternberg 0b7c56bcd8 Update the zap logger dependency
The previous SHA was taken from a revision on a devel branch that I
thought would remain in the tree after it was merged. That revision
was rebased away and the logger's API was changed.

This updates the usage of the logger and adds a simple package for
constructing the base logger.

The 1.0 version of zap changed the format of the default console logger,
so this change moves over to the new logger instead of attempting to
retain backwards compatibility with the old format.
2017-11-10 16:27:16 -06:00
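
As a reference for that base-logger package, a minimal sketch under zap
1.0; the package name, `New` signature, and encoder settings here are
assumptions, not the actual code:

```go
package logger

import (
	"io"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// New returns a base logger that writes console-formatted output to w.
func New(w io.Writer) *zap.Logger {
	cfg := zap.NewProductionEncoderConfig()
	cfg.EncodeTime = zapcore.ISO8601TimeEncoder
	core := zapcore.NewCore(
		zapcore.NewConsoleEncoder(cfg), // zap 1.0 console format
		zapcore.AddSync(w),
		zapcore.InfoLevel,
	)
	return zap.New(core)
}
```
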
Ben Johnson 9ad2b53881
intermediate 2017-11-09 09:18:33 -07:00
Edd Robinson 59c4e4b1bc Skip shards we don't have 2017-11-08 13:33:52 +00:00
Ben Johnson 156f25ac23
Improve SHOW TAG KEYS performance. 2017-11-07 10:59:19 -07:00
Edd Robinson e762da9aca Fix race on store close
There was a very small window where it was possible to deadlock during
the close of the Store. When closing, the Store waited on its `WaitGroup`
under a `Lock`. Naturally, all other goroutines must have been in a
position to call `Done` on the `WaitGroup` before the `Wait` call in
`Close` would return.

For the goroutine running the `monitorShards` method it was possible
that it would be unable to do this. Specifically, if the `monitorShards`
goroutine was jumping into the `t.C` case as the `Close()` goroutine was
acquiring the `Lock`, then the `monitorShards` goroutine would be unable
to acquire the `RLock`. Since it would also be unable to progress around
its loop to jump into the `s.closing` case, it would be unable to call
`Done` on the `WaitGroup` and we would have a deadlock.

This was identified during an AppVeyor CI run, though I was unable to
reproduce this locally.
2017-11-07 15:26:46 +00:00
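
A pared-down Go sketch of the shutdown ordering that avoids this
deadlock (names assumed, not the actual tsdb.Store): signal `closing`
before waiting, and only take the write lock once all monitor
goroutines have exited.

```go
package sketch

import (
	"sync"
	"time"
)

// Store is a stand-in for the real type.
type Store struct {
	mu      sync.RWMutex
	wg      sync.WaitGroup // Add(1) is called before monitorShards starts
	closing chan struct{}
}

func (s *Store) monitorShards() {
	defer s.wg.Done()
	t := time.NewTicker(10 * time.Second)
	defer t.Stop()
	for {
		select {
		case <-s.closing:
			return
		case <-t.C:
			s.mu.RLock()
			// ... inspect shard cardinality, disk stats, etc. ...
			s.mu.RUnlock()
		}
	}
}

// Close signals shutdown before taking the write lock, so monitorShards
// can always reach the s.closing case and call Done; only then does
// Close wait and acquire the lock.
func (s *Store) Close() error {
	close(s.closing)
	s.wg.Wait()
	s.mu.Lock()
	defer s.mu.Unlock()
	// ... close shards ...
	return nil
}
```
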
Edd Robinson 88e2ea822d Add inmem shard optimisation to SHOW MEASUREMENTS 2017-11-06 19:15:01 +00:00
Edd Robinson f8353bf300 Check shard index type correctly
Previously we used the EngineOptions to determine which shard index
type we were using. However, these options are set once at runtime
initialisation. Therefore if you're running with TSI enabled but then
accessing a legacy database with the inmem index, TagValues would not
have taken advantage of the inmem index.

This change ensures we always check the actual index of the shard(s).
2017-11-06 19:15:01 +00:00
Edd Robinson fbcb299b8a Support WHERE time clause in SHOW TAG VALUES
This commit adds time support to SHOW TAG VALUES. Time can be used as
both a lower and upper boundary. However, there are some caveats.

For the `inmem` index, filtering by time will still return all results
because the index data is shared across shards.

For the `tsi1` index, filtering by time will only work down to the shard
level. Specifically, when querying by time, all shards within that time
range will be used to generate the results.
2017-11-06 19:15:01 +00:00
Stuart Carnie f3d45ba301 influxdata/influxdb/influxql -> influxdata/influxql 2017-10-30 14:40:26 -07:00
Jason Wilder 71071ed67a Add compaction backlog stat
This gives an indication as to whether compactions are backed up
or not.
2017-10-03 10:48:14 -06:00
Jason Wilder ae821f4e2d Rework compaction scheduling
This changes the compaction scheduling to better utilize the available
cores that are free.  Previously, a level was planned in its own goroutine
and would kick off a number of compaction groups.  The problem with this
model was that if there were 4 groups, and 3 completed quickly, the planning
would be blocked for that level until the last group finished.  If the compactions
at the prior level are running more quickly, a large backlog could accumulate.

This now moves the planning to a single goroutine that plans each level in
succession and starts as many groups as it can.  When one group finishes,
the planning will start the next group for the level.
2017-10-03 10:48:13 -06:00
Joe LeGasse 1443b22379 auth: add series auth to 'show tag values' 2017-09-27 20:01:18 -04:00
Edd Robinson 4a67f92acc Prevent store from directly accessing Shard's engine 2017-09-25 17:43:01 +01:00
Edd Robinson 8e9cabbb9c Fix race in TagValues when reaching into engine 2017-09-25 17:43:01 +01:00
Jason Wilder db204f3eb7 Default concurrent compactions to 50% of available cores 2017-09-21 12:48:11 -06:00
Jason Wilder 31646aae3a Release mmap pages when shard is cold
This instructs the kernel that it can release memory used by mmap'd
TSM files when they are not actively being used.  If the mappings are in
use, the kernel will fault the pages back in.  On Linux, this causes
RES memory to drop immediately when run.
2017-09-18 11:51:51 -06:00
Jason Wilder 38460ec37e Re-enable compactions during writes
A cold shard that suddenly receives a lot of writes could get a very
big cache that takes a long time to snapshot or causes the cache
max memory limit to be hit more quickly.  This re-enables the compactions
if necessary during writes so we don't have to wait for the shard monitor
goroutine to re-enable them.
2017-09-11 15:29:26 -06:00
Jonathan A. Sternberg 697759613c Remove time comparisons from the inner sections of the storage engine 2017-08-16 16:51:13 -05:00
Jonathan A. Sternberg 8bd04ebe39 Remove TimeRange function and replace with a more accurate ConditionExpr function
The ConditionExpr function is more accurate because it parses the
condition and ensures that time conditions are actually used correctly.
That means that attempting to combine conditions with OR will not result
in the query silently pretending it's an AND, and nested conditions work
correctly, so there is only one way to read the query.

It also extracts the non-time conditions into a separate condition so we
can stop attempting to parse around the time conditions in lower layers
of the storage engine. This change does not remove those hacks, but a
following commit should be able to sanitize the condition and remove
them.
2017-08-16 16:45:35 -05:00
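
A sketch of the intended call pattern, assuming the ConditionExpr
signature from the influxql package at the time; treat the details as
illustrative.

```go
package sketch

import (
	"fmt"

	"github.com/influxdata/influxql"
)

// splitCondition parses the condition once, extracts the time range,
// and keeps the remaining non-time condition for the lower layers of
// the storage engine.
func splitCondition(cond influxql.Expr) (influxql.Expr, error) {
	expr, timeRange, err := influxql.ConditionExpr(cond, nil)
	if err != nil {
		// e.g. time conditions combined with OR are rejected outright
		return nil, err
	}
	fmt.Println("time range:", timeRange.Min, timeRange.Max)
	return expr, nil
}
```
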
Jason Wilder c74932de94 Limit shard cardinality checks to 1 per database
The tag cardinality checks were run for all inmem shards.  Since inmem
shards share the same index, a lot of the work is redundant.  Inmem shards
also need to sort their measurement and tag keys, which can be CPU intensive
with many shards or higher cardinality.

This changes the monitoring to just check one shard in each database, which
should lower the CPU usage caused by excessive sorting.  The longer-term
solution is to use TSI, which would not need this check or the sorting.
2017-08-15 12:17:18 -06:00
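
A sketch of the selection logic with stand-in types: check one shard
per database, since inmem shards in a database share a single index.

```go
package sketch

// Shard is a stand-in carrying only the field this sketch needs.
type Shard struct{ Database string }

// shardsToCheck returns one representative shard per database; the
// others would yield redundant cardinality results from the same index.
func shardsToCheck(shards []*Shard) []*Shard {
	seen := make(map[string]bool)
	var out []*Shard
	for _, sh := range shards {
		if seen[sh.Database] {
			continue // this database's shared index is already covered
		}
		seen[sh.Database] = true
		out = append(out, sh)
	}
	return out
}
```
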
Edd Robinson aa7095be5a Use a merge-based approach for TagValues 2017-08-02 14:10:52 +01:00
Jason Wilder 94a48774b7 Pull in new index filter 2017-08-02 14:10:52 +01:00
Edd Robinson 1e9ce8e0a7 Add test for TagValues 2017-08-02 14:10:52 +01:00
Jason Wilder c75ac3076f Limit delete to run one shard at a time
There was a change to speed up deleting and dropping measurements
that executed the deletes in parallel for all shards at once. #7015

When TSI was merged in #7618, the series keys passed into Shard.DeleteMeasurement
were removed and were expanded lower down.  This causes memory to blow up
when a delete across many shards occurs as we now expand the set of series
keys N times instead of just once as before.

While running the deletes in parallel would be ideal, there have been a number
of optimizations in the delete path that make running deletes serially pretty
good.  This change just limits the concurrency of the deletes, which keeps memory
more stable.
2017-07-27 16:01:47 -06:00
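
A sketch of bounding delete concurrency with a semaphore channel (the
types are stand-ins); a capacity of 1 makes the deletes effectively
serial.

```go
package sketch

// Shard and DeleteMeasurement are stand-ins for the real types.
type Shard struct{}

func (s *Shard) DeleteMeasurement(name string) error { return nil }

// deleteMeasurement runs the per-shard deletes one at a time; raising
// the channel capacity would allow more concurrency at more memory.
func deleteMeasurement(shards []*Shard, name string) error {
	limit := make(chan struct{}, 1) // concurrency of 1: serial deletes
	errC := make(chan error, len(shards))
	for _, sh := range shards {
		sh := sh
		limit <- struct{}{} // blocks until the previous delete finishes
		go func() {
			defer func() { <-limit }()
			errC <- sh.DeleteMeasurement(name)
		}()
	}
	for range shards {
		if err := <-errC; err != nil {
			return err
		}
	}
	return nil
}
```
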
Ben Johnson 3128c6a42e
Fix SHOW TAG VALUES deduplication. 2017-06-01 15:38:35 -06:00
Jason Wilder 9374c4f513 Reduce allocations when monitoring shards
When monitoring shards, a slice of measurements is allocated for
each shard.  With many shards and measurements, these allocations
can be large.  Since inmem shards share the same index, we only
need to do this once since the resulting slices are all the same.
This reduces memory usage when monitoring shard cardinality.
2017-05-08 13:34:40 -06:00
Jason Wilder 88848a9426 Remove per shard monitor goroutine
The monitor goroutine ran for each shard and updated disk stats
as well as logged cardinality warnings.  This goroutine has been
removed by making the disk stats more lightweight and callable
directly from Statistics, and by moving the logging to the tsdb.Store.  The
latter allows one goroutine to handle all shards.
2017-05-03 16:31:57 -06:00
Jason Wilder f87fd7c7ed Stop background compaction goroutines when shard is cold
Each shard has a number of goroutines for compacting different levels
of TSM files.  When a shard goes cold and is fully compacted, these
goroutines are still running.

This change will stop background shard goroutines when the shard goes
cold and start them back up if new writes arrive.
2017-05-03 16:31:57 -06:00
Jason Wilder 8fc9853ed8 Add max-concurrent-compactions limit
This limit allows the number of concurrent level and full compactions
to be throttled.  Snapshot compactions are not affected by this limit
as they need to run continuously.

This limit can be used to control how much CPU is consumed by compactions.
The default is to limit to the number of CPUs available.
2017-05-03 16:31:57 -06:00
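
A sketch of such a limit as a counting semaphore (names assumed);
snapshot compactions would simply never take from it.

```go
package sketch

import "runtime"

// limiter is a counting semaphore sized by max-concurrent-compactions.
type limiter chan struct{}

func newLimiter(max int) limiter {
	if max <= 0 {
		max = runtime.NumCPU() // default: limit to available CPUs
	}
	return make(limiter, max)
}

// Take blocks until a compaction slot is free; Release frees it.
func (l limiter) Take()    { l <- struct{}{} }
func (l limiter) Release() { <-l }
```
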
Jason Wilder 80fef4af4a Enable shards after loading
Compactions are enabled as soon as the shard is opened.  This can
slow down startup or cause the system to spike in CPU usage at startup
if many shards need to be compacted.

This now delays compactions until after the shards are loaded.
2017-05-03 16:31:57 -06:00
Jason Wilder a76146e34a Add Store.Import capability
This allows the contents of a backup to be imported into a shard without
requiring the whole shard to be replaced.
2017-04-28 13:30:46 -06:00
Jason Wilder 927acb5ab9 Ensure MeasurementNames deduplicates measurements across shards 2017-04-06 12:17:29 -06:00
Jason Wilder 8da84e6144 Merge branch 'master' into tsi 2017-04-03 11:21:02 -06:00
Jason Wilder 68f73e64d1 Lazily sort Measurement.SeriesIDs
Removing series while trying to maintain the sorted series list
does not perform well when removing many series.  This causes
dropping a DB, RP, or series to be very slow in some cases.

Instead, lazily create a sorted series list when first requested and
invalidate it when dropping series.
2017-04-03 08:57:53 -06:00
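
A sketch of the lazy approach with illustrative fields: dropping a
series only invalidates the sorted view, which is rebuilt on the next
read.

```go
package sketch

import "sort"

// Measurement's fields here are illustrative, not the actual ones.
type Measurement struct {
	seriesIDs []uint64
	sorted    bool
}

// DropSeries removes an ID and merely invalidates the sorted view,
// avoiding a re-sort per removal.
func (m *Measurement) DropSeries(id uint64) {
	for i, v := range m.seriesIDs {
		if v == id {
			m.seriesIDs = append(m.seriesIDs[:i], m.seriesIDs[i+1:]...)
			break
		}
	}
	m.sorted = false
}

// SeriesIDs sorts lazily on first access after an invalidation.
func (m *Measurement) SeriesIDs() []uint64 {
	if !m.sorted {
		sort.Slice(m.seriesIDs, func(i, j int) bool {
			return m.seriesIDs[i] < m.seriesIDs[j]
		})
		m.sorted = true
	}
	return m.seriesIDs
}
```
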
Jason Wilder 32c4d43952 Speed up drop measurement
This reworks drop measurement to use a sorted list of series keys
instead of creating an intermediate map.  It removes allocations
and some extra garbage that is created during drop measurement.
2017-04-03 08:57:53 -06:00
Edd Robinson 5e342a2ddd Ensure shared index removed on database drop
When using the inmem index, if one drops a database and then creates it
again, the previous index object will be reused. This includes the
previous cardinality estimation sketches, leading to inaccurate
cardinality estimations.
2017-03-30 13:05:31 +01:00
Edd Robinson ddf7f0fd7b Remove uncalled method 2017-03-30 12:48:22 +01:00
Edd Robinson fddaff2cc8 Merge master in 2017-03-29 18:00:28 +01:00
Edd Robinson 45f843fc91 Don't unassign shards when system shutting down 2017-03-29 11:57:38 +01:00
Edd Robinson f89de550ed Significantly speed up DROP DATABASE 2017-03-21 11:35:31 +00:00