Commit Graph

306 Commits (c665c749fe0aeb858ca2f059991826c923348949)

Author SHA1 Message Date
Jonathan A. Sternberg 0b7c56bcd8 Update the zap logger dependency
The previous SHA was taken from a revision on a devel branch that I
thought would remain in the tree after it was merged. That revision
was rebased away and the logger's API was changed.

This updates the usage of the logger and adds a simple package for
constructing the base logger.

The 1.0 version of zap changed the format of the default console logger
so this change moves over to this new logger instead of attempting to
retain backwards compatibility with the old format.
2017-11-10 16:27:16 -06:00
Ben Johnson 9ad2b53881
intermediate 2017-11-09 09:18:33 -07:00
Edd Robinson 59c4e4b1bc Skip shards we don't have 2017-11-08 13:33:52 +00:00
Ben Johnson 156f25ac23
Improve SHOW TAG KEYS performance. 2017-11-07 10:59:19 -07:00
Edd Robinson e762da9aca Fix race on store close
There was a very small window where it was possible to deadlock during
the close of the Store. When closing, the Store waited on its Waitgroup
under a `Lock`. Naturally, all other goroutines must have been in a
position to call `Done` on the `Waitgroup` before the `Wait` call in
`Close` would return.

For the goroutine running the `monitorShards` method it was possible
that it would be unable to do this. Specifically, if the `monitorShards`
goroutine was jumping into the `t.C` case as the `Close()` goroutine was
acquiring the `Lock`, then the `monitorShards` goroutine would be unable
to acquire the `RLock`. Since it would also be unable to progress around
its loop to jump into the `s.closing` case, it would be unable to call
`Done` on the `WaitGroup` and we would have a deadlock.

This was identified during an AppVeyor CI run, though I was unable to
reproduce this locally.
2017-11-07 15:26:46 +00:00
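
A minimal Go sketch of the hazard and one safe ordering, with illustrative names rather than the actual tsdb.Store code: by waiting on the WaitGroup before taking the write lock, the monitor goroutine can always either observe `closing` or acquire its `RLock` and exit.

    package main

    import (
        "sync"
        "time"
    )

    type store struct {
        mu      sync.RWMutex
        wg      sync.WaitGroup
        closing chan struct{}
    }

    func (s *store) monitorShards() {
        defer s.wg.Done()
        t := time.NewTicker(time.Second)
        defer t.Stop()
        for {
            select {
            case <-s.closing:
                return
            case <-t.C:
                // If Close() held s.mu while waiting on s.wg, this RLock
                // could block forever and Done() would never be called.
                s.mu.RLock()
                // ... inspect shard cardinality ...
                s.mu.RUnlock()
            }
        }
    }

    func (s *store) Close() error {
        close(s.closing)
        s.wg.Wait() // wait *before* acquiring mu so monitorShards can exit
        s.mu.Lock()
        defer s.mu.Unlock()
        // ... close shards ...
        return nil
    }

    func main() {
        s := &store{closing: make(chan struct{})}
        s.wg.Add(1)
        go s.monitorShards()
        _ = s.Close()
    }
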
Edd Robinson 88e2ea822d Add inmem shard optimisation to SHOW MEASUREMENTS 2017-11-06 19:15:01 +00:00
Edd Robinson f8353bf300 Check shard index type correctly
Previously we used the EngineOptions to determine which shard index
type we were using. However, these options are set once at runtime
initialisation. Therefore if you're running with TSI enabled but then
accessing a legacy database with the inmem index, TagValues would not
have taken advantage of the inmem index.

This change ensures we always check the actual index of the shard(s).
2017-11-06 19:15:01 +00:00
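
A hedged sketch of the idea (types and methods are illustrative, not the real tsdb API): consult each shard's own index type rather than the store-wide EngineOptions fixed at startup.

    package main

    import "fmt"

    // Shard is an illustrative stand-in for tsdb.Shard.
    type Shard struct{ indexType string }

    // IndexType reports the index actually backing this shard.
    func (s *Shard) IndexType() string { return s.indexType }

    func main() {
        shards := []*Shard{{"inmem"}, {"tsi1"}}
        for _, sh := range shards {
            // Decide per shard, not from global engine options, so a
            // legacy inmem shard is still handled correctly under TSI.
            if sh.IndexType() == "inmem" {
                fmt.Println("use inmem fast path")
            } else {
                fmt.Println("use tsi1 path")
            }
        }
    }
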
Edd Robinson fbcb299b8a Support WHERE time clause in SHOW TAG VALUES
This commit adds time support to SHOW TAG VALUES. Time can be used as
both a lower and upper boundary. However, there are some caveats.

For the `inmem` index, filtering by time will still return all results
because the index data is shared across shards.

For the `tsi1` index, filtering by time will only work down to the shard
level. Specifically, when querying by time, all shards within that time
range will be used to generate the results.
2017-11-06 19:15:01 +00:00
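
For example, the time filter can supply a lower and/or upper bound (the tag key here is illustrative):

    SHOW TAG VALUES WITH KEY = "host" WHERE time >= now() - 1d
    SHOW TAG VALUES WITH KEY = "host" WHERE time >= '2017-11-01T00:00:00Z' AND time < '2017-11-06T00:00:00Z'
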
Stuart Carnie f3d45ba301 influxdata/influxdb/influxql -> influxdata/influxql 2017-10-30 14:40:26 -07:00
Jason Wilder 71071ed67a Add compaction backlog stat
This gives an indication as to whether compactions are backed up
or not.
2017-10-03 10:48:14 -06:00
Jason Wilder ae821f4e2d Rework compaction scheduling
This changes the compaction scheduling to better utilize the available
cores that are free.  Previously, a level was planned in its own goroutine
and would kick off a number of compaction groups.  The problem with this
model was that if there were 4 groups, and 3 completed quickly, the planning
would be blocked for that level until the last group finished.  If the compactions
at the prior level were running more quickly, a large backlog could accumulate.

This now moves the planning to a single goroutine that plans each level in
succession and starts as many groups as it can.  When one group finishes,
the planning will start the next group for the level.
2017-10-03 10:48:13 -06:00
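
A rough Go sketch of that scheduling shape, assuming a fixed pool of compaction slots (all names illustrative): a single planner loop starts as many groups as free slots allow and refills as each group completes, instead of blocking per level.

    package main

    import (
        "fmt"
        "sync"
    )

    func compactGroup(level, group int) {
        fmt.Printf("compacting level %d group %d\n", level, group)
    }

    func main() {
        slots := make(chan struct{}, 4) // free cores available for compactions
        var wg sync.WaitGroup
        for level := 1; level <= 3; level++ {
            for group := 0; group < 4; group++ {
                slots <- struct{}{} // planner blocks only when no slot is free,
                wg.Add(1)           // not on the slowest group of a level
                go func(l, g int) {
                    defer wg.Done()
                    compactGroup(l, g)
                    <-slots
                }(level, group)
            }
        }
        wg.Wait()
    }
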
Joe LeGasse 1443b22379 auth: add series auth to 'show tag values' 2017-09-27 20:01:18 -04:00
Edd Robinson 4a67f92acc Prevent store from directly accessing Shard's engine 2017-09-25 17:43:01 +01:00
Edd Robinson 8e9cabbb9c Fix race in TagValues when reaching into engine 2017-09-25 17:43:01 +01:00
Jason Wilder db204f3eb7 Default concurrent compactions to 50% of available cores 2017-09-21 12:48:11 -06:00
Jason Wilder 31646aae3a Release mmap pages when shard is cold
This instructs the kernel that it can release memory used by mmap'd
TSM files when they are not actively being used.  If the mappings are in
use, the kernel will fault the pages back in.  On Linux, this causes
RES memory to drop immediately when run.
2017-09-18 11:51:51 -06:00
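
On Linux this hint maps naturally onto madvise(2). A minimal sketch using golang.org/x/sys/unix, with a hypothetical TSM file path; the actual TSM reader code differs:

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        f, err := os.Open("shard.tsm") // hypothetical TSM file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        fi, _ := f.Stat()
        data, err := unix.Mmap(int(f.Fd()), 0, int(fi.Size()),
            unix.PROT_READ, unix.MAP_SHARED)
        if err != nil {
            log.Fatal(err)
        }
        // Shard has gone cold: tell the kernel these pages can be dropped.
        // They will be faulted back in transparently on the next read.
        if err := unix.Madvise(data, unix.MADV_DONTNEED); err != nil {
            log.Fatal(err)
        }
    }
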
Jason Wilder 38460ec37e Re-enable compactions during writes
A cold shard that suddenly receives a lot of writes could get a very
big cache that takes a long time to snapshot or causes the cache
max memory limit to be hit more quickly.  This re-enables the compactions
if necessary during writes so we don't have to wait for the shard monitor
goroutine to re-enable them.
2017-09-11 15:29:26 -06:00
Jonathan A. Sternberg 697759613c Remove time comparisons from the inner sections of the storage engine 2017-08-16 16:51:13 -05:00
Jonathan A. Sternberg 8bd04ebe39 Remove TimeRange function and replace with a more accurate ConditionExpr function
The ConditionExpr function is more accurate because it parses the
condition and ensures that time conditions are actually used correctly.
That means that attempting to combine conditions with OR will not result
in the query silently pretending it's an AND, and nested conditions work
correctly, so there is only one way to read the query.

It also extracts the non-time conditions into a separate condition so we
can stop attempting to parse around the time conditions in lower layers
of the storage engine. This change does not remove those hacks, but a
following commit should be able to sanitize the condition and remove
them.
2017-08-16 16:45:35 -05:00
Jason Wilder c74932de94 Limit shard cardinality checks to 1 per database
The tag cardinality checks were run for all inmem shards.  Since inmem
shards share the same index, a lot of the work is redundant.  Inmem shards
also need to sort their measurement and tag keys, which can be CPU intensive
with many shards or higher cardinality.

This changes the monitoring to just check one shard in each database which
should lower CPU usage due to excessive sorting.  The longer-term solution
is to use TSI, which would not have this check or require sorting.
2017-08-15 12:17:18 -06:00
Edd Robinson aa7095be5a Use a merge-based approach for TagValues 2017-08-02 14:10:52 +01:00
Jason Wilder 94a48774b7 Pull in new index filter 2017-08-02 14:10:52 +01:00
Edd Robinson 1e9ce8e0a7 Add test for TagValues 2017-08-02 14:10:52 +01:00
Jason Wilder c75ac3076f Limit delete to run one shard at a time
There was a change to speed up deleting and dropping measurements
that executed the deletes in parallel for all shards at once. #7015

When TSI was merged in #7618, the series keys passed into Shard.DeleteMeasurement
were removed and were expanded lower down.  This causes memory to blow up
when a delete across many shards occurs as we now expand the set of series
keys N times instead of just once as before.

While running the deletes in parallel would be ideal, there have been a number
of optimizations in the delete path that make running deletes serially pretty
good.  This change just limits the concurrency of the deletes which keeps memory
more stable.
2017-07-27 16:01:47 -06:00
Ben Johnson 3128c6a42e
Fix SHOW TAG VALUES deduplication. 2017-06-01 15:38:35 -06:00
Jason Wilder 9374c4f513 Reduce allocations when monitoring shards
When monitoring shards, a slice of measurements is allocated for
each shard.  With many shards and measurements, these allocations
can be large.  Since inmem shards share the same index, we only
need to do this once since the resulting slices are all the same.
This reduces memory usage when monitoring shard cardinality.
2017-05-08 13:34:40 -06:00
Jason Wilder 88848a9426 Remove per shard monitor goroutine
The monitor goroutine ran for each shard and updated disk stats
as well as logged cardinality warnings.  This goroutine has been
removed by making the disk stats more lightweight and callable
directly from Statistics, and by moving the logging to the tsdb.Store.  The
latter allows one goroutine to handle all shards.
2017-05-03 16:31:57 -06:00
Jason Wilder f87fd7c7ed Stop background compaction goroutines when shard is cold
Each shard has a number of goroutines for compacting different levels
of TSM files.  When a shard goes cold and is fully compacted, these
goroutines are still running.

This change will stop background shard goroutines when the shard goes
cold and start them back up if new writes arrive.
2017-05-03 16:31:57 -06:00
Jason Wilder 8fc9853ed8 Add max-concurrent-compactions limit
This limit allows the number of concurrent level and full compactions
to be throttled.  Snapshot compactions are not affected by this limit
as they need to run continuously.

This limit can be used to control how much CPU is consumed by compactions.
The default is to limit to the number of CPUs available.
2017-05-03 16:31:57 -06:00
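
The limit behaves like a counting semaphore sized by the config option. A hedged Go sketch (illustrative, not the repository's limiter package):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        maxCompactions := runtime.NumCPU() // default: number of CPUs available
        sem := make(chan struct{}, maxCompactions)
        var wg sync.WaitGroup
        for i := 0; i < 16; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                sem <- struct{}{} // level/full compactions each take a slot
                defer func() { <-sem }()
                fmt.Println("compaction", id)
            }(i)
        }
        // Snapshot compactions would bypass sem entirely, since they
        // need to run continuously.
        wg.Wait()
    }
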
Jason Wilder 80fef4af4a Enable shards after loading
Compactions are enabled as soon as the shard is opened.  This can
slow down startup or cause the system to spike in CPU usage at startup
if many shards need to be compacted.

This now delays compactions until after the shards are loaded.
2017-05-03 16:31:57 -06:00
Jason Wilder a76146e34a Add Store.Import capability
This allows the contents of a backup to be imported into a shard without
requiring the whole shard to be replaced.
2017-04-28 13:30:46 -06:00
Jason Wilder 927acb5ab9 Ensure MeasurementNames deduplicates measurements across shards 2017-04-06 12:17:29 -06:00
Jason Wilder 8da84e6144 Merge branch 'master' into tsi 2017-04-03 11:21:02 -06:00
Jason Wilder 68f73e64d1 Lazily sort Measurement.SeriesIDs
Removing series while trying to maintain the sorted series list
does not perform well when removing many series.  This causes
dropping a database, retention policy, or series to be very slow in some cases.

Instead, lazily create a sorted series list when first requested and
invalidate it when dropping series.
2017-04-03 08:57:53 -06:00
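
A small Go sketch of the lazy approach under illustrative names: keep the unsorted IDs, build the sorted view only on first request, and invalidate it on delete.

    package main

    import (
        "fmt"
        "sort"
    )

    type measurement struct {
        seriesIDs []uint64
        sorted    []uint64 // built lazily; nil means invalidated
    }

    func (m *measurement) SortedSeriesIDs() []uint64 {
        if m.sorted == nil {
            m.sorted = append([]uint64(nil), m.seriesIDs...)
            sort.Slice(m.sorted, func(i, j int) bool { return m.sorted[i] < m.sorted[j] })
        }
        return m.sorted
    }

    func (m *measurement) DropSeries(id uint64) {
        for i, v := range m.seriesIDs {
            if v == id {
                m.seriesIDs = append(m.seriesIDs[:i], m.seriesIDs[i+1:]...)
                break
            }
        }
        m.sorted = nil // invalidate instead of re-sorting on every drop
    }

    func main() {
        m := &measurement{seriesIDs: []uint64{3, 1, 2}}
        fmt.Println(m.SortedSeriesIDs())
        m.DropSeries(1)
        fmt.Println(m.SortedSeriesIDs())
    }
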
Jason Wilder 32c4d43952 Speed up drop measurement
This reworks drop measurement to use a sorted list of series keys
instead of creating an intermediate map.  It removes allocations
and some extra garbage created during drop measurement.
2017-04-03 08:57:53 -06:00
Edd Robinson 5e342a2ddd Ensure shared index removed on database drop
When using the inmem index, if one drops a database, and then creates it
again, the previous index object will be reused. This includes the
previous cardinality estimation sketches, leading to inaccurate
cardinality estimations.
2017-03-30 13:05:31 +01:00
Edd Robinson ddf7f0fd7b Remove uncalled method 2017-03-30 12:48:22 +01:00
Edd Robinson fddaff2cc8 Merge master in 2017-03-29 18:00:28 +01:00
Edd Robinson 45f843fc91 Don't unassign shards when system shutting down 2017-03-29 11:57:38 +01:00
Edd Robinson f89de550ed Significantly speed up DROP DATABASE 2017-03-21 11:35:31 +00:00
Ben Johnson ee2e046853
Merge remote-tracking branch 'upstream/tsi-log-compact' into tsi 2017-03-15 10:22:32 -06:00
Ben Johnson 358b1e0b05
Merge remote-tracking branch 'upstream/master' into tsi 2017-03-15 10:13:32 -06:00
Edd Robinson 7d997d508a Fixes #8138 2017-03-15 12:50:22 +00:00
Mark Rushakoff 601cbcd084 Merge branch '1.2' into mr-merge-12 2017-02-17 16:14:22 -08:00
Jonathan A. Sternberg 2fe48d6781 Rename zap import back to github.com/uber-go/zap
They rebased a revision we were previously relying upon that allowed us
to use the vanity name, so we are reverting to an older version with
the old import path.
2017-02-17 17:17:22 -06:00
Jason Wilder 2e95b4043c Merge branch '1.2' into jw-merge-12 2017-02-02 16:40:36 -07:00
Ben Johnson 76235f1e00
Use original index type for existing shards. 2017-02-02 10:43:48 -07:00
Ben Johnson c246f3d9b0
Use inmem index on existing shards. 2017-02-02 10:04:25 -07:00
Ben Johnson faef0a99c9
Perform series tag iteration under lock.
Adds a `tsdb.Series.ForEachTag()` function for safely iterating
over a series' tags within the context of a lock. This prevents
tags from being dereferenced during iteration, which can cause
a segfault.
2017-02-01 16:25:53 -07:00
Ben Johnson 047c21f4d9
Merge remote-tracking branch 'upstream/master' into tsi 2017-01-24 09:28:58 -07:00
Edd Robinson 292b30b82b Fix subtle bugs and remove dead code from tsdb 2017-01-17 09:47:34 -08:00
Joe LeGasse 2db0250b22 Add db/rp name validation
This change adds some very basic name validation with the following
plain-English description: names must be a non-zero-length sequence of printable
characters that do not contain slashes ('/' or '\') and are not equal to
either "." or "..".

The intent is that, since we currently just use database and retention
policy names directly as path elements, these rules will hopefully leave
us with names that should be at least close to valid directory names.

Ideally, we would restrict names even further or not use them as path
elements directly, but this should be a step towards the former without
restricting names "too much".
2017-01-12 17:38:10 -05:00
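
Those plain-English rules translate almost directly into Go. A sketch, with the function name being illustrative:

    package main

    import (
        "fmt"
        "unicode"
    )

    // validName reports whether name is a non-zero-length sequence of
    // printable characters, contains no '/' or '\', and is not "." or "..".
    func validName(name string) bool {
        if name == "" || name == "." || name == ".." {
            return false
        }
        for _, r := range name {
            if !unicode.IsPrint(r) || r == '/' || r == '\\' {
                return false
            }
        }
        return true
    }

    func main() {
        for _, n := range []string{"mydb", "..", "a/b", "ok name"} {
            fmt.Println(n, validName(n))
        }
    }
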
Joe LeGasse b19260fb26 Add some checks before removing directories
Fixes #7822

This change first ensures that databases and retention policies exist
before attempting to remove them from the Store. It also adds some
checks in the `DeleteDatabase` and `DeleteRetentionPolicy` to ensure
that maliciously named entries won't remove anything outside of the
configured data directory.
2017-01-12 17:38:10 -05:00
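
One common shape for such a check, sketched here under illustrative names rather than as the actual implementation: resolve the candidate path and confirm it still sits beneath the configured data directory before removing anything.

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // insideDataDir reports whether the path built from name stays within dataDir.
    func insideDataDir(dataDir, name string) bool {
        p := filepath.Clean(filepath.Join(dataDir, name))
        rel, err := filepath.Rel(dataDir, p)
        if err != nil {
            return false
        }
        return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
    }

    func main() {
        fmt.Println(insideDataDir("/var/lib/influxdb/data", "mydb"))      // true
        fmt.Println(insideDataDir("/var/lib/influxdb/data", "../../etc")) // false
    }
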
Mark Rushakoff a135906b43 Merge pull request #7747 from influxdata/mr-lint-cleanup
Miscellaneous lint cleanup
2017-01-10 08:22:00 -08:00
Jonathan A. Sternberg d7c8c7ca4f Support subquery execution in the query language
This adds query syntax support for subqueries and adds support to the
query engine to execute queries on subqueries.

Subqueries act as a source for another query. It is the equivalent of
writing the results of a query to a temporary database, executing
a query on that temporary database, and then deleting the database
(except this is all performed in-memory).

The syntax is like this:

    SELECT sum(derivative) FROM (SELECT derivative(mean(value)) FROM cpu GROUP BY *)

This will execute derivative and then sum the result of those derivatives.
Another example:

    SELECT max(min) FROM (SELECT min(value) FROM cpu GROUP BY host)

This would let you find the maximum minimum value of each host.

There is complete freedom to mix subqueries with auxiliary fields. The only
caveat is that the following two queries:

    SELECT mean(value) FROM cpu
    SELECT mean(value) FROM (SELECT value FROM cpu)

have different performance characteristics. The first will calculate
`mean(value)` at the shard level and will be faster, especially when it comes to
clustered setups. The second will process the mean at the top level and will not
include that optimization.
2017-01-07 13:00:48 -06:00
Ben Johnson d1f1e19591
Fixing rebase. 2017-01-06 09:31:25 -07:00
Ben Johnson f9efcb3365
Re-add shared in-memory index. 2017-01-05 10:17:09 -07:00
Edd Robinson 0f9b2bfe6a
Fix tests 2017-01-05 10:16:15 -07:00
Edd Robinson 4ccb8dbab1
Move series count check to shard 2017-01-05 10:16:13 -07:00
Ben Johnson 745b1973a8
tsi compaction 2017-01-05 10:15:37 -07:00
Ben Johnson 183418dcbd
Fix tsi TAG KEYS iterator. 2017-01-05 10:15:36 -07:00
Ben Johnson 9f8b206b51
Fix measurement system queries. 2017-01-05 10:15:34 -07:00
Ben Johnson 4aa78383d1
Fix tsi1 series deletion. 2017-01-05 10:14:48 -07:00
Ben Johnson cb93f10120
Remove per-shard in-memory index. 2017-01-05 10:11:09 -07:00
Ben Johnson 409b0165f5
shared in-memory index 2017-01-05 10:09:57 -07:00
Ben Johnson a812502ea3
reintegrating in-memory index 2017-01-05 10:07:35 -07:00
Ben Johnson 5f5b02e052
intermediate 2017-01-05 10:01:49 -07:00
Edd Robinson e2c3b52ca4
Adds a custom HyperLogLog++ implementation 2017-01-05 10:00:14 -07:00
Edd Robinson 33623c1fa9
Revert back to original approach 2017-01-05 09:58:39 -07:00
Edd Robinson 9ed6040265
Tidy up 2017-01-05 09:58:37 -07:00
Edd Robinson 2d9bd09784
Use []byte where possible in Index 2017-01-05 09:57:34 -07:00
Edd Robinson 3edbfb9197
Prevent panic when shard nil 2017-01-05 09:56:51 -07:00
Edd Robinson 4b1ef68dc9
Move series and measurement stats to store 2017-01-05 09:54:05 -07:00
Edd Robinson aaf85ae38d
Tombstoning with series cardinality part 1 2017-01-05 09:54:04 -07:00
Edd Robinson bd8dd9a291
Sketches working 2017-01-05 09:54:04 -07:00
Edd Robinson d19fbf5ab4
Wire in HLL estimator 2017-01-05 09:54:03 -07:00
Edd Robinson 05bc4dec00
Refactor 2017-01-05 09:50:23 -07:00
Edd Robinson c535e3899a
Remove in-memory index from Shard and Store 2017-01-05 09:47:09 -07:00
Edd Robinson 2171d9471b
Initialise index in shards 2017-01-05 09:42:48 -07:00
Mark Rushakoff 07b87f2630 Miscellaneous lint cleanup 2017-01-03 09:47:32 -08:00
Mark Rushakoff 4a774eb600 Update godoc for the tsdb package 2016-12-30 21:12:37 -08:00
Jonathan A. Sternberg ec57108520 Use proper uber-go/zap import path
It looks like the real import path to the project is go.uber.org/zap
instead of github.com/uber-go/zap since the example in the project
references that path.
2016-12-15 08:54:14 -06:00
Jonathan A. Sternberg 21502a39e8 Switch logging to use structured logging everywhere
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
2016-12-14 10:45:15 -06:00
Mark Rushakoff 5ae8cf8312 Speed up shutdown
On my machine with about 20 shards, it would take 10+ seconds to shut
down InfluxDB with SIGINT. After this change, it shuts down nearly
instantly.

(*tsdb.Store).Close was shutting down each of its shards sequentially.
Each shard's engine would signal to its compaction goroutines to quit,
and because each compaction goroutine has a hardcoded 1-second sleep in
between checks, waiting for the goroutines would often block for up to a
second.

This change closes all of the TSDB store's shards in parallel. This
means it's possible that multiple shard closures could error at once, but
we're still only returning the first error, consistent with previous
behavior. That being said, the return value of (*tsdb.Store).Close is
ignored in (*cmd/influxd/run.Server).Close anyway.
2016-10-10 09:18:47 -07:00
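
A Go sketch of the parallel close with first-error semantics (types illustrative):

    package main

    import (
        "fmt"
        "sync"
    )

    type shard struct{ id int }

    func (s *shard) Close() error { return nil }

    func closeAll(shards []*shard) error {
        var wg sync.WaitGroup
        var mu sync.Mutex
        var first error
        for _, sh := range shards {
            wg.Add(1)
            go func(sh *shard) {
                defer wg.Done()
                if err := sh.Close(); err != nil {
                    mu.Lock()
                    if first == nil {
                        first = err // keep only the first error, as before
                    }
                    mu.Unlock()
                }
            }(sh)
        }
        wg.Wait()
        return first
    }

    func main() {
        fmt.Println(closeAll([]*shard{{1}, {2}, {3}}))
    }
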
Joe LeGasse 743946fafb models: Add FieldIterator type
The FieldIterator is used to scan over the fields of a point, providing
information, and delaying parsing/decoding the value until it is needed.
This change uses this new type to avoid the allocation of a map for the
fields which is then thrown away as soon as the points get converted
into columns within the datastore.
2016-10-03 16:30:21 -06:00
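
The point is deferring decoding until a field is actually consumed. A heavily simplified sketch of the shape, not the real models.FieldIterator API:

    package main

    import (
        "bytes"
        "fmt"
        "strconv"
    )

    // fieldIterator walks "k=v" pairs without decoding values up front.
    type fieldIterator struct {
        pairs [][]byte
        i     int
    }

    func (it *fieldIterator) Next() bool { it.i++; return it.i <= len(it.pairs) }

    func (it *fieldIterator) Key() []byte {
        return bytes.SplitN(it.pairs[it.i-1], []byte{'='}, 2)[0]
    }

    // FloatValue parses only when asked, avoiding a throwaway map of
    // all decoded fields.
    func (it *fieldIterator) FloatValue() (float64, error) {
        v := bytes.SplitN(it.pairs[it.i-1], []byte{'='}, 2)[1]
        return strconv.ParseFloat(string(v), 64)
    }

    func main() {
        it := &fieldIterator{pairs: [][]byte{[]byte("usage=0.64"), []byte("idle=0.30")}}
        for it.Next() {
            f, _ := it.FloatValue()
            fmt.Printf("%s=%v\n", it.Key(), f)
        }
    }
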
Jason Wilder d06b28992d Unload index before closing shard
When deleting a shard, the shard is locked and then removed from the
index.  Removal from the index can be slow if there are a lot of
series.  During this time, the shard is still expected to exist by
the meta store and tsdb store so stats collections, queries and writes
could all be run on this shard while it's locked.  This can cause everything
to lock up until the unindexing completes and the shard can be unlocked.

Fixes #7226
2016-09-16 12:01:50 -06:00
Jonathan A. Sternberg dc2527ce86 Merge branch '1.0' 2016-08-31 14:45:57 -05:00
Jonathan A. Sternberg c05c7f6360 Revert "limit shard concurrency"
This reverts commit 6c7d56d4bc.
2016-08-29 12:39:52 -05:00
Ben Johnson 8aa224b22d
reduce memory allocations in index
This commit changes the index to point to index data in the shards
instead of keeping it in-memory on the heap.
2016-08-16 14:09:00 -06:00
Ben Johnson 55b3e63ced
concurrent series limit
This commit fixes the `MaxSelectSeriesN` limit which was broken by
the implementation of lazy iterators. The setting previously limited
the total number of series but the new implementation limits the
concurrent number of series being processed.
2016-08-09 08:58:01 -06:00
Ben Johnson 6c7d56d4bc
limit shard concurrency
This commit limits queries to only process one shard at a time.
However, within a shard, multiple series can still be processed in
parallel. Shard iterators are lazily instantiated during query
execution to limit the amount of memory a given query uses.
2016-08-05 09:45:57 -06:00
Jonathan A. Sternberg 86bd97f3b9 Switch SHOW MEASUREMENTS and SHOW TAG VALUES to directly access the tsdb.Store
The `SHOW MEASUREMENTS` and `SHOW TAG VALUES` queries cannot go through the
query engine to get the speed they need. They also only need access to
the database index and do not need access to specific shards. This
removes the query rewriting that was done to turn these two queries into
a select statement and reimplements them inside of the coordinator as an
interface on the TSDBStore.
2016-07-28 17:38:11 -05:00
Cory LaNou fd86670518 remove limiter from walkShards 2016-07-21 11:23:31 -05:00
Edd Robinson 83cc580ff8 Tidy up logging 2016-07-21 11:14:29 +01:00
Jason Wilder b692ef4f48 Rename throttle package to limiter 2016-07-18 12:00:58 -06:00
Jason Wilder c2370b437b Limit in-flight wal writes/encodings
A slower disk can cause excessive allocations to occur when
writing to the WAL because the slower encoding and compression occurs
before taking the write lock.  The encoding/compression grabs a large
byte slice from a pool and ultimately waits until it can acquire the
write lock.

This adds a throttle to limit how many inflight WAL writes can be queued
up to prevent OOMing the process with slower disks and heavy writes.
2016-07-17 23:53:12 -06:00
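
A sketch of the in-flight throttle, assuming a fixed-size token channel (illustrative; the limiter type in the repository may differ). The token is taken before the expensive encode/compress step, so queued writes cannot all hold large pooled buffers at once:

    package main

    import (
        "fmt"
        "sync"
    )

    var inflight = make(chan struct{}, 8) // cap on queued WAL writes

    func walWrite(id int, mu *sync.Mutex) {
        inflight <- struct{}{}        // acquire before encoding, so we don't
        defer func() { <-inflight }() // hold big pooled buffers while blocked
        encoded := fmt.Sprintf("entry-%d", id) // stand-in for encode+compress
        mu.Lock()
        defer mu.Unlock()
        _ = encoded // ... append to WAL segment ...
    }

    func main() {
        var mu sync.Mutex
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(i int) { defer wg.Done(); walWrite(i, &mu) }(i)
        }
        wg.Wait()
    }
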
Jason Wilder 21dbe7e854 Simplify throttle type 2016-07-15 12:14:25 -06:00
Jason Wilder d1556e3964 Fix missing read locks before filtering 2016-07-15 10:08:26 -06:00
Jason Wilder ff5d61d024 Speed up delete series
Reduce lock contention and process shards concurrently.
2016-07-14 17:31:34 -06:00
Jason Wilder 8f3ec3be43 Inline deleteShard
Only used by one caller now
2016-07-14 17:31:34 -06:00
Jason Wilder 78201e19d0 Refactor DeleteDatabase to use filter/walk funcs 2016-07-14 17:31:34 -06:00
Jason Wilder e0122efcf8 Speed up drop retention policy
Reduce the lock contention on tsdb.Store by taking a short lived
read-lock instead of a long write lock.  Also close shards in parallel
and drop the whole RP dir in bulk instead of each shard dir.
2016-07-14 17:31:34 -06:00
Jason Wilder 6d3d2f6fe9 Speed up drop measurement
Reduces the lock contention on the tsdb.Store by taking a short
read lock instead of a long write lock.  Also processes shards
in parallel instead of serially.
2016-07-14 17:31:29 -06:00
Jonathan A. Sternberg 837a9804cf Refactoring the monitor service to avoid expvar
Truncate the time interval output of the monitor service to be on even
time intervals rather than on every minute based on the start time. This
normalizes the output from the monitor service.
2016-07-07 11:13:58 -05:00
kun 77ed719bc1 delete redundant code in NewStore function 2016-06-24 17:14:00 +08:00
Jonathan A. Sternberg 497db2a6d3 Removing dead code from every package except influxql
The tsdb package had a substantial amount of dead code related to the
old query engine still in there. It is no longer used, so it was removed
since it was left unmaintained. There is likely still more dead code
that wasn't found as part of this cleanup.

influxql has dead code show up because of the code generation so it is
not included in this pruning.
2016-06-20 22:41:07 -05:00
Ben Johnson 7d4bea7153
add node id to execution options
This commit changes the `ExecutionOptions` and `SelectOptions` to
allow a `NodeID` for specifying an exact node to query against.
2016-06-10 09:20:44 -06:00
Jason Wilder a74ea4cbf4 Allow creating shards in a disable state
For restoring a shard, we need to be able to have the shard open,
but disabled.  It was racy to open it and then disable it separately
since writes/queries could occur in between that time.
2016-06-01 16:17:18 -06:00
Jason Wilder 1ff8ecf4fb Add ability to disable shards
Disabling a shard causes all writes and queries to a shard to return
an error.  This also disables compactions for the shard.
2016-05-31 10:51:54 -06:00
Jason Wilder 209dd005c5 Merge pull request #6627 from influxdata/jw-deadlock
Fix possible deadlock when queries and delete series run concurrently
2016-05-18 15:30:37 -06:00
Joe LeGasse af432e7d12 Fix loop variable reuse in database close
Fixes #6650
2016-05-17 11:25:39 -04:00
Jason Wilder 57d4becaec Fix possible deadlock when queries and delete series run concurrently
This lock showed up in a deadlock on systems running queries and
deleting series across a large dataset.  Queries should not need to
lock the tsdb.Store for writes.
2016-05-13 17:04:12 -06:00
Jason Wilder 5b6f3afefa Limit concurrent shards loading to number of cores available 2016-05-13 15:41:32 -06:00
Jason Wilder 9e54adc719 Speed up drop database
Drop database was closing and deleting each shard dir individually and
serially.  It would then delete the empty database dirs.

This changes drop database to close all shards in parallel and run
one os.RemoveAll to remove everything under the db dir which is more
efficient.

This also reworked the locking to avoid locking the tsdb.Store for
long periods of time.  That can cause queries and writes for other
databases to block as well.
2016-05-13 10:26:28 -06:00
Cory LaNou f415cf89ad wip 2016-05-10 11:01:03 -05:00
Cory LaNou a3bf3e2ef1 added baseline backup/restore plumbing 2016-05-10 08:14:51 -05:00
Jason Wilder 61e0d8ff93 Fix log prefix formatting 2016-05-02 11:36:04 -06:00
Jason Wilder abcb559b09 Remove index meta data when series and measurements are gone
This removes the dropMeta param from the tsdb.Store.DeleteSeries and
lets the shard determine when to remove the meta data from the index
based on what series still have data in the shard.

This uncovered a nasty bug in compactions where a fully deleted series would
prematurely end the compactions and not carry forward the rest of the data
in the TSM file.  This is now fixed as well.
2016-04-29 16:31:57 -06:00
Ben Johnson f7af787aef
add DELETE query support
This commit adds query language support for deleting series with a
`DELETE` query.
2016-04-27 15:16:23 -06:00
Stephen Gutekanst 9dc09c5257 Make logging output location more programmatically configurable (#6213)
This has various benefits:

- Users embedding InfluxDB within other Go programs can specify a different logger / prefix easily.
- More consistent with code used elsewhere in InfluxDB (e.g. services, other `run.Server.*` fields, etc).
- This is also more efficient, because it means `executeQuery` no longer allocates a single `*log.Logger` each time it is called.
2016-04-20 21:07:08 +01:00
joelegasse 84f8dd7c85 Merge pull request #6190 from influxdata/jw-race
Fix race on measurementFields
2016-04-06 08:13:58 -04:00
Jonathan A. Sternberg 37b63cedec Cleanup QueryExecutor and split statement execution code
The QueryExecutor had a lot of dead code made obsolete by the query
engine refactor that has now been removed. The TSDBStore interface has
also been cleaned up so we can have multiple implementations of this
(such as a local and remote version).

A StatementExecutor interface has been created for adding custom
functionality to the QueryExecutor that may not be available in the open
source version. The QueryExecutor delegates all statement execution to
the StatementExecutor and the QueryExecutor will only keep track of
housekeeping. Implementing additional queries is as simple as wrapping
the cluster.StatementExecutor struct or replacing it with something
completely different.

The PointsWriter in the QueryExecutor has been changed to a simple
interface that implements the one method needed by the query executor.
This is to allow different PointsWriter implementations to be used by
the QueryExecutor. It has also been moved into the StatementExecutor
instead.

The TSDBStore interface has now been modified to contain the code for
creating an IteratorCreator. This is so the underlying TSDBStore can
implement different ways of accessing the underlying shards rather than
always having to access each shard individually (such as batch
requests).

Remove the show servers handling. This isn't a valid command in the open
source version of InfluxDB anymore.

The QueryManager interface is now built into QueryExecutor and is no
longer necessary. The StatementExecutor and QueryExecutor split allows
task management to much more easily be built into QueryExecutor rather
than as a separate struct.
2016-04-04 13:27:17 -04:00
Jason Wilder 3f4c5a5585 Fix race on measurementFields
Both Shard and Engine had the same reference to the measurementField map,
but they each protected it with their own locks.  This causes a race when
writes and queries are occurring because writes can add new fields to the
map while queries are reading from it.

The fix moves the ownership to the Engine and provides protected accessors
that the Shard now uses.  For the most part, the accesses on the Shard were
old dead code.

Fixing the measurementFields map race created a new race on the internal
fields map.  This is now unexported and protected via MeasurementFields
exported funcs.

Fixes #6188
2016-04-01 18:57:01 -06:00
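
A minimal sketch of the single-owner fix under illustrative names: the engine-side type owns the map behind one mutex, and both readers and writers go through accessors.

    package main

    import (
        "fmt"
        "sync"
    )

    // measurementFields owns the fields map; access only via methods.
    type measurementFields struct {
        mu     sync.RWMutex
        fields map[string]string // field name -> type
    }

    func (m *measurementFields) CreateFieldIfNotExists(name, typ string) {
        m.mu.Lock()
        defer m.mu.Unlock()
        if _, ok := m.fields[name]; !ok {
            m.fields[name] = typ
        }
    }

    func (m *measurementFields) Field(name string) string {
        m.mu.RLock()
        defer m.mu.RUnlock()
        return m.fields[name]
    }

    func main() {
        mf := &measurementFields{fields: map[string]string{}}
        go mf.CreateFieldIfNotExists("value", "float") // concurrent writer
        fmt.Println(mf.Field("value"))                 // reader, no data race
    }
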
Jason Wilder 96e076b6df Limit how many shards are loaded concurrently
Since loading a shard can allocate a lot of memory, running them all
at once could OOM the process.  This limits the number of shards
loaded to 4.  This will be changed to a config option provided the
approach helps.
2016-03-29 12:59:26 -06:00
Jason Wilder 03ced4cc90 Load shards concurrently 2016-03-29 12:58:52 -06:00
Ben Johnson beda072426 add support for remote expansion of regex
This commit moves the `tsdb.Store.ExpandSources()` function onto
the `influxql.IteratorCreator` and provides support for issuing
source expansion across a cluster.
2016-03-11 12:40:07 -07:00
Jason Wilder c44195d999 Convert measurementToRegex to exported func
Make it consistent with other conventions where exported funcs
take a lock.
2016-03-09 17:45:37 -07:00
Jason Wilder 992c78ee22 Remove period shard maintenance goroutine
This is no longer used in tsm and just periodically locks everything
for no reason now.
2016-03-09 17:31:02 -07:00
Jason Wilder ae2360df7c Use read lock to expand sources
A write-lock was taken which locks the whole store during a query
that needs to expand sources.  Under load, writes can start to fail.
2016-03-09 17:22:57 -07:00
Edd Robinson 7dbc0f49d3 Merge pull request #5818 from influxdata/er-upgrade-error
Highlight upgrade info for old shards
2016-03-09 19:39:59 +00:00
Ben Johnson 41dde61226 SHOW SERIES 2016-03-08 11:47:57 -07:00
Mark Rushakoff cdcb079769 Tag TSM stats with database, retention policy
... by extracting the db/rp from the given path.

Now that the code has "standardized" on extracting db/rp this way, the
ShardLocation struct is no longer necessary and thus has been removed.
We're back on the previous style of passing the path and walPath to
NewShard.
2016-02-29 09:17:34 -08:00
Mark Rushakoff 40a98e0d55 Add database, RP as tags on shard stats
This commit updates tsdb.Shard to contain a ShardConfig and updates
tsdb.Store to directly reference a map of tsdb.Shard rather than the
previous tsdb.shardLocation abstraction.
2016-02-25 13:41:55 -08:00
Mark Rushakoff e7bb855ab2 Merge pull request #5816 from influxdata/mr-database-stats
Track stats for number of series, measurements
2016-02-25 08:13:04 -08:00
Ben Johnson 0dda9f6608 add remote execution
This commit adds remote execution to the query engine.
2016-02-25 08:41:20 -07:00
Mark Rushakoff fb83374389 Track stats for number of series, measurements
Per database: track number of series and measurements
Per measurement: track number of series
2016-02-24 08:10:16 -08:00
Edd Robinson 16995b6c23 Add ShardError to provide context about shard that errored 2016-02-24 13:33:07 +00:00
Edd Robinson 99a7341701 Wire up DROP retention policy to TSDB store.
Fixes #5653 and #5394.

Previously dropping retention policies did not propagate to local TSDB
shards. Instead, the retention policies would just be removed from the
Meta Store.

This PR ensures that data associated with retention policies is
removed when the retention policy is dropped.

Also, it cleans up a couple of other methods in `tsdb`, including the
requirement to provide (redundant) shardIDs when deleting databases.
2016-02-19 11:15:00 +00:00
Ben Johnson e3b4b71c13 refactor query executor
This commit moves the `QueryExecutor` to the `cluster` package
and provides an interface to it inside the `influxql` package.
2016-02-17 15:13:56 -07:00
Mark Rushakoff fc9ab7a46f Miscellaneous cleanup in tsdb package
* When possible, initialize maps/slices to exact length/capacity
  * See slice benchmarks at
    https://gist.github.com/mark-rushakoff/b5650bd8f06bece0b9fd
* Fixed some typos
* Removed an unnecessary loop in stringset.intersect
2016-02-10 18:00:47 -08:00
Justin Nuß 82c276756a Lint tsdb and tsdb/engine package 2016-02-10 21:33:46 +01:00
Ben Johnson d9a6a7340f add canonical paths 2016-02-10 11:30:52 -07:00
Ben Johnson 5a0d1ab7c1 rename influxdb/influxdb to influxdata/influxdb
This commit changes all the import and URL references from:

    github.com/influxdb/influxdb

to:

    github.com/influxdata/influxdb
2016-02-10 10:26:18 -07:00
Ben Johnson 32be4c250f fix non-existent shard handling
This commit removes `nil` shards returned from `tsdb.Store.Shards()`
which caused panics in some SELECTs. This can occur if the meta
store has created shards before the store or if the shards are
distributed throughout a cluster.

Fixes #5555
2016-02-10 09:40:29 -07:00
Ben Johnson 627cd9d486 add dedupe iterator 2016-02-10 09:40:29 -07:00
Ben Johnson 2bdc9404ef revert meta execution 2016-02-10 09:40:28 -07:00
Ben Johnson cde973f409 refactor query engine 2016-02-10 09:40:24 -07:00
Paul Dix 9cede5fb71 Address PR comments 2015-12-30 18:06:51 -05:00
Paul Dix 59fbd371fc Implement backup/restore for TSM.
This changes backup and restore to work for TSM. It breaks it for b1 and bz1, but since those are getting removed it's ok.

The backup runs against any host that is specified and can back up either the metastore, a database, a specific retention policy, or a specific shard. It can also take incremental backups with the `since` flag, which will only back up TSM files that have been created since that timestamp.

The backup is safe to run online. However, shards that are still hot for writes won't be able to create new TSM files while the backup for that single shard runs. If the backup isn't too large and the write throughput isn't too high, this shouldn't be a problem since the writes will just go into the WAL cache.
2015-12-30 18:06:50 -05:00
David Norton 3014fb90e4 fix #4303: don't drop from multiple databases 2015-12-12 13:54:23 -05:00