When the `max-row-limit` was hit, the goroutine reading from the results
channel would stop reading, but it never signaled to the sender that it
was no longer consuming results. The sender would keep trying to send
results that nobody would ever read, creating a deadlock.
Include an `AbortCh` on the `ExecutionContext` that signals when results
are no longer desired, so the sender can abort instead of deadlocking.
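A minimal sketch of the pattern, with the surrounding types reduced to stand-ins (only the `AbortCh` field on `ExecutionContext` comes from the change itself; everything else here is illustrative):

```go
package query

import "errors"

// ErrQueryAborted: hypothetical sentinel for an abandoned reader.
var ErrQueryAborted = errors.New("query aborted")

// Result stands in for the real result type.
type Result struct{}

// ExecutionContext sketch: Results carries query output and AbortCh
// is closed once the reader stops consuming.
type ExecutionContext struct {
	Results chan<- *Result
	AbortCh <-chan struct{}
}

// Send blocks until the result is consumed or the reader aborts,
// instead of blocking forever on an abandoned channel.
func (ctx *ExecutionContext) Send(r *Result) error {
	select {
	case ctx.Results <- r:
		return nil
	case <-ctx.AbortCh:
		return ErrQueryAborted
	}
}
```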
There are two new keys in the configuration file.
- `security-level`: "none", "sign", or "encrypt".
- `auth-file`: the location of the user/password file.
Please see the collectd network doc for more details.
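A sketch of how the new keys might sit in the `[[collectd]]` section of the config (placement and values here are illustrative, not taken from the change):

```toml
[[collectd]]
  enabled = true
  bind-address = ":25826"
  # New keys; values are examples.
  security-level = "encrypt"            # "none", "sign", or "encrypt"
  auth-file = "/etc/collectd/auth_file"
```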
When a query used a grouping with two different aggregates, it was
possible for one aggregate to return a value from a different series
key than the other. When the series keys didn't match, the returned
grouping was wrong because values were sorted by time before being
checked for name and tags.
This did not happen when the aggregates returned values for the same
series keys, because then the iterators were aligned with each other.
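For illustration, a query shape that could hit this (measurement, fields, and tag are made up):

```sql
SELECT mean(usage_user), max(usage_system) FROM cpu GROUP BY host
```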
The parser was updated previously in #7295 and the functionality was
supposed to be there, but the wiring in the query engine to make it
happen was never written.
The previous version showed the microseconds unit while actually
outputting nanoseconds. Now we identify which sub-second unit to use
(milliseconds, microseconds, or nanoseconds) and divide the duration by
the matching factor so the value and the unit agree.
Also updated to use the default duration string instead of our own
custom formatters. It turns out that the `String` method of
`time.Duration` does the correct thing as long as we truncate the value
first.
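A quick demonstration of the behavior using only the standard library (whether the commit truncates via `Duration.Truncate` or by hand is not shown here, so treat the second half as illustrative):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// String already picks the right sub-second unit.
	fmt.Println(1500 * time.Nanosecond) // 1.5µs
	fmt.Println(1234 * time.Nanosecond) // 1.234µs

	// Truncating first keeps the printed value tidy.
	d := 1234567890 * time.Nanosecond
	fmt.Println(d)                            // 1.23456789s
	fmt.Println(d.Truncate(time.Millisecond)) // 1.234s
}
```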
When a limit is exceeded, we return errors and sometimes log (if appropriate)
that a limit was exceeded, but the messages don't always indicate where or
how the limit is configured.
Instead, return the config option (which is easily searchable), the limit
currently set, and, when possible, the value that exceeded it.
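A hypothetical example of the convention (the check and function are made up; `max-select-point` is one of the real config option names):

```go
package limits

import "fmt"

// checkPointLimit shows the message shape: it names the config option
// (searchable), the configured limit, and the value that exceeded it.
func checkPointLimit(points, limit int) error {
	if limit > 0 && points > limit {
		return fmt.Errorf("max-select-point limit exceeded: (%d/%d)", points, limit)
	}
	return nil
}
```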
If concurrent writes to the same shard occur, it's possible for different types to
be added to the cache for the same series. The `measurementFields` map on the shard
would normally prevent this, but the way it is updated is racy in this scenario.
When this occurs, the snapshot compaction panics because it can't encode different types
in the same series.
To prevent this, we have the cache return an error when a value of a different type is
added to existing values in the cache.
Fixes #7498
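A toy sketch of the check, with the cache reduced to a stand-in type (the real entry and value types in tsdb/engine/tsm1 differ):

```go
package main

import (
	"fmt"
	"reflect"
)

// entry is a simplified stand-in for the cache's per-series values.
type entry struct {
	values []interface{}
}

// add rejects a value whose concrete type differs from what the
// entry already holds, mirroring the fix described above.
func (e *entry) add(v interface{}) error {
	if len(e.values) > 0 && reflect.TypeOf(e.values[0]) != reflect.TypeOf(v) {
		return fmt.Errorf("field type conflict: %T != %T", v, e.values[0])
	}
	e.values = append(e.values, v)
	return nil
}

func main() {
	var e entry
	fmt.Println(e.add(int64(1))) // <nil>
	fmt.Println(e.add(2.5))      // field type conflict: float64 != int64
}
```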
The file store stats slice is re-used, which causes the race below:

```
WARNING: DATA RACE
Write at 0x00c42007e140 by goroutine 43:
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*FileStore).Stats()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/file_store.go:511 +0x22e
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).findGenerations()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/compact.go:461 +0x6f
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).PlanLevel()

Previous read at 0x00c42007e140 by goroutine 40:
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).findGenerations()
      /Users/jason/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/compact.go:463 +0x13d
  github.com/influxdata/influxdb/tsdb/engine/tsm1.(*DefaultPlanner).PlanOptimize()
```
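One plausible shape of the fix is to hand callers a copy instead of the shared slice; a sketch with stand-in types (the field name and locking details are assumptions):

```go
package tsm1

import "sync"

// Minimal stand-ins; the real types live in tsdb/engine/tsm1.
type FileStat struct{ Path string }

type FileStore struct {
	mu            sync.RWMutex
	lastFileStats []FileStat // the re-used slice from the trace above
}

// Stats returns a copy so concurrent planners (PlanLevel,
// PlanOptimize) can no longer race on the shared backing array.
func (f *FileStore) Stats() []FileStat {
	f.mu.RLock()
	defer f.mu.RUnlock()

	stats := make([]FileStat, len(f.lastFileStats))
	copy(stats, f.lastFileStats)
	return stats
}
```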