Unify logic around compaction execution to a single place.
Also report on the error stats that we track. Previously they were not
emitted in the stats output.
If a delete took a long time to process while writes to the
shard were occurring, the cache could fill up and writes would be
rejected. This occurred because we disabled all compactions while
writing the tombstone file, to prevent deleted data from re-appearing
after a compaction completed.
Instead, we only disable the level compactions and allow snapshot
compactions to continue. Snapshots already handle deleted data
with the cache and wal.
Fixes #7161
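A minimal sketch of the idea, using hypothetical names (the real engine
code differs):

package main

import "sync"

// Engine is a hypothetical stand-in for the storage engine.
type Engine struct {
	mu                     sync.Mutex
	levelCompactionsPaused bool
}

// Delete pauses only the level compactions while the tombstone file is
// written; snapshot compactions (cache -> TSM) keep running, so the cache
// keeps draining and writes are not rejected during a long delete.
func (e *Engine) Delete(writeTombstone func() error) error {
	e.mu.Lock()
	e.levelCompactionsPaused = true
	e.mu.Unlock()

	defer func() {
		e.mu.Lock()
		e.levelCompactionsPaused = false // resume once the tombstone is durable
		e.mu.Unlock()
	}()

	return writeTombstone()
}

func main() {
	var e Engine
	_ = e.Delete(func() error { return nil })
}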
If you pipe in a file to the `influx` CLI, it will not try to open the
interactive line reader, but instead just send the contents of the
entire file to the server.
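For example (with `queries.txt` standing in for any file of statements),
`influx < queries.txt` sends the file's contents to the server and exits
without opening the prompt.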
Instead of assigning a boolean value of true to the filter expressions
when there was no meaningful expression, this drops a boolean expression
of true from the filter expressions so we don't have to perform a map
assignment. This allows us to reduce allocations and assignments when a
`WHERE` clause only contains tag comparisons and no field comparisons.
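A minimal sketch of the idea with stand-in types (the real expression
types live in the influxql package):

package main

import "fmt"

// Expr is a hypothetical stand-in for a parsed filter expression.
type Expr interface{ String() string }

type BoolLit struct{ Val bool }

func (b BoolLit) String() string { return fmt.Sprint(b.Val) }

// addFilter records expr for a series only when it carries information.
// A literal `true` matches everything, so storing it would just cost a
// map assignment for no benefit.
func addFilter(filters map[string]Expr, series string, expr Expr) {
	if b, ok := expr.(BoolLit); ok && b.Val {
		return // drop the vacuous `true` filter
	}
	filters[series] = expr
}

func main() {
	filters := map[string]Expr{}
	addFilter(filters, "cpu,host=a", BoolLit{Val: true}) // skipped
	fmt.Println(len(filters))                            // 0
}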
The functionality works the same as wildcards, but this time, you can
specify a regular expression.
One limitation is that you can't specify whether you only want to select
fields or tags. Since the regex can be changed to suit the person's
needs, I don't currently think this is an issue.
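A hypothetical example, in the same query style used elsewhere in these
notes:

> SELECT /us/ FROM cpu

This returns every field and tag whose name matches the regular
expression, just as a wildcard returns all of them.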
Strings would always return an empty string and stddev is meaningless
when it comes to strings. This removes that functionality so strings
don't automatically get picked up when using a wildcard.
This changes the behavior of the max-series-per-database and
max-values-per-tag limits to drop points that would exceed the limits
and allow the remaining points to be written. Previously, the whole
batch would fail and return a 500 error to the client.
This now writes the allowed points and returns a `partial write`
error indicating that some of the points were dropped, how many were
dropped, and one of the problem measurements and tags.
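A minimal sketch of the drop-and-continue behavior, with hypothetical
names (the real limit enforcement is more involved):

package main

import "fmt"

type Point struct {
	Series string
	Value  float64
}

type PartialWriteError struct {
	Dropped int
	Reason  string
}

func (e PartialWriteError) Error() string {
	return fmt.Sprintf("partial write: %s dropped=%d", e.Reason, e.Dropped)
}

// writeWithLimit admits points while the series count stays under max and
// drops the rest, instead of rejecting the whole batch.
func writeWithLimit(existing map[string]bool, points []Point, max int) ([]Point, error) {
	var ok []Point
	dropped := 0
	for _, p := range points {
		if !existing[p.Series] && len(existing) >= max {
			dropped++ // would exceed max-series-per-database
			continue
		}
		existing[p.Series] = true
		ok = append(ok, p)
	}
	if dropped > 0 {
		return ok, PartialWriteError{Dropped: dropped, Reason: "max-series-per-database exceeded"}
	}
	return ok, nil
}

func main() {
	existing := map[string]bool{"cpu,host=a": true}
	pts := []Point{{"cpu,host=a", 1}, {"cpu,host=b", 2}}
	ok, err := writeWithLimit(existing, pts, 1)
	fmt.Println(len(ok), err)
}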
On my machine with about 20 shards, it would take 10+ seconds to shut
down InfluxDB with SIGINT. After this change, it shuts down nearly
instantly.
(*tsdb.Store).Close was shutting down each of its shards sequentially.
Each shard's engine would signal to its compaction goroutines to quit,
and because each compaction goroutine has a hardcoded 1-second sleep in
between checks, waiting for the goroutines would often block for up to a
second.
This change closes all of the TSDB store's shards in parallel. This
means it's possible for multiple Close calls to fail at once, but
we're still only returning the first error, consistent with previous
behavior. That being said, the return value of (*tsdb.Store).Close is
ignored in (*cmd/influxd/run.Server).Close anyway.
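A minimal sketch of the parallel close with first-error semantics
(hypothetical shard type; the real Close does much more):

package main

import (
	"fmt"
	"sync"
)

type shard struct{ id int }

func (s *shard) Close() error { return nil } // stand-in for engine shutdown

func closeAll(shards []*shard) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for _, sh := range shards {
		wg.Add(1)
		go func(sh *shard) {
			defer wg.Done()
			if err := sh.Close(); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err // keep only the first failure
				}
				mu.Unlock()
			}
		}(sh)
	}
	wg.Wait() // the per-shard ~1s compaction waits now overlap
	return firstErr
}

func main() {
	fmt.Println(closeAll([]*shard{{1}, {2}, {3}}))
}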
The `cumulative_sum()` function can be used to sum each new point and
output the current total. For the following points:
cpu value=2 0
cpu value=4 10
cpu value=6 20
This would output the following points:
> SELECT cumulative_sum(value) FROM cpu
time value
---- -----
0 2
10 6
20 12
As can be seen, each new point adds its value to the running total and
the total is output with the same timestamp.
The function can also be used with an aggregate like `mean()`:
> SELECT cumulative_sum(mean(value)) FROM cpu WHERE time >= now() - 10m GROUP BY time(1m)
First pass at implementing sample
Add sample iterators for all types
Remove size from sample struct
Fix off-by-one error when generating random number
Add benchmarks for sample iterator
Add test and associated fixes for off-by-one error
Add test for sample function
Remove NumericLiteral from sample function call
Make clear that the counter is incremented with each call
Rename IsRandom to AllSamplesSeen
Add a rng for each reducer that is created
The default rng that comes with math/rand has a global lock. To avoid
any contention on that lock, each reducer now has its own time-seeded
rng (see the sketch after this list).
Add sample function to changelog
Clean up template for fill average
Change fill(average) to fill(linear)
Update average to linear in influxql spec
Add Integer Tests and associated fixes
Update CHANGELOG for fill(linear)
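A minimal sketch of the per-reducer rng and the incrementing counter
mentioned above, with hypothetical names (reservoir selection of a single
value):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

type sampleReducer struct {
	rng  *rand.Rand // private source: no contention on math/rand's global lock
	seen int
}

func newSampleReducer() *sampleReducer {
	return &sampleReducer{rng: rand.New(rand.NewSource(time.Now().UnixNano()))}
}

// offer keeps the n-th value with probability 1/n; the counter is
// incremented with each call.
func (r *sampleReducer) offer(v, current float64) float64 {
	r.seen++
	if r.rng.Intn(r.seen) == 0 {
		return v
	}
	return current
}

func main() {
	r := newSampleReducer()
	choice := 0.0
	for _, v := range []float64{1, 2, 3, 4} {
		choice = r.offer(v, choice)
	}
	fmt.Println(choice)
}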
The subscriber write goroutine would drop points if the write load
was higher than it could process. This could happen with just a few
writers to the server.
Instead, process the channel with multiple writers to avoid dropping
writes so easily. This also adds some config options to control how
large the channel buffer is as well as how many goroutines are started.
Fixes #7330
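A minimal sketch of the fan-out, with the buffer size and writer count
standing in for the new config options (names are illustrative):

package main

import (
	"fmt"
	"sync"
)

func main() {
	const (
		bufferSize = 1000 // illustrative: channel buffer config option
		numWriters = 4    // illustrative: writer goroutine config option
	)

	points := make(chan string, bufferSize)
	var wg sync.WaitGroup

	for i := 0; i < numWriters; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range points {
				_ = p // stand-in for the actual subscriber write
			}
		}()
	}

	for i := 0; i < 10; i++ {
		points <- fmt.Sprintf("point-%d", i)
	}
	close(points)
	wg.Wait()
}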
Integer blocks that were run length encoded could produce the wrong
value when read back out because the deltas were not zig zag decoded
before scaling the final value. If the deltas were negative, as would
be seen in a counter that decrements by a constant value, the results
would be random with some negative and positive values.
Fixes #7391
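A minimal sketch of zig-zag decoding, the step that was being skipped
before scaling for run-length encoded blocks:

package main

import "fmt"

// zigZagDecode maps 0, 1, 2, 3, ... back to 0, -1, 1, -2, ...
func zigZagDecode(v uint64) int64 {
	return int64(v>>1) ^ -int64(v&1)
}

func main() {
	// A counter decrementing by 10 stores the delta -10 as zig-zag 19;
	// scaling 19 directly instead of -10 corrupts every decoded value.
	fmt.Println(zigZagDecode(19)) // -10
}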
The TagSets function was creating a lot of intermediate maps and
slices to calculate the sorted tag sets. It first created a map
to group tag sets with their series, then created an equally
sized slice of the tag keys and sorted them. Finally, it created
a new slice and added the tag sets from the original map in the
order of the sorted keys. It was also recreating the tags map
multiple times, creating extra garbage in the loop.
This simplifies the code to create one map for grouping and then add
the distinct sets to a slice, which is then sorted. It also fixes the
multiple tag maps getting created.
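A minimal sketch of the single-map approach (hypothetical types; the real
function works on the shard index):

package main

import (
	"fmt"
	"sort"
)

type tagSet struct {
	Key    string
	Series []string
}

// tagSets groups series by tag-set key with one map, collects the
// distinct sets as they are first seen, and sorts once at the end.
func tagSets(seriesByTags map[string]string) []*tagSet {
	groups := make(map[string]*tagSet)
	var sets []*tagSet
	for series, key := range seriesByTags {
		ts, ok := groups[key]
		if !ok {
			ts = &tagSet{Key: key}
			groups[key] = ts
			sets = append(sets, ts)
		}
		ts.Series = append(ts.Series, series)
	}
	sort.Slice(sets, func(i, j int) bool { return sets[i].Key < sets[j].Key })
	return sets
}

func main() {
	for _, ts := range tagSets(map[string]string{
		"cpu,host=b": "host=b",
		"cpu,host=a": "host=a",
	}) {
		fmt.Println(ts.Key, ts.Series)
	}
}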
Manual use of system queries could result in a user using the query
incorrectly. Rather than check that the query was used correctly,
we're just going to prevent users from using those sources so they
can't use them incorrectly.
When deleting a shard, the shard is locked and then removed from the
index. Removal from the index can be slow if there are a lot of
series. During this time, the shard is still expected to exist by
the meta store and tsdb store, so stats collection, queries, and
writes could all be run on this shard while it's locked. This can
cause everything to lock up until the unindexing completes and the
shard can be unlocked.
Fixes #7226
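A minimal sketch of moving the slow work outside the lock (hypothetical
types):

package main

import "sync"

type shard struct{ id uint64 }

type store struct {
	mu     sync.RWMutex
	shards map[uint64]*shard
}

// DeleteShard detaches the shard under the lock (fast), then performs the
// slow series unindexing after releasing it, so stats collection, queries,
// and writes are not blocked in the meantime.
func (s *store) DeleteShard(id uint64) {
	s.mu.Lock()
	sh := s.shards[id]
	delete(s.shards, id)
	s.mu.Unlock()

	if sh != nil {
		unindexSeries(sh) // expensive when the shard has many series
	}
}

func unindexSeries(*shard) {} // stand-in for removal from the index

func main() {
	s := &store{shards: map[uint64]*shard{1: {id: 1}}}
	s.DeleteShard(1)
}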
Update the package to compress the man pages fully and to stop
storing the filename and timestamp in the compressed man page.
Lintian complains that the packages aren't compressed using the best
compression method.
https://lintian.debian.org/tags/manpage-not-compressed.html
The derivative() call would panic if it received two points at the same
time because it tried to divide by zero. The derivative call now skips
past these points. To avoid these points being skipped, use `GROUP BY *`
so that each series is kept separate.
The difference() call has also been modified to skip past these points.
Even though difference doesn't divide by the elapsed time, it is
supposed to behave the same as derivative, just without that
division.
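A minimal sketch of the skip (simplified point type and interval
handling; illustrative only):

package main

import "fmt"

type point struct {
	Time  int64 // nanoseconds
	Value float64
}

// derivative emits the per-interval rate of change between consecutive
// points, skipping pairs that share a timestamp so it never divides by a
// zero elapsed time.
func derivative(points []point, interval int64) []point {
	var out []point
	for i := 1; i < len(points); i++ {
		elapsed := points[i].Time - points[i-1].Time
		if elapsed == 0 {
			continue // same timestamp: skip instead of dividing by zero
		}
		rate := (points[i].Value - points[i-1].Value) /
			(float64(elapsed) / float64(interval))
		out = append(out, point{Time: points[i].Time, Value: rate})
	}
	return out
}

func main() {
	pts := []point{{0, 2}, {0, 3}, {10, 6}} // two points share time 0
	fmt.Println(derivative(pts, 10))        // [{10 3}]
}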
Return an error when we encounter the same option twice in ALTER
RETENTION POLICY and remove the `maxNumOptions` number from the parsing
loop. The `maxNumOptions` number would need to be modified whenever
another option was added to the parsing loop, and it didn't correctly
report duplicate options as an error anyway.
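A minimal sketch of the duplicate check that replaces the fixed loop
bound (token handling simplified):

package main

import "fmt"

func parseOptions(tokens []string) error {
	seen := make(map[string]bool)
	for _, tok := range tokens {
		if seen[tok] {
			return fmt.Errorf("found duplicate %s option", tok)
		}
		seen[tok] = true
	}
	return nil
}

func main() {
	fmt.Println(parseOptions([]string{"DURATION", "REPLICATION", "DURATION"}))
}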
Normalize all of the SHOW commands so they allow both using ON to
specify the database and using the default database. Some commands would
require one and some would require the other, which was confusing
when using the query language.
Affected commands:
* SHOW RETENTION POLICIES
* SHOW MEASUREMENTS
* SHOW SERIES
* SHOW TAG KEYS
* SHOW TAG VALUES
* SHOW FIELD KEYS
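A hypothetical pair of equivalent invocations, in the style of the
queries above (mydb is a placeholder database name):

> SHOW MEASUREMENTS ON mydb

or, with mydb already selected as the default database:

> SHOW MEASUREMENTS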