If you pipe a file into the `influx` CLI, it will not try to open the
interactive line reader; instead, it sends the contents of the entire
file to the server.
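For example, assuming a file of queries named queries.txt (the file name
is only an illustration):

influx < queries.txt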
Instead of assigning a boolean value of `true` to the filter expressions
when there is no meaningful expression, this drops the `true` boolean
expression from the filter expressions so we don't have to perform a map
assignment. This reduces allocations and assignments when a `WHERE`
clause contains only tag comparisons and no field comparisons.
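A minimal sketch of the idea, using a simplified expression type rather
than the actual influxql AST:

type Expr interface{}

type BooleanLiteral struct{ Val bool }

// assignFilter skips the map assignment entirely when the condition is a
// bare `true`, since such an entry carries no filtering information.
func assignFilter(filters map[string]Expr, key string, cond Expr) {
	if lit, ok := cond.(*BooleanLiteral); ok && lit.Val {
		return
	}
	filters[key] = cond
}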
The functionality works the same as wildcards, except that you can
specify a regular expression.
One limitation is that you can't specify whether you only want to select
fields or tags. Since the regex can be changed to suit the user's
needs, I don't currently think this is an issue.
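For example, to select everything whose name starts with `usage` (the
measurement and names here are only illustrative):

> SELECT /^usage/ FROM cpu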
Strings would always return an empty string, and stddev is meaningless
for strings. This removes that functionality so strings don't
automatically get picked up when using a wildcard.
This changes the behavior of the max-series-per-database and
max-values-per-tag limits to drop points that would exceed the limits
and allow the remaining points to be written. Previously, the whole
batch would fail and return a 500 error to the client.
This now writes the allowed points and returns a `partial write`
error indicating that some of the points were dropped, how many were
dropped, and one example of an offending measurement and its tags.
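A rough sketch of the drop-and-continue behavior (the types and the error
text here are illustrative, not the actual tsdb implementation):

import "fmt"

// Point stands in for a parsed point; overLimit reports whether writing it
// would push a series or tag value past one of the configured limits.
type Point struct {
	Series string
}

func filterWithinLimits(points []Point, overLimit func(Point) bool) ([]Point, error) {
	kept := points[:0]
	dropped := 0
	for _, p := range points {
		if overLimit(p) {
			dropped++
			continue
		}
		kept = append(kept, p)
	}
	if dropped > 0 {
		return kept, fmt.Errorf("partial write: %d points dropped", dropped)
	}
	return kept, nil
}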
On my machine with about 20 shards, it would take 10+ seconds to shut
down InfluxDB with SIGINT. After this change, it shuts down nearly
instantly.
(*tsdb.Store).Close was shutting down each of its shards sequentially.
Each shard's engine would signal to its compaction goroutines to quit,
and because each compaction goroutine has a hardcoded 1-second sleep in
between checks, waiting for the goroutines would often block for up to a
second.
This change closes all of the TSDB store's shards in parallel. This
means it's possible for multiple shard closes to fail at once, but
we still only return the first error, consistent with the previous
behavior. That being said, the return value of (*tsdb.Store).Close is
ignored in (*cmd/influxd/run.Server).Close anyway.
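A minimal sketch of the parallel close, assuming a simplified shard
interface rather than the actual tsdb types:

import "sync"

type closer interface {
	Close() error
}

// closeAll closes every shard concurrently and returns only the first
// error, matching the previous sequential behavior.
func closeAll(shards []closer) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(shards))
	for _, sh := range shards {
		wg.Add(1)
		go func(sh closer) {
			defer wg.Done()
			errs <- sh.Close()
		}(sh)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}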
The `cumulative_sum()` function can be used to sum each new point and
output the current total. For the following points:
cpu value=2 0
cpu value=4 10
cpu value=6 20
This would output the following points:
> SELECT cumulative_sum(value) FROM cpu
time value
---- -----
0 2
10 6
20 12
As can be seen, each new point adds to the sum of the previous point and
outputs the value with the same timestamp.
Like `derivative()`, the function can also be used with an aggregate.
> SELECT cumulative_sum(mean(value)) FROM cpu WHERE time >= now() - 10m GROUP BY time(1m)
First Pass at implementing sample
Add sample iterators for all types
Remove size from sample struct
Fix off by one error when generating random number
Add benchmarks for sample iterator
Add test and associated fixes for off by one error
Add test for sample function
Remove NumericLiteral from sample function call
Make clear that the counter is incr w/ each call
Rename IsRandom to AllSamplesSeen
Add a rng for each reducer that is created
The default rng that comes with math/rand has a global lock. To avoid
any contention on that lock, each reducer now has its own time-seeded
rng (see the sketch after this list of changes).
Add sample function to changelog
Clean up template for fill average
Change fill(average) to fill(linear)
Update average to linear in influxql spec
Add Integer Tests and associated fixes
Update CHANGELOG for fill(linear)
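A minimal sketch of the per-reducer rng mentioned above (the reducer type
is illustrative, not the actual influxql implementation):

import (
	"math/rand"
	"time"
)

// Each reducer owns its own *rand.Rand, so sampling never contends on the
// global lock inside math/rand's default source.
type sampleReducer struct {
	rng *rand.Rand
}

func newSampleReducer() *sampleReducer {
	return &sampleReducer{
		rng: rand.New(rand.NewSource(time.Now().UnixNano())),
	}
}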
The subscriber write goroutine would drop points if the write load
was higher than it could process. This could happen with just a few
writers to the server.
Instead, process the channel with multiple writers to avoid dropping
writes so easily. This also adds some config options to control how
large the channel buffer is as well as how many goroutines are started.
Fixes #7330
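A rough sketch of the subscriber fan-out described above, with
illustrative names rather than the actual subscriber service types:

import "sync"

// startWriters starts several goroutines draining one buffered channel so
// a burst of writes is less likely to overflow it and be dropped.
func startWriters(points <-chan []byte, writers int, write func([]byte) error) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < writers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range points {
				// The real service would count or log failed writes.
				_ = write(p)
			}
		}()
	}
	return &wg
}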
The FieldIterator is used to scan over the fields of a point, providing
information about each field while delaying parsing/decoding of the
value until it is needed.
This change uses this new type to avoid the allocation of a map for the
fields which is then thrown away as soon as the points get converted
into columns within the datastore.
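A small sketch of how the iterator is consumed (method names are written
from memory and may differ slightly from the models package):

import "github.com/influxdata/influxdb/models"

// countFields walks a point's fields without building a map; the value is
// only decoded if one of the typed accessors is called.
func countFields(p models.Point) int {
	n := 0
	it := p.FieldIterator()
	for it.Next() {
		n++
		// it.FieldKey() and it.Type() are available here without decoding
		// the value itself.
	}
	return n
}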
The decoders were held onto each iterator to avoid creating them all
the time. Some of them use quite a bit of memory, so they can
be expensive to create when querying across many series.
Instead, move them to a reusable pool where we create the minimum that
could actively be in use. This reduces garbage as well as makes the
iterators less expensive to create.
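The pooling pattern, roughly (sync.Pool is used here for brevity; the
actual pool is bounded to the minimum number of decoders that can be in
use at once):

import "sync"

// decoder stands in for one of the expensive block decoders; the large
// scratch buffer is what makes it costly to allocate per iterator.
type decoder struct {
	buf []byte
}

var decoderPool = sync.Pool{
	New: func() interface{} { return &decoder{buf: make([]byte, 1<<16)} },
}

func decodeBlock(block []byte) {
	d := decoderPool.Get().(*decoder)
	defer decoderPool.Put(d)
	// ... decode block using d ...
}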
When the planner runs, it needs to determine if any files have tombstones.
The code to determine if a tombstone existed involved calling stat on the
.tombstone file. Since the planner runs very frequently when there are many
shards, this caused a lot of unnecessary system calls.
Instead, cache the results of the stat calls and only refresh them when we
haven't checked yet or when new tombstone data is written.
This also caches the results of the TSMReader.Stats call to avoid creating
garbage.
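A simplified sketch of the caching (names are illustrative, not the
actual tsm1 types):

import (
	"os"
	"sync"
)

type tombstoneStat struct {
	mu       sync.Mutex
	path     string
	checked  bool
	hasFiles bool
}

// HasTombstones only hits the filesystem the first time it is called, or
// after the cache has been invalidated by a write.
func (t *tombstoneStat) HasTombstones() bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if !t.checked {
		fi, err := os.Stat(t.path)
		t.hasFiles = err == nil && fi.Size() > 0
		t.checked = true
	}
	return t.hasFiles
}

// WroteTombstone invalidates the cached result after new tombstone data
// is written.
func (t *tombstoneStat) WroteTombstone() {
	t.mu.Lock()
	t.checked = false
	t.mu.Unlock()
}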
Integer blocks that were run length encoded could produce the wrong
value when read back out because the deltas were not zig zag decoded
before scaling the final value. If the deltas were negative, as would
be seen in a counter that decrements by a constant value, the results
would be random with some negative and positive values.
Fixes #7391
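For reference, the standard zig zag transform (not the exact encoder
code, just the transform the fix applies before scaling):

// zigZagEncode maps signed deltas onto unsigned values: 0, -1, 1, -2, 2 ...
// become 0, 1, 2, 3, 4 ...
func zigZagEncode(x int64) uint64 {
	return uint64(x<<1) ^ uint64(x>>63)
}

// zigZagDecode reverses the mapping; skipping this step turns a delta of
// -5 (stored as 9) into 9, which is the bug described above.
func zigZagDecode(v uint64) int64 {
	return int64(v>>1) ^ -int64(v&1)
}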