First Pass at implementing sample
Add sample iterators for all types
Remove size from sample struct
Fix off by one error when generating random number
Add benchmarks for sample iterator
Add test and associated fixes for off by one error
Add test for sample function
Remove NumericLiteral from sample function call
Make clear that the counter is incremented with each call
Rename IsRandom to AllSamplesSeen
Add a rng for each reducer that is created
The default rng that comes with math/rand has a global lock. To avoid
having to worry about any contention on the lock, each reducer now has
its own time-seeded rng.
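A minimal sketch of what such a reducer could look like, assuming a reservoir-style sample; `SampleReducer` and its fields are illustrative names, not the actual implementation:

```go
package sample

import (
	"math/rand"
	"time"
)

// SampleReducer is a hypothetical reducer sketching the approach: each
// reducer owns its own time-seeded *rand.Rand, so sampling never touches
// the globally locked default source in math/rand.
type SampleReducer struct {
	points []float64
	size   int
	count  int
	rng    *rand.Rand
}

func NewSampleReducer(size int) *SampleReducer {
	return &SampleReducer{
		size: size,
		rng:  rand.New(rand.NewSource(time.Now().UnixNano())),
	}
}

// Add performs reservoir sampling. The counter is incremented with every
// call, and rng.Intn(count) draws from [0, count), which is exactly where
// an off-by-one (count+1 or count-1) would skew the sample.
func (r *SampleReducer) Add(v float64) {
	r.count++
	if len(r.points) < r.size {
		r.points = append(r.points, v)
		return
	}
	if i := r.rng.Intn(r.count); i < r.size {
		r.points[i] = v
	}
}
```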
Add sample function to changelog
Clean up template for fill average
Change fill(average) to fill(linear)
Update average to linear in influxql spec
Add Integer Tests and associated fixes
Update CHANGELOG for fill(linear)
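For reference, `fill(linear)` interpolates the missing value between the surrounding points. A minimal sketch of that computation, with an illustrative helper name:

```go
package fill

// linearFill sketches what fill(linear) computes: the value at time t,
// linearly interpolated between the surrounding points (t0, v0) and (t1, v1).
func linearFill(t, t0, t1 int64, v0, v1 float64) float64 {
	return v0 + (v1-v0)*float64(t-t0)/float64(t1-t0)
}
```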
The subscriber write goroutine would drop points if the write load
was higher than it could process. This could happen with just a few
writers to the server.
Instead, process the channel with multiple writers to avoid dropping
writes so easily. This also adds some config options to control how
large the channel buffer is as well as how many goroutines are started.
Fixes #7330
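A minimal sketch of the fan-out pattern, assuming hypothetical names for the point type, the write function, and the two new config options:

```go
package subscriber

import "sync"

// Point and writePoints stand in for the real subscriber types; the two
// constants stand in for the new config options.
type Point struct{ Name string }

const (
	writeBufferSize  = 1000 // channel buffer: how many batches may queue up
	writeConcurrency = 4    // number of goroutines draining the channel
)

func writePoints(batch []Point) { /* forward the batch to the subscriber endpoint */ }

// start returns the channel producers enqueue into; several goroutines
// drain it so one slow write no longer forces points to be dropped.
func start(wg *sync.WaitGroup) chan<- []Point {
	ch := make(chan []Point, writeBufferSize)
	for i := 0; i < writeConcurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for batch := range ch {
				writePoints(batch)
			}
		}()
	}
	return ch
}
```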
Integer blocks that were run length encoded could produce the wrong
value when read back out because the deltas were not zig zag decoded
before scaling the final value. If the deltas were negative, as would
be seen in a counter that decrements by a constant value, the results
would be random with some negative and positive values.
Fixes #7391
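For reference, zig-zag encoding maps signed deltas 0, -1, 1, -2, ... onto unsigned values 0, 1, 2, 3, ... A sketch of the standard decode step that was being skipped for run-length encoded blocks:

```go
package encoding

// zigZagDecode reverses zig-zag encoding, mapping 0,1,2,3,... back to
// 0,-1,1,-2,... Run-length decoded deltas must pass through this before
// the final value is scaled; skipping it corrupts negative deltas.
func zigZagDecode(v uint64) int64 {
	return int64(v>>1) ^ -int64(v&1)
}
```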
The TagSets function was creating a lot of intermediate maps and
slices to calculate the sorted tag sets. It first created a map
to group tag sets with their series, then created an equally
sized slice of the tag keys and sorted them. Finally, it created
a new slice and added the tag sets from the original map in the
order of the sorted keys. It was also recreating the tags map
multiple times, creating extra garbage in the loop.
This simplifies the code to create one map for grouping and then
add the distinct sets to a slice, which is then sorted. It also
fixes the multiple tag maps getting created.
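A sketch of the simplified shape, with the series and tag set reduced to stand-in types:

```go
package tsdb

import "sort"

// Series and TagSet are simplified stand-ins for the real types; Tags is
// the canonical string form of the tag map.
type Series struct {
	Key  string
	Tags string
}

type TagSet struct {
	Tags       string
	SeriesKeys []string
}

// tagSets groups series into distinct tag sets with a single map, then
// sorts the slice of distinct sets once, instead of building a key slice,
// sorting it, and copying the map back out in key order.
func tagSets(series []Series) []*TagSet {
	groups := make(map[string]*TagSet)
	var sets []*TagSet
	for _, s := range series {
		ts, ok := groups[s.Tags]
		if !ok {
			ts = &TagSet{Tags: s.Tags}
			groups[s.Tags] = ts
			sets = append(sets, ts) // each distinct set is appended exactly once
		}
		ts.SeriesKeys = append(ts.SeriesKeys, s.Key)
	}
	sort.Slice(sets, func(i, j int) bool { return sets[i].Tags < sets[j].Tags })
	return sets
}
```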
Manual use of system queries could result in a user using the query
incorrectly. Rather than check to make sure the query was used
correctly, we now prevent users from using those sources entirely so
they can't be used incorrectly.
When deleting a shard, the shard is locked and then removed from the
index. Removal from the index can be slow if there are a lot of
series. During this time, the shard is still expected to exist by
the meta store and tsdb store so stats collections, queries and writes
could all be run on this shard while it's locked. This can cause everything
to lock up until the unindexing completes and the shard can be unlocked.
Fixes #7226
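A sketch of the ordering change, with the store and shard reduced to stand-in types:

```go
package tsdb

import "sync"

// Shard and Store are simplified stand-ins for the real types.
type Shard struct{ id uint64 }

func (sh *Shard) unloadIndex() { /* slow: removes every series from the index */ }

type Store struct {
	mu     sync.RWMutex
	shards map[uint64]*Shard
}

// DeleteShard removes the shard from the map under a short lock so
// stats collection, queries, and writes stop routing to it immediately,
// then runs the slow unindexing with no locks held.
func (s *Store) DeleteShard(id uint64) {
	s.mu.Lock()
	sh := s.shards[id]
	delete(s.shards, id)
	s.mu.Unlock()

	if sh != nil {
		sh.unloadIndex() // may take a long time with many series; nothing else blocks
	}
}
```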
Update the package to compress the man pages fully and to stop storing
the filename and timestamp in the man page. Lintian complains that the
packages aren't compressed using the best compression method.
https://lintian.debian.org/tags/manpage-not-compressed.html
The derivative() call would panic if it received two points at the same
time because it tried to divide by zero. The derivative call now skips
past these points. To avoid having these points skipped, use `GROUP BY *`
so that each series is kept separate.
The difference() call has also been modified to skip past these points.
Even though difference() doesn't divide by the elapsed time, it is
supposed to behave the same as derivative() apart from that division.
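A sketch of the guard, with an illustrative point type; `interval` is the unit the rate is normalized to:

```go
package query

// point is a stand-in for the query engine's point type.
type point struct {
	Time  int64 // nanoseconds
	Value float64
}

// derivative sketches the fix: pairs of points with identical timestamps
// produce a zero elapsed time, so they are skipped instead of causing a
// divide-by-zero panic. difference() applies the same skip for parity.
func derivative(points []point, interval int64) []float64 {
	var out []float64
	for i := 1; i < len(points); i++ {
		prev, curr := points[i-1], points[i]
		elapsed := curr.Time - prev.Time
		if elapsed == 0 {
			continue // same timestamp: skip rather than divide by zero
		}
		out = append(out, (curr.Value-prev.Value)/float64(elapsed)*float64(interval))
	}
	return out
}
```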
Return an error when we encounter the same option twice in ALTER
RETENTION POLICY and remove the `maxNumOptions` number from the parsing
loop. The `maxNumOptions` number would need to be modified if another
option was added to the parsing loop, and it didn't correctly report
duplicate options as an error anyway.
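A sketch of the seen-set approach, not the parser's actual code:

```go
package parser

import "fmt"

// checkOptions sketches the change: a seen-set catches a repeated option
// directly, so there is no maxNumOptions constant to keep in sync when a
// new option is added to the loop.
func checkOptions(tokens []string) error {
	seen := make(map[string]bool)
	for _, tok := range tokens { // e.g. DURATION, REPLICATION, SHARD DURATION, DEFAULT
		if seen[tok] {
			return fmt.Errorf("found duplicate %s option", tok)
		}
		seen[tok] = true
	}
	return nil
}
```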
Normalize all of the SHOW commands so they allow both using ON to
specify the database and using the default database. Some commands would
require one and some would require the other and it was confusing when
using the query language.
Affected commands:
* SHOW RETENTION POLICIES
* SHOW MEASUREMENTS
* SHOW SERIES
* SHOW TAG KEYS
* SHOW TAG VALUES
* SHOW FIELD KEYS
There were three different output formats, each with rather strange
columns, depending on whether the response had a name and whether it
had tags.
The normalized output now always has the dashes under the column names
and no dashes anywhere else, for consistency.
- Single commit, PR follows conventions laid out by @Gouthamve in #5822
* main.go: struct field CpuFile should be CPUFile
* influx_inspect: loop equivalent to `for key := range...`
* adds comments to exported fields and consts
* fixes typo in `CHANGELOG.md`: text for #4702 now matches number
When we refactored expvar, the cmdline and memstats sections were not
re-added to the output. This adds them back if they can be found inside
of `expvar`.
It also stops trying to sort the output of the statistics so they get
returned faster. JSON doesn't need the keys to be sorted, and sorting
caused enough latency that it hurt performance.
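A sketch of how the sections can be recovered; the standard library's expvar package publishes `cmdline` and `memstats` itself at init:

```go
package stats

import (
	"expvar"
	"fmt"
	"io"
)

// writeDiagnostics sketches re-adding the sections: expvar.Get looks up
// the values expvar itself published, and they are written out without
// sorting anything.
func writeDiagnostics(w io.Writer) {
	for _, name := range []string{"cmdline", "memstats"} {
		if v := expvar.Get(name); v != nil {
			fmt.Fprintf(w, "%q: %s,\n", name, v) // Var.String() already returns JSON
		}
	}
}
```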
When attempting to reduce the WHERE clause, the time literals had not
been converted from string literals yet. This adds the functionality to
handle the same time math while the time literal is still a string
literal.
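A sketch of the conversion, with an illustrative helper name and an assumed set of layouts:

```go
package influxql

import (
	"fmt"
	"time"
)

// parseTimeLiteral is an illustrative helper, not the parser's actual
// function: it converts a WHERE-clause string literal into a time value
// so the reducer can apply the same time math as for a native time literal.
func parseTimeLiteral(s string) (time.Time, error) {
	for _, layout := range []string{time.RFC3339Nano, "2006-01-02 15:04:05", "2006-01-02"} {
		if t, err := time.Parse(layout, s); err == nil {
			return t, nil
		}
	}
	return time.Time{}, fmt.Errorf("unable to parse %q as a time literal", s)
}
```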
The full compaction planner could return a plan that only included
one generation. If this happened, a full compaction would run on that
generation producing just one generation again. The planner would then
repeat the plan.
This could happen if there were two generations that were both over
the max TSM file size and the second one happened to be in level 3 or
lower.
When this situation occurs, one CPU is pegged running a full compaction
continuously and the disks become very busy basically rewriting the
same files over and over again. This can eventually cause disk and CPU
saturation if it occurs with more than one shard.
Fixes #7074
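A sketch of the guard, with the generation type reduced to a stand-in:

```go
package tsm1

// tsmGeneration is a stand-in for the planner's generation type.
type tsmGeneration struct{ files []string }

// planFull sketches the fix: a full-compaction plan holding a single
// generation would produce one generation again and loop forever, so the
// planner now returns no plan in that case.
func planFull(generations []tsmGeneration) [][]tsmGeneration {
	if len(generations) <= 1 {
		return nil // nothing to merge; avoids pegging a CPU rewriting the same files
	}
	return [][]tsmGeneration{generations}
}
```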
The dollar sign would sometimes be accepted as whitespace if it was
immediately followed by a reserved keyword or an invalid character. It
now reads these properly as a bound parameter rather than ignoring the
dollar sign.
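A sketch of the scanning change, ASCII-only for brevity and not the tokenizer's actual code:

```go
package influxql

import "unicode"

// scanBoundParameter sketches the fix: once '$' is seen, the scanner
// consumes the identifier characters that follow and always emits a
// bound-parameter token instead of ever treating '$' as whitespace.
func scanBoundParameter(s string) (param, rest string) {
	if len(s) == 0 || s[0] != '$' {
		return "", s
	}
	i := 1
	for i < len(s) && (unicode.IsLetter(rune(s[i])) || unicode.IsDigit(rune(s[i])) || s[i] == '_') {
		i++
	}
	return s[:i], s[i:] // a bare "$" still comes back as a parameter token, not whitespace
}
```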
The behavior for querying tag values with an empty string was originally
fixed in #6283, but it also added a performance problem when the
cardinality of the tag was high. Since a call to `Union()` or `Reject()`
would happen for every series key and it would be called N times for N
cardinality, the comparisons against a blank string were unnecessarily
slow with large memory allocations.
This optimizes these queries so it doesn't use those methods anymore.
Those methods are still useful and used when combining AND and OR
clauses, but they aren't useful when finding the series ids for a single
clause. These methods were unnecessary here because the series ids for
the tags were already unique and didn't have to be merged as a set.
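A sketch of the direct path, with the index reduced to a stand-in map; this shows the `tag != ''` case:

```go
package tsdb

import "sort"

// idsForNonBlank sketches the optimization: the series IDs grouped under
// each tag value are already distinct, so matching IDs can be appended
// into one slice and sorted once, with no per-series-key Union()/Reject().
func idsForNonBlank(valueIDs map[string][]uint64) []uint64 {
	var ids []uint64
	for value, series := range valueIDs {
		if value == "" {
			continue // tag != '' keeps only series with a non-empty value
		}
		ids = append(ids, series...) // disjoint lists: plain append, no set merge
	}
	sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
	return ids
}
```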
Instead of having the parser set the defaults, the command will set the
defaults so that the constants for that are actually used. This way we
can also identify which things the user provided and which ones we are
filling with default values.
This allows the meta client to be able to make smarter decisions when
determining if the user requested a conflict or if the requested
capabilities match with what is currently available. If you just say
`CREATE DATABASE WITH NAME myrp`, the user doesn't really care what the
duration of the retention policy is and just wants to use the default.
Now, we can use that information to determine whether an existing
retention policy conflicts with what the user requested, rather than
returning an error whenever a default value changes, since the meta
client command can communicate intent more easily.
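A sketch of the idea with illustrative names: nil pointer fields let the command layer distinguish "user wants the default" from "user asked for this value":

```go
package meta

import "time"

// RetentionPolicySpec is illustrative: the parser leaves unspecified
// options as nil instead of filling in defaults itself.
type RetentionPolicySpec struct {
	Name     string
	Duration *time.Duration // nil: not specified by the user
	ReplicaN *int           // nil: not specified by the user
}

// Matches reports whether an existing policy satisfies the request. An
// unset option matches anything, so a changed server default no longer
// turns a plain CREATE DATABASE into a conflict error.
func (s *RetentionPolicySpec) Matches(duration time.Duration, replicaN int) bool {
	if s.Duration != nil && *s.Duration != duration {
		return false
	}
	if s.ReplicaN != nil && *s.ReplicaN != replicaN {
		return false
	}
	return true
}
```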