Any operation between an int64 and a uint64 results in the type becoming a
uint64, because the most common case will be an unsigned cursor being
modified by an integer literal. Even in cases where we add an unsigned
literal to an integer iterator, the result would just be recast back to an
integer and overflow.
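A minimal sketch of the coercion rule described above, assuming a hypothetical helper (this is not the engine's actual code):

```go
package main

import "fmt"

// addIntUint is a hypothetical helper illustrating the coercion rule above:
// when an int64 and a uint64 meet in a binary expression, the int64 operand
// is converted to uint64 and the result stays unsigned, wrapping on overflow.
func addIntUint(a int64, b uint64) uint64 {
	return uint64(a) + b
}

func main() {
	var cursor uint64 = 10
	var literal int64 = -3 // an integer literal modifying an unsigned cursor
	fmt.Println(addIntUint(literal, cursor)) // 7: -3 wraps around, then the addition wraps back
}
```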
When using `SHOW MEASUREMENTS ON <db>`, the required permissions didn't
take `<db>` into account and would only use the `db` parameter from the
HTTP request.
When an integer cannot be parsed, we attempt to parse it as a uint64. If
that succeeds, we have an unsigned literal that can be used to compare
unsigned values.
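A rough sketch of this fallback, assuming a hypothetical `parseIntegerLiteral` helper rather than the parser's real API:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseIntegerLiteral sketches the fallback described above: try a signed
// parse first, and only if the value does not fit in an int64 fall back to
// an unsigned literal.
func parseIntegerLiteral(s string) (interface{}, error) {
	if v, err := strconv.ParseInt(s, 10, 64); err == nil {
		return v, nil // ordinary integer literal
	}
	if v, err := strconv.ParseUint(s, 10, 64); err == nil {
		return v, nil // unsigned literal, usable against unsigned values
	}
	return nil, fmt.Errorf("invalid integer literal: %q", s)
}

func main() {
	v, _ := parseIntegerLiteral("18446744073709551615") // larger than math.MaxInt64
	fmt.Printf("%T %v\n", v, v)                         // uint64 18446744073709551615
}
```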
This approach lets us avoid introducing new syntax to the language while
continuing to just do the right thing at the right time. However, it also
delegates a lot of the heavy lifting to implicit casting in Reduce and
Eval.
All time ranges are combined with AND regardless of context and
regardless of whether it makes any logical sense.
That was the previous behavior and, unfortunately, a lot of people rely
on it.
This is used quite a bit to determine which fields are needed in a
condition. When the condition gets large, the memory usage begins to
slow it down considerably, and it does not deduplicate the fields it finds.
There are several places in the code where comma-ok map retrieval was
being used poorly. Some were benign, like checking existence before
issuing an unconditional delete with no cleanup. Others were potentially
far more serious: assuming that if 'ok' was true, then the resulting
pointer retrieved from the map would be non-nil. `nil` is a perfectly
valid value to store in a map of pointers, and the comma-ok syntax is
meant for when membership is distinct from having a non-zero value.
There were only one or two cases that I saw being used correctly for
maps of pointers.
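A small self-contained illustration of the pitfall (the `session` type is made up for the example):

```go
package main

import "fmt"

type session struct{ user string }

func main() {
	sessions := map[string]*session{
		"a": {user: "alice"},
		"b": nil, // nil is a perfectly valid value in a map of pointers
	}

	// Unsafe: ok only reports membership, not that the value is non-nil.
	if s, ok := sessions["b"]; ok {
		_ = s // s is nil here; dereferencing it would panic
	}

	// Safer: check the value itself when nil-ness is what actually matters.
	if s := sessions["b"]; s != nil {
		fmt.Println(s.user)
	} else {
		fmt.Println("no usable session")
	}
}
```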
This refactors the validation code so it is more flexible and performs a
small bit of work to make preparing and executing the query easier.
The general idea is that compilation will eventually do more heavy
lifting in creating the initial plan and prepare will construct an
actual plan rather than just doing some basic field rewriting.
This at least sets us up for that future work and moves the validation
code into query execution instead of the parser.
This also frees up the parser to parse the complete AST without worrying
if the query itself is valid. That could be useful for client code that
wants to compile a partial query to an AST and then perform
modifications on the AST for some reason.
The statement rewriting logic should be in the query engine as part of
preparing a query. This creates a shard mapper interface that the query
engine expects and then passes it to the query engine instead of
requiring the query to be preprocessed before being input into the query
engine. This interface is (mostly) the same as the old interface, just
moved to a different package.
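The commit doesn't spell out the interface here; the sketch below shows roughly the shape such an interface could take (names and signatures are illustrative, not the actual definition):

```go
package query

import "time"

// ShardMapper is an illustrative sketch of the kind of interface described
// above: the query engine asks it for the shards covering a statement's
// sources and time range instead of receiving a pre-processed query.
type ShardMapper interface {
	// MapShards returns the group of shards for the given measurements and
	// time range; the ShardGroup is what the engine then reads from.
	MapShards(sources []string, tmin, tmax time.Time) (ShardGroup, error)
}

// ShardGroup is a placeholder for whatever the engine iterates over.
type ShardGroup interface {
	Close() error
}
```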
The ConditionExpr function is more accurate because it parses the
condition and ensures that time conditions are actually used correctly.
That means attempting to combine conditions with OR will no longer result
in the query silently pretending it's an AND, and nested conditions work
correctly, so there is only one way to read the query.
It also extracts the non-time conditions into a separate condition so we
can stop attempting to parse around the time conditions in lower layers
of the storage engine. This change does not remove those hacks, but a
following commit should be able to sanitize the condition and remove
them.
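A toy sketch of that separation, using made-up expression types rather than the real influxql AST: walk an AND-joined condition, pull out the `time` comparisons, and return whatever remains.

```go
package main

import "fmt"

// Minimal stand-ins for an expression tree; not the real influxql AST.
type Expr interface{ String() string }

type VarRef struct{ Name string }
type Lit struct{ Val string }
type Binary struct {
	Op       string // "AND", ">", "=", ...
	LHS, RHS Expr
}

func (v VarRef) String() string { return v.Name }
func (l Lit) String() string    { return l.Val }
func (b Binary) String() string { return "(" + b.LHS.String() + " " + b.Op + " " + b.RHS.String() + ")" }

// splitTime separates `time <op> literal` comparisons from the rest of an
// AND-joined condition. Real code would also reject OR around time ranges.
func splitTime(e Expr) (timeConds []Expr, rest Expr) {
	b, ok := e.(Binary)
	if !ok {
		return nil, e
	}
	if b.Op == "AND" {
		lt, lr := splitTime(b.LHS)
		rt, rr := splitTime(b.RHS)
		timeConds = append(lt, rt...)
		switch {
		case lr == nil:
			return timeConds, rr
		case rr == nil:
			return timeConds, lr
		default:
			return timeConds, Binary{Op: "AND", LHS: lr, RHS: rr}
		}
	}
	if v, ok := b.LHS.(VarRef); ok && v.Name == "time" {
		return []Expr{b}, nil
	}
	return nil, e
}

func main() {
	cond := Binary{Op: "AND",
		LHS: Binary{Op: ">", LHS: VarRef{"time"}, RHS: Lit{"now() - 1h"}},
		RHS: Binary{Op: "=", LHS: VarRef{"host"}, RHS: Lit{"'a'"}},
	}
	t, rest := splitTime(cond)
	fmt.Println(t, rest) // [(time > now() - 1h)] (host = 'a')
}
```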
This change provides a clear separation between the query engine
mechanics and the query language so that the language can be parsed and
dealt with separate from the query engine itself.
Previously, pseudo iterators could be created for metadata such
as series, measurement, and tag data. These iterators were created
at a higher level and lacked a lot of the power of the query engine.
This commit moves system iterators down to the series level and
supports the following:
- _name
- _seriesKey
- _tagKey
- _tagValue
- _fieldKey
These can be used as normal fields such as:
SELECT _seriesKey FROM cpu
This will return all the series keys for `cpu`.
When a time zone shift happened at the edge of a time bucket, the logic
was incorrect. With a time grouping of 2 hours, when the clock shifted
forward an hour during the day, we took the hour away from the previous
bucket instead of the next bucket, so the first bucket of the day ran from
midnight to 1 AM and the second bucket covered 1 AM to 4 AM (2 AM to 3 AM
does not exist when shifting forward). The correct buckets should have been
12 AM to 2 AM for the first bucket of the day and 3 AM to 4 AM for the
second (since 2 AM to 3 AM does not exist).
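A quick way to see the gap itself in Go (illustrative only; the date and zone are chosen as an example of a spring-forward day):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	loc, err := time.LoadLocation("America/Los_Angeles")
	if err != nil {
		panic(err)
	}

	// 2017-03-12 is a spring-forward day in this zone: clocks jump from
	// 2 AM directly to 3 AM.
	midnight := time.Date(2017, 3, 12, 0, 0, 0, 0, loc)

	// Two real hours after midnight the wall clock already reads 3 AM,
	// which is why the bucket after 12 AM-2 AM starts at 3 AM.
	fmt.Println(midnight.Add(2 * time.Hour)) // 2017-03-12 03:00:00 -0700 PDT
}
```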
This also modifies the tests to use table tests and subtests.
* introduced UnsignedValue type
* leveraged existing int64 compression algorithms (RLE, Simple 8B)
* tsm and WAL can read and write UnsignedValue
* compaction is aware of UnsignedValue
* unsigned support to model, cursors and write points
NOTE: there is no support to create unsigned points, as the line
protocol has not been modified.
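The reuse of the int64 encoders mentioned above is possible because a uint64 can be reinterpreted as an int64 losslessly; a minimal sketch of that idea (not the actual tsm1 code):

```go
package main

import "fmt"

// The existing int64 encoders can be reused for uint64 values by
// reinterpreting the bits on the way in and out; this is a sketch of that
// idea, not the actual tsm1 code.
func uintToInt(u uint64) int64 { return int64(u) }
func intToUint(i int64) uint64 { return uint64(i) }

func main() {
	orig := uint64(18446744073709551615) // max uint64
	encoded := uintToInt(orig)           // stored through the int64 path
	decoded := intToUint(encoded)
	fmt.Println(decoded == orig) // true: the round trip is lossless
}
```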
The language is now defined in a way similar to many HTTP routers with
the left prefix being placed into a parse tree and then eventually
invoking a function to parse the arguments.
This allows dynamically adding additional components to the parse tree
for either query language extensions or enterprise.
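A minimal sketch of the idea (types and names are illustrative): keywords are registered in a tree, and the handler found at the longest matching prefix parses the rest of the statement.

```go
package main

import (
	"fmt"
	"strings"
)

// node is one level of a keyword-prefix parse tree, similar in spirit to an
// HTTP router: children keyed by keyword, with an optional parse function.
type node struct {
	children map[string]*node
	parse    func(rest []string) (string, error)
}

func newNode() *node { return &node{children: map[string]*node{}} }

// Register walks/creates the keyword path and attaches a parse function,
// which is how extensions could add new statements dynamically.
func (n *node) Register(keywords []string, fn func([]string) (string, error)) {
	for _, kw := range keywords {
		child, ok := n.children[kw]
		if !ok {
			child = newNode()
			n.children[kw] = child
		}
		n = child
	}
	n.parse = fn
}

// Parse follows the longest keyword prefix and hands the remaining tokens to
// the registered handler.
func (n *node) Parse(tokens []string) (string, error) {
	for i, tok := range tokens {
		child, ok := n.children[strings.ToUpper(tok)]
		if !ok {
			if n.parse == nil {
				return "", fmt.Errorf("unexpected token %q", tok)
			}
			return n.parse(tokens[i:])
		}
		n = child
	}
	if n.parse == nil {
		return "", fmt.Errorf("incomplete statement")
	}
	return n.parse(nil)
}

func main() {
	root := newNode()
	root.Register([]string{"SHOW", "DATABASES"}, func(rest []string) (string, error) {
		return "ShowDatabasesStatement", nil
	})
	stmt, _ := root.Parse([]string{"SHOW", "DATABASES"})
	fmt.Println(stmt) // ShowDatabasesStatement
}
```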
This commit adds a new environment variable INFLUXDB_PANIC_CRASH, which
when set to a truthy value, e.g., true, TRUE, 1, will prevent the server
from recovering from a panic.
Recover currently occurs in two places: the HTTP handler and the
QueryExecutor. INFLUXDB_PANIC_CRASH will control both.
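A hedged sketch of how such a switch could work; only the environment variable name comes from this commit, the helper and its placement are made up:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// panicCrashEnabled reports whether INFLUXDB_PANIC_CRASH is set to a truthy
// value; when it is, handlers skip recover() and let the process crash.
func panicCrashEnabled() bool {
	switch strings.ToLower(os.Getenv("INFLUXDB_PANIC_CRASH")) {
	case "1", "true":
		return true
	}
	return false
}

func serve() {
	if !panicCrashEnabled() {
		defer func() {
			if err := recover(); err != nil {
				// In the real server this is also where a
				// recoveredPanics statistic would be incremented.
				fmt.Println("recovered from panic:", err)
			}
		}()
	}
	panic("boom")
}

func main() {
	serve()
	fmt.Println("still running")
}
```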
Further, this commit adds _internal stats that monitor the occurrence of
panics at all times (regardless of whether INFLUXDB_PANIC_CRASH has been
set to true or not).
The recovered panic frequency can be inspected with the following
queries:
SELECT "recoveredPanics" FROM "_internal"."monitor"."httpd";
SELECT "recoveredPanics" FROM "_internal"."monitor"."queryExecutor";
It didn't properly pass the variable reference when creating the
variable iterator, so a nil iterator got passed back instead.
Duplicate the `top()` tests in TopInt to also test `bottom()` with the
same queries so `bottom()` stops getting neglected so often.
When a `SELECT ... INTO ...` is used with `top()` or `bottom()` used
with tags, the points will be written with the tags still intact instead
of converted to fields.
The previous version of `top()` and `bottom()` would gather all of the
points to use in a slice, filter them (if necessary), then use a
slightly modified heap sort to retrieve the top or bottom values.
This performed horrendously from the standpoint of memory. Since it
consumed so much memory and spent so much time in allocations (along
with sorting a potentially very large slice), this affected speed too.
These calls have now been modified so they keep the top or bottom points
in a min or max heap. For `top()`, each new point is compared against the
minimum value in the heap: if the new point is greater than the minimum
point, it replaces the minimum point and the heap is fixed up with the new
value; if it is smaller, the point is discarded. For `bottom()`, the
process is the opposite.
It will then sort the final result to ensure the correct ordering of the
selected points.
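A compact sketch of the bounded-heap idea using the standard library (not the engine's implementation; the point type here is simplified):

```go
package main

import (
	"container/heap"
	"fmt"
	"sort"
)

type point struct {
	time  int64
	value float64
}

// minHeap keeps the current top-N candidates with the smallest one on top,
// so the next incoming point only has to beat the heap minimum.
type minHeap []point

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i].value < h[j].value }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(point)) }
func (h *minHeap) Pop() interface{} {
	old := *h
	p := old[len(old)-1]
	*h = old[:len(old)-1]
	return p
}

// topN streams points through a size-bounded min heap and then sorts the
// survivors, mirroring the approach described above for top().
func topN(points []point, n int) []point {
	h := &minHeap{}
	for _, p := range points {
		if h.Len() < n {
			heap.Push(h, p)
		} else if p.value > (*h)[0].value {
			(*h)[0] = p
			heap.Fix(h, 0)
		}
	}
	out := append([]point(nil), *h...)
	sort.Slice(out, func(i, j int) bool { return out[i].value > out[j].value })
	return out
}

func main() {
	pts := []point{{1, 3}, {2, 9}, {3, 1}, {4, 7}, {5, 5}}
	fmt.Println(topN(pts, 2)) // [{2 9} {4 7}]
}
```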
When `top()` or `bottom()` contain a tag to select, they have now been
modified so this query:
SELECT top(value, host, 2) FROM cpu
Essentially becomes this query:
SELECT top(value, 2), host FROM (
SELECT max(value) FROM cpu GROUP BY host
)
This should drastically increase the performance of all `top()` and
`bottom()` queries.
This change refactors the subquery code into a separate builder to allow
for more reuse and to make the functions smaller and easier to read.
The previous function that handled most of the code was too big and
impossible to reason through.
This also goes and replaces the complicated logic of aggregates that had
a subquery source with the simpler IteratorMapper. I think the overhead
from the IteratorMapper will be more, but I also believe that the actual
code is simpler and more robust to produce more accurate answers. It
might be a future project to optimize that section of code, but I don't
have any actual numbers for the efficiency of one method and I believe
accuracy and code clarity may be more important at the moment since I am
otherwise incapable of reading my own code.
When LIMIT and OFFSET were used with any functions that were not handled
directly by the query engine (anything other than count, max, min, mean,
first, or last), the input to the function would be limited instead of
receiving the full stream of values it was supposed to receive.
This also fixes a bug that caused the server to hang when LIMIT and
OFFSET were used with a selector. When using a selector, the limit and
offset should be handled before the points go to the auxiliary iterator
to be split into different iterators. Limiting happened afterwards which
caused the auxiliary iterator to hang forever.
`top()` and `bottom()` will now organize the points by time and also
keep each point's original time even when a time grouping is used. At the
same time, `top()` and `bottom()` will no longer honor any fill options
that are present, since they don't really make sense for these specific
functions.
This also fixes the aggregates and selectors to honor the ordered
iterator option so iterators remain ordered, and to respect the
buckets created by the final dimensions of the query so that
two buckets don't overlap each other within the same reducer. A test has
been added for this situation. This should clarify and encourage the use
of the ordered attribute within the query engine.
The Window function will now check, before it adjusts the offset, whether
the adjustment is going to overflow or underflow. If it would do either,
it sets the start or end time to MinTime or MaxTime.
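An illustrative version of that kind of clamping check (MinTime and MaxTime here are simple stand-ins for the engine's sentinel timestamps):

```go
package main

import (
	"fmt"
	"math"
)

const (
	MinTime = math.MinInt64
	MaxTime = math.MaxInt64
)

// addOffset adjusts a timestamp by an offset, clamping to MinTime/MaxTime
// instead of silently wrapping when the addition would overflow int64.
func addOffset(t, offset int64) int64 {
	if offset > 0 && t > MaxTime-offset {
		return MaxTime // would overflow
	}
	if offset < 0 && t < MinTime-offset {
		return MinTime // would underflow
	}
	return t + offset
}

func main() {
	fmt.Println(addOffset(math.MaxInt64-5, 10)) // clamped to MaxTime
	fmt.Println(addOffset(0, -3600000000000))   // ordinary adjustment
}
```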
The following functions require ordered input but were not guaranteed to
receive it:
* `distinct()`
* `sample()`
* `holt_winters()`
* `holt_winters_with_fit()`
* `derivative()`
* `non_negative_derivative()`
* `difference()`
* `moving_average()`
* `elapsed()`
* `cumulative_sum()`
* `top()`
* `bottom()`
These function calls have now been modified to request that their input
be ordered by the query engine. This will prevent the improper output
that could have been caused by multiple series being merged together or
multiple shards being merged together potentially incorrectly when no
time grouping was specified.
Two additional functions were already correct to begin with (so there
are no bugs with these two, but I'm including their names for
completeness).
* `median()`
* `percentile()`
If there were multiple selectors combined with math, the query engine
would mistakenly treat each one as if it were the only selector in the
query and would not match their timestamps.
Fixed the query engine to pass through whether a call should be treated as
a selector, so queries like `max(value) * 1, min(value) * 1` will match
the timestamps of the result.
Additionally, unary addition and subtraction are now supported for
variables, calls, and parenthesized expressions. Writing `-value` is now
the equivalent of writing `-1 * value`.
The timezone for a query can now be added to the end with something like
`TZ("America/Los_Angeles")`, and it will localize the results of the
query to that timezone. The offset will automatically be set to the offset
for that timezone, and offsets will automatically adjust for daylight
saving time, so grouping by a day will result in a 25-hour day once a year
and a 23-hour day on another day of the year.
The automatic adjustment of intervals for changing timezone offsets will
only happen if the GROUP BY interval is greater than the timezone offset
itself. That means grouping by an hour or less will not be affected by
daylight saving time, but a 2 hour or 1 day interval will be.
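For instance, a quick check in Go of the 25-hour day referred to above (the date and zone are chosen for illustration):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	loc, err := time.LoadLocation("America/Los_Angeles")
	if err != nil {
		panic(err)
	}

	// 2017-11-05 is when clocks fall back in this zone, so the local
	// "day" bucket spans 25 real hours; the spring-forward day spans 23.
	start := time.Date(2017, 11, 5, 0, 0, 0, 0, loc)
	end := time.Date(2017, 11, 6, 0, 0, 0, 0, loc)
	fmt.Println(end.Sub(start)) // 25h0m0s
}
```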
The default timezone is UTC and existing queries are unaffected by this
change.
When times are returned as strings (when `epoch=1` is not used), the
results will be returned in RFC3339 format using the requested timezone.
There is a lot of confusion in the code over whether the range is [start,
end) or [start, end]. This is not made easier by the fact that it acts one
way in some areas and another way in others, but it is usually [start,
end]. The `time = ?` syntax assumed the range was [start, end) and added
an extra nanosecond to the end time to accommodate that, but the range was
actually [start, end], which caused it to include one extra nanosecond
when it shouldn't have.
This change fixes it so exactly one timestamp is selected when `time = ?`
is used.
The underlying iterators were not closed when a query was killed, so
although the client would receive an error, the query would continue
running until completion.
When there were multiple series and anything other than the last series
had any null values, the series would start using the first point from
the next series to interpolate points.
Interpolation should not cross between series. Now, the linear fill
checks to make sure the next point is within the same series before
using it to perform interpolation.
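A sketch of the kind of guard this describes (the point and series representation is made up for the example):

```go
package main

import "fmt"

type point struct {
	series string
	time   int64
	value  float64
}

// linearAt interpolates a value at time t between prev and next, but only
// when both points belong to the same series; interpolation must not cross
// series boundaries.
func linearAt(t int64, prev, next point) (float64, bool) {
	if prev.series != next.series || next.time == prev.time {
		return 0, false
	}
	slope := (next.value - prev.value) / float64(next.time-prev.time)
	return prev.value + slope*float64(t-prev.time), true
}

func main() {
	a := point{series: "cpu,host=a", time: 10, value: 1}
	b := point{series: "cpu,host=b", time: 20, value: 9}
	if _, ok := linearAt(15, a, b); !ok {
		fmt.Println("gap left unfilled: next point is from a different series")
	}
}
```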
When rewriting fields, wildcards within binary expressions were skipped.
An error is now returned whenever a wildcard is found within a binary
expression, in order to prevent the panic that would otherwise occur.
A single-line comment starts with `--` (just like SQL) and reads until the
end of the line. A multiline comment is delimited by `/* */`. Multiline
comments cannot be nested.
Previously, it was only possible to select a tag as a field from a
subquery if the subquery used a selector. But the code didn't filter out
aggregates like `mean()`, so it would panic instead.
Now it is possible to select the tag directly instead of rewriting the
query in an invalid way.
Some queries in this form will still not work though. For example, the
following still does not function (but also doesn't panic):
SELECT count(host) FROM (SELECT mean, host FROM (SELECT mean(value) FROM cpu GROUP BY host))
The builder used for subqueries does not handle parentheses, so a set
of parentheses wrapping a field would cause it to panic. This code now
reduces the expression so the parentheses are removed before it is
processed.
They rebased a revision we were previously relying upon that allowed us
to use the vanity name, so we are reverting to an older version with
the old import path.
This commit introduces a new interface type, influxql.Authorizer, that
is passed as part of a statement's execution context and determines
whether the context is permitted to access a given database. In the
future, the Authorizer interface may be expanded to other resources
besides databases. In this commit, the Authorizer interface is
specifically used to determine which databases are returned when
executing SHOW DATABASES.
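The exact method set isn't spelled out here; below is a minimal sketch of what such an interface and a permissive implementation could look like (the method name and signature are illustrative, not the real definition):

```go
package influxql

// Authorizer is a sketch of the interface described above: something
// attached to a statement's execution context that can say whether a
// database may be accessed. The real interface may differ and could grow
// to cover other resources besides databases.
type Authorizer interface {
	// AuthorizeDatabase reports whether the context may access db.
	AuthorizeDatabase(db string) bool
}

// openAuthorizer permits everything, matching the behavior described when
// HTTP authentication is disabled.
type openAuthorizer struct{}

func (openAuthorizer) AuthorizeDatabase(string) bool { return true }
```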
When HTTP authentication is enabled, the existing meta.UserInfo struct
implements Authorizer, meaning admin users can SHOW every database, and
non-admin users can SHOW only databases for which they have read and/or
write permission.
When HTTP authentication is disabled, all databases are visible through
SHOW DATABASES.
This addresses a long-standing issue where Chronograf or Grafana would
be unable to list databases if the logged-in user did not have admin
privileges.
Fixes #4785.