Previously, only time expressions were propagated inward. The reason for
this was simple: if the outer query was going to filter to a specific
time range, it would be unnecessary for the inner query to output
points outside of that time range. It started as an optimization, but it
became a feature because there was no reason to make the user repeat the
same time clause in the inner query as in the outer query. So we allowed
an aggregate query with an interval to pass validation in the subquery
if the outer query had a time range. But `GROUP BY` clauses were not
propagated because that same logic didn't apply to them; it isn't an
optimization there. So while grouping by a tag in the outer query
without grouping by it in the inner query was useless, there wasn't any
particular reason to care.
Then a bug was found: wildcards would propagate the dimensions
correctly, but when the outer query contained a `GROUP BY` that the
inner query omitted, the extra outer grouping wasn't correctly filtered
out. We could have fixed that filtering, but on further review, I had
been seeing people make that same mistake a lot. People seem to believe
that the grouping should be propagated inward. Instead of fighting what
the user wanted and explicitly erasing groupings that weren't propagated
manually, we might as well propagate them for the user to make their
lives easier. There is no useful situation where you would want to group
into buckets that can't physically exist, so we might as well do
_something_ useful.
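For example, the outer grouping is now propagated into the subquery, so
the following two queries (illustrative, with a hypothetical measurement
and tag) behave the same:
SELECT mean(min) FROM (SELECT min(value) FROM cpu) GROUP BY host
SELECT mean(min) FROM (SELECT min(value) FROM cpu GROUP BY host) GROUP BY host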
This will also now propagate time intervals to inner queries since the
same reasoning applies there. But, while the interval propagates, the following
query will not pass validation since it is still not possible to use a
grouping interval with a raw query (even if the inner query is an
aggregate):
SELECT * FROM (SELECT mean(value) FROM cpu) WHERE time > now() - 5m GROUP BY time(1m)
This also means wildcards will behave a bit differently. They will
retrieve dimensions from the sources in the inner query rather than just
using the dimensions in the `GROUP BY` clause.
Fixing `top()` and `bottom()` to return the correct auxiliary fields.
Unfortunately, we were not copying the buffer with the auxiliary fields,
so those values would be overwritten by a later point.
When an error that appears to be an SSL error happens without SSL
enabled, the client will attempt to reconnect with SSL just to see if
that works. If it works, it exits with an error message telling the user
to add `-ssl`. It will also do the same if the SSL connection is unsafe,
although it will warn that this is insecure.
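As a usage sketch (the hostname is hypothetical):
$ influx -host influxdb.example.com -ssl
$ influx -host influxdb.example.com -ssl -unsafeSsl
The second form connects even when the certificate cannot be verified,
which is why the client warns that it is insecure.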
The backup command can fail if a snapshot is running, which silently
closes the connection. This causes the backup shard command to continue
on as if nothing failed.
This adds query syntax support for subqueries and adds support to the
query engine to execute queries on subqueries.
Subqueries act as a source for another query. It is the equivalent of
writing the results of a query to a temporary database, executing
a query on that temporary database, and then deleting the database
(except this is all performed in-memory).
The syntax is like this:
SELECT sum(derivative) FROM (SELECT derivative(mean(value)) FROM cpu GROUP BY *)
This will execute the derivative and then sum the results of those derivatives.
Another example:
SELECT max(min) FROM (SELECT min(value) FROM cpu GROUP BY host)
This would let you find the maximum of each host's minimum value.
There is complete freedom to mix subqueries with auxiliary fields. The only
caveat is that the following two queries:
SELECT mean(value) FROM cpu
SELECT mean(value) FROM (SELECT value FROM cpu)
have different performance characteristics. The first will calculate
`mean(value)` at the shard level and will be faster, especially when it comes to
clustered setups. The second will process the mean at the top level and will not
include that optimization.
Benchmark improvements with this change:
benchmark old ns/op new ns/op delta
BenchmarkExportTSMFloats_100s_250vps-4 23206480 10279106 -55.71%
BenchmarkExportTSMInts_100s_250vps-4 17995000 5762310 -67.98%
BenchmarkExportTSMBools_100s_250vps-4 17067605 4235467 -75.18%
BenchmarkExportTSMStrings_100s_250vps-4 54846997 34682568 -36.76%
BenchmarkExportWALFloats_100s_250vps-4 23459937 10436297 -55.51%
BenchmarkExportWALInts_100s_250vps-4 18747150 6236062 -66.74%
BenchmarkExportWALBools_100s_250vps-4 17988273 4814358 -73.24%
BenchmarkExportWALStrings_100s_250vps-4 59700802 35815739 -40.01%
benchmark old allocs new allocs delta
BenchmarkExportTSMFloats_100s_250vps-4 201442 51738 -74.32%
BenchmarkExportTSMInts_100s_250vps-4 201442 51728 -74.32%
BenchmarkExportTSMBools_100s_250vps-4 201441 51638 -74.37%
BenchmarkExportTSMStrings_100s_250vps-4 404092 201584 -50.11%
BenchmarkExportWALFloats_100s_250vps-4 250322 75627 -69.79%
BenchmarkExportWALInts_100s_250vps-4 250323 75617 -69.79%
BenchmarkExportWALBools_100s_250vps-4 250321 75527 -69.83%
BenchmarkExportWALStrings_100s_250vps-4 452868 225291 -50.25%
benchmark old bytes new bytes delta
BenchmarkExportTSMFloats_100s_250vps-4 5170539 2351789 -54.52%
BenchmarkExportTSMInts_100s_250vps-4 5143189 2331276 -54.67%
BenchmarkExportTSMBools_100s_250vps-4 3724951 2143780 -42.45%
BenchmarkExportTSMStrings_100s_250vps-4 17131400 10796281 -36.98%
BenchmarkExportWALFloats_100s_250vps-4 4487868 1468109 -67.29%
BenchmarkExportWALInts_100s_250vps-4 4458395 1452359 -67.42%
BenchmarkExportWALBools_100s_250vps-4 2838719 1258755 -55.66%
BenchmarkExportWALStrings_100s_250vps-4 16787201 10010700 -40.37%
Also, after improving those benchmarks, I did a time-filtered export of
a 450MB TSM file to 21GB of plain text output, with and without the
bufio.Writer.
Without buffering, it took about 263s, and with buffering, it took about
60s, for a delta of about -77%.
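The change amounts to wrapping the raw file in a buffered writer. A
minimal Go sketch, assuming only the standard bufio and os packages; the
function and variable names are hypothetical, not the actual export code:
func exportLines(f *os.File, lines []string) error {
	w := bufio.NewWriter(f) // coalesce many small writes into fewer large syscalls
	for _, line := range lines {
		if _, err := w.WriteString(line + "\n"); err != nil {
			return err
		}
	}
	return w.Flush() // push any remaining buffered bytes to disk
}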
It looks like the real import path to the project is go.uber.org/zap
instead of github.com/uber-go/zap since the example in the project
references that path.
This was needed when we were on Go 1.4 but hasn't been needed since Go
1.5. It was kept because we weren't sure if we would have to roll back
to an older version of Go at that time, and keeping it meant we wouldn't
forget to re-add it.
Now that we are on Go 1.7 with Go 1.4 deprecated, there is no going
back, so we might as well remove this so people can set GOMAXPROCS to a
custom value using environment variables.
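With the override removed, the Go runtime honors the environment
variable directly, for example:
$ GOMAXPROCS=8 influxd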
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
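For reference, a structured statement with zap looks roughly like this
sketch (the message and field names are hypothetical, and the
constructor depends on the zap version in use):
logger, err := zap.NewProduction()
if err != nil {
	panic(err)
}
defer logger.Sync()
// Each key/value pair is a typed field instead of a formatted string.
logger.Info("compaction finished",
	zap.String("path", "/var/lib/influxdb/data"),
	zap.Int("tsm_files", 4))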
`percentile()` is supposed to be a selector and return the time of the
point, but that only got changed when the input was a float. This
updates the integer processor to also return the time of the point
rather than the beginning of the interval.
This is a no-op on platforms that use the Unix path separator.
On Windows, paths are converted to forward slashes before being added to
the archive and converted back to backslashes during restore.
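A minimal sketch of that conversion, presumably via the standard
filepath helpers (the path literal is hypothetical):
// On backup: convert to the archive's canonical form ('/' separators).
archivePath := filepath.ToSlash(`C:\data\shard\000000001.tsm`) // no-op where the separator is already '/'
// On restore: convert back to the platform's separator.
nativePath := filepath.FromSlash(archivePath)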
The `partial` tag has been added to the JSON response of a series and
the result so that a client knows when more of the series or result will
be sent in a future JSON chunk.
This helps interactive clients that don't want to wait for all of the
data to know whether the server is done processing the current series or
the current result. Previously, the client had to guess if the next
chunk would refer to the same result or a new result, and it had to
match the name and tags of the two series to know if they were the same
series. Now, the client just needs to check the `partial` field included
with the response to know if it should expect more.
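A truncated chunk would look roughly like this (values elided):
{"results":[{"series":[{"name":"cpu","columns":["time","value"],"values":[...],"partial":true}],"partial":true}]}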
Fixed `max-row-limit` so it counts rows instead of results and truncates
the response when the `max-row-limit` is reached.
There are 2 new keys in the configuration file.
- security-level: "none", "sign", or "encrypt".
- auth-file: The location of the user/password file.
Please see the collectd network doc for more details.
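As a sketch of what that configuration might look like (the file path is
hypothetical; see the doc above for authoritative settings):
[[collectd]]
  enabled = true
  security-level = "encrypt"            # one of "none", "sign", or "encrypt"
  auth-file = "/etc/collectd/auth_file" # location of the user/password file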
When a query used a grouping with two different aggregates, it was
possible for one of the aggregates to return a value from a different
series key than the other. When these series keys didn't match, the
returned grouping would be incorrect because it sorted by time before
checking the name and tags.
This did not happen when the aggregates returned values for the same
series keys because then the iterators were aligned with each other.
When a limit is exceeded, we return errors and sometimes log (if appropriate)
that a limit was exceeded. The messages don't always indicate where or how
the limit is configured.
Instead, return the config option (which is easily searchable), as well as
the limit currently set and the value that exceeded it, when possible.
The admin console would dynamically discover the version from the
InfluxDB server, but for patch releases, it included the patch number in
the link to the documentation, and that wasn't a valid link.
Truncate the version so the documentation URL is correct, since we only
publish documentation for `major.minor`.
Changes the default time boundaries for raw queries so raw queries will
range until the end of time. Aggregate queries continue to have their
default end time be `now()`.
The information in the usage string for the `help` command about
how to get more information about a specific command was incorrect:
"influxd help [command]" does not work, but "influxd [command] --help"
does.
If you pipe in a file to the `influx` CLI, it will not try to open the
interactive line reader, but instead just send the contents of the
entire file to the server.
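For example (the database name is hypothetical):
$ influx -database mydb < queries.txt
$ echo 'SELECT count(value) FROM cpu' | influx -database mydb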
The `cumulative_sum()` function can be used to sum each new point and
output the current total. For the following points:
cpu value=2 0
cpu value=4 10
cpu value=6 20
This would output the following points:
> SELECT cumulative_sum(value) FROM cpu
time value
---- -----
0 2
10 6
20 12
As can be seen, each new point adds to the running sum, and the total is
output with the same timestamp as the point.
The function can also be used with an aggregate like `mean()`.
> SELECT cumulative_sum(mean(value)) FROM cpu WHERE time >= now() - 10m GROUP BY time(1m)
The tags passed into Statistics() calls are not supposed to be modified.
The balanceWriter in subscribers tried to modify them, triggering a
panic because the tag maps can be nil.
The vet checks for some files did not pass for Go 1.7. As a preliminary
step toward making Go 1.7 work with this software, `go vet` should pass.
Also updated the gogo/protobuf dependency, which fixed the code
generator to work with Go 1.7 too. Ran `go generate` on the entire
repository to ensure every file was up to date.
There were three different output formats with rather strange columns,
depending on whether there was a name and whether there were tags in the
response.
Normalized output now has the dashes always under the column names and
no dashes anywhere else for consistency.
- Single commit, PR follows conventions laid out by @Gouthamve in #5822
* main.go: struct field CpuFile should be CPUFile
* influx_inspect: loop equivalent to `for key := range...`
* adds comments to exported fields and consts
* fixes typo in `CHANGELOG.md`: text for #4702 now matches number
It is now possible to use a mixed duration unit like `1h30m`. The
duration units can be in any order as long as they are connected to
each other.
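For example, both of these are now valid (illustrative queries):
SELECT mean(value) FROM cpu WHERE time > now() - 1h30m GROUP BY time(1h30m)
SELECT mean(value) FROM cpu WHERE time > now() - 90m GROUP BY time(1m30s)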
There is a change to the scanner. A token such as `10x` will be scanned
as a duration literal, but will then fail to parse as an invalid
duration. This should not be a breaking change as there is no situation
where `10m10` was a valid order of tokens for the parser.
Fixes #3634.
Instead of having the parser set the defaults, the command will set the
defaults so that the constants for that are actually used. This way we
can also identify which things the user provided and which ones we are
filling with default values.
This allows the meta client to make smarter decisions when determining
whether the user's request conflicts with, or matches, what is currently
available. If a user just says `CREATE DATABASE WITH NAME myrp`, they
don't really care what the duration of the retention policy is and just
want to use the default. Now, we can use that information to determine
whether an existing retention policy conflicts with what the user
requested, rather than returning an error whenever a default value
changes, since the meta client command can communicate intent more
easily.
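For example (names hypothetical), the first statement expresses no
preference about the duration, while the second one does:
CREATE DATABASE mydb WITH NAME myrp
CREATE DATABASE mydb WITH DURATION 7d NAME myrp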
This commit fixes the `MaxSelectSeriesN` limit which was broken by
the implementation of lazy iterators. The setting previously limited
the total number of series but the new implementation limits the
concurrent number of series being processed.
Updating the help formatting for all of the commands for consistency.
Removed tabs from the output in favor of using spaces so it is more
clear how the output is intended to look.
Truncate the time interval output of the monitor service so it falls on
even time boundaries rather than on every minute relative to the start
time. This normalizes the output from the monitor service.
Removing the data-only build tag because it is no longer used. It was
previously used for building a data-only node from the open source build
that could be used in clustering, but all of the RPC code was removed
since the separation was too much of a hindrance on development whenever
we found a bug.
Since the original purpose is now moot, there's no point in keeping a
confusing build tag around.
Previously, a non-admin could not call "use" in the influx CLI since the
`SHOW DATABASES` command requires admin permissions to run. The correct
solution is likely to allow non-admins to call `SHOW DATABASES`, but
only see the databases they should be capable of seeing.
Since we don't have that kind of fine-grained authorization yet and
plans for it are still in the works, we need some way to avoid
arbitrarily crippling non-admins attempting to use the CLI program. This
is a temporary solution that ignores any authorization errors from
`SHOW DATABASES` if authorization has been set. A warning message is
printed and the database is switched. This should be enough to warn
users that they may not have switched to a valid database, while not
crippling non-admin users.
A temporary solution for #6397.
Normalize the output for the various help options so they all follow the
same format and display all relevant options.
Removing some of the unused config options from the configuration file
and updating the help documentation. Removing some remaining references
to clustering within the open source version.
Removes the old implementation of `dumptsm`. It was for an older version
of the tsm1 files that is no longer used, and it now just panics when
used on a tsm1 file. `dumptsmdev` has been renamed to `dumptsm`, but the
old `dumptsmdev` command still works for compatibility.
Updated `influx_inspect` to use the `FieldDimensions` method instead
(more reliable anyway). The `influx_tsm` program used its own vendored
copy of `FieldCodec` so it is not affected by this change. `FieldCodec`
was only used for the `b1` and `bz1` engines which were removed in 0.12,
but the code that created the field codec was never removed. This
limited the maximum number of fields to 255 even though that restriction
was removed with the `tsm1` engine.
Fixes #6869.
This adds support for using regular expressions in SHOW TAG VALUES when
selecting the key. It also supports the `!=` operator for the
comparison. Now you can do any of the following:
SHOW TAG VALUES WITH KEY != "region"
SHOW TAG VALUES WITH KEY =~ /region/
SHOW TAG VALUES WITH KEY !~ /region/
It also adds a new SetLiteral AST node that could potentially be used to
allow set operations for other comparisons in the future.
Fixes #4532.
The task manager now acts as its own statement executor so that a custom
statement executor can perform custom actions for KillQueryStatement and
ShowQueriesStatement.
Previously, the goroutine would block for 24 hours and would check
whether the server was closed only once every 24 hours. Since that check
effectively never fired, the goroutine was just abruptly killed when the
main process ended (since nobody waited for this goroutine to end).
This updates the code to exit immediately when the server is closed and
switches the timer to a ticker instance that can be stopped.
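The resulting loop looks roughly like this sketch (the names are
hypothetical):
ticker := time.NewTicker(24 * time.Hour)
defer ticker.Stop()
for {
	select {
	case <-ticker.C:
		checkForUpdates() // the periodic work
	case <-s.closing:
		return // exit immediately once the server closes
	}
}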
This switches the backup shard call to use the shard Snapshot, which
internally creates a snapshot by hard-linking all of the TSM and
tombstone files instead. This reduces the time that the FileStore
is locked and will allow larger shards to be backed up more easily.
If all points in a series are timestamped in the future, the SHOW
queries will not return anything from those series.
This commit changes the upper time bound used when querying shards for
the SHOW queries to the maximum time, in order to ensure future points
are considered in the results for these queries.
Fixes #6599.
The default retention policy name is changed to "autogen" instead of
"default", since "default" ends up being ambiguous: when we tell a user
to check the default retention policy, it is uncertain whether we mean
the default retention policy (which can be changed) or the retention
policy with the name "default".
Now the automatically generated retention policy name is "autogen".
The default retention policy is now also configurable through the
configuration file so an administrator can customize what they think
should be the default.
Fixes #3733.