* off by default, enabled by `query-stats-enabled`
* writes to cq_query measurement of configured monitor database
* see CHANGELOG for schema of individual points
* fix issue where a panic before Write closed the gzip writer, causing
  the header to be written with a default status of 200 OK
* update recovery middleware to set 500 Internal Server Error (see the
  sketch below)
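A minimal Go sketch of that recovery pattern, assuming only net/http and
hypothetical names (the real middleware wraps more state):
func recovery(inner http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if err := recover(); err != nil {
				// Write 500 before the default 200 OK can be sent.
				http.Error(w, http.StatusText(http.StatusInternalServerError),
					http.StatusInternalServerError)
			}
		}()
		inner.ServeHTTP(w, r)
	})
}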
Currently, when debugging issues with InfluxDB we often ask for the
following profiles:
curl -o block.txt "http://localhost:8086/debug/pprof/block?debug=1"
curl -o goroutine.txt "http://localhost:8086/debug/pprof/goroutine?debug=1"
curl -o heap.txt "http://localhost:8086/debug/pprof/heap?debug=1"
curl -o cpu.txt "http://localhost:8086/debug/pprof/profile"
This can be bothersome for users, or even difficult if they're
unfamiliar with cURL (or it's not on their system).
This commit adds a new endpoint: /debug/pprof/all, which returns a
single compressed archive of all of the above profiles. The CPU profile
is optional and not returned by default. To include a CPU profile, the
URL to request should be: /debug/pprof/all?cpu=true. It's also possible
to vary the length of the CPU profile by adding a `seconds=x` parameter,
where x defaults to 30 if absent.
The new command for gathering profiles from users should now be:
curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all"
Or, if we need to see a CPU profile:
curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
It's important to remember that a CPU profile is a blocking operation
and by default it will take 30 seconds for the response to be returned
to the user.
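For example, a shorter ten-second CPU profile can be requested with:
curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true&seconds=10"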
Finally, if the user is unfamiliar with cURL, they will now be able to
visit http://localhost:8086/debug/pprof/all in a web browser, and the
archive will be downloaded to their machine.
The measurement name and field were converted between []byte and string
repeatedly, causing lots of garbage. This switches the code to use
[]byte in the write path.
When `/debug/requests` is used, the client will wait for 30 seconds
(configurable by specifying `seconds=` in the query parameters) while
the HTTP handler tracks every incoming query and write to the system.
After that time period has passed, it will output a JSON blob, similar
in shape to `/debug/vars`, that shows every IP address and user account
(if authentication is used) that connected to the host during that time.
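For example, to sample for ten seconds instead of the default thirty:
curl "http://localhost:8086/debug/requests?seconds=10"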
In the future, we can add more metrics to track. This is an initial
start to aid with debugging machines that connect too often by looking
at a sample of time (like `/debug/pprof`).
This commit adds a caching mechanism to the Data object, such that
when large numbers of users exist in the system, the cost of determining
if there is at least one admin user will be low.
To ensure that previously marshalled Data objects contain the correct
cached admin user value, we exhaustively determine if there is an admin
user present whenever we unmarshal a Data object.
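A rough Go sketch of the idea, using hypothetical field and method names:
type Data struct {
	Users []UserInfo

	// adminUserExists caches whether at least one admin user exists so
	// the check is O(1) even with large numbers of users.
	adminUserExists bool
}

// refreshAdminUserExists exhaustively recomputes the cached value; it is
// called after unmarshalling a Data object and whenever users change.
func (data *Data) refreshAdminUserExists() {
	data.adminUserExists = false
	for _, u := range data.Users {
		if u.Admin {
			data.adminUserExists = true
			return
		}
	}
}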
Instead of incrementing the `queryOk` statistic whether or not the
continuous query ran, it is now only incremented when the query is
actually executed.
max-row-limit was set at 10000 since 1.0, but due to a bug it was
effectively 0 (disabled). 1.2 fixed this bug via #7368, but this
caused a breaking change with Grafana and any users upgrading from <1.2
who had not manually disabled the setting.
They rebased a revision we were previously relying upon that allowed us
to use the vanity name, so we are reverting to an older version with
the old import path.
This commit introduces a new interface type, influxql.Authorizer, that
is passed as part of a statement's execution context and determines
whether the context is permitted to access a given database. In the
future, the Authorizer interface may be expanded to other resources
besides databases. In this commit, the Authorizer interface is
specifically used to determine which databases are returned when
executing SHOW DATABASES.
When HTTP authentication is enabled, the existing meta.UserInfo struct
implements Authorizer, meaning admin users can SHOW every database, and
non-admin users can SHOW only databases for which they have read and/or
write permission.
When HTTP authentication is disabled, all databases are visible through
SHOW DATABASES.
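The interface itself is small; a Go sketch of its shape (the exact
signature here is illustrative):
// Authorizer determines whether an execution context may access a
// given resource.
type Authorizer interface {
	// AuthorizeDatabase returns true if the given privilege is
	// authorized on the database with the given name.
	AuthorizeDatabase(p Privilege, name string) bool
}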
This addresses a long-standing issue where Chronograf or Grafana would
be unable to list databases if the logged-in user did not have admin
privileges.
Fixes #4785.
This change adds some very basic name validation with the following
plain-English description: names must be a non-zero-length sequence of
printable characters that does not contain slashes ('/' or '\') and is
not equal to either "." or "..".
The intent is that, since we currently just use database and retention
policy names directly as path elements, these rules will hopefully leave
us with names that should be at least close to valid directory names.
Ideally, we would restrict names even further or not use them as path
elements directly, but this should be a step toward the former without
restricting names "too much".
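A minimal Go sketch of such a check (ValidName is a hypothetical helper):
import (
	"strings"
	"unicode"
)

// ValidName reports whether name is a non-zero-length sequence of
// printable characters that contains no slashes and is not "." or "..".
func ValidName(name string) bool {
	if name == "" || name == "." || name == ".." {
		return false
	}
	if strings.ContainsAny(name, `/\`) {
		return false
	}
	for _, r := range name {
		if !unicode.IsPrint(r) {
			return false
		}
	}
	return true
}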
This adds query syntax support for subqueries and adds support to the
query engine to execute queries on subqueries.
Subqueries act as a source for another query. It is the equivalent of
writing the results of a query to a temporary database, executing
a query on that temporary database, and then deleting the database
(except this is all performed in-memory).
The syntax is like this:
SELECT sum(derivative) FROM (SELECT derivative(mean(value)) FROM cpu GROUP BY *)
This will execute derivative and then sum the result of those derivatives.
Another example:
SELECT max(min) FROM (SELECT min(value) FROM cpu GROUP BY host)
This would let you find the maximum minimum value of each host.
There is complete freedom to mix subqueries with auxiliary fields. The only
caveat is that the following two queries have different performance
characteristics:
SELECT mean(value) FROM cpu
SELECT mean(value) FROM (SELECT value FROM cpu)
The first will calculate `mean(value)` at the shard level and will be faster,
especially in clustered setups. The second will process the mean at the top
level and will not include that optimization.
There was no comment on either case specifying that the `return nil`
was deliberate instead of `return err`, so I'm assuming these were
typos. I added tests to preserve the error-returning behavior.
It looks like the real import path to the project is go.uber.org/zap
instead of github.com/uber-go/zap since the example in the project
references that path.
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
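Once those changes land, a log statement can look like the following
(sketched against the current go.uber.org/zap API, which may differ from
the version vendored at the time; path and id are example values):
import "go.uber.org/zap"

func example() {
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	// Structured fields instead of a formatted string.
	logger.Info("shard opened",
		zap.String("path", "/var/lib/influxdb/data/db/rp/1"),
		zap.Int("id", 1),
	)
}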
The URL must have a scheme of udp, http, or https, and a port number.
CREATE SUBSCRIPTION will fail if there are invalid destinations.
Additionally, Service.createSubscription fails when invalid destinations
are detected.
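For example, this statement passes validation because the destination
has both a scheme and a port:
CREATE SUBSCRIPTION "sub0" ON "mydb"."autogen" DESTINATIONS ALL 'udp://example.com:9090'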
Fixes #7615.
The `partial` tag has been added to the JSON response of a series and
the result so that a client knows when more of the series or result will
be sent in a future JSON chunk.
This helps interactive clients who don't want to wait for all of the
data to know if it is done processing the current series or the current
result. Previously, the client had to guess if the next chunk would
refer to the same result or a new result and it had to match the name
and tags of the two series to know if they were the same series. Now,
the client just needs to check the `partial` field included with the
response to know if it should expect more.
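An abridged, illustrative chunk (field layout simplified) might look like:
{"results":[{"series":[{"name":"cpu","columns":["time","value"],"values":[["2016-11-15T00:00:00Z",5]],"partial":true}],"partial":true}]}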
Fixed `max-row-limit` so it counts rows instead of results, and
truncates the response when the `max-row-limit` is reached.
When the `max-row-limit` was hit, the goroutine reading from the results
channel would stop reading from the channel, but it didn't signal to the
sender that it was no longer consuming results. This caused the sender
to continue trying to send results even though nobody would ever read
them, creating a deadlock.
Include an `AbortCh` on the `ExecutionContext` that will signal when
results are no longer desired so the sender can abort instead of
deadlocking.
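On the sender side that turns into a select between emitting a result
and aborting; a Go sketch with hypothetical names (Result stands in for
the query result type):
// emit sends r to the results channel unless the execution context has
// been aborted (e.g. the reader hit max-row-limit and stopped consuming).
func emit(results chan<- *Result, r *Result, abortCh <-chan struct{}) bool {
	select {
	case results <- r:
		return true
	case <-abortCh:
		return false
	}
}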
There are two new keys in the configuration file.
- security-level: "none", "sign", or "encrypt".
- auth-file: The location of the user/password file.
Please see the collectd network doc for more details.
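In the configuration file that might look like this (the auth-file path
is just an example):
[[collectd]]
  security-level = "sign"
  auth-file = "/etc/collectd/auth_file"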
The admin console would dynamically discover the version from the
InfluxDB server, but for patch releases it included the patch version
in the documentation link, which wasn't a valid link.
Truncate the version so the documentation URL is correct, since we only
publish documentation for `major.minor`.
This changes the behavior of the max-series-per-database and
max-values-per-tag limits to drop points that would exceed the limits
and allow the remaining points to be written. Previously, the whole
batch would fail and return a 500 error to the client.
The allowed points are now written and a `partial write` error is
returned, indicating that some of the points were dropped, how many
were dropped, and an example of a problem measurement and its tags.
The subscriber write goroutine would drop points if the write load
was higher than it could process. This could happen with just a few
writers to the server.
Instead, process the channel with multiple writers to avoid dropping
writes so easily. This also adds some config options to control how
large the channel buffer is as well as how many goroutines are started.
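Conceptually, the single consumer becomes a pool; a Go sketch with
hypothetical names:
// startWriters drains ch with n concurrent writers so points are not
// dropped as easily under load; n and cap(ch) correspond to the new
// config options.
func startWriters(ch <-chan []byte, n int, write func([]byte)) {
	for i := 0; i < n; i++ {
		go func() {
			for p := range ch {
				write(p)
			}
		}()
	}
}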
Fixes #7330.
The tags passed into Statistics() calls are not supposed to be modified.
The balanceWriter in subscribers tried to modify them, triggering a
panic because they can be nil.
The vet checks for some files did not pass under go 1.7. As a
preliminary step toward making go 1.7 work with this software, go vet
should pass.
Also updated the gogo/protobuf dependency, which fixed the code
generator to work with go 1.7 too. Ran `go generate` on the entire
repository to ensure every file was up to date.
When we refactored expvar, the cmdline and memstats sections were not
re-added to the output. This adds them back if they can be found inside
`expvar`.
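The lookup uses the standard library's expvar registry, which publishes
both keys automatically; a sketch:
import "expvar"

// memstatsJSON returns the JSON representation of the memstats section,
// or "" if it cannot be found inside expvar.
func memstatsJSON() string {
	if v := expvar.Get("memstats"); v != nil {
		return v.String()
	}
	return ""
}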
It also stops trying to sort the output of the statistics so they get
returned faster. JSON doesn't need them to be sorted, and sorting caused
enough latency to hurt performance.
Instead of having the parser set the defaults, the command will set the
defaults so that the constants for that are actually used. This way we
can also identify which things the user provided and which ones we are
filling with default values.
This allows the meta client to make smarter decisions when determining
whether the user requested a conflict or whether the requested
capabilities match what is currently available. If you just say
`CREATE DATABASE WITH NAME myrp`, the user doesn't really care what the
duration of the retention policy is and just wants to use the default.
Now, we can use that information to determine whether an existing
retention policy would conflict with what the user requested, rather
than returning an error whenever a default value changes, since the
meta client command can communicate intent more easily.
Previously, we implicitly added a newline and had to add one to the
number of bytes transmitted because we added that byte. That was removed
at some point, but the metric was not updated to record the correct
value.
The query killing functionality depends on the ResponseWriter exposing a
CloseNotify method. Since we wrap the http.ResponseWriter, the new
struct does not have that method and the HTTP handler would skip past
calling that method.
Instead of duplicating `Flush()` and `CloseNotify()` for every response
formatter, we will unify all of that under a single struct and create
formatters instead.
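A sketch of the passthrough, assuming the unified struct embeds the
underlying writer:
import "net/http"

type responseWriter struct {
	http.ResponseWriter
}

// CloseNotify delegates to the wrapped writer so query killing keeps
// working; http.CloseNotifier is optional, hence the type assertion.
func (w *responseWriter) CloseNotify() <-chan bool {
	if notifier, ok := w.ResponseWriter.(http.CloseNotifier); ok {
		return notifier.CloseNotify()
	}
	return make(chan bool) // never fires
}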
Also fixes a bug where, because of buffering, the header information
from a query would not be returned until some other data was returned
with it, and another bug where the gzipResponseWriter wouldn't flush the
actual underlying ResponseWriter.
The query can be uploaded from a file using `multipart/form-data` and
setting the file name to `q`. An example of using curl to execute an
async query would be:
curl -F "q=@database.iql" -F "async=true" http://localhost:8086/query
It will return a 204 No Content as long as the query is accepted
(immediate errors will be returned, but not individual errors with
specific queries). The only way to kill the query is by using the task
manager.
CSV doesn't offer a way to separate different sheets from each other and
it doesn't really have a standard format. We separate sheets with a
newline so they can be imported into something like Excel or LibreOffice
more easily.
The number of columns for each sheet is inferred from the first returned
row in each statement since they should all be the same.
This commit limits queries to only process one shard at a time.
However, within a shard, multiple series can still be processed in
parallel. Shard iterators are lazily instantiated during query
execution to limit the amount of memory a given query uses.
The `Content-Type` header says what the content type of the request is,
while `Accept` asks for a specific content type in the response. I mixed
these up and used `Content-Type` instead of `Accept`. This accidentally
works with `GET` requests, but `POST` requests stop working.
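For example, requesting CSV output should use `Accept` regardless of
the verb:
curl -H "Accept: application/csv" -G "http://localhost:8086/query" --data-urlencode "q=SHOW DATABASES"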