Commit Graph

202 Commits (643b2eb30cd4726e58a9052ee6f15ec9bea759cf)

Author SHA1 Message Date
Stuart Carnie 099371f2e7 Add config test; add negative value test 2017-06-06 08:42:45 +08:00
Stuart Carnie cac5dbfa5a NEW max-body-size config; HTTP 413 if body exceeds max size; fixes #8299 2017-06-05 21:38:33 +08:00
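
As a rough illustration of the idea behind the max-body-size change above, the following Go sketch shows one way an HTTP service can cap request bodies and answer 413; the constant, function name, and middleware shape are illustrative, not the actual InfluxDB code.

    package httpd

    import "net/http"

    // maxBodySize is an illustrative default; the commit above adds a
    // max-body-size config option, but this middleware is only a sketch.
    const maxBodySize = 25 * 1000 * 1000 // bytes

    func limitBody(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.ContentLength > maxBodySize {
                // Reject oversized bodies up front with 413 Request Entity Too Large.
                http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
                return
            }
            // Also cap chunked bodies that arrive without a Content-Length.
            r.Body = http.MaxBytesReader(w, r.Body, maxBodySize)
            next.ServeHTTP(w, r)
        })
    }
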
Stuart Carnie 55d1ba6d79 rework gzip compressor so it is lazily created for 200 OK requests only
* fix issue where, if a handler panicked before Write, closing the gzip
writer caused the header to be written with the default 200 OK status
* update recovery middleware to set 500 Internal Server Error
2017-06-05 21:38:32 +08:00
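
A minimal sketch of the lazily created compressor described above: the gzip.Writer only comes into existence on the first Write, so a panic before any body is written no longer forces headers (and an implicit 200 OK) out. Type and method names are illustrative, not the actual implementation.

    package httpd

    import (
        "compress/gzip"
        "net/http"
    )

    // lazyGzipResponseWriter creates the gzip.Writer only when the handler
    // actually writes a body, so a panic before the first Write leaves the
    // response untouched.
    type lazyGzipResponseWriter struct {
        http.ResponseWriter
        gz *gzip.Writer
    }

    func (w *lazyGzipResponseWriter) Write(p []byte) (int, error) {
        if w.gz == nil {
            w.Header().Set("Content-Encoding", "gzip")
            w.gz = gzip.NewWriter(w.ResponseWriter)
        }
        return w.gz.Write(p)
    }

    func (w *lazyGzipResponseWriter) Close() error {
        if w.gz == nil {
            return nil // nothing was written; don't force headers out
        }
        return w.gz.Close()
    }
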
Ryan Betts 19ef39d947 Merge pull request #8437 from influxdata/jl-points-auth
Fine Grained Authorization
2017-05-31 10:23:49 -04:00
Jonathan A. Sternberg 6a78f1cf4a URL query parameter credentials take priority over Authentication header 2017-05-30 09:26:24 -05:00
Stuart Carnie 8d8a7a0bfe pass meta.User to avoid future search requests 2017-05-26 15:05:38 -07:00
Stuart Carnie cec0712141 Add authorization error behavior API 2017-05-26 13:21:59 -07:00
Joe LeGasse 815f740f4c initial fga work
wip

wip

fix tests / build
2017-05-26 13:16:27 -07:00
Edd Robinson b4a427f9a2 Address PR feedback 2017-05-15 14:41:51 +01:00
Edd Robinson 1cbbaa9317 Add support for shards, stats and diagnostics 2017-05-15 14:12:00 +01:00
Edd Robinson 8f8ff0ec61 Adds handler for returning a profile archive
Currently, when debugging issues with InfluxDB we often ask for the
following profiles:

  curl -o block.txt "http://localhost:8086/debug/pprof/block?debug=1"
  curl -o goroutine.txt "http://localhost:8086/debug/pprof/goroutine?debug=1"
  curl -o heap.txt "http://localhost:8086/debug/pprof/heap?debug=1"
  curl -o cpu.txt "http://localhost:8086/debug/pprof/profile"

This can be bothersome for users, or even difficult if they're
unfamiliar with cURL (or it's not on their system).

This commit adds a new endpoint: /debug/pprof/all which will return a
single compressed archive of all of the above profiles. The CPU profile
is optional, and not returned by default. To include a CPU profile the
URL to request should be: /debug/pprof/all?cpu=true. It's also possible
to vary the length of the CPU profile by adding a `seconds=x` parameter,
where x defaults to 30 if absent.

The new command for gathering profiles from users should now be:

  curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all"

Or, if we need to see a CPU profile:

  curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"

It's important to remember that a CPU profile is a blocking operation
and by default it will take 30 seconds for the response to be returned
to the user.

Finally, if the user is unfamiliar with cURL, they will now be able to
visit http://localhost:8086/debug/pprof/all in a web browser, and the
archive will be downloaded to their machine.
2017-05-15 14:11:38 +01:00
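
A hedged sketch of how such an endpoint can bundle pprof profiles into one compressed archive, assuming the standard runtime/pprof and archive/tar packages; it is not the actual handler and omits the optional CPU profile.

    package httpd

    import (
        "archive/tar"
        "bytes"
        "compress/gzip"
        "net/http"
        "runtime/pprof"
    )

    // allProfiles writes several pprof profiles into a single tar.gz archive.
    // It is a sketch of the idea behind /debug/pprof/all, not the actual handler.
    func allProfiles(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/x-gzip")
        gz := gzip.NewWriter(w)
        defer gz.Close()
        tw := tar.NewWriter(gz)
        defer tw.Close()

        for _, name := range []string{"goroutine", "heap", "block"} {
            p := pprof.Lookup(name)
            if p == nil {
                continue
            }
            var buf bytes.Buffer
            if err := p.WriteTo(&buf, 1); err != nil {
                continue
            }
            hdr := &tar.Header{Name: name + ".txt", Mode: 0600, Size: int64(buf.Len())}
            if err := tw.WriteHeader(hdr); err != nil {
                return
            }
            tw.Write(buf.Bytes())
        }
    }
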
Jonathan A. Sternberg 2780630a5f Track HTTP client requests for /write and /query with /debug/requests
After using `/debug/requests`, the client will wait for 30 seconds
(configurable by specifying `seconds=` in the query parameters) and the
HTTP handler will track every incoming query and write to the system.
After that time period has passed, it will output a JSON blob that looks
very similar to `/debug/vars` that shows every IP address and user
account (if authentication is used) that connected to the host during
that time.

In the future, we can add more metrics to track. This is an initial
start to aid with debugging machines that connect too often by looking
at a sample of time (like `/debug/pprof`).
2017-05-09 10:18:33 -05:00
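
The sketch below shows the general pattern, assuming a simple per-IP counter that is reset at the start of the sampling window; the type and field names are invented for illustration, and the real handler tracks more detail (queries vs. writes, user accounts).

    package httpd

    import (
        "encoding/json"
        "net"
        "net/http"
        "sync"
        "time"
    )

    // requestTracker counts requests per remote IP. It is a simplified sketch
    // of the idea behind /debug/requests, not the real implementation.
    type requestTracker struct {
        mu     sync.Mutex
        counts map[string]int
    }

    func newRequestTracker() *requestTracker {
        return &requestTracker{counts: make(map[string]int)}
    }

    // Add is called by the /write and /query handlers for every incoming request.
    func (t *requestTracker) Add(r *http.Request) {
        host, _, _ := net.SplitHostPort(r.RemoteAddr)
        t.mu.Lock()
        t.counts[host]++
        t.mu.Unlock()
    }

    // ServeHTTP starts a fresh sample, waits out the window, then reports what
    // arrived as JSON.
    func (t *requestTracker) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        t.mu.Lock()
        t.counts = make(map[string]int)
        t.mu.Unlock()

        time.Sleep(30 * time.Second) // a real handler would honor a seconds= parameter

        t.mu.Lock()
        defer t.mu.Unlock()
        json.NewEncoder(w).Encode(t.counts)
    }
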
Jonathan A. Sternberg 260bdef3d4 Set the CSV output to an empty string for null values 2017-05-04 20:51:58 -05:00
Stuart Carnie 8097e817f6 prefix partial write errors with `partial write:`
NOTE: parser errors (via http API) are also transformed into
PartialWriteError
2017-04-28 11:00:14 -07:00
Karl 616224695a Make services/httpd golintable 2017-04-03 18:32:36 -04:00
Edd Robinson 255992f5ec Reduce cost of admin user check
This commits adds a caching mechanism to the Data object, such that
when large numbers of users exist in the system, the cost of determining
if there is at least one admin user will be low.

To ensure that previously marshalled Data objects contain the correct
cached admin user value, we exhaustively determine if there is an admin
user present whenever we unmarshal a Data object.
2017-03-20 12:04:03 +00:00
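
A small sketch of the caching idea, using stand-in types rather than the real meta.Data and meta.UserInfo: keep a boolean alongside the user list and recompute it exhaustively whenever the object is unmarshalled or the users change.

    package httpd

    // user and data are illustrative stand-ins for meta.UserInfo and meta.Data;
    // only the caching idea from the commit above is shown.
    type user struct {
        Name  string
        Admin bool
    }

    type data struct {
        Users []user

        adminExists bool // cached so hot paths avoid scanning every user
    }

    // hasAdminUser answers from the cache, which is kept up to date whenever
    // the user list changes or the object is unmarshalled.
    func (d *data) hasAdminUser() bool { return d.adminExists }

    // refreshAdminCache exhaustively recomputes the flag, e.g. right after
    // unmarshalling a previously serialized object.
    func (d *data) refreshAdminCache() {
        d.adminExists = false
        for _, u := range d.Users {
            if u.Admin {
                d.adminExists = true
                return
            }
        }
    }
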
Jason Wilder e62c72d1f9 Merge branch '1.2' into jw-merge-12 2017-03-14 15:15:50 -06:00
Jason Wilder c9740f753b Disable max-row-limit by default
max-row-limit had been set to 10000 since 1.0, but due to a bug it was
effectively 0 (disabled). 1.2 fixed this bug via #7368, but the fix
caused a breaking change for Grafana and for any users upgrading from
<1.2 who had not manually disabled the setting.
2017-03-14 12:47:32 -06:00
Mark Rushakoff 535cf597f1 Report subset of config values in SHOW DIAGNOSTICS
This includes hand-selected config settings that are safe to expose and
not expected to include any kind of secrets.

Fixes #7821
2017-03-14 11:34:19 -07:00
Mark Rushakoff 601cbcd084 Merge branch '1.2' into mr-merge-12 2017-02-17 16:14:22 -08:00
Jonathan A. Sternberg 2fe48d6781 Rename zap import back to github.com/uber-go/zap
They rebased a revision we were previously relying upon that allowed us
to use the vanity name, so we are reverting to an older version with
the old import path.
2017-02-17 17:17:22 -06:00
Mark Rushakoff 53699aa24f Allow non-admin users to execute SHOW DATABASES
This commit introduces a new interface type, influxql.Authorizer, that
is passed as part of a statement's execution context and determines
whether the context is permitted to access a given database. In the
future, the Authorizer interface may be expanded to other resources
besides databases. In this commit, the Authorizer interface is
specifically used to determine which databases are returned when
executing SHOW DATABASES.

When HTTP authentication is enabled, the existing meta.UserInfo struct
implements Authorizer, meaning admin users can SHOW every database, and
non-admin users can SHOW only databases for which they have read and/or
write permission.

When HTTP authentication is disabled, all databases are visible through
SHOW DATABASES.

This addresses a long-standing issue where Chronograf or Grafana would
be unable to list databases if the logged-in user did not have admin
privileges.

Fixes #4785.
2017-02-13 08:59:16 -08:00
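
A sketch of the interface idea, with an illustrative method signature (the real influxql.Authorizer may differ) and the kind of filtering SHOW DATABASES performs with it:

    package httpd

    // Authorizer is a sketch of the interface described above; the real
    // influxql.Authorizer may have a different signature.
    type Authorizer interface {
        // AuthorizeDatabase reports whether the caller may see the database.
        AuthorizeDatabase(name string) bool
    }

    // allowAll is used when HTTP authentication is disabled: every database
    // is visible.
    type allowAll struct{}

    func (allowAll) AuthorizeDatabase(string) bool { return true }

    // filterDatabases keeps only the databases the authorizer permits, which
    // is essentially what SHOW DATABASES does per this commit.
    func filterDatabases(names []string, auth Authorizer) []string {
        var out []string
        for _, name := range names {
            if auth.AuthorizeDatabase(name) {
                out = append(out, name)
            }
        }
        return out
    }
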
Carlo Alberto Ferraris 005e480b55 [influxd] add utility functions to make it harder to misuse gzipWriterPool 2017-02-09 09:31:11 +09:00
Carlo Alberto Ferraris a6a7782e04 [influxd] Use a sync.Pool to reuse gzip.Writer across requests
This brings alloc_space down from ~20200M to ~10700M in a run of
go test ./cmd/influxd/run -bench=Server -memprofile=mem.out -run='^$'
2017-02-07 05:23:58 +09:00
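
A minimal sketch of the pooling pattern, including the kind of helper functions the follow-up commit above adds to make misuse harder; names are illustrative:

    package httpd

    import (
        "compress/gzip"
        "io"
        "io/ioutil"
        "sync"
    )

    // gzipWriterPool reuses gzip.Writer values across requests instead of
    // allocating a fresh one per response.
    var gzipWriterPool = sync.Pool{
        New: func() interface{} { return gzip.NewWriter(ioutil.Discard) },
    }

    // getGzipWriter takes a pooled writer and points it at the current response.
    func getGzipWriter(w io.Writer) *gzip.Writer {
        gz := gzipWriterPool.Get().(*gzip.Writer)
        gz.Reset(w)
        return gz
    }

    // putGzipWriter flushes the writer and returns it to the pool for reuse.
    func putGzipWriter(gz *gzip.Writer) {
        gz.Close()
        gzipWriterPool.Put(gz)
    }
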
Edd Robinson 366223140d Add system information to /debug/vars 2017-01-30 18:23:21 +00:00
Edd Robinson feb7a2842c Use unbuffered error channels in tests 2017-01-17 10:53:15 -08:00
Edd Robinson 9d30ee0a6b Fix subtle bugs and remove dead code from services 2017-01-17 09:47:34 -08:00
Mark Rushakoff 218fc3890d Update godoc for services
The admin service was deliberately skipped due to it being deprecated.
2016-12-30 18:03:01 -08:00
Jonathan A. Sternberg ec57108520 Use proper uber-go/zap import path
It looks like the real import path to the project is go.uber.org/zap
instead of github.com/uber-go/zap since the example in the project
references that path.
2016-12-15 08:54:14 -06:00
Jonathan A. Sternberg 21502a39e8 Switch logging to use structured logging everywhere
The logging library has been switched to use uber-go/zap. While the
logging has been changed to use structured logging, this commit does not
change any of the logging statements to take advantage of the new
structured log or new log levels. Those changes will come in future
commits.
2016-12-14 10:45:15 -06:00
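
For reference, a small structured-logging sketch using zap; it uses the modern go.uber.org/zap constructors, which may differ from the version vendored at the time of this commit (note the import-path churn in the commits above):

    package main

    import "go.uber.org/zap"

    func main() {
        // Structured logging with zap: fields are typed key/value pairs rather
        // than formatted strings. Constructor names follow the modern
        // go.uber.org/zap API and may not match the version vendored here.
        logger, err := zap.NewProduction()
        if err != nil {
            panic(err)
        }
        defer logger.Sync()

        logger.Info("http request served",
            zap.String("method", "POST"),
            zap.String("path", "/write"),
            zap.Int("status", 204),
        )
    }
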
Jonathan A. Sternberg 825aa07877 Use X-Forwarded-For IP address in HTTP logger if present 2016-12-01 09:16:36 -06:00
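
A sketch of the header check, assuming the usual comma-separated X-Forwarded-For format; the helper name is illustrative:

    package httpd

    import (
        "net/http"
        "strings"
    )

    // clientIP returns the address to log: the first entry of X-Forwarded-For
    // when the header is present, otherwise the connection's remote address.
    func clientIP(r *http.Request) string {
        if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
            // The header may contain a comma-separated proxy chain; the
            // originating client is the first entry.
            return strings.TrimSpace(strings.Split(xff, ",")[0])
        }
        return r.RemoteAddr
    }
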
Jonathan A. Sternberg b4db76cee2 Introduce syntax for marking a partial response with chunking
The `partial` tag has been added to the JSON response of a series and
the result so that a client knows when more of the series or result will
be sent in a future JSON chunk.

This helps interactive clients who don't want to wait for all of the
data to know if it is done processing the current series or the current
result. Previously, the client had to guess if the next chunk would
refer to the same result or a new result and it had to match the name
and tags of the two series to know if they were the same series. Now,
the client just needs to check the `partial` field included with the
response to know if it should expect more.

Fixed `max-row-limit` so it counts rows instead of results and it
truncates the response when the `max-row-limit` is reached.
2016-11-22 11:16:22 -06:00
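
The structs below sketch the shape of a chunked JSON response carrying the `partial` flag; apart from `partial` itself, field names are illustrative rather than copied from the InfluxDB source:

    package httpd

    // seriesChunk and resultChunk sketch a chunked JSON response; "partial"
    // tells the client that more of the series or result follows later.
    type seriesChunk struct {
        Name    string            `json:"name"`
        Tags    map[string]string `json:"tags,omitempty"`
        Columns []string          `json:"columns"`
        Values  [][]interface{}   `json:"values"`
        Partial bool              `json:"partial,omitempty"` // more of this series follows in a later chunk
    }

    type resultChunk struct {
        StatementID int           `json:"statement_id"`
        Series      []seriesChunk `json:"series"`
        Partial     bool          `json:"partial,omitempty"` // more of this result follows in a later chunk
    }
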
Jonathan A. Sternberg 64c2d704da Avoid deadlock when max-row-limit is hit
When the `max-row-limit` was hit, the goroutine reading from the results
channel would stop reading from the channel, but it didn't signal to the
sender that it was no longer reading from the results. This caused the
sender to continue trying to send results even though nobody would ever
read it and this created a deadlock.

Include an `AbortCh` on the `ExecutionContext` that will signal when
results are no longer desired so the sender can abort instead of
deadlocking.
2016-11-08 13:12:28 -06:00
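
The core of the fix is the classic select-on-two-channels pattern sketched here; the function and channel names are illustrative, not the actual ExecutionContext API:

    package httpd

    // sendResult shows the pattern: select on both the results channel and an
    // abort channel so the sender gives up instead of blocking forever once
    // the reader (e.g. after max-row-limit is hit) stops consuming results.
    func sendResult(results chan<- string, abort <-chan struct{}, r string) bool {
        select {
        case results <- r:
            return true
        case <-abort:
            return false // reader is gone; stop producing
        }
    }
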
Cory LaNou 5b72b874d8 remove ProcessContinousQueries from httpd endpoint 2016-10-20 11:22:36 -05:00
zhexuany 931a6c6d08 fixed two typos 2016-10-13 13:43:38 +08:00
Jason Wilder bbecb3f03d Drop points that would exceed limits
This changes the behavior of the max-series-per-database and
max-values-per-tag limits to drop points that would exceed the limits
and allow the remaining points to be written. Previously, the whole
batch would fail and return a 500 error to the client.

This now writes the allowed points and returns a `partial write`
error indicating some of the points were dropped, how many were
dropped, and one of the problem measurements and tags.
2016-10-10 11:42:15 -06:00
Edd Robinson 537316eb7b Ensure pprof-enabled config option is respected 2016-09-30 18:41:38 +01:00
Jonathan A. Sternberg 3afdf3cd94 Merge tag 'v1.0.1' 2016-09-27 17:53:33 -05:00
Jonathan A. Sternberg ab4bca8495 Report cmdline and memstats in /debug/vars
When we refactored expvar, the cmdline and memstats sections were not
re-added to the output. This adds them back if they can be found inside
`expvar`.

It also stops sorting the statistics before output so they are returned
faster; JSON doesn't need the keys sorted, and sorting added enough
latency to hurt performance.
2016-09-09 14:32:43 -05:00
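
A sketch of re-adding the standard sections by looking them up in expvar; the handler shape is illustrative and the real endpoint emits many more statistics:

    package httpd

    import (
        "encoding/json"
        "expvar"
        "net/http"
    )

    // serveCmdlineAndMemstats sketches the lookup described above: if expvar
    // still has the standard cmdline and memstats sections registered, include
    // them in the output. The real endpoint merges these with its own stats.
    func serveCmdlineAndMemstats(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        out := make(map[string]json.RawMessage)
        for _, key := range []string{"cmdline", "memstats"} {
            if v := expvar.Get(key); v != nil {
                // expvar values already render themselves as JSON.
                out[key] = json.RawMessage(v.String())
            }
        }
        json.NewEncoder(w).Encode(out)
    }
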
Edd Robinson 90ff713f21 Fix base64 encoding issue in stats
Fixes #7177.
2016-08-22 15:21:31 +01:00
Jonathan A. Sternberg 9b28f7704f Merge pull request #7171 from influxdata/js-1.0-merge-response-formatter-bugfix
Merge branch '1.0'
2016-08-17 19:13:09 -05:00
Jonathan A. Sternberg 5e468fac0b Merge branch '1.0' into master 2016-08-17 13:27:02 -05:00
Jonathan A. Sternberg d2746ee8f2 The number of bytes recorded when using chunking was off by one
Previously, we implicitly added a newline and had to add one to the
number of bytes transmitted because we added that byte. That was removed
at some point and the metric was not updated to record the correct
value.
2016-08-17 11:54:19 -05:00
Jonathan A. Sternberg 71b54768d0 Remove redundant code the from response formatter and fix CloseNotify
The query killing functionality depends on the ResponseWriter exposing a
CloseNotify method. Since we wrap the http.ResponseWriter, the new
struct does not have that method and the HTTP handler would skip past
calling that method.

Instead of duplicating `Flush()` and `CloseNotify()` for every response
formatter, we will unify all of that under a single struct and create
formatters instead.

Also, fixes a bug where the header information from a query would not be
returned until some other data was returned with it because of
buffering and another bug in the gzipResponseWriter that wouldn't flush
the actual underlying ResponseWriter.
2016-08-16 15:54:03 -05:00
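
A sketch of a wrapping ResponseWriter that keeps Flush and CloseNotify reachable by forwarding to the underlying writer when it supports them; names are illustrative:

    package httpd

    import "net/http"

    // responseWriter decorates http.ResponseWriter while keeping Flush and
    // CloseNotify reachable, which is what the query killing path relies on.
    type responseWriter struct {
        http.ResponseWriter
    }

    func (w *responseWriter) Flush() {
        if f, ok := w.ResponseWriter.(http.Flusher); ok {
            f.Flush()
        }
    }

    func (w *responseWriter) CloseNotify() <-chan bool {
        if cn, ok := w.ResponseWriter.(http.CloseNotifier); ok {
            return cn.CloseNotify()
        }
        // Fall back to a channel that never fires if the underlying writer
        // does not support close notification.
        return make(chan bool)
    }
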
Ben Johnson 8aa224b22d reduce memory allocations in index
This commit changes the index to point to index data in the shards
instead of keeping it in-memory on the heap.
2016-08-16 14:09:00 -06:00
Edd Robinson cebeda817c Update jwt-go to v3 2016-08-12 17:35:57 +01:00
Jonathan A. Sternberg f58a50c231 Allow queries to be uploaded as a file rather than forcing a text parameter
The query can be uploaded from a file using `multipart/form-data` and
setting the file name to `q`. An example of using curl to execute an
async query would be:

    curl -F "q=@database.iql" -F "async=true" http://localhost:8086/query
2016-08-10 15:34:04 -05:00
Jonathan A. Sternberg 760767b033 Support running a command in async mode
It will return a 204 No Content as long as the query is accepted
(immediate errors will be returned, but not individual errors with
specific queries). The only way to kill the query is by using the task
manager.
2016-08-10 15:34:04 -05:00
Jonathan A. Sternberg a4e49963f5 Implement text/csv content encoding for the response writer
CSV doesn't offer a way to separate different sheets from each other and
it doesn't really have a standard format. We separate sheets with a
newline so they can be imported into something like Excel or LibreOffice
more easily.

The number of columns for each sheet is inferred from the first returned
row in each statement since they should all be the same.
2016-08-10 15:15:33 -05:00
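
A sketch of the sheet separation described above, assuming each statement's rows arrive as a slice of string records; names and data shapes are illustrative:

    package httpd

    import (
        "encoding/csv"
        "io"
    )

    // writeSheets sketches the separation described above: each statement's
    // rows become one "sheet", and sheets are separated by a blank line so
    // they can be pulled apart in Excel or LibreOffice.
    func writeSheets(w io.Writer, sheets [][][]string) error {
        cw := csv.NewWriter(w)
        for i, rows := range sheets {
            if i > 0 {
                cw.Flush() // finish the previous sheet before the separator
                if _, err := io.WriteString(w, "\n"); err != nil {
                    return err
                }
            }
            for _, row := range rows {
                if err := cw.Write(row); err != nil {
                    return err
                }
            }
        }
        cw.Flush()
        return cw.Error()
    }
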
kun 6945655be2 add support for unix socket binding service 2016-08-11 02:20:54 +08:00
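
A minimal sketch of serving HTTP over a unix domain socket with the standard library; the socket path is illustrative:

    package main

    import (
        "net"
        "net/http"
    )

    func main() {
        // Bind the HTTP service to a unix domain socket instead of a TCP port.
        // The socket path is illustrative.
        ln, err := net.Listen("unix", "/var/run/influxdb.sock")
        if err != nil {
            panic(err)
        }
        defer ln.Close()

        http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok\n"))
        }))
    }
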