* build: upgrade to go1.18 (#23250)
* chore: upgrade to Go 1.18.6
* fix: remove unused directory
* chore: upgrade to Go 1.18.7
Co-authored-by: Dane Strandboge <dstrandboge@influxdata.com>
The dgrijalva/jwt-go project is no longer maintained[1] and they have
transferred ownership to golang-jwt/jwt[2][3][4]. We should move to the
supported golang-jwt/jwt.
The following was performed:
1. update services/httpd/handler*.go to import golang-jwt/jwt (see the sketch below)
2. revert testcase string comparison changes from 225bcecd (back to v3)
3. go mod edit -require github.com/golang-jwt/jwt@v3.2.1+incompatible
4. go mod edit -droprequire github.com/dgrijalva/jwt-go
5. go mod tidy # see note
6. go clean ./... && go build ./...
7. go test ./...
Note: 'go mod tidy' produced unrelated changes (perhaps it had not been
run in recent commits), so I removed the unrelated delta to keep this PR
focused on the dgrijalva/jwt-go to golang-jwt/jwt migration.
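For reference, the import change from step 1 looks roughly like this (a
sketch; the exact files and import alias may differ):

```go
import (
	jwt "github.com/golang-jwt/jwt" // was: jwt "github.com/dgrijalva/jwt-go"
)
```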
References:
[1] dgrijalva/jwt-go#462
[2] dgrijalva/jwt-go#463
[3] https://github.com/dgrijalva/jwt-go/blob/master/README.md
[4] https://github.com/golang-jwt/jwt
[5] https://github.com/influxdata/influxdb/issues/21927
Extending the context instead of fixing the API breaks type safety.
For tracking the number of points and values written, it is much
clearer to pass an explicit tracker, as sketched below.
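For illustration, a minimal sketch of what an explicit tracker could
look like; the WriteTracker name and fields are hypothetical, not the
codebase's actual API:

```go
// WriteTracker is a hypothetical explicit tracker that callers pass
// down the write path instead of smuggling counters through a
// context.Context; the fields stay type-safe and discoverable.
type WriteTracker struct {
	PointsWritten int64
	ValuesWritten int64
}
```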
* fix: Upgrade version of jwt-go package to v4.0.0
This commit updates the dependencies for influxdb to require v4.0.0-preview1 of
the jwt-go package. This required updating the go.mod and go.sum files as well
as any source file that directly imported that package.
Prior to this commit, the TestHandler_Query_Auth() tests would fail
because they checked for specific error strings returned by the jwt-go
package. Version 4.0.0-preview1 of the package changed the wording of
those errors slightly. This patch updates the tests to detect the new
error strings.
* feat(engine/tsm1): Add WritePointsWithContext()
Add WritePointsWithContext() and make WritePoints() a thin wrapper
around it.
The purpose is to add statistics context values that we'll use to
propagate the number of fields and points written up the call chain.
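A minimal sketch of the thin-wrapper pattern, assuming the existing
signature takes a slice of models.Point:

```go
// WritePoints preserves the existing API by delegating to the
// context-aware variant with a background context.
func (e *Engine) WritePoints(points []models.Point) error {
	return e.WritePointsWithContext(context.Background(), points)
}
```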
* feat(tsdb): Add WriteToShardWithContext()
When applied, this patch adds WriteToShardWithContext() and wraps it
with WriteToShard() to preserve the API.
The purpose of this addition is to propagate a context.Context value
to Shard.WritePointsWithContext().
* feat(tsdb/shard): Add WritePointsWithContext()
The purpose of adding WritePointsWithContext() is to propagate context
values down to engine code and to propagate statistics back up to
callers via context.Value.
This patch also adds values-written statistics to the shard.
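A minimal sketch of how statistics can flow back through context
values; the key name and counter type are illustrative, and sync/atomic
is used because writes may happen concurrently:

```go
type contextKey string

const pointsWrittenKey contextKey = "points-written"

func (s *Shard) WritePointsWithContext(ctx context.Context, points []models.Point) error {
	// ... write the points to the engine ...

	// If the caller placed a counter in the context, record how many
	// points were written so the statistic propagates back up.
	if counter, ok := ctx.Value(pointsWrittenKey).(*int64); ok {
		atomic.AddInt64(counter, int64(len(points)))
	}
	return nil
}
```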
* feat(http): Gather values written stats
WritePointsWithContext() was added to propagate context values down to
the engine and communicate stats to the caller.
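On the caller side, the handler seeds the context with a counter and
reads it back after the write; a sketch using the same illustrative key
as above (the stats field is hypothetical):

```go
var pointsWritten int64
ctx := context.WithValue(r.Context(), pointsWrittenKey, &pointsWritten)
if err := h.store.WriteToShardWithContext(ctx, shardID, points); err != nil {
	// handle the write error
}
h.stats.ValuesWritten.Add(atomic.LoadInt64(&pointsWritten)) // hypothetical stat field
```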
* refactor: Change MetricKey to ContextKey
This patch gives the type we're using for context keys a better name.
This patch adds the [http.headers] subsection to the configuration
file, allowing users to supply headers that will be returned in all
HTTP responses (see the sketch after the list below).
Applying this patch will:
* Add code to implement the new configuration items
* Add a test to ensure the configuration is properly parsed
* Add a test to ensure HTTP response headers are set
* Update the sample configuration file
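A minimal sketch of how the parsed headers might be applied to every
response; the Config field name is an assumption:

```go
// addCustomHeaders copies each configured [http.headers] entry onto
// the response. h.Config.HTTPHeaders is a hypothetical
// map[string]string populated from the new subsection.
func (h *Handler) addCustomHeaders(w http.ResponseWriter) {
	for name, value := range h.Config.HTTPHeaders {
		w.Header().Set(name, value)
	}
}
```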
Have AuthorizerIsOpen() check whether a given authorizer has an
AuthorizeUnrestricted() method and, if so, call that to provide the
result of AuthorizerIsOpen().
Otherwise we check whether the supplied Authorizer is nil.
This preserves the fast path for checking tag-level (and other) tsdb
operations.
It also simplifies how we handle such authorizers by concentrating
this case in only one place.
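A minimal sketch of the described behavior, using an anonymous
interface assertion (the exact interface shape in the codebase may
differ):

```go
// AuthorizerIsOpen lets the authorizer report that it is unrestricted,
// and otherwise falls back to the nil check.
func AuthorizerIsOpen(a Authorizer) bool {
	if u, ok := a.(interface{ AuthorizeUnrestricted() bool }); ok {
		return u.AuthorizeUnrestricted()
	}
	return a == nil
}
```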
This commit adds a /api/v2/write endpoint that maps the supplied bucket
and org to a v1 database and retention policy (sketched below).
* Add AllowedOrgs to httpd Config type.
* Add /api/v2/write handler
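A minimal sketch of the mapping; the "db/rp" bucket naming convention
shown here is an assumption:

```go
// bucketToDBRP splits a v2 bucket name of the form "db/rp" into a v1
// database and retention policy; the rp portion is optional.
func bucketToDBRP(bucket string) (db, rp string) {
	parts := strings.SplitN(bucket, "/", 2)
	db = parts[0]
	if len(parts) == 2 {
		rp = parts[1]
	}
	return db, rp
}
```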
If a series was split by the encoder because of chunking and then
reconstructed by the http handler, the handler would not reset the
partial indicator to reflect whether the series was still partial.
That meant a result returning more than 10,000 values in a single
series with chunking disabled would report the series as partial when
it was not.
This fixes the handler so it now correctly sets the partial attribute
of the series. This was already done when merging results, but not for
series.
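A minimal sketch of the fix with hypothetical variable names: when the
handler appends a chunk to a series it has already seen, it must also
carry over that chunk's partial flag:

```go
// merged is the series reconstructed so far; row is the latest chunk.
merged.Values = append(merged.Values, row.Values...)
merged.Partial = row.Partial // previously left at the first chunk's value
```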
The flux dependency in influxdb has been upgraded to v0.33.2. Many of
the storage engine interfaces changed during this upgrade, so code had
to change to accommodate the new interfaces and remove the old ones.
Included in this commit is a patch file for the changes that were made.
A patch was generated for the following packages:
* `flux/stdlib/influxdata/influxdb`
* `storage/reads`
* `tsdb/cursors`
These are the three packages that are in common with version 2 of the
database and the first of these packages contains the specific
implementations that are used for version 1.
It is very possible that the next time we upgrade this, the patch will
not apply cleanly, just as it would not have applied cleanly to this
update. The patch is mostly meant to document exactly what changed
during the copy-over, to help ensure we don't forget things when
adapting the interfaces.
Add a patch file to hopefully make this easier in the future
This integrates the influxdb 1.x series with the latest version of Flux
and updates the code to use it. It also removes the dependency on
platform and copies the necessary code from storage into the 1.x series
so that dependency is no longer needed.
The flux functions specific to 1.x have been moved to the same
structure that flux changed to, with a `stdlib` directory instead of a
`functions` directory. It also adds a `databases()` function that
returns the databases from the meta client.
This commit extends the Prometheus remote write endpoint to drop
unsupported Prometheus values, rather than reject the entire batch.
InfluxDB does not support NaN, -Inf or +Inf, but Prometheus does. The
remote write endpoint will now drop these and write valid values in the
provided batch.
If the user enabled write trace logging (`[http] write-tracing = true`)
then summaries of any dropped values within a batch will be logged.
If a batch of values contains any values that are subsequently dropped,
the returned status code will be `204`.
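A minimal sketch of the value filter described above:

```go
// isSupportedValue reports whether a Prometheus sample value can be
// stored in InfluxDB, which rejects NaN, +Inf and -Inf.
func isSupportedValue(v float64) bool {
	return !math.IsNaN(v) && !math.IsInf(v, 0)
}
```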
* Add AuthorizeDatabase API to QueryAuthorizer to verify a user has appropriate access to the specified database (see the sketch below)
* Update serverFluxQuery handler to require a meta.User when auth is enabled
* Update Flux createFromSource and createBucketsSource dependencies to require an Authorizer when auth is enabled in configuration
* Update createFromSource to verify read permissions for each bucket specified in a Flux query
* Update BucketsDecoder, which implements the buckets() Flux function, to return buckets that the user has read or write permissions to
* Add unit tests to verify authentication is required for Flux HTTP requests when auth is enabled in configuration
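A minimal sketch of the AuthorizeDatabase check from the first item;
the signature and the meta.User method are assumptions, not the exact
API:

```go
// AuthorizeDatabase returns an error unless the user holds the
// required privilege on the named database.
func (a *QueryAuthorizer) AuthorizeDatabase(u meta.User, priv influxql.Privilege, database string) error {
	if u == nil {
		return errors.New("authorization required: no user provided")
	}
	if !u.AuthorizeDatabase(priv, database) { // assumes meta.User exposes this check
		return fmt.Errorf("user is not authorized to access database %q", database)
	}
	return nil
}
```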
The access log filter allows the access log to be filtered by a status
code pattern. Each pattern is a string of the form `NXX`. At least one
digit must be specified, and up to two Xs can be used. That means a
filter can be an exact status code or a range of them. For example,
`500` would only match the 500 http status code, while `5XX` would
match any status code beginning with the number 5 (categorized as
server errors). The pattern `50X` would also be accepted. Both
uppercase and lowercase Xs are allowed.
Multiple filters can be specified and the log line will be printed if
one of them matches. If there are no filters specified, all status codes
are printed.
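A minimal sketch of the pattern matching described above:

```go
// matchesPattern reports whether a status code matches a filter such
// as "500", "50X" or "5XX"; an X (or x) matches any digit.
func matchesPattern(pattern string, status int) bool {
	s := strconv.Itoa(status)
	if len(pattern) != len(s) {
		return false
	}
	for i := 0; i < len(pattern); i++ {
		if pattern[i] == 'X' || pattern[i] == 'x' {
			continue
		}
		if pattern[i] != s[i] {
			return false
		}
	}
	return true
}
```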