* build: upgrade to go1.18 (#23250)
* chore: upgrade to Go 1.18.6
* fix: remove unused directory
* chore: upgrade to Go 1.18.7
Co-authored-by: Dane Strandboge <dstrandboge@influxdata.com>
The dgrijalva/jwt-go project is no longer maintained[1] and they have
transferred ownership to golang-jwt/jwt[2][3][4]. We should move to the
supported golang-jwt/jwt.
The following was performed:
1. update services/httpd/handler*.go to import golang-jwt/jwt (see the
   sketch below)
2. revert testcase string comparison changes from 225bcecd (back to v3)
3. go mod edit -require github.com/golang-jwt/jwt@v3.2.1+incompatible
4. go mod edit -droprequire github.com/dgrijalva/jwt-go
5. go mod tidy # see note
6. go clean ./... && go build ./...
7. go test ./...
Note: 'go mod tidy' had unrelated changes (perhaps it wasn't run in
recent commits) so I removed the unrelated delta to keep this PR focused
on the dgrijalva/jwt-go to golang-jwt/jwt changes.
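For illustration, a minimal sketch of the import swap from step 1, assuming the handler aliases the package as jwt; the real services/httpd code may differ:

```go
package httpd

import (
	// Previously: jwt "github.com/dgrijalva/jwt-go"
	jwt "github.com/golang-jwt/jwt" // v3.2.1+incompatible is a drop-in fork
)

// parseToken is a hypothetical helper showing that call sites stay the same:
// only the import path changes, since golang-jwt/jwt v3 keeps the v3 API.
func parseToken(tokenString string, secret []byte) (*jwt.Token, error) {
	return jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		return secret, nil
	})
}
```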
References:
[1] dgrijalva/jwt-go#462
[2] dgrijalva/jwt-go#463
[3] https://github.com/dgrijalva/jwt-go/blob/master/README.md
[4] https://github.com/golang-jwt/jwt
[5] https://github.com/influxdata/influxdb/issues/21927
The HTTP handler logs URLs, but not body values for POST requests.
This means that queries sent by GET are logged, because the query
is in the URL, but queries sent by POST have no query text in the
log. This feature prints all the key-value pairs in the post body,
which includes the query text, except passwords, which are redacted.
Closes https://github.com/influxdata/influxdb/issues/20653
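A hedged sketch of the redaction idea; the helper name, the redacted keys, and the log format are assumptions, not the shipped handler code:

```go
package httpd

import (
	"net/url"
	"sort"
	"strings"
)

// logFormValues renders the POST form's key-value pairs for the access log,
// replacing password values with a placeholder so credentials never reach
// the log. Illustrative only.
func logFormValues(form url.Values) string {
	pairs := make([]string, 0, len(form))
	for k, vals := range form {
		for _, v := range vals {
			if strings.EqualFold(k, "p") || strings.EqualFold(k, "password") {
				v = "[REDACTED]"
			}
			pairs = append(pairs, k+"="+v)
		}
	}
	sort.Strings(pairs) // deterministic ordering keeps log lines readable
	return strings.Join(pairs, "&")
}
```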
* Revert "fix(error): unsupported value: +Inf" error not handled gracefully (#20250)"
This reverts commit 6ac0bb3fe3.
* fix: No infinite recursion on write error
If there is some error writing to the response writer, we would
previously have recursed infinitely.
Re-closes https://github.com/influxdata/influxdb/issues/20249
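A sketch of the shape of the fix, using a hypothetical httpError helper rather than the real handler code: when writing the error response itself fails, fall back to a plain-text error instead of re-entering the error writer.

```go
package httpd

import (
	"encoding/json"
	"net/http"
)

type errorResponse struct {
	Error string `json:"error"`
}

// httpError writes an error as JSON. If marshalling fails, it writes a
// plain-text error and returns instead of calling itself again, which is
// what previously produced the infinite recursion.
func httpError(w http.ResponseWriter, msg string, code int) {
	b, err := json.Marshal(errorResponse{Error: msg})
	if err != nil {
		http.Error(w, msg, code) // terminal fallback, no re-entry
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	w.Write(b)
}
```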
* chore: Update flux to 0.67
* chore: Builds against 0.68 flux
* chore: Builds against 0.80.0
* chore: Builds against 0.90.0
* chore: Everything builds on latest flux
* chore: goimports fixed
* chore: fix tests locally
* chore: fix CI dockerfiles
* chore: clean up some unused code
* chore: remove flux repl and Spec in flux query json
* chore: port flux end to end tests from 2.x
* chore: fix up goimports
* chore: remove 32 bit build support
Extending the context instead of fixing the API breaks type safety.
For tracking the number of points / values written, it is much clearer
to pass an explicit tracker.
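As a sketch of what "an explicit tracker" could look like compared with reading counters back out of context.Value (names are illustrative):

```go
package tsdb

// WriteTracker is an illustrative, type-safe alternative to carrying counters
// through context.Value: the caller passes it explicitly and reads the totals
// back after the write.
type WriteTracker struct {
	PointsWritten int64
	ValuesWritten int64
}

func (t *WriteTracker) AddPoints(n int) { t.PointsWritten += int64(n) }
func (t *WriteTracker) AddValues(n int) { t.ValuesWritten += int64(n) }
```

A write path would then accept a *WriteTracker parameter, and the compiler checks the plumbing instead of a type assertion at runtime.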
JSON marshalling errors should be returned properly formatted in JSON
like other errors. This fix formats marshalling errors the same way
influxdb formats other query errors.
Fixes https://github.com/influxdata/influxdb/issues/20249
* fix: Upgrade version of jwt-go package to v4.0.0
This commit updates the dependencies for influxdb to require v4.0.0-preview1 of
the jwt-go package. This required updating the go.mod and go.sum files as well
as any source file that directly imported that package.
Prior to this commit, the TestHandler_Query_Auth() tests would fail as it
checked for specific error strings returned by the jwt-go package.
Version 4.0.0-preview1 of the package changed the verbiage of those errors a
bit. This patch updates the test to detect the new error string.
* feat: generate modern profiles
Prior to this commit, influxd was writing legacy profiling data which
often (always?) required an accompanying executable to use.
This commit instructs influxd to write profiles in the new format which
can be examined without a binary.
While we're at it, this commit also adds the allocs and threadcreate
profiles.
Finally, this patch also changes the format of the downloaded tar in the
following ways:
* The profiles are added to a profiles/ directory -- so instead of
extracting the profiles into your current directory, they're placed in
a "profiles" directory.
* This commit adds the .pb.gz extension to each of the files since
they're gzipped protobuf files and not .txt.
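A sketch of how the new-format profiles can be written with the standard runtime/pprof API; the directory layout and profile list mirror the description above but are not copied from influxd:

```go
package debug

import (
	"os"
	"path/filepath"
	"runtime/pprof"
)

// writeProfiles writes each named profile in the modern gzipped-protobuf
// format (debug=0), which `go tool pprof` can read without the binary.
func writeProfiles(dir string) error {
	for _, name := range []string{"heap", "goroutine", "allocs", "threadcreate", "block", "mutex"} {
		f, err := os.Create(filepath.Join(dir, name+".pb.gz"))
		if err != nil {
			return err
		}
		// debug=0 selects protobuf output; debug>0 is the legacy text format.
		if err := pprof.Lookup(name).WriteTo(f, 0); err != nil {
			f.Close()
			return err
		}
		if err := f.Close(); err != nil {
			return err
		}
	}
	return nil
}
```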
* feat(engine/tsm1): Add WritePointsWithContext()
Add WritePointsWithContext() and make WritePoints() a thin wrapper for
it.
The purpose is to add statistics context values that we'll use to
propagate the number of fields and points written up the call chain.
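A minimal sketch of the thin-wrapper pattern, with stubbed types since the real engine signatures are not reproduced here:

```go
package tsm1

import "context"

// Point stands in for models.Point; the real signatures differ.
type Point interface{}

// Engine is reduced to the two methods that matter for the sketch.
type Engine struct{}

func (e *Engine) WritePointsWithContext(ctx context.Context, points []Point) error {
	// ... the actual write path, which can read stats values from ctx ...
	return nil
}

// WritePoints preserves the existing API as a thin wrapper around the
// context-aware variant.
func (e *Engine) WritePoints(points []Point) error {
	return e.WritePointsWithContext(context.Background(), points)
}
```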
* feat(tsdb): Add WriteToShardWithContext()
When applied, this patch adds WriteToShardWithContext() and wraps it
with WriteToShard() to preserve the API.
The purpose of this addition is to propagate a context.Context value
to Shard.WritePointsWithContext().
* feat(tsdb/shard): Add WritePointsWithContext()
The purpose of adding WritePointsWithContext() is to propagate context
values down to the engine code and to propagate statistics back up to
callers via context values.
This patch also adds values written statistics to the shard.
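A hedged sketch of how the statistics might travel back up through the context; the key and stat names are illustrative, not the identifiers used in the patch:

```go
package tsdb

import "context"

// ContextKey is the type used for keys stored in a write context.
type ContextKey int

const (
	StatPointsWritten ContextKey = iota // illustrative names
	StatValuesWritten
)

// recordPointsWritten shows the idea: the caller seeds the context with
// *int64 counters, and the shard increments them as it writes, so totals
// flow back up without changing the write path's return values.
func recordPointsWritten(ctx context.Context, n int) {
	if v, ok := ctx.Value(StatPointsWritten).(*int64); ok {
		*v += int64(n)
	}
}
```

On the caller's side (for example the HTTP handler in the next commit), the context would be seeded with something like context.WithValue(ctx, tsdb.StatValuesWritten, &nValues) before the write and the counter read afterwards.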
* feat(http): Gather values written stats
WritePointsWithContext() was added to propagate context values down to
the engine and communicate stats to the caller.
* refactor: Change MetricKey to ContextKey
This patch gives the type we're using for context keys a better name.
This patch adds the [http.headers] subsection to the configuration file
that allows users to supply headers that will be returned in all HTTP
responses.
Applying this patch will:
* Add code to implement new configuration items.
* Add test to ensure configuration is properly parsed.
* Add test to ensure HTTP response headers are set.
* Update sample configuration file.
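A sketch of the configuration and response plumbing; the struct layout and field names are assumptions for illustration, not the actual implementation:

```go
package httpd

import "net/http"

// Config shows only the hypothetical headers field. In influxdb.conf it would
// be populated from a [http.headers] table of key = "value" pairs.
type Config struct {
	HTTPHeaders map[string]string `toml:"headers"`
}

// applyHeaders sets every configured header on the response before the body
// is written, so all HTTP responses carry them.
func applyHeaders(w http.ResponseWriter, headers map[string]string) {
	for k, v := range headers {
		w.Header().Set(k, v)
	}
}
```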
Instead of type converting the Body.Bytes() to a string, we can simply
call Body.String().
While we're at it, this patch also uses a simple string comparison
instead of cmp.Equal() for two strings.
According to the docs, they're equivalent.
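Roughly, the test-side change amounts to the following (illustrative snippet, not the real test):

```go
package httpd

import (
	"net/http/httptest"
	"testing"
)

func TestBodyComparison(t *testing.T) {
	w := httptest.NewRecorder()
	w.Body.WriteString(`{"results":[]}`)

	// Before: cmp.Equal(string(w.Body.Bytes()), want)
	// After: Body.String() plus a plain string comparison.
	if got, want := w.Body.String(), `{"results":[]}`; got != want {
		t.Errorf("unexpected body: got %q, want %q", got, want)
	}
}
```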
This commit quiets staticcheck's warnings about "unnecessary use of
fmt.Sprintf" and "unnecessary use of fmt.Sprint".
Prior to this commit we were wrapping simple constant strings without
any formatting verbs with fmt.Sprintf().
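A hypothetical before/after pair showing the pattern staticcheck flags:

```go
package httpd

import (
	"errors"
	"fmt"
)

// fmt.Sprintf around a constant string with no formatting verbs is unnecessary.
func errBefore() error { return errors.New(fmt.Sprintf("missing parameter")) }
func errAfter() error { return errors.New("missing parameter") }
```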
Have AuthorizerIsOpen() check whether a given authorizer has an
AuthorizeUnrestricted() method and, if so, call that to provide the
result of AuthorizerIsOpen().
Otherwise we check whether the supplied Authorizer is nil.
This preserves the fast-path for checking tag-level (and other) tsdb
operations.
This simplifies how we handle such authorizers by handling this case in
only one place.
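A sketch of the described type assertion; the authorizer interface here is a stand-in, only the shape of the check matters:

```go
package query

// FineAuthorizer stands in for the real authorizer interface.
type FineAuthorizer interface {
	AuthorizeDatabase(name string) bool
}

// AuthorizerIsOpen lets an authorizer answer for itself via
// AuthorizeUnrestricted(), and otherwise falls back to the nil check.
func AuthorizerIsOpen(a FineAuthorizer) bool {
	if u, ok := a.(interface{ AuthorizeUnrestricted() bool }); ok {
		return u.AuthorizeUnrestricted()
	}
	return a == nil
}
```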
This commit adds a /api/v2/write endpoint that maps the supplied bucket
and org to a v1 database and retention policy.
* Add AllowedOrgs to httpd Config type.
* Add /api/v2/write handler
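A hedged sketch of the bucket-to-database/retention-policy mapping; the shipped handler also validates AllowedOrgs and returns v2-style errors:

```go
package httpd

import (
	"errors"
	"strings"
)

// bucket2dbrp splits a v2 bucket of the form "db/rp" (or just "db") into a v1
// database and retention policy. Illustrative only.
func bucket2dbrp(bucket string) (db, rp string, err error) {
	parts := strings.SplitN(bucket, "/", 2)
	if parts[0] == "" {
		return "", "", errors.New("bucket name is required")
	}
	if len(parts) == 1 {
		return parts[0], "", nil // empty rp selects the database's default policy
	}
	if parts[1] == "" {
		return "", "", errors.New("bucket must be of the form db/rp")
	}
	return parts[0], parts[1], nil
}
```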
If a series was split by the encoder because of chunking and it was
reconstructed by the http handler, it would not reset the partial
indicator for the series to indicate if the series was still partial or
not. That meant that a result returning more than 10,000 values in a
single series with chunking disabled would report the series as
partial even though it was not.
This fixes it so the handler now correctly sets the partial attribute of
the series to indicate if the series is still partial or not. This was
done when merging results, but was not done with series.
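A sketch of the idea behind the fix, with a simplified stand-in for models.Row: when fragments of a chunked series are stitched back together, the merged row must take its partial flag from the last fragment rather than keeping the first fragment's value.

```go
package httpd

// row is a simplified stand-in for the real series row type.
type row struct {
	Values  [][]interface{}
	Partial bool
}

// mergeRows appends the next fragment's values onto the accumulated series
// and carries forward that fragment's Partial flag, so a fully reassembled
// series no longer reports partial=true.
func mergeRows(acc, next *row) {
	acc.Values = append(acc.Values, next.Values...)
	acc.Partial = next.Partial
}
```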
The flux in influxdb has been upgraded to use v0.33.2. A lot of
interfaces for the storage engine were changed during this so code had
to change to accommodate the new interfaces and remove the old ones.
Included in this commit is a patch file for the changes that were made.
A patch was generated for the following packages:
* `flux/stdlib/influxdata/influxdb`
* `storage/reads`
* `tsdb/cursors`
These are the three packages that are in common with version 2 of the
database and the first of these packages contains the specific
implementations that are used for version 1.
It is very possible that the next time we upgrade this, the patch will
not apply cleanly just like it wouldn't have applied cleanly to this
update. The patch is mostly meant to document exactly what changed
during the copy over to help ensure we don't forget things when adapting
the interfaces.
Add a patch file to hopefully make this easier in the future
This integrates the influxdb 1.x series to the latest version of Flux
and updates the code to use it. It also removes the dependency on
platform and copies the necessary code from storage into the 1.x series
so the dependency is unneeded.
The flux functions specific to 1.x have been moved to the same structure
that flux changed to with having a `stdlib` directory instead of a
`functions` directory. It also adds a `databases()` function that
returns the databases from the meta client.
This commit extends the Prometheus remote write endpoint to drop
unsupported Prometheus values, rather than reject the entire batch.
InfluxDB does not support NaN, -Inf or +Inf, but Prometheus does. The
remote write endpoint will now drop these and write valid values in the
provided batch.
If the user enabled write trace logging (`[http] write-tracing = true`)
then summaries of any dropped values within a batch will be logged.
If a batch of values contains any values that are subsequently dropped,
the returned status code will be `204`.
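A sketch of the value filter, assuming a minimal sample type rather than the real Prometheus remote-write structs:

```go
package prometheus

import "math"

// Sample is a minimal stand-in for a remote-write sample value.
type Sample struct {
	Value float64
}

// dropUnsupported filters out NaN, +Inf and -Inf samples, which InfluxDB
// cannot store, and reports how many were removed so the handler can log a
// summary when write tracing is enabled.
func dropUnsupported(samples []Sample) (kept []Sample, dropped int) {
	kept = samples[:0]
	for _, s := range samples {
		if math.IsNaN(s.Value) || math.IsInf(s.Value, 0) {
			dropped++
			continue
		}
		kept = append(kept, s)
	}
	return kept, dropped
}
```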