* fix: Upgrade version of jwt-go package to v4.0.0
This commit updates the dependencies for influxdb to require v4.0.0-preview1 of
the jwt-go package. This required updating the go.mod and go.sum files as well
as any source file that directly imported that package.
Prior to this commit, the TestHandler_Query_Auth() tests would fail because
they checked for specific error strings returned by the jwt-go package.
Version 4.0.0-preview1 of the package changed the verbiage of those errors a
bit. This patch updates the test to detect the new error string.
Prior to this commit, we had our own recursive file walker which
required a condition based on whether s.Config.TypesDB pointed to a directory
or a regular file.
This commit replaces our own readdir() with filepath.Walk() and treats
recursing into directories and loading a single file as one case. This
simplifies the code quite a bit.
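A minimal sketch of the new shape, with a hypothetical loadFile() standing in
for the per-file loader:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// loadFile is a hypothetical stand-in for the per-file TypesDB loader.
func loadFile(path string) error {
	fmt.Println("loading", path)
	return nil
}

// loadTypesDB loads every regular file under path. filepath.Walk visits a
// regular file exactly once, so a single file and a directory tree become
// the same case.
func loadTypesDB(path string) error {
	return filepath.Walk(path, func(p string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			return nil // keep descending; only regular files are loaded
		}
		return loadFile(p)
	})
}

func main() {
	if err := loadTypesDB("."); err != nil {
		fmt.Println(err)
	}
}
```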
* feat: generate modern profiles
Prior to this commit, influxd was writing legacy profiling data which
often (always?) required an accompanying executable to use.
This commit instructs influxd to write profiles in the new format which
can be examined without a binary.
While we're at it, this commit also adds the allocs and threadcreate
profiles.
Finally, this patch also changes the format of the downloaded tar in the
following ways:
* The profiles are added to a profiles/ directory -- so instead of
extracting the profiles into your current directory, they're placed in
a "profiles" directory.
* Each file gets a .pb.gz extension, since they're gzipped protobuf
files and not .txt.
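The difference is the debug argument passed to pprof's WriteTo: debug=0 emits
the gzipped-protobuf format, while debug=1 emits the legacy text format. A
minimal sketch of collecting the profiles (file names mirror the tar layout
described above):

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	if err := os.MkdirAll("profiles", 0o755); err != nil {
		log.Fatal(err)
	}
	for _, name := range []string{"goroutine", "heap", "allocs", "threadcreate"} {
		f, err := os.Create("profiles/" + name + ".pb.gz")
		if err != nil {
			log.Fatal(err)
		}
		// debug=0 selects the gzipped protobuf format, which
		// `go tool pprof` can read without the producing binary.
		if err := pprof.Lookup(name).WriteTo(f, 0); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}
```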
* feat(engine/tsm1): Add WritePointsWithContext()
Add WritePointsWithContext() and make WritePoints() a thin wrapper for
it.
The purpose is to add statistics context values that we'll use to
propagate the number of fields and points written to calls up the call
chain.
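The wrapper pattern is roughly the following (a sketch against the real tsm1
types; actual signatures may differ):

```go
package tsm1

import (
	"context"

	"github.com/influxdata/influxdb/models"
)

// WritePoints preserves the old API as a thin wrapper; callers that need
// the write statistics use WritePointsWithContext directly.
func (e *Engine) WritePoints(points []models.Point) error {
	return e.WritePointsWithContext(context.Background(), points)
}
```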
* feat(tsdb): Add WriteToShardWithContext()
When applied, this patch adds WriteToShardWithContext() and wraps it
with WriteToShard() to preserve the API.
The purpose of this addition is to propagate a context.Context value
to Shard.WritePointsWithContext().
* feat(tsdb/shard): Add WritePointsWithContext()
The purpose of adding WritePointsWithContext() is to propagate context
values down to engine code and to propagate statistics up to callers via
context values.
This patch also adds values written statistics to the shard.
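One plausible shape for the plumbing, with hypothetical key and function
names: the caller seeds the context with a destination pointer and the write
path fills it in.

```go
package main

import (
	"context"
	"fmt"
)

// contextKey mirrors the ContextKey type introduced in this series;
// statValuesWritten is a hypothetical key name.
type contextKey int

const statValuesWritten contextKey = iota

// writePointsWithContext stands in for the engine write path: if the caller
// supplied a destination pointer in the context, report the count through it.
func writePointsWithContext(ctx context.Context, valuesWritten int64) {
	if p, ok := ctx.Value(statValuesWritten).(*int64); ok {
		*p = valuesWritten
	}
}

func main() {
	var n int64
	ctx := context.WithValue(context.Background(), statValuesWritten, &n)
	writePointsWithContext(ctx, 42)
	fmt.Println("values written:", n) // values written: 42
}
```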
* feat(http): Gather values written stats
WritePointsWithContext() was added to propagate context values down to
the engine and communicate stats to the caller.
* refactor: Change MetricKey to ContextKey
This patch gives the type we're using for context keys a better name.
This patch adds the [http.headers] subsection to the configuration file
that allows users to supply headers that will be returned in all HTTP
responses.
Applying this patch will:
* Add code to implement new configuration items.
* Add test to ensure configuration is properly parsed.
* Add test to ensure http response headers are set.
* Update sample configuration file.
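A sketch of the new subsection as it might appear in the configuration file
(header names and values are illustrative):

```toml
[http]
  # ... existing [http] settings ...

  # Every key/value pair under [http.headers] is added to all HTTP responses.
  [http.headers]
    X-Header-1 = "Header Value 1"
    Access-Control-Allow-Origin = "*"
```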
When comparing strings in a case-insensitive way, strings.EqualFold() is
(almost?) always faster than comparing the results of strings.ToLower().
In addition, strings.EqualFold() never causes an allocation.
This patch replaces case-insensitive string comparisons that use
strings.ToLower() with a strings.EqualFold() call.
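For example:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	a, b := "Database", "DATABASE"

	// Before: comparing lowered copies allocates two new strings.
	fmt.Println(strings.ToLower(a) == strings.ToLower(b)) // true

	// After: EqualFold compares in place with no allocation.
	fmt.Println(strings.EqualFold(a, b)) // true
}
```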
Instead of type-converting Body.Bytes() to a string, we can simply
call Body.String().
While we're at it, this patch also uses a simple string comparison
instead of cmp.Equal() for two strings.
According to the docs, they're equivalent.
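Since httptest.ResponseRecorder's Body field is a *bytes.Buffer, the two
forms are equivalent:

```go
package main

import (
	"fmt"
	"net/http/httptest"
)

func main() {
	w := httptest.NewRecorder()
	w.Body.WriteString("ok")

	// Before: string(w.Body.Bytes()) converted (and copied) the buffer.
	// After: bytes.Buffer already exposes an equivalent String() method.
	fmt.Println(w.Body.String() == "ok") // true
}
```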
This commit quiets staticcheck's warnings about "unnecessary use of
fmt.Sprintf" and "unnecessary use of fmt.Sprint".
Prior to this commit we were wrapping simple constant strings without
any formatting verbs with fmt.Sprintf().
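The shape of the fix (message text illustrative):

```go
package main

import "fmt"

func main() {
	// Before: a constant string with no formatting verbs gains nothing
	// from Sprintf.
	before := fmt.Sprintf("unable to parse authentication credentials")

	// After: use the constant directly.
	after := "unable to parse authentication credentials"

	fmt.Println(before == after) // true
}
```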
Have AuthorizerIsOpen() check whether a given authorizer has an
AuthorizeUnrestricted() method and, if so, call that to provide the
result of AuthorizerIsOpen().
Otherwise we check if the supplied Authorizer is nil.
This preserves the fast-path for checking tag-level (and other) tsdb
operations.
This simplifies how we handle such authorizers by handling this case in
only one place.
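A sketch of that logic using an anonymous interface assertion (the real
Authorizer interface has more methods):

```go
package query

// Authorizer is reduced to an empty interface for this sketch.
type Authorizer interface{}

// AuthorizerIsOpen reports whether the authorizer imposes no restrictions.
func AuthorizerIsOpen(a Authorizer) bool {
	// If the authorizer can answer for itself, defer to it.
	if u, ok := a.(interface{ AuthorizeUnrestricted() bool }); ok {
		return u.AuthorizeUnrestricted()
	}
	// Otherwise a nil authorizer means unrestricted access.
	return a == nil
}
```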
This is a fix and a bit of a refactor of Service.readRequest().
This commit changes the signature so that it accepts an io.Reader
instead of a net.Conn because there's nothing about the function that
relies on it being a Conn; only a Reader.
We also changed the signature so that it returns a *Request instead of
Request -- this way we don't allocate an entire Request value on error.
This commit also attempts to document the few edge cases that the function has to
handle and one quirk where we expect to read a newline which isn't
accounted for by Request.UploadSize.
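A sketch of the revised shape (heavily abbreviated; the real function
performs more validation and handles the edge cases mentioned above):

```go
package snapshotter

import (
	"encoding/json"
	"io"
)

// readRequest now only needs an io.Reader, and returning *Request avoids
// allocating a full Request value on the error path.
func (s *Service) readRequest(r io.Reader) (*Request, error) {
	req := &Request{}
	if err := json.NewDecoder(r).Decode(req); err != nil {
		return nil, err
	}
	// Quirk noted above: the request header is followed by a newline
	// that is not accounted for by Request.UploadSize.
	return req, nil
}
```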
This commit adds a /api/v2/write endpoint that maps the supplied bucket
and org to a v1 database and retention policy.
* Add AllowedOrgs to httpd Config type.
* Add /api/v2/write handler
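The mapping can be sketched as follows (function name hypothetical): a bucket
of the form "db/rp" selects both the database and retention policy, while a
bare "db" falls back to the default retention policy.

```go
package main

import (
	"fmt"
	"strings"
)

// bucket2dbrp maps a v2 bucket name onto a v1 database and retention policy.
func bucket2dbrp(bucket string) (db, rp string, err error) {
	switch idx := strings.IndexByte(bucket, '/'); idx {
	case -1:
		return bucket, "", nil // default retention policy
	case 0:
		return "", "", fmt.Errorf("bucket name %q is missing a database", bucket)
	default:
		return bucket[:idx], bucket[idx+1:], nil
	}
}

func main() {
	db, rp, _ := bucket2dbrp("telegraf/autogen")
	fmt.Println(db, rp) // telegraf autogen
}
```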
If a series was split by the encoder because of chunking and it was
reconstructed by the http handler, it would not reset the partial
indicator for the series to indicate if the series was still partial or
not. That meant that a result returning more than 10,000 values in a
single series with chunking disabled would say that the series was
partial when it was not.
This fixes it so the handler now correctly sets the partial attribute of
the series to indicate if the series is still partial or not. This was
done when merging results, but was not done with series.
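The essence of the fix, sketched with a hypothetical helper over models.Row:

```go
package httpd

import "github.com/influxdata/influxdb/models"

// mergeSeries (hypothetical helper) appends the next chunk's values onto
// the accumulated row and, crucially, takes the Partial flag from the
// latest chunk instead of leaving the first chunk's value in place.
func mergeSeries(dst, src *models.Row) {
	dst.Values = append(dst.Values, src.Values...)
	dst.Partial = src.Partial
}
```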
The flux in influxdb has been upgraded to use v0.33.2. A lot of
interfaces for the storage engine were changed during this so code had
to change to accommodate the new interfaces and remove the old ones.
Included in this commit is a patch file for the changes that were made.
A patch was generated for the following packages:
* `flux/stdlib/influxdata/influxdb`
* `storage/reads`
* `tsdb/cursors`
These are the three packages that are in common with version 2 of the
database and the first of these packages contains the specific
implementations that are used for version 1.
It is very possible that the next time we upgrade this, the patch will
not apply cleanly just like it wouldn't have applied cleanly to this
update. The patch is mostly meant to document exactly what changed
during the copy over to help ensure we don't forget things when adapting
the interfaces.
A patch file has been added to hopefully make this easier in the future.
When a Flux predicate is transformed to a Store.Read / GroupRead request,
the `_measurement` and `_field` keys are remapped to match 2.x internal
tag keys.
This change does not modify the 2.x behavior, but rather updates the
1.x mapping, so merging future updates from the 2.x storage/reads
package should have fewer conflicts.
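In sketch form (the reserved byte values follow the convention used by the
2.x models package, assumed here):

```go
package reads

// Flux predicates refer to user-facing keys; the 2.x storage engine indexes
// measurement and field under reserved tag keys.
var measurementRemap = map[string]string{
	"_measurement": "\x00", // models.MeasurementTagKey in 2.x
	"_field":       "\xff", // models.FieldKeyTagKey in 2.x
}
```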
This integrates the influxdb 1.x series to the latest version of Flux
and updates the code to use it. It also removes the dependency on
platform and copies the necessary code from storage into the 1.x series
so the dependency is unneeded.
The flux functions specific to 1.x have been moved to the same structure
that flux changed to with having a `stdlib` directory instead of a
`functions` directory. It also adds a `databases()` function that
returns the databases from the meta client.
This commit extends the Prometheus remote write endpoint to drop
unsupported Prometheus values, rather than reject the entire batch.
InfluxDB does not support NaN, -Inf or +Inf, but Prometheus does. The
remote write endpoint will now drop these and write valid values in the
provided batch.
If the user enabled write trace logging (`[http] write-tracing = true`)
then summaries of any dropped values within a batch will be logged.
If a batch of values contains any values that are subsequently dropped,
the returned status code will be `204`.
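The filtering amounts to a per-sample NaN/±Inf check, sketched here (not the
handler's exact code):

```go
package main

import (
	"fmt"
	"math"
)

// dropUnsupported filters out values InfluxDB cannot store; Prometheus
// permits NaN and ±Inf, InfluxDB does not.
func dropUnsupported(values []float64) (kept []float64, dropped int) {
	for _, v := range values {
		if math.IsNaN(v) || math.IsInf(v, 0) {
			dropped++
			continue
		}
		kept = append(kept, v)
	}
	return kept, dropped
}

func main() {
	kept, dropped := dropUnsupported([]float64{1.5, math.NaN(), math.Inf(1)})
	fmt.Println(kept, dropped) // [1.5] 2
}
```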
* Add AuthorizeDatabase API to QueryAuthorizer to verify a user has
appropriate access to the specified database
* Update serverFluxQuery handler to require a meta.User when auth is
enabled
* update Flux createFromSource and createBucketsSource dependencies to
require Authorizer when auth is enabled in configuration
* update createFromSource to verify read permissions for each bucket
specified in a Flux query
* update BucketsDecoder, which implements the buckets() Flux function,
to return buckets that the user has read or write permissions to
* add unit tests to verify authentication is required for Flux HTTP
requests when auth is enabled in configuration
The access log filter allows the access log to be filtered by a status
code pattern. The pattern is a list of strings of the form `NXX`. At
least one number must be specified and up to 2 Xs can be used. That
means the filter can be an exact status code or it can be a range of
them. For example, `500` would only match the 500 http status code while
`5XX` would match any status code beginning with the number 5
(categorized as server errors). The pattern `50X` would also be
accepted. Both uppercase and lowercase Xs are allowed.
Multiple filters can be specified and the log line will be printed if
one of them matches. If there are no filters specified, all status codes
are printed.
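A sketch of the matching rule:

```go
package main

import "fmt"

// matchPattern reports whether an HTTP status code matches a filter such as
// "500", "50X", or "5XX"; X (upper- or lowercase) matches any digit.
func matchPattern(pattern string, status int) bool {
	code := fmt.Sprintf("%03d", status)
	if len(pattern) != len(code) {
		return false
	}
	for i := range pattern {
		if pattern[i] == 'X' || pattern[i] == 'x' {
			continue
		}
		if pattern[i] != code[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matchPattern("5XX", 503)) // true
	fmt.Println(matchPattern("50X", 504)) // true
	fmt.Println(matchPattern("500", 503)) // false
}
```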
* The `Value` function must return a `string`; however, `SeriesTags#Get`
returns `[]byte`, resulting in the reduced expression always becoming
`false`.
* Also added a specific case for the `$` key, which refers to `_value`
in a Flux predicate; it always returns no value.
We rewrite queries for _measurement to _name, but we didn't also
rewrite _m. During some of the code sharing, this code started
receiving _m instead of _measurement. Additionally, we need to start
mapping _f to _field for the same reason.
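In sketch form, the rewrite now covers both spellings:

```go
package main

import "fmt"

// rewriteKey sketches the extended mapping: both spellings of the
// measurement key become the internal `_name`, and `_f` becomes `_field`.
func rewriteKey(key string) string {
	switch key {
	case "_measurement", "_m":
		return "_name"
	case "_f":
		return "_field"
	default:
		return key
	}
}

func main() {
	fmt.Println(rewriteKey("_m"), rewriteKey("_f")) // _name _field
}
```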
This commit deletes most of the code to service reads from influxdb
and pulls it in from platform instead.
Of note, the models.Tag and models.Tags types are now aliases to the
platform models.Tag and models.Tags types. Additionally, many types
in the tsdb package relating to cursors are also aliases to the same
types in the platform cursors package.
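Because these are type aliases rather than new named types, values flow
between the packages without conversion. A sketch (import path approximated):

```go
package models

import platform "github.com/influxdata/platform/models"

// Aliases, not definitions: models.Tag here and platform's models.Tag are
// the identical type, so no conversions are needed at package boundaries.
type Tag = platform.Tag
type Tags = platform.Tags
```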
This updates the platform and flux repos to the current master in the
Gopkg.lock.
* the protocol service definition, including ReadRequest and ReadResponse,
is reused across projects rather than requiring redefinition.
* the ReadRequest protocol buffer definition removes the concept of a
database and retention policy, replacing it with a field named
ReadSource of type google.protobuf.Any. OSS requests will use the
ReadSource message structure defined locally in this package, which
defines fields to represent a Database and RetentionPolicy. Other
implementations can provide their own data structure, allowing the
remainder of the ReadRequest to be reused (see the sketch after this
list).
* The RPC service and Store are expected to be redefined to handle their
specific requirements for resolving a ReadSource
* ResultSet and GroupResultSet are interfaces representing non-grouping
and grouping read behavior respectively. Calling NewResultSet or
NewGroupResultSet will construct instances of these types
* The ResponseWriter type is exported to deal with serialization of
the ResultSet and GroupResultSet types
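A sketch of the reshaped messages (names and field numbers approximated):

```proto
message ReadRequest {
  // Replaces the old database/retention-policy fields; each deployment
  // decides what a "source" is.
  google.protobuf.Any read_source = 13;
  // ... remaining read options (predicate, time range, grouping) ...
}

// OSS-local definition unpacked from read_source.
message ReadSource {
  string database = 1;
  string retention_policy = 2;
}
```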