* chore: update jsonparser to 1.1.1 and yaml.v3 to 3.0.1
Perform:
$ go mod edit -require github.com/buger/jsonparser@v1.1.1
$ go mod edit -require gopkg.in/yaml.v3@v3.0.1
$ go mod tidy
* chore(tests): adjust for whitespace in test output
This changes the statistics merging to use the `Add` method on the
entire statistics struct instead of only the metadata. The purpose of
this is that we want to utilize the existing function for merging the
statistics so that additional properties can be added without modifying
the query controller.
At the same time, certain properties are computed in the controller and
we want to ensure they aren't double counted if flux starts computing
them itself. So we blacklist the attributes that we compute ourselves:
if flux is modified to return these values, we simply ignore them until
we change our own code to use those values.
In immediate terms, we're switching to the `Add` method so that we can
add profiles to the statistics and have those profiles propagate to the
query controller. That property doesn't exist yet, so we can't add it
now, and we don't want to add it after flux is modified because that
could break the operator profile.
But we also don't want to rely only on `Add`, because we want to move
properties such as `TotalAllocated` and `TotalDuration` into flux itself
and remove the controller. The controller needs to remain compatible
with whatever changes we make to flux so that there's no circumstance
where functionality stops working.
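A minimal sketch of this merging strategy, using a simplified statistics
struct; the real flux statistics type and its Add method carry more fields
than shown here:

```go
package stats

import "time"

type Statistics struct {
	TotalDuration  time.Duration
	TotalAllocated int64
	Metadata       map[string][]interface{}
}

// Add merges every property of other into s, so new properties returned by
// flux propagate without touching the query controller.
func (s *Statistics) Add(other Statistics) {
	s.TotalDuration += other.TotalDuration
	s.TotalAllocated += other.TotalAllocated
	if s.Metadata == nil {
		s.Metadata = make(map[string][]interface{})
	}
	for k, v := range other.Metadata {
		s.Metadata[k] = append(s.Metadata[k], v...)
	}
}

// merge zeroes out the properties the controller computes itself before
// calling Add, so they are not double counted if flux starts returning them.
func merge(dst *Statistics, src Statistics) {
	src.TotalDuration = 0
	src.TotalAllocated = 0
	dst.Add(src)
}
```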
* fix(fluxtest): update Flux tests for new option support
The Flux test harness now allows inheriting options; this updates the
test cases to the new syntax and simplifies any tests that had to
duplicate the options.
* build(flux): update flux to v0.165.0
* chore: upgrade flux to v0.167.0
Co-authored-by: Nathaniel Cook <nvcook42@gmail.com>
Co-authored-by: Paul Hummer <paul@eventuallyanyway.com>
Clamp the value of Store.MeasurementsCardinality so that it cannot be less
than 0. The underlying issue primarily showed up as a negative
numMeasurements value in /debug/vars under some circumstances.
refs #23285
(cherry picked from commit 160cf678d5)
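A minimal sketch of the clamp; the helper is a hypothetical stand-in for
the actual change to Store.MeasurementsCardinality:

```go
package store

// clampCardinality stands in for the clamp applied to the cardinality
// estimate before it is reported.
func clampCardinality(n int64) int64 {
	// The estimate can come back negative under some circumstances, which
	// previously surfaced as a negative numMeasurements value in /debug/vars.
	if n < 0 {
		return 0
	}
	return n
}
```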
A preallocated slice needs to be cleared before it is used with append;
otherwise the existing elements will appear in the result, which does not
seem to be the intention. The bug doesn't appear to have caused issues, as
no call sites use a preallocated slice.
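For illustration, the pattern looks like this (a standalone example, not
the affected call site):

```go
package main

import "fmt"

func main() {
	// make with a non-zero length pre-fills the slice with zero values,
	// which append then keeps in front of the appended elements.
	vals := make([]int, 3)
	vals = append(vals, 1, 2, 3)
	fmt.Println(vals) // [0 0 0 1 2 3] -- the zeros leak into the result

	// Allocating with zero length and a capacity (or clearing the slice
	// with vals = vals[:0]) avoids the problem.
	vals = make([]int, 0, 3)
	vals = append(vals, 1, 2, 3)
	fmt.Println(vals) // [1 2 3]
}
```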
We previously allowed read tokens access to all of v1 query, including
InfluxQL queries that made state changes to the DB, specifically,
'DELETE' and 'DROP MEASUREMENT'. This allowed tokens with only read
permissions to delete points via the legacy /query endpoint.
/api/v2/query was unaffected.
This adjusts the behavior to verify that the token has write permissions
when specifying 'DELETE' and 'DROP MEASUREMENT' InfluxQL queries. We
follow the same pattern as other existing v1 failure scenarios and
instead of failing hard with 401, we use ectx.Send() to send an error to
the user (with 200 status):
{"results":[{"statement_id":0,"error":"insufficient permissions"}]}
Returning in this manner is consistent with Cloud 2, which also returns
200 with "insufficient permissions" for these two InfluxQL queries.
To facilitate authorization unit tests, we add MustNewPermission() to
testing/util.go.
Closes: #22799
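Below is a self-contained sketch of the check; the types and helper names
are hypothetical stand-ins rather than the actual influxdb internals:

```go
package main

import "fmt"

type Permission int

const (
	ReadPermission Permission = iota
	WritePermission
)

type Authorizer struct{ perms map[Permission]bool }

func (a Authorizer) Allowed(p Permission) bool { return a.perms[p] }

// result mirrors the shape of the v1 /query error payload shown above.
type result struct {
	StatementID int    `json:"statement_id"`
	Err         string `json:"error,omitempty"`
}

// checkStatement requires write permission for the two mutating InfluxQL
// statements; other statements fall through to the normal read path.
func checkStatement(stmtType string, auth Authorizer, id int) *result {
	switch stmtType {
	case "DELETE", "DROP MEASUREMENT":
		if !auth.Allowed(WritePermission) {
			// Match the existing v1 failure pattern: return an error result
			// (serialized with a 200 status) instead of failing hard with 401.
			return &result{StatementID: id, Err: "insufficient permissions"}
		}
	}
	return nil
}

func main() {
	readOnly := Authorizer{perms: map[Permission]bool{ReadPermission: true}}
	fmt.Println(checkStatement("DELETE", readOnly, 0))
}
```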
Flux HTTP and template fetching requests do not perform IP address
checks for local addresses. On the one hand this behavior allows SSRF
(Server Side Request Forgery) attacks via authenticated requests, but on
the other hand it is useful for scenarios with legitimate requirements
to fetch from private addresses (e.g., hosting templates internally or
running flux queries against local resources during development).
To avoid breaking existing installations, the default behavior remains
the same, but a new --hardening-enabled option is added to influxd to
turn on IP address verification and limit both flux and template
fetching HTTP requests to non-private addresses. We plan to gate new
security features that aren't suitable for the default install behind
this option. Put another way, the option is intended to make it easy to
turn on all security options when running in production environments.
The 'Manage security and authorization' section of the docs will also be
updated for this option.
Specifically for flux, when --hardening-enabled is specified, we now
pass in PrivateIPValidator{} to the flux dependency configuration. The
flux url validator will then tap into the http.Client 'Control'
mechanism to validate the IP address since it is called after DNS lookup
but before the connection starts.
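A standalone sketch of that hook, using a simplified validator rather than
the actual PrivateIPValidator implementation:

```go
package hardening

import (
	"fmt"
	"net"
	"net/http"
	"syscall"
	"time"
)

// validateAddress runs as the dialer's Control function: after DNS
// resolution but before the connection is established.
func validateAddress(network, address string, _ syscall.RawConn) error {
	host, _, err := net.SplitHostPort(address)
	if err != nil {
		return err
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return fmt.Errorf("invalid IP %q", host)
	}
	if ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast() {
		return fmt.Errorf("refusing to connect to private address %s", ip)
	}
	return nil
}

// hardenedClient wires the validator into an http.Client via the dialer.
func hardenedClient() *http.Client {
	dialer := &net.Dialer{Timeout: 30 * time.Second, Control: validateAddress}
	return &http.Client{Transport: &http.Transport{DialContext: dialer.DialContext}}
}
```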
For pkger (template fetching), when --hardening-enabled is specified,
the template parser's HTTP client will be configured to also use
PrivateIPValidator{}. Note that /api/v2/stacks POST ('init', aka create)
and PATCH ('update') only store the new URL to be applied later with
/api/v2/templates/apply. While it is possible to have InitStack() and
UpdateStack() mimic net.DialContext() by setting up a goroutine to
perform a DNS lookup and then loop through the returned addresses to
verify none are for a private IP before storing the URL, this would add
considerable complexity to the stacks implementation. Since the stack's
URLs are fetched when it is applied and the IP address is verified as
part of apply (see above), for now we'll keep this simple and not
validate the IPs of the stack's URLs during init or update.
Lastly, update pkger/http_server_template_test.go's Templates() test for
disabled jsonnet to also check the contents of the 422 error (since the
flux validator also returns a 422 with a different message). Also, fix the
URL in one of these tests to use a valid path.
Fixes a few issues:
* flux needs to write to the replication service, instead of the engine directly.
* the replication service incorrectly had value receiver methods; I think this
was just an accident. Pointer receivers make things easier to reason about, and
with value receivers flux was not picking up the replication config properly
(see the sketch after this list).
* The flux to() function previously did not receive the org properly for internal
writes. That was not a problem before, since the write path only needs the bucket
ID at this level (after authentication), but now we need the org ID to look up
replications properly.
Closes #23183
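A quick illustration of the value-receiver issue (the service type and
setter here are hypothetical, not the actual replication service):

```go
package main

import "fmt"

type service struct{ maxQueueSize int64 }

// Broken: the method operates on a copy, so the assignment is lost.
func (s service) setMaxQueueSizeByValue(n int64) { s.maxQueueSize = n }

// Fixed: a pointer receiver mutates the caller's instance.
func (s *service) setMaxQueueSize(n int64) { s.maxQueueSize = n }

func main() {
	var s service
	s.setMaxQueueSizeByValue(1024)
	fmt.Println(s.maxQueueSize) // 0 -- the config was silently dropped
	s.setMaxQueueSize(1024)
	fmt.Println(s.maxQueueSize) // 1024
}
```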
This matches InfluxDB Cloud. The pagination was not exposed to the API,
but meant that API requests were limited to the default 20 pages.
Closes: #21407
* fix: forbid reading OSS buckets for a token with only write permissions
We previously enabled write tokens to also find DBRP buckets, in order to allow
the legacy /write (not /api/v2/write) endpoint to read the DBRP mappings and
find the real bucket ID to write to.
This had the unintended consequence of allowing tokens with only write permissions
to read data in buckets via the legacy /query (not /api/v2/query) endpoint with
InfluxQL.
This change fixes the behavior to allow writing to /write with a write-only
token, while forbidding reading from /query.
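A hypothetical sketch of the intended behavior, with illustrative types
rather than the actual DBRP and authorizer implementations: a write-only
token can still resolve the mapping for /write, but reading through /query
now requires read permission on the mapped bucket.

```go
package main

import "fmt"

type action int

const (
	read action = iota
	write
)

type token struct{ perms map[string][]action }

func (t token) allowed(bucketID string, a action) bool {
	for _, p := range t.perms[bucketID] {
		if p == a {
			return true
		}
	}
	return false
}

// dbrp maps a database/retention-policy pair to the backing bucket ID.
var dbrp = map[string]string{"telegraf/autogen": "0123456789abcdef"}

// authorizeV1Query resolves the DBRP mapping and then requires a read
// permission on the mapped bucket before the InfluxQL query may run.
func authorizeV1Query(t token, db, rp string) error {
	bucketID := dbrp[db+"/"+rp]
	if !t.allowed(bucketID, read) {
		return fmt.Errorf("insufficient permissions to read bucket %s", bucketID)
	}
	return nil
}

func main() {
	writeOnly := token{perms: map[string][]action{"0123456789abcdef": {write}}}
	fmt.Println(authorizeV1Query(writeOnly, "telegraf", "autogen"))
}
```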
* fix: nanosecond precision in tests
* build(flux): update flux to v0.156.0
* chore(flux/schema): update schema tests to assert planner rules
The schema tests were updated in Flux; this updates them here so that we
can assert that the planner rules are applied. See the note about copied
data.
Co-authored-by: Nathaniel Cook <nvcook42@gmail.com>
* fix: remove nats for scraper processing
Scrapers now use Go channels instead of NATS and interprocess communication.
This should fix #23085.
Additionally, found and fixed #23106.
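A rough sketch of the channel-based hand-off; the request type and worker
pool here are illustrative, not the actual gather package types:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

type scrapeRequest struct{ target string }

// runScrapers drains requests from an in-process channel with a small pool
// of workers, replacing the previous NATS-based hand-off.
func runScrapers(ctx context.Context, requests <-chan scrapeRequest, workers int) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case req, ok := <-requests:
					if !ok {
						return
					}
					fmt.Println("scraping", req.target) // gather and store metrics here
				}
			}
		}()
	}
	wg.Wait()
}

func main() {
	reqs := make(chan scrapeRequest, 1)
	reqs <- scrapeRequest{target: "http://localhost:9100/metrics"}
	close(reqs)
	runScrapers(context.Background(), reqs, 2)
}
```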
* chore: fix formatting
* chore: fix static check and go.mod
* test: fix some flaky tests
* fix: mark NATS arguments as deprecated