This package provides essentially the same API as the Cloud tests
package, utilising the `TestLauncher` type.
Additional `With` functional options were added to the `Launcher`
in order to expose type-safe InfluxQL configuration.
More `With` options may be added in the future as the need arises.
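A minimal sketch of the functional-option pattern this describes; the option and
field names below are illustrative, not the launcher's actual API:

```go
package launcher

// Option applies optional configuration to a Launcher. The names here are
// illustrative only; the real launcher defines its own With* options.
type Option func(*Launcher)

// Launcher stands in for the real launcher type in this sketch.
type Launcher struct {
	influxQLMaxSelectSeriesN int
}

// WithInfluxQLMaxSelectSeriesN is a hypothetical With option exposing one
// InfluxQL setting in a type-safe way.
func WithInfluxQLMaxSelectSeriesN(n int) Option {
	return func(l *Launcher) { l.influxQLMaxSelectSeriesN = n }
}

// NewTestLauncher applies each functional option in order.
func NewTestLauncher(opts ...Option) *Launcher {
	l := &Launcher{}
	for _, o := range opts {
		o(l)
	}
	return l
}
```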
This includes removal of a lot of kv.Service responsibilities. However,
it does not finish the re-wiring. It removes documents, telegrafs,
notification rules + endpoints, checks, orgs, users, buckets, passwords,
urms, labels and authorizations. There are some outstanding pieces
needed to get the kv service compiling (the dashboard service URM
dependency). Then all the call sites for the kv service need updating and
the new implementations of telegrafs and notification rules + endpoints
need installing (along with any necessary migrations).
This resolves observed race conditions when running test suites that
utilize the launcher. It also reduces test times considerably by
eliminating a slow loop used to find a free port.
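The message does not say how the port loop was replaced; one common approach,
shown here purely as an illustrative sketch and not necessarily what the
launcher does, is to bind to port 0 and let the OS pick a free port:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Asking for port 0 makes the kernel assign an unused port, so no
	// retry loop is needed to find a free one.
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("listening on port", l.Addr().(*net.TCPAddr).Port)
}
```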
Static initialization is not desirable in the main binaries, as it forces all
code paths to pay the init cost, but it is still useful in tests. It allows static
initialization to be performed once for all tests and eliminates the need to
always add the FluxInit call. Added a fluxinit/static package that calls
fluxinit.FluxInit() to replace the builtin package. This hides the nature of
the initialization and makes it clear that mandatory initialization code is
being called.
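A sketch of what that static package and its use from a test file look like;
the import path is assumed from the package name in the text:

```go
// fluxinit/static/static.go -- a sketch; the real file may differ.
package static

import "github.com/influxdata/influxdb/v2/fluxinit" // assumed import path

// Blank-importing this package from a test file, e.g.
//
//	import _ "github.com/influxdata/influxdb/v2/fluxinit/static"
//
// runs the one-time Flux initialization for the whole test binary, so
// individual tests never need to call FluxInit themselves.
func init() {
	fluxinit.FluxInit()
}
```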
feat(dashboard): add owner ID to dashboard model
This adds the explicit OwnerID field to Dashboard and also adds a
migration which populates dashboard owner IDs based on dashboard owner
URMs.
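A rough sketch of what such a backfill could look like; the types, store
interfaces and helper names here are hypothetical, not the actual migration:

```go
package migrations

import "context"

// Hypothetical minimal types; the real model and stores differ.
type ID uint64

type Mapping struct {
	ResourceID ID // dashboard ID
	UserID     ID // owner user ID
}

type Dashboard struct {
	ID      ID
	OwnerID ID
}

type URMStore interface {
	// FindDashboardOwnerMappings returns owner-type URMs for dashboards.
	FindDashboardOwnerMappings(ctx context.Context) ([]Mapping, error)
}

type DashboardStore interface {
	FindByID(ctx context.Context, id ID) (*Dashboard, error)
	Update(ctx context.Context, d *Dashboard) error
}

// populateDashboardOwners backfills Dashboard.OwnerID from owner URMs.
func populateDashboardOwners(ctx context.Context, urms URMStore, dashboards DashboardStore) error {
	mappings, err := urms.FindDashboardOwnerMappings(ctx)
	if err != nil {
		return err
	}
	for _, m := range mappings {
		d, err := dashboards.FindByID(ctx, m.ResourceID)
		if err != nil {
			return err
		}
		d.OwnerID = m.UserID
		if err := dashboards.Update(ctx, d); err != nil {
			return err
		}
	}
	return nil
}
```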
feat(dashboards): isolate service in own package
This change isolates the dashboards service into its own package. It
also updates the API to no longer interface with user resource mappings.
Instead it defines new handlers which rely on the newly populated owner
ID field.
chore(dashboards): port tests from http package into new service transport package
chore(launcher): use dashboard transport package client in launcher tests
chore(kv): remove now defunct dashboard service implementations
* fix: use fluxinit package to init flux library instead of builtin
The builtin package performs costly initialization work in the Go init()
function, which causes the work to run on every code path of the main
executables, including help screens that do not need it. This patch uses the
fluxinit package, which requires an explicit call to do the costly work. It
allows help screens and other code paths to execute quickly.
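For the main binaries, the initialization becomes an explicit call placed only
on the code paths that actually run Flux; a hypothetical sketch of the idea:

```go
package main

import (
	"fmt"
	"os"

	"github.com/influxdata/influxdb/v2/fluxinit" // assumed import path
)

func main() {
	// Cheap paths such as help screens return before any Flux work is done.
	if len(os.Args) > 1 && os.Args[1] == "--help" {
		fmt.Println("usage: ...")
		return
	}

	// Only the code paths that actually execute Flux pay the
	// initialization cost.
	fluxinit.FluxInit()

	// ... start the server / run the query ...
}
```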
* fix: need to draw in the influxdb-specific standard library in fluxinit
This commit extends the `v1/authorization` package to support
passwords associated with a token.
The summary of changes includes:
* authorization.Service implements influxdb.PasswordsService
* Setting passwords for authorizations
* Verifying (comparing) passwords for a given authorization
* A service that caches password comparisons, using a weaker hash
that lives in memory only; this implementation is copied
from InfluxDB 1.x (a sketch follows below)
* Extended HTTP service to set a password using
/private/legacy/authorizations/{id}/password
Closes #
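A compact sketch of the caching comparer described above: a successful (slow)
comparison populates an in-memory map keyed by a fast hash, so repeated
comparisons of the same password skip the expensive check. The names and
wiring are illustrative, not the actual 1.x-derived implementation:

```go
package authorization

import (
	"context"
	"crypto/sha256"
	"sync"
)

// PasswordComparer is a minimal stand-in for the bcrypt-backed passwords service.
type PasswordComparer interface {
	ComparePassword(ctx context.Context, authID, password string) error
}

// CachingComparer keeps a weaker SHA-256 hash of the last successfully
// compared password per authorization, in memory only, so repeated
// comparisons avoid the expensive hash check.
type CachingComparer struct {
	inner PasswordComparer

	mu    sync.Mutex
	cache map[string][32]byte
}

func NewCachingComparer(inner PasswordComparer) *CachingComparer {
	return &CachingComparer{inner: inner, cache: make(map[string][32]byte)}
}

func (c *CachingComparer) ComparePassword(ctx context.Context, authID, password string) error {
	sum := sha256.Sum256([]byte(password))

	c.mu.Lock()
	cached, ok := c.cache[authID]
	c.mu.Unlock()
	if ok && cached == sum {
		return nil // fast path: matches the cached weak hash
	}

	// Slow path: delegate to the real comparer, then cache on success.
	if err := c.inner.ComparePassword(ctx, authID, password); err != nil {
		return err
	}

	c.mu.Lock()
	c.cache[authID] = sum
	c.mu.Unlock()
	return nil
}
```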
* refactor(notifications): isolate endpoint service
Following the ongoing effort to isolate behaviours into their own
packages and off of kv.Service, this change moves the notification
endpoints service implementation into its own package. It removes the
endpoint behaviours from the kv service completely.
* chore(influxd): wire up the isolated check service in place of kv service
This service is a private API for managing authorization tokens
for v1 API requests.
Note that this commit does not hook up the service to the v1
`/query` and `/write` endpoints; that will occur in a subsequent PR.
Closes #19812
* refactor(notification): move rule service into own package
* chore(launcher): fix tests to use clients as opposed to direct kv service
* chore(influx): update task cli to consume core domain model task from client
* chore(kv): remove rule service behaviours from kv
This also introduces the org ID resolver type, which is transplanted
from the kv service. This one function coupled all resource
capabilities onto the kv service, making it impossible to remove those
capabilities. Moving the type out into its own package, which depends on
each service explicitly, ensures we don't have one type which has to
implement all the service contracts.
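A hypothetical sketch of such a resolver: each resource type gets its own
explicitly injected lookup, so no single service has to satisfy every contract.
The names are illustrative:

```go
package resolver

import (
	"context"
	"fmt"
)

type ID uint64

// OrgIDFinder resolves the owning organization for one resource type.
type OrgIDFinder interface {
	FindOrganizationID(ctx context.Context, resourceID ID) (ID, error)
}

// OrgIDResolver fans out to per-resource-type finders. Each dependency is
// injected explicitly rather than hanging every capability off kv.Service.
type OrgIDResolver struct {
	finders map[string]OrgIDFinder // keyed by resource type, e.g. "buckets"
}

func NewOrgIDResolver() *OrgIDResolver {
	return &OrgIDResolver{finders: make(map[string]OrgIDFinder)}
}

func (r *OrgIDResolver) Register(resourceType string, f OrgIDFinder) {
	r.finders[resourceType] = f
}

func (r *OrgIDResolver) FindResourceOrganizationID(ctx context.Context, resourceType string, id ID) (ID, error) {
	f, ok := r.finders[resourceType]
	if !ok {
		return 0, fmt.Errorf("no organization resolver registered for %q", resourceType)
	}
	return f.FindOrganizationID(ctx, id)
}
```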
* fix(launcher): remove double reference to influxdb package
This function is used by the end-to-end test harness to stabilize query profile
results before diff and is needed when implementing the Profile interface.
This commit ensures OSS is using the new implementation of the
AuthorizationService from the authorization package.
It also removes the associated feature flag.
This commit removes incorrect implementations of the `DeleteBucket`
and `DeleteBucketRangePredicate` APIs from the storage package,
which remained after the transition to the tsdb 1.x storage engine.
Secondly, this PR utilizes the `ENotImplemented` error code to inform
users who call the `/api/v2/delete` endpoint that it is not implemented.
- Update CircleCI configuration to start release process on an RC build
- Update .goreleaser.yml:
- Start building armel and armhf binaries and rpm and debian packages.
- Generate sha256 checksum file.
- launcher.go: do not use the `max` module, to avoid an integer overflow problem on armel and armhf builds
- Start using `v0.142.0` of goreleaser
- Add pre- and post-install/uninstall scripts for rpm and deb packages
Enables the min and max aggregates for the ReadGroupAggregate pushdown behind a feature flag.
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
* chore: update task tests to use the tenant service
After the introduction of the tenant system, we need to switch the testing frameworks
to use it instead of the old kv system.
* chore: update onboarding to allow injected middleware
The `buckets()` command would use a bucket lookup that wrapped the
`FindBuckets` API. It did not use the pagination aspect of this API
correctly. When the underlying implementation was changed to a version
that correctly implemented pagination, this broke the query `buckets()`
command. Since it was the query side that used the API incorrectly rather than
a regression in the `FindBuckets` implementation, this fixes the usage to
correctly use pagination.
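A sketch of a pagination loop over a FindBuckets-style API, assuming a typical
limit/offset shape; the stand-in types here are not the real service types:

```go
package example

import "context"

type Bucket struct {
	ID   uint64
	Name string
}

type FindOptions struct {
	Limit  int
	Offset int
}

// BucketFinder is a minimal stand-in for the FindBuckets API.
type BucketFinder interface {
	FindBuckets(ctx context.Context, opts FindOptions) ([]*Bucket, error)
}

// allBuckets pages through FindBuckets until a short page signals the end,
// rather than assuming a single call returns every bucket.
func allBuckets(ctx context.Context, svc BucketFinder) ([]*Bucket, error) {
	const pageSize = 20
	var out []*Bucket
	for offset := 0; ; offset += pageSize {
		page, err := svc.FindBuckets(ctx, FindOptions{Limit: pageSize, Offset: offset})
		if err != nil {
			return nil, err
		}
		out = append(out, page...)
		if len(page) < pageSize {
			return out, nil
		}
	}
}
```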
This commit adds `mincore.Limiter` which throttles page faults caused
by mmap() data. It works by periodically calling `mincore()` to determine
which pages are not resident in memory and using `rate.Limiter` to
throttle access to them with a token-bucket algorithm.
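A rough sketch of that idea, assuming a page-aligned mmap'd byte slice:
mincore() reports which pages are resident, and a token-bucket rate.Limiter
gates the faults for the pages that are not. This is illustrative only, not
the actual mincore.Limiter implementation:

```go
package example

import (
	"context"
	"os"

	"golang.org/x/sys/unix"
	"golang.org/x/time/rate"
)

// waitForFaults charges one limiter token per non-resident page before the
// caller touches data[off:off+n]. data is assumed to come from mmap(), so any
// page-sized offset into it is page-aligned. The limiter's burst must be at
// least the number of pages covered by one access.
func waitForFaults(ctx context.Context, limiter *rate.Limiter, data []byte, off, n int) error {
	pageSize := os.Getpagesize()
	start := (off / pageSize) * pageSize
	end := off + n
	if end > len(data) {
		end = len(data)
	}
	if start >= end {
		return nil
	}
	region := data[start:end]

	// mincore() writes one byte per page; the low bit is set if the page
	// is resident in memory.
	vec := make([]byte, (len(region)+pageSize-1)/pageSize)
	if err := unix.Mincore(region, vec); err != nil {
		return err
	}

	faults := 0
	for _, v := range vec {
		if v&1 == 0 {
			faults++
		}
	}
	if faults == 0 {
		return nil
	}

	// Token-bucket throttle: block until the limiter allows this many
	// page faults.
	return limiter.WaitN(ctx, faults)
}
```

A caller would construct something like `rate.NewLimiter(rate.Limit(1000), 4096)`
once per mapped file and invoke this before reading each range.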
* feat(task): Add new permission lookup pattern for executor
We can now use the user service to populate task owners' permissions.
This should improve the task lookup time and decouple the task system
from the URM system. In the future we will have the ability to better isolate
tenant pieces from the rest of the service.
* feat: add feature flagging
* refactor: migrator and introduce Store.(Create|Delete)Bucket (see the sketch after this list)
feat: kvmigration internal utility to create / manage kv store migrations
fix: ensure migrations are applied in all test cases
* chore: update kv and migration documentation
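A hypothetical sketch of a migration expressed against a store exposing the
Create/Delete bucket operations mentioned above; the real migration utility's
interfaces are richer than this:

```go
package migration

import "context"

// Store is a minimal stand-in exposing the bucket lifecycle operations
// referenced above.
type Store interface {
	CreateBucket(ctx context.Context, name []byte) error
	DeleteBucket(ctx context.Context, name []byte) error
}

// Migration is one reversible schema step.
type Migration struct {
	Name string
	Up   func(ctx context.Context, s Store) error
	Down func(ctx context.Context, s Store) error
}

// createBucketMigration builds a migration that adds (and can remove) a
// single kv bucket, e.g. for a newly isolated service's storage.
func createBucketMigration(name string) Migration {
	b := []byte(name)
	return Migration{
		Name: "create bucket " + name,
		Up:   func(ctx context.Context, s Store) error { return s.CreateBucket(ctx, b) },
		Down: func(ctx context.Context, s Store) error { return s.DeleteBucket(ctx, b) },
	}
}
```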
note: we're going to make this evolution in two steps so that we have a simple
rollback to get back to a working state. We'll be maintaining both packages
and the new templates and stacks endpoints for a while as users start to
move to a newer CLI version. Sunsetting by end of July.
references: #18580
the Export behavior (renamed from CreatePkg) now accepts stackID as
another input. With this we are able to remove the additional endpoint
/api/v2/packages/stacks/:stack_id/export. It now fits into the
/api/v2/packages/export endpoint as another request body parameter. This also
puts all export functionality in the same space, encapsulated both in the
endpoint and within the service layer :-).
references: #18646
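An illustrative call against the consolidated endpoint, with the stack ID
supplied in the request body; the exact field name and auth header format are
assumptions drawn from the description above, not the documented API:

```go
package example

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// exportStack posts to the consolidated export endpoint with the stack ID in
// the request body instead of in the URL path.
func exportStack(client *http.Client, host, token, stackID string) (*http.Response, error) {
	body, err := json.Marshal(map[string]interface{}{
		"stackID": stackID, // assumed field name
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, host+"/api/v2/packages/export", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", fmt.Sprintf("Token %s", token))
	req.Header.Set("Content-Type", "application/json")
	return client.Do(req)
}
```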
this also makes it so that an association (label) that is added to a
resource is also included in the returned output. There is one test that
was changed as part of this work, to verify this specific change
in behavior.
references: #18646
Annotate the context with feature flags when handling Flux queries in influxdb.
We take advantage of this in the Flux end-to-end tests, using a custom flagger
that can set overrides based on the test case that is about to be run, allowing
us to enable features in the end-to-end tests.
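A sketch of the per-test override idea using hypothetical, self-contained
types (not the real feature package's API): a flagger whose overrides are
swapped per test case, then used to annotate the context before the query runs.

```go
package example

import "context"

// Flagger is a minimal stand-in for a feature-flag source.
type Flagger interface {
	Flags(ctx context.Context) (map[string]interface{}, error)
}

// testFlagger lets each end-to-end test case install its own overrides,
// e.g. f.SetOverrides(map[string]interface{}{"pushDownWindowAggregate": true}).
type testFlagger struct {
	overrides map[string]interface{}
}

func (f *testFlagger) SetOverrides(o map[string]interface{}) { f.overrides = o }

func (f *testFlagger) Flags(context.Context) (map[string]interface{}, error) {
	return f.overrides, nil
}

type flagsKey struct{}

// annotate stores the resolved flags on the context so downstream query
// handling can consult them.
func annotate(ctx context.Context, f Flagger) (context.Context, error) {
	flags, err := f.Flags(ctx)
	if err != nil {
		return nil, err
	}
	return context.WithValue(ctx, flagsKey{}, flags), nil
}
```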
* feat: start using the new org handler from the tenant service.
The rest of the tenant system is in place except the org http api handler and the
user api handler.
* fix: update the label service in org handler and add links
* feat(storage): first array cursor
* feat: add first and last to rpc messages
* test(launcher): push down group first and group last
* feat(storage): window first array cursor
* test(launcher): push down bare first and bare last
* feat(storage): add capabilities for group first and group last
* refactor: rename first to limit
* refactor: make zero value for every period meaningful
* refactor: standardize launcher pushdown tests
This enables a new rule that will push down the full `aggregateWindow`
query including the `duplicate` and `window(every: inf)` that recombines
the tables. When the full rule is used, the table is not split into
tables for each window and instead remains a single table. The
start or stop column is renamed to `_time`, and `_start` and `_stop` will
be the boundaries of the query.
This adds a launcher test for the read window aggregate push down to
verify that the push down happens when a query with the appropriate
pattern is sent, that the output is correct, and that the metric signalling
the push down is incremented.
This bug surfaces when you do not provide a stack ID to the apply call:
the new stack is created, but the resources created are not associated
with it. This remedies that issue.
This exports all resources associated with a stack using the same
metadata.name fields as the original application. It can
be used to snapshot the current state of the stack, for
source control or other purposes.
closes: #18271