* refactor: migrator and introduce Store.(Create|Delete)Bucket
feat: kvmigration internal utility to create / manage kv store migrations (sketched below)
fix: ensure migrations are applied in all test cases
* chore: update kv and migration documentation
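A minimal sketch of what a migration built on Store.(Create|Delete)Bucket could
look like; the Store and Migration shapes below are assumptions for
illustration, not the actual kv or kvmigration API:

```go
package migrationsketch

import "context"

// Hypothetical shapes for illustration; the real kv.Store and kv migration
// interfaces in the repo may differ.
type Store interface {
	CreateBucket(ctx context.Context, name []byte) error
	DeleteBucket(ctx context.Context, name []byte) error
}

// Migration is a single reversible step the migrator can apply to the store.
type Migration struct {
	Name string
	Up   func(ctx context.Context, s Store) error
	Down func(ctx context.Context, s Store) error
}

// createBucketMigration returns a migration that adds a bucket on Up and
// removes it again on Down.
func createBucketMigration(bucket []byte) Migration {
	return Migration{
		Name: "create bucket " + string(bucket),
		Up: func(ctx context.Context, s Store) error {
			return s.CreateBucket(ctx, bucket)
		},
		Down: func(ctx context.Context, s Store) error {
			return s.DeleteBucket(ctx, bucket)
		},
	}
}
```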
note: going to make this evolution in two steps so that we have a simple
rollback to get back to a working state. We'll be maintaining both packages
and the new templates and stacks endpoints for a while as users start to
move to a newer CLI version. Sunsetting by the end of July.
references: #18580
the Export behavior (renamed from CreatePkg) now accepts a stackID as
another input. With this we are able to remove the additional endpoint
/api/v2/packages/stacks/:stack_id/export; it now fits into the
/api/v2/packages/export endpoint as another request body parameter. This
also puts all export functionality in the same space, encapsulated both in
the endpoint and within the service layer :-).
references: #18646
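As a rough illustration of the consolidated endpoint, a stack export would be
requested by supplying the stack ID in the body of /api/v2/packages/export;
the field name, host, port, token, and ID below are placeholders rather than
the documented schema:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The "stackID" field name is an assumption for illustration; consult
	// the /api/v2/packages/export request schema for the exact body shape.
	body, _ := json.Marshal(map[string]string{"stackID": "052a1ab0b9d90000"})

	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:9999/api/v2/packages/export", bytes.NewReader(body))
	if err != nil {
		fmt.Println("building request failed:", err)
		return
	}
	req.Header.Set("Authorization", "Token my-token")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("export request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```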
this also makes it so that an association (label) added to a resource is
included in the returned output. One test was changed as part of this
work, specifically to cover this change in behavior.
references: #18646
Annotate the context with feature flags when handling flux queries in
influxdb. The flux end-to-end tests take advantage of this with a custom
flagger that sets overrides based on the test case about to be run,
allowing us to enable features per test.
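A minimal sketch of the idea, assuming hypothetical Flagger and Annotate
shapes rather than the real feature package API:

```go
package featuresketch

import "context"

// Hypothetical stand-ins for the real feature-flag plumbing: a Flagger
// computes flag values, and Annotate stashes them on the context so that
// downstream flux query handling can consult them.
type Flagger interface {
	Flags(ctx context.Context) (map[string]interface{}, error)
}

// testFlagger returns whatever overrides the current end-to-end test case
// asked for, letting individual tests enable features.
type testFlagger struct {
	overrides map[string]interface{}
}

func (f *testFlagger) Flags(_ context.Context) (map[string]interface{}, error) {
	return f.overrides, nil
}

type flagsKey struct{}

// Annotate resolves the flags and attaches them to the context.
func Annotate(ctx context.Context, f Flagger) (context.Context, error) {
	flags, err := f.Flags(ctx)
	if err != nil {
		return ctx, err
	}
	return context.WithValue(ctx, flagsKey{}, flags), nil
}
```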
* feat: start using the new org handler from the tenant service.
The rest of the tenant system is in place except the org http api handler and the
user api handler.
* fix: update the label service in org handler and add links
* feat(storage): first array cursor
* feat: add first and last to rpc messages
* test(launcher): push down group first and group last
* feat(storage): window first array cursor
* test(launcher): push down bare first and bare last
* feat(storage): add capabilities for group first and group last
* refactor: rename first to limit
* refactor: make zero value for every period meaningful
* refactor: standardize launcher pushdown tests
This enables a new rule that will push down the full `aggregateWindow`
query, including the `duplicate` and `window(every: inf)` calls that
recombine the tables. When the full rule is used, the data is not split
into a table per window and instead remains a single table. The start or
stop column is renamed to `_time`, and `_start` and `_stop` become the
boundaries of the query.
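For reference, a query in the shape the rule targets might look like the
following; the bucket, measurement, and field names are placeholders:

```go
package launchersketch

// Illustrative query shape only. aggregateWindow expands to window(every:)
// |> mean() |> duplicate(...) |> window(every: inf); the new rule pushes
// that whole pattern down, so the result stays a single table whose _time
// column carries the window start or stop values.
const windowAggregateQuery = `
from(bucket: "my-bucket")
	|> range(start: -1h)
	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
	|> aggregateWindow(every: 1m, fn: mean)
`
```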
This adds a launcher test for the read window aggregate pushdown to
verify that it is applied when a query with the appropriate pattern is
sent, that the output is correct, and that the metric signaling the
pushdown is incremented.
this bug surfaces when you do not provide a stack ID to the apply call.
The new stack is created, but the resources created are not associated
with it. This remedies that issue.
this adds the ability to export all resources associated with a stack,
using the same metadata.name fields the original application used. It can
serve as a way to snapshot the current state of the stack, for source
control or other purposes.
closes: #18271
Switch to the new user handler. We have been using the tenant backend for
some time now and just need to switch over to using tenant front to back.
We have reached the stage where the new tenant service is in use and is
stable, but we want to get it into more hands and make it the default
service.
notes on this commit: it was grueling ;-(. The task API is not a friendly
API to consume. There are a lot of non-obvious things going on, and almost
every one of them tripped me up. Things of note:
* the http.TaskService does not satisfy the influxdb.TaskService,
making it impossible to use as a dependency if the tasks service gets
split out
* the APIs for create and update do not share common types (illustrated
in the sketch below). For example: creating a task takes every field as
a string, but in the update the same field is taken as an
options.Duration type. A step further and you'll notice that create does
not need an option to be provided, but the update does. It's jarring
trying to understand the indirection here. I struggled mightily trying
to make sense of it all with the indirection and differing types. It
made for a very difficult task (no pun intended) when it should have
been trivial. There's an opportunity here to fix these up, make this API
more uniform, and remove unnecessary complexity like the options type.
* Nested IDs that get marshaled are no bueno when you want to marshal a
task that does not have a user, org, or self ID set. It's a challenge
just to do that.
* Lots of places in the kv.Task portion where we hit errors and log them,
and others where we just return them. It isn't clear what is happening.
The kv implementation is also very procedural, and I found myself
bouncing around like a ping-pong ball trying to make heads or tails of it.
* There is auth buried deep inside the kv.Task implementation that kept
throwing me off b/c it logs errors instead of warnings. I assume (not
sure if I'm correct on this) that what is being logged is considered
inconsequential to the task working. I got lots of errors from the auth
buried in there and hadn't a clue what to make of them...
leaving these notes here as a look back at why working with tasks is so
difficult. This API can improve dramatically. I spent 5x more time
figuring out how to use the task API, in procedural calls, than I did
writing the business logic that consumes it... that's a scary
realization ;-(
references: #17434
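To make the create/update type mismatch above concrete, here is a
paraphrased sketch; these are not the real task or options types, just the
shape of the problem:

```go
package tasksketch

import "time"

// Paraphrased shapes only, to illustrate the create/update mismatch noted
// above; the real influxdb task and options types differ in their details.

// On create, scheduling fields arrive as plain strings.
type TaskCreate struct {
	Flux   string
	Every  string // e.g. "1h"
	Offset string // e.g. "10m"
}

// On update, the same fields are wrapped in an options-style duration type,
// and the options block has to be reasoned about even for unrelated edits.
type Duration struct {
	D time.Duration
}

type TaskOptions struct {
	Every  *Duration
	Offset *Duration
}

type TaskUpdate struct {
	Flux    *string
	Options TaskOptions
}
```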
also drops a test that has been skipped for over a year. Tried
unskipping it, but it now fails for all sorts of reasons, even without
the race flag enabled.
Renaming Generate in anticipation of a new method that will onboard
users other than the initial user. The intent is to simplify multi-user
setups.
Co-authored-by: Chris Goller <goller@gmail.com>
this work is the first step toward making ALL resources unique by
metadata.name. The displayName field is a means to rename an existing
resource. This is all to support pkger idempotency. The metadata.name
field will be the unique identifier within a pkg.
* refactor(storage): move type ByTagKey to the only package that uses it
* refactor(tsdb): use types in tsdb/cursors
* refactor(tsdb): remove unused type SeriesIDElems
* refactor(tsdb): inline only use of tsdb.ReadAllSeriesIDIterator
* refactor(tsdb): move series file to its own package
* refactor(storage): remove platform->influxdb aliases
* fix: allow authorized label service to be called indirectly
#17071 exists because pkger loads all service resources as authorized on
start, resulting in them all being authorized when referenced indirectly
(not hit directly via the api by a consumer). Rather than restructure
pkger to only authorize direct services, this allows proper indirect
auth for labels (the cause of #17071).
* Add orgService to tests
* Add resource types to find orgID from
* refactor(storage): add readSource field accessors
* refactor(storage): remove unused limitSeriesCursor
* refactor(storage): export IndexSeriesCursor
This allows IDPE to use the same implementation, rather than duplicate
code. Also copied unit tests from IDPE.
* chore: go fmt
The tasks subsystem will now use the flux language service to parse and
evaluate flux instead of directly interacting with the parser or
runtime. This helps break the dependency on the libflux parser for the
base influxdb package.
This includes the task notification packages which were changed at the
same time.
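A rough sketch of the seam this introduces, using hypothetical types in
place of the real flux AST and interpreter types:

```go
package langsketch

import "context"

// Hypothetical stand-ins; the real service is expressed in terms of flux
// AST and interpreter types, which live outside the base influxdb package.
type ASTPackage struct{} // a parsed flux program
type Evaluation struct{} // the result of evaluating flux source

// LanguageService is the seam tasks now depend on instead of calling the
// flux parser and runtime directly, which keeps the libflux dependency out
// of the base influxdb package.
type LanguageService interface {
	Parse(source string) (*ASTPackage, error)
	Evaluate(ctx context.Context, source string) (*Evaluation, error)
}
```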
* fix(storage): simplify storage/seriesCursor
storage/seriesCursor releases the series file and TSI references sooner.
Removes the unhelpful request object inherited from 1.x.
* chore(storage): replace SeriesCursor interface with sole implementation
* fix(backup): handle backup with no credentials file
Backups and restores should work whether or not the original installation uses
a credentials file and whether or not the backup contains a credentials file.
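A minimal sketch of the intended behavior, assuming hypothetical paths and
a hypothetical helper name rather than the actual backup code:

```go
package backupsketch

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// copyCredentialsIfPresent copies the credentials file into the backup when
// it exists and treats a missing file as a no-op, so installations without
// a credentials file still back up (and restore) cleanly. The paths and
// error handling here are illustrative only.
func copyCredentialsIfPresent(credPath, backupPath string) error {
	src, err := os.Open(credPath)
	if errors.Is(err, os.ErrNotExist) {
		return nil // no credentials file: nothing to back up
	}
	if err != nil {
		return fmt.Errorf("opening credentials: %w", err)
	}
	defer src.Close()

	dst, err := os.OpenFile(backupPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0666)
	if err != nil {
		return fmt.Errorf("creating backup credentials: %w", err)
	}
	defer dst.Close()

	_, err = io.Copy(dst, src)
	return err
}
```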
Prior to this change, InfluxQL requests were sent to the same backend as
Flux queries. That MAY not always be the case, so InfluxQL queries are
now specifically routed to the InfluxQLService. In this OSS build the
FluxService and InfluxQLService are the same.
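A small sketch of the routing idea, with hypothetical request and service
shapes standing in for the real proxy query types:

```go
package routesketch

import (
	"context"
	"fmt"
	"io"
)

// Hypothetical request and backend shapes; the real proxy query types
// differ, but the routing idea is the same: pick the backend by language.
type ProxyRequest struct {
	Language string // "flux" or "influxql"
	Query    string
}

type QueryService interface {
	Query(ctx context.Context, w io.Writer, req *ProxyRequest) error
}

// Router sends InfluxQL queries to the InfluxQL service and Flux queries to
// the Flux service. In an OSS build both fields can point at the same
// implementation.
type Router struct {
	Flux     QueryService
	InfluxQL QueryService
}

func (r *Router) Query(ctx context.Context, w io.Writer, req *ProxyRequest) error {
	switch req.Language {
	case "influxql":
		return r.InfluxQL.Query(ctx, w, req)
	case "flux":
		return r.Flux.Query(ctx, w, req)
	default:
		return fmt.Errorf("unsupported query language %q", req.Language)
	}
}
```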
this also extends dry run to accept env refs; the refactoring was to
enable that bit. Being able to dry run with the env ref entries means we
can dry run the pkg with the env ref values and see the impact before
the apply takes place.
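A hedged sketch of what supplying env refs to a dry run could look like;
the option and service signatures here are assumptions, not the actual
pkger API:

```go
package pkgersketch

import "context"

// Hypothetical signatures to show the idea; the real pkger service API
// differs. Supplying env-ref values at dry-run time previews the apply
// with those values substituted in.
type Summary struct{} // describes what the apply would change

type applyOpt struct {
	envRefs map[string]string
}

// ApplyOpt configures a dry run or an apply.
type ApplyOpt func(*applyOpt)

// ApplyWithEnvRefs supplies concrete values for env references in the pkg.
func ApplyWithEnvRefs(refs map[string]string) ApplyOpt {
	return func(o *applyOpt) { o.envRefs = refs }
}

// Service sketches the dry-run entry point that accepts those options.
type Service interface {
	DryRun(ctx context.Context, orgID, userID string, opts ...ApplyOpt) (Summary, error)
}
```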
this is the last step for pkger to follow the service definition pattern
that is in the works. Some bits from the http pkg were moved into
kit/transport/http for reusability. The end result is to hopefully axe
the http pkg in favor of reusable types in kit. Long ways off still...
* feat(backup): `influx backup` creates data backup
* feat(backup): initial restore work
* feat(restore): initial restore impl
Adds a restore tool which does offline restore of data and metadata.
* fix(restore): pr cleanup
* fix(restore): fix data dir creation
* fix(restore): pr cleanup
* chore: amend CHANGELOG
* fix: restore to empty dir fails differently
* feat(backup): backup and restore credentials
Saves the credentials file to backups and restores it from backups.
Additionally adds some logging for errors when fetching backup files.
* fix(restore): add missed commit
* fix(restore): pr cleanup
* fix(restore): fix default credentials restore path
* fix(backup): actually copy the credentials file for the backup
* fix: dirs get 0777, files get 0666
* fix: small review feedback
Co-authored-by: tmgordeeva <tanya@influxdata.com>