The Export behavior (renamed from CreatePkg) now accepts a stackID as
another input. With this we are able to remove the extra
/api/v2/packages/stacks/:stack_id/export endpoint; stack export now fits
into the /api/v2/packages/export endpoint as another request body
parameter. This also keeps all export functionality in the same place,
encapsulated in both the endpoint and the service layer :-).
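A minimal sketch of what a request might look like now, assuming a JSON
body field named stackID (the exact field name here is an assumption,
not taken from the handler):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// exportReq is a hypothetical request body; the real handler may name
// the stack field differently.
type exportReq struct {
	StackID string `json:"stackID"`
}

func main() {
	body, _ := json.Marshal(exportReq{StackID: "052bd0b6aa2c1000"})

	// Everything exports through the one endpoint now; no more
	// /api/v2/packages/stacks/:stack_id/export.
	resp, err := http.Post(
		"http://localhost:8086/api/v2/packages/export",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```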
references: #18646
This also makes it so that an association (label) added to a resource is
included in the returned output. One test was changed as part of this
work, specifically to cover this change in behavior.
references: #18646
Annotate the context with feature flags when handling Flux queries in
influxdb. The Flux end-to-end tests take advantage of this by using a
custom flagger that can set overrides based on the test case that is
about to run, allowing us to enable features in the end-to-end tests.
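A minimal sketch of the per-test override idea, assuming a Flagger-style
interface; the interface shape and the flag name are illustrative, not
the exact influxdb feature-package API:

```go
package main

import (
	"context"
	"fmt"
)

// Flagger resolves feature flag values. This mirrors the shape of a
// feature-flag interface but is a stand-in, not the influxdb one.
type Flagger interface {
	Flags(ctx context.Context) (map[string]interface{}, error)
}

// testFlagger wraps a base Flagger and applies per-test overrides,
// letting each end-to-end test case turn features on before it runs.
type testFlagger struct {
	base      Flagger
	overrides map[string]interface{}
}

func (f *testFlagger) Flags(ctx context.Context) (map[string]interface{}, error) {
	flags, err := f.base.Flags(ctx)
	if err != nil {
		return nil, err
	}
	for k, v := range f.overrides {
		flags[k] = v // the test case's override wins
	}
	return flags, nil
}

// defaultFlagger stands in for the normal defaults.
type defaultFlagger struct{}

func (defaultFlagger) Flags(context.Context) (map[string]interface{}, error) {
	return map[string]interface{}{"pushDownWindowAggregate": false}, nil
}

func main() {
	f := &testFlagger{
		base:      defaultFlagger{},
		overrides: map[string]interface{}{"pushDownWindowAggregate": true},
	}
	flags, _ := f.Flags(context.Background())
	fmt.Println(flags) // map[pushDownWindowAggregate:true]
}
```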
* feat: start using the new org handler from the tenant service.
The rest of the tenant system is in place except for the org HTTP API
handler and the user API handler.
* fix: update the label service in org handler and add links
* feat(storage): first array cursor
* feat: add first and last to rpc messages
* test(launcher): push down group first and group last
* feat(storage): window first array cursor
* test(launcher): push down bare first and bare last
* feat(storage): add capabilities for group first and group last
* refactor: rename first to limit
* refactor: make zero value for every period meaningful
* refactor: standardize launcher pushdown tests
This enables a new rule that will push down the full `aggregateWindow`
query, including the `duplicate` and `window(every: inf)` that recombine
the tables. When the full rule is used, the table is not split into
tables for each window and instead remains a single table. The start or
stop column is renamed to `_time`, and `_start` and `_stop` become the
boundaries of the query.
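For reference, a sketch of the query shape involved, embedded as a raw
string the way a launcher test would carry it. The Flux below is the
standard expansion of `aggregateWindow(every: 1m, fn: mean)`; the bucket
and range are illustrative:

```go
package main

import "fmt"

// fullPattern is the expanded form of aggregateWindow(every: 1m, fn: mean).
// The new rule can push down this entire pattern: because the duplicate
// and window(every: inf) are included, the result stays a single table,
// with _time taken from the _stop column of each window.
const fullPattern = `
from(bucket: "b")
	|> range(start: -1h)
	|> window(every: 1m)
	|> mean()
	|> duplicate(column: "_stop", as: "_time")
	|> window(every: inf)
`

func main() {
	fmt.Println(fullPattern)
}
```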
This adds a launcher test for the read window aggregate pushdown that
verifies the pushdown happens when a query with the appropriate pattern
is sent, that the output is correct, and that the metric signaling the
pushdown is incremented.
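A minimal sketch of the metric check, using the prometheus client
directly; the metric name is a hypothetical stand-in, not the actual
counter the launcher test inspects:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	reg := prometheus.NewRegistry()

	// Hypothetical counter standing in for whatever metric the storage
	// layer bumps when a read-window-aggregate pushdown happens.
	pushdowns := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "storage_read_window_aggregate_total",
		Help: "Number of queries served by the window aggregate pushdown.",
	})
	reg.MustRegister(pushdowns)

	pushdowns.Inc() // the query under test would cause this increment

	// Gather and assert the counter moved, the way a launcher test
	// verifies the pushdown actually fired.
	mfs, err := reg.Gather()
	if err != nil {
		panic(err)
	}
	for _, mf := range mfs {
		if mf.GetName() == "storage_read_window_aggregate_total" {
			got := mf.GetMetric()[0].GetCounter().GetValue()
			fmt.Printf("pushdowns = %v (want 1)\n", got)
		}
	}
}
```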
This bug surfaces when you do not provide a stack ID to the apply call:
the new stack is created, but the resources created alongside it are not
associated with it. This remedies that issue.
This exports all resources associated with a stack, using the same
metadata.name fields the original application used. It can be used to
snapshot the current state of the stack, for source control or other
purposes.
closes: #18271
Switch to the new user handler. We have been using the tenant backend for
some time now and just need to switch over to using tenant front to back.
We have reached the stage where the new tenant service is being used and
is stable, but we want to get it into more hands and make it the default
service.
Notes on this commit: it was grueling ;-(. The task API is not a friendly
API to consume. There are a lot of non-obvious things going on, and
almost every one of them tripped me up. Things of note:
* the http.TaskService does not satisfy the influxdb.TaskService,
  making it impossible to use as a dependency if the tasks service gets
  split out
* the APIs for create and update do not share common types. For example:
  creating a task takes every field as a string, but the update takes the
  same field as an options.Duration type (see the sketch after this
  list). A step further and you'll notice that create does not need an
  option to be provided, but the update does. It's jarring trying to
  understand the indirection here, and I struggled mightily trying to
  make sense of it all with the indirection and differing types. It made
  for a very difficult task (no pun intended) when it should have been
  trivial. There is an opportunity here to make this API more uniform and
  remove unnecessary complexity like the options type.
* Nested IDs that get marshaled are no bueno when you want to marshal a
  task that does not have an ID in it, whether the user, org, or the
  task's own ID. It's a challenge just to do that.
* There are lots of places in the kv.Task portion where we hit errors and
  log them, and others where we return them; it isn't clear what is
  happening. The kv implementation is also very procedural, and I found
  myself bouncing around like a ping-pong ball trying to make heads or
  tails of it.
* There is auth buried deep inside the kv.Task implementation that kept
  throwing me off because it logged errors instead of warnings. I assume
  (not sure if I'm correct on this) that what is being logged is
  considered inconsequential to the task working, but I had lots of
  errors from the auth buried in there and hadn't a clue what to make of
  them....
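To make the create/update type mismatch concrete, here is a sketch of the
shape of the two request types. These are illustrative stand-ins, not the
actual influxdb definitions:

```go
package main

import "fmt"

// taskCreate stands in for the create request, where scheduling fields
// arrive as plain strings.
type taskCreate struct {
	Flux  string `json:"flux"`
	Every string `json:"every"` // e.g. "1h" as a raw string
}

// duration stands in for the options.Duration type the update path
// expects instead of a string.
type duration struct {
	Magnitude int64
	Unit      string
}

// taskUpdate stands in for the update request, where the same field is
// wrapped in an options struct and a Duration type, and an options value
// must be provided even though create needs none.
type taskUpdate struct {
	Flux    *string
	Options struct {
		Every duration
	}
}

func main() {
	c := taskCreate{Flux: `option task = {name: "t", every: 1h}`, Every: "1h"}
	var u taskUpdate
	u.Options.Every = duration{Magnitude: 1, Unit: "h"}
	fmt.Printf("create: %+v\nupdate: %+v\n", c, u)
}
```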
Leaving these notes here as a look back at why working with tasks is so
difficult. This API can improve dramatically. I spent 5x as much time
trying to figure out how to use the task API, in procedural calls, as I
did writing the business logic that consumes it.... that's a scary
realization ;-(
references: #17434
This also drops a test that has been skipped for over a year. I tried
unskipping it, but it now fails for all sorts of reasons, even without
the race flag enabled.