The Export behavior (renamed from CreatePkg) now accepts a stackID as
another input. With this we are able to remove the additional endpoint
/api/v2/packages/stacks/:stack_id/export; stack export now fits into the
/api/v2/packages/export endpoint as another request body parameter. This also
keeps all export functionality in the same space, encapsulated both in the
endpoint and within the service layer :-).
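For illustration, a minimal sketch of the request body on the Go side, assuming
hypothetical struct and field names rather than the exact pkger types:

```go
package pkger

// ReqExport sketches the body accepted by /api/v2/packages/export.
// The field names here are assumptions for illustration only.
type ReqExport struct {
	// StackID, when set, exports all resources associated with that stack.
	StackID string `json:"stackID,omitempty"`
	// Resources lists individual resources to export when no stack is given.
	Resources []ExportResource `json:"resources,omitempty"`
}

// ExportResource identifies a single resource to export.
type ExportResource struct {
	Kind string `json:"kind"`
	ID   string `json:"id"`
}
```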
references: #18646
This also makes it so that an association (label) added to a resource is
included in the returned output. One test was changed as part of this work,
specifically to cover this change in behavior.
references: #18646
Annotate the context with feature flags when handling flux queries in influxdb.
The flux end-to-end tests take advantage of this by using a custom flagger that
can set overrides based on the test case that is about to run, allowing us
to enable features in the end-to-end tests.
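As a rough sketch (the interface below is illustrative, not the exact feature-flag
API in influxdb), the test flagger just layers per-test overrides on top of the
computed flags:

```go
package feature

import "context"

// Flagger is an illustrative stand-in for the real feature-flag interface.
type Flagger interface {
	Flags(ctx context.Context) (map[string]interface{}, error)
}

// TestFlagger applies per-test-case overrides on top of a base Flagger,
// letting end-to-end tests enable features before they run.
type TestFlagger struct {
	Base      Flagger
	Overrides map[string]interface{} // set by the test case about to run
}

func (f *TestFlagger) Flags(ctx context.Context) (map[string]interface{}, error) {
	flags, err := f.Base.Flags(ctx)
	if err != nil {
		return nil, err
	}
	for k, v := range f.Overrides {
		flags[k] = v // the test's override wins over the default
	}
	return flags, nil
}
```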
* feat: start using the new org handler from the tenant service.
The rest of the tenant system is in place except the org http api handler and the
user api handler.
* fix: update the label service in org handler and add links
Work had started on an operation log but it was never fully implemented and was later partially removed.
The plan here is to remove the partially defunct logs and then build a new, robust op log system to
replace them in the future.
This is following precedent established in `net/http`, by using a
shared `http.Transport`. This is necessary to ensure connections which
utilize HTTP keep-alive are reused, along with other benefits of
pooling.
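For example, a single package-level transport/client pair shared by all outgoing
requests (the tuning values below are illustrative defaults, not the ones used here):

```go
package client

import (
	"net/http"
	"time"
)

// sharedTransport is created once so keep-alive connections are pooled
// and reused across requests instead of being re-dialed each time.
var sharedTransport = &http.Transport{
	MaxIdleConns:        100,
	MaxIdleConnsPerHost: 10,
	IdleConnTimeout:     90 * time.Second,
}

// sharedClient wraps the shared transport; callers reuse this client
// rather than constructing a new one per request.
var sharedClient = &http.Client{
	Transport: sharedTransport,
	Timeout:   30 * time.Second,
}
```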
This adds the ability to export all resources associated with a stack, keyed by
the same metadata.name fields the original application used. It can be
used to snapshot the current state of the stack, whether for source control
or other purposes.
closes: #18271
This feature has been live for a while, but I left it out of the swagger doc
because I wanted to test it in the cloud environment before adding it to the doc.
Switch to use the new user handler. We have been using the tenant backend for some
time now and just need to switch over to using tenant front to back.
This commit checks http.Request.Context().Err() to see if the context
has been canceled before writing an error code. It uses the non-standard
Nginx 499 error code for client disconnection.
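A minimal sketch of the check (the handler wiring is illustrative):

```go
package http

import "net/http"

// writeError checks for a canceled request context before responding.
func writeError(w http.ResponseWriter, r *http.Request, err error) {
	if r.Context().Err() != nil {
		// The client already disconnected; use nginx's non-standard
		// 499 "Client Closed Request" instead of reporting a 5xx.
		w.WriteHeader(499)
		return
	}
	http.Error(w, err.Error(), http.StatusInternalServerError)
}
```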
Pipeline tests in idpe are setting up an `http.APIBackend` directly
without a constructor function, which causes the `AlgoWProxy` field
to be `nil` when exercising end-to-end tests. This makes the
structure a bit more defensive and falls back to using the no-op proxy
in the presence of a nil value.
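Conceptually, something like the sketch below; only the `AlgoWProxy` field name
comes from the actual change, everything else is a placeholder:

```go
package http

// The proxy interface and no-op type here are placeholders for the sketch.
type proxyHandler interface{ isNoop() bool }

type noopProxy struct{}

func (noopProxy) isNoop() bool { return true }

type APIBackend struct {
	AlgoWProxy proxyHandler
}

// proxy falls back to the no-op implementation when the backend was
// built without its constructor function and the field was left nil.
func (b *APIBackend) proxy() proxyHandler {
	if b.AlgoWProxy == nil {
		return noopProxy{}
	}
	return b.AlgoWProxy
}
```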
We have reached the stage where the new tenant service is being used and
is stable, but we want to get it into more hands and make it the default service.
This also drops a test that has been skipped for over a year. I tried
unskipping it, but it now fails for all sorts of reasons, even without the
race flag enabled.
_http.NewRequestWithContext_ (available since golang 1.13) ensures that the supplied context also controls the entire lifetime of a request and its response.
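For example (the URL and timeout below are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The context bounds the entire request, including reading the response.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://example.com/health", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err) // includes cancellation / deadline exceeded
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```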
This commit adds `user_id` as a tag for traces. It helps us look up and
filter the traces we need by userID.
OrgID is harder to get right, so I will open an issue, but it would be
nice to have it in as well.
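A hedged sketch using the OpenTracing API; the tag key and how the user ID is
pulled out of the request context are assumptions here:

```go
package trace

import (
	"context"

	"github.com/opentracing/opentracing-go"
)

// tagUserID attaches the authenticated user's ID to the active span so
// traces can be looked up and filtered by user.
func tagUserID(ctx context.Context, userID string) {
	if span := opentracing.SpanFromContext(ctx); span != nil {
		span.SetTag("user_id", userID)
	}
}
```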
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
Co-authored-by: George MacRorie <gmacrorie@influxdata.com>
Renaming Generate in anticipation of a new method that will onboard
users other than the initial user. The intent is to simplify multi-user
setups.
Co-authored-by: Chris Goller <goller@gmail.com>
* fix: allow authorized label service to be called indirectly
17071 exists because pkger loads all service resources as authorized on
start, resulting in them all being authorized even when referenced indirectly
(not hit directly via the api by a consumer). Rather than restructure pkger to
only authorize direct services, this allows proper indirect auth for
labels (the cause of 17071).
* Add orgService to tests
* Add resource types to find orgID from
This removes the spec and updates the lang package usage to pass
in the runtime as a parameter.
It removes all direct dependencies on the flux runtime from the http
package.
This moves a few types and constants to the global package so they can be
used without importing the `task/backend` package. These constants are
referenced in non-task-specific code.
This is needed to break a dependency chain where the task backend calls
into the flux runtime to perform parsing or evaluation of a script,
and to prevent the http package from inheriting that dependency.
The tasks subsystem will now use the flux language service to parse and
evaluate flux instead of directly interacting with the parser or
runtime. This helps break the dependency on the libflux parser for the
base influxdb package.
This includes the task notification packages which were changed at the
same time.
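Roughly, the idea is that tasks and http depend on a narrow language-service
interface rather than on the flux runtime itself. The interface below is a
sketch with assumed method shapes, not the real definition:

```go
package query

import "context"

// FluxLanguageService sketches the seam: callers parse and evaluate flux
// through this interface, and only its implementation imports the flux
// runtime/parser packages.
type FluxLanguageService interface {
	// Parse turns flux source into an AST (kept opaque in this sketch).
	Parse(source string) (interface{}, error)
	// EvalAST evaluates a previously parsed script.
	EvalAST(ctx context.Context, astPkg interface{}) error
}
```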
* fix(backup): handle backup with no credentials file
Backups and restores should work whether or not the original installation uses
a credentials file and whether or not the backup contains a credentials file.
* Revert "fix(kv): Don't stop when key not found from index."
This reverts commit bd9167d383.
* Revert "fix(kv): push down org ID to skip in delete URM (#16841)"
This reverts commit a5f508de77.
* Revert "fix(kv): delete authorization from correct index bucket (#16835)"
This reverts commit 7349216e94.
* Revert "feat(kv): Index Authorizations by User ID (#16818)"
This reverts commit df36fe957b.
* Revert "feat: add indexes to urm for user lookups (#16789)"
This reverts commit 9561d0a4f4.
Prior to this change, InfluxQL requests were sent to the same back end as Flux queries.
That MAY not always be the case. Now InfluxQL queries are specifically routed to the InfluxQLService.
In the case of this OSS build the FluxService and InfluxQLService are the same.
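The routing is conceptually just a switch on the request's language; the handler
and field names below are illustrative, not the actual types:

```go
package http

import (
	"context"
	"io"
)

// QueryService stands in for the flux/influxql backend services.
type QueryService interface {
	Query(ctx context.Context, w io.Writer, query string) error
}

// QueryHandler routes by language instead of sending everything to the
// flux backend; in OSS both fields may point at the same service.
type QueryHandler struct {
	FluxService     QueryService
	InfluxQLService QueryService
}

func (h *QueryHandler) route(ctx context.Context, w io.Writer, lang, q string) error {
	if lang == "influxql" {
		return h.InfluxQLService.Query(ctx, w, q)
	}
	return h.FluxService.Query(ctx, w, q)
}
```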
Two issues came out of investigating this error. First, the status check func
did not identify that it was an unsupported media type issue and tried to unmarshal
the empty response body. Second, the doubled content type headers were
causing an error. Locally this error does not surface and I cannot reproduce it on
macOS, but in cloud it is persistent.
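A sketch of the hardened status check (the error shape below is illustrative):

```go
package http

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// checkResponse surfaces unsupported-media-type responses explicitly
// instead of trying to unmarshal an empty body as a JSON error.
func checkResponse(resp *http.Response) error {
	if resp.StatusCode/100 == 2 {
		return nil
	}
	if resp.StatusCode == http.StatusUnsupportedMediaType {
		return fmt.Errorf("unsupported media type: %q", resp.Header.Get("Content-Type"))
	}
	var apiErr struct {
		Code string `json:"code"`
		Msg  string `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&apiErr); err != nil {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return fmt.Errorf("%s: %s", apiErr.Code, apiErr.Msg)
}
```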
closes: #16819
* feat(kv): add user id index on authorizations
* chore(auths): test FindAuthorizations both with and without a populated index
* chore(kv): cleanup index skipping flag in auths service
* fix(kv): bad flag around auth by user index population
* fix(kv): auth by user index lookup use correct buckets
* chore(kv): ensure indexer is called as expected when auth user index missing
* chore(kv): add benchmarks around authorization lookup
This change allows for the InfluxQL language type to be used with the
/v2/query API endpoint.
This change also introduces a way to give the transpiler an explicit
bucket name instead of using the DBRPMapping service.
Requests to the endpoint will know the bucket name directly but will
likely not have run the migration step to populate the DBRP mappings.
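Shape-wise, the request now carries a language type and an optional bucket; the
field names here are an assumption for illustration, not the exact API contract:

```go
package http

// QueryRequest sketches the parts of the /api/v2/query body relevant to
// this change.
type QueryRequest struct {
	Query string `json:"query"`
	// Type selects the language, e.g. "flux" or "influxql".
	Type string `json:"type"`
	// Bucket lets the transpiler target a bucket directly, bypassing the
	// DBRP mapping lookup when those mappings were never populated.
	Bucket string `json:"bucket,omitempty"`
}
```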
This is the last step for pkger to follow the service definition pattern
that is in the works. Some bits from http were moved into kit/transport/http
for reusability. The end result is to hopefully axe the http pkg in favor of
reusable types in kit. Long ways off still...
* refactor: move views logic to separate directory
* refactor: normalize views
* fix: spinners
* fix: dont render views until status is done
* fix(http/dashboards): view shape not returning from getDashboard
* test: delete irrelevant and redundant test
* fix: go tidy
* test: skipping monaco test
* chore: sort type exports
* chore: cleanup
* feat(backup): `influx backup` creates data backup
* feat(backup): initial restore work
* feat(restore): initial restore impl
Adds a restore tool which does offline restore of data and metadata.
* fix(restore): pr cleanup
* fix(restore): fix data dir creation
* fix(restore): pr cleanup
* chore: amend CHANGELOG
* fix: restore to empty dir fails differently
* feat(backup): backup and restore credentials
Saves the credentials file to backups and restores it from backups.
Additionally adds some logging for errors when fetching backup files.
* fix(restore): add missed commit
* fix(restore): pr cleanup
* fix(restore): fix default credentials restore path
* fix(backup): actually copy the credentials file for the backup
* fix: dirs get 0777, files get 0666
* fix: small review feedback
Co-authored-by: tmgordeeva <tanya@influxdata.com>
This also makes the yaml decoder the default. Too often we end up with application/octet-stream,
which is the default for many different mime types. This provides a mechanism
around that, so that when the automagical detection fails the user
can provide the encoding via the CLI.
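The selection logic is roughly the following; the names and the idea of a CLI-provided
choice are illustrative, not the exact implementation:

```go
package pkger

// Encoding identifies how a package payload should be decoded.
type Encoding int

const (
	EncodingYAML Encoding = iota
	EncodingJSON
)

// encodingFor prefers an explicit user choice (e.g. from a CLI flag) and
// otherwise falls back to YAML, since application/octet-stream says
// almost nothing about the real format.
func encodingFor(contentType string, userChoice *Encoding) Encoding {
	if userChoice != nil {
		return *userChoice
	}
	switch contentType {
	case "application/json":
		return EncodingJSON
	default: // application/octet-stream and other ambiguous types
		return EncodingYAML
	}
}
```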
* feat(checks): Add custom check type
* feat(checks): Remove alert builder from custom check
* feat(checks): Add AlertBuilderAction to list of possible actions
* feat(checks): Query visualization does not make sense for custom check
* feat(check): check editor should only reexecute queries if view query changes
* Update ui/src/timeMachine/components/TimeMachineFluxEditor.tsx
Co-Authored-By: Bucky Schwarz <hoorayimhelping@users.noreply.github.com>
* Address PR review
Co-authored-by: Bucky Schwarz <hoorayimhelping@users.noreply.github.com>
feat(ui): added last run status checks for notification rules and check rules, re-added updateCheck to fix linter and functionality issues with the program, and added tests to ensure check creation and update stability
This work is to support pkger, but it was also possible to add back in the
skipped tests. We were seeing failures upstream and didn't catch them in
influxdb b/c the tests were being skipped.
closes: #14799
This is a blocker for anyone who hits the endpoint services internally. They
had to know that they also needed to know of the secret service, then do all that
put/delete work alongside the operation. This makes that unified inside the store tx.
One other thing this does is make obvious the dependencies that the
notification services have. In this case it is the secrets service they
depend on.
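Conceptually the secret write now rides along in the same transaction as the
endpoint write; everything in the sketch below is a placeholder for the real
store types:

```go
package endpoints

import "context"

// Placeholder types standing in for the kv transaction and endpoint model.
type Tx interface{}

type Endpoint struct {
	Secrets map[string]string
}

type Store struct {
	// update wraps a kv transaction, standing in for the real kv Update.
	update func(ctx context.Context, fn func(Tx) error) error
}

func (s *Store) putEndpoint(tx Tx, e Endpoint) error               { return nil }
func (s *Store) putSecrets(tx Tx, secrets map[string]string) error { return nil }

// CreateNotificationEndpoint writes the endpoint and its secrets in one
// transaction, so callers no longer coordinate the secret put/delete
// alongside the operation themselves.
func (s *Store) CreateNotificationEndpoint(ctx context.Context, e Endpoint) error {
	return s.update(ctx, func(tx Tx) error {
		if err := s.putEndpoint(tx, e); err != nil {
			return err
		}
		return s.putSecrets(tx, e.Secrets)
	})
}
```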
Noticed that I had not used the http server as the entry point for server tests.
This was work to make that happen. Along the way, I found a bunch of issues I hadn't
seen before 🤦. There are a number of changes tucked away inside the
other types that make it possible to encode/decode a type with a zero value for
influxdb.ID.
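The gist of the ID change, sketched with an illustrative wrapper type (the real
changes live on the existing types), is to let a zero ID round-trip as an empty
string instead of failing to encode:

```go
package http

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// ID mimics influxdb.ID for this sketch: a uint64 whose zero value means "unset".
type ID uint64

// MarshalJSON encodes a zero ID as "" rather than returning an error.
func (i ID) MarshalJSON() ([]byte, error) {
	if i == 0 {
		return []byte(`""`), nil
	}
	return json.Marshal(fmt.Sprintf("%016x", uint64(i)))
}

// UnmarshalJSON accepts the empty string and maps it back to the zero ID.
func (i *ID) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	if s == "" {
		*i = 0 // zero value decodes cleanly instead of erroring
		return nil
	}
	v, err := strconv.ParseUint(s, 16, 64)
	if err != nil {
		return err
	}
	*i = ID(v)
	return nil
}
```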
* added date-time format for start and stop DeletePredicateRequest
* fixed malformed reference to ViewProperties in PkgChart model
* define separate model for RetentionRule as is in Organizations, Buckets, Labels
* "labels" property from Check and PostCheck should be part of CheckBase (it is ancestor for all Check types)
* "labels" property from NotificationRule and PostNotificationRule should be part of NotificationRuleBase (it is ancestor for all NotificationRule and types)
* "labels" property from NotificationEndpoint and PostNotificationEndpoint should be part of NotificationEndpointBase (it is ancestor for all NotificationEndpoint and types)
* The url property of HTTPNotificationRuleBase should not be required
* Added query link for CheckBase and NotificationRuleBase
Note: tests are seriously borked here. Cannot reuse any existing testing
as the setup is very particular and the http layer doesn't support everything.
That being said, there is going to be implicit testing in the
`launcher/pkger_test.go` file. This feels broken, and probably needs to be
readdressed before we GA a 2.0 influxdb....