this is a blocker for anyone who hits the endpoint services internally. They
had to know that they also need to know about the secret service, then do all that
put/delete work alongside the endpoint operation. This change unifies that work inside the store tx.
One other thing this does is make the dependencies of the notification
endpoint service obvious: in this case, it depends on the secrets service.
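A minimal sketch of the idea, with hypothetical interfaces standing in for the real store and secret service (the names and signatures below are assumptions, not the actual influxdb APIs):

```go
package endpoints

import "context"

// SecretService is a stand-in for the secrets dependency; the real
// interface lives elsewhere in the influxdb codebase.
type SecretService interface {
	PutSecret(ctx context.Context, orgID uint64, key, value string) error
	DeleteSecret(ctx context.Context, orgID uint64, keys ...string) error
}

// Store is a stand-in for the KV store transaction API.
type Store interface {
	Update(ctx context.Context, fn func(tx Tx) error) error
}

type Tx interface {
	PutEndpoint(ctx context.Context, e Endpoint) error
}

type Endpoint struct {
	OrgID   uint64
	Secrets map[string]string
}

// Service makes the secrets dependency explicit and writes the secrets
// alongside the endpoint in the same update, so callers no longer have
// to orchestrate the secret put/delete themselves.
type Service struct {
	store   Store
	secrets SecretService
}

func (s *Service) CreateEndpoint(ctx context.Context, e Endpoint) error {
	return s.store.Update(ctx, func(tx Tx) error {
		for k, v := range e.Secrets {
			if err := s.secrets.PutSecret(ctx, e.OrgID, k, v); err != nil {
				return err
			}
		}
		return tx.PutEndpoint(ctx, e)
	})
}
```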
Noticed that I had not used the http server as the entry point for server tests;
this was the work to make that happen. Along the way, I found a bunch of issues I hadn't
seen before 🤦. There are a number of changes tucked away inside the
other types that make it possible to encode/decode a type with a zero value for
influxdb.ID.
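As a rough illustration of the zero-value handling (a sketch only, using a simplified ID type rather than the real influxdb.ID implementation):

```go
package example

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// ID mirrors the idea of influxdb.ID: a uint64 rendered as a hex string
// in JSON, where the zero value means "not set".
type ID uint64

// MarshalJSON emits an empty string for the zero value instead of
// failing, so types embedding an unset ID can still be encoded.
func (i ID) MarshalJSON() ([]byte, error) {
	if i == 0 {
		return json.Marshal("")
	}
	return json.Marshal(fmt.Sprintf("%016x", uint64(i)))
}

// UnmarshalJSON accepts an empty string and leaves the ID at its zero
// value rather than returning an error.
func (i *ID) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	if s == "" {
		*i = 0
		return nil
	}
	v, err := strconv.ParseUint(s, 16, 64)
	if err != nil {
		return err
	}
	*i = ID(v)
	return nil
}
```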
* added date-time format for start and stop in DeletePredicateRequest
* fixed malformed reference to ViewProperties in PkgChart model
* defined a separate model for RetentionRule, as is used in Organizations, Buckets, Labels
* "labels" property from Check and PostCheck should be part of CheckBase (it is the ancestor of all Check types)
* "labels" property from NotificationRule and PostNotificationRule should be part of NotificationRuleBase (it is the ancestor of all NotificationRule types)
* "labels" property from NotificationEndpoint and PostNotificationEndpoint should be part of NotificationEndpointBase (it is the ancestor of all NotificationEndpoint types)
* The url property of HTTPNotificationRuleBase should not be required
* Added query link for CheckBase and NotificationRuleBase
Note: tests are seriously borked here. We cannot reuse any of the existing testing
as the setup is very particular and the http layer doesn't support everything.
That being said, there is going to be implicit testing in the
`launcher/pkger_test.go` file. This feels broken, and probably needs to be
readdressed before we GA a 2.0 influxdb....
this is a step towards providing a shared http client that manages connection pooling
and timeouts, and reduces GC by not creating/GCing a client on each request. Bring on the red!
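A minimal sketch of what such a shared client could look like using only the standard library (the tuning values here are placeholders, not the settings this PR adopts):

```go
package httpc

import (
	"net"
	"net/http"
	"time"
)

// SharedClient is created once and reused, so connections are pooled
// across requests instead of a new client (and its idle connections)
// being created and garbage-collected on every request.
var SharedClient = &http.Client{
	Timeout: 30 * time.Second, // overall per-request timeout
	Transport: &http.Transport{
		DialContext: (&net.Dialer{
			Timeout:   5 * time.Second,
			KeepAlive: 30 * time.Second,
		}).DialContext,
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     90 * time.Second,
		TLSHandshakeTimeout: 10 * time.Second,
	},
}
```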
Now, traces will have all headers except authorization and user-agent.
Additionally, I've removed the referer and remoteaddr as they did not
contribute much.
Eventually, this may change to recording just a small set of headers.
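Roughly, the filtering could look like the sketch below (a simplified stand-in; the real code records these as span tags in the tracing package, and the exact tag naming is an assumption):

```go
package tracing

import (
	"net/http"
	"strings"
)

// skippedHeaders are either sensitive or add little value to a trace.
var skippedHeaders = map[string]struct{}{
	"Authorization": {},
	"User-Agent":    {},
}

// headerTags returns the request headers that should be recorded on a
// span, keyed as "http.header.<name>".
func headerTags(r *http.Request) map[string]string {
	tags := make(map[string]string, len(r.Header))
	for name, values := range r.Header {
		if _, skip := skippedHeaders[http.CanonicalHeaderKey(name)]; skip {
			continue
		}
		tags["http.header."+strings.ToLower(name)] = strings.Join(values, ", ")
	}
	return tags
}
```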
This removes the bucket name validation from the KV BucketService,
and moves it to the http implementation of the service.
The effect is that requests from API users still get validated, but direct KV access does not.
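In other words, the handler validates before delegating to the underlying service, roughly like this sketch (the handler and validation rules shown are illustrative, not the actual ones):

```go
package http

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// BucketHandler is a trimmed-down stand-in for the real handler type.
type BucketHandler struct{}

type postBucketRequest struct {
	Name string `json:"name"`
}

// validBucketName is an illustrative rule; the real validation lives in
// the http implementation of the BucketService.
func validBucketName(name string) error {
	if name == "" || strings.HasPrefix(name, "_") {
		return fmt.Errorf("bucket name %q is invalid", name)
	}
	return nil
}

func (h *BucketHandler) handlePostBucket(w http.ResponseWriter, r *http.Request) {
	var req postBucketRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Validation happens at the HTTP boundary; the KV BucketService
	// beneath it no longer enforces the name rules.
	if err := validBucketName(req.Name); err != nil {
		http.Error(w, err.Error(), http.StatusUnprocessableEntity)
		return
	}
	// ... delegate to the underlying BucketService ...
}
```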
* chore: Remove several instances of WithLogger
* chore: unexport Logger fields
* chore: unexport some more Logger fields
* chore: go fmt
* chore: fix test
* chore: s/logger/log
* chore: fix test
* chore: revert http.Handler.Handler constructor initialization
* refactor: integrate review feedback, fix all test nop loggers
* refactor: capitalize all log messages
* refactor: rename two logger to log
* fix(http): Return an empty array if organization has no secret keys
* fix(http): Return an empty array if organization has no secret keys instead of nil (see the sketch below)
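The fix amounts to making sure the slice serializes as `[]` rather than `null`; a minimal illustration (not the actual handler code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSecrets []string    // encodes as null
	emptySecrets := []string{} // encodes as []

	a, _ := json.Marshal(map[string]interface{}{"secrets": nilSecrets})
	b, _ := json.Marshal(map[string]interface{}{"secrets": emptySecrets})
	fmt.Println(string(a)) // {"secrets":null}
	fmt.Println(string(b)) // {"secrets":[]}
}
```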
* fix(endpoint): when looking up an endpoint we should allow org-only lookup
In the current system the API always adds "UserID" to the filter, which only
allows the system to look up endpoints that the user created. The behavior should be
that we filter based on user input and use the authorizer to hide things they shouldn't see (sketched below).
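A hypothetical sketch of that filter shape, where optional fields only narrow the lookup when the caller actually set them (the types and names are illustrative):

```go
package endpoints

// Filter uses pointers so "not provided" is distinguishable from a
// concrete value; only set fields narrow the lookup.
type Filter struct {
	OrgID  *uint64
	UserID *uint64
}

func matches(endpointOrg, endpointUser uint64, f Filter) bool {
	if f.OrgID != nil && *f.OrgID != endpointOrg {
		return false
	}
	// UserID is applied only when the caller asked for it, rather than
	// being injected unconditionally by the API layer. Visibility is
	// then enforced by the authorizer, not by forcing a UserID filter.
	if f.UserID != nil && *f.UserID != endpointUser {
		return false
	}
	return true
}
```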
* feat(tracing): don't trace spans with full URL path names in ExtractFromHTTPRequest
* chore(multiple): replace all occurrences of julienschmidt/httprouter with influxdata/httprouter
The errors changed during the pkger http server error improvements, and this
fixes them to be similar to what they once were, only now in a flatter shape
very similar to the pkger http server apply response.
One thing to note here is that a new endpoint was created: there was no
working endpoint for setting an initial password. The existing endpoint
was a bit messy and coupled across multiple routes, and having multiple auth
schemes proved incredibly taxing to write against.
This also adds some extra user friendliness: it sorts the pkg created via an
export by resource kind, and it title-cases the kinds to make them match the
documentation, even though the kind is case insensitive. Easier to read this
way.
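The sort-and-title step could look roughly like this (a sketch; the real pkger types and kind names differ):

```go
package pkger

import (
	"sort"
	"strings"
)

// Resource is a trimmed-down stand-in for an exported pkg resource.
type Resource struct {
	Kind string
	Name string
}

// normalizeExport sorts resources by kind (then name) and title-cases
// the kind so the output matches the documentation, even though kinds
// are matched case-insensitively on the way in.
func normalizeExport(resources []Resource) {
	for i := range resources {
		resources[i].Kind = strings.Title(strings.ToLower(resources[i].Kind))
	}
	sort.Slice(resources, func(i, j int) bool {
		if resources[i].Kind != resources[j].Kind {
			return resources[i].Kind < resources[j].Kind
		}
		return resources[i].Name < resources[j].Name
	})
}
```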
No associations are included at this time. This also fixes the http response to be just
the pkg without the envelope. Having that envelope makes the API icky to
work with from a shell script or when just saving the output to a file; it feels more
organic to drop the envelope.
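For illustration, the handler-level difference amounts to encoding the pkg itself instead of a wrapper (a sketch, not the actual pkger response types):

```go
package pkger

import (
	"encoding/json"
	"net/http"
)

type Pkg struct {
	APIVersion string      `json:"apiVersion"`
	Kind       string      `json:"kind"`
	Spec       interface{} `json:"spec"`
}

func writeExport(w http.ResponseWriter, pkg Pkg) error {
	w.Header().Set("Content-Type", "application/json")
	// Before: the body was a wrapper like {"pkg": {...}}.
	// After: the pkg is the whole body, so `curl ... > my.pkg` just works.
	return json.NewEncoder(w).Encode(pkg)
}
```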
* fix(flakey-test): refactored getSortedBucketNames for more consistency and predictability. Finished DWP API functionality
* fix(FilterRow): removed unnecessary FeatureFlag from component
* chore: updated yml and tests to reflect API changes
* feat(auth): add createdAt and updatedAt to authorization
Co-Authored-By: Ariel <ariel.salem1989@gmail.com>
* feat(auth): passing createAuth tests
* test: ensured that createdAt and updatedAt are valid on authorizations
Previously we overwrote the task's existing latestCompleted to be used for both latestCompleted and latestScheduled.
For obvious reasons this is confusing and misleading. I believe that by separating the two fields we can have a clear separation
of concerns.
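Conceptually the task record goes from one overloaded timestamp to two distinct ones, something like this simplified sketch (not the actual Task struct):

```go
package tasks

import "time"

// Task keeps the two notions apart: when a run last finished versus
// when the scheduler last queued one.
type Task struct {
	ID              uint64
	LatestCompleted time.Time // last run that actually completed
	LatestScheduled time.Time // last run handed to the scheduler
}
```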
* feat(kv): unique variable names
- adds system bucket for creating an index of unique variable names (sketched below)
- adds tests
- deleted unit tests for dead code
- removed a test runner for the variable service from http
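A rough sketch of how a uniqueness index in a system bucket can work inside a KV update (the bucket name and store interface here are hypothetical):

```go
package kv

import (
	"context"
	"fmt"
)

// Tx is a stand-in for the KV transaction used by the store.
type Tx interface {
	Get(bucket, key []byte) ([]byte, error)
	Put(bucket, key, value []byte) error
}

var variableIndexBucket = []byte("variablesindexv1") // hypothetical system bucket

// uniqueVariableName reserves the name in the index, failing if another
// variable already claimed it within the same org.
func uniqueVariableName(ctx context.Context, tx Tx, orgID, name string, variableID []byte) error {
	key := []byte(orgID + "/" + name)
	if existing, _ := tx.Get(variableIndexBucket, key); existing != nil {
		return fmt.Errorf("variable name %q already exists", name)
	}
	return tx.Put(variableIndexBucket, key, variableID)
}
```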
* feat(task): Allow tasks to run more isolated from other task systems
To allow the internal task system to be used for user-created tasks as well
as checks, notifications, and other future additions, we needed to take 2 actions:
1 - We need to treat type as a first-class citizen, meaning that tasks have a type
and each system that creates tasks sets the task type through the API.
This is a change from the previous assumption that any user could set task types. This change
will allow other services to white-label the task service for their own purposes without
having to worry about collisions between the types.
2 - We needed to allow other systems to add data specific to the problem they are trying to solve.
For this purpose we are adding a `metadata` field to the internal task system, which should allow other systems to
use the task service.
These changes will allow us, in the future, to let the current checks and notifications implementations
create a task with metadata instead of creating both a check object and a task object in the database.
With this new behavior, checks, notifications, and user tasks can all follow the same pattern:
field an API request in a system-specific http endpoint, use a small translation to the `TaskService` function call,
translate the results to what the API expects for that system, and return the results.
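A simplified sketch of the shape this implies for the task model and a system-specific handler (field names and the `TaskService` signature are illustrative assumptions):

```go
package taskmodel

import "context"

// Task carries a type owned by the creating system plus free-form
// metadata for that system's own bookkeeping.
type Task struct {
	ID       uint64
	Type     string // e.g. "check", "notification", or empty for user tasks
	Flux     string
	Metadata map[string]interface{} // system-specific data, opaque to the task system
}

type TaskService interface {
	CreateTask(ctx context.Context, t Task) (Task, error)
}

// createCheckTask shows the pattern: a system-specific endpoint does a
// small translation into the TaskService call and translates the result
// back into whatever its own API expects.
func createCheckTask(ctx context.Context, ts TaskService, flux string, checkMeta map[string]interface{}) (Task, error) {
	return ts.CreateTask(ctx, Task{
		Type:     "check",
		Flux:     flux,
		Metadata: checkMeta,
	})
}
```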
* fix(task): undo additional check for ownerID because check is not ready