A KV store may optionally implement a key predicate
function to filter on keys, which can reduce memory and CPU usage
(a sketch of the idea follows the numbers below).
Expected user resource mapping improvements for a single user:
Before: 3813719 ns/op    4170142 B/op    33 allocs/op
After:   316869 ns/op        816 B/op    15 allocs/op
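As a rough sketch of the shape such a hook can take (the names `KeyPredicateFn` and `forEach` below are illustrative, not the actual influxdb API):

    package kv

    // Cursor iterates keys in order, loosely modeled on kv.Bucket#Cursor.
    type Cursor interface {
        First() (key, value []byte)
        Next() (key, value []byte)
    }

    // Bucket is the minimal surface this sketch needs.
    type Bucket interface {
        Cursor() (Cursor, error)
    }

    // KeyPredicateFn reports whether a key should be visited at all.
    // Returning false lets the store skip the value entirely, which is
    // where the memory and CPU savings come from.
    type KeyPredicateFn func(key []byte) bool

    // forEach walks the bucket, applying the optional predicate before
    // the visit callback ever sees the pair.
    func forEach(b Bucket, pred KeyPredicateFn, visit func(k, v []byte) error) error {
        cur, err := b.Cursor()
        if err != nil {
            return err
        }
        for k, v := cur.First(); k != nil; k, v = cur.Next() {
            if pred != nil && !pred(k) {
                continue // filtered out without materializing the value
            }
            if err := visit(k, v); err != nil {
                return err
            }
        }
        return nil
    }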
* Removed an unused function as cleanup.
* Users without true system buckets won't have fake buckets returned if they don't specify their org in the request, but this shouldn't break tasks.
* feat(auth): add createdAt and updatedAt to authorization
Co-Authored-By: Ariel <ariel.salem1989@gmail.com>
* feat(auth): passing createAuth tests
* test: ensured that createdAt and updatedAt are valid on authorizations
Previously we overwrote the task's existing latestCompleted and used it
for latestCompleted as well as latestScheduled. For obvious reasons this is
confusing and misleading. I believe that by separating the two fields we can
have a clear separation of concerns.
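Roughly, with hypothetical names (the real Task type carries more fields):

    package task

    import "time"

    // Task keeps the two watermarks apart so scheduler bookkeeping
    // cannot clobber completion bookkeeping.
    type Task struct {
        LatestCompleted time.Time // last run that actually finished
        LatestScheduled time.Time // last run the scheduler queued
    }

    // SetLatestScheduled advances only the scheduling watermark.
    func (t *Task) SetLatestScheduled(ts time.Time) {
        if ts.After(t.LatestScheduled) {
            t.LatestScheduled = ts
        }
    }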
* feat(kv): unique variable names
- adds a system bucket holding an index of unique variable names (a sketch follows this list)
- adds tests
- deleted unit tests for dead code
- removed a test runner for the variable service from http
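A sketch of how an index bucket can back that uniqueness check; the bucket name and transaction interface here are assumptions, not the real kv package:

    package kv

    import (
        "errors"
        "fmt"
    )

    // hypothetical system bucket mapping variable name -> variable ID
    var variableIndexBucket = []byte("variablesindexv1")

    var errKeyNotFound = errors.New("key not found")

    // Tx is the minimal transactional surface the sketch assumes.
    type Tx interface {
        Get(bucket, key []byte) ([]byte, error)
        Put(bucket, key, value []byte) error
    }

    // claimVariableName fails if another variable already holds the name.
    func claimVariableName(tx Tx, name string, id []byte) error {
        _, err := tx.Get(variableIndexBucket, []byte(name))
        if err == nil {
            return fmt.Errorf("variable name %q already exists", name)
        }
        if !errors.Is(err, errKeyNotFound) {
            return err
        }
        return tx.Put(variableIndexBucket, []byte(name), id)
    }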
Implementations of the `kv.Bucket#Cursor` API may use
the hints to tune access or read behavior in the
underlying key/value store.
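For example, hints can be modeled as functional options; the option names below approximate the idea rather than quote the package:

    package kv

    // CursorHints collects optional advice a caller passes when opening
    // a cursor; a store is free to ignore any of it.
    type CursorHints struct {
        KeyPrefix *string
        KeyStart  *string
    }

    // CursorHint mutates one field of the hint set.
    type CursorHint func(*CursorHints)

    // WithCursorHintPrefix advises the store that only keys under the
    // given prefix will be read.
    func WithCursorHintPrefix(prefix string) CursorHint {
        return func(h *CursorHints) { h.KeyPrefix = &prefix }
    }

    // WithCursorHintKeyStart advises the store where reads will begin.
    func WithCursorHintKeyStart(start string) CursorHint {
        return func(h *CursorHints) { h.KeyStart = &start }
    }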
The `findAllTasks` function was also fixed to ensure
that paging works as expected when using a name filter.
Tests were added to verify this behavior.
Redundant error checks were also removed.
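The subtle part is that the filter has to be applied before offset and limit are counted, otherwise pages come back short or shifted; schematically:

    package task

    type task struct{ Name string }

    // filterThenPage shows the required order of operations: rows
    // rejected by the name filter must not consume the page window.
    func filterThenPage(tasks []task, name string, offset, limit int) []task {
        var out []task
        skipped := 0
        for _, t := range tasks {
            if name != "" && t.Name != name {
                continue // filter first
            }
            if skipped < offset {
                skipped++ // then pay down the offset
                continue
            }
            if len(out) == limit {
                break // then cap at the limit
            }
            out = append(out, t)
        }
        return out
    }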
* feat(http): label names to be unique
* feat(http): should work for updates as well
* chore: commented out former work. added a failing test
* feat: ensure label uniqueness & test cases
* feat: updating labels ensures uniqueness
* fix: fixes a failing unrelated test
* chore: update changelog
* feat(task): Allow tasks to run more isolated from other task systems
To allow the internal task system to be used for user-created tasks as well
as checks, notifications, and other future additions, we needed to take 2 actions:
1 - We need to use type as a first-class citizen, meaning that tasks have a type
and each system that creates tasks sets the task type through the api.
This is a change to the previous assumption that any user could set task types. This change
will allow us to have other services white-label the task service for their own purposes and not
have to worry about collisions between the types.
2 - We needed to allow other systems to add data specific to the problem they are trying to solve.
For this purpose we added a `metadata` field to the internal task system, which should allow other
systems to use the task service (see the sketch after this note).
These changes will allow us in the future to let the current checks and notifications implementations
create a task with metadata instead of creating both a check object and a task object in the database.
With this new behavior, checks, notifications, and user tasks can all follow the same pattern:
field an api request at a system-specific http endpoint, do a small translation to the `TaskService` function call,
translate the results to what the api expects for this system, and return them.
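A sketch of the two additions (field shapes here are illustrative):

    package task

    // Task with the two additions: Type namespaces tasks per owning
    // system, Metadata carries that system's private payload.
    type Task struct {
        ID       string
        Type     string            // e.g. "check", "notification"; set via the api
        Metadata map[string]string // opaque to the task system itself
    }

    // tasksOfType keeps each consuming system isolated from the others.
    func tasksOfType(tasks []Task, typ string) []Task {
        var out []Task
        for _, t := range tasks {
            if t.Type == typ {
                out = append(out, t)
            }
        }
        return out
    }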
* fix(task): undo additional check for ownerID because check is not ready
* Add TraceSpan parameter to GetNotificationEndpoints operation
* Fixed handler path for a list of all labels for a notification endpoint
* Fixed filtering NotificationEndpoints by limit and offset
The http error schema has been changed to simplify the outward facing
API. The `op` and `error` attributes have been dropped because they
confused people. The `error` attribute will likely be re-added in some
form in the future, but only as additional context and will not be
required or even suggested for the UI to use.
Errors are now output differently both when they are serialized to JSON
and when they are output as strings. The `op` is no longer used if it is
present. It will only appear as an optional attribute if at all. The
`message` attribute for an error is always output and it will be the
prefix for any nested error. When this is serialized to JSON, the
message is automatically flattened so a nested error such as:
    influxdb.Error{
        Msg: "something bad happened",
        Err: io.EOF,
    }

would be written to the message as:
something bad happened: EOF
This matches a developer's expectations much more closely, as most
programmers assume that wrapping an error will act as a prefix for the
inner error.
This is flattened when written out to HTTP in order to make this logic
immaterial to a frontend developer.
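A sketch of the flattening rule (the real influxdb.Error carries more fields; only the prefixing behavior is shown here):

    package errs

    // Error renders Msg as a prefix of any nested error, which is what
    // the flattened HTTP message contains.
    type Error struct {
        Code string // categorizes the error; deliberately not in the message
        Msg  string
        Err  error
    }

    func (e *Error) Error() string {
        switch {
        case e.Msg != "" && e.Err != nil:
            return e.Msg + ": " + e.Err.Error()
        case e.Msg != "":
            return e.Msg
        case e.Err != nil:
            return e.Err.Error()
        }
        return ""
    }

    // (&Error{Msg: "something bad happened", Err: io.EOF}).Error()
    // yields "something bad happened: EOF".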
The code is still present and plays an important role in categorizing
the error type. On the other hand, the code will not be output as part
of the message as it commonly plays a redundant and confusing role when
humans read it. The human readable message usually gives more context
and a message with the code acting as a prefix is generally not
desired. But, the code plays a very important role in helping to
identify categories of errors and so it is very important as part of the
return response.
At times snowflake id generation would create org and bucket IDs with
characters that had special meaning for the storage engine.
The storage engine concatenates the org and bucket bytes together into a
single 128 bit value. That value is used in the old measurement
section. Measurement was transformed into the tag, _measurement.
However, certain properties of the older measurement data location
are still required for the org/bucket bytes. We cannot have
commas, spaces, or backslashes.
This PR puts a specific ID generator in place during the creation of
orgs and buckets. The IDs are just random numbers but with each
of the restricted chars incremented by one. While this changes the
entropy distribution somewhat, it does not matter too much for our
purposes.
... because org and bucket IDs are now checked for prior existence
transactionally in the key-value stores. If the ID already exists,
we try to generate a new one, up to 100 times.
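A sketch of the generator; the restricted bytes match the ones named above, the rest is illustrative:

    package idgen

    import "math/rand"

    // sanitize bumps each restricted byte by one so the ID can never
    // contain a comma, space, or backslash. This skews the entropy
    // distribution slightly, which is acceptable for our purposes.
    func sanitize(id uint64) uint64 {
        var out uint64
        for shift := uint(0); shift < 64; shift += 8 {
            b := byte(id >> shift)
            switch b {
            case ',', ' ', '\\':
                b++
            }
            out |= uint64(b) << shift
        }
        return out
    }

    // ID draws a random 64-bit value and sanitizes it.
    func ID(r *rand.Rand) uint64 {
        return sanitize(r.Uint64())
    }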
We need to only update the updated-at time when we receive an external request for
an update; LatestCompleted is an internal request from the scheduler.
When using an internal function in kv we can only make internal function calls:
by trying to use a public function we would attempt to take a read lock after acquiring a write lock.
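The hazard is that sync.RWMutex is not reentrant, so a public method that takes RLock deadlocks when called while the same goroutine holds Lock; schematically:

    package kv

    import "sync"

    type service struct {
        mu   sync.RWMutex
        data map[string]string
    }

    // FindByID is the public entry point and takes the read lock itself.
    func (s *service) FindByID(id string) (string, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return s.findByID(id)
    }

    // findByID assumes the caller already holds a lock.
    func (s *service) findByID(id string) (string, bool) {
        v, ok := s.data[id]
        return v, ok
    }

    // Update must call findByID, not FindByID: acquiring RLock while
    // holding Lock on the same RWMutex blocks forever.
    func (s *service) Update(id, v string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        if _, ok := s.findByID(id); ok {
            s.data[id] = v
        }
    }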
When a task was created we used to use a token; we then decided to switch to using
a user ID. To facilitate this change we pull the auth associated with the task
and use the auth's `GetUserID` method. This only works if the auth has not been deleted.
We need a fail-safe way to populate the ownerID in the circumstance that the auth associated
with the task has been deleted. To facilitate this we can pull the UserResourceMappings for the org
resource of type `Owner` and use an organization owner as the task owner. This makes sense because
any organization owner also owns every part of the organization, including tasks.
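A sketch of that fallback; the service interfaces and type names below are stand-ins for the influxdb ones:

    package task

    import (
        "context"
        "errors"
    )

    type Authorization struct{ UserID string }

    func (a *Authorization) GetUserID() string { return a.UserID }

    type Mapping struct{ UserID string }

    type AuthService interface {
        FindAuthorizationByID(ctx context.Context, id string) (*Authorization, error)
    }

    type URMService interface {
        // FindOrgOwners returns UserResourceMappings of type Owner for the org.
        FindOrgOwners(ctx context.Context, orgID string) ([]Mapping, error)
    }

    type Task struct{ ID, AuthorizationID, OrganizationID, OwnerID string }

    // populateOwnerID prefers the task's auth; if that auth is gone it
    // falls back to any owner of the task's organization.
    func populateOwnerID(ctx context.Context, t *Task, auths AuthService, urms URMService) error {
        if a, err := auths.FindAuthorizationByID(ctx, t.AuthorizationID); err == nil {
            t.OwnerID = a.GetUserID()
            return nil
        }
        owners, err := urms.FindOrgOwners(ctx, t.OrganizationID)
        if err != nil {
            return err
        }
        if len(owners) == 0 {
            return errors.New("no organization owner to fall back to")
        }
        t.OwnerID = owners[0].UserID
        return nil
    }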
fix(notification/check): include tags in check object in generated flux
Closes https://github.com/influxdata/influxdb/issues/14769
fix(notification/check): use selected field in threshold functions
Closes https://github.com/influxdata/influxdb/issues/14776
fix(testing): add selected field for check tests
fix(check): use real flux for threshold check
feat(notification/check): generate flux for deadman checks
chore(endpoint): rename webhook endpoint to http endpoint
fix(notification/rule): fetch url for flux script off of endpoint
fix(notification/rule): clean up slack and http rules
fix(notification/rule): change MessageTemp to MessageTemplate
fix(rules): pass endpoint in to rule during create
fix(ui): rename webhook to http
feat(notification/check): namespace deadman under alerts
fix(notification/check): nest tags under tags key in data object in flux
feat(kv): log error if urm cannot be deleted for notification rule
fix(notification/rule): remove name from notify call in slack rule
chore(ui/cypress/e2e): skip rule create test