This work is to support pkger, but it also adds back the previously
skipped tests. We were seeing failures upstream and didn't catch them in
influxdb because the tests were being skipped.
closes: #14799
In the event that findTaskByIDWithAuth cannot find the task ID contained
in the bucket, the outer loop will never terminate.
This ensures that we call Next() alongside any calls to continue
while using a Cursor.
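A minimal sketch of the fixed loop shape, using an illustrative bbolt-style Cursor interface: advancing in the for post statement guarantees Next() runs even on iterations that continue.

```go
package example

// Cursor mirrors a bbolt-style cursor; only First/Next are needed here.
type Cursor interface {
	First() (key, value []byte)
	Next() (key, value []byte)
}

// scan advances the cursor in the for post statement, so every
// iteration calls Next(), including iterations that continue. Calling
// Next() only at the bottom of the loop body would let a continue skip
// it and spin forever on the same key.
func scan(cur Cursor, matches func(v []byte) bool, handle func(k, v []byte)) {
	for k, v := cur.First(); k != nil; k, v = cur.Next() {
		if !matches(v) {
			continue // safe: the post statement still advances the cursor
		}
		handle(k, v)
	}
}
```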
This was a blocker for anyone who hits the endpoint services internally. They
had to know that they also needed to know about the secret service, and then do
all the put/delete work alongside the operation. This makes that unified inside
the store tx.
One other thing this does is make obvious the dependencies that the
notification service has. In this case it is the secrets service that it
depends on.
Task runs are stored in and retrieved from the `taskRunsv1` bucket, but
when they are canceled they are incorrectly placed in the `tasksv1`
bucket. Once this has happened, further lookups of the task run fail
because it is located in the wrong bucket.
This addresses the problem by placing them back into the `taskRunsv1`
bucket. An additional test has been added to ensure we are able to
successfully read a canceled run.
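A rough sketch of the corrected write path, assuming bbolt-style transactions; the helper name cancelRun is hypothetical, while the bucket names match the ones above.

```go
package example

import (
	"errors"

	bolt "go.etcd.io/bbolt"
)

// The bucket runs belong in; the bug wrote canceled runs to "tasksv1" instead.
var taskRunsBucket = []byte("taskRunsv1")

// cancelRun writes the canceled run back into taskRunsv1 (hypothetical
// helper), so later reads of the run find it in the expected bucket.
func cancelRun(tx *bolt.Tx, key, canceledRun []byte) error {
	b := tx.Bucket(taskRunsBucket)
	if b == nil {
		return errors.New("taskRunsv1 bucket not found")
	}
	return b.Put(key, canceledRun)
}
```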
Note: tests are seriously borked here. We cannot reuse any existing tests,
as the setup is very particular and the http layer doesn't support everything.
That being said, there will be implicit testing in the
`launcher/pkger_test.go` file. This feels broken, and probably needs to be
readdressed before we GA a 2.0 influxdb.
This change ensures the following behavior:
* task LatestScheduled is always set when a task is updated and
transitions from a status of inactive to active
* task LatestScheduled is non-zero when created, so the initial
schedule time is a point after the task was created
In addition, the kv.Service introduces a clock.Clock that is used for
task creates and task updates only. This change permits testing
in a deterministic fashion.
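A minimal sketch of how the injected clock enables deterministic tests; the Service shape here is illustrative, while the github.com/benbjohnson/clock calls shown are the library's real API.

```go
package example

import (
	"time"

	"github.com/benbjohnson/clock"
)

// Service stamps schedule times from an injected clock rather than
// calling time.Now() directly (shape is illustrative).
type Service struct {
	clock clock.Clock
}

// initialScheduleTime is always non-zero: the first schedule point is
// "now" at creation time.
func (s *Service) initialScheduleTime() time.Time {
	return s.clock.Now()
}

// Production code would use clock.New(); tests pin time with a mock:
//
//	mock := clock.NewMock()
//	mock.Set(time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC))
//	s := &Service{clock: mock}
//	// s.initialScheduleTime() is exactly 2020-01-01T00:00:00Z.
```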
This removes the bucket name validation from the KV BucketService,
and moves it to the http implementation of the service.
The effect is that API user requests still get validated, but direct KV access does not.
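A minimal sketch of that layering, assuming a hypothetical validBucketName helper in the HTTP handler; the rule shown is illustrative, not influxdb's exact validation.

```go
package example

import (
	"fmt"
	"strings"
)

// validBucketName is a hypothetical HTTP-layer check; the KV service
// beneath it no longer enforces this, so internal callers bypass it.
func validBucketName(name string) error {
	if strings.TrimSpace(name) == "" {
		return fmt.Errorf("bucket name %q invalid: name is required", name)
	}
	return nil
}
```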
* chore: Remove several instances of WithLogger
* chore: unexport Logger fields
* chore: unexport some more Logger fields
* chore: go fmt
* chore: fix test
* chore: s/logger/log
* chore: fix test
* chore: revert http.Handler.Handler constructor initialization
* refactor: integrate review feedback, fix all test nop loggers
* refactor: capitalize all log messages
* refactor: rename two logger to log
* fix(http): Return an empty array instead of nil if organization has no secret keys
A KV store may optionally implement a key predicate
function to filter on keys or values, which may reduce memory and
CPU usage.
Expected FindByTaskID resource mapping improvements for a single user:
Before:
206966085 ns/op 37672164 B/op 445060 allocs/op
After:
1514118 ns/op 11184 B/op 131 allocs/op
A KV store may optionally implement a key predicate
function to filter on keys, which may reduce memory and CPU usage.
Expected user resource mapping improvements for a single user:
Before:
3813719 ns/op 4170142 B/op 33 allocs/op
After:
316869 ns/op 816 B/op 15 allocs/op
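A minimal sketch of the idea, with illustrative interface names: a store that understands the predicate can skip non-matching keys during the scan instead of materializing them for the caller to discard.

```go
package example

// KeyPredicateFunc reports whether a key should be visited (illustrative name).
type KeyPredicateFunc func(key []byte) bool

// ForwardCursor is a stand-in for the store's scanning interface; a nil
// key signals the end of the scan.
type ForwardCursor interface {
	Next() (key, value []byte)
}

// scanWithPredicate applies pred during iteration. A store-side
// implementation would filter before materializing values, which is
// where the memory and CPU savings in the benchmarks come from.
func scanWithPredicate(c ForwardCursor, pred KeyPredicateFunc, visit func(k, v []byte)) {
	for k, v := c.Next(); k != nil; k, v = c.Next() {
		if pred != nil && !pred(k) {
			continue
		}
		visit(k, v)
	}
}
```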
* Removed an unused function as cleanup.
* Users without true system buckets won't have fake buckets returned if they don't specify their org in the request, but this shouldn't break tasks.
* feat(auth): add createdAt and updatedAt to authorization
Co-Authored-By: Ariel <ariel.salem1989@gmail.com>
* feat(auth): passing createAuth tests
* test: ensured that createdAt and updatedAt are valid on authorizations
Previously we overwrote the task's existing latestCompleted to be used for latestCompleted as well as latestScheduled.
For obvious reasons this is confusing and misleading. I believe that by separating the two fields we can have a clear
separation of concerns.
* feat(kv): unique variable names
- adds system bucket for creating an index of unique variable names
- adds tests
- deleted unit tests for dead code
- removed a test runner for the variable service from http
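A rough sketch of such an index bucket, assuming bbolt-style transactions; the bucket name and key layout here are illustrative.

```go
package example

import (
	"bytes"
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// Hypothetical name for the index bucket mapping variable name -> variable ID.
var variableIndexBucket = []byte("variablesindexv1")

// ensureUniqueVariableName claims a name inside the same transaction
// that writes the variable; a prior claim by a different ID is a conflict.
func ensureUniqueVariableName(tx *bolt.Tx, name string, id []byte) error {
	b, err := tx.CreateBucketIfNotExists(variableIndexBucket)
	if err != nil {
		return err
	}
	if existing := b.Get([]byte(name)); existing != nil && !bytes.Equal(existing, id) {
		return fmt.Errorf("variable name %q is already in use", name)
	}
	return b.Put([]byte(name), id)
}
```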
Implementations of the `kv.Bucket#Cursor` API may use
the hints to adjust the access or read behavior of
the underlying key/value store.
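A minimal sketch of what a hint mechanism like this can look like, using a functional-options shape; the option and field names here are illustrative, not the exact kv package API.

```go
package example

// CursorHints carries optional access hints; a store may honor or
// ignore any of them (field names are illustrative).
type CursorHints struct {
	KeyPrefix *string // iterate only keys under this prefix, if supported
	PrefetchN int     // suggested read-ahead size, if supported
}

// CursorHint mutates the hint set, functional-options style.
type CursorHint func(*CursorHints)

// WithCursorHintPrefix hints that only keys with the given prefix are
// wanted, letting a capable store seek instead of scanning everything.
func WithCursorHintPrefix(prefix string) CursorHint {
	return func(h *CursorHints) { h.KeyPrefix = &prefix }
}
```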
The `findAllTasks` function was also fixed to ensure
that paging works as expected when using a name filter.
Tests were added to verify this behavior.
Redundant error checks were also removed.
* feat(http): label names to be unique
* feat(http): should work for updates as well
* chore: commented out former work. added a failing test
* feat: ensure label uniqueness & test cases
* feat: updating labels ensures uniqueness
* fix: fixes a failing unrelated test
* chore: update changelog
* feat(task): Allow tasks to run more isolated from other task systems
To allow the internal task system to be used for user-created tasks as well
as checks, notifications, and other future additions, we needed to take 2 actions:
1 - We need to use type as a first-class citizen, meaning that tasks have a type
and each system that creates tasks will set the task type through the api.
This is a change to the previous assumption that any user could set task types. This change
will allow other services to white-label the task service for their own purposes and not
have to worry about collisions between the types.
2 - We needed to allow other systems to add data specific to the problem they are trying to solve.
For this purpose we add a `metadata` field to the internal task system, which should allow other systems to
use the task service.
These changes will allow us in the future to let the current checks and notifications implementations
create a task with metadata instead of creating both a check object and a task object in the database.
With this new behavior, checks, notifications, and user tasks can all follow the same pattern:
field an api request in a system-specific http endpoint, use a small translation to the `TaskService` function call,
translate the results to what the api expects for this system, and return the results.
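A minimal sketch of the resulting task model; the field shapes are illustrative, not the exact influxdb schema.

```go
package example

// Task is an illustrative task record showing the two additions.
type Task struct {
	ID string

	// Type is set by the owning system through the API; the task
	// service uses it to keep each system's tasks separate.
	Type string

	// Metadata carries system-specific data, so a check or
	// notification does not need a second object in the database.
	Metadata map[string]interface{}
}

// A check system might create its task roughly like:
//
//	t := Task{
//	    Type:     "check",
//	    Metadata: map[string]interface{}{"checkID": "some-check-id"},
//	}
```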
* fix(task): undo additional check for ownerID because check is not ready
* Add TraceSpan parameter to GetNotificationEndpoints operation
* Fixed handler path for a list of all labels for a notification endpoint
* Fixed filtering of NotificationEndpoints by limit and offset
The http error schema has been changed to simplify the outward-facing
API. The `op` and `error` attributes have been dropped because they
confused people. The `error` attribute will likely be re-added in some
form in the future, but only as additional context, and it will not be
required or even suggested for the UI to use.
Errors are now output differently both when they are serialized to JSON
and when they are output as strings. The `op` is no longer used if it is
present. It will only appear as an optional attribute if at all. The
`message` attribute for an error is always output and it will be the
prefix for any nested error. When this is serialized to JSON, the
message is automatically flattened so a nested error such as:
    influxdb.Error{
        Msg: "something bad happened",
        Err: io.EOF,
    }
would be written to the message as:
something bad happened: EOF
This matches a developer's expectations much more closely, as most
programmers assume that wrapping an error will act as a prefix for the
inner error.
This is flattened when written out to HTTP in order to make this logic
immaterial to a frontend developer.
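A minimal sketch of that flattening, assuming an error type shaped like the example above; details of the real influxdb.Error may differ.

```go
package example

// Error is shaped like the example above; the real influxdb.Error has
// more fields, but the flattening idea is the same.
type Error struct {
	Code string // categorizes the error; returned in the response, not the message
	Msg  string
	Err  error // wrapped inner error, if any
}

// Error renders Msg as a prefix of the wrapped error, producing
// "something bad happened: EOF" for the example above.
func (e *Error) Error() string {
	switch {
	case e.Msg != "" && e.Err != nil:
		return e.Msg + ": " + e.Err.Error()
	case e.Msg != "":
		return e.Msg
	case e.Err != nil:
		return e.Err.Error()
	default:
		return "<empty error>"
	}
}
```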
The code is still present and plays an important role in categorizing
the error type. On the other hand, the code will not be output as part
of the message, as it commonly plays a redundant and confusing role when
humans read it. The human-readable message usually gives more context,
and a message with the code acting as a prefix is generally not
desired. But the code plays a very important role in helping to
identify categories of errors, so it is very important as part of the
return response.