* chore: update task tests to use the tenant service
After the introduction of the tenant system we need to switch the testing
frameworks to use it instead of the old kv system.
* chore: update onboarding to allow injected middleware
* feat(task): Add new permission lookup pattern for executor
We can now use the user service to populate task owners' permissions.
This should improve the task lookup time and decouple the task system
from the URM system. In the future we will have the ability to better isolate
tenant pieces from the rest of the service.
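Roughly, the lookup flows through something like the following (a minimal sketch; the service and method names are hypothetical, not the real user-service API):

```go
package executor

import "context"

// Permission is a stand-in for the platform permission type.
type Permission struct {
	Action   string
	Resource string
}

// PermissionFinder is a hypothetical slice of the user service used to
// resolve a task owner's permissions at execution time.
type PermissionFinder interface {
	FindPermissionsForUser(ctx context.Context, ownerID uint64) ([]Permission, error)
}

// permissionsForTask looks up the owner's current permissions instead of
// copying an authorization token into the task, decoupling tasks from URM.
func permissionsForTask(ctx context.Context, f PermissionFinder, ownerID uint64) ([]Permission, error) {
	return f.FindPermissionsForUser(ctx, ownerID)
}
```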
* feat: add feature flagging
* refactor: migrator and introduce Store.(Create|Delete)Bucket
feat: kvmigration internal utility to create / manage kv store migrations
fix: ensure migrations applied in all test cases
* chore: update kv and migration documentation
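A minimal sketch of a migration built on the new Store.CreateBucket / Store.DeleteBucket calls (the Migration shape and names are illustrative, not the exact kvmigration API):

```go
package kvmigration

import "context"

// Store is the subset of the kv store a migration needs.
type Store interface {
	CreateBucket(ctx context.Context, bucket []byte) error
	DeleteBucket(ctx context.Context, bucket []byte) error
}

// Migration pairs an Up step with a Down step so the migrator can apply it
// once and roll it back if needed.
type Migration struct {
	Name string
	Up   func(ctx context.Context, s Store) error
	Down func(ctx context.Context, s Store) error
}

var sessionsBucket = []byte("sessionsv1")

// CreateSessionsBucket is an example migration that only adds (or removes)
// a bucket.
var CreateSessionsBucket = Migration{
	Name: "create sessions bucket",
	Up:   func(ctx context.Context, s Store) error { return s.CreateBucket(ctx, sessionsBucket) },
	Down: func(ctx context.Context, s Store) error { return s.DeleteBucket(ctx, sessionsBucket) },
}
```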
* fix: clean up old runs when we fail to enqueue them
We need to make sure the runs are removed if we fail to enqueue the run to be worked.
For this we will now add an error message stating why we failed and move the run out of our kv store.
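The cleanup path looks roughly like this (a sketch with hypothetical collaborator interfaces, not the actual executor code):

```go
package executor

import (
	"context"
	"fmt"
)

// Hypothetical collaborators, sketched just enough to show the flow.
type queue interface{ Enqueue(runID uint64) error }

type runStore interface {
	AddRunLog(ctx context.Context, runID uint64, msg string) error
	RemoveRun(ctx context.Context, runID uint64) error
}

// startRun enqueues a run; on failure it records why and removes the run so
// it does not linger in the kv store.
func startRun(ctx context.Context, q queue, rs runStore, runID uint64) error {
	if err := q.Enqueue(runID); err != nil {
		_ = rs.AddRunLog(ctx, runID, fmt.Sprintf("failed to enqueue run: %v", err))
		_ = rs.RemoveRun(ctx, runID)
		return err
	}
	return nil
}
```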
* fix: update fail message
This new session service has the ability to work independently of other systems;
it relies on having its own store type which should allow us to be more flexible
than using the built-in kv system.
I have included an in-memory session store.
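A minimal sketch of the store abstraction plus the in-memory implementation (names are illustrative):

```go
package session

import (
	"context"
	"errors"
	"sync"
)

type Session struct {
	Key    string
	UserID uint64
}

// Store is the session service's own storage abstraction, so it is not tied
// to the built-in kv system.
type Store interface {
	Create(ctx context.Context, s Session) error
	Find(ctx context.Context, key string) (Session, error)
	Delete(ctx context.Context, key string) error
}

// InmemStore keeps sessions in a map guarded by a mutex.
type InmemStore struct {
	mu       sync.RWMutex
	sessions map[string]Session
}

func NewInmemStore() *InmemStore {
	return &InmemStore{sessions: map[string]Session{}}
}

func (s *InmemStore) Create(_ context.Context, sess Session) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.sessions[sess.Key] = sess
	return nil
}

func (s *InmemStore) Find(_ context.Context, key string) (Session, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	sess, ok := s.sessions[key]
	if !ok {
		return Session{}, errors.New("session not found")
	}
	return sess, nil
}

func (s *InmemStore) Delete(_ context.Context, key string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.sessions, key)
	return nil
}
```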
This moves a few types and constants to the global package so they can be
used without importing the `task/backend` package. These constants are
referenced in non-task-specific code.
This is needed to break a dependency chain where the task backend will
call into the flux runtime to perform parsing or evaluation of a script
and to prevent the http package from inheriting that dependency.
The tasks subsystem will now use the flux language service to parse and
evaluate flux instead of directly interacting with the parser or
runtime. This helps break the dependency on the libflux parser for the
base influxdb package.
This includes the task notification packages which were changed at the
same time.
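The indirection is roughly an interface like the following, so only its implementation imports the flux parser and runtime (a sketch; the real language-service signatures may differ):

```go
package task

import (
	"context"

	"github.com/influxdata/flux/ast"
)

// LanguageService is the single seam through which tasks touch flux; the
// http and task packages depend on this interface rather than importing the
// parser or runtime packages directly.
type LanguageService interface {
	// Parse turns a flux script into an AST without evaluating it.
	Parse(source string) (*ast.Package, error)
	// EvalAST evaluates a previously parsed script.
	EvalAST(ctx context.Context, pkg *ast.Package) error
}
```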
This will reduce memory allocations in the scheduler by removing
unnecessary delete/replace actions on the btree that is used as an
internal priority queue.
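The idea, sketched with github.com/google/btree (the item and function names are made up for illustration): only delete and re-insert an entry when its ordering key actually changes; otherwise mutate it in place.

```go
package scheduler

import (
	"time"

	"github.com/google/btree"
)

// schedItem is a priority-queue entry ordered by next-run time.
type schedItem struct {
	when time.Time // ordering key
	id   uint64    // tie-breaker so entries are unique
	meta string    // payload that can change without affecting order
}

func (a *schedItem) Less(b btree.Item) bool {
	o := b.(*schedItem)
	if !a.when.Equal(o.when) {
		return a.when.Before(o.when)
	}
	return a.id < o.id
}

// updateMeta mutates the existing entry in place; since the ordering key is
// untouched there is no Delete/ReplaceOrInsert, no rebalancing, no allocation.
func updateMeta(tr *btree.BTree, key *schedItem, meta string) bool {
	if got := tr.Get(key); got != nil {
		got.(*schedItem).meta = meta
		return true
	}
	return false
}

// reschedule changes the ordering key, so the item must be removed and
// re-inserted to keep the tree sorted.
func reschedule(tr *btree.BTree, key *schedItem, when time.Time) {
	if got := tr.Delete(key); got != nil {
		it := got.(*schedItem)
		it.when = when
		tr.ReplaceOrInsert(it)
	}
}
```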
This change ensures the following behavior:
* task LatestScheduled is always set when a task is updated and
transitions from a status of inactive to active
* task LatestScheduled is non-zero when created, to set the initial
schedule time at some point after it was created
In addition, the kv.Service introduces a clock.Clock that is used for
task creates and task updates only. This change permits testing
in a deterministic fashion.
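A minimal sketch of the clock injection (the field and constructor names here are illustrative; the github.com/benbjohnson/clock interface supplies the real/mock clocks):

```go
package kv

import (
	"time"

	"github.com/benbjohnson/clock"
)

// Service stamps LatestScheduled from an injected clock rather than
// time.Now, so tests can control "now".
type Service struct {
	clock clock.Clock
}

func NewService() *Service {
	return &Service{clock: clock.New()} // real wall clock by default
}

// initialLatestScheduled is what task create/update would use for the
// non-zero LatestScheduled value.
func (s *Service) initialLatestScheduled() time.Time {
	return s.clock.Now()
}

// In tests, swap in a mock and drive time deterministically:
//
//	mock := clock.NewMock()
//	mock.Set(time.Date(2019, 10, 1, 0, 0, 0, 0, time.UTC))
//	svc := &Service{clock: mock}
//	// svc.initialLatestScheduled() == 2019-10-01T00:00:00Z
```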
* chore: Remove several instances of WithLogger
* chore: unexport Logger fields
* chore: unexport some more Logger fields
* chore: go fmt
chore: fix test
chore: s/logger/log
chore: fix test
chore: revert http.Handler.Handler constructor initialization
* refactor: integrate review feedback, fix all test nop loggers
* refactor: capitalize all log messages
* refactor: rename two logger to log
* feat(tracing): don't trace spans with full URL path names in ExtractFromHTTPRequest
* chore(multiple): replace all occurrences of julienschmidt/httprouter with influxdata/httprouter
* Removed an unused function as cleanup.
* Users without true system buckets won't have fake buckets returned if they don't specify their org in the request, but this shouldn't break tasks.
Previously we overwrote the task's existing latestCompleted to be used for latestCompleted as well as latestScheduled.
For obvious reasons this is confusing and misleading. I believe by separating the two fields we can have a clear separation
of concerns.
* feat(task): Allow tasks to run more isolated from other task systems
To allow the task internal system to be used for user-created tasks as well
as checks, notifications and other future additions we needed to take 2 actions:
1 - We need to use type as a first class citizen, meaning that tasks have a type
and each system that will be creating tasks will set the task type through the api.
This is a change to the previous assumption that any user could set task types. This change
will allow us to have other services white-label the task service for their own purposes and not
have to worry about collisions between the types.
2 - We needed to allow other systems to add data specific to the problem they are trying to solve.
For this purpose we are adding a `metadata` field to the internal task system, which should allow other systems to
use the task service.
These changes will allow us in the future to allow for the current checks and notifications implementations
to create a task with metadata instead of creating a check object and a task object in the database.
By allowing this new behavior checks, notifications, and user tasks can all follow the same pattern:
field an api request in a system-specific http endpoint, use a small translation to the `TaskService` function call,
translate the results to what the api expects for this system, and return results.
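In rough terms the task model gains two pieces, something like this (field names are illustrative, not the exact influxdb types):

```go
package taskmodel

// Task is the stored task; Type and Metadata are the new pieces.
type Task struct {
	ID       uint64
	OwnerID  uint64
	Flux     string
	Type     string                 // set by the owning system, e.g. "system", "check", "notification"
	Metadata map[string]interface{} // system-specific data, opaque to the task service
}

// TaskCreate is what a white-labelling service sends through the TaskService.
type TaskCreate struct {
	OwnerID  uint64
	Flux     string
	Type     string
	Metadata map[string]interface{}
}
```

A checks endpoint, for example, would translate its request into a TaskCreate with Type "check" and a check ID entry in Metadata, then translate the resulting Task back into the response its api expects.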
* fix(task): undo additional check for ownerID because check is not ready
* feat(task): add limit function for task concurrency
The new task executor handles limits differently than the old executor:
instead of front-loading limits by creating a runner for every task that might run,
the new executor has a large worker pool and queue. This allows us to have unlimited
concurrency per task and helps us avoid a backlog of task executions based on an
arbitrary execution limit. This adds the ability to set an optional task execution limit
so a user can still have the advantages of limiting concurrency.
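One way to express the optional limit is a small limit function the executor consults before handing a queued run to a worker (a sketch; the real signature may differ):

```go
package executor

import "fmt"

// LimitFunc is consulted before a queued run is handed to a worker.
type LimitFunc func(taskID uint64, runningRuns int) error

// ConcurrencyLimit rejects execution once a task already has max runs in
// flight; max <= 0 means unlimited (the new default).
func ConcurrencyLimit(max int) LimitFunc {
	return func(taskID uint64, runningRuns int) error {
		if max > 0 && runningRuns >= max {
			return fmt.Errorf("task %d: concurrency limit of %d reached", taskID, max)
		}
		return nil
	}
}
```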
We needed the coordinator to be able to execute manual runs and resume runs.
These two functions have been added, but we also needed to allow for the executor to be
mocked out. To do that we needed to return a Promise interface instead of an actual
struct. Both these changes are to facilitate coordinator work and testing.
I chose to add an execute function that allows the task executor to match expectations from
the scheduler, but I left in the existing executor method that returns promises. This is
because I like to have the accountability and visibility into what's happening
with each execution even though the promise isn't required for the scheduler. This function signature
will be used by the coordinator and potentially others that want to ensure an 'execution' is completed.
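The two entry points and the promise they hand back look roughly like this (method names are illustrative rather than the exact backend API):

```go
package executor

import "context"

// Promise lets the coordinator, tests, or other callers observe a single
// execution after it has been handed to the worker pool.
type Promise interface {
	ID() uint64
	Cancel(ctx context.Context)
	Done() <-chan struct{}
	Error() error // run/query failures surface here once the run finishes
}

type Executor interface {
	// Execute matches what the scheduler expects: fire and forget.
	Execute(ctx context.Context, runID uint64) error
	// PromisedExecute returns a Promise so callers can track the run; errors
	// returned directly are limited to kv store lookups.
	PromisedExecute(ctx context.Context, runID uint64) (Promise, error)
}
```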
Implementations of the backend.Executor produce errors limited to
querying the KV store. The remainder of the errors will be processed
in the implementation of a `RunPromise`.
Fixes #15161
The current behavior is that the update is pushed into the scheduler,
and the scheduler cherry-picks what it needs. This leaves the task itself out,
meaning any logging the scheduler did was not going to have the new task information in it.
When a task is told to execute, it can be enqueued waiting for a worker.
This statistic will be superior to the existing delta based on scheduled-for time;
the current system can be affected by a user having slow queries or a long "delay" on the task.
This new way of measuring the same thing should allow us to accurately measure when it is the task system's fault.
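One way to capture this is a histogram of the time a run spends queued before a worker picks it up (a sketch using prometheus client_golang; the metric name is hypothetical):

```go
package executor

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// queueDelay measures enqueue-to-start time, which is under the task
// system's control, unlike the scheduled-for delta.
var queueDelay = prometheus.NewHistogram(prometheus.HistogramOpts{
	Namespace: "task",
	Subsystem: "executor",
	Name:      "run_queue_delay_seconds",
	Help:      "Time between a run being enqueued and a worker starting it.",
})

func init() { prometheus.MustRegister(queueDelay) }

// markStarted is called by a worker when it dequeues a run that was enqueued
// at enqueuedAt.
func markStarted(enqueuedAt time.Time) {
	queueDelay.Observe(time.Since(enqueuedAt).Seconds())
}
```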