* feat: add `success` column to system.queries
* refactor: Remove lifetime from QueryCompletedToken and thread through flight
* test: update test to make incomplete query clearer
* refactor: use better pattern to set complete
* fix: logical merge conflict
There is no reason a query pod should support the storage API. Note that
some features like the observer mode or `show databases;` still need the
management API. We'll probably need to fix that for NG at some point.
* feat: detach dedicated exec jobs
* feat: async `DedicatedExecutor::join`
Now `DedicatedExecutor` follows the system we use for other server components (a usage sketch follows the list):
- `shutdown`: a quick sync call that signals the shutdown but doesn't
drop
- `join`: async awaits until the executor has finished shutdown
- `drop`: warn but still try to shut down
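A minimal usage sketch of these semantics, assuming a caller that owns the executor (only the call order is taken from the list above; everything else is illustrative):

```rust
// Sketch only: shows the intended shutdown sequence, not the exact API surface.
async fn stop_executor(exec: DedicatedExecutor) {
    // `shutdown` is a cheap synchronous call that only signals the executor
    // to stop; it does not wait for running jobs to finish.
    exec.shutdown();

    // `join` is async and resolves once the executor has fully shut down.
    exec.join().await;

    // Dropping the executor without the calls above would log a warning and
    // still attempt a best-effort shutdown.
}
```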
* test: improve `detach_receiver` test
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
This allows us to implement flight for the NG querier by just
implementing a few traits and reuse all the existing glue code and
optimizations (like dictionary handling).
* feat: skeleton of querier CLI
* chore: wrap metrics in `Option`/`Arc` in querier to satisfy new API
* chore: derive debug in querier handler
* chore: add join handles and their shutdown to nascent querier server
* chore: querier server http unimpl -> 404
* fix: join/shutdown fix in querier; removed unused delegates
Changes the configuration of the router request pipeline to move schema
validation before partitioning.
This reduces the concurrency of calls into the schema validator when a
single write is split into one or more partitions, reducing contention
and cache thrashing. It also ensures we don't bother partitioning the
writes if the request will fail.
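A toy sketch of the reordering described above; every type and function here is hypothetical and only illustrates why validation now runs once, before the write fans out:

```rust
// Simplified write payload: (timestamp, line) pairs.
struct Write {
    rows: Vec<(i64, String)>,
}

fn validate(write: &Write) -> Result<(), String> {
    // stand-in for schema validation against the catalog
    if write.rows.is_empty() {
        return Err("empty write".to_string());
    }
    Ok(())
}

fn partition(write: Write) -> Vec<(String, Write)> {
    // real partitioning uses a configurable template; a fixed per-day key is
    // assumed here purely for illustration
    vec![("2022-03-01".to_string(), write)]
}

fn handle(write: Write) -> Result<Vec<(String, Write)>, String> {
    validate(&write)?;   // one validation pass per request, before fan-out
    Ok(partition(write)) // only writes that passed validation are partitioned
}
```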
Wraps the postgres implementation of the catalog with a MetricDecorator.
This is slightly intrusive: the metrics registry is pushed into the
PostgresCatalog type so that the impls returned by
Catalog::repositories() and Catalog::start_transaction() can be
decorated (rather than using a pure decorator), in order to use static
dispatch and let the compiler optimise away as much overhead as
possible.
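A minimal sketch of the static-dispatch decorator idea; the trait and type names below are simplified assumptions, not the real catalog API:

```rust
use std::sync::Arc;
use std::time::Instant;

// Hypothetical stand-ins for the real metric registry and repository traits.
struct MetricRegistry;
impl MetricRegistry {
    fn record(&self, name: &str, elapsed: std::time::Duration) {
        println!("{name}: {elapsed:?}");
    }
}

trait Repositories {
    fn create_namespace(&mut self, name: &str);
}

struct PostgresRepositories;
impl Repositories for PostgresRepositories {
    fn create_namespace(&mut self, _name: &str) { /* run SQL against Postgres */ }
}

// Generic over a concrete `T`, so every call into `inner` is statically
// dispatched and the instrumentation can be inlined by the compiler.
struct MetricDecorator<T> {
    inner: T,
    metrics: Arc<MetricRegistry>,
}

impl<T: Repositories> Repositories for MetricDecorator<T> {
    fn create_namespace(&mut self, name: &str) {
        let started = Instant::now();
        self.inner.create_namespace(name);
        self.metrics.record("create_namespace", started.elapsed());
    }
}
```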
* feat: trigger persistence if over soft limit and no evictable chunks
* chore: fmt
* fix: avoid test_full_lifecycle exceeding soft limit
* fix: don't expect chunk to be unloaded
* feat: only trigger if no outstanding persist
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
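Taken together, the bullets above describe a single combined trigger condition; a tiny sketch of that predicate, with all names assumed:

```rust
// Hypothetical helper: persist only when memory is over the soft limit,
// nothing can be evicted instead, and no persist job is already running.
fn should_trigger_persist(
    memory_bytes: usize,
    soft_limit_bytes: usize,
    evictable_chunks: usize,
    outstanding_persist_jobs: usize,
) -> bool {
    memory_bytes > soft_limit_bytes
        && evictable_chunks == 0
        && outstanding_persist_jobs == 0
}
```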
The router is composed of several DML handlers called in sequence in
order to construct the full request handling pipeline. Prior to this
commit, each handler nested the next handler it calls internally,
producing a nested call chain that resulted in metrics (added in #3764)
recording cumulative latency like this:
             ┌ ─
             │          ┌───────────────┐
             │          │  NS Creation  │
             │          └───────────────┘
             │     ┌ ─          │
             │     │            │    ┌───────────────┐
             │     │            └───▶│  Partitioner  │
             │     │                 └───────────────┘
             │     │                         │
 Cumulative  │     │                         │    ┌───────────────┐
  Timings   1.5s   1s                        └───▶│     etc...    │
             │     │                              └───────────────┘
             │     │
             │     │                 ┌───────────────┐
             │     │                 │  Partitioner  │
             │     │                 └───────────────┘
             │     └ ─          │
             │          ┌───────────────┐
             │          │  NS Creation  │
             │          └───────────────┘
             └ ─
This meant it was hard to determine the latency of a single handler
without knowing (and subtracting the latency of) all the child handlers
it calls.
This commit replaces the intrusive nested handler call chain with an
external Chain combinator type to compose together individual handlers,
resulting in correct per-handler timings and simpler code/tests (a rough
sketch of the combinator follows the diagram):
┌───────────────┐
│  NS Creation  │
└───────────────┘
        │
  .5s   │    ┌───────────────┐
        └───▶│  Partitioner  │
             └───────────────┘
                     │
               1s    │    ┌───────────────┐
                     └───▶│     etc...    │
                          └───────────────┘
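The trait shape below is an assumption for illustration, not the actual handler definition, but it shows how an external combinator composes two handlers without either one wrapping the other:

```rust
// Each handler consumes an input and produces an output for the next stage.
trait Handler {
    type Input;
    type Output;
    fn write(&self, input: Self::Input) -> Self::Output;
}

// Chains two handlers without either one knowing about the other, so a
// timing decorator wrapped around `first` measures only `first`.
struct Chain<A, B> {
    first: A,
    second: B,
}

impl<A, B> Handler for Chain<A, B>
where
    A: Handler,
    B: Handler<Input = A::Output>,
{
    type Input = A::Input;
    type Output = B::Output;

    fn write(&self, input: Self::Input) -> Self::Output {
        let intermediate = self.first.write(input);
        self.second.write(intermediate)
    }
}
```

Wrapping each individual handler in a timing decorator before chaining is what yields the per-handler latencies shown in the second diagram.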
Wraps the sharded write buffer, schema validator, partitioner and
overall request handler in instrumentation to record call latencies and
export them via the /metrics endpoint.
This adds persistence into the ingester with a lifecycle manager. The persist operation must still be updated to keep track of the min_unpersisted_sequence_number for each sequencer.
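A hedged sketch of the bookkeeping that TODO implies, with all names assumed:

```rust
use std::collections::BTreeMap;

// Tracks, per sequencer, the smallest sequence number not yet persisted.
#[derive(Default)]
struct MinUnpersisted {
    by_sequencer: BTreeMap<u32, u64>,
}

impl MinUnpersisted {
    // A write arrived from the write buffer; start tracking it if it is the
    // first unpersisted write for this sequencer.
    fn record_write(&mut self, sequencer_id: u32, sequence_number: u64) {
        self.by_sequencer.entry(sequencer_id).or_insert(sequence_number);
    }

    // A persist operation completed covering everything up to and including
    // `persisted_up_to`, so the next unpersisted number is one past it.
    fn record_persist(&mut self, sequencer_id: u32, persisted_up_to: u64) {
        self.by_sequencer.insert(sequencer_id, persisted_up_to + 1);
    }
}
```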
Allow a DML handler to specify the write input type on which it
operates.
This allows us to construct a write handler pipeline that transforms the
request as it passes through the various handlers. We'll use this to
implement a handler that annotates a normal set of table writes with the
partition key, modifying downstream handlers to expect this annotated
input.
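For illustration, a sketch of a partitioner whose output type differs from its input type, so downstream handlers must accept partition-annotated writes; every name here is an assumption:

```rust
use std::collections::HashMap;

// Simplified write payload: table name -> rows of line protocol.
type TableWrites = HashMap<String, Vec<String>>;

// A write tagged with the partition it belongs to.
struct Partitioned<T> {
    partition_key: String,
    payload: T,
}

trait WriteHandler {
    type Input;
    type Output;
    fn write(&self, input: Self::Input) -> Self::Output;
}

struct Partitioner;

impl WriteHandler for Partitioner {
    // Input and output types differ: the handler transforms the request
    // as it flows through the pipeline.
    type Input = TableWrites;
    type Output = Vec<Partitioned<TableWrites>>;

    fn write(&self, input: Self::Input) -> Self::Output {
        // Real partitioning splits rows by a partition template; a single
        // fixed key is used here purely to keep the example short.
        vec![Partitioned {
            partition_key: "2022-03-01".to_string(),
            payload: input,
        }]
    }
}
```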