Previously IOx mapped a single database to a single Kafka topic - this
is no longer the case, so referring to the Kafka topic name as the
"database name" is confusing.
Adds a decorator over the underlying Kafka client to capture the latency
distribution of the low-level Kafka writes, independent of the
aggregation/DML batching framework that sits "above" this client.
The latency measurements include the serialisation overhead, protocol
overhead, and actual network I/O.
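A minimal sketch of that decorator pattern, assuming a synchronous client trait; `LowLevelWrite`, `InstrumentedWriter`, and the `Vec`-backed histogram are illustrative stand-ins, not the real (async) client API:

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Hypothetical stand-in for the low-level Kafka client interface.
trait LowLevelWrite {
    fn write(&self, payload: &[u8]) -> Result<(), String>;
}

/// Decorator that times each write end-to-end, so the recorded latency
/// covers serialisation, protocol overhead, and the network round trip
/// performed by the inner client.
struct InstrumentedWriter<W> {
    inner: W,
    /// Stand-in for a real metrics histogram.
    latencies: Mutex<Vec<Duration>>,
}

impl<W: LowLevelWrite> LowLevelWrite for InstrumentedWriter<W> {
    fn write(&self, payload: &[u8]) -> Result<(), String> {
        let started = Instant::now();
        let res = self.inner.write(payload);
        self.latencies.lock().unwrap().push(started.elapsed());
        res
    }
}
```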
This allows the `MockProvider` to be used in tests with consuming code
that uses generics/static dispatch instead of a `dyn TimeProvider`, while
still retaining a reference to the `MockProvider` instance.
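A sketch of the pattern this enables, using simplified stand-ins for the real `TimeProvider` trait and `MockProvider` (the blanket `Arc` impl shown here is one way to achieve it; the actual change may differ):

```rust
use std::sync::atomic::{AtomicI64, Ordering};
use std::sync::Arc;

/// Simplified stand-in for IOx's `iox_time::TimeProvider` trait.
trait TimeProvider: Send + Sync {
    fn now_nanos(&self) -> i64;
}

/// Simplified mock clock that the test can advance.
#[derive(Default)]
struct MockProvider {
    now: AtomicI64,
}

impl MockProvider {
    fn set(&self, nanos: i64) {
        self.now.store(nanos, Ordering::SeqCst);
    }
}

impl TimeProvider for MockProvider {
    fn now_nanos(&self) -> i64 {
        self.now.load(Ordering::SeqCst)
    }
}

/// Blanket impl so an `Arc<MockProvider>` satisfies the generic bound:
/// the test keeps one `Arc` to drive the clock and hands a clone to the
/// consumer, with no `dyn TimeProvider` involved.
impl<T: TimeProvider> TimeProvider for Arc<T> {
    fn now_nanos(&self) -> i64 {
        self.as_ref().now_nanos()
    }
}

/// Consuming code using static dispatch.
fn consumer<P: TimeProvider>(provider: P) -> i64 {
    provider.now_nanos()
}

fn main() {
    let mock = Arc::new(MockProvider::default());
    mock.set(42);
    assert_eq!(consumer(Arc::clone(&mock)), 42);
}
```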
We already prune all chunks in the query-access layer. There is no need
to prune them a second time (chronologically, that redundant pass
actually runs first) in `QuerierTable::chunks`. The time savings from
feeding fewer chunks into the state reconciliation should be negligible.
On the plus side, we get a more streamlined data flow and actually
correct chunk-pruning metrics. Also see #5336.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* fix: make it clear rpc_predicates are only ever specialized when a schema is known
* fix: handle case of no schema
* fix: Update predicate/src/rpc_predicate.rs
- emit a warning if we cannot even attempt to prune chunks due to an
  error. This always indicates either a missing feature or a bug (it
  does not impact correctness, _only_ performance). Also see
  https://github.com/influxdata/conductor/issues/1107
- change metrics to clearly differentiate between "could not prune" and
  "not pruned"
- add a new "not pruned" observer hook (the "pruned" hook existed, but
  this one was missing for some reason); see the sketch below
Cold partition compaction will (in the next commit) upgrade a level 0
file that has no overlaps rather than running compaction on it.
Cold partition filtering gathers all level 0 files in the (already
deemed cold) partition together with all overlapping level 1 files, and
does not limit the set of files being compacted by their number or size.
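A minimal sketch of that filtering rule under a simplified file-metadata model (`ParquetFile`, `overlaps`, and `filter_cold_partition` are hypothetical names, and real overlap checks involve more metadata):

```rust
/// Hypothetical, simplified file metadata.
#[derive(Clone)]
struct ParquetFile {
    level: u8, // 0 = freshly ingested, 1 = compacted
    min_time: i64,
    max_time: i64,
}

/// Two files overlap if their time ranges intersect.
fn overlaps(a: &ParquetFile, b: &ParquetFile) -> bool {
    a.min_time <= b.max_time && b.min_time <= a.max_time
}

/// Gather every level-0 file in a cold partition together with all
/// overlapping level-1 files; unlike hot compaction, the result is not
/// capped by file count or total size.
fn filter_cold_partition(files: &[ParquetFile]) -> Vec<ParquetFile> {
    let level_0: Vec<ParquetFile> =
        files.iter().filter(|f| f.level == 0).cloned().collect();
    let mut selected = level_0.clone();
    for f in files.iter().filter(|f| f.level == 1) {
        if level_0.iter().any(|l0| overlaps(l0, f)) {
            selected.push(f.clone());
        }
    }
    selected
}
```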
* chore: update deps to get new chrono
* chore: Run cargo hakari tasks
* chore: migrate away from deprecated API
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
* test: add tests for regex_match_on_field
* feat: more general `_field` predicate handling
* fix: remove old comment
* fix: update tests
* fix: improve test a little more
* fix: fmt
* fix: Update predicate/src/rpc_predicate/field_rewrite.rs
Co-authored-by: Raphael Taylor-Davies <1781103+tustvold@users.noreply.github.com>
* fix: Handle predicates that cannot be evaluated
Co-authored-by: Raphael Taylor-Davies <1781103+tustvold@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: make could-not-prune reason a static string
* refactor: introduce `QuerierTableArgs`
* feat: chunk pruning metrics
Closes #4974.
* refactor: address review comments
* refactor: use static typing for not-pruned reason
* refactor: pass chunk to not-pruned observer and use it for some metrics
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
The Kafka write buffer implementation (and only the Kafka
implementation) merges successive DML writes for the same namespace &
partition within a window of time.
This commit records the number of DML writes that have been merged
together to form a single batched op before it is dispatched to Kafka.
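A sketch of where that measurement hooks into the aggregation path; `WriteAggregator` and `batch_size_hist` are illustrative stand-ins for the real batching code and metrics histogram:

```rust
/// Successive DML writes for one namespace/partition are merged within
/// a time window; the merge count is recorded when the batched op is
/// flushed to Kafka.
struct WriteAggregator {
    merged_ops: usize,
    /// Stand-in for a real metrics histogram of batch sizes.
    batch_size_hist: Vec<usize>,
}

impl WriteAggregator {
    fn push(&mut self, _write: &[u8]) {
        // Merge the incoming DML write into the pending batch.
        self.merged_ops += 1;
    }

    fn flush(&mut self) {
        // Record how many DML writes were folded into this single op...
        self.batch_size_hist.push(self.merged_ops);
        self.merged_ops = 0;
        // ...then dispatch the batched op to Kafka (omitted here).
    }
}
```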