The interplay between mutable_linger_seconds, late_arrive_window_seconds and persist_age_threshold_seconds can be tricky to reason about. I realized that the lifecycle rules can be simplified by removing mutable_linger_seconds and using late_arrive_window_seconds for the same purpose, since the two settings mean essentially the same thing: give data roughly this amount of time to arrive before the system persists it, which gives the system a better opportunity to persist non-overlapping data.
When a partition goes cold for writes and we have waited past this window, we should compact and persist that partition. This removes one unnecessary knob from the lifecycle configuration and eliminates the potential for conflicting configuration options.
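A minimal sketch of the simplified rule, with hypothetical field and function names (the real lifecycle policy differs in detail): a partition becomes a persist candidate once no writes have arrived for at least the late-arrival window.

```rust
use std::time::{Duration, Instant};

/// Hypothetical, simplified view of the lifecycle rules after removing
/// `mutable_linger_seconds`: only the late-arrival window drives persistence.
struct LifecycleRules {
    late_arrive_window_seconds: u64,
}

/// Hypothetical partition state: when the last write was accepted.
struct PartitionState {
    last_write: Instant,
}

/// Returns true when the partition has gone cold for writes, i.e. nothing has
/// arrived within the late-arrival window, so it can be compacted and persisted.
fn should_persist(rules: &LifecycleRules, partition: &PartitionState, now: Instant) -> bool {
    now.duration_since(partition.last_write)
        >= Duration::from_secs(rules.late_arrive_window_seconds)
}

fn main() {
    let rules = LifecycleRules { late_arrive_window_seconds: 300 };
    let partition = PartitionState { last_write: Instant::now() };
    // A freshly written partition is not persisted yet.
    assert!(!should_persist(&rules, &partition, Instant::now()));
}
```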
* feat: add split and persist operation
* docs: Improve doc strings
* refactor: use for loop rather than map
* refactor: Make it clear that the lifecycle policy picks the split timestamp
* fix: race condition
* docs: improve comments
* fix: logical merge conflict
* fix: clippy
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
When reading from the Kafka write buffer, subscribe to all partitions in
a topic and start from the smallest offset available, instead of
assuming there will only be 1 partition per topic.
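A sketch of what such a consumer setup could look like with the rdkafka crate (the group id and function name are illustrative, not the actual write buffer code): discover every partition of the topic via metadata and assign each one starting at the earliest available offset.

```rust
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::topic_partition_list::TopicPartitionList;
use rdkafka::Offset;
use std::time::Duration;

fn consume_all_partitions(
    brokers: &str,
    topic: &str,
) -> Result<BaseConsumer, rdkafka::error::KafkaError> {
    let consumer: BaseConsumer = ClientConfig::new()
        .set("bootstrap.servers", brokers)
        .set("group.id", "iox-write-buffer") // illustrative group id
        .create()?;

    // Discover every partition of the topic instead of assuming a single one.
    let metadata = consumer.fetch_metadata(Some(topic), Duration::from_secs(5))?;
    let mut assignment = TopicPartitionList::new();
    for topic_metadata in metadata.topics() {
        for partition in topic_metadata.partitions() {
            // Start from the smallest offset available for each partition.
            assignment.add_partition_offset(topic, partition.id(), Offset::Beginning)?;
        }
    }
    consumer.assign(&assignment)?;
    Ok(consumer)
}
```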
A database on one IOx server can do exactly one of the following:
- Not interact with Kafka at all
- Send writes to Kafka
- Read writes from Kafka
Notably, a database on a particular server will never write *and* read from Kafka at the same time.
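One way to make that exclusivity hold by construction is to model the Kafka connection as an enum, so a database holds at most one role. This is only a sketch with hypothetical names, not the actual IOx type:

```rust
/// Hypothetical sketch: a database's relationship to Kafka is exactly one of
/// these variants, so it can never both write and read at the same time.
enum WriteBufferConnection {
    /// The database does not interact with Kafka at all.
    None,
    /// The database sends its writes to this Kafka topic.
    Writing { brokers: String, topic: String },
    /// The database reads writes from this Kafka topic.
    Reading { brokers: String, topic: String },
}

fn describe(conn: &WriteBufferConnection) -> &'static str {
    match conn {
        WriteBufferConnection::None => "no Kafka interaction",
        WriteBufferConnection::Writing { .. } => "producing writes to Kafka",
        WriteBufferConnection::Reading { .. } => "consuming writes from Kafka",
    }
}

fn main() {
    let conn = WriteBufferConnection::Reading {
        brokers: "kafka:9092".to_string(),
        topic: "iox-shared".to_string(),
    };
    println!("{}", describe(&conn));
}
```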
Instead of waiting for the server ID to be set and then marking the server as errored, we now check the object store directly on startup. This is important so that we fail fast when Istio isn't up and running yet.
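Roughly, the fail-fast check amounts to issuing a cheap request against the object store during startup and propagating any error immediately. The sketch below uses a hypothetical trait rather than the real IOx object store interface:

```rust
/// Hypothetical minimal object store interface for this sketch.
trait ObjectStore {
    /// A cheap operation that fails if the store is unreachable
    /// (e.g. because the Istio sidecar is not ready yet).
    fn list_root(&self) -> Result<Vec<String>, String>;
}

/// Check the object store directly on startup and fail fast instead of
/// deferring the error until a server ID is set.
fn check_object_store(store: &dyn ObjectStore) -> Result<(), String> {
    store
        .list_root()
        .map(|_| ())
        .map_err(|e| format!("object store unreachable at startup: {e}"))
}
```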
* feat: unload chunks from memory rather than dropping them
* docs: Update server/src/db/lifecycle.rs
Co-authored-by: Marco Neumann <marco@crepererum.net>
* docs: Update comment wording
Co-authored-by: Marco Neumann <marco@crepererum.net>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Before this change we loaded databases eagerly when a server ID was
passed on startup, BEFORE starting up the gRPC server. Since loading
(especially in its current state, without checkpoints and with too many
small parquet files) can take a very long time, K8s considers IOx
unhealthy. With this change we now load databases in the server
background worker once a server ID is available. Until then we block
all DB-related interactions, including adding new databases (since
without inspecting the object store there is no way to check whether
the DB already exists).
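A sketch of the background-worker pattern described above, with hypothetical names and using tokio (the real server worker carries more state and error handling): the worker loop waits until a server ID is available, loads databases once, and DB-related requests are rejected until that has happened.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use tokio::time::{sleep, Duration};

/// Hypothetical server state for this sketch.
struct Server {
    server_id: Mutex<Option<u32>>,
    databases_loaded: AtomicBool,
}

impl Server {
    /// Background worker: wait for a server ID (set via CLI, env var or gRPC),
    /// then load databases from the object store exactly once.
    async fn background_worker(self: Arc<Self>) {
        loop {
            let id = *self.server_id.lock().unwrap();
            if let Some(id) = id {
                // Placeholder for the (potentially slow) object store scan.
                println!("loading databases for server {id}...");
                self.databases_loaded.store(true, Ordering::SeqCst);
                return;
            }
            sleep(Duration::from_millis(100)).await;
        }
    }

    /// DB-related APIs reject requests until the databases are loaded, since
    /// we cannot even tell whether a database already exists before the
    /// object store has been inspected.
    fn create_database(&self, name: &str) -> Result<(), String> {
        if !self.databases_loaded.load(Ordering::SeqCst) {
            return Err(format!("server initializing, cannot create {name} yet"));
        }
        Ok(())
    }
}
```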
Furthermore, we now load databases regardless of whether the server ID was
passed on startup (via CLI or environment variable) or set later via gRPC
call. Before this change the latter case was somewhat forgotten.