* feat: switch protobuf write service to canonical definition
The protobuf definition used for the proto write endpoint was a WIP. Now
that a canonical definition exists at
https://github.com/influxdata/influxdb-pb-data-protocol/ we can switch
to that.
* chore: lint etc
* chore: fix rustdoc nit in proto definition comment
RoutingRules such as RoutingConfig and ShardConfig use a sink to decide where to write
the entries.
The write buffer is currently implemented in the `db` and is accessed by using the `write_local_entry`
code path. This PR simply invokes that legacy code path whenever a "kafka" sink is selected.
This immediately lets us benefit from the ShardingConfig's ability to select or reject
tables and send some to Kafka, some to devnull.
This PR does not yet allow us to split an input batch into multiple shards and send each
to a different Kafka topic. For that, we'll need to pull the write buffer code path out of
the `db` and do something similar to a ConnectionManager, but for write buffers. TODO
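To make the dispatch concrete, here is a minimal sketch; the `Sink` enum, `Db` type, and `write_local_entry` stand-ins are invented for illustration, and only the control flow (route a "kafka" sink through the legacy local write path, drop devnull traffic) follows the description above:

```rust
// All names here are hypothetical; only the routing logic mirrors the PR text.

#[derive(Debug)]
enum Sink {
    /// Forward to a remote node group (not exercised in this sketch).
    Iox(Vec<String>),
    /// Hand the entry to the local write buffer (the "kafka" sink).
    Kafka,
    /// Accept and silently discard matching tables.
    DevNull,
}

struct Db;

impl Db {
    /// Stand-in for the legacy code path that feeds the write buffer in `db`.
    fn write_local_entry(&self, entry: &str) -> Result<(), String> {
        println!("write buffer <- {entry}");
        Ok(())
    }
}

fn route_entry(db: &Db, sink: &Sink, entry: &str) -> Result<(), String> {
    match sink {
        // The "kafka" sink re-uses the legacy local write path for now.
        Sink::Kafka => db.write_local_entry(entry),
        // devnull: the write is accepted and dropped.
        Sink::DevNull => Ok(()),
        Sink::Iox(nodes) => Err(format!("remote write to {nodes:?} not shown here")),
    }
}

fn main() {
    let db = Db;
    route_entry(&db, &Sink::Kafka, "cpu,host=a usage=0.5").unwrap();
    route_entry(&db, &Sink::DevNull, "noisy_table value=1").unwrap();
}
```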
This deprecates the "target" field in the RoutingConfig and replaces it with the "sink"
field, which has a variant that accepts a node group.
This commit is backward compatible in that it will accept existing configs.
The configs will roundtrip to the new format though (i.e. `database get` will render
the sink field).
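A sketch of what accepting both shapes might look like; the field and type names are assumptions for illustration, not the actual protobuf-generated types:

```rust
// Field and type names are assumptions, not the generated protobuf types.

#[derive(Debug, Clone)]
struct NodeGroup(Vec<String>);

#[derive(Debug, Clone)]
enum Sink {
    /// The variant that accepts a node group.
    Iox(NodeGroup),
}

/// Wire-level routing config as it may arrive from an old or a new client.
struct RoutingConfigPb {
    /// Deprecated: pre-`sink` configs populated this field.
    target: Option<NodeGroup>,
    sink: Option<Sink>,
}

/// Internally we always use `sink`; old configs round-trip to it, which is
/// why `database get` renders the sink field afterwards.
fn normalize(pb: RoutingConfigPb) -> Result<Sink, &'static str> {
    match (pb.sink, pb.target) {
        (Some(sink), _) => Ok(sink),
        // Backward compatibility: lift a bare `target` into the node-group
        // variant of `sink`.
        (None, Some(target)) => Ok(Sink::Iox(target)),
        (None, None) => Err("routing config has neither sink nor target"),
    }
}

fn main() {
    let old_style = RoutingConfigPb {
        target: Some(NodeGroup(vec!["node-1".into()])),
        sink: None,
    };
    println!("{:?}", normalize(old_style).unwrap());
}
```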
The ShardConfig applies matchers that resolve to a shard number.
The config then applies a mapping from shard numbers to targets.
The type that encapsulated the target a shard points to was also called
a "Shard". This is confusing. This commit renames it to "Sink", i.e. a destination
for traffic to go to. Subsequent commits will expand the definition of a Sink to
encompass different kinds of sinks (like the Kafka write buffer, "devnull", ...)
This changes only the name of the protobuf message and the related Rust types;
it doesn't change any names in the JSON-rendered protobuf configs.
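A hypothetical sketch of the two-step lookup (matcher resolves a table to a shard number, a second map sends the shard number to a Sink); all names are invented for illustration:

```rust
use std::collections::HashMap;

type ShardId = u32;

#[derive(Debug, Clone)]
enum Sink {
    Kafka { topic: String },
    DevNull,
}

struct Matcher {
    /// Table name prefix this matcher fires on (a stand-in for the real
    /// matchers).
    table_prefix: String,
    shard: ShardId,
}

struct ShardConfig {
    matchers: Vec<Matcher>,
    shards: HashMap<ShardId, Sink>,
}

impl ShardConfig {
    /// First matcher wins: resolve a table to its shard, then the shard to
    /// its sink. Tables matching no rule get no sink.
    fn sink_for(&self, table: &str) -> Option<&Sink> {
        let shard = self
            .matchers
            .iter()
            .find(|m| table.starts_with(&m.table_prefix))?
            .shard;
        self.shards.get(&shard)
    }
}

fn main() {
    let config = ShardConfig {
        matchers: vec![
            Matcher { table_prefix: "cpu".into(), shard: 1 },
            Matcher { table_prefix: "scratch".into(), shard: 2 },
        ],
        shards: HashMap::from([
            (1, Sink::Kafka { topic: "iox-shard-1".into() }),
            (2, Sink::DevNull),
        ]),
    };
    println!("{:?}", config.sink_for("cpu,host=a"));      // Kafka sink
    println!("{:?}", config.sink_for("scratch_metrics")); // DevNull
}
```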
* feat: include more information in system.operations table
* chore: review feedback
Co-authored-by: Andrew Lamb <alamb@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Advantages are:
- for large DBs w/ many partitions we can ingest data in parallel
- on top of this change we can implement per-sequencer seeking, which is
required for replay
This refactors the write buffer a bit for:
- **Testing:** Add generic tests for the Kafka and the mock
  implementation. The same interface makes it easy to add new
  implementations (e.g. via Redis, filesystem, ...).
- **Partition on Write:** The caller of the write operation must now
  specify the partition/sequencer ID (see the sketch after this list).
  The implicit partitioning of the Kafka writer would have led to broken
  data, since we must never spill entries w/ the same primary key over
  multiple partitions. At the moment we only use partition 0, but we can
  easily implement better logic in the future.
- **Improved Mocking:** The mocked implementation now simulates a system
  that feels more real. In particular, the handling of multiple streams
  and "write while read" has been improved. This will be helpful for
  testing and for new features like seeking (during replay). A solid,
  realistic mock also helps ensure that tests using the mock do not rely
  too heavily on unrealistic behavior.
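The sketch below illustrates the caller-specified-sequencer contract and how a mock and a generic test could share one trait. Trait and type names are invented, not the actual IOx interfaces:

```rust
// A minimal sketch (invented names) of the "caller picks the sequencer"
// contract: the writer never partitions implicitly, so entries with the same
// primary key cannot be spilled across partitions.

trait WriteBufferWriting {
    /// Write `entry` to the given sequencer (Kafka partition) and return the
    /// offset/sequence number it was assigned.
    fn store_entry(&mut self, sequencer_id: u32, entry: &str) -> Result<u64, String>;
}

/// Mock implementation: one in-memory Vec per sequencer.
#[derive(Default)]
struct MockBuffer {
    sequencers: std::collections::HashMap<u32, Vec<String>>,
}

impl WriteBufferWriting for MockBuffer {
    fn store_entry(&mut self, sequencer_id: u32, entry: &str) -> Result<u64, String> {
        let log = self.sequencers.entry(sequencer_id).or_default();
        log.push(entry.to_string());
        Ok((log.len() - 1) as u64)
    }
}

/// Generic test usable against the mock and a Kafka-backed impl alike.
fn test_roundtrip<W: WriteBufferWriting>(buf: &mut W) {
    // For now every caller uses partition 0, as described above.
    let offset = buf.store_entry(0, "cpu,host=a usage=0.5").unwrap();
    assert_eq!(offset, 0);
    assert_eq!(buf.store_entry(0, "cpu,host=a usage=0.6").unwrap(), 1);
}

fn main() {
    test_roundtrip(&mut MockBuffer::default());
}
```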
The interplay between mutable_linger_seconds, late_arrive_window and persist_age_threshold_seconds can be tricky to reason about. I realized that the lifecycle rules can be simplified by removing mutable_linger_seconds and instead using late_arrive_window_seconds for the same purpose. Semantically, they basically mean the same thing. We want to give data around this amount of time to arrive before the system persists it, which gives it more of an opportunity to persist non-overlapping data.
When a partition goes cold for writes and we've waited past this window, we should compact and persist that partition. This removes one unnecessary knob from the lifecycle configuration and also removes the potential for conflicting configuration options.
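A minimal sketch of the simplified rule, assuming illustrative names for the lifecycle types rather than the real ones:

```rust
use std::time::{Duration, Instant};

// Hedged sketch: a single late-arrive window both delays persistence of fresh
// data and marks a partition "cold" once no writes have arrived within it.

struct LifecycleRules {
    late_arrive_window: Duration,
    persist_age_threshold: Duration,
}

struct Partition {
    last_write: Instant,
    oldest_unpersisted: Instant,
}

fn should_persist(rules: &LifecycleRules, p: &Partition, now: Instant) -> bool {
    // Cold for writes: the late-arrive window has fully elapsed.
    let cold = now.duration_since(p.last_write) >= rules.late_arrive_window;
    // Or the unpersisted data is simply old enough.
    let overdue =
        now.duration_since(p.oldest_unpersisted) >= rules.persist_age_threshold;
    cold || overdue
}

fn main() {
    let rules = LifecycleRules {
        late_arrive_window: Duration::from_secs(300),
        persist_age_threshold: Duration::from_secs(1800),
    };
    let now = Instant::now();
    let p = Partition { last_write: now, oldest_unpersisted: now };
    assert!(!should_persist(&rules, &p, now)); // fresh writes: keep buffering
}
```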
* feat: add split and persist operation
* docs: Improve doc strings
* refactor: use for loop rather than map
* refactor: Make it clear that the lifecycle policy picks the split timestamp
* fix: race condition
* docs: improve comments
* fix: logical merge conflict
* fix: clippy
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
When reading from the Kafka write buffer, subscribe to all partitions in
a topic and start from the smallest offset available, instead of
assuming there will only be 1 partition per topic.
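A sketch of how this can be done with the `rdkafka` crate; the broker address, topic name, and group id are placeholders, and error handling is simplified:

```rust
use std::time::Duration;

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::{Offset, TopicPartitionList};

fn main() {
    // Placeholder broker/topic/group; real values come from the config.
    let consumer: BaseConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "iox-reader")
        .create()
        .expect("consumer creation failed");

    // Discover every partition of the topic instead of assuming one.
    let metadata = consumer
        .fetch_metadata(Some("iox-topic"), Duration::from_secs(5))
        .expect("metadata fetch failed");

    let mut assignment = TopicPartitionList::new();
    for topic in metadata.topics() {
        for partition in topic.partitions() {
            // Start each partition at the smallest offset available.
            assignment
                .add_partition_offset("iox-topic", partition.id(), Offset::Beginning)
                .expect("valid offset");
        }
    }
    consumer.assign(&assignment).expect("partition assignment failed");
}
```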
A database on one IOx server can, exclusively:
- Not interact with Kafka at all
- Send writes to Kafka
- Read writes from Kafka
Notably, a database on a particular server will never write *and* read from Kafka at the same time.
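One way to model this exclusivity (names invented for illustration) is a single enum, so a database cannot even be configured to both produce and consume:

```rust
// Sketch of the three mutually exclusive per-database modes listed above.
// Modelling them as one enum makes "write *and* read from Kafka at the same
// time" unrepresentable rather than merely checked.

enum WriteBufferConnection {
    /// Database does not interact with Kafka at all.
    None,
    /// Database sends its writes to Kafka.
    Writing { brokers: String },
    /// Database reads writes from Kafka produced elsewhere.
    Reading { brokers: String },
}

fn describe(conn: &WriteBufferConnection) -> &'static str {
    match conn {
        WriteBufferConnection::None => "no Kafka",
        WriteBufferConnection::Writing { .. } => "producing to Kafka",
        WriteBufferConnection::Reading { .. } => "consuming from Kafka",
    }
}

fn main() {
    let conn = WriteBufferConnection::Reading { brokers: "localhost:9092".into() };
    println!("{}", describe(&conn));
}
```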
Instead of waiting for the server ID to be set and then marking the server
as errored, directly check the object store on startup. This is
important so that we fail fast when Istio isn't up and running yet.
* feat: unload chunks from memory rather than dropping them
* docs: Update server/src/db/lifecycle.rs
Co-authored-by: Marco Neumann <marco@crepererum.net>
* docs: Update comment wording
Co-authored-by: Marco Neumann <marco@crepererum.net>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Before this change we loaded databases eagerly when a serverID was
passed on startup BEFORE starting up the gRPC server. Since loading
(esp. in its current state without checkpoints and with too many small
parquet files) can take very long, K8s thinks IOx is unhealthy. With
this change we are now loading databases in the server background worker
once a serverID is available. Until then we block all DB-related
interactions including adding new databases (since without inspecting
the object store there is no way we can check if the DB already
exists).
Furthermore, we now load databases regardless of whether the serverID was
passed on startup (via CLI or environment variable) or set later via gRPC
call. Before this change the latter case was somewhat forgotten.
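A toy sketch of this startup flow, with invented types; it only shows the ordering (server responsive first, databases loaded by a background worker once a server ID appears):

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;

#[derive(Default)]
struct Server {
    server_id: AtomicU32, // 0 = unset
}

impl Server {
    /// Called from CLI/env parsing on startup or from a later gRPC call.
    fn set_server_id(&self, id: u32) {
        self.server_id.store(id, Ordering::SeqCst);
    }

    fn background_worker(self: Arc<Self>) {
        loop {
            let id = self.server_id.load(Ordering::SeqCst);
            if id != 0 {
                // Potentially slow (many small parquet files, no checkpoints),
                // but running it here keeps the gRPC server responsive so K8s
                // health checks still pass during the load.
                println!("loading databases for server {id} from object store");
                return;
            }
            // Until now, DB-related APIs would reject with "server id unset".
            std::thread::sleep(std::time::Duration::from_millis(100));
        }
    }
}

fn main() {
    let server = Arc::new(Server::default());
    let worker = std::thread::spawn({
        let server = Arc::clone(&server);
        move || server.background_worker()
    });
    server.set_server_id(42); // e.g. arriving later via gRPC
    worker.join().unwrap();
}
```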
* refactor: Rearrange to allow injection of the current time in tests
* test: Failing test showing a point can be in the wrong partition
* fix: Only get the default time once per ShardedEntry creation, in router
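A tiny sketch of the idea behind the last fix (types invented): read the clock once per ShardedEntry creation and reuse that value, so lines lacking an explicit timestamp all share one default time and cannot straddle a partition boundary when the clock ticks mid-split:

```rust
use std::time::SystemTime;

struct Line {
    timestamp: Option<SystemTime>,
}

fn default_times_for(lines: &[Line]) -> Vec<SystemTime> {
    // One clock read per ShardedEntry creation, not one per line/shard.
    let default_time = SystemTime::now();
    lines
        .iter()
        .map(|line| line.timestamp.unwrap_or(default_time))
        .collect()
}

fn main() {
    let lines = vec![Line { timestamp: None }, Line { timestamp: None }];
    let times = default_times_for(&lines);
    // All defaulted lines share one timestamp, so they land in one partition.
    assert_eq!(times[0], times[1]);
}
```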