RoutingRules such as `RoutingConfig` and `ShardConfig` use a sink to decide where to write
entries.
The write buffer is currently implemented inside the `db` and is accessed via the `write_local_entry`
code path. This PR simply invokes that legacy code path whenever a "kafka" sink is selected.
This allows us to immediately benefit from the ability of the ShardConfig to select or reject
tables and send some to kafka, some to devnull.
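A minimal sketch of that dispatch, assuming hypothetical `Sink`, `Entry`, and `Db` types (the real IOx types differ):

```rust
// Illustrative only: `Sink`, `Entry`, and `write_local_entry` here are
// simplified stand-ins, not the actual IOx definitions.
enum Sink {
    Kafka,   // routed through the legacy in-`db` write buffer path
    DevNull, // accepted and silently dropped
}

struct Entry(String);

struct Db;

impl Db {
    // Stand-in for the legacy code path living inside the `db`.
    fn write_local_entry(&self, entry: Entry) {
        println!("writing {:?} via legacy write buffer path", entry.0);
    }

    fn route(&self, sink: &Sink, entry: Entry) {
        match sink {
            // "kafka" sink: reuse the legacy write buffer code path.
            Sink::Kafka => self.write_local_entry(entry),
            // "devnull" sink: drop the entry on the floor.
            Sink::DevNull => {}
        }
    }
}

fn main() {
    let db = Db;
    db.route(&Sink::Kafka, Entry("cpu,host=a usage=1".into()));
    db.route(&Sink::DevNull, Entry("mem,host=a used=2".into()));
}
```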
This PR does not yet allow us to split an input batch into multiple shards and send each
to a different kafka topic. For that, we'll need to pull the write buffer code path out of
the `db` and do something similar to a ConnectionManager, but for write buffers; that remains a TODO.
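One possible shape for that future refactor, with all names hypothetical (a `ConnectionManager`-style registry that hands out per-topic write buffers):

```rust
use std::collections::HashMap;

// Hypothetical trait for a write buffer pulled out of the `db`.
trait WriteBuffer {
    fn store(&self, entry: &[u8]);
}

struct KafkaWriteBuffer {
    topic: String,
}

impl WriteBuffer for KafkaWriteBuffer {
    fn store(&self, entry: &[u8]) {
        println!("producing {} bytes to kafka topic {}", entry.len(), self.topic);
    }
}

// Analogous to the existing ConnectionManager, but resolving kafka
// topics to write buffers so each shard can target a different topic.
struct WriteBufferConnectionManager {
    buffers: HashMap<String, Box<dyn WriteBuffer>>,
}

impl WriteBufferConnectionManager {
    fn remote(&self, topic: &str) -> Option<&dyn WriteBuffer> {
        self.buffers.get(topic).map(|b| b.as_ref())
    }
}

fn main() {
    let mut buffers: HashMap<String, Box<dyn WriteBuffer>> = HashMap::new();
    buffers.insert(
        "topic-a".into(),
        Box::new(KafkaWriteBuffer { topic: "topic-a".into() }),
    );
    let mgr = WriteBufferConnectionManager { buffers };
    if let Some(wb) = mgr.remote("topic-a") {
        wb.store(b"entry bytes");
    }
}
```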
* fix: Properly record total_count and null_count in statistics
* fix: fix statistics calculation in mutable_buffer
* refactor: expose null counts in read_buffer
* refactor: expose null_count in parquet_file
* fix: update server crate tests
* fix: update query_tests tests
* docs: tweak comments
* refactor: Use storage_stats rather than adding `null_count`
* refactor: rename test data field for clarity
* fix: fixup merge conflicts
* refactor: rename initial_non_null_count to initial_total_count
* refactor: calculate null_count as row_count - to_add (sketched below)
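A minimal sketch of the bookkeeping the last item describes, using a simplified stand-in for the real statistics type (names are illustrative):

```rust
// Simplified stand-in for the real statistics type: `total_count`
// counts every row, and `null_count` is derived as the difference
// between the row count and the non-null values actually added.
#[derive(Default, Debug)]
struct StatValues {
    total_count: u64,
    null_count: u64,
}

impl StatValues {
    // `row_count` is the number of rows in the batch; `to_add` is the
    // number of non-null values observed for this column.
    fn update_batch(&mut self, row_count: u64, to_add: u64) {
        self.total_count += row_count;
        self.null_count += row_count - to_add;
    }
}

fn main() {
    let mut stats = StatValues::default();
    stats.update_batch(10, 7); // 10 rows, 7 non-null => 3 nulls
    assert_eq!(stats.total_count, 10);
    assert_eq!(stats.null_count, 3);
    println!("{:?}", stats);
}
```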
This puts all the replay logic under `server::db::replay`, together with its
error variants and tests.
The tests are reworked using a more generic
test framework that allows us to specify a number of steps instead of
filling pre-defined ones with variables. Each step is either an action
(e.g. restart the DB, perform replay, ingest data into the write buffer
state) or a check (e.g. assert that certain partitions exist, await until
the background worker has ingested certain partitions). The entire
framework is kept generic, so it should be easy to create more checks and
actions in the future. The resulting tests are more verbose, but (at
least in my opinion) easier to follow, since the reader can see
what's happening at each step and does not have to jump back and forth
between the test config and the "driver" that uses the config.
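A rough sketch of what such a step-based test could look like; the real framework under `server::db::replay` differs in detail, and every name here is illustrative:

```rust
// Each test is a linear list of steps, each either an action or a check.
enum Step {
    // Actions
    RestartDb,
    Replay,
    IngestWriteBuffer(&'static str),
    // Checks
    AssertPartitions(Vec<&'static str>),
    AwaitIngested(Vec<&'static str>),
}

fn run(steps: Vec<Step>) {
    for step in steps {
        match step {
            Step::RestartDb => { /* tear down and re-create the DB */ }
            Step::Replay => { /* perform replay from the write buffer */ }
            Step::IngestWriteBuffer(lp) => {
                // push line protocol into the write buffer state
                let _ = lp;
            }
            Step::AssertPartitions(parts) => {
                // assert that these partitions exist right now
                let _ = parts;
            }
            Step::AwaitIngested(parts) => {
                // poll until the background worker has ingested them
                let _ = parts;
            }
        }
    }
}

fn main() {
    // The reader can follow the scenario step by step:
    run(vec![
        Step::IngestWriteBuffer("cpu,host=a usage=1 0"),
        Step::RestartDb,
        Step::Replay,
        Step::AwaitIngested(vec!["cpu"]),
        Step::AssertPartitions(vec!["cpu"]),
    ]);
}
```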
This deprecates the "target" field in the RoutingConfig and replaces it with the "sink"
field, which has a variant that accepts a node group.
This commit is backward compatible in that it will accept existing configs.
The configs will roundtrip to the new format, though (i.e. `database get` will render
the sink field).
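A hedged sketch of that backward compatibility, with illustrative field and type names (the real config is protobuf; this only mirrors the fallback logic):

```rust
#[derive(Debug)]
enum Sink {
    NodeGroup(Vec<String>), // the variant that accepts a node group
}

#[derive(Default)]
struct RoutingConfigWire {
    target: Option<Vec<String>>, // deprecated field
    sink: Option<Sink>,          // its replacement
}

// Prefer the new field; fall back to the deprecated one so existing
// configs keep working, and render only `sink` going forward.
fn normalize(wire: RoutingConfigWire) -> Option<Sink> {
    wire.sink.or_else(|| wire.target.map(Sink::NodeGroup))
}

fn main() {
    let old = RoutingConfigWire {
        target: Some(vec!["node-1".into()]),
        ..Default::default()
    };
    println!("{:?}", normalize(old)); // Some(NodeGroup(["node-1"]))
}
```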
The ShardConfig applies matchers that resolve to a shard number.
The config then applies a mapping from shard numbers to targets.
The type that encapsulated the target a shard points to was also called
a "Shard", which is confusing. This commit changes it to "Sink", i.e. a destination
for traffic to go to. Subsequent commits will expand the definition of a Sink to
encompass different kinds of sinks (like the kafka write buffer, "devnull", ...).
This changes only the name of the protobuf message and the related rust types;
it doesn't change any names in the json-rendered protobuf configs.
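A simplified sketch of the two-stage lookup (matcher to shard number, shard number to `Sink`), with illustrative stand-in types:

```rust
use std::collections::HashMap;

type ShardId = u32;

// Formerly (and confusingly) called a "Shard"; now a destination.
enum Sink {
    KafkaWriteBuffer { topic: String },
    DevNull,
}

struct Matcher {
    table_prefix: String, // real matchers are richer than a prefix test
    shard: ShardId,
}

struct ShardConfig {
    matchers: Vec<Matcher>,
    shards: HashMap<ShardId, Sink>,
}

impl ShardConfig {
    fn sink_for(&self, table: &str) -> Option<&Sink> {
        // Stage 1: matchers resolve the table to a shard number.
        let shard = self
            .matchers
            .iter()
            .find(|m| table.starts_with(&m.table_prefix))?
            .shard;
        // Stage 2: the shard number maps to a Sink.
        self.shards.get(&shard)
    }
}

fn main() {
    let config = ShardConfig {
        matchers: vec![
            Matcher { table_prefix: "cpu".into(), shard: 1 },
            Matcher { table_prefix: "mem".into(), shard: 2 },
        ],
        shards: HashMap::from([
            (1, Sink::KafkaWriteBuffer { topic: "ingest".into() }),
            (2, Sink::DevNull),
        ]),
    };
    assert!(config.sink_for("cpu_load").is_some()); // goes to kafka
    assert!(config.sink_for("disk").is_none());     // no matcher
}
```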