This will be helpful when we want to batch DML operations in memory
(e.g. when using RSKafka).
This also ensures that `MBChunk` accounts for the column names that
are stored within `MutableBatch`.
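
As a minimal sketch of the idea only (the field names and the `size()` signature here are invented for illustration; the real types are richer):

```rust
use std::collections::BTreeMap;

/// Hypothetical, simplified stand-ins for the real `Column` / `MutableBatch`.
struct Column {
    /// Placeholder for the column's value storage.
    data: Vec<u8>,
}

struct MutableBatch {
    /// Column name -> column data.
    columns: BTreeMap<String, Column>,
}

impl MutableBatch {
    /// Estimate the memory footprint of the batch, counting the
    /// heap-allocated column names as well as the column data.
    fn size(&self) -> usize {
        std::mem::size_of::<Self>()
            + self
                .columns
                .iter()
                .map(|(name, col)| name.len() + col.data.len())
                .sum::<usize>()
    }
}
```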
Adds a memory-based catalog, useful for testing purposes.
Separates getting the namespace schema from getting the namespace, and moves the schema code out of the postgres module.
This creates traits for the catalog API and moves the data objects over to interface.rs.
Updates the postgres module to implement the trait API.
Moves schema validation and creation out to the primary lib using the trait API.
Adds a setup function to create shared Kafka topic, query pool, and sequencer records.
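
As a rough sketch only (the trait, method, and type names below are illustrative, not the crate's exact API), the shape is something like:

```rust
use async_trait::async_trait;

/// Illustrative error and data types; the real interface.rs defines richer ones.
#[derive(Debug)]
pub struct Error(pub String);

#[derive(Debug, Clone)]
pub struct Namespace {
    pub id: i32,
    pub name: String,
}

/// One of the repository traits that would live in interface.rs;
/// backends implement it rather than exposing Postgres directly.
#[async_trait]
pub trait NamespaceRepo: Send + Sync {
    async fn create(&self, name: &str) -> Result<Namespace, Error>;
    async fn get_by_name(&self, name: &str) -> Result<Option<Namespace>, Error>;
}

/// Postgres-backed implementation (sketched).
pub struct PostgresCatalog {
    // sqlx connection pool, etc.
}

/// In-memory implementation, handy for tests.
#[derive(Default)]
pub struct MemCatalog {
    // Mutex-protected Vec<Namespace>, etc.
}
```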
* refactor: ensure sequencers are unique
Adds a unique constraint to ensure only one sequencer record exists for
each Kafka (topic, partition).
* test: use DSN from env for integration tests
Removes the hard-coded DSN, instead sourcing it from the DATABASE_URL
environment variable.
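
For example, a helper along these lines (the `maybe_dsn` name and skip behaviour are assumptions, not the crate's actual code):

```rust
use std::env;

/// Sketch of a test helper: read the Postgres DSN from the environment
/// instead of hard-coding it, and skip the test when it is unset.
fn maybe_dsn() -> Option<String> {
    match env::var("DATABASE_URL") {
        Ok(dsn) if !dsn.is_empty() => Some(dsn),
        _ => {
            eprintln!("skipping integration test: DATABASE_URL is not set");
            None
        }
    }
}
```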
* docs: integration testing for iox_catalog
Documents the steps required to run the Postgres integration tests for the
iox_catalog crate.
* feat(iox_catalog): create & list sequencers
Adds support for interacting with the "sequencer" table.
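
A sketch of what such a repository might look like; the names and signatures below are illustrative rather than the crate's exact API:

```rust
use async_trait::async_trait;

/// Illustrative types; the real crate defines richer ones.
#[derive(Debug, Clone)]
pub struct Sequencer {
    pub id: i16,
    pub kafka_topic_id: i32,
    pub kafka_partition: i32,
}

#[derive(Debug)]
pub struct Error(pub String);

/// Sketch of a repository over the "sequencer" table. A `create_or_get`
/// style method pairs naturally with the unique (topic, partition)
/// constraint: repeated calls return the existing record rather than
/// inserting a duplicate.
#[async_trait]
pub trait SequencerRepo: Send + Sync {
    async fn create_or_get(
        &self,
        kafka_topic_id: i32,
        kafka_partition: i32,
    ) -> Result<Sequencer, Error>;

    async fn list(&self) -> Result<Vec<Sequencer>, Error>;
}
```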
* chore: update lockfile
Running cargo in iox_catalog generates a lockfile diff.
Changed to use the `iox_catalog` schema in Postgres rather than `public`.
Updated table names to be singular.
Removed `connection_string` from `query_pool`.
Instead of converting the set of MutableBatches into a DmlOperation to
shard into more DmlOperation instances, the sharder can operate directly
on the MutableBatches.
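
Conceptually the change looks something like the sketch below; the function signature, `ShardId` type, and hash-based routing are illustrative assumptions, not the actual sharder:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Placeholder for the real MutableBatch type.
#[derive(Debug, Default, Clone)]
struct MutableBatch;

/// Identifier of a shard (e.g. a Kafka partition).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ShardId(u32);

/// Shard per-table batches directly, without first wrapping them in a
/// DmlOperation and splitting that into more DmlOperation instances.
fn shard_batches(
    batches: HashMap<String, MutableBatch>,
    n_shards: u32,
) -> HashMap<ShardId, HashMap<String, MutableBatch>> {
    let mut out: HashMap<ShardId, HashMap<String, MutableBatch>> = HashMap::new();
    for (table, batch) in batches {
        // Illustrative routing: hash the table name onto a shard.
        let mut hasher = DefaultHasher::new();
        table.hash(&mut hasher);
        let shard = ShardId((hasher.finish() % u64::from(n_shards)) as u32);
        out.entry(shard).or_default().insert(table, batch);
    }
    out
}
```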
Defines the DmlHandler trait responsible for processing a request in
some abstract way, decoupling the HTTP/gRPC request handlers from the
underlying routing logic.
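
The decoupling can be pictured roughly like this; the trait shape below is a sketch under assumed names, not the actual definition:

```rust
use async_trait::async_trait;
use std::collections::HashMap;

/// Placeholder for the real type.
#[derive(Debug, Default)]
pub struct MutableBatch;

#[derive(Debug)]
pub struct DmlError(pub String);

/// Sketch: the HTTP/gRPC handlers parse the request into per-table
/// batches and hand them to a DmlHandler, without knowing anything
/// about sharding or where the write ultimately goes.
#[async_trait]
pub trait DmlHandler: Send + Sync {
    async fn write(
        &self,
        namespace: &str,
        batches: HashMap<String, MutableBatch>,
    ) -> Result<(), DmlError>;
}
```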
Previously, a request that specified an empty org & bucket value would be
mapped to a database named "_".
This commit changes the org/bucket mapping function to return an error if
either org or bucket is empty.
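
For illustration, the validated mapping might look roughly like this (the function and error names are assumptions):

```rust
/// Illustrative error type.
#[derive(Debug, PartialEq)]
pub enum OrgBucketMappingError {
    NotSpecified,
}

/// Sketch: combine org & bucket into a database name, rejecting empty
/// values instead of silently producing "_".
pub fn org_and_bucket_to_database(
    org: &str,
    bucket: &str,
) -> Result<String, OrgBucketMappingError> {
    if org.is_empty() || bucket.is_empty() {
        return Err(OrgBucketMappingError::NotSpecified);
    }
    Ok(format!("{}_{}", org, bucket))
}
```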