If two callers call create_or_get() for a partition with the same
partition key & table ID but different sequence numbers, the catalog
MUST NOT continue silently.
As this is unexpected, this behaviour causes a panic.
This commit introduces test cases that ensure that for each of the
tombstone & partition repos:
* create_or_get() is idempotent
* create_or_get() does not silently drop conflicting writes
The former covers the expected use case: callers issuing potentially
multiple calls to create_or_get() with the same args, causing the
catalog impl to transparently turn the subsequent calls into NOPs.
The latter illustrates an issue where multiple calls to create_or_get()
with differing arguments are silently accepted by the catalog, causing
both callers to believe they have committed the (differing) details they
provided. This test is expected to pass but fails in this commit.
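To make the two cases concrete, here is a self-contained model (not the crate's code) of the behaviour the tests assert; the conflicting argument is shown as a sequencer/sequence value and the exact names differ in the real repos:

```rust
use std::collections::HashMap;

// Hypothetical model only: partitions keyed on (partition key, table id),
// with the third argument as the potentially conflicting write.
#[derive(Debug, Clone, PartialEq, Eq)]
struct Partition {
    key: String,
    table_id: i64,
    sequencer_id: i64,
}

#[derive(Default)]
struct PartitionRepo {
    partitions: HashMap<(String, i64), Partition>,
}

impl PartitionRepo {
    fn create_or_get(&mut self, key: &str, sequencer_id: i64, table_id: i64) -> Partition {
        let entry = self
            .partitions
            .entry((key.to_string(), table_id))
            .or_insert_with(|| Partition {
                key: key.to_string(),
                table_id,
                sequencer_id,
            });

        // Idempotent for identical arguments; a conflicting write MUST NOT
        // be silently dropped, so it panics instead.
        assert_eq!(
            entry.sequencer_id, sequencer_id,
            "attempted to overwrite partition with differing write"
        );
        entry.clone()
    }
}
```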
This commit changes the iox_catalog test harness so that each test is
run in an independent, randomly generated schema to avoid concurrent
tests interfering with each other.
Each test creates a randomly-named schema, grants permissions to the new
schema, runs the full migration stack against the new schema, and then
executes the code under test.
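A rough sketch of the per-test setup described above, assuming sqlx and a DATABASE_URL pointing at a Postgres instance; the helper name and pool sizing are illustrative, not the crate's actual harness:

```rust
use rand::{distributions::Alphanumeric, Rng};
use sqlx::postgres::PgPoolOptions;

async fn setup_isolated_schema() -> Result<(sqlx::PgPool, String), sqlx::Error> {
    let dsn = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let pool = PgPoolOptions::new().max_connections(5).connect(&dsn).await?;

    // Randomly-named schema so concurrent tests cannot interfere.
    let suffix: String = rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(8)
        .map(char::from)
        .collect();
    let schema = format!("test_{}", suffix.to_lowercase());

    sqlx::query(&format!(r#"CREATE SCHEMA "{}""#, schema))
        .execute(&pool)
        .await?;
    // Granting permissions and running the migration stack against the new
    // schema would follow here, mirroring the steps described above.

    Ok((pool, schema))
}
```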
Allow the caller to set the Postgres schema a migration should be
applied to, rather than restricting the migration to a specific,
hard-coded schema.
BREAKING CHANGE: manually adds a new migration that precedes the
existing migration to ensure the iox_catalog schema exists before
applying the migration. You'll probably have to drop any existing
databases and migrate from scratch:
sqlx database drop; sqlx database create;
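For orientation, a sketch of what "apply the migrations to a caller-chosen schema" can look like with sqlx; the helper name, schema handling, and migrations path are assumptions, not the crate's actual code:

```rust
use sqlx::{Connection, Executor, PgConnection};

async fn migrate_into_schema(dsn: &str, schema: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut conn = PgConnection::connect(dsn).await?;

    // Make sure the target schema exists, then point the session at it so
    // the migrations create their tables there rather than in `public`.
    conn.execute(format!(r#"CREATE SCHEMA IF NOT EXISTS "{schema}""#).as_str())
        .await?;
    conn.execute(format!(r#"SET search_path TO "{schema}""#).as_str())
        .await?;

    sqlx::migrate!("./migrations").run(&mut conn).await?;
    Ok(())
}
```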
* chore: port sqlx-hotswap-pool over from conductor
Co-authored-by: Marko Mikulicic <mkm@influxdata.com>
* chore: workspace hack fixes
* fix: unique schema per test db connection
* fix: adjust search path in catalog pg tests to see if it fixes test schema issue
* fix: actually fixed sqlx hotswap pool test
Co-authored-by: Marko Mikulicic <mkm@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: allow catalog access w/o a transaction
Now the caller has full control over whether they want to use a
transaction or not.
* fix: remove non-transaction-safe `create_many`
* fix: remove unnecessary transactions
* feat: projection pushdown for QueryableBatch
* chore: clean up and remove unwrap
* fix: Add Sync to a Snafu source to have the code compile
* chore: cleanup and add comments for tests
* refactor: Add tests for scanning non-existent columns and fix related bugs
* chore: modify comment to trigger auto check in GitHub workflow
* refactor: catalog Unit of Work (= transaction)
Set up an interface to handle Units of Work within our catalog. Previously
both the Postgres and the in-mem backend used "mini-transactions on
demand". Now the caller has a clear way to establish boundaries and
gets read and write isolation. A single `Arc<dyn Catalog>` can create as
many `Box<dyn UnitOfWork>` as you like, but note that depending on the
backend you may not scale infinitely (postgres will likely impose
certain limits and the in-mem backend limits concurrency to 1 to keep
things simple).
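For orientation, a condensed sketch of the interface shape described here (before the later rename to Transaction); the method names, error type, and omitted repository accessors are illustrative, not the crate's actual definitions:

```rust
use std::sync::Arc;

#[async_trait::async_trait]
trait Catalog: Send + Sync {
    /// Start a new unit of work with read/write isolation.
    async fn start_transaction(&self) -> Box<dyn UnitOfWork>;
}

#[async_trait::async_trait]
trait UnitOfWork: Send {
    async fn commit(self: Box<Self>) -> Result<(), CatalogError>;
    async fn abort(self: Box<Self>) -> Result<(), CatalogError>;
}

#[derive(Debug)]
struct CatalogError;

async fn example(catalog: Arc<dyn Catalog>) -> Result<(), CatalogError> {
    // One catalog handle can hand out many independent units of work.
    let txn = catalog.start_transaction().await;
    // ... repository calls scoped to `txn` would go here ...
    txn.commit().await
}
```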
* docs: improve wording
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
* refactor: rename Unit of Work to Transaction
* test: improve `test_txn_isolation`
* feat: clarify transaction drop semantics
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: add ProcessedTombstoneRepo
* feat: add function add_parquet_file_with_tombstones
* fix: remove unnecessary use
* feat: handling transaction when adding parquet file and its processed tombstones
* feat: tests update catalog for parquet file and processed tombstones
* fix: make add parquet file & its processed tombstones fully transactional
* chore: cleanup
* test: add integration tests for new catalog update functions
* chore: remove catalog_update.rs
* chore: cleanup
* fix: assert the right values
* fix: create unique namespace
* fix: support non-transactional create_many
* test: remove tests that do not work in a transaction
* fix: one more case with unique namespace
* chore: more verification to better understand why certain tests fail
* fix: compare the difference rather than absolute values because the DB already has data
* fix: fix the argument provided to SQL
* fix: return non-empty processed tombstones
* fix: insert the right parquet file
* chore: remove unused file
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This adds the scaffolding for the ingester server to consume data from Kafka. This ingests data in an in memory structure while creating records in the catalog for any partitions that don't yet exist.
I've removed catalog_update.rs in ingester for now. That was mostly a placeholder and will be going in a combination of handler.rs and data.rs on my next PR which will have some primitive lifecycle wired up.
There's one ugly bit here where the DML write is cloned because it's getting borrowed to output spans and metrics. I'll need to follow up with a refactor to make it so that the DML write's tables can be consumed without it gumming up the metrics stuff.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: remove InfluxColumnType::IOx
Remove unused column variant - see #3554 for context.
* refactor: reserve SEMANTIC_TYPE_IOX name in proto
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: Debug bounds on Catalog trait
* feat: validate MutableBatch schema
Changes the schema validation code to validate MutableBatch instances
(coming from a pre-parsed LP write, and non-LP-based writes) instead of
parsed LP lines.
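As a self-contained illustration (not the crate's types), the validation step boils down to checking each incoming column against the known namespace schema and rejecting type conflicts:

```rust
use std::collections::BTreeMap;

// Hypothetical, simplified column types; the real schema types live in the
// schema/mutable_batch crates.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ColumnType {
    Tag,
    F64,
    I64,
    U64,
    String,
    Bool,
    Time,
}

fn validate_batch_schema(
    known: &BTreeMap<String, ColumnType>,
    batch_columns: &[(String, ColumnType)],
) -> Result<(), String> {
    for (name, got) in batch_columns {
        match known.get(name) {
            // New column: would be added to the schema (elided here).
            None => {}
            Some(want) if want == got => {}
            Some(want) => {
                return Err(format!(
                    "column {name} type conflict: batch has {got:?}, schema has {want:?}"
                ))
            }
        }
    }
    Ok(())
}
```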
* refactor: Send bound on boxed errors
* refactor: clippy
Allow assert_eq!(bool, bool) for readability.
* refactor: no PartialEq<MB Column> for ColumnSchema
Remove the PartialEq<mutable_buffer::Column> for ColumnSchema - it's
definitely more readable as a method call.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* build: don't pull in all of tokio
We already specify the tokio features we need, so "full" (all features)
is not necessary.
* build: remove chrono dependency
Appears unused.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This updates the catalog API to make it easier to work with for consumers. I also found a bug in the MemCatalog implementation while refactoring the tests to work with the new API definition. Consumers will now be able to Arc wrap the catalog and use it across awaits.
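A hedged sketch of the usage this enables, assuming a catalog trait whose methods take `&self`; the trait and its method here are placeholders, not the real API:

```rust
use std::sync::Arc;

#[async_trait::async_trait]
trait Catalog: Send + Sync {
    // Placeholder method standing in for the real repo accessors.
    async fn namespace_exists(&self, name: &str) -> bool;
}

async fn spawn_workers(catalog: Arc<dyn Catalog>) {
    let mut handles = Vec::new();
    for i in 0..4 {
        let catalog = Arc::clone(&catalog);
        handles.push(tokio::spawn(async move {
            // The trait object is Send + Sync, so it can be held across awaits.
            let _ = catalog.namespace_exists(&format!("ns_{i}")).await;
        }));
    }
    for h in handles {
        let _ = h.await;
    }
}
```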
* Adds TombstoneId and Tombstone to the iox_catalog with associated interfaces
* Adds SequenceNumber new type for use with Tombstone
* Adds Timestamp new type for use with Tombstone
* Adds constraint to the Postgres schema to enforce tombstone uniqueness by table_id, sequencer_id, and sequence_number
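These are new-type wrappers around raw integer values so the different IDs cannot be mixed up; a sketch of the pattern (the exact derives and helper methods in the crate may differ):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct TombstoneId(i64);

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct SequenceNumber(i64);

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Timestamp(i64);

impl SequenceNumber {
    pub fn new(v: i64) -> Self {
        Self(v)
    }
    pub fn get(&self) -> i64 {
        self.0
    }
}
```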
Adds a memory-based catalog, useful for testing purposes.
Separates getting the namespace schema from getting the namespace, and moves the schema code out of the postgres module into the interface.
This creates traits for the catalog API and moves the data objects over to interface.rs.
Updates the postgres module to implement the trait API.
Moves schema validation and creation out to the primary lib using the trait API.
Adds a setup function to create shared Kafka topic, query pool, and sequencer records.
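A minimal sketch of that split, with a repo trait as it might appear in interface.rs and an in-memory implementation for tests; names and signatures are illustrative only, and the Postgres module would implement the same trait against sqlx:

```rust
use std::sync::Mutex;

#[async_trait::async_trait]
trait NamespaceRepo: Send + Sync {
    async fn create(&self, name: &str) -> Option<Namespace>;
}

#[derive(Debug, Clone)]
struct Namespace {
    id: i64,
    name: String,
}

#[derive(Default)]
struct MemCatalog {
    namespaces: Mutex<Vec<Namespace>>,
}

#[async_trait::async_trait]
impl NamespaceRepo for MemCatalog {
    async fn create(&self, name: &str) -> Option<Namespace> {
        let mut namespaces = self.namespaces.lock().unwrap();
        if namespaces.iter().any(|n| n.name == name) {
            return None; // name already taken
        }
        let ns = Namespace {
            id: namespaces.len() as i64 + 1,
            name: name.to_string(),
        };
        namespaces.push(ns.clone());
        Some(ns)
    }
}
```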
* refactor: ensure sequencers are unique
Adds a unique constraint to ensure only one sequencer record exists for
each Kafka (topic, partition).
* test: use DSN from env for integration tests
Removes the hard-coded DSN, instead sourcing it from the DATABASE_URL
environment variable.
* docs: integration testing for iox_catalog
Documents the required steps in order to run the Postgres integration
tests for the iox_catalog crate.
* feat(iox_catalog): create & list sequencers
Adds support for interacting with the "sequencer" table.
* chore: update lockfile
Running cargo in iox_catalog generates a lockfile diff.
Changed to use the iox_catalog schema in Postgres rather than public.
Updated table names to be singular.
Removed the connection_string from query_pool.