* feat: Add TableId and ColumnId
* feat: swap over to DbId and TableId everywhere
This commit swaps us over to using the DbId and TableId types everywhere
for our internal systems. Anywhere that is external facing, such as names
for last cache tables or line protocol parsing, we use names. In these cases
we have the `Catalog`, which keeps TableIds and DbIds in a bidirectional
mapping for easy lookup, i.e., id <-> name. While the change itself isn't
that complicated in essence, given how much we depended on names for
things, the changes end up being quite invasive and extensive. Luckily it
shouldn't be too hard to review. Note this does not add the column ids;
that will be done in a follow-up PR.
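As a rough illustration of the bidirectional lookup, here is a minimal sketch with hypothetical names and fields, not the actual `Catalog` internals:

```rust
use std::collections::HashMap;
use std::sync::Arc;

#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
struct DbId(u32);

// Hypothetical sketch: the real Catalog holds much more state than this.
#[derive(Default)]
struct DbMapping {
    name_to_id: HashMap<Arc<str>, DbId>,
    id_to_name: HashMap<DbId, Arc<str>>,
}

impl DbMapping {
    fn insert(&mut self, id: DbId, name: Arc<str>) {
        self.name_to_id.insert(Arc::clone(&name), id);
        self.id_to_name.insert(id, name);
    }

    // name -> id, used when external input (e.g. line protocol) arrives.
    fn id_for(&self, name: &str) -> Option<DbId> {
        self.name_to_id.get(name).copied()
    }

    // id -> name, used when internal systems need a display name.
    fn name_for(&self, id: DbId) -> Option<Arc<str>> {
        self.id_to_name.get(&id).cloned()
    }
}
```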
Closes #25375, closes #25403, closes #25404, closes #25405, closes #25412, closes #25413
This adds a watch channel to check for when prunes have happened on
the parquet cache oracle, so we can notify something, like a test, that
needs to know when a prune has happened.
This should keep the cache eviction test in the parquet_cache module
from flaking.
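A minimal sketch of how such a notification can work with a `tokio::sync::watch` channel; the names here are hypothetical, not the oracle's actual API:

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    // The prune task holds the sender; tests hold the receiver.
    let (prune_notifier, mut prune_watcher) = watch::channel(0_usize);

    tokio::spawn(async move {
        // ... prune the cache ...
        // After entries are evicted, notify any watchers:
        prune_notifier.send_modify(|prunes| *prunes += 1);
    });

    // In a test: block until at least one prune has happened.
    prune_watcher.changed().await.unwrap();
    assert_eq!(*prune_watcher.borrow(), 1);
}
```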
- added a mechanism within PersistedFile to expose parquet file related
metrics (sketched below). The details are updated when a new snapshot is
generated, and also when all snapshots are loaded at process startup
- when the telemetry payload is created, these parquet metrics are looked
up before it is sent to the server.
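A hand-wavy sketch of that shape, with hypothetical names and fields (the real PersistedFile tracking is richer):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Default)]
struct ParquetMetrics {
    file_count: AtomicU64,
    size_bytes: AtomicU64,
    row_count: AtomicU64,
}

impl ParquetMetrics {
    // Called when a new snapshot adds a parquet file, and when all
    // snapshots are loaded at startup.
    fn record_file(&self, size_bytes: u64, row_count: u64) {
        self.file_count.fetch_add(1, Ordering::Relaxed);
        self.size_bytes.fetch_add(size_bytes, Ordering::Relaxed);
        self.row_count.fetch_add(row_count, Ordering::Relaxed);
    }

    // Called when building the telemetry payload.
    fn snapshot(&self) -> (u64, u64, u64) {
        (
            self.file_count.load(Ordering::Relaxed),
            self.size_bytes.load(Ordering::Relaxed),
            self.row_count.load(Ordering::Relaxed),
        )
    }
}
```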
Closes: https://github.com/influxdata/influxdb/issues/25418
This adds a new crate, `influxdb3_test_helpers`, which provides two object
store helper types: one tracks request counts made through the store, and
the other synchronizes requests made through the store.
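To illustrate the counting idea only (a sketch under assumed names; the real helper wraps an `Arc<dyn ObjectStore>` and implements the full `ObjectStore` trait):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Sketch of the counting idea only; the companion helper type
// synchronizes in-flight requests instead of counting them.
struct RequestCountedStore<S> {
    inner: S,
    get_count: AtomicUsize,
}

impl<S> RequestCountedStore<S> {
    fn new(inner: S) -> Self {
        Self {
            inner,
            get_count: AtomicUsize::new(0),
        }
    }

    // Each GET-style call through the wrapper bumps the counter before
    // delegating to `self.inner`.
    fn record_get(&self) {
        self.get_count.fetch_add(1, Ordering::SeqCst);
    }

    // Tests assert on this to verify whether the object store was hit.
    fn total_gets(&self) -> usize {
        self.get_count.load(Ordering::SeqCst)
    }
}
```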
We will need to wait on the RUSTSEC advisory to be resolved upstream,
i.e., by having tonic and hyper upgraded in core, before we can lift this
advisory ignore and use the latest versions of those crates.
- instrumented code to get read and write measurements
- introduced EventsBucket for collecting reads/writes (sketched below)
- sampler now samples every minute for all metrics (including
reads/writes)
- other tidy-ups
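A rough sketch of the collection idea, with hypothetical names and fields:

```rust
#[derive(Default)]
struct EventsBucket {
    num_writes: u64,
    lines_written: u64,
    num_reads: u64,
}

impl EventsBucket {
    fn record_write(&mut self, lines: u64) {
        self.num_writes += 1;
        self.lines_written += lines;
    }

    fn record_read(&mut self) {
        self.num_reads += 1;
    }

    // Called by the sampler once a minute: returns the collected
    // events and resets the bucket for the next interval.
    fn sample(&mut self) -> EventsBucket {
        std::mem::take(self)
    }
}
```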
closes: https://github.com/influxdata/influxdb/issues/25372
* test: check parquet cache in the write buffer
Checked that the parquet cache will serve queries when chunks are
requested from the write buffer. The added test also checks for get_range
requests made to the object store, which are typically made by DataFusion
to infer schema for parquet files.
* refactor: make parquet cache optional on write buffer
* test: add test to verify parquet cache function
This makes the parquet cache optional at the write buffer level, and adds
a test verifying that the cache serves requests and prevents them from
reaching the object store on a cache hit.
Closes #25382, closes #25383
This refactors the parquet cache to use less locking by switching from using the `clru` crate to a hand-rolled cache implementation. The new cache still acts as an LRU, but it uses atomics to track hit-time per entry, and handles pruning in a separate process that is decoupled from insertion/gets to the cache.
The `Cache` type uses a [`DashMap`](https://docs.rs/dashmap/latest/dashmap/struct.DashMap.html) internally to store cache entries. This should help reduce lock contention, and also has the added benefit of not requiring mutability to insert into _or_ get from the map.
The cache maps an `object_store::Path` to a `CacheEntry`. On a hit, an entry has its `hit_time` (an `AtomicI64`) updated. During a prune operation, the entries with the oldest hit times are removed from the cache. See the `Cache::prune` method for details.
The cache is set up with a memory _capacity_ and a _prune percent_. The cache tracks memory used when entries are added, based on their _size_, and when a prune is invoked in the background, if the cache has exceeded its capacity, it will prune `prune_percent * cache.len()` entries from the cache.
Two tests were added:
* `cache_evicts_lru_when_full` to check LRU behaviour of the cache
* `cache_hit_while_fetching` to check that a cache entry hit while a request is in flight to fetch that entry will not result in extra calls to the underlying object store
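For reference, a simplified sketch of this design; the shape follows the description above, but the details are illustrative (e.g., `String` keys instead of `object_store::Path`) and not the actual implementation:

```rust
use std::sync::atomic::{AtomicI64, AtomicUsize, Ordering};

use bytes::Bytes;
use dashmap::DashMap;

struct CacheEntry {
    data: Bytes,
    // Time of the last hit, as nanoseconds since the epoch.
    hit_time: AtomicI64,
}

struct Cache {
    map: DashMap<String, CacheEntry>,
    used: AtomicUsize,
    capacity: usize,
    prune_percent: f64,
}

impl Cache {
    fn insert(&self, path: String, data: Bytes, now_ns: i64) {
        self.used.fetch_add(data.len(), Ordering::Relaxed);
        self.map
            .insert(path, CacheEntry { data, hit_time: AtomicI64::new(now_ns) });
    }

    // A get does not need `&mut self`: DashMap handles interior
    // mutability, and the hit time is an atomic.
    fn get(&self, path: &str, now_ns: i64) -> Option<Bytes> {
        let entry = self.map.get(path)?;
        entry.hit_time.store(now_ns, Ordering::Relaxed);
        Some(entry.data.clone())
    }

    // Runs in a background task, decoupled from inserts/gets.
    fn prune(&self) {
        if self.used.load(Ordering::Relaxed) <= self.capacity {
            return;
        }
        // Collect (path, hit_time) pairs and evict the oldest ones.
        let mut entries: Vec<(String, i64)> = self
            .map
            .iter()
            .map(|e| (e.key().clone(), e.value().hit_time.load(Ordering::Relaxed)))
            .collect();
        entries.sort_by_key(|(_, hit_time)| *hit_time);
        let n_to_prune = (entries.len() as f64 * self.prune_percent) as usize;
        for (path, _) in entries.into_iter().take(n_to_prune) {
            if let Some((_, removed)) = self.map.remove(&path) {
                self.used.fetch_sub(removed.data.len(), Ordering::Relaxed);
            }
        }
    }
}
```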
- moved the number of samples for cpu/mem from u64 to usize
- added a generic float function to round to 2 decimal places (see the
sketch below)
- removed an unnecessary cast of f32 to f64
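A minimal sketch of the rounding helper, shown non-generically for brevity (the actual helper is generic over float types):

```rust
fn round_2(value: f64) -> f64 {
    (value * 100.0).round() / 100.0
}
// round_2(0.123_456) == 0.12
```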
Part of #25347
This sets up a new implementation of an in-memory parquet file cache in the `influxdb3_write` crate in the `parquet_cache.rs` module.
This module introduces the following types:
* `MemCachedObjectStore` - a wrapper around an `Arc<dyn ObjectStore>` that can serve GET-style requests to the store from an in-memory cache
* `ParquetCacheOracle` - an interface (trait) that can accept requests to create new cache entries in the cache used by the `MemCachedObjectStore`
* `MemCacheOracle` - implementation of the `ParquetCacheOracle` trait
## `MemCachedObjectStore`
This takes inspiration from the [`MemCacheObjectStore` type](1eaa4ed5ea/object_store_mem_cache/src/store.rs (L205-L213)) in core, but has some different semantics around its implementation of the `ObjectStore` trait, and uses a different cache implementation.
The reason for wrapping the object store is that this ensures that any GET-style request being made for a given object is served by the cache, e.g., metadata requests made by DataFusion.
The internal cache comes from the [`clru` crate](https://crates.io/crates/clru), which provides a least-recently used (LRU) cache implementation that allows for weighted entries. The cache is initialized with a capacity and entries are given a weight on insert to the cache that represents how much of the allotted capacity they will take up. If there isn't enough room for a new entry on insert, then the LRU item will be removed.
### Limitations of `clru`
The `clru` crate conveniently gives us an LRU eviction policy but its API may put some limitations on the system:
* gets to the cache require an `&mut` reference, which means that the cache needs to be behind a `Mutex`. If this slows down requests through the object store, then we may need to explore alternatives.
* we may want more sophisticated eviction policies than a straight LRU, i.e., to favour certain tables over others, or files that represent recent data over those that represent old data.
## `ParquetCacheOracle` / `MemCacheOracle`
The cache oracle is responsible for handling cache requests, i.e., to fetch an item and store it in the cache. In this PR, the oracle runs a background task to handle these requests. I defined this as a trait/struct pair since the implementation may look different in Pro vs. OSS.
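A rough sketch of what that pair can look like; the signatures and channel details here are assumptions, not the PR's actual API:

```rust
use tokio::sync::mpsc;

trait ParquetCacheOracle: Send + Sync {
    // Ask the oracle to fetch the object at `path` and cache it.
    fn register(&self, path: String);
}

struct MemCacheOracle {
    tx: mpsc::Sender<String>,
}

impl MemCacheOracle {
    fn new() -> Self {
        let (tx, mut rx) = mpsc::channel::<String>(100);
        // Background task that services cache requests.
        tokio::spawn(async move {
            while let Some(path) = rx.recv().await {
                // ... fetch `path` from the object store and insert it
                // into the cache used by MemCachedObjectStore ...
                let _ = path;
            }
        });
        Self { tx }
    }
}

impl ParquetCacheOracle for MemCacheOracle {
    fn register(&self, path: String) {
        // try_send so registration never blocks the caller.
        let _ = self.tx.try_send(path);
    }
}
```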
* feat: Remove lock for FileId tests
Since we now use cargo-nextest in CI, we can remove the locks the
FileId tests used to guard against race conditions.
* feat: Add u32 ID for Databases
This commit adds a new DbId for databases. It also updates paths to use
that id as part of the name. When starting up the WriteBuffer we apply
the DbId from the persisted snapshot, much like we do for ParquetFileIds.
This introduces the influxdb3_id crate to avoid circular deps with ids.
The ParquetFileId should also be moved into this crate, but that's
outside the scope of this change.
Closes #25301
- uses Arc<str> to represent create-once, read-everywhere strings
- updated snapshots for insta asserts; uses redaction to hardcode
randomly generated UUID strings (sketched below)
- added methods to the catalog to expose instance and host ids
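A hedged sketch of the redaction approach using insta's filters (this assumes insta's `filters` feature; the names and values are illustrative):

```rust
#[test]
fn catalog_snapshot() {
    // Redact randomly generated UUIDs so snapshots stay stable.
    let mut settings = insta::Settings::clone_current();
    settings.add_filter(
        r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
        "[uuid]",
    );
    settings.bind(|| {
        // Illustrative value; any UUID in it becomes "[uuid]".
        insta::assert_snapshot!("instance-id: 3f8b2a10-9c4d-4e6a-8b2f-1d2c3e4f5a6b");
    });
}
```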
Closes: https://github.com/influxdata/influxdb/issues/25315
This changes our CI to use cargo-nextest, which is faster and does not
have issues around global statics. Since it runs each test in its
own process, we don't have to worry about tests stepping on each other's
toes in this regard. It also updates the CI to ignore the current
cargo deny failure, as we can't do anything until the arrow crates are
upgraded.
This makes some changes to the TestServer E2E framework, which is used
for running integration tests in the influxdb3 crate. These changes are
meant so that we can more easily split the code for pro.
This commit changes the write_buffer tests to acquire a lock so that
in tests which need access to NEXT_FILE_ID, it won't be overwritten,
since Rust tests run in one process and share the same statics.
While this isn't a problem for Edge, which runs as a single process,
it is for our tests. It's a bit unfortunate, but this solution is the
easiest, and the locks are not held for long, so there's no real
impact on running these tests.
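The approach, sketched with illustrative names (the real tests guard the actual `NEXT_FILE_ID` static):

```rust
use std::sync::Mutex;

// Process-wide lock that serializes tests touching the shared static.
static TEST_LOCK: Mutex<()> = Mutex::new(());

#[test]
fn write_buffer_test_needing_file_ids() {
    let _guard = TEST_LOCK.lock().unwrap();
    // ... test body that depends on NEXT_FILE_ID not changing underneath ...
}
```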
Closes #25286
This applies some needed updates downstream in Pro. Namely,
* visibility changes that allow types to be used in the pro buffer
* allow parsing a WAL file sequence number from a file path
* remove duplicates when adding parquet files to a persisted files list
* test: repro for dropped wal files during snapshot
This commit provides a reproducer for an issue in the snapshotting process
whereby WAL files are removed for writes that have not been persisted yet.
* fix: do not snapshot most recent WAL period
This addresses #25277
Snapshots that are triggered when the number of WAL periods in the
tracker grows to >= 3x the snapshot size will no longer include the most
recent WAL period; including it was removing WAL files containing data
that was not yet persisted.
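A simplified sketch of the fixed selection logic, under the assumption that tracked periods are ordered oldest-to-newest (types and names here are hypothetical):

```rust
struct WalPeriod {
    // ... wal file sequence number, min/max times, etc. ...
}

fn periods_to_snapshot(mut periods: Vec<WalPeriod>, snapshot_size: usize) -> Vec<WalPeriod> {
    if periods.len() >= 3 * snapshot_size {
        // Never snapshot the most recent WAL period: its data may not
        // have been persisted yet.
        periods.pop();
        periods
    } else {
        Vec::new()
    }
}
```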
* docs: add doc comment to reproducer test
* fix: broken parquet_files system table test
* fix: broken snapshot_tracker test
* fix: broken write_buffer test
* refactor: remove redundant helper function
* test: add another snapshot test to write_buffer
* test: future writes do not get dropped on restart
* refactor: add catalog as dep to influxdb3
* refactor: move catalog and last cache initialization out of write buffer
The write buffer used to handle initialization of the catalog and
last-N-value cache. This commit moves that logic out, so that both can be
initialized independently, and injected into the write buffer. This is to
enable downstream changes that will need to make sharing the catalog and
last cache possible.
* refactor: use dyn traits in WriteBufferImpl
This changes the WriteBufferImpl to use a dyn TimeProvider instead of
a generic in its type signature.
The Server type now uses a dyn WriteBuffer instead of using a generic
in its type signature, and the ServerBuilder was updated to accommodate
this accordingly.
These changes were made to make downstream code changes more seamless.
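Illustrative only, the shape of the signature change (with a stub trait standing in for the real `TimeProvider`):

```rust
use std::sync::Arc;

// Stub standing in for the real trait, just to show the shape.
trait TimeProvider: Send + Sync {}

// Before: generic over the time provider, which leaks the type
// parameter into every user of the write buffer.
struct GenericWriteBuffer<T: TimeProvider> {
    time_provider: Arc<T>,
}

// After: a trait object, so Server can hold the write buffer without
// carrying type parameters in its own signature.
struct WriteBufferImpl {
    time_provider: Arc<dyn TimeProvider>,
}
```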
* refactor: make some items pub
This makes functions on the QueryableBuffer and LastCache pub so that they
can be used downstream.
The Persister trait was only implemented by a single type; because the
underlying ObjectStore interface has several ways of being mocked, we
mock that instead of the Persister interface.
This commit removes the Persister trait, and moves its interface/impl
directly on a single Persister type in the persister module of the
influxdb3_write crate.
deny.toml had some incorrect field names in license.exceptions; those
were fixed from 'crate' to 'name'.
* feat: Add u64 id field to ParquetFiles
This commit does a few things:
1. It adds a u64 id field to ParquetFile
2. It gets this from an AtomicU64 so they can always use the most up to
date one across threads
3. The last available file id is persisted as part of our snapshot
process
4. The snapshot when loaded will set the AtomicU64 to the proper value
With this, current work on the FileIndex in our Pro version will be able
to utilize these ids while doing compaction, and we can refer to files
with a unique u64.
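A minimal sketch of that scheme, with hypothetical names (the real type lives alongside the snapshot code):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static NEXT_FILE_ID: AtomicU64 = AtomicU64::new(0);

#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
struct ParquetFileId(u64);

impl ParquetFileId {
    // Ids come from an atomic, so every thread sees the most
    // up-to-date value.
    fn new() -> Self {
        Self(NEXT_FILE_ID.fetch_add(1, Ordering::SeqCst))
    }

    // Restore the counter when a persisted snapshot is loaded.
    fn set_next_id(next: u64) {
        NEXT_FILE_ID.store(next, Ordering::SeqCst);
    }
}
```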
Closes #25250
* feat: Catalog apply_catalog_batch only updates if new
This updates the Catalog so that when applying a catalog batch, it only updates the inner catalog and bumps the sequence number and updated tracker if there are new updates in the batch. It also validates that the catalog batch schema is compatible with any existing schema.
Closes #25205
* feat: only persist catalog when updated (#25238)
* feat: Add last cache create/delete to WAL
This moves the LastCacheDefinition into the WAL so that it can be serialized there. This ended up being a pretty large refactor to get the last cache creation to work through the WAL.
I think I also stumbled on a bug where the last cache wasn't getting initialized from the catalog on reboot, so it wouldn't actually end up caching values. The refactored last cache persistence test in write_buffer/mod.rs surfaced this.
Finally, I also had to update the WAL so that it would persist if there were only catalog updates and no writes.
Fixes #25203
* fix: typos
* refactor: Make Level0Duration part of WAL
I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.
As an added bonus, this surfaced a bug where persisting multiple L0 blocks in the same snapshot wasn't supported. So now snapshot details can have many files per table.
* fix: have persisted files always return in descending data time order
* fix: sort record batches for test verification
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.
Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.
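For example (with a hypothetical table name), a valid query looks like:

```rust
// Hypothetical usage: a table_name predicate is required, so queries
// without the WHERE clause below are rejected.
const QUERY: &str = "SELECT * FROM system.parquet_files WHERE table_name = 'cpu'";
```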
The files are accessed through the QueryableBuffer.
In addition, a test was added to check success and failure modes of the
new system table query.
Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).
The persister retains the host identifier prefix, and uses it when
constructing paths.
The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.
The influxdb3 binary requires a new 'host-id' argument, which is used to
specify the prefix.
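A rough sketch of prefixed path construction; the exact segment layout is an assumption, not the persister's actual scheme:

```rust
use object_store::path::Path;

// Every persisted path is rooted at the host identifier prefix.
fn parquet_file_path(host_id: &str, db_name: &str, table_name: &str, file_name: &str) -> Path {
    Path::from(format!("{host_id}/dbs/{db_name}/{table_name}/{file_name}"))
}
// e.g. parquet_file_path("my-host", "foo", "cpu", "0.parquet")
//      => "my-host/dbs/foo/cpu/0.parquet"
```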
* fix: query bugs with buffer
This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.
I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining different states: only data in the buffer, only data in parquet files, and data across both. This surfaced two bugs: one where the parquet data was being doubled up (parquet chunks were being created both in the write buffer module and in the queryable buffer), and a second where the timestamp min/max on the table buffer would panic if the buffer was empty.
* refactor: PR feedback
* fix: fix wal replay and buffer snapshot
Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay WAL data and that snapshots work properly with replayed data.
* fix: run cargo update to fix audit