Added prometheus metrics to track lines written and bytes written per
database. The write buffer does the tracking after validation of incoming
line protocol.
Tests added to verify.
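A minimal sketch of the idea, assuming hypothetical names (the real code registers these with the metrics registry so they are exported as prometheus counters):
```rust
// Hypothetical illustration: per-database counters that the write buffer
// bumps only after the incoming line protocol has been validated.
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Default)]
struct WriteMetrics {
    // database name -> (lines written, bytes written)
    per_db: Mutex<HashMap<String, (u64, u64)>>,
}

impl WriteMetrics {
    fn record_write(&self, db: &str, lines: u64, bytes: u64) {
        let mut map = self.per_db.lock().expect("metrics lock poisoned");
        let counters = map.entry(db.to_string()).or_insert((0, 0));
        counters.0 += lines;
        counters.1 += bytes;
    }
}
```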
* feat: create DB and Tables via REST and CLI
This commit does a few things:
1. It brings the database command naming scheme for types in line with
the rest of the CLI types
2. It brings the table command naming scheme for types in line with
the rest of the CLI types
3. Adds tests to check that the maximum number of databases is not exceeded
and that you cannot create more than one database with a given name.
4. Adds tests to check that you can create a table, put data into it,
and query it
5. Adds tests for the CLI for both the database and table commands
6. It creates an endpoint to create databases given a JSON blob
7. It creates an endpoint to create tables given a JSON blob
With this users can now create a database or table without first needing
to write to the database via the line protocol!
Closes #25640. Closes #25641.
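As a rough sketch, the JSON bodies for these endpoints could be modeled as serde structs like the following; the field names and shapes here are illustrative assumptions, not the documented wire format:
```rust
// Hypothetical request payloads for the create-database / create-table
// endpoints; names and fields are assumptions for illustration only.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct CreateDatabaseRequest {
    db: String,
}

#[derive(Serialize, Deserialize)]
struct CreateTableRequest {
    db: String,
    table: String,
    tags: Vec<String>,
    fields: Vec<FieldSpec>,
}

#[derive(Serialize, Deserialize)]
struct FieldSpec {
    name: String,
    // e.g. "int64", "float64", "utf8" -- illustrative type names
    r#type: String,
}
```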
* fix: Ensure tags are never null
This injects empty strings into tags for any rows in the buffer where the tag value is null. This is required because the tags are what make up the series key, which must have all non-null values.
There is an ongoing discussion about what the real behavior should be here, but for now this unblocks users whose workloads break without this behavior. Discussion is in #25674.
Fixes #25648
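A minimal illustration of the fix, using a plain `Vec<Option<String>>` as a stand-in for a buffered tag column (the real buffer works on its own column types):
```rust
// Tags make up the series key, so a null tag value in a buffered row is
// replaced with an empty string before the series key is built.
fn backfill_null_tags(tag_column: &mut Vec<Option<String>>) {
    for value in tag_column.iter_mut() {
        if value.is_none() {
            *value = Some(String::new());
        }
    }
}
```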
* fix: clippy failures
This adds some error handling and logging around the method that sorts,
deduplicates, and persists parquet data during the snapshot process
The errors will need to be handled in follow-on work, but this is for
helping debug fatal errors during the process.
* Move processing engine invocation to a separate tokio task.
* Support writing back line protocol from python via insert_line_protocol().
* Update structs to work with bincode.
Fixes a bug in the queryable buffer where, if a block of data was missing one of the columns defined in a table sort key, the creation of the logical plan to sort and dedupe the data would fail, causing a panic.
Fixes #25670
* feat: add startup time to logging output
This change adds a startup time counter to the output when starting up
a server. The main purpose of this is to verify whether changes actually
speed up the loading of the server.
* feat: Significantly decrease startup times for WAL
This commit does a few important things to speedup startup times:
1. We avoid converting an Arc<str> to a String for the series key, as the
From<String> impl will call with_column, which will then turn it back
into an Arc<str>. Instead we can just call `with_column` directly
and pass in the iterator without also collecting into a Vec<String>
2. We switch to using bitcode as the serialization format for the WAL.
This significantly reduces startup time, as this format is much faster
to read and write than the JSON we were using, which was eating up
massive amounts of time. Part of this change involves not using the
tag feature of serde, as it's currently not supported by bitcode
3. We also parallelize reading and deserializing the WAL files before
applying them in order. This reduces time spent waiting on IO, and we
eagerly evaluate each spawned task in order as much as possible
(a sketch of this pattern follows below).
This gives us about a 189% speedup over what we were doing before.
Closes #25534
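A rough sketch (not the actual WAL code) of the parallel-read, in-order-apply pattern from point 3, with a hypothetical `apply_wal_contents` step standing in for deserialization and buffering:
```rust
use tokio::task::JoinHandle;

async fn replay_wal(paths: Vec<String>) -> Result<(), Box<dyn std::error::Error>> {
    // Spawn all reads up front so file IO and deserialization happen concurrently.
    let handles: Vec<JoinHandle<Result<Vec<u8>, std::io::Error>>> = paths
        .into_iter()
        .map(|path| tokio::spawn(async move { tokio::fs::read(path).await }))
        .collect();

    // Await the handles in the original file order so WAL contents are
    // applied sequentially, even though the work ran concurrently.
    for handle in handles {
        let contents = handle.await??;
        apply_wal_contents(contents);
    }
    Ok(())
}

fn apply_wal_contents(_contents: Vec<u8>) {
    // hypothetical: deserialize the WAL file and apply it to the buffer
}
```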
* feat: parquet cache metrics
* feat: track parquet cache metrics
Adds metrics to track the following in the in-memory parquet cache:
* cache size in bytes (also included a fix in the calculation of that)
* cache size in n files
* cache hits
* cache misses
* cache misses while the oracle is fetching a file
A test was added to check this functionality
* refactor: clean up logic and fix cache removal tracking error
Some logic and naming was cleaned up and the boolean to optionally track
metrics on entry removal was removed, as it was incorrect in the first place:
a fetching entry still has a size, which counts toward the size of the
cache. So, this makes it such that anytime an entry is removed, whether
its state is success or fetching, its size will be decremented from
the cache size metrics.
The sizing calculations were made to be correct, and the cache metrics
test was updated with more thorough assertions
Moved all of the last cache implementation into the `influxdb3_cache`
crate. This also splits out the implementation into three modules:
- `cache.rs`: the core cache implementation
- `provider.rs`: the cache provider used by the database to hold multiple
caches.
- `table_function.rs`: same as before, holds the DataFusion impls
Tests were preserved and moved to `mod.rs`, however, they were updated to
not rely on the WriteBuffer implementation, and instead use the types in
the `influxdb3_cache::last_cache` module directly. This simplified the
test code, while not changing any of the test assertions at all.
This commit does three important major changes:
1. We will deny writes to the v1, v2, and v3 write apis that add new tags in
subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sorting order by the series key, which is the order the keys came in
With these changes we have consistency across the various write apis and can
make optimizations and future features with the assumption we have a series key.
Closes #25585
This adds the MetaDataCacheProvider for managing metadata caches in the
influxdb3 instance. This includes APIs to create caches through the WAL
as well as from a catalog on initialization, to write data into the
managed caches, and to query data out of them.
The query side is fairly involved, relying on Datafusion's TableFunctionImpl
and TableProvider traits to make querying the cache using a user-defined
table function (UDTF) possible.
The predicate code was modified to only support two kinds of predicates:
IN and NOT IN, which simplifies the code, and maps nicely with the DataFusion
LiteralGuarantee which we leverage to derive the predicates from the
incoming queries.
A custom ExecutionPlan implementation was added specifically for the
metadata cache that can report the predicates that are pushed down to
the cache during query planning/execution.
A big set of tests was added to check that queries are working, and
that predicates are being pushed down properly.
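A hypothetical illustration of the simplified predicate model (the real cache evaluates these against its own column and value types):
```rust
use std::collections::HashSet;

// Only two predicate kinds are supported, which keeps evaluation simple and
// maps naturally onto DataFusion's LiteralGuarantee.
enum Predicate {
    In(HashSet<String>),
    NotIn(HashSet<String>),
}

impl Predicate {
    fn matches(&self, value: &str) -> bool {
        match self {
            Predicate::In(set) => set.contains(value),
            Predicate::NotIn(set) => !set.contains(value),
        }
    }
}
```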
This commit allows soft deleting a table. For a user, the following
command will soft delete a table (bar) in db (foo):
```
influxdb3 table delete --dbname foo --table bar --host $host
```
- Added `soft_delete_table` to the `DatabaseManager` trait, which already
hosts the `soft_delete_database` method. The code roughly follows the same
flow as db delete. As with the db schema, the table definition is behind
an Arc, so the change uses `Arc::make_mut` to clone on write (see the
short sketch after this list).
- Moved db delete related cli parser under "manage" module that has both
db and table delete functionality
- Some minor tidy-ups (removing unused methods, renaming a method so that
the order in the name matches the actual return type, e.g. `table_id_and_schema`
should return (id, schema) and not (schema, id))
closes: https://github.com/influxdata/influxdb/issues/25561
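A short sketch of the `Arc::make_mut` copy-on-write pattern mentioned above, with a simplified stand-in for the table definition:
```rust
use std::sync::Arc;

#[derive(Clone)]
struct TableDefinition {
    name: String,
    deleted: bool,
}

fn soft_delete_table(table: &mut Arc<TableDefinition>) {
    // Clones the inner TableDefinition only if the Arc is shared elsewhere;
    // otherwise it mutates in place.
    Arc::make_mut(table).deleted = true;
}
```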
This commit changes the code so that we only keep the 10 most recent
Catalogs. When a new one is persisted we delete any old ones that
exist. If a deletion fails we don't panic; we let a future persist
clean up the catalogs rather than failing the persist itself.
This commit also adds a test to make sure that only the catalogs we
expect to are deleted on persist.
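A sketch of the retention logic, under the assumption that catalog paths sort oldest-to-newest (names are illustrative); deletion of the returned paths would then be attempted best-effort, with failures logged rather than failing the persist:
```rust
const CATALOGS_TO_KEEP: usize = 10;

// Given all persisted catalog paths, return the ones to delete,
// keeping only the most recent CATALOGS_TO_KEEP.
fn catalogs_to_delete(mut catalog_paths: Vec<String>) -> Vec<String> {
    catalog_paths.sort();
    if catalog_paths.len() <= CATALOGS_TO_KEEP {
        return Vec::new();
    }
    let n_to_delete = catalog_paths.len() - CATALOGS_TO_KEEP;
    catalog_paths.drain(..n_to_delete).collect()
}
```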
* feat: drop/delete database
This commit allows soft deletion of database using `influxdb3 database
delete <db_name>` command. The write buffer and last value cache are
cleared as well.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: reuse same code path when deleting database
- In the previous commit, deleting a database immediately triggered
clearing the last cache and the query buffer. But on restart the same
logic had to be repeated to handle databases deleted while starting up.
This commit removes the immediate, explicit calls and moves the logic
into `apply_catalog_batch`, which already applies `CatalogOp`s, and
clears the cache and buffer in the `buffer_ops` method, which has hooks
to call other places.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: use reqwest query api for query param
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: include deleted flag in DatabaseSnapshot
- `DatabaseSchema` serialization/deserialization is delegated to
`DatabaseSnapshot`, so the `deleted` flag should be included in
`DatabaseSnapshot` as well.
- insta test snapshots fixed
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: address PR comments + tidy ups
---------
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: core metadata cache structs with basic tests
Implement the base MetaCache type that holds the hierarchical structure
of the metadata cache providing methods to create and push rows from the
WAL into the cache.
Added a prune method as well as a method for gathering record batches
from a meta cache. A test was added to check the latter for various
predicates and that the former works; however, pruning showed that we need
to modify how record batches are produced so that expired entries are
not emitted.
* refactor: filter expired entries and do some clean up in the meta cache
* refactor: use SerdeVecMap in PersistedSnapshot
This changes from the use of a HashMap to store the DB -> Table structure
in the PersistedSnapshot files to using a SerdeVecMap, which will have
the identifiers serialized as integers instead of strings.
* test: add a snapshot test for persisted snapshots
* chore: update core deps
- arrow/parquet deps are patched (as in core)
- three specific code changes to cope with changes in core crates
- TransitionPartitionId, use `from_parts` instead of `new`
- arrow buffers can take &[u8] directly without `to_vec()`/`vec!`
(used only in tests)
- `schema` and `influxdb_line_protocol` crates need `v3` feature enabled
* chore: update deny.toml
* chore: formatting and deny toml changes
The Unicode-3.0 license is added to the allowed licenses list; without it
we end up with 19 errors (`zerovec`, `zerovec-derive`, etc.)
* chore: address PR feedback
- move enabling v3 feature to root Cargo.toml
- added the upstream PR for datafusion-common that introduced RUSTSEC-2024-0384
Updates the catalog to use its own sequence number in the path. This will enable downstream Pro systems that pick up PersistedSnapshots to get the specific catalog that a snapshot is associated with since its sequence number is included.
Also updated the type to be CatalogSequenceNumber to make it more clear & readable when being used.
* refactor: make last cache eviction optional
This changes how the last cache is evicted. It will no longer run eviction
on writes to the cache; instead, there is an optional method to create a
last cache provider that will run eviction in a background task on a specified
interval.
Otherwise, when records are produced from the cache, only those that have
not expired will be produced.
This should reduce locks on the cache and hopefully improve performance.
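A minimal sketch of the background eviction task, assuming a hypothetical `evict_expired` method on the provider:
```rust
use std::sync::Arc;
use std::time::Duration;

struct LastCacheProvider;

impl LastCacheProvider {
    fn evict_expired(&self) {
        // walk the caches and drop entries that are past their TTL
    }
}

fn spawn_eviction_task(provider: Arc<LastCacheProvider>, every: Duration) {
    tokio::spawn(async move {
        let mut ticker = tokio::time::interval(every);
        loop {
            ticker.tick().await;
            provider.evict_expired();
        }
    });
}
```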
* feat: configurable last cache eviction interval
* docs: clean up var names, code docs, and comments
`cargo deny` was showing that no crate matched the advisory criteria for this [RUSTSEC advisory](https://rustsec.org/advisories/RUSTSEC-2024-0376.html), so this PR removes the ignore entry.
In addition, the `hashbrown` crate was causing a new audit failure, and updating it required that the `Zlib` license be added to our list of allowed licenses.
No issue for this, but it is blocking another PR at the moment (https://github.com/influxdata/influxdb/pull/25515).
Closes #25461
_Note: the first three commits on this PR are from https://github.com/influxdata/influxdb/pull/25492_
This PR makes the switch from using names for columns to the use of `ColumnId`s. Where column names are used, they are represented as `Arc<str>`. This impacts most components of the system, and the result is a fairly sizeable change set. The area where the most refactoring was needed was in the last-n-value cache.
One of the themes of this PR is to rely less on the arrow `Schema` for handling the column-level information, and tracking that info in our own `ColumnDefinition` type, which captures the `ColumnId`.
I will summarize the various changes in the PR below, and also leave some comments in-line in the PR.
## Switch to `u32` for `ColumnId`
The `ColumnId` now follows the `DbId` and `TableId`, and uses a globally unique `u32` to identify all columns in the database. This was a change from using a `u16` that was only unique within the column's table. This makes it easier to follow the patterns used for creating the other identifier types when dealing with columns, and should reduce the burden of having to manage the state of a table-scoped identifier.
## Changes in the WAL/Catalog
* `WriteBatch` now contains no names for tables or columns and purely uses IDs
* This PR relies on `IndexMap` for `_Id`-keyed maps so that the order of elements in the map is consistent. This has important implications, namely, that when iterating over an ID map, the elements therein will always be produced in the same order which allows us to make assertions on column order in a lot of our tests, and allows for the re-introduction of `insta` snapshots for serialization tests. This map type provides O(1) lookups, but also provides _fast_ iteration, which should help when serializing these maps in write batches to the WAL.
* Removed the need to serialize the bi-directional maps for `DatabaseSchema`/`TableDefinition` via use of `SerdeVecMap` (see comments in-line)
* The `tables` map in `DatabaseSchema` now stores an `Arc<TableDefinition>` so that the table definition can be shared around more easily. This meant that changes to tables in the catalog need to do a clone, but we were already having to do a clone for changes to the DB schema.
* Removal of the `TableSchema` type and consolidation of its parts/functions directly onto `TableDefinition`
* Added the `ColumnDefinition` type, which represents all we need to know about a column, and is used in place of the Arrow `Schema` for column-level meta-info. We were previously relying heavily on the `Schema` for iterating over columns, accessing data types, etc., but this gives us an API that we have more control over for our needs. The `Schema` is still held at the `TableDefinition` level, as it is needed for the query path, and is maintained to be consistent with what is contained in the `ColumnDefinition`s for a table.
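A rough sketch of the shape of such a `ColumnDefinition` (field names are approximations, not the actual definition):
```rust
use std::sync::Arc;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ColumnId(u32); // globally unique, like DbId and TableId

#[derive(Debug, Clone)]
enum ColumnType {
    Tag,
    Field,
    Time,
}

// Column-level metadata previously pulled from the Arrow Schema.
#[derive(Debug, Clone)]
struct ColumnDefinition {
    id: ColumnId,
    name: Arc<str>,
    column_type: ColumnType,
    nullable: bool,
}
```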
## Changes in the Last-N-Value Cache
* There is a bigger distinction between caches that have an explicit set of value columns, and those that accept new fields. The former should be more performant.
* The Arrow `Schema` is managed differently now: it used to be updated more than it needed to be, and now is only updated when a row with new fields is pushed to a cache that accepts new fields.
## Changes in the write-path
* When ingesting, during validation, field names are qualified to their associated column ID
This PR introduces a new type `SerdeVecHashMap` that can be used in places where we need a HashMap with the following properties:
1. When serialized, it is serialized as a list of key-value pairs, instead of a map
2. When deserialized, it assumes the serialization format from (1.) and deserializes from a list of key-value pairs to a map
3. Does not allow for duplicate keys on deserialization
This is useful in places where we need to create map types that map from an identifier (integer) to some value, and need to serialize that data. For example: in the WAL when serializing write batches, and in the catalog when serializing the database/table schema.
This PR refactors the code in `influxdb3_wal` and `influxdb3_catalog` to use the new type for maps that use `DbId` and `TableId` as the key. Follow on work can give the same treatment to `ColumnId` based maps once that is fully worked out.
## Explanation
If we have a `HashMap<u32, String>`, `serde_json` will serialize it in the following way:
```json
{"0": "foo", "1": "bar"}
```
i.e., the integer keys are serialized as strings, since JSON doesn't support any other type of key in maps.
`SerdeVecHashMap<u32, String>` will be serialized by `serde_json` in the following way:
```json
[[0, "foo"], [1, "bar"]]
```
and will deserialize from that vector-based structure back to the map. This allows serialization/deserialization to run directly off of the `HashMap`'s `Iterator`/`FromIterator` implementations.
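A minimal sketch of that idea (not the actual `SerdeVecHashMap` implementation): serialize the map as a sequence of key-value pairs and reject duplicate keys when deserializing.
```rust
use std::collections::HashMap;
use std::hash::Hash;

use serde::de::Error as _;
use serde::{Deserialize, Deserializer, Serialize, Serializer};

struct SerdeVecHashMap<K, V>(HashMap<K, V>);

impl<K: Serialize, V: Serialize> Serialize for SerdeVecHashMap<K, V> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Emits [[k, v], [k, v], ...] instead of {"k": v, ...}
        serializer.collect_seq(self.0.iter())
    }
}

impl<'de, K, V> Deserialize<'de> for SerdeVecHashMap<K, V>
where
    K: Deserialize<'de> + Eq + Hash,
    V: Deserialize<'de>,
{
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        let pairs: Vec<(K, V)> = Vec::deserialize(deserializer)?;
        let mut map = HashMap::with_capacity(pairs.len());
        for (k, v) in pairs {
            // Reject duplicate keys rather than silently overwriting.
            if map.insert(k, v).is_some() {
                return Err(D::Error::custom("duplicate key"));
            }
        }
        Ok(SerdeVecHashMap(map))
    }
}
```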
## The Controversial Part
One thing I also did in this PR was switch the catalog from using a `BTreeMap` for tables to using the new `HashMap` type. This breaks the deterministic ordering of the database schema's `tables` map and therefore wrecks the snapshot tests we were using. I had to comment those parts of their respective tests out, because there isn't an easy way to make the underlying hashmap have a deterministic ordering just in tests that I am aware of.
If we think that using `BTreeMap` in the catalog is okay over a `HashMap`, then I think it would be okay to roll a similar `SerdeVecBTreeMap` type specifically for the catalog. Coincidentally, this may actually be a good use case for [`indexmap`](https://docs.rs/indexmap/latest/indexmap/), since it holds supposedly similar lookup performance characteristics to hashmap, while preserving order and _having faster iteration_ which could be a win for WAL serialization speed. It also accepts different hashing algorithms so could be swapped in with FNV like `HashMap` can.
## Follow-up work
Use the `SerdeVecHashMap` for column data in the WAL following https://github.com/influxdata/influxdb/issues/25461
* refactor: roll back addition of DatabaseSchemaProvider trait
* refactor: make parquet metrics optional in telemetry for pro
* refactor: make ParquetFileId Hash
* refactor: test harness logging
Separate out methods of the Catalog API that are used on the query side into a new trait `DatabaseSchemaProvider`. The new trait includes methods from the Catalog that get the underlying `DatabaseSchema` or interact with names/IDs.
This will allow for a separate implementation of the Catalog for pro that only needs to hold a replicated/combined view in-memory of one or more catalogs without the need to do persistence that a write buffer's catalog needs to do.
While in there I also switched the `QueryExecutorImpl::new` method to take an args struct to avoid the clippy lint.
* feat: Add non-unique u16 Id to ColumnDefinition
This commit adds the column_id field to ColumnDefinition so that the
output for a Catalog will contain the id of that column. This is
non-unique, whereas TableIds and DbIds will be unique. The column_id
corresponds to its index in the schema.
Closes #25386
When running the tests repeatedly, they failed intermittently because
the background runner wakes up to prune the cache while the tests
are loading and removing data. Checking whether `n_to_prune`
is greater than 0 before going ahead with pruning fixes the issue
Closes: https://github.com/influxdata/influxdb/issues/25446
* feat(circleci): add inclusivity checks
* chore(circleci): adjust package-validation for inclusive language
* chore: update tests for inclusive language
- Introduced the `ParquetMetrics` and `SystemInfoProvider` traits to make
tests easier to write
- Uses mockito for code that depends on reqwest::Client and also uses
mockall to generally mock any traits like `SystemInfoProvider`
- Minor updates to docs
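A hedged illustration of the mockall side of this; the trait and method names below are placeholders rather than the real `SystemInfoProvider` interface:
```rust
use mockall::automock;

#[automock]
trait SystemInfoProvider {
    fn num_cpus(&self) -> usize; // hypothetical method for illustration
}

#[test]
fn uses_mocked_system_info() {
    let mut provider = MockSystemInfoProvider::new();
    provider.expect_num_cpus().returning(|| 8);
    assert_eq!(provider.num_cpus(), 8);
}
```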
* feat: Add TableId and ColumnId
* feat: swap over to DbId and TableId everywhere
This commit swaps us over to using the DbId and TableId types everywhere
for our internal systems. Anywhere that's external facing, such as names
for last cache tables or line protocol parsing, use names. In these cases
we have the `Catalog` which keeps a map of TableIds and DbIds in a
bidirectional mapping for easy lookup i.e. id <-> names. While in essence
the change itself isn't that complicated given the nature of how much we
depended on names for things, the changes end up being quite invasive and
extensive. Luckily it shouldn't be too hard to review. Note this does
not add the column ids which will be done in a follow up PR.
Closes #25375. Closes #25403. Closes #25404. Closes #25405. Closes #25412. Closes #25413.
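A simplified sketch of the kind of bidirectional id <-> name lookup described above (not the actual `Catalog` internals):
```rust
use std::collections::HashMap;
use std::sync::Arc;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct DbId(u32);

#[derive(Default)]
struct DbIdMap {
    name_to_id: HashMap<Arc<str>, DbId>,
    id_to_name: HashMap<DbId, Arc<str>>,
}

impl DbIdMap {
    fn insert(&mut self, id: DbId, name: Arc<str>) {
        self.name_to_id.insert(Arc::clone(&name), id);
        self.id_to_name.insert(id, name);
    }

    // External-facing code (line protocol, cache names) looks up by name...
    fn id(&self, name: &str) -> Option<DbId> {
        self.name_to_id.get(name).copied()
    }

    // ...while internal systems resolve an id back to a name when needed.
    fn name(&self, id: DbId) -> Option<Arc<str>> {
        self.id_to_name.get(&id).cloned()
    }
}
```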
This adds a watch channel to check for when prunes have happened on
the parquet cache oracle, so we can notify something, like a test, that
needs to know when a prune has happened.
This should make the cache eviction test in the parquet_cache module
not flake out anymore.
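A small sketch of the watch-channel notification pattern, with illustrative names:
```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (prune_tx, mut prune_rx) = watch::channel(0u64);

    // Oracle side: after a prune completes, bump the counter.
    tokio::spawn(async move {
        // ... prune the cache ...
        prune_tx.send_modify(|n| *n += 1);
    });

    // Test side: wait until at least one prune has been observed.
    prune_rx.changed().await.expect("sender dropped");
    println!("prunes observed: {}", *prune_rx.borrow());
}
```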
- added a mechanism within PersistedFile to expose parquet file related
metrics. The details are updated when a new snapshot is generated and
also when all snapshots are loaded at process startup
- at the point of creating the telemetry payload, these parquet metrics
are looked up before sending the payload to the server.
Closes: https://github.com/influxdata/influxdb/issues/25418
This adds a new crate `influxdb3_test_helpers` which provides two object
store helper types that can be used to track request counts made through
the store, as well as synchronize requests made through the store, resp.
* test: check parquet cache in the write buffer
Checked that the parquet cache will serve queries when chunks are
requested from the write buffer. The added test also checks for get_range
requests made to the object store, which are typically made by DataFusion
to infer schema for parquet files.
* refactor: make parquet cache optional on write buffer
* test: add test to verify parquet cache function
This makes the parquet cache optional at the write buffer level, and adds
a test that verifies that the cache catches and prevents requests to the
object store in the event of a cache hit.
Closes #25382. Closes #25383.
This refactors the parquet cache to use less locking by switching from using the `clru` crate to a hand-rolled cache implementation. The new cache still acts as an LRU, but it uses atomics to track hit-time per entry, and handles pruning in a separate process that is decoupled from insertion/gets to the cache.
The `Cache` type uses a [`DashMap`](https://docs.rs/dashmap/latest/dashmap/struct.DashMap.html) internally to store cache entries. This should help reduce lock contention, and also has the added benefit of not requiring mutability to insert into _or_ get from the map.
The cache maps an `object_store::Path` to a `CacheEntry`. On a hit, an entry will have its `hit_time` (an `AtomicI64`) incremented. During a prune operation, entries that have the oldest hit times will be removed from the cache. See the `Cache::prune` method for details.
The cache is set up with a memory _capacity_ and a _prune percent_. The cache tracks memory used when entries are added, based on their _size_, and when a prune is invoked in the background, if the cache has exceeded its capacity, it will prune `prune_percent * cache.len()` entries from the cache.
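A condensed sketch of that structure (sizing details and the fetching state are omitted, and the names are illustrative rather than the real types):
```rust
use std::sync::atomic::{AtomicI64, Ordering};

use dashmap::DashMap;

struct CacheEntry {
    data: Vec<u8>,
    hit_time: AtomicI64, // e.g. nanoseconds since some epoch
}

struct Cache {
    map: DashMap<String, CacheEntry>,
    capacity_bytes: usize,
    prune_percent: f64,
}

impl Cache {
    // Gets do not require &mut self; a hit just updates the entry's atomic.
    fn get(&self, path: &str, now: i64) -> Option<Vec<u8>> {
        let entry = self.map.get(path)?;
        entry.hit_time.store(now, Ordering::Relaxed);
        Some(entry.data.clone())
    }

    // Run from a background task, decoupled from inserts and gets.
    fn prune(&self) {
        let used: usize = self.map.iter().map(|e| e.data.len()).sum();
        if used <= self.capacity_bytes {
            return;
        }
        // Collect (path, last hit) pairs and evict the oldest-hit slice.
        let mut entries: Vec<(String, i64)> = self
            .map
            .iter()
            .map(|e| (e.key().clone(), e.hit_time.load(Ordering::Relaxed)))
            .collect();
        entries.sort_by_key(|(_, hit)| *hit);
        let n_to_prune = (entries.len() as f64 * self.prune_percent) as usize;
        for (path, _) in entries.into_iter().take(n_to_prune) {
            self.map.remove(&path);
        }
    }
}
```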
Two tests were added:
* `cache_evicts_lru_when_full` to check LRU behaviour of the cache
* `cache_hit_while_fetching` to check that a cache entry hit while a request is in flight to fetch that entry will not result in extra calls to the underlying object store
Part of #25347
This sets up a new implementation of an in-memory parquet file cache in the `influxdb3_write` crate in the `parquet_cache.rs` module.
This module introduces the following types:
* `MemCachedObjectStore` - a wrapper around an `Arc<dyn ObjectStore>` that can serve GET-style requests to the store from an in-memory cache
* `ParquetCacheOracle` - an interface (trait) that can accept requests to create new cache entries in the cache used by the `MemCachedObjectStore`
* `MemCacheOracle` - implementation of the `ParquetCacheOracle` trait
## `MemCachedObjectStore`
This takes inspiration from the [`MemCacheObjectStore` type](1eaa4ed5ea/object_store_mem_cache/src/store.rs (L205-L213)) in core, but has some different semantics around its implementation of the `ObjectStore` trait, and uses a different cache implementation.
The reason for wrapping the object store is that this ensures that any GET-style request being made for a given object is served by the cache, e.g., metadata requests made by DataFusion.
The internal cache comes from the [`clru` crate](https://crates.io/crates/clru), which provides a least-recently used (LRU) cache implementation that allows for weighted entries. The cache is initialized with a capacity and entries are given a weight on insert to the cache that represents how much of the allotted capacity they will take up. If there isn't enough room for a new entry on insert, then the LRU item will be removed.
### Limitations of `clru`
The `clru` crate conveniently gives us an LRU eviction policy but its API may put some limitations on the system:
* gets to the cache require an `&mut` reference, which means that the cache needs to be behind a `Mutex`. If this slows down requests through the object store, then we may need to explore alternatives.
* we may want more sophisticated eviction policies than a straight LRU, i.e., to favour certain tables over others, or files that represent recent data over those that represent old data.
## `ParquetCacheOracle` / `MemCacheOracle`
The cache oracle is responsible for handling cache requests, i.e., to fetch an item and store it in the cache. In this PR, the oracle runs a background task to handle these requests. I defined this as a trait/struct pair since the implementation may look different in Pro vs. OSS.
* feat: Remove lock for FileId tests
Since we now are using cargo-nextest in CI we can remove
the locks used in the FileId tests to make sure that we
have no race conditions
* feat: Add u32 ID for Databases
This commit adds a new DbId for databases. It also updates paths to use
that id as part of the name. When starting up the WriteBuffer we apply
the DbId from the persisted snapshot much like we do for ParquetFileId's
This introduces the influxdb3_id crate to avoid circular deps with ids.
The ParquetFileId should also be moved into this crate, but it's
outside the scope of this change.
Closes #25301
- uses Arc<str> to represent create-once, read-everywhere strings
- updated snapshots for insta asserts, uses redaction to hardcode
randomly generated UUID strings
- added methods to catalog to expose instance and host ids
Closes: https://github.com/influxdata/influxdb/issues/25315
This commit changes the write_buffer tests to acquire a lock so that,
in tests that need access to NEXT_FILE_ID, it won't be overwritten,
since Rust tests run in one process and share the same statics.
While this isn't a problem for Edge as a singular process it is for
our tests. It's a bit unfortunate, but this solution is the easiest
and the locks are not held for long so there's no real big impact
on running these tests.
Closes #25286
This applies some needed updates downstream in Pro. Namely,
* visibility changes that allow types to be used in the pro buffer
* allow parsing a WAL file sequence number from a file path
* remove duplicates when adding parquet files to a persisted files list
* test: repro for dropped wal files during snapshot
This commit provides a reproducer for an issue in the snapshotting process
whereby WAL files are removed for writes that have not been persisted yet.
* fix: do not snapshot most recent WAL period
This addresses #25277
Snapshots that are triggered when the number of WAL periods in the
tracker grows to be >= 3x the snapshot size will not include the most
recent WAL period; previously, including it was removing WAL files
containing data that had not yet been persisted.
* docs: add doc comment to reproducer test
* fix: broken parquet_files system table test
* fix: broken snapshot_tracker test
* fix: broken write_buffer test
* refactor: remove redundant helper function
* test: add another snapshot test to write_buffer
* test: future writes do not get dropped on restart
* refactor: add catalog as dep to influxdb3
* refactor: move catalog and last cache initialization out of write buffer
The Write buffer used to handle initialization of the catalog and last
n value cache. This commit moves that logic out, so that both can be
initialized independently, and injected into the write buffer. This is to
enable downstream changes that will need to make sharing the catalog and
last cache possible.
* refactor: use dyn traits in WriteBufferImpl
This changes the WriteBufferImpl to use a dyn TimeProvider instead of
a generic in its type signature.
The Server type now uses a dyn WriteBuffer instead of using a generic
in its type signature, and the ServerBuilder was updated to accommodate
this accordingly.
These changes were made to make downstream code changes more seamless.
* refactor: make some items pub
This makes functions on the QueryableBuffer and LastCache pub so that they
can be used downstream.
The Persister trait was only implemented by a single type. Because the
underlying ObjectStore interface has several ways of being mocked, we
mock that instead of the Persister interface.
This commit removes the Persister trait, and moves its interface/impl
directly on a single Persister type in the persister module of the
influxdb3_write crate.
deny.toml had some incorrect field names in license.exceptions; those
were fixed from 'crate' to 'name'.
* feat: Add u64 id field to ParquetFiles
This commit does a few things:
1. It adds a u64 id field to ParquetFile
2. It gets this from an AtomicU64 so that all threads always use the most
up-to-date value
3. The last available file id is persisted as part of our snapshot
process
4. The snapshot when loaded will set the AtomicU64 to the proper value
With this current work on the FileIndex in our Pro version will be able
to utilize these ids while doing compaction and we can refer to them
with a unique u64.
Closes #25250
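A sketch of the id-generation pattern this describes; the shape mirrors the description, but the details are assumptions:
```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Process-wide counter that hands out the next file id.
static NEXT_FILE_ID: AtomicU64 = AtomicU64::new(0);

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ParquetFileId(u64);

impl ParquetFileId {
    fn new() -> Self {
        Self(NEXT_FILE_ID.fetch_add(1, Ordering::SeqCst))
    }

    // Called when a persisted snapshot is loaded on startup, so new files
    // never reuse an id from before the restart.
    fn set_next_id(next: u64) {
        NEXT_FILE_ID.store(next, Ordering::SeqCst);
    }
}
```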
* feat: Catalog apply_catalog_batch only updates if new
This updates the Catalog so that when applying a catalog batch it only updates the inner catalog and bumps the sequence number and updated tracker if there are new updates in the batch. Also validates that the catalog batch schema is compatible with any existing schema.
Closes #25205
* feat: only persist catalog when updated (#25238)
* feat: Add last cache create/delete to WAL
This moves the LastCacheDefinition into the WAL so that it can be serialized there. This ended up being a pretty large refactor to get the last cache creation to work through the WAL.
I think I also stumbled on a bug where the last cache wasn't getting initialized from the catalog on reboot so that it wouldn't actually end up caching values. The refactored last cache persistence test in write_buffer/mod.rs surfaced this.
Finally, I also had to update the WAL so that it would persist if there were only catalog updates and no writes.
Fixes #25203
* fix: typos
* refactor: Make Level0Duration part of WAL
I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.
As an added bonus, this surfaced a bug where multiple L0 blocks getting persisted in the same snapshot wasn't supported. So now snapshot details can have many files per table.
* fix: have persisted files always return in descending data time order
* fix: sort record batches for test verification
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.
Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.
The files are accessed through the QueryableBuffer.
In addition, a test was added to check success and failure modes of the
new system table query.
Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).
The persister retains the host identifier prefix, and uses it when
constructing paths.
The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.
The influxdb3 binary requires a new argument 'host-id' to be passed that
is used to specify the prefix.
* fix: query bugs with buffer
This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.
I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining with different states: only data in buffer, only data in parquet files, and data across both. This showed two bugs. One was that the parquet data was being doubled up (parquet chunks were being created in both write buffer mod and queryable buffer). The second was that the timestamp min/max on the table buffer would panic if the buffer was empty.
* refactor: PR feedback
* fix: fix wal replay and buffer snapshot
Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay wal data and that snapshots work properly with replayed data.
* fix: run cargo update to fix audit
* fix: make ParquetChunk fields and mod chunk pub
This doesn't affect anything in the OSS version, but these changes are
needed for Pro as part of our compactor work.
* fix: cargo deny failure
* fix: wal skip persist and notify if empty buffer
This fixes the WAL so that it will skip persisting a file and notifying the file notifier if the wal buffer is empty.
* fix: fix last cache persist test
* refactor: Move Catalog into influxdb3_catalog crate
This moves the catalog and its serialization logic into its own crate. This is a precursor to recording more catalog modifications into the WAL.
Fixes #25204
* fix: cargo update
* fix: add version = 2 to deny.toml
* fix: update deny.toml
* fix: add CCO to deny.toml
* feat: refactor WAL and WriteBuffer
There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.
The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.
The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).
The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is being written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.
Overall, this structure of a single wal, with flushes and snapshots and chunks in the queryable buffer led to a simpler setup for the write buffer overall. I was able to clear out quite a bit of code related to the old segment organization.
Fixes #25142 and fixes #25173
* refactor: address PR feedback
* refactor: wal to replay and background flush on new
* chore: remove stray println
* fix: catalog support for last caches that accept new fields
Last cache definitions in the catalog were augmented to either store an
explicit set of column names (including time), or to accept new fields.
This will allow these caches to be loaded properly on server restart such
that all non-key columns are cached.
* refactor: use tagged serialization for last cache values def
This also updated the client code to accept the new structure in
influxdb3_client.
* test: add e2e tests to catch regressions in influxdb3_client
* chore: cargo update for audit
Part of #25174
This PR adds support for three more predicate types when querying the last cache: !=, IN, and NOT IN. Previously only = was supported.
Existing tests were extended to check that these predicate types work as expected, both in the last_cache module and in the influxdb3_server crate. The latter was important to ensure that the new predicate logic works in the context of actual query parsing/execution.
Closes #25169
This PR ensures the last cache configuration is persisted to the catalog when last caches are created, and removed from the catalog when they are deleted. The last cache is initialized on server start from the catalog.
A new trait was added to the write buffer: LastCacheManager, which provides the methods to create and delete last caches (and which is invoked from the HTTP API). Both create/delete methods will update the catalog, but also force persistence of the catalog to object store, vs. waiting for the WAL flush interval / segment persistence process to do it. This should ensure that the catalog is up-to-date with respect to the last cache configuration, in the event that the server is stopped before segment persistence.
A test was added to check this behaviour in influxdb3_write/src/write_buffer/mod.rs.
Added a new system table, system.last_caches, to enable queries that display information about last caches in a database.
You can query the table like so:
SELECT * FROM system.last_caches
Since queries are scoped to a database, this will only show last caches configured for the database being queried.
Results look like so:
+-------+----------------+----------------+---------------+-------+-----+
| table | name | key_columns | value_columns | count | ttl |
+-------+----------------+----------------+---------------+-------+-----+
| mem | mem_last_cache | [host, region] | [time, usage] | 1 | 60 |
+-------+----------------+----------------+---------------+-------+-----+
An end-to-end test was added to verify queries to the system.last_caches table.
Adds an API for deleting last caches.
- The API allows parameters to be passed in either the request URI query string, or in the body as JSON
- Some additional error modes were handled, specifically, for better HTTP status code responses, e.g., invalid content type is now a 415, URL query string parsing errors are now 400
- An end-to-end test was added to check behaviour of the API
Closes #25096
- Adds a new HTTP API that allows the creation of a last cache, see the issue for details
- An E2E test was added to check success/failure behaviour of the API
- Adds the mime crate, for parsing request MIME types, but this is only used in the code I added - we may adopt it in other APIs / parts of the HTTP server in future PRs
* feat: impl datafusion traits on last cache
Created a new module for the DataFusion table function implementations.
The TableProvider impl for LastCache was moved there, and new code that
implements the TableFunctionImpl trait to make the last cache queryable
was also written.
The LastCacheProvider and LastCache were augmented to make this work:
- The provider stores an Arc<LastCache> instead of a LastCache
- The LastCache uses interior mutability via an RwLock, to make the above
possible.
* feat: register last_cache UDTF on query context
* refactor: make server accept listener instead of socket addr
The server used to accept a socket address and bind it directly, returning
error if the bind fails.
This commit changes that so the ServerBuilder accepts a TcpListener. The
behaviour is essentially the same, but this allows us to bind the address
from tests when instantiating the server, so we can easily assign unused
ports.
Tests in the influxdb3_server were updated to exploit this in order to
use port 0 auto assignment and stop flaky test failures.
A new, failing, test was also added to that module for the last cache.
* refactor: naive implementation of last cache key columns
Committing here as the last cache is in a working state, but it is naively
implemented as it just stores all key columns again (still with the hierarchy)
* refactor: make the last cache work with the query executor
* chore: fix my own feedback and appease clippy
* refactor: remove lower lock in last cache
* chore: cargo update
* refactor: rename function
* fix: broken doc comment
* refactor: make cache creation more idempotent
Last cache creation is more idempotent: if a cache is created and then
an attempt is made to create it again with the same parameters, it will
not result in an error.
* refactor: only store a single buffer of Instants
The last cache column buffers were storing an instant next to each
buffered value, which is unnecessary and not space efficient. This makes
it so the LastCacheStore holds a single buffer of Instants and manages
TTLs using that.
* refactor: clean up derelict cache members on eviction
* feat: support last caches that can add new fields
* feat: support new values in last cache
Support the addition of new fields to the last cache, for caches that do
not have a specified set of value columns.
A test was added along with the changes.
* chore: clippy
* docs: add comments throughout new last cache code
* fix: last cache schema merging when new fields added
* refactor: use outer schema for RecordBatch production
Enabling addition of new fields to a last cache made the insertion order
guarantee of the IndexMap break down. It could not be relied upon anymore
so this commit removes reference to that fact, despite still using the
IndexMap type, and strips out the schema from the inner LastCacheStore
type of the LastCache.
Now, the outer LastCache schema is relied on for producing RecordBatches,
which requires a lookup to the inner LastCacheStore's HashMap for each
field in the schema. This may not be as convenient as iterating over the
map as before, but trying to manage the disparate schema, and maintaining
the map ordering was making the code too complicated. This seems like a
reasonable compromise for now, until we see the need to optimize.
The IndexMap is still used for its fast iteration and lookup
characteristics.
The test that checks for new field ordering behaviour was modified to be
correct.
* refactor: use hash set instead of scanning entire row on each push
Some renaming of variables was done to clarify meaning as well.
* feat: base for last cache implementation
Each last cache holds a ring buffer for each column in an index map, which
preserves the insertion order for faster record batch production.
The ring buffer uses a custom type to handle the different supported
data types that we can have in the system.
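A simplified sketch of a fixed-capacity ring buffer of the sort described; the real cache keeps one per column and wraps them in an enum over the supported data types:
```rust
use std::collections::VecDeque;

struct RingBuffer<T> {
    values: VecDeque<T>,
    capacity: usize,
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self {
            values: VecDeque::with_capacity(capacity),
            capacity,
        }
    }

    fn push(&mut self, value: T) {
        // Drop the oldest value once the buffer is at capacity.
        if self.values.len() == self.capacity {
            self.values.pop_front();
        }
        self.values.push_back(value);
    }
}
```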
* feat: implement last cache provider
LastCacheProvider is the API used to create last caches and write
table batches to them. It uses a two-layer RwLock/HashMap: the first for
the database, and the second layer for the table within the database.
This allows for table-level locks when writing in buffered data, and only
gets a database-level lock when creating a cache (and in future, when
removing them as well).
* test: APIs on write buffer and test for last cache
Added basic APIs on the write buffer to access the last cache and then a
test to the last_cache module to see that it works with a simple example
* docs: add some doc comments to last_cache
* chore: clippy
* chore: one small comment on IndexMap
* chore: clean up some stale comments
* refactor: part of PR feedback
Addressed three parts of PR feedback:
1. Remove double-lock on cache map
2. Re-order the get when writing to the cache to be outside the loop
3. Move the time check into the cache itself
* refactor: nest cache by key columns
This refactors the last cache to use a nested caching structure, where
the key columns for a given cache are used to create a hierarchy of
nested maps, terminating in the actual store for the values in the cache.
Access to the cache is done via a set of predicates which can optionally
specify the key column values at any level in the cache hierarchy to only
gather record batches from children of that node in the cache.
Some todos:
- Need to handle the TTL
- Need to move the TableProvider impl up to the LastCache type
* refactor: TableProvider impl to LastCache
This re-writes the datafusion TableProvider implementation on the correct
type, i.e., the LastCache, and adds conversion from the filter Expr's to
the Predicate type for the cache.
* feat: support TTL in last cache
Last caches will have expired entries walked when writes come in.
* refactor: add panic when unexpected predicate used
* refactor: small naming convention change
* refactor: include keys in query results and no null keys
Changed key columns so that they do not accept null values, i.e., rows
that are pushed that are missing key column values will be ignored.
When producing record batches for a cache, if not all key columns are
used in the predicate, then this change makes it so that the non-predicate
key columns are produced as columns in the outputted record batches.
A test with a few cases showing this was added.
* fix: last cache key column query output
Ensure key columns in the last cache that are not included in the
predicate are emitted in the RecordBatches as a column.
Cleaned up and added comments to the new test.
* chore: clippy and some un-needed code
* fix: clean up some logic errors in last_cache
* test: add tests for non default cache size and TTL
Added two tests, as per commit title. Also moved the eviction process
to a separate function so that it was not being done on every write to
the cache, which could be expensive, and this ensures that entries are
evicted regardless of whether writes are coming in or not.
* test: add invalid predicate test cases to last_cache
* test: last_cache with field key columns
* test: last_cache uses series key for default keys
* test: last_cache uses tag set as default keys
* docs: add doc comments to last_cache
* fix: logic error in last cache creation
CacheAlreadyExists errors were only being based on the database and
table names, and not including the cache names, which was not
correct.
* docs: add some comments to last cache create fn
* feat: support null values in last cache
This also adds explicit support for series key columns to distinguish
them from normal tags in terms of nullability
A test was added to check nulls work
* fix: reset last cache last time when ttl evicts all data
* refactor: move persisted files out of segment state
This refactors persisted parquet files out of the SegmentState into a new struct, PersistedParquetFiles. The intention is to have SegmentState be only for the active write buffer that has yet to be persisted to Parquet files in object storage.
Persisted files will then be accessible throughout the system without having to touch the active in-flight write buffer.
* refactor: pr feedback cleanup
* feat: store last cache info in the catalog
* test: test series key in catalog serialization
* test: add test for last cache catalog serialization
* chore: cargo update
* chore: remove outdated snapshot
* refactor: use hashbrown with entry_ref api
* refactor: use hashbrown hashmap instead of std hashmap in places that would benefit from the `entry_ref` API
* chore: Cargo update to pass CI