In this commit, the `Vec` backing the buffer is swapped for an array. Criterion benchmarks were added to compare performance and make sure the swap has not made it worse. The `Vec` implementation was removed after the benchmarks were run locally.
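For context, a hedged sketch of what such a comparison benchmark can look like with Criterion; the buffer shape and sizes here are illustrative, not the actual types:
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Illustrative benchmark: push values through a fixed-size, array-backed
// buffer so the swap away from Vec can be checked for regressions.
fn bench_array_backed_buffer(c: &mut Criterion) {
    c.bench_function("array_backed_push", |b| {
        b.iter(|| {
            let mut buf = [0u64; 1024];
            for i in 0u64..10_000 {
                buf[(i as usize) % buf.len()] = black_box(i);
            }
            buf
        })
    });
}

criterion_group!(benches, bench_array_backed_buffer);
criterion_main!(benches);
```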
This commit makes three major changes:
1. We will deny writes to the v1, v2, and v3 write APIs that add new tags in subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sort order by the series key, which is the order the keys came in
With these changes we have consistency across the various write APIs and can make optimizations and build future features on the assumption that we have a series key.
Closes #25585
- This commit allows a `RecordBatch` to be created directly from the event store. This means we can avoid cloning events and creating an intermediate `Vec`. To achieve that, a new method `as_record_batch` has been added, with a trait bound `ToRecordBatch` that events are expected to implement (a sketch follows below).
- Minor tidy-ups (renaming methods) and added a test
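A hedged sketch of the trait's shape; the event type, its fields, and the method signatures here are assumptions for illustration:
```rust
use std::sync::Arc;

use arrow::array::{ArrayRef, StringArray};
use arrow::datatypes::{DataType, Field, Schema, SchemaRef};
use arrow::record_batch::RecordBatch;

// Events describe their own schema and build a batch straight from
// borrowed events, with no intermediate Vec of clones.
trait ToRecordBatch {
    fn schema() -> SchemaRef;
    fn to_record_batch(events: &[Self]) -> RecordBatch
    where
        Self: Sized;
}

struct SysEvent {
    message: String,
}

impl ToRecordBatch for SysEvent {
    fn schema() -> SchemaRef {
        Arc::new(Schema::new(vec![Field::new("message", DataType::Utf8, false)]))
    }

    fn to_record_batch(events: &[Self]) -> RecordBatch {
        let messages =
            StringArray::from_iter_values(events.iter().map(|e| e.message.as_str()));
        RecordBatch::try_new(Self::schema(), vec![Arc::new(messages) as ArrayRef])
            .expect("schema matches the single column")
    }
}
```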
closes: https://github.com/influxdata/influxdb/issues/25609
This commit introduces a basic store for sys events and the backing ring buffer. Since the buffer needs to hold arbitrary data, it uses `Box<dyn Any>`.
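A minimal sketch of the idea, assuming a fixed capacity and overwrite-oldest semantics (the real store adds locking and typed views on top):
```rust
use std::any::Any;

// Fixed-capacity ring buffer over arbitrary event payloads.
struct RingBuffer {
    buf: Vec<Box<dyn Any>>,
    max: usize,
    write_index: usize,
}

impl RingBuffer {
    fn new(max: usize) -> Self {
        Self { buf: Vec::with_capacity(max), max, write_index: 0 }
    }

    fn push(&mut self, event: Box<dyn Any>) {
        if self.buf.len() < self.max {
            self.buf.push(event);
        } else {
            // Full: overwrite the oldest entry.
            self.buf[self.write_index] = event;
        }
        self.write_index = (self.write_index + 1) % self.max;
    }

    /// Iterate the events that downcast to a concrete type.
    fn iter_as<T: 'static>(&self) -> impl Iterator<Item = &T> + '_ {
        self.buf.iter().filter_map(|e| e.downcast_ref::<T>())
    }
}
```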
closes: https://github.com/influxdata/influxdb/issues/25581
This adds two new CLI commands to the `influxdb3` binary:
* `influxdb3 meta-cache create`
* `influxdb3 meta-cache delete`
These create and delete metadata caches, respectively.
A basic integration test was added to check that this works E2E.
The `influxdb3_client` was updated with methods to create and delete
metadata caches, which is what the CLI commands use under the hood.
This adds a new system table, "meta_caches", that allows users to view the
state of their metadata caches on a per-database basis.
An integration test was added to verify that it works.
* feat: make query executor a trait object
This commit moves `QueryExecutorImpl` behind a trait object (`dyn QueryExecutor`), since we have other implementations of `QueryExecutor` in core, and this will keep the pro and OSS traits in sync.
* chore: fix cargo audit failures
- address https://rustsec.org/advisories/RUSTSEC-2024-0399.html by
running `cargo update --precise 0.23.18 --package rustls@0.23.14`
- address yanked version of `url` crate (2.5.3) by running
`cargo update -p url`
This adds the MetaDataCacheProvider for managing metadata caches in the
influxdb3 instance. This includes APIs to create caches through the WAL
as well as from a catalog on initialization, to write data into the
managed caches, and to query data out of them.
The query side is fairly involved, relying on DataFusion's `TableFunctionImpl`
and `TableProvider` traits to make querying the cache through a user-defined
table function (UDTF) possible.
The predicate code was modified to support only two kinds of predicates,
IN and NOT IN, which simplifies the code and maps nicely onto the DataFusion
`LiteralGuarantee`, which we leverage to derive the predicates from incoming
queries (see the sketch below).
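A hedged sketch of that derivation; the local `Predicate` type is assumed for illustration, with `LiteralGuarantee::analyze` as the DataFusion entry point being described:
```rust
use std::{collections::HashSet, sync::Arc};

use datafusion::physical_expr::utils::{Guarantee, LiteralGuarantee};
use datafusion::physical_plan::PhysicalExpr;
use datafusion::scalar::ScalarValue;

// The cache's two supported predicate kinds.
enum Predicate {
    In(HashSet<ScalarValue>),
    NotIn(HashSet<ScalarValue>),
}

// Analyze a pushed-down filter and map each per-column guarantee onto a
// cache predicate.
fn predicates_from_filter(filter: &Arc<dyn PhysicalExpr>) -> Vec<(String, Predicate)> {
    LiteralGuarantee::analyze(filter)
        .into_iter()
        .map(|g| {
            let column = g.column.name().to_string();
            let predicate = match g.guarantee {
                Guarantee::In => Predicate::In(g.literals),
                Guarantee::NotIn => Predicate::NotIn(g.literals),
            };
            (column, predicate)
        })
        .collect()
}
```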
A custom ExecutionPlan implementation was added specifically for the
metadata cache that can report the predicates that are pushed down to
the cache during query planning/execution.
A big set of tests was added to check that queries are working, and
that predicates are being pushed down properly.
This commit allows soft-deleting a table. For a user, the following
command soft deletes a table (bar) in database (foo):
```
influxdb3 table delete --dbname foo --table bar --host $host
```
- Added `soft_delete_table` to the `DatabaseManager` trait, which already
hosts the `soft_delete_database` method. The code roughly follows the same
flow as db delete. As with the db schema, this is a clone-on-write because
the reference is behind an `Arc`; `Arc::make_mut` is used in this change
(see the sketch after this list).
- Moved the db delete related CLI parser under the "manage" module, which
has both db and table delete functionality
- Some minor tidy-ups (removing unused methods; renaming a method so that
the order in its name matches the actual return type, e.g.
`table_id_and_schema` should return `(id, schema)`, not `(schema, id)`)
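A minimal sketch of the clone-on-write step, with the schema type heavily simplified:
```rust
use std::sync::Arc;

#[derive(Clone)]
struct DatabaseSchema {
    // Stand-in for the real table map: (table name, deleted flag).
    tables: Vec<(String, bool)>,
}

fn soft_delete_table(db: &mut Arc<DatabaseSchema>, table_name: &str) {
    // make_mut clones the schema only if other Arcs still reference it,
    // then hands back a mutable reference for the in-place update.
    let schema = Arc::make_mut(db);
    for (name, deleted) in schema.tables.iter_mut() {
        if name == table_name {
            *deleted = true;
        }
    }
}
```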
closes: https://github.com/influxdata/influxdb/issues/25561
This commit changes the code so that we only keep the 10 most recent
catalogs. When a new one is persisted, we delete any old ones that exist.
If a deletion fails we don't panic; we let a future persist clean up the
old catalogs rather than failing the persist itself.
This commit also adds a test to make sure that only the catalogs we
expect are deleted on persist.
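A hedged sketch of the cleanup, with a local filesystem standing in for the object store and the path layout assumed:
```rust
// Keep only the most recent catalogs; a failed delete is logged and left
// for a future persist to retry rather than failing this persist.
const MAX_CATALOGS_TO_KEEP: usize = 10;

fn cleanup_old_catalogs(mut catalog_paths: Vec<String>) {
    // Paths embed the catalog sequence number, so they sort oldest-first.
    catalog_paths.sort();
    let excess = catalog_paths.len().saturating_sub(MAX_CATALOGS_TO_KEEP);
    for old in catalog_paths.drain(..excess) {
        if let Err(e) = std::fs::remove_file(&old) {
            eprintln!("failed to delete old catalog {old}: {e}");
        }
    }
}
```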
* feat: drop/delete database
This commit allows soft deletion of a database using the `influxdb3
database delete <db_name>` command. The write buffer and last value cache
are cleared as well.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: reuse same code path when deleting database
- In the previous commit, deleting a database immediately triggered
clearing the last cache and the query buffer, but the same logic had to
be repeated on restart to allow deleting a database at startup. This
commit removes the immediate deletion via explicitly called methods and
moves the logic into `apply_catalog_batch`, which already applies
`CatalogOp`s; the cache and buffer are now cleared in the `buffer_ops`
method, which has hooks to call the other places.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: use reqwest query api for query param
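A minimal sketch of the pattern; the endpoint path and parameter names here are illustrative rather than the client's exact API:
```rust
use reqwest::Client;

// Build the query string with reqwest's typed `.query()` API instead of
// formatting it into the URL by hand.
async fn query_sql(base: &str, db: &str, q: &str) -> reqwest::Result<String> {
    Client::new()
        .get(format!("{base}/api/v3/query_sql"))
        .query(&[("db", db), ("q", q)])
        .send()
        .await?
        .text()
        .await
}
```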
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: include deleted flag in DatabaseSnapshot
- `DatabaseSchema` serialization/deserialization is delegated to
`DatabaseSnapshot`, so the `deleted` flag should be included in
`DatabaseSnapshot` as well.
- insta test snapshots fixed
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: address PR comments + tidy ups
---------
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: core metadata cache structs with basic tests
Implement the base MetaCache type that holds the hierarchical structure
of the metadata cache, providing methods to create the cache and push rows
from the WAL into it.
Added a prune method as well as a method for gathering record batches
from a meta cache. A test was added to check the latter for various
predicates and that the former works; however, pruning shows that we need
to modify how record batches are produced so that expired entries are
not emitted.
* refactor: filter expired entries and do some clean up in the meta cache
* refactor: use SerdeVecMap in PersistedSnapshot
This changes the DB -> Table structure in the PersistedSnapshot files
from a HashMap to a SerdeVecMap, which serializes the identifiers as
integers instead of strings.
* test: add a snapshot test for persisted snapshots
* chore: update core deps
- arrow/parquet deps are patched (as in core)
- three specific code changes to cope with changes in core crates
- TransitionPartitionId, use `from_parts` instead of `new`
- arrow buffers can take &[u8] directly without `to_vec()`/`vec!`
(used only in tests)
- `schema` and `influxdb_line_protocol` crates need `v3` feature enabled
* chore: update deny.toml
* chore: formatting and deny toml changes
The Unicode-3.0 license is added to the allowed licenses list; without it
we end up with 19 errors (`zerovec`, `zerovec-derive`, etc.)
* chore: address PR feedback
- move enabling v3 feature to root Cargo.toml
- added the upstream PR for datafusion-common that introduced RUSTSEC-2024-0384
Updates the catalog to use its own sequence number in the path. This will enable downstream Pro systems that pick up PersistedSnapshots to get the specific catalog that a snapshot is associated with since its sequence number is included.
Also updated the type to be CatalogSequenceNumber to make it clearer and more readable when used.
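A hedged sketch of the idea; the exact path layout here is assumed for illustration:
```rust
struct CatalogSequenceNumber(u64);

// The catalog's own sequence number is embedded in its object-store path,
// zero-padded so paths sort in sequence order, letting a consumer of a
// PersistedSnapshot fetch exactly the catalog the snapshot references.
fn catalog_file_path(host_prefix: &str, seq: &CatalogSequenceNumber) -> String {
    format!("{host_prefix}/catalogs/{:020}.json", seq.0)
}
```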
* fix: throw error when adding fields to non-existent table
* test: add test for expected behaviour in catalog op apply
This also added in some helpers to the wal crate that were previously
added to pro.
* refactor: make last cache eviction optional
This changes how the last cache is evicted. It will no longer run eviction
on writes to the cache, instead, there is an optional method to create a
last cache provider that will run eviction in a background task on a specified
interval.
Otherwise, when records are produced from the cache, only those that have
not expired will be produced.
This should reduce locks on the cache and hopefully improve performance.
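A minimal sketch of the background eviction, assuming a provider type with an `evict_expired` hook:
```rust
use std::{sync::Arc, time::Duration};

// Stand-in for the real provider; `evict_expired` is the assumed hook.
struct LastCacheProvider;
impl LastCacheProvider {
    fn evict_expired(&self) { /* drop entries past their TTL */ }
}

fn start_eviction_task(provider: Arc<LastCacheProvider>, eviction_interval: Duration) {
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(eviction_interval);
        loop {
            // Eviction runs on a timer instead of on every write, keeping
            // lock contention off the hot write path.
            interval.tick().await;
            provider.evict_expired();
        }
    });
}
```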
* feat: configurable last cache eviction interval
* docs: clean up var names, code docs, and comments
`cargo deny` was showing that no crate matched the advisory criteria for this [RUSTSEC advisory](https://rustsec.org/advisories/RUSTSEC-2024-0376.html), so this PR removes the ignore entry.
In addition, the `hashbrown` crate was causing a new audit failure, and updating it required that the `Zlib` license be added to our list of allowed licenses.
No issue for this, but it is blocking another PR at the moment (https://github.com/influxdata/influxdb/pull/25515).
Closes #25461
_Note: the first three commits on this PR are from https://github.com/influxdata/influxdb/pull/25492_
This PR makes the switch from using names for columns to the use of `ColumnId`s. Where column names are used, they are represented as `Arc<str>`. This impacts most components of the system, and the result is a fairly sizeable change set. The area where the most refactoring was needed was in the last-n-value cache.
One of the themes of this PR is to rely less on the arrow `Schema` for handling the column-level information, and tracking that info in our own `ColumnDefinition` type, which captures the `ColumnId`.
I will summarize the various changes in the PR below, and also leave some comments in-line in the PR.
## Switch to `u32` for `ColumnId`
The `ColumnId` now follows the `DbId` and `TableId`, and uses a globally unique `u32` to identify all columns in the database. This was a change from using a `u16` that was only unique within the column's table. This makes it easier to follow the patterns used for creating the other identifier types when dealing with columns, and should reduce the burden of having to manage the state of a table-scoped identifier.
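A hedged sketch of that pattern; the atomic counter mirrors how the other id types hand out globally unique values (names assumed):
```rust
use std::sync::atomic::{AtomicU32, Ordering};

static NEXT_COLUMN_ID: AtomicU32 = AtomicU32::new(0);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ColumnId(u32);

impl ColumnId {
    // Every column in the instance draws from one global counter, so ids
    // are unique across tables, not just within one table.
    fn new() -> Self {
        Self(NEXT_COLUMN_ID.fetch_add(1, Ordering::SeqCst))
    }
}
```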
## Changes in the WAL/Catalog
* `WriteBatch` now contains no names for tables or columns and purely uses IDs
* This PR relies on `IndexMap` for `_Id`-keyed maps so that the order of elements in the map is consistent. This has important implications, namely, that when iterating over an ID map, the elements therein will always be produced in the same order which allows us to make assertions on column order in a lot of our tests, and allows for the re-introduction of `insta` snapshots for serialization tests. This map type provides O(1) lookups, but also provides _fast_ iteration, which should help when serializing these maps in write batches to the WAL.
* Removed the need to serialize the bi-directional maps for `DatabaseSchema`/`TableDefinition` via use of `SerdeVecMap` (see comments in-line)
* The `tables` map in `DatabaseSchema` now stores an `Arc<TableDefinition>` so that the table definition can be shared around more easily. This meant that changes to tables in the catalog need to do a clone, but we were already having to do a clone for changes to the DB schema.
* Removal of the `TableSchema` type and consolidation of its parts/functions directly onto `TableDefinition`
* Added the `ColumnDefinition` type, which represents all we need to know about a column, and is used in place of the Arrow `Schema` for column-level meta-info. We were previously relying heavily on the `Schema` for iterating over columns, accessing data types, etc., but this gives us an API that we have more control over for our needs. The `Schema` is still held at the `TableDefinition` level, as it is needed for the query path, and is maintained to be consistent with what is contained in the `ColumnDefinition`s for a table.
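A hedged sketch of the shape of that type; the fields are assumptions based on the description above:
```rust
use std::sync::Arc;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ColumnId(u32); // globally unique, as described above

// Simplified stand-in for the real column type enum.
#[derive(Debug, Clone, Copy)]
enum InfluxColumnType {
    Tag,
    Field,
    Timestamp,
}

// Column-level metadata lives in our own type rather than being read back
// out of the Arrow `Schema`.
#[derive(Debug, Clone)]
struct ColumnDefinition {
    id: ColumnId,
    name: Arc<str>,
    data_type: InfluxColumnType,
    nullable: bool,
}
```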
## Changes in the Last-N-Value Cache
* There is a bigger distinction between caches that have an explicit set of value columns, and those that accept new fields. The former should be more performant.
* The Arrow `Schema` is managed differently now: it used to be updated more than it needed to be, and now is only updated when a row with new fields is pushed to a cache that accepts new fields.
## Changes in the write-path
* When ingesting, during validation, field names are qualified to their associated column ID
This PR introduces a new type `SerdeVecHashMap` that can be used in places where we need a HashMap with the following properties:
1. When serialized, it is serialized as a list of key-value pairs, instead of a map
2. When deserialized, it assumes the serialization format from (1.) and deserializes from a list of key-value pairs to a map
3. Does not allow for duplicate keys on deserialization
This is useful in places where we need to create map types that map from an identifier (integer) to some value, and need to serialize that data. For example: in the WAL when serializing write batches, and in the catalog when serializing the database/table schema.
This PR refactors the code in `influxdb3_wal` and `influxdb3_catalog` to use the new type for maps that use `DbId` and `TableId` as the key. Follow on work can give the same treatment to `ColumnId` based maps once that is fully worked out.
## Explanation
If we have a `HashMap<u32, String>`, `serde_json` will serialize it in the following way:
```json
{"0": "foo", "1": "bar"}
```
i.e., the integer keys are serialized as strings, since JSON doesn't support any other type of key in maps.
`SerdeVecHashMap<u32, String>` will be serialized by `serde_json` in the following way:
```json
[[0, "foo"], [1, "bar"]]
```
and will deserialize from that vector-based structure back to the map. This allows serialization/deserialization to run directly off of the `HashMap`'s `Iterator`/`FromIterator` implementations.
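A minimal sketch of the serialization half of that idea (the deserialization half mirrors it via `FromIterator`):
```rust
use std::collections::HashMap;

use serde::ser::{Serialize, SerializeSeq, Serializer};

struct SerdeVecHashMap<K, V>(HashMap<K, V>);

impl<K: Serialize, V: Serialize> Serialize for SerdeVecHashMap<K, V> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Emit a sequence of pairs straight off the map's iterator; a
        // (&K, &V) tuple serializes as a two-element sequence, which
        // serde_json renders as `[key, value]`.
        let mut seq = serializer.serialize_seq(Some(self.0.len()))?;
        for pair in &self.0 {
            seq.serialize_element(&pair)?;
        }
        seq.end()
    }
}
```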
## The Controversial Part
One thing I also did in this PR was switch the catalog from using a `BTreeMap` for tables to using the new `HashMap` type. This breaks the deterministic ordering of the database schema's `tables` map and therefore wrecks the snapshot tests we were using. I had to comment those parts of their respective tests out, because there isn't an easy way to make the underlying hashmap have a deterministic ordering just in tests that I am aware of.
If we think that using `BTreeMap` in the catalog is okay over a `HashMap`, then I think it would be okay to roll a similar `SerdeVecBTreeMap` type specifically for the catalog. Coincidentally, this may actually be a good use case for [`indexmap`](https://docs.rs/indexmap/latest/indexmap/), since it holds supposedly similar lookup performance characteristics to hashmap, while preserving order and _having faster iteration_ which could be a win for WAL serialization speed. It also accepts different hashing algorithms so could be swapped in with FNV like `HashMap` can.
## Follow-up work
Use the `SerdeVecHashMap` for column data in the WAL following https://github.com/influxdata/influxdb/issues/25461
* refactor: roll back addition of DatabaseSchemaProvider trait
* refactor: make parquet metrics optional in telemetry for pro
* refactor: make ParquetFileId Hash
* refactor: test harness logging
Allow the endpoint for telemetry to be passed in via the CLI args, e.g.
```
--telemetry-endpoint "https://somehost/test/"
```
and the actual endpoint always appends `v3` to it, so the above URL becomes
`https://somehost/test/v3`.
Separate out methods of the Catalog API that are used on the query side into a new trait `DatabaseSchemaProvider`. The new trait includes methods from the Catalog that get the underlying `DatabaseSchema` or interact with names/IDs.
This will allow for a separate implementation of the Catalog for pro that only needs to hold a replicated/combined view in-memory of one or more catalogs without the need to do persistence that a write buffer's catalog needs to do.
While in there I also switched the `QueryExecutorImpl::new` method to take an args struct to avoid the clippy lint.
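A hedged sketch of the trait split; the method names are assumed for illustration, with stand-ins for the catalog types:
```rust
use std::sync::Arc;

// Stand-ins so the sketch is self-contained.
struct DatabaseSchema;
#[derive(Clone, Copy)]
struct DbId(u32);

// The query-side subset of the Catalog API: schema lookups and name/id
// mapping, with no persistence obligations.
trait DatabaseSchemaProvider: Send + Sync {
    fn db_schema(&self, name: &str) -> Option<Arc<DatabaseSchema>>;
    fn db_schema_by_id(&self, id: DbId) -> Option<Arc<DatabaseSchema>>;
    fn db_names(&self) -> Vec<String>;
}
```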
* feat: Add non-unique u16 Id to ColumnDefinition
This commit adds the column_id field to ColumnDefinition so that the
output for a Catalog will contain the id of that column. This is
non-unique, whereas TableIds and DbIds are unique. The column_id
corresponds to its index in the schema.
Closes #25386
When running the tests repeatedly, they failed intermittently because the
background runner wakes up to prune the cache while the tests are loading
and removing data. Checking whether `n_to_prune` is greater than 0 before
going ahead with pruning fixes the issue (a sketch follows).
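A sketch of the fix with the cache shape simplified:
```rust
struct Cache {
    entries: Vec<u64>, // stand-in for the real cache entries, oldest first
    max_entries: usize,
}

impl Cache {
    fn prune(&mut self) {
        let n_to_prune = self.entries.len().saturating_sub(self.max_entries);
        // Only take the pruning path when there is actually work to do, so
        // the background runner no longer races concurrent load/remove.
        if n_to_prune > 0 {
            self.entries.drain(..n_to_prune);
        }
    }
}
```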
Closes: https://github.com/influxdata/influxdb/issues/25446
* feat(circleci): add inclusivity checks
* chore(circleci): adjust package-validation for inclusive language
* chore: update tests for inclusive language
- Introduced the traits `ParquetMetrics` and `SystemInfoProvider` to make
tests easier to write
- Uses `mockito` for code that depends on `reqwest::Client`, and `mockall`
to mock traits like `SystemInfoProvider` (see the sketch below)
- Minor updates to docs
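A hedged sketch of the `mockall` side of this; the trait method is assumed for illustration:
```rust
use mockall::automock;

// `automock` generates `MockSystemInfoProvider` with per-method
// expectation builders.
#[automock]
trait SystemInfoProvider {
    fn num_cpus(&self) -> usize;
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn telemetry_reads_mocked_cpu_count() {
        let mut info = MockSystemInfoProvider::new();
        info.expect_num_cpus().return_const(8usize);
        assert_eq!(info.num_cpus(), 8);
    }
}
```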