Commit Graph

31 Commits (mgattozzi/serde-catalog)

Author SHA1 Message Date
Jackson Newhouse 29dacc318a
feat(processing_engine): Add REST API endpoints for activating and deactivating triggers. (#25711) 2025-01-02 09:23:18 -08:00
Jackson Newhouse 0db71b69b9
fix(catalog): consistent ordering of catalog operations (#25690) 2024-12-20 15:17:38 -08:00
Jackson Newhouse 8bfccb74ab
feat(processing_engine): Runtime and write-back improvements (#25672)
* Move processing engine invocation to a separate tokio task.
* Support writing back line protocol from python via insert_line_protocol().
* Update structs to work with bincode.
2024-12-17 16:38:12 -08:00
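
A minimal sketch of the separate-task pattern from the first bullet above, assuming a channel-based handoff; `PluginEvent` and its payloads here are hypothetical stand-ins, not the engine's actual types:

```rust
use tokio::sync::mpsc;

// `PluginEvent` is a hypothetical stand-in for whatever the engine hands
// to a plugin; the real types differ.
#[derive(Debug)]
enum PluginEvent {
    WalWrite(String), // e.g. a serialized WAL batch
    Shutdown,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<PluginEvent>(32);

    // Run plugin invocation on its own tokio task so a slow plugin
    // cannot stall the write path.
    let runner = tokio::spawn(async move {
        while let Some(event) = rx.recv().await {
            match event {
                PluginEvent::WalWrite(batch) => {
                    // Invoke the plugin here; any line protocol it writes
                    // back would flow through something like
                    // insert_line_protocol().
                    println!("plugin processed: {batch}");
                }
                PluginEvent::Shutdown => break,
            }
        }
    });

    tx.send(PluginEvent::WalWrite("cpu,host=a usage=0.5".into()))
        .await
        .unwrap();
    tx.send(PluginEvent::Shutdown).await.unwrap();
    runner.await.unwrap();
}
```
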
Jackson Newhouse 486d79d801
feat(processing_engine): initial implementation of Processing Engine plugins and triggers (#25639) 2024-12-13 14:11:38 -08:00
Michael Gattozzi 9292a3213d
feat: Significantly decrease startup times for WAL (#25643)
* feat: add startup time to logging output

This change adds a startup time counter to the output when starting up
a server. The main purpose of this is to verify whether the impact of
changes actually speeds up the loading of the server.

* feat: Significantly decrease startup times for WAL

This commit does a few important things to speedup startup times:
1. We avoid converting an Arc<str> to a String for the series key, as the
   From<String> impl will call with_column, which will then turn it back
   into an Arc<str>. Instead we can just call `with_column` directly and
   pass in the iterator without also collecting into a Vec<String>
2. We switch to using bitcode as the serialization format for the WAL.
   This significantly reduces startup time as this format is faster to
   use instead of JSON, which was eating up massive amounts of time.
   Part of this change involves not using the tag feature of serde as
   it's currently not supported by bitcode
3. We also parallelize reading and deserializing the WAL files before
   applying them in order. This reduces time spent waiting on IO, and we
   eagerly evaluate each spawned task in order as much as possible.

This gives us about a 189% speedup over what we were doing before.

Closes #25534
2024-12-12 11:27:51 -05:00
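
A minimal sketch of the parallel replay described in point 3 above: spawn a task per WAL file so reads and decoding overlap, then await the handles in file order so application stays ordered. `WalContents` and `read_and_deserialize` are illustrative stand-ins, not the actual types:

```rust
use tokio::task::JoinHandle;

// Illustrative stand-in for the real WAL types.
#[derive(Debug)]
struct WalContents(String);

async fn read_and_deserialize(path: String) -> WalContents {
    // The real code reads the file from object store and decodes it.
    WalContents(format!("decoded {path}"))
}

#[tokio::main]
async fn main() {
    let paths: Vec<String> = (1..=3).map(|n| format!("wal/{n:04}")).collect();

    // Spawn all reads up front so IO and deserialization overlap...
    let handles: Vec<JoinHandle<WalContents>> = paths
        .into_iter()
        .map(|p| tokio::spawn(read_and_deserialize(p)))
        .collect();

    // ...then await the handles in file order, applying each one in
    // sequence so replay order is preserved.
    for handle in handles {
        let contents = handle.await.unwrap();
        println!("applying {contents:?}");
    }
}
```
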
Trevor Hilton 9b87cd7a65
refactor: move last cache to influxdb3_cache crate (#25620)
Moved all of the last cache implementation into the `influxdb3_cache`
crate. This also splits out the implementation into three modules:
- `cache.rs`: the core cache implementation
- `provider.rs`: the cache provider used by the database to hold multiple
  caches.
- `table_function.rs`: same as before, holds the DataFusion impls

Tests were preserved and moved to `mod.rs`; however, they were updated to
not rely on the WriteBuffer implementation, and instead use the types in
the `influxdb3_cache::last_cache` module directly. This simplified the
test code while not changing any of the test assertions at all.
2024-12-05 14:04:25 -05:00
Trevor Hilton 0daa3f2f1d
feat: track persist time in wal file content (#25614) 2024-12-03 15:37:43 -05:00
Michael Gattozzi d2fbd65a44
feat: Deny extra tags on write APIs (#25596)
This commit makes three major changes:

1. We will deny writes to the v1, v2, and v3 write APIs that add new tags in
   subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sort order by the series key, which is the order the keys came in

With these changes we have consistency across the various write APIs and can
make optimizations and build future features on the assumption that we have a
series key.

Closes #25585
2024-12-03 12:10:26 -05:00
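
A hedged sketch of the tag-denial rule described above, assuming the series key is simply the ordered set of tag keys from the first write; the names here are illustrative, not the actual validation code:

```rust
use std::collections::HashSet;

// Hypothetical check mirroring the rule above: once a table's series key
// (its ordered tag set) is fixed by the first write, later writes may not
// introduce new tags.
fn validate_tags(series_key: &[&str], incoming_tags: &[&str]) -> Result<(), String> {
    let known: HashSet<&str> = series_key.iter().copied().collect();
    for tag in incoming_tags {
        if !known.contains(tag) {
            return Err(format!("write rejected: tag '{tag}' is not in the series key"));
        }
    }
    Ok(())
}

fn main() {
    let series_key = ["region", "host"]; // fixed in first-seen order
    assert!(validate_tags(&series_key, &["host"]).is_ok());
    assert!(validate_tags(&series_key, &["host", "rack"]).is_err());
    println!("extra tags denied");
}
```
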
Trevor Hilton 234d37329a
feat: metacache REST APIs to create and delete (#25587) 2024-11-27 08:41:46 -05:00
Trevor Hilton 8e23032ceb
feat: add metadata cache provider with APIs for write and query (#25566)
This adds the MetaDataCacheProvider for managing metadata caches in the
influxdb3 instance. This includes APIs to create caches through the WAL
as well as from a catalog on initialization, to write data into the
managed caches, and to query data out of them.

The query side is fairly involved, relying on DataFusion's TableFunctionImpl
and TableProvider traits to make querying the cache using a user-defined
table function (UDTF) possible.

The predicate code was modified to only support two kinds of predicates:
IN and NOT IN, which simplifies the code and maps nicely onto DataFusion's
LiteralGuarantee, which we leverage to derive the predicates from the
incoming queries.

A custom ExecutionPlan implementation was added specifically for the
metadata cache that can report the predicates that are pushed down to
the cache during query planning/execution.

A big set of tests was added to check that queries are working, and
that predicates are being pushed down properly.
2024-11-22 10:57:26 -05:00
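
A small sketch of the two predicate kinds described above; in the real code these are derived from DataFusion's LiteralGuarantee during planning, but the shape is roughly:

```rust
use std::collections::HashSet;

// The two predicate kinds described above, sketched; the real code
// derives these from DataFusion's LiteralGuarantee.
enum Predicate {
    In(HashSet<String>),
    NotIn(HashSet<String>),
}

impl Predicate {
    fn matches(&self, value: &str) -> bool {
        match self {
            Predicate::In(set) => set.contains(value),
            Predicate::NotIn(set) => !set.contains(value),
        }
    }
}

fn main() {
    // e.g. derived from: WHERE region IN ('us-east', 'us-west')
    let p = Predicate::In(["us-east".to_string(), "us-west".to_string()].into());
    assert!(p.matches("us-east"));
    assert!(!p.matches("eu-central"));
}
```
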
praveen-influx 3cde24feb4
feat: delete table (#25572)
This commit allows soft-deleting a table. For a user, the following
command will soft delete a table (bar) in db (foo):

```
influxdb3 table delete --dbname foo --table bar --host $host
```

- Added `soft_delete_table` to the `DatabaseManager` trait, which already
  hosts the `soft_delete_database` method. The code roughly follows the same
  flow as db delete. Like the db schema, it does a clone on write because
  the reference is behind an Arc; `Arc::make_mut` is used for this in this
  change.
- Moved the db delete related cli parser under the "manage" module, which has
  both db and table delete functionality
- Some minor tidyups (removing unused methods, renaming methods so that
  the order in the name matches the actual return type, e.g.
  `table_id_and_schema` should return (id, schema) and not (schema, id))

closes: https://github.com/influxdata/influxdb/issues/25561
2024-11-22 08:42:45 +00:00
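
A minimal sketch of the `Arc::make_mut` copy-on-write pattern the first bullet mentions, using a hypothetical `TableDefinition` rather than the actual type:

```rust
use std::sync::Arc;

// Hypothetical table type; the real TableDefinition is richer.
#[derive(Clone, Debug)]
struct TableDefinition {
    name: String,
    deleted: bool,
}

// Clone-on-write: Arc::make_mut clones the inner value only if the Arc
// is shared, then hands back a mutable reference.
fn soft_delete(table: &mut Arc<TableDefinition>) {
    let t = Arc::make_mut(table);
    t.deleted = true;
}

fn main() {
    let mut table = Arc::new(TableDefinition { name: "bar".into(), deleted: false });
    let reader = Arc::clone(&table); // a concurrent reader keeps the old view
    soft_delete(&mut table);
    assert!(table.deleted);
    assert!(!reader.deleted);
    println!("{} soft-deleted: {}", table.name, table.deleted);
}
```
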
Jackson Newhouse 956e223388
fix: don't rebuild snapshot if it has already been taken. (#25570) 2024-11-20 08:55:42 -08:00
praveen-influx 33c2d47ba9
feat: drop/delete database (#25549)
* feat: drop/delete database

This commit allows soft deletion of a database using the `influxdb3 database
delete <db_name>` command. The write buffer and last value cache are
cleared as well.

closes: https://github.com/influxdata/influxdb/issues/25523

* feat: reuse same code path when deleting database

- In the previous commit, deleting a database immediately triggered
  clearing the last cache and query buffer, but on restart the same logic
  had to be repeated to allow deleting a database at startup. This commit
  removes the immediate deletion path: the logic moves to
  `apply_catalog_batch`, which already applies `CatalogOp`, and the cache
  and buffer are cleared in the `buffer_ops` method, which has hooks to
  call other places.

closes: https://github.com/influxdata/influxdb/issues/25523

* feat: use reqwest query api for query param

Co-authored-by: Trevor Hilton <thilton@influxdata.com>

* feat: include deleted flag in DatabaseSnapshot

- `DatabaseSchema` serialization/deserialization is delegated to
 `DatabaseSnapshot`, so the `deleted` flag should be included in
 `DatabaseSnapshot` as well.
- insta test snapshots fixed

closes: https://github.com/influxdata/influxdb/issues/25523

* feat: address PR comments + tidy ups

---------

Co-authored-by: Trevor Hilton <thilton@influxdata.com>
2024-11-19 16:08:14 +00:00
praveen-influx 814eb31309
chore: update core deps (#25532)
* chore: update core deps

- arrow/parquet deps are patched (as in core)
- three specific code changes to cope with changes in core crates
  - TransitionPartitionId, use `from_parts` instead of `new`
  - arrow buffers can take &[u8] directly without `to_vec()`/`vec!`
    (used only in tests)
  - `schema` and `influxdb_line_protocol` crates need `v3` feature enabled

* chore: update deny.toml

* chore: formatting and deny toml changes

The Unicode-3.0 license is added to the allowed licenses list; without it
we end up with 19 errors (`zerovec`, `zerovec-derive`, etc.)

* chore: address PR feedback

- move enabling v3 feature to root Cargo.toml
- added the upstream PR for datafusion-common that introduced RUSTSEC-2024-0384
2024-11-12 16:07:31 +00:00
Trevor Hilton 3bb63b2d71
fix: throw error when adding fields to non-existent table in WAL (#25525)
* fix: throw error when adding fields to non-existent table

* test: add test for expected behaviour in catalog op apply

This also added in some helpers to the wal crate that were previously
added to pro.
2024-11-08 13:15:07 -05:00
Trevor Hilton 5698e79a34
feat: helper methods on WalOp (#25486) 2024-11-01 17:19:20 -04:00
Trevor Hilton d26a73802a
refactor: move to `ColumnId` and `Arc<str>` as much as possible (#25495)
Closes #25461 

_Note: the first three commits on this PR are from https://github.com/influxdata/influxdb/pull/25492_

This PR makes the switch from using names for columns to the use of `ColumnId`s. Where column names are used, they are represented as `Arc<str>`. This impacts most components of the system, and the result is a fairly sizeable change set. The area where the most refactoring was needed was in the last-n-value cache.

One of the themes of this PR is to rely less on the arrow `Schema` for handling the column-level information, and tracking that info in our own `ColumnDefinition` type, which captures the `ColumnId`.

I will summarize the various changes in the PR below, and also leave some comments in-line in the PR.

## Switch to `u32` for `ColumnId`

The `ColumnId` now follows the `DbId` and `TableId`, and uses a globally unique `u32` to identify all columns in the database. This was a change from using a `u16` that was only unique within the column's table. This makes it easier to follow the patterns used for creating the other identifier types when dealing with columns, and should reduce the burden of having to manage the state of a table-scoped identifier.

## Changes in the WAL/Catalog

* `WriteBatch` now contains no names for tables or columns and purely uses IDs
* This PR relies on `IndexMap` for `_Id`-keyed maps so that the order of elements in the map is consistent. This has important implications: namely, when iterating over an ID map, the elements therein will always be produced in the same order, which allows us to make assertions on column order in a lot of our tests, and allows for the re-introduction of `insta` snapshots for serialization tests. This map type provides O(1) lookups, but also provides _fast_ iteration, which should help when serializing these maps in write batches to the WAL.
* Removed the need to serialize the bi-directional maps for `DatabaseSchema`/`TableDefinition` via use of `SerdeVecMap` (see comments in-line)  
* The `tables` map in `DatabaseSchema` now stores an `Arc<TableDefinition>` so that the table definition can be shared around more easily. This means that changes to tables in the catalog need to do a clone, but we were already having to do a clone for changes to the DB schema.
* Removal of the `TableSchema` type and consolidation of its parts/functions directly onto `TableDefinition`
* Added the `ColumnDefinition` type, which represents all we need to know about a column, and is used in place of the Arrow `Schema` for column-level meta-info. We were previously relying heavily on the `Schema` for iterating over columns, accessing data types, etc., but this gives us an API that we have more control over for our needs. The `Schema` is still held at the `TableDefinition` level, as it is needed for the query path, and is maintained to be consistent with what is contained in the `ColumnDefinition`s for a table.

## Changes in the Last-N-Value Cache

* There is a bigger distinction between caches that have an explicit set of value columns, and those that accept new fields. The former should be more performant.
* The Arrow `Schema` is managed differently now: it used to be updated more than it needed to be, and now is only updated when a row with new fields is pushed to a cache that accepts new fields.

## Changes in the write-path

* When ingesting, during validation, field names are qualified to their associated column ID
2024-11-01 16:42:57 -04:00
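
A sketch of the `ColumnDefinition`-plus-`IndexMap` arrangement described above; the field layout is illustrative, but it shows why an insertion-ordered map makes iteration (and therefore test assertions and serialization) deterministic:

```rust
use std::sync::Arc;

use indexmap::IndexMap;

// Illustrative shapes only; the real definitions differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ColumnId(u32);

#[derive(Debug)]
struct ColumnDefinition {
    id: ColumnId,
    name: Arc<str>,
    data_type: &'static str, // stands in for the real data type enum
}

fn main() {
    // IndexMap preserves insertion order, so iterating a ColumnId-keyed
    // map yields columns in a stable order on every run.
    let mut columns: IndexMap<ColumnId, ColumnDefinition> = IndexMap::new();
    for (i, (name, dt)) in [("host", "tag"), ("usage", "f64")].into_iter().enumerate() {
        let id = ColumnId(i as u32);
        columns.insert(id, ColumnDefinition { id, name: name.into(), data_type: dt });
    }

    for def in columns.values() {
        println!("{:?} -> {} ({})", def.id, def.name, def.data_type); // host, then usage
    }
}
```
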
Trevor Hilton 0e814f5d52
feat: SerdeVecMap type for serializing ID maps (#25492)
This PR introduces a new type `SerdeVecHashMap` that can be used in places where we need a HashMap with the following properties:
1. When serialized, it is serialized as a list of key-value pairs, instead of a map
2. When deserialized, it assumes the serialization format from (1.) and deserializes from a list of key-value pairs to a map
3. Does not allow for duplicate keys on deserialization

This is useful in places where we need to create map types that map from an identifier (integer) to some value, and need to serialize that data. For example: in the WAL when serializing write batches, and in the catalog when serializing the database/table schema.

This PR refactors the code in `influxdb3_wal` and `influxdb3_catalog` to use the new type for maps that use `DbId` and `TableId` as the key. Follow on work can give the same treatment to `ColumnId` based maps once that is fully worked out.

## Explanation

If we have a `HashMap<u32, String>`, `serde_json` will serialize it in the following way:
```json
{"0": "foo", "1": "bar"}
```
i.e., the integer keys are serialized as strings, since JSON doesn't support any other type of key in maps.

`SerdeVecHashMap<u32, String>` will be serialized by `serde_json` in the following way:
```json
[[0, "foo"], [1, "bar"]]
```
and will deserialize from that vector-based structure back to the map. This allows serialization/deserialization to run directly off of the `HashMap`'s `Iterator`/`FromIterator` implementations.

## The Controversial Part

One thing I also did in this PR was switch the catalog from using a `BTreeMap` for tables to using the new `HashMap` type. This breaks the deterministic ordering of the database schema's `tables` map and therefore wrecks the snapshot tests we were using. I had to comment those parts of their respective tests out, because there isn't an easy way that I am aware of to make the underlying hashmap have a deterministic ordering just in tests.

If we think that using a `BTreeMap` in the catalog is okay over a `HashMap`, then I think it would be okay to roll a similar `SerdeVecBTreeMap` type specifically for the catalog. Coincidentally, this may actually be a good use case for [`indexmap`](https://docs.rs/indexmap/latest/indexmap/), since it supposedly holds similar lookup performance characteristics to a hashmap while preserving order and _having faster iteration_, which could be a win for WAL serialization speed. It also accepts different hashing algorithms, so it could be swapped in with FNV like `HashMap` can.

## Follow-up work

Use the `SerdeVecHashMap` for column data in the WAL following https://github.com/influxdata/influxdb/issues/25461
2024-10-25 13:49:02 -04:00
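
A sketch of the serialization shape described above, using a plain `Vec` of pairs to stand in for the custom `Serialize`/`Deserialize` impls the real type provides:

```rust
use std::collections::HashMap;

// Sketch of the shape described above: collect the map into a Vec of
// pairs before handing it to serde_json. The real type does this through
// custom Serialize/Deserialize impls instead.
fn main() {
    let mut map: HashMap<u32, String> = HashMap::new();
    map.insert(0, "foo".to_string());
    map.insert(1, "bar".to_string());

    // Plain HashMap: JSON forces the integer keys into strings.
    println!("{}", serde_json::to_string(&map).unwrap()); // {"0":"foo","1":"bar"}

    // Vec-of-pairs form: the keys keep their integer type.
    let mut pairs: Vec<(&u32, &String)> = map.iter().collect();
    pairs.sort(); // HashMap iteration order is arbitrary; sort for display
    println!("{}", serde_json::to_string(&pairs).unwrap()); // [[0,"foo"],[1,"bar"]]
}
```
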
Michael Gattozzi eeb1aa7905
feat: swap over to DbId and TableId everywhere (#25421)
* feat: Add TableId and ColumnId

* feat: swap over to DbId and TableId everywhere

This commit swaps us over to using the DbId and TableId types everywhere
for our internal systems. Anywhere that's external facing, such as names
for last cache tables or line protocol parsing, use names. In these cases
we have the `Catalog` which keeps a map of TableIds and DbIds in a
bidirectional mapping for easy lookup, i.e. id <-> names. While the change
itself isn't that complicated in essence, given how much we depended on
names for things, the changes end up being quite invasive and extensive.
Luckily it shouldn't be too hard to review. Note this does not add the
column ids, which will be done in a follow up PR.

Closes #25375
Closes #25403
Closes #25404
Closes #25405
Closes #25412
Closes #25413
2024-10-03 14:47:46 -04:00
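
A rough sketch of the id <-> name bidirectional mapping described above, using two plain `HashMap`s kept in sync; the actual `Catalog` structure differs:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Two maps kept in sync give O(1) lookup in both directions; the real
// Catalog's layout differs, but the idea is the same.
#[derive(Default)]
struct TableMap {
    by_id: HashMap<u32, Arc<str>>,
    by_name: HashMap<Arc<str>, u32>,
}

impl TableMap {
    fn insert(&mut self, id: u32, name: &str) {
        let name: Arc<str> = name.into();
        self.by_id.insert(id, Arc::clone(&name));
        self.by_name.insert(name, id);
    }

    fn id_for(&self, name: &str) -> Option<u32> {
        self.by_name.get(name).copied()
    }

    fn name_for(&self, id: u32) -> Option<&Arc<str>> {
        self.by_id.get(&id)
    }
}

fn main() {
    let mut tables = TableMap::default();
    tables.insert(7, "cpu");
    assert_eq!(tables.id_for("cpu"), Some(7));
    assert_eq!(tables.name_for(7).map(|n| n.as_ref()), Some("cpu"));
}
```
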
Michael Gattozzi 54d209d0bf
feat: Add u32 ID for Databases (#25302)
* feat: Remove lock for FileId tests

Since we are now using cargo-nextest in CI, we can remove
the locks that the FileId tests used to make sure that we
have no race conditions

* feat: Add u32 ID for Databases

This commit adds a new DbId for databases. It also updates paths to use
that id as part of the name. When starting up the WriteBuffer we apply
the DbId from the persisted snapshot, much like we do for ParquetFileIds

This introduces the influxdb3_id crate to avoid circular deps with ids.
The ParquetFileId should also be moved into this crate, but it's
outside the scope of this change.

Closes #25301
2024-09-18 11:44:04 -04:00
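
A minimal sketch of a globally unique `u32` id in the spirit of the `DbId` described above, including the startup step of seeding the counter from a persisted snapshot; the names are illustrative:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical shape; the real DbId lives in the influxdb3_id crate.
static NEXT_DB_ID: AtomicU32 = AtomicU32::new(0);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct DbId(u32);

impl DbId {
    fn new() -> Self {
        DbId(NEXT_DB_ID.fetch_add(1, Ordering::SeqCst))
    }

    // On startup, advance the counter past the highest persisted id,
    // mirroring how the id is applied from the persisted snapshot.
    fn set_next(next: u32) {
        NEXT_DB_ID.store(next, Ordering::SeqCst);
    }
}

fn main() {
    DbId::set_next(3); // e.g. restored from a snapshot holding ids 0..=2
    let a = DbId::new();
    let b = DbId::new();
    assert_eq!((a, b), (DbId(3), DbId(4)));
}
```
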
Paul Dix f8b6cfac5b
refactor: Rename level 0 to gen1 to match compaction wording (#25317) 2024-09-12 15:57:30 -04:00
Trevor Hilton 4e664d3da5
chore: updates for pro (#25285)
This applies some needed updates downstream in Pro. Namely,
* visibility changes that allow types to be used in the pro buffer
* allow parsing a WAL file sequence number from a file path
* remove duplicates when adding parquet files to a persisted files list
2024-09-04 16:02:07 -04:00
Trevor Hilton cd23be6e5c
test: repro for dropped wal files during snapshot (#25276)
* test: repro for dropped wal files during snapshot

This commit provides a reproducer for an issue in the snapshotting process
whereby WAL files are removed for writes that have not been persisted yet.

* fix: do not snapshot most recent WAL period

This addresses #25277

Snapshots that are triggered when the number of WAL periods in the
tracker grows to be >= 3x the snapshot size will no longer include the
most recent WAL period; including it was removing WAL files containing
data that had not yet been persisted.

* docs: add doc comment to reproducer test

* fix: broken parquet_files system table test

* fix: broken snapshot_tracker test

* fix: broken write_buffer test

* refactor: remove redundant helper function

* test: add another snapshot test to write_buffer

* test: future writes do not get dropped on restart
2024-09-03 20:21:33 -04:00
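
A sketch of the fixed rule described above, with hypothetical names: once the tracker holds at least 3x `snapshot_size` WAL periods, snapshot everything except the most recent period so its unpersisted data keeps its WAL files:

```rust
// Hypothetical names; only the 3x rule and the excluded newest period
// come from the commit above.
fn periods_to_snapshot(wal_periods: &[u64], snapshot_size: usize) -> Option<&[u64]> {
    if wal_periods.len() >= 3 * snapshot_size {
        // Leave the most recent period out of the snapshot so its WAL
        // files (and any unpersisted data) survive.
        Some(&wal_periods[..wal_periods.len() - 1])
    } else {
        None
    }
}

fn main() {
    let periods: Vec<u64> = (0..9).collect();
    // With snapshot_size = 3, nine periods trigger a snapshot of the first eight.
    assert_eq!(periods_to_snapshot(&periods, 3), Some(&periods[..8]));
    // Below the 3x threshold, nothing is snapshotted.
    assert_eq!(periods_to_snapshot(&periods[..5], 3), None);
}
```
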
Trevor Hilton 3b174a2f98
feat: snapshots track their own sequence number (#25255) 2024-08-20 15:55:47 -07:00
Paul Dix 8bcc7522d0
feat: Add last cache create/delete to WAL (#25233)
* feat: Add last cache create/delete to WAL

This moves the LastCacheDefinition into the WAL so that it can be serialized there. This ended up being a pretty large refactor to get the last cache creation to work through the WAL.

I think I also stumbled on a bug where the last cache wasn't getting initialized from the catalog on reboot, so it wouldn't actually end up caching values. The refactored last cache persistence test in write_buffer/mod.rs surfaced this.

Finally, I also had to update the WAL so that it would persist if there were only catalog updates and no writes.

Fixes #25203

* fix: typos
2024-08-09 05:46:35 -07:00
Paul Dix 05ab730ae6
refactor: Make Level0Duration part of WAL (#25228)
* refactor: Make Level0Duration part of WAL

I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.

As an added bonus, this surfaced a bug where multiple L0 blocks getting persisted in the same snapshot wasn't supported. So now snapshot details can have many files per table.

* fix: have persisted files always return in descending data time order

* fix: sort record batches for test verification
2024-08-08 09:47:21 -04:00
Trevor Hilton 7474c0b3b4
feat: add `system.parquet_files` table (#25225)
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.

Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.

The files are accessed through the QueryableBuffer.

In addition, a test was added to check success and failure modes of the
new system table query.

Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
2024-08-08 08:46:26 -04:00
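
A sketch of the guard described above: queries against `system.parquet_files` must name a table. The real check inspects DataFusion filter expressions; this stand-in reduces that to an `Option`:

```rust
// The real check inspects DataFusion filter expressions; this stand-in
// reduces it to an Option to show the required-predicate behavior.
fn parquet_files_scan(table_name: Option<&str>) -> Result<Vec<String>, String> {
    let table = table_name
        .ok_or("queries to system.parquet_files must provide a table_name predicate")?;
    // In the real code the file list comes from the QueryableBuffer.
    Ok(vec![format!("{table}/0001.parquet")])
}

fn main() {
    assert!(parquet_files_scan(None).is_err());
    assert_eq!(
        parquet_files_scan(Some("cpu")).unwrap(),
        vec!["cpu/0001.parquet".to_string()]
    );
}
```
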
Trevor Hilton b0beab5b0c
feat: use host identifier prefix in object store paths (#25224)
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).

The persister retains the host identifier prefix, and uses it when
constructing paths.

The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.

The influxdb3 binary requires a new argument 'host-id' to be passed that
is used to specify the prefix.
2024-08-07 16:23:36 -04:00
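
A sketch of host-prefixed object store paths as described above; the exact path layout here is hypothetical, only the prefixing idea matches the commit:

```rust
// Hypothetical layout; only the host-identifier prefix itself is from
// the commit above.
struct Persister {
    host_id: String, // from the new required `host-id` argument
}

impl Persister {
    fn parquet_path(&self, db: &str, table: &str, file: &str) -> String {
        format!("{}/dbs/{}/{}/{}", self.host_id, db, table, file)
    }

    fn wal_path(&self, wal_file_number: u64) -> String {
        format!("{}/wal/{:011}.wal", self.host_id, wal_file_number)
    }
}

fn main() {
    let p = Persister { host_id: "host-a".to_string() };
    assert_eq!(p.parquet_path("foo", "bar", "0001.parquet"), "host-a/dbs/foo/bar/0001.parquet");
    assert_eq!(p.wal_path(42), "host-a/wal/00000000042.wal");
}
```
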
Paul Dix 43877beb15
fix: query bugs with buffer (#25213)
* fix: query bugs with buffer

This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.

I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combined across different states: only data in the buffer, only data in parquet files, and data across both. This showed two bugs: one where the parquet data was being doubled up (parquet chunks were being created in write buffer mod and in queryable buffer). The second was that the timestamp min/max on the table buffer would panic if the buffer was empty.

* refactor: PR feedback

* fix: fix wal replay and buffer snapshot

Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay wal data and that snapshots work properly with replayed data.

* fix: run cargo update to fix audit
2024-08-07 16:00:17 -04:00
Paul Dix 6aa6d924c6
fix: wal skip persist and notify if empty buffer (#25211)
* fix: wal skip persist and notify if empty buffer

This fixes the WAL so that it will skip persisting a file and notifying the file notifier if the wal buffer is empty.

* fix: fix last cache persist test
2024-08-05 18:08:11 -04:00
Paul Dix 3265960010
refactor: implement new wal and refactor write buffer (#25196)
* feat: refactor WAL and WriteBuffer

There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.

The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.

The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).

The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is getting written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.

Overall, this structure of a single WAL, with flushes and snapshots and chunks in the queryable buffer, led to a simpler setup for the write buffer overall. I was able to clear out quite a bit of code related to the old segment organization.

Fixes #25142 and fixes #25173

* refactor: address PR feedback

* refactor: wal to replay and background flush on new

* chore: remove stray println
2024-08-01 15:04:15 -04:00
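
A sketch of the flush/snapshot arithmetic the message quotes (1s flush interval, snapshot the oldest 600 files once 900 exist); the struct and method names are illustrative:

```rust
use std::time::Duration;

// Illustrative config; the defaults match the numbers quoted above.
struct WalConfig {
    flush_interval: Duration, // one WAL file per second by default
    snapshot_size: usize,     // files snapshotted at once (600 =~ 10 minutes)
    trigger_at: usize,        // total files that trigger a snapshot (900)
}

impl WalConfig {
    fn should_snapshot(&self, wal_file_count: usize) -> bool {
        wal_file_count >= self.trigger_at
    }
}

fn main() {
    let cfg = WalConfig {
        flush_interval: Duration::from_secs(1),
        snapshot_size: 600,
        trigger_at: 900,
    };
    assert!(!cfg.should_snapshot(899));
    assert!(cfg.should_snapshot(900));
    println!(
        "flush every {:?}; snapshot the oldest {} files at a time",
        cfg.flush_interval, cfg.snapshot_size
    );
}
```
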