This commit brings over `TableIndexCache` support from the enterprise
repo. It primarily focuses on efficient automatic cleanup of expired
gen1 parquet files based on retention policies and hard deletes. It:
- Adds purge operations for tables and for data expired by retention
periods.
- Integrates `TableIndexCache` into `PersistedFiles` to handle parquet
data deletion in the `ObjectDeleter` impl.
- Introduces a new background loop for applying data retention policies
with a 30m default interval.
- Includes comprehensive test coverage for cache operations, concurrent
access, persisted snapshot to table index snapshot splits, purge
scenarios, object store path parsing, etc.
## New Types
- `influxdb3_write::table_index::TableIndex`:
- A new trait that tracks gen1 parquet file metadata on a per-table
basis (see the sketch after this list).
- `influxdb3_write::table_index::TableIndexSnapshot`:
- An incremental snapshot of added and removed gen1 parquet files.
- Created by splitting a `PersistedSnapshot` (i.e., a whole-database
snapshot) into individual table snapshots.
- Uses the existing snapshot sequence number.
- Removed from object store after successful aggregation into
`CoreTableIndex`.
- `influxdb3_write::table_index::CoreTableIndex`:
- Implementation of the `TableIndex` trait.
- Aggregation of `TableIndexSnapshot`s.
- Not versioned -- assumes that we will migrate away from Parquet in
favor of PachaTree in the medium/long term.
- `influxdb3_write::table_index_cache::TableIndexCache`:
- An LRU cache of table indexes.
- Configurable via CLI parameters:
- Concurrency of object store operations.
- Maximum number of `CachedTableIndex` entries to allow before
evicting the oldest.
- Entrypoint for handling the conversion of `PersistedSnapshot` to
`TableIndexSnapshot` to `TableIndex`.
- `influxdb3_write::table_index_cache::CachedTableIndex`:
- Implements `TableIndex` trait
- Accessing `ParquetFile` or `TableIndex` data updates the last access
time.
- Stores a mutable `CoreTableIndex` as an implementation detail.
- `influxdb3_write::retention_period_handler::RetentionPeriodHandler`:
- Runs a top-level background task that periodically applies retention
periods to gen1 files via the `TableIndexCache` (loop sketched below).
- Configurable via CLI parameters:
- Retention period handling interval
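A minimal sketch of the shapes described above, with illustrative field
names rather than the real definitions in `influxdb3_write::table_index`:

```rust
use std::collections::HashMap;

// Illustrative aliases; the real id and sequence types are richer.
type TableId = u32;
type SnapshotSequenceNumber = u64;

#[derive(Clone)]
struct ParquetFileMeta {
    path: String,
    min_time_ns: i64,
    max_time_ns: i64,
}

/// Per-table view of gen1 parquet file metadata.
trait TableIndex {
    fn parquet_files(&self) -> Vec<ParquetFileMeta>;
}

/// Incremental per-table snapshot: files added and removed since the
/// last snapshot sequence number.
struct TableIndexSnapshot {
    table_id: TableId,
    sequence: SnapshotSequenceNumber,
    added: Vec<ParquetFileMeta>,
    removed: Vec<String>, // object store paths of deleted gen1 files
}

/// Split a whole-database snapshot into one snapshot per table.
fn split_persisted_snapshot(
    sequence: SnapshotSequenceNumber,
    files_by_table: HashMap<TableId, Vec<ParquetFileMeta>>,
) -> Vec<TableIndexSnapshot> {
    files_by_table
        .into_iter()
        .map(|(table_id, added)| TableIndexSnapshot {
            table_id,
            sequence,
            added,
            removed: Vec::new(),
        })
        .collect()
}
```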
## Updated Types
- `influxdb3_write::persisted_files::PersistedFiles`
- Now holds an `Arc` reference to `TableIndexCache`
- Uses its `TableIndexCache` to apply hard deletion to all historical
gen1 files and update the associated `CoreTableIndex` in the object
store.
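The retention loop itself reduces to a periodic tick driving the cache.
A sketch, where `apply_retention_periods` is an assumed name standing in
for the real entry point:

```rust
use std::sync::Arc;
use std::time::Duration;

// Stub standing in for `influxdb3_write::table_index_cache::TableIndexCache`.
struct TableIndexCache;

impl TableIndexCache {
    // Assumed method name for illustration.
    async fn apply_retention_periods(&self) -> Result<(), String> {
        Ok(())
    }
}

async fn run_retention_loop(cache: Arc<TableIndexCache>, every: Duration) {
    // The handler is wired up with a 30m default interval.
    let mut ticker = tokio::time::interval(every);
    loop {
        ticker.tick().await;
        if let Err(e) = cache.apply_retention_periods().await {
            // A failed pass is logged and retried on the next tick.
            eprintln!("retention pass failed: {e}");
        }
    }
}
```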
This commit fixes queries that could fail due to improperly quoted
table names. It also fixes issues in Enterprise where compaction would
fail due to double-escaped names.
The fix is relatively simple:
- Use `parse`, not `from`, for the `Path` type in the object_store
crate
- Quote-escape table names in queries
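The hazard is that `From` percent-encodes its input, so a name that is
already encoded gets encoded a second time, while `Path::parse` validates
the string without re-encoding it. A small illustration:

```rust
use object_store::path::Path;

fn main() -> Result<(), object_store::path::Error> {
    // A table name that already contains a percent-encoded space.
    let raw = "dbs/mydb/my%20table/file.parquet";

    // `From` re-encodes the '%', producing "my%2520table": double escaped.
    let double_encoded = Path::from(raw);

    // `parse` keeps the string as-is and errors on genuinely invalid input.
    let parsed = Path::parse(raw)?;

    assert_ne!(double_encoded, parsed);
    Ok(())
}
```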
Add bounds checking to prevent panic when WAL files are empty or
truncated. Introduces a `--wal-replay-fail-on-error` flag to control
behavior when encountering corrupt WAL files during replay.
- Add a `WalFileTooSmall` error for files missing required header bytes
- Validate minimum file size (12 bytes) before attempting
deserialization
- Make WAL replay configurable: skip corrupt files by default or fail
on error
- Add comprehensive tests for empty, truncated, and header-only files
Closes #26549
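A sketch of the bounds check; the 12-byte minimum is from the commit,
while the function and field names are illustrative. With
`--wal-replay-fail-on-error` unset, the caller would log and skip a file
that fails this check; when set, the error propagates and replay stops.

```rust
// Minimum bytes needed before header deserialization is attempted.
const WAL_MIN_FILE_SIZE: usize = 12;

#[derive(Debug)]
enum WalError {
    WalFileTooSmall { actual: usize },
}

fn validate_wal_bytes(bytes: &[u8]) -> Result<(), WalError> {
    // Empty or truncated files fail fast here instead of panicking
    // partway through deserialization.
    if bytes.len() < WAL_MIN_FILE_SIZE {
        return Err(WalError::WalFileTooSmall { actual: bytes.len() });
    }
    Ok(())
}
```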
- `AbortableTaskRunner` and its friends in influxdb3_shutdown
- `ProcessUuidWrapper` and its friends in influxdb3_process
- change sleep time in test
They're not currently used in any of the core code, but they help when
syncing core back to enterprise
* Tracks the generation duration configuration for the write buffer
in the catalog.
* Still leverages the CLI arguments to set it on initial startup of
the server.
* Exposes a system table on the _internal database to view the configured
generation durations.
* This doesn't change how the gen1 duration is used by the write buffer.
* Adds several tests to check things work as intended.
Includes two main components:
* Removal of expired data from `PersistedFiles`.
* A modified `ChunkFilter` that precisely excludes expired data from query
results even if the expired data hasn't been removed from the object
store yet.
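The query-side exclusion reduces to a time-cutoff predicate. A minimal
sketch of the idea, not the actual `ChunkFilter` API:

```rust
/// A chunk whose newest row is older than `now - retention` is excluded
/// from query results even if its parquet file is still in object store.
fn chunk_passes_retention(
    chunk_max_time_ns: i64,
    now_ns: i64,
    retention_ns: Option<i64>,
) -> bool {
    match retention_ns {
        // No retention period configured: everything is visible.
        None => true,
        Some(r) => chunk_max_time_ns >= now_ns - r,
    }
}
```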
---------
Co-authored-by: Michael Gattozzi <mgattozzi@influxdata.com>
WAL replay currently loads _all_ WAL files concurrently, which can run
into OOM. This commit adds a CLI parameter `--wal-replay-concurrency-limit`
that allows the user to set a lower limit and re-run WAL replay.
closes: https://github.com/influxdata/influxdb/issues/26481
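A sketch of the bounded-concurrency idea using a semaphore, with a
stand-in for the per-file replay step:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;
use tokio::task::JoinSet;

async fn replay_all(paths: Vec<String>, limit: usize) {
    let semaphore = Arc::new(Semaphore::new(limit));
    let mut set = JoinSet::new();
    for path in paths {
        let semaphore = Arc::clone(&semaphore);
        set.spawn(async move {
            // At most `limit` WAL files are loaded in memory at once.
            let _permit = semaphore.acquire_owned().await.expect("semaphore closed");
            replay_one(path).await;
        });
    }
    while set.join_next().await.is_some() {}
}

async fn replay_one(_path: String) { /* load and apply one WAL file */ }
```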
* feat: add retention period to catalog
* fix: handle humantime parsing error properly
* refactor: use new iox_http_util types
---------
Co-authored-by: Michael Gattozzi <mgattozzi@influxdata.com>
* chore: update to latest core
* chore: allow CDLA permissive 2 license
* chore: update insta snapshot for new internal df tables
* test: update assertion in flightsql test
* fix: object store size hinting workaround in clap_blocks
Applied a workaround from upstream to strip size hinting from the object
store get request options. See:
https://github.com/influxdata/influxdb_iox/issues/13771
* fix: query_executor tests use object store size hinting workaround
* fix: insta snapshot test for show system summary command
* chore: update windows- crates for advisories
* chore: update to latest sha on influxdb3_core branch
* chore: update to latest influxdb3_core rev
* refactor: pr feedback
* refactor: do not use object store size hint layer
Instead of using the ObjectStoreStripSizeHint layer, just provide the
configuration to datafusion to disable the use of size hinting from
iox_query.
This is used in IOx and not relevant to Monolith.
* fix: use parquet cache for get_opts requests
* test: that the parquet cache is being hit from write buffer
* chore: Ensure Parquet sort key is serialised with snapshots
* chore: PR feedback, rename state variable to match intent
* chore: Use `Default` trait to implement `TableBuffer::new`
* chore: Fix change in file size with extra metadata
* chore: Add rustdoc for `sort_key` field
* refactor: remove unused Key type from write buffer
The write buffer had a Key variant for handling the experimental v3
write API that was phased out and removed from an earlier iteration
of influxdb3.
* refactor: remove key column type from last cache
Adds a metric to track total retried catalog operations due to the catalog
being updated elsewhere. Includes a test to check the counter increments
on basic catalog operations.
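The counter amounts to the following; the real code registers it with
the metrics registry rather than using a bare atomic:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static CATALOG_OPERATION_RETRIES: AtomicU64 = AtomicU64::new(0);

struct Conflict; // stand-in for the "catalog updated elsewhere" error

fn apply_with_retry(mut op: impl FnMut() -> Result<(), Conflict>) {
    loop {
        match op() {
            Ok(()) => return,
            Err(Conflict) => {
                // Another writer updated the catalog first: count and retry.
                CATALOG_OPERATION_RETRIES.fetch_add(1, Ordering::Relaxed);
            }
        }
    }
}
```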
* feat: generate persistable admin token
- this commit allows admin token creation using `influxdb3 create token
--admin` and also allows regenerating the admin token with `influxdb3
create token --admin --regenerate`
- `influxdb3_authz` crate hosts all low level token types and behaviour
- catalog log and snapshot types updated to use the token repo
- tests that relied on auth have been updated to use the new token
generation mechanism and new admin token generation/regeneration tests
have been added
* feat: list admin tokens
- allows listing admin tokens
- uses _internal db for token system table
- mostly test fixes due to _internal db
* chore: couple of updates to fix cargo audit job
- remove humantime ignore in deny.toml
- update pyo3 to use 0.24.1 (https://rustsec.org/advisories/RUSTSEC-2025-0020.html)
* chore: moved pyo3 version to root cargo.toml
* feat: add influxdb3_shutdown crate
provides basic wait methods for Unix/Windows OSes
* feat: graceful shutdown
* docs: add rust docs and test to influxdb3_shutdown
Added rustdoc comments to types and methods in the influxdb3_shutdown
crate as well as a test that shows the ordering of a shutdown.
This adds a sleep so that the parquet cache has a little bit of time to
populate before we make another request to the query buffer. Sometimes
it does not populate, and so we have a race condition where the new
request comes in and actually goes to the object store. This is fine in
practice because it would also take time to fill the cache in
production. I haven't really seen the test fail since adding this, but
triggering it in the first place is really hard and in practice does not
happen all that often.
This creates a `CatalogUpdateMessage` type that is used to send
`CatalogUpdate`s; this type performs the send on the oneshot `Sender` so
that the consumer of the message does not need to do so.
Subscribers to the catalog get a `CatalogSubscription`, which uses the
`CatalogUpdateMessage` type to ACK the message broadcast from the catalog.
This means that catalog message broadcast can fail, but this commit does
not provide any means of rolling back a catalog update.
A test was added to check that it works.
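A sketch of the ACK shape with a stand-in payload type; the point is
that acknowledging consumes the message, so the send on the oneshot
`Sender` happens exactly once and never falls to the subscriber:

```rust
use tokio::sync::oneshot;

struct CatalogUpdate; // stand-in; the real type carries the update contents

struct CatalogUpdateMessage {
    update: CatalogUpdate,
    ack: oneshot::Sender<()>,
}

impl CatalogUpdateMessage {
    /// Consume the message, sending the ACK back to the broadcaster.
    fn ack(self) {
        // An Err here means the broadcaster stopped waiting, which a
        // subscriber can safely ignore.
        let _ = self.ack.send(());
    }
}
```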
* feat(python): update to python-build-standalone 3.13.2
References:
- https://github.com/influxdata/influxdb/issues/26044
* fix: update fetch-python-standalone.bash to properly set 'executable'
* fix: use PYO3_CONFIG_FILE to find PYTHONHOME.
* fix: add comment about PYO3_CONFIG_FILE.
* fix: remove ensure_pyo3().
* fix: add some sleep so catalog is updated.
---------
Co-authored-by: Jackson Newhouse <jnewhouse@influxdata.com>
* refactor: use repository in catalog
The catalog was refactored to use identifiers on everything, and store
everything in a consistent structure. This structure makes use of the
`Repository` type that holds a `SerdeVecMap` of Id to Resource, along
with the next Id, and a bi-map of Id to resource name.
The `Repository` type is used at each level of the catalog where a
resource is stored.
This simplified the repeated logic for snapshotting, inserting, and
updating resources in the catalog, as well as the accessor methods for
getting by id or name and mapping names to ids and vice-versa.
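Roughly, the shape is as follows, with plain ordered maps standing in
for `SerdeVecMap` and the bi-map:

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

struct Repository<I: Ord + Copy, R> {
    next_id: I, // the next id to hand out on insert
    by_id: BTreeMap<I, Arc<R>>,
    name_to_id: BTreeMap<Arc<str>, I>,
    id_to_name: BTreeMap<I, Arc<str>>,
}

impl<I: Ord + Copy, R> Repository<I, R> {
    fn get_by_id(&self, id: &I) -> Option<Arc<R>> {
        self.by_id.get(id).cloned()
    }

    fn get_by_name(&self, name: &str) -> Option<Arc<R>> {
        self.name_to_id
            .get(name)
            .and_then(|id| self.by_id.get(id))
            .cloned()
    }

    fn name_for_id(&self, id: &I) -> Option<Arc<str>> {
        self.id_to_name.get(id).cloned()
    }
}
```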
In addition, the process for catalog batch verification and permit was
altered so that the permit process induces a retry if the catalog was
updated while the catalog batch function was producing the batch, i.e., if
the catalog sequence incremented while the caller was waiting for a permit.
This eliminated the need for verifying the catalog batch after it had been
generated, and allows for a single path to apply a catalog batch after it
has been persisted to object store.
This assumes that the generation of the catalog batch implies validity.
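The permit-time check reduces to a sequence comparison. A generic
sketch, with stand-in closures for the real catalog methods:

```rust
fn commit_with_retry<B>(
    sequence: impl Fn() -> u64,
    build: impl Fn() -> B,
    apply: impl Fn(B) -> Result<(), String>,
) -> Result<(), String> {
    loop {
        let seen = sequence();
        let batch = build();
        // ...the permit would be acquired here; if the sequence advanced
        // while we waited, the batch is stale and must be rebuilt.
        if sequence() != seen {
            continue;
        }
        return apply(batch);
    }
}
```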
Irrelevant tests were removed.
The last and distinct caches now rely more heavily on Ids, though the
processing engine still needs to switch over to using Ids for
starting/stopping triggers.
Continuing our work of creating versioned files before Beta, this commit
adds a `PersistedSnapshotVersion` which is used at the boundary of
serializing and deserializing so that we can easily upgrade to a newer
version and handle old versions without breaking things for users.
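A sketch of the boundary type; whether the real wrapper uses an
internally tagged representation like this is an assumption:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[serde(tag = "version")]
enum PersistedSnapshotVersion {
    #[serde(rename = "1")]
    V1(PersistedSnapshot),
}

// Adding a `V2` variant later lets new code read old files while writing
// the new layout.
#[derive(Serialize, Deserialize)]
struct PersistedSnapshot { /* fields elided */ }
```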
This commit restores the old behavior we had where new tags can be added
to a schema. To do this we made tags nullable, which brings us in line
with our other products. These changes were made in this PR:
https://github.com/influxdata/influxdb3_core/pull/41.
Changes to accomplish this new behavior were:
- Queries no longer return an empty string for null tags; instead they
are returned as null, or in many formats not at all.
- References to v1 for parsing and validating lines were removed as we
only have one path for doing so these days shared amongst all the
write_lp endpoints.
- We fixed failing tests that expected us to not be able to have new
tags or depended on that functionality indirectly
- Tests had their snapshot files updated to reflect that tags are
nullable by default
- Behavior for making a schema and checking whether a column can be null
were updated in a separate repo and integrated here
- The series_key is updated whenever we get a new tag added to the
schema
- New tests were added to show that you can add a new tag and that the
series key is updated as part of that
With the above changes we can now allow tags to be added again by users
like they would expect, especially with v1 and v2 apis and Telegraf
plugins.
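For illustration only: a nullable, dictionary-encoded tag column in an
Arrow schema looks like this, where the final `true` is what allows rows
written before the tag existed to read back as null:

```rust
use arrow::datatypes::{DataType, Field};

fn tag_field(name: &str) -> Field {
    Field::new(
        name,
        DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8)),
        true, // nullable
    )
}
```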
The distinct cache info for tables was not serialized in the catalog.
This fixes it, but also updates the catalog serialization to use the
snapshot type serialization from the Catalog type all the way down.
The Eq and PartialEq impls were removed from Catalog and InnerCatalog
as they were only used in tests, and were replaced by pure insta snapshot
tests.
A test was added to check that the distinct cache serializes/deserializes.
Partially fixes https://github.com/influxdata/influxdb/issues/24672
* move most HTTP req/resp types into `influxdb3_types` crate
* removes the use of locally-scoped request type structs from the `influxdb3_client` crate
* fix plugin dependency/package install bug
* it looks like the `DELETE` HTTP method was being used where `POST` was expected for `/api/v3/configure/plugin_environment/install_packages` and `/api/v3/configure/plugin_environment/install_requirements`
* feat: clear query buffer incrementally when snapshotting
This commit clears the query buffer incrementally as soon as a table's
data in the buffer is written into a parquet file and cached. Previously,
clearing the buffer happened at the end, in the background
* refactor: only clear buffer after adding to persisted files
* refactor: rename function
* feat: introduce parquet caching in query path
This commit scans the parquet files that will be used in a query to check
if they can be cached. There are three conditions to satisfy:
- not cached already
- cache has enough space
- file times overlap with the cache policy times
closes: https://github.com/influxdata/influxdb/issues/25906
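Those three conditions amount to a predicate like the following sketch
(names and types are illustrative):

```rust
use std::ops::Range;

fn should_cache(
    already_cached: bool,
    cache_free_bytes: u64,
    file_size_bytes: u64,
    file_times_ns: Range<i64>,
    policy_times_ns: Range<i64>,
) -> bool {
    // Half-open ranges overlap when each starts before the other ends.
    let overlaps = file_times_ns.start < policy_times_ns.end
        && policy_times_ns.start < file_times_ns.end;
    !already_cached && file_size_bytes <= cache_free_bytes && overlaps
}
```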
* refactor: rename env var
This speeds up snapshot persistence by taking all of the persist jobs
and running them simultaneously on a JoinSet. With this we can speed
things up by not waiting for each file to persist before the next one
starts. Instead, we can now run all the persist jobs at the same time
on the tokio runtime.
Closes #24658
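The fan-out is the standard `JoinSet` pattern; the job type and persist
function below are stand-ins:

```rust
use tokio::task::JoinSet;

struct PersistJob; // stand-in for the real per-file persist job

async fn persist_one(_job: PersistJob) { /* write one parquet file */ }

async fn persist_all(jobs: Vec<PersistJob>) {
    let mut set = JoinSet::new();
    for job in jobs {
        set.spawn(async move { persist_one(job).await });
    }
    // Wait for every persist task before completing the snapshot.
    while let Some(result) = set.join_next().await {
        result.expect("persist task panicked");
    }
}
```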
This refactors plugins and triggers so that plugins no longer need to be "created". Since plugins exist either in the configured local directory or on the GitHub repo, a user now only needs to create a trigger and reference the plugin filename.
Closes #25876