This commit brings over `TableIndexCache` support from the enterprise
repo. It primarily focuses on efficient automatic cleanup of expired
gen1 parquet files based on retention policies and hard deletes. It
- Adds purge operations for tables and for retention-period-expired data.
- Integrates `TableIndexCache` into `PersistedFiles` to handle parquet
data deletion in the `ObjectDeleter` impl.
- Introduces a new background loop for applying data retention policies
with a 30m default interval.
- Includes comprehensive test coverage for cache operations, concurrent
access, persisted-snapshot-to-table-index-snapshot splits, purge
scenarios, object store path parsing, etc.
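The shape of the retention background loop can be sketched as follows. This is a std-only illustration with invented names (`spawn_retention_loop`, `apply_retention_periods`), not the actual handler, and the real default interval is 30m rather than the short interval a caller might pass here:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Illustrative stand-in for the retention work done via `TableIndexCache`.
fn apply_retention_periods(applied: &AtomicBool) {
    applied.store(true, Ordering::SeqCst);
}

/// Spawn a background loop that applies retention periods on a fixed
/// interval until `shutdown` is flagged.
fn spawn_retention_loop(
    interval: Duration,
    shutdown: Arc<AtomicBool>,
    applied: Arc<AtomicBool>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        while !shutdown.load(Ordering::SeqCst) {
            apply_retention_periods(&applied);
            thread::sleep(interval);
        }
    })
}
```

The real loop is async and hooks into server shutdown; the flag-and-join pattern here stands in for that wiring.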
## New Types
- `influxdb3_write::table_index::TableIndex`:
- A new trait that tracks gen1 parquet file metadata on a per-table
basis.
- `influxdb3_write::table_index::TableIndexSnapshot`:
- An incremental snapshot of added and removed gen1 parquet files.
- Created by splitting a `PersistedSnapshot` (i.e. a whole-database
snapshot) into individual table snapshots.
- Uses the existing snapshot sequence number.
- Removed from object store after successful aggregation into
`CoreTableIndex`.
- `influxdb3_write::table_index::CoreTableIndex`:
- Implements the `TableIndex` trait.
- Aggregates `TableIndexSnapshot`s.
- Not versioned -- assumes that we will migrate away from Parquet in
favor of PachaTree in the medium/long term.
- `influxdb3_write::table_index_cache::TableIndexCache`
- LRU cache
- Configurable via CLI parameters:
- Concurrency of object store operations.
- Maximum number of `CachedTableIndex` to allow before evicting
oldest entries.
- Entrypoint for handling conversion of `PersistedSnapshot` to
`TableIndexSnapshot` to `TableIndex`.
- `influxdb3_write::table_index_cache::CachedTableIndex`
- Implements `TableIndex` trait
- Accessing a `ParquetFile` or the `TableIndex` updates the entry's
last-access time.
- Stores a mutable `CoreTableIndex` as an implementation detail.
- `influxdb3_write::retention_period_handler::RetentionPeriodHandler`
- Runs a top-level background task that periodically applies retention
periods to gen1 files via the `TableIndexCache`.
- Configurable via CLI parameters:
- Retention period handling interval
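The cache behaviour described above (last-access refresh plus eviction of the oldest entries past a configured maximum) can be sketched in a few lines. `LruIndexCache` and its methods are invented for illustration, and a logical tick replaces real access timestamps to keep the sketch deterministic:

```rust
use std::collections::HashMap;

/// Sketch of the cache's LRU behaviour: every access refreshes an entry's
/// last-access marker, and inserting past `max_entries` evicts the least
/// recently used entry. Names are illustrative, not the real API.
struct LruIndexCache<V> {
    max_entries: usize,
    tick: u64,
    entries: HashMap<u64, (u64, V)>, // table id -> (last-access tick, value)
}

impl<V> LruIndexCache<V> {
    fn new(max_entries: usize) -> Self {
        Self { max_entries, tick: 0, entries: HashMap::new() }
    }

    fn get(&mut self, table_id: u64) -> Option<&V> {
        self.tick += 1;
        if let Some(entry) = self.entries.get_mut(&table_id) {
            entry.0 = self.tick; // refresh the last-access marker
            Some(&entry.1)
        } else {
            None
        }
    }

    fn insert(&mut self, table_id: u64, value: V) {
        self.tick += 1;
        if self.entries.len() >= self.max_entries && !self.entries.contains_key(&table_id) {
            // Evict the least recently accessed entry first.
            let oldest = self.entries.iter().min_by_key(|(_, (t, _))| *t).map(|(id, _)| *id);
            if let Some(oldest) = oldest {
                self.entries.remove(&oldest);
            }
        }
        self.entries.insert(table_id, (self.tick, value));
    }

    fn len(&self) -> usize {
        self.entries.len()
    }
}
```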
## Updated Types
- `influxdb3_write::persisted_files::PersistedFiles`
- Now holds an `Arc` reference to `TableIndexCache`
- Uses its `TableIndexCache` to apply hard deletion to all historical
gen1 files and update associated `CoreTableIndex` in the object
store.
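A rough sketch of the purge flow (drop expired files from in-memory state and collect their object-store paths for deletion); the types and names here are invented for illustration, not the real `PersistedFiles` API:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Gen1File {
    path: String,
    max_time_ns: i64,
}

/// Remove every file whose data is entirely older than `cutoff_ns` and
/// return the object-store paths that should be deleted afterwards.
fn purge_expired(files: &mut Vec<Gen1File>, cutoff_ns: i64) -> Vec<String> {
    let mut to_delete = Vec::new();
    files.retain(|f| {
        if f.max_time_ns < cutoff_ns {
            to_delete.push(f.path.clone());
            false // drop from in-memory state
        } else {
            true
        }
    });
    to_delete
}
```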
* feat: additional server setup for admin token recovery
- a new server has been added that serves only admin token regeneration
and does not require an admin token
- minor refactors: utilities such as the trace layer for metrics moved
into their own functions so they can be instantiated for both servers
- tests added to check that the new server regenerates tokens correctly
and that none of the other functionality is available on the admin
token recovery server
closes: https://github.com/influxdata/influxdb/issues/26330
* refactor: tidy ups + extra logging
* refactor: address PR feedback
- recovery server now only starts when `--admin-token-recovery-http-bind` is passed in
- as soon as regeneration is done, the recovery server shuts itself down
- the select! macro logic has been changed so that shutting down the
recovery server does not shut down the main server
* refactor: host url updates when regenerating token
- when `--regenerate` is passed in, `--host` still defaults to the main
server. To reach the recovery server, pass `--host` with the recovery
server address
download python tarball outside of the circle working dir, and set its
ownership to prevent the tarball from changing while it's being archived
cache `python-artifacts` in a path that doesn't change between pipeline
runs
This backports PR #1004 from influxdb_pro which adds comprehensive
module documentation to influxdb3_catalog explaining the catalog
persistence system and version migration instructions.
The documentation covers:
- Catalog persistence using log and snapshot files
- Version management and migration patterns
- Step-by-step instructions for adding new versions
- Important considerations for maintaining backward compatibility
This commit fixes queries that could come back as failures due to
improperly quoted table names in queries. It also fixes issues in
Enterprise where compaction would fail due to double escaped names.
The fix is relatively simple:
- Use the `parse` function instead of `from` for the `Path` type in the
`object_store` crate
- Quote-escape table names in queries
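The quote-escaping part of the fix amounts to the standard identifier-quoting rule: wrap the name in double quotes and double any embedded double quotes. A minimal sketch (the function name is illustrative, not the actual helper):

```rust
/// Quote-escape a table name for safe interpolation into a query:
/// wrap in double quotes and double any embedded double quotes.
fn quote_ident(name: &str) -> String {
    format!("\"{}\"", name.replace('"', "\"\""))
}
```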
Add a new system table that allows users to inspect the arguments
configured for processing engine triggers. The table has three columns:
- trigger_name: name of the trigger
- argument_key: key of the argument
- argument_value: value of the argument
Each trigger argument appears as a separate row in the table, making
it easy to query specific triggers or arguments.
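Querying one trigger's arguments might look like the following; the column and table names are from this commit, while the `system.` prefix is an assumption about where the table is exposed:

```sql
SELECT trigger_name, argument_key, argument_value
FROM system.processing_engine_trigger_arguments
WHERE trigger_name = 'my_trigger';
```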
Update snapshot files to include processing_engine_trigger_arguments table
Update test snapshots to include the new processing_engine_trigger_arguments
system table in:
- Table listing outputs
- Error messages showing valid table names
- Table summaries
This ensures tests properly reflect the new system table in their
expected outputs.
also changes `PBS_DATE` & `PBS_VERSION` caching to use
`<< pipeline.parameters >>` instead of `<< checksum >>` of a file, so
that `/tmp/pbs_version` can't change if two different jobs run on the
same runner at the same time
additionally, remove as many `*` as possible in the `deb` & `rpm`
validation, because it appears they get interpreted differently by the
amd & arm runners and containers
* feat: additional endpoint to route secure request added
* feat: added server builder with options instead of generics
* feat: amend existing types to use new builder
* refactor: remove builder completely and initialize `Server` directly
closes: https://github.com/influxdata/influxdb/issues/25903
* refactor: use CreateServerArgs to address lint error
* feat: Allow hard_deleted date of deleted schema to be updated
* feat: Include hard_deletion_date in `_internal` `databases` and `tables`
* feat: Unit tests for testing deleted databases
* chore: Default is now to hard-delete with default duration
* test: Update test names and assertions for new default hard deletion behavior
- Renamed delete_table_defaults_to_hard_delete_never to delete_table_defaults_to_hard_delete_default
- Renamed delete_database_defaults_to_hard_delete_never to delete_database_defaults_to_hard_delete_default
- Updated assertions to expect default deletion duration instead of None/Never
- Aligns with the change of HardDeletionTime default from Never to Default
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* chore: Remove TODO
* chore: PR feedback and other improvements
* Ensure system databases and tables schema specify a timezone for the
`hard_deletion_time` Timestamp columns (otherwise they display without
a timezone)
* `DELETE` using `default` delay is idempotent, so multiple requests
will not continue to update the `hard_deletion_time`
* Improved test coverage for these behaviours
---------
Co-authored-by: Claude <noreply@anthropic.com>
Before this commit, although `--object-store` is mandatory, this was not
reflected in the error messages. Examples are listed in issue 976.
This commit makes `object_store` explicitly required, which means the
error messages list `--object-store` as mandatory.
closes: https://github.com/influxdata/influxdb_pro/issues/976
Add bounds checking to prevent panic when WAL files are empty or
truncated. Introduces `--wal-replay-fail-on-error` flag to control
behavior when encountering corrupt WAL files during replay.
- Add WalFileTooSmall error for files missing required header bytes
- Validate minimum file size (12 bytes) before attempting
deserialization
- Make WAL replay configurable: skip corrupt files by default or fail
on error
- Add comprehensive tests for empty, truncated, and header-only files
Closes #26549
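The bounds check described above can be sketched as follows; the 12-byte minimum and the `WalFileTooSmall` name come from this commit, but the exact error shape and function are illustrative:

```rust
/// Error sketch mirroring the new `WalFileTooSmall` variant.
#[derive(Debug, PartialEq)]
enum WalError {
    WalFileTooSmall { size: usize, required: usize },
}

/// The WAL header occupies 12 bytes; reject anything shorter before
/// attempting deserialization so empty or truncated files cannot panic.
const MIN_WAL_FILE_SIZE: usize = 12;

fn check_wal_file(bytes: &[u8]) -> Result<(), WalError> {
    if bytes.len() < MIN_WAL_FILE_SIZE {
        return Err(WalError::WalFileTooSmall {
            size: bytes.len(),
            required: MIN_WAL_FILE_SIZE,
        });
    }
    Ok(())
}
```

With `--wal-replay-fail-on-error` unset, a `WalFileTooSmall` result would be logged and the file skipped; with it set, replay would abort.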
- `AbortableTaskRunner` and its friends in influxdb3_shutdown
- `ProcessUuidWrapper` and its friends in influxdb3_process
- change sleep time in test
They're not currently used in any of the core code, but they help when
syncing core back to enterprise
after moving to the self-hosted runners we've seen issues before and
after creating `all.tar.gz`. add `sync` before & after creating it to
make sure the files don't change during the relevant operations
* ci: move some circleci tasks to self-hosted runners
we have self-hosted circleci runners. migrating to them will reduce the
cost dramatically. this only moves `machine:` jobs. work needs to be
done on the hosts before migrating the `docker:` jobs
* test(ci): change some filters to run jobs that otherwise wouldn't run
in order to test them on the self-hosted runners
if / when they pass, this commit needs to be dropped before merging
* ci: cleanup package-validation, run verification in containers
run the package validation scripts in containers on the self-hosted
runners. this has the benefit of not needing terraform, and also
prevents issues cleaning up the install on the long-lived runners by
using an ephemeral container for the installation
* ci: reset filters
several filters were changed for testing. this puts them back to their
original values
This commit touches quite a few things, but the main changes that need
to be taken into account are:
- An update command has been added to the CLI. This could be further
extended in the future to update more than just Database retention
periods. The API call for that has been written in such a
way as to allow other qualities of the database to be updated
at runtime from one API call. For now it only allows the retention
period to be updated, but it could in theory allow us to rename a
database without needing to wipe things, especially with a stable ID
underlying everything.
- The create database command has been extended to allow creating a
database with a retention period. In tandem with the update command,
users can now assign or delete retention periods at will
- The ability to query catalog data about both databases and tables has
been added as well. This has been used in tests added in this commit,
but is also a fairly useful query when wanting to look at things such
as the series key. This could be extended to a CLI command as well if
we want to allow users to look at this data, but for now it's in the
_internal table.
With these changes a nice UX has been created to allow our customers to
work with retention periods.
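The "one API call, many updatable qualities" shape described above can be sketched with optional fields, where an absent field means "leave unchanged". The types and field names here are hypothetical, not the real API:

```rust
use std::time::Duration;

/// Hypothetical partial-update payload: each field is optional, so the
/// same call can later update other database qualities (e.g. a rename).
/// `Some(None)` clears the retention period; `None` leaves it untouched.
#[derive(Default)]
struct UpdateDatabaseRequest {
    retention_period: Option<Option<Duration>>,
}

struct Database {
    retention_period: Option<Duration>,
}

fn apply_update(db: &mut Database, req: &UpdateDatabaseRequest) {
    if let Some(rp) = req.retention_period {
        db.retention_period = rp;
    }
}
```

The nested `Option` distinguishes "don't touch this field" from "explicitly clear it", which is what lets users both assign and delete retention periods through one call.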
* Tracks the generation duration configuration for the write buffer
in the catalog.
* Still leverages the CLI arguments to set it on initial start up of
the server.
* Exposes a system table on the _internal database to view the configured
generation durations.
* This doesn't change how the gen1 duration is used by the write buffer.
* Adds several tests to check things work as intended.
Includes two main components:
* Removal of expired data from `PersistedFiles`.
* Modified `ChunkFilter` that precisely excludes expired data from query
results even if the expired data hasn't been removed from the object
store yet.
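The precise-exclusion idea can be illustrated by clamping the query's time range at the retention cutoff: even if expired files are still in the object store, rows older than the cutoff never reach the results. This is a sketch of the concept, not the real `ChunkFilter`:

```rust
/// Clamp a query's time range at the retention cutoff. Returns `None`
/// when the entire requested range is expired, so nothing is scanned.
fn effective_time_range(
    query_min_ns: i64,
    query_max_ns: i64,
    retention_cutoff_ns: Option<i64>,
) -> Option<(i64, i64)> {
    let min = match retention_cutoff_ns {
        Some(cutoff) => query_min_ns.max(cutoff),
        None => query_min_ns,
    };
    if min > query_max_ns {
        None // the whole requested range is expired
    } else {
        Some((min, query_max_ns))
    }
}
```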
---------
Co-authored-by: Michael Gattozzi <mgattozzi@influxdata.com>
This commit adds another subcommand to the load generator that allows
creating a constrained throughput of line protocol data shared between a
given number of writers. It uses a very naive approach to generate data,
which may contain some duplicates. However, it is useful when you need
to generate a very specific amount of data per writer. This approach has
been used to reproduce OOMs observed in perf tests.
This does not create a report like the other subcommands, and it also
does not observe any errors in the writes.
pro PR: https://github.com/influxdata/influxdb_pro/pull/886
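Sharing a fixed target throughput between writers boils down to splitting the per-interval line count evenly, with any remainder spread over the first writers. A sketch of that arithmetic (illustrative of the approach, not the actual CLI code):

```rust
/// Split `total_lines_per_interval` evenly across `writers`, spreading
/// the remainder over the first writers so the shares sum exactly.
fn lines_per_writer(total_lines_per_interval: u64, writers: u64) -> Vec<u64> {
    let base = total_lines_per_interval / writers;
    let rem = total_lines_per_interval % writers;
    (0..writers).map(|i| base + u64::from(i < rem)).collect()
}
```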
WAL replay currently loads _all_ WAL files concurrently, which can run
into OOM. This commit adds a CLI parameter `--wal-replay-concurrency-limit`
that allows the user to set a lower limit and run WAL replay again.
closes: https://github.com/influxdata/influxdb/issues/26481
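The effect of the limit can be sketched by replaying at most `limit` files at a time instead of all at once, which bounds peak memory. This std-threads version stands in for the real async implementation; names are illustrative:

```rust
use std::thread;

/// Replay WAL files in chunks of at most `limit` concurrent files.
/// `replay` stands in for per-file replay and returns a per-file count.
fn replay_with_limit(files: Vec<String>, limit: usize, replay: fn(&str) -> usize) -> usize {
    assert!(limit >= 1, "concurrency limit must be at least 1");
    let mut total = 0;
    for chunk in files.chunks(limit) {
        // Only `limit` files are in flight at once.
        let handles: Vec<_> = chunk
            .iter()
            .cloned()
            .map(|file| thread::spawn(move || replay(&file)))
            .collect();
        for handle in handles {
            total += handle.join().unwrap();
        }
    }
    total
}
```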
* feat: add retention period to catalog
* fix: handle humantime parsing error properly
* refactor: use new iox_http_util types
---------
Co-authored-by: Michael Gattozzi <mgattozzi@influxdata.com>
* refactor: Use iox_http_util::Request instead of hyper::Request
* refactor: Use iox_http_util::RequestBuilder instead of hyper::Request::builder
* refactor: Use iox_http_util::empty_request_body instead of Body::empty
* refactor: Use iox_http_util::bytes_to_request_body instead of Body::from
* refactor: Use http_body::Body instead of hyper::body::HttpBody
* refactor: Use iox_http_util::Response instead of hyper::Response
* refactor: Use iox_http_util::ResponseBuilder instead of hyper::Response::builder
* refactor: Use iox_http_util::empty_response_body instead of Body::empty
* refactor: Use iox_http_util::bytes_to_response_body instead of Body::from
* refactor: Use iox_http_util::stream_results_to_response_body instead of Body::wrap_stream
* refactor: Use the read_body_bytes_for_tests helper fn
Currently, when there is an OOM while snapshotting, the process keeps
going without crashing. This behaviour is observed in main (commit:
be25c6f52b). This means the WAL files keep growing to the point that
restarts can never replay all the files.
This happens because of how memory is distributed: in enterprise
especially, there is no need for an ingester to be allocated just 20%
for the datafusion memory pool (which runs the snapshot), as the parquet
cache is not in use at all. That 20% is too conservative for an
ingester, so instead of redistributing the memory settings based on the
mode the server is running in, this commit introduces a separate write
path executor with no bound on memory (it still uses `GreedyMemoryPool`
under the hood with `usize::MAX` as the upper limit). This means that
when memory is exhausted, the write path executor will now actually run
the process into an OOM and stop it instead of carrying on silently.
Also, it is important to let the snapshotting process use as much memory
as it needs; without that, the buffer will keep getting bigger and run
into OOM anyway.
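Why a greedy pool with a `usize::MAX` limit is effectively unbounded can be shown with a toy reservation counter: grows only fail once the running total would exceed the limit, which with `usize::MAX` never happens before the OS itself OOMs the process. This is a sketch of the idea, not the real DataFusion `GreedyMemoryPool`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Toy greedy pool: reservations succeed until `used` would pass `limit`.
struct GreedyPool {
    limit: usize,
    used: AtomicUsize,
}

impl GreedyPool {
    fn new(limit: usize) -> Self {
        Self { limit, used: AtomicUsize::new(0) }
    }

    /// Try to reserve `bytes`; roll back and report failure if the
    /// reservation would exceed the configured limit.
    fn try_grow(&self, bytes: usize) -> bool {
        let prev = self.used.fetch_add(bytes, Ordering::SeqCst);
        if prev.saturating_add(bytes) > self.limit {
            self.used.fetch_sub(bytes, Ordering::SeqCst);
            false
        } else {
            true
        }
    }
}
```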
closes: https://github.com/influxdata/influxdb/issues/26422