* fix(write): prevent writes to soft-deleted databases
Soft-deleted databases have been accepting write operations during
their deletion grace period. Users typically have no reason to write
data to a database scheduled for deletion.
This change adds validation in WriteValidator::initialize() to check if
a database is marked as deleted and rejects write attempts with a
DatabaseDeleted error. Querying deleted databases remains allowed for
data recovery purposes.
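A minimal sketch of the validation described above; the type and error names (`DatabaseSchema`, `WriteError`) are illustrative stand-ins, not the actual `WriteValidator` API:

```rust
// Hypothetical sketch: reject writes to a soft-deleted database.
// `DatabaseSchema` and `WriteError` are illustrative names only.
#[derive(Debug, PartialEq)]
enum WriteError {
    DatabaseDeleted,
}

struct DatabaseSchema {
    deleted: bool,
}

fn validate_writable(db: &DatabaseSchema) -> Result<(), WriteError> {
    // Soft-deleted databases stay queryable for data recovery, but must
    // not accept new writes during the deletion grace period.
    if db.deleted {
        return Err(WriteError::DatabaseDeleted);
    }
    Ok(())
}
```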
* fixes #26721
* chore: cannot write to soft deleted db is a 400 error
This commit makes the error for attempting to write to a soft deleted db
a bad request instead of the default error mapping which is a server
error.
* chore: fix test expectations because of the json output change
* fix(wal_replay): limit the number of wal files preloaded to num_cpus
Wal replay currently loads all the wal files into memory and decodes
them by default. If that's 10s or 100s of GB, it'll try to do so anyway,
potentially causing OOMs if it exceeds system memory. We likely keep
most of the speed from preloading but decrease chance of OOM by
preloading a more limited number of wal files. In the absence of an
option to directly limit the memory used in preload, we can use the
number of cpu cores available as a proxy. This will be the number of wal
files loaded to replay, which has to happen in order still. The current
recommendation is to use 10 if you encounter an OOM so let's use that as
the minimum if a specific value isn't set. The new logic is:
num_files_preloaded = user's choice, or if not set, max(10, num_cpus)
This should improve the experience restarting the server when there is a
lot of wal data.
* closes #26715
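The fallback logic above can be sketched as a small helper (function and parameter names are illustrative, not the actual code):

```rust
// Illustrative computation of the preload limit described above.
fn preload_limit(user_choice: Option<usize>, num_cpus: usize) -> usize {
    // Use the user's explicit setting if given; otherwise fall back to
    // max(10, num_cpus) — 10 being the recommended floor when hitting OOMs.
    user_choice.unwrap_or_else(|| num_cpus.max(10))
}
```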
* chore: update the default value in help-all
* fix(wal_replay): implement dynamic default value in clap derive for concurrency limit
- Add wal_replay_concurrency_limit_default() helper fn for max(num_cpus, 10)
- Change field type from Option<usize> to usize
- Update help text to clarify dynamic nature and OOM warning
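The dynamic default helper might look roughly like the following, shown standalone here (the real code wires it into clap via a derive attribute; the exact body is an assumption):

```rust
// Sketch of a dynamic default of max(num_cpus, 10), detecting CPU count
// via the standard library. The real helper may use a different source.
fn wal_replay_concurrency_limit_default() -> usize {
    std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1)
        .max(10) // never go below the documented OOM-safe floor of 10
}
```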
* chore: typo in help
* feat: upgrade to hyper 1
- use `hyper_util` for `TokioIo` and `ConnectionBuilder` from it
- replace `hybrid` service running grpc/http on same port with new
`UnifiedService`, which uses fewer generics
- swap `hyper::Client` for `hyper_util::client::legacy::Client`
- TLS changes, set ALPN protocol
- Test code changes: use `reqwest::{StatusCode, Method}` instead of `hyper::{StatusCode, Method}`
- rustls initialization (crypto provider) needs to be done explicitly now
- graceful shutdown + tidy ups
- move tokio-rustls to root Cargo.toml
helps: https://github.com/influxdata/influxdb_pro/issues/1076
* feat: upgrade all non-hyper libraries
- update arrow/datafusion/object_store/parquet related dependencies to align with `iox` (or `influxdb3_core`)
- removal of `datafusion::physical_plan::memory::MemoryExec`; the replacement is to use `MemorySourceConfig` and `DataSourceExec` directly
- move from `use parquet_file::storage::ParquetExecInput;` to `use parquet_file::storage::DataSourceExecInput;`
- object_store lifetime requirement changes, mostly switching to `'static`
- object_store crate deprecating `PutMultiPartOpts` in favour of `PutMultiPartOptions`
- `Range<usize>` to `Range<u64>` move in object_store. Most are updates to method signatures in impls, except the parquet_cache one, which needed a bit more attention
closes: https://github.com/influxdata/influxdb_pro/issues/1076
* refactor: address feedback
* feat: influxdb_schema system table
Add a system table to expose the InfluxDB schema for tables in a
database. This exposes the schema of time series tables using InfluxDB
terminology and data type definitions.
This commit brings over `TableIndexCache` support from the enterprise
repo. It primarily focuses on efficient automatic cleanup of expired
gen1 parquet files based on retention policies and hard deletes. It
- Adds purge operations for tables and retention period expired data.
- Integrates `TableIndexCache` into `PersistedFiles` for the sake of
parquet data deletion handling in `ObjectDeleter` impl.
- Introduces a new background loop for applying data retention policies
with a 30m default interval.
- Includes comprehensive test coverage for cache operations, concurrent
access, persisted snapshot to table index snapshot splits, purge
scenario, object store path parsing, etc.
## New Types
- `influxdb3_write::table_index::TableIndex`:
- A new trait that tracks gen1 parquet file metadata on a per-table
basis.
- `influxdb3_write::table_index::TableIndexSnapshot`:
- An incremental snapshot of added and removed gen1 parquet files.
- Created by splitting a `PersistedSnapshot` (i.e. a whole-database
snapshot) into individual table snapshots.
- Uses the existing snapshot sequence number.
- Removed from object store after successful aggregation into
`CoreTableIndex`.
- `influxdb3_write::table_index::CoreTableIndex`:
- Implementation of the `TableIndex` trait.
- Aggregation of `TableIndexSnapshot`s.
- Not versioned -- assumes that we will migrate away from Parquet in
favor of PachaTree in the medium/long term.
- `influxdb3_write::table_index_cache::TableIndexCache`
- LRU cache
- Configurable via CLI parameters:
- Concurrency of object store operations.
- Maximum number of `CachedTableIndex` to allow before evicting
oldest entries.
- Entrypoint for handling conversion of `PersistedSnapshot` to
`TableIndexSnapshot` to `TableIndex`
- `influxdb3_write::table_index_cache::CachedTableIndex`
- Implements `TableIndex` trait
- Accessing ParquetFile or TableIndex causes last access time to be
updated.
- Stores a mutable `CoreTableIndex` as implementation detail.
- `influxdb3_write::retention_period_handler::RetentionPeriodHandler`
- Runs a top-level background task that periodically applies retention
periods to gen1 files via the `TableIndexCache`.
- Configurable via CLI parameters:
- Retention period handling interval
## Updated Types
- `influxdb3_write::persisted_files::PersistedFiles`
- Now holds an `Arc` reference to `TableIndexCache`
- Uses its `TableIndexCache` to apply hard deletion to all historical
gen1 files and update associated `CoreTableIndex` in the object
store.
* feat: additional server setup for admin token recovery
- a new server that only serves admin token regeneration, without requiring
an admin token, has been added
- minor refactors to allow reuse of some of the utilities like trace
layer for metrics moved to their own functions to allow them to be
instantiated for both servers
- tests added to check that the new server works correctly for
regenerating the token, and that none of the other functionality is
available on the admin token recovery server
closes: https://github.com/influxdata/influxdb/issues/26330
* refactor: tidy ups + extra logging
* refactor: address PR feedback
- recovery server now only starts when `--admin-token-recovery-http-bind` is passed in
- as soon as regeneration is done, the recovery server shuts itself down
- the select! macro logic has been changed such that shutting down
recovery server does not shutdown the main server
* refactor: host url updates when regenerating token
- when `--regenerate` is passed in, `--host` still defaults to the main
server. To reach the recovery server, `--host` must be passed with the
recovery server address
Add a new system table that allows users to inspect the arguments
configured for processing engine triggers. The table has three columns:
- trigger_name: name of the trigger
- argument_key: key of the argument
- argument_value: value of the argument
Each trigger argument appears as a separate row in the table, making
it easy to query specific triggers or arguments.
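The one-row-per-argument flattening can be sketched as follows; the types are illustrative and not the actual system table implementation:

```rust
// Sketch: flatten each trigger's (key, value) arguments into
// (trigger_name, argument_key, argument_value) rows, one per argument,
// mirroring the three-column table described above.
fn argument_rows(
    triggers: &[(String, Vec<(String, String)>)],
) -> Vec<(String, String, String)> {
    triggers
        .iter()
        .flat_map(|(name, args)| {
            args.iter()
                .map(move |(k, v)| (name.clone(), k.clone(), v.clone()))
        })
        .collect()
}
```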
Update snapshot files to include processing_engine_trigger_arguments table
Update test snapshots to include the new processing_engine_trigger_arguments
system table in:
- Table listing outputs
- Error messages showing valid table names
- Table summaries
This ensures tests properly reflect the new system table in their
expected outputs.
* feat: additional endpoint to route secure request added
* feat: added server builder with options instead of generics
* feat: amend existing types to use new builder
* refactor: remove builder completely and initialize `Server` directly
closes: https://github.com/influxdata/influxdb/issues/25903
* refactor: use CreateServerArgs to address lint error
* feat: Allow hard_deleted date of deleted schema to be updated
* feat: Include hard_deletion_date in `_internal` `databases` and `tables`
* feat: Unit tests for testing deleted databases
* chore: Default is now to hard-delete with default duration
* test: Update test names and assertions for new default hard deletion behavior
- Renamed delete_table_defaults_to_hard_delete_never to delete_table_defaults_to_hard_delete_default
- Renamed delete_database_defaults_to_hard_delete_never to delete_database_defaults_to_hard_delete_default
- Updated assertions to expect default deletion duration instead of None/Never
- Aligns with the change of HardDeletionTime default from Never to Default
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* chore: Remove TODO
* chore: PR feedback and other improvements
* Ensure system databases and tables schema specify a timezone for the
`hard_deletion_time` Timestamp columns (otherwise they display without
a timezone)
* `DELETE` using `default` delay is idempotent, so multiple requests
will not continue to update the `hard_deletion_time`
* Improved test coverage for these behaviours
---------
Co-authored-by: Claude <noreply@anthropic.com>
Add bounds checking to prevent panic when WAL files are empty or
truncated. Introduces `--wal-replay-fail-on-error` flag to control
behavior when encountering corrupt WAL files during replay.
- Add WalFileTooSmall error for files missing required header bytes
- Validate minimum file size (12 bytes) before attempting
deserialization
- Make WAL replay configurable: skip corrupt files by default or fail
on error
- Add comprehensive tests for empty, truncated, and header-only files
Closes #26549
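A minimal sketch of the size check described above; the error name follows the commit message, while the function name and exact shape are assumptions:

```rust
// Illustrative pre-deserialization bounds check: a WAL file smaller than
// the required header bytes (12, per the text above) is rejected instead
// of panicking during deserialization.
#[derive(Debug, PartialEq)]
enum ReplayError {
    WalFileTooSmall { size: usize },
}

const MIN_WAL_FILE_SIZE: usize = 12;

fn check_wal_file_size(bytes: &[u8]) -> Result<(), ReplayError> {
    if bytes.len() < MIN_WAL_FILE_SIZE {
        return Err(ReplayError::WalFileTooSmall { size: bytes.len() });
    }
    Ok(())
}
```

Depending on the `--wal-replay-fail-on-error` flag, a caller would either skip a file that fails this check or abort replay.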
- `AbortableTaskRunner` and its friends in influxdb3_shutdown
- `ProcessUuidWrapper` and its friends in influxdb3_process
- change sleep time in test
They're not currently used in any of the core code, but this helps when
syncing core back to enterprise
This commit touches quite a few things, but the main changes that need
to be taken into account are:
- An update command has been added to the CLI. This could be further
extended in the future to update more than just Database retention
periods. The API call for that has been written in such a
way as to allow other qualities of the database to be updated
at runtime from one API call. For now it only allows the retention
period to be updated, but it could in theory allow us to rename a
database without needing to wipe things, especially with a stable ID
underlying everything.
- The create database command has been extended to allow
its creation with a retention period. In tandem with the update
command users can now assign or delete retention periods at will
- The ability to query catalog data about both databases and tables has
been added as well. This has been used in tests added in this commit,
but is also a fairly useful query when wanting to look at things such
as the series key. This could be extended to a CLI command as well if
we want to allow users to look at this data, but for now it's in the
_internal table.
With these changes a nice UX has been created to allow our customers to
work with retention periods.
* Tracks the generation duration configuration for the write buffer
in the catalog.
* Still leverages the CLI arguments to set it on initial start up of
the server.
* Exposes a system table on the _internal database to view the configured
generation durations.
* This doesn't change how the gen1 duration is used by the write buffer.
* Adds several tests to check things work as intended.
Includes two main components:
* Removal of expired data from `PersistedFiles`.
* Modified `ChunkFilter` that precisely excludes expired data from query
results even if the expired data hasn't been removed from the object
store yet.
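The query-time exclusion can be sketched as a per-row predicate; this is a simplified illustration, not the actual `ChunkFilter` implementation:

```rust
// Sketch: a row is visible only if it falls within the retention period,
// even when the expired data still exists in the object store.
fn within_retention(timestamp_ns: i64, now_ns: i64, retention_ns: Option<i64>) -> bool {
    match retention_ns {
        // Keep only rows newer than the retention cutoff.
        Some(period) => timestamp_ns >= now_ns - period,
        // No retention period configured: keep everything.
        None => true,
    }
}
```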
---------
Co-authored-by: Michael Gattozzi <mgattozzi@influxdata.com>
WAL replay currently loads _all_ WAL files concurrently, running into
OOM. This commit adds a CLI parameter `--wal-replay-concurrency-limit`
that would allow the user to set a lower limit and run WAL replay again.
closes: https://github.com/influxdata/influxdb/issues/26481
* feat: add retention period to catalog
* fix: handle humantime parsing error properly
* refactor: use new iox_http_util types
---------
Co-authored-by: Michael Gattozzi <mgattozzi@influxdata.com>
* refactor: Use iox_http_util::Request instead of hyper::Request
* refactor: Use iox_http_util::RequestBuilder instead of hyper::Request::builder
* refactor: Use iox_http_util::empty_request_body instead of Body::empty
* refactor: Use iox_http_util::bytes_to_request_body instead of Body::from
* refactor: Use http_body::Body instead of hyper::body::HttpBody
* refactor: Use iox_http_util::Response instead of hyper::Response
* refactor: Use iox_http_util::ResponseBuilder instead of hyper::Response::builder
* refactor: Use iox_http_util::empty_response_body instead of Body::empty
* refactor: Use iox_http_util::bytes_to_response_body instead of Body::from
* refactor: Use iox_http_util::stream_results_to_response_body instead of Body::wrap_stream
* refactor: Use the read_body_bytes_for_tests helper fn
* chore: update to latest core
* chore: allow CDLA permissive 2 license
* chore: update insta snapshot for new internal df tables
* test: update assertion in flightsql test
* fix: object store size hinting workaround in clap_blocks
Applied a workaround from upstream to strip size hinting from the object
store get request options. See:
https://github.com/influxdata/influxdb_iox/issues/13771
* fix: query_executor tests use object store size hinting workaround
* fix: insta snapshot test for show system summary command
* chore: update windows- crates for advisories
* chore: update to latest sha on influxdb3_core branch
* chore: update to latest influxdb3_core rev
* refactor: pr feedback
* refactor: do not use object store size hint layer
Instead of using the ObjectStoreStripSizeHint layer, just provide the
configuration to datafusion to disable the use of size hinting from
iox_query.
This is used in IOx and not relevant to Monolith.
* fix: use parquet cache for get_opts requests
* test: that the parquet cache is being hit from write buffer
* chore: Ensure Parquet sort key is serialised with snapshots
* chore: PR feedback, rename state variable to match intent
* chore: Use `Default` trait to implement `TableBuffer::new`
* chore: Fix change in file size with extra metadata
* chore: Add rustdoc for `sort_key` field
* feat: `/ping` API contains versioning headers
Further, the product version can be modified by updating the metadata in
the `influxdb3_process` `Cargo.toml`.
* chore: PR feedback
* chore: placate linter
* fix: do not allow operator token from being deleted
closes: https://github.com/influxdata/influxdb_pro/issues/819
* refactor: address PR feedback
* fix: add a word and clarifying colon
* fix: failing test
---------
Co-authored-by: Peter Barnett <peter.barnett03@gmail.com>
* feat: allow health,ping,metrics to opt out of auth
This commit introduces `--disable-authz <DISABLE_AUTHZ_RESOURCES>`. The
options for `DISABLE_AUTHZ_RESOURCES` are health, ping and metrics. By
default all these resources will be guarded.
closes: https://github.com/influxdata/influxdb_pro/issues/774
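Parsing of the accepted resource values might look roughly like this; the enum and function names are hypothetical, with the three accepted strings taken from the text above:

```rust
// Sketch: the three resources that may opt out of authz per the commit
// message. Names here are illustrative, not the actual types.
#[derive(Debug, PartialEq, Clone, Copy)]
enum UnauthedResource {
    Health,
    Ping,
    Metrics,
}

fn parse_disable_authz(s: &str) -> Option<UnauthedResource> {
    match s {
        "health" => Some(UnauthedResource::Health),
        "ping" => Some(UnauthedResource::Ping),
        "metrics" => Some(UnauthedResource::Metrics),
        _ => None, // everything else remains guarded
    }
}
```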
* chore: update influxdb3/src/commands/helpers.rs
space after comma in help text
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* chore: update influxdb3/src/help/serve.txt
space after comma in help text
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* chore: update influxdb3/src/help/serve_all.txt
space after comma in help text
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* refactor: use statics to reduce clones/copies
---------
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: support `Basic $TOKEN` for all apis
closes: https://github.com/influxdata/influxdb/issues/25833
* refactor: address PR feedback to return MalformedRequest error when `:` is used more than once in user-pass pair
* refactor: change the message sent back for malformed auth header
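The malformed-request rule above (more than one `:` in the decoded credentials is rejected) can be sketched as follows; base64 decoding is omitted, and the function name is an assumption:

```rust
// Sketch: split decoded Basic credentials into (user, pass), rejecting
// inputs where ':' appears more than once, per the PR feedback above.
fn split_basic_credentials(decoded: &str) -> Result<(&str, &str), &'static str> {
    let mut parts = decoded.split(':');
    match (parts.next(), parts.next(), parts.next()) {
        // Exactly one ':' separator: a well-formed user-pass pair.
        (Some(user), Some(pass), None) => Ok((user, pass)),
        _ => Err("malformed authorization header"),
    }
}
```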
This commit adds support for CORS by modifying our requests to make
preflight checks valid and to handle responses containing the necessary
headers for browsers to access the data they need. We keep what we
accept open, since these are essentially ordinary requests to the server,
and we gate the requests with an auth token.
Closes #26313
This commit allows users to set a minimum TLS version. The default is
1.2. The choices are TLS 1.2 or TLS 1.3, which can be set via env var:
INFLUXDB3_TLS_MINIMUM_VERSION="tls-1.2"
or
INFLUXDB3_TLS_MINIMUM_VERSION="tls-1.3"
and for the command line flag for the serve command:
--tls-minimum-version tls-1.2
or
--tls-minimum-version tls-1.3
With this, users have more fine-grained control over which TLS version
they require.
Closes #26255
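Parsing the two documented values might look roughly like this; the accepted strings and the 1.2 default come from the text above, while the enum and function names are illustrative:

```rust
// Sketch: parse the documented minimum TLS version values, defaulting
// to 1.2 when neither the flag nor the env var is set.
#[derive(Debug, PartialEq, Clone, Copy)]
enum TlsMinVersion {
    Tls1_2,
    Tls1_3,
}

fn parse_tls_minimum_version(value: Option<&str>) -> Result<TlsMinVersion, String> {
    match value {
        None | Some("tls-1.2") => Ok(TlsMinVersion::Tls1_2),
        Some("tls-1.3") => Ok(TlsMinVersion::Tls1_3),
        Some(other) => Err(format!("unknown TLS version: {other}")),
    }
}
```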