Although the `format` parameter in the request is used, the value coming
through the `Accept` header is parsed earlier, so when that header lookup
fails an error is returned (`InvalidMimeType`).
In this commit, there are extra checks to allow the default `Accept`
header values that browsers send by falling back to `json`.
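Roughly, the fallback looks like this (a hedged sketch with hypothetical `Format`/`Error` types, not the actual handler code):
```
#[derive(Debug)]
enum Format { Json, Csv }

#[derive(Debug)]
enum Error { InvalidMimeType(String) }

// Sketch: absent, wildcard, or browser-style Accept headers fall back to JSON
// instead of producing InvalidMimeType.
fn format_from_accept(accept: Option<&str>) -> Result<Format, Error> {
    match accept {
        None | Some("") | Some("*/*") => Ok(Format::Json),
        Some(s) if s.contains("application/json") => Ok(Format::Json),
        Some(s) if s.contains("text/csv") => Ok(Format::Csv),
        // typical browser default: "text/html,application/xhtml+xml,...;q=0.8"
        Some(s) if s.starts_with("text/html") => Ok(Format::Json),
        Some(other) => Err(Error::InvalidMimeType(other.to_string())),
    }
}
```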
closes: https://github.com/influxdata/influxdb/issues/25874
* feat(processing_engine): Add cron plugins and triggers to the processing engine.
* feat(processing_engine): switch from 'cron plugin' to 'schedule plugin', use TimeProvider.
* feat(processing_engine): add test for testing scheduled plugins.
* feat: improve plugin logging interface
Updates the plugin log functions so they can take any number of Python objects which will be converted into a single log line string.
Closes #25847
* refactor: update on PR feedback
* feat: return better plugin execution errors
This sets up the framework for fleshing out more useful plugin execution errors that get returned to the user during testing. We'll also want to capture these for logging in system tables.
Also fixes a test that was broken in previous commit on time limits. Didn't show up because of the feature flag.
* fix: compile errors without system-py feature
This updates the v1 /query API handler to handle InfluxDB v1's unique
query response structure when GROUP BY clauses are provided.
The distinction is in the addition of a "tags" field to the emitted series
data that contains a map of the GROUP BY tags along with their distinct
values associated with the data in the "values" field.
This required splitting the QueryExecutor into two query paths for InfluxQL
and SQL, as this allowed for handling InfluxQL query parsing in advance
of query planning.
A set of snapshot tests were added to check that it all works.
This commit sets InfluxDB 3 Core to have a 72 hour limit for queries and
writes. What this means is that writes that contain historical data
older than 72 hours will be rejected and queries will filter out data
older than 72 hours. Core is intended to be a database for recent
time series data, and performance over data older than 72 hours will
degrade without a garbage collector, which is a core feature of InfluxDB 3
Enterprise. InfluxDB 3 Enterprise does not have this write or query limit
in place.
Note that this does *not* mean older data is deleted. Older data is
still accessible in object storage as Parquet files that can still be
used in other services and analyzed with dataframe libraries like pandas
and polars.
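As a rough sketch of the cutoff logic (names here are illustrative, not the actual functions in the write/query path):
```
use std::time::Duration;

// 72-hour window that Core accepts writes for and filters queries to
const WINDOW: Duration = Duration::from_secs(72 * 60 * 60);

// A row is accepted (or surfaced by queries) only if its timestamp is newer
// than the cutoff.
fn within_window(row_time_ns: i64, now_ns: i64) -> bool {
    let cutoff_ns = now_ns - WINDOW.as_nanos() as i64;
    row_time_ns >= cutoff_ns
}
```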
This commit does a few things:
- Uses timestamps in the year 2065 for tests as these should not break
for longer than many of us will be working in our lifetimes. This is
only needed for the integration tests as other tests use the
MockProvider for time.
- Filters the buffer and persisted files to only show data newer than
3 days ago
- Fixes the integration tests to work with the fact that writes older
than 3 days are rejected
This changes the CLI arg `host-id` to `writer-id` to more accurately
indicate meaning.
This change also goes through the codebase and changes struct fields,
methods, and variables to use the term `writer_id` or `writer_identifier_prefix`
instead of `host_id` etc., to make the meaning clear in the code.
This also changes the catalog serialization to use the field `writer_id`
instead of `host_id`, which is a breaking change.
This updates the create plugin API and CLI so that it doesn't take the plugin code, but instead takes a file name of a file that must be in the plugin-dir of the server. It returns an error if the plugin-dir is not configured or if the file isn't there.
Also updates the WAL and catalog so that it doesn't store the plugin code directly. The code is read from disk one time when the plugin runs.
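A hedged sketch of the lookup described above (function name and error strings are illustrative):
```
use std::path::{Path, PathBuf};

// Resolve the plugin file name against the configured plugin-dir; error if the
// dir is not configured or the file does not exist.
fn resolve_plugin_file(plugin_dir: Option<&Path>, file_name: &str) -> Result<PathBuf, String> {
    let dir = plugin_dir.ok_or_else(|| "plugin-dir is not configured".to_string())?;
    let path = dir.join(file_name);
    if !path.is_file() {
        return Err(format!("plugin file not found: {}", path.display()));
    }
    Ok(path)
}
```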
Closes #25797
* feat: introduce num wal files to keep
This commit allows a configurable number of WAL files to be left behind
in object store. This is necessary as Enterprise replicas rely on these files.
closes: https://github.com/influxdata/influxdb/issues/25788
* refactor: address PR feedback
* refactor: address PR comment
This allows the user to specify arguments that will be passed to each execution of a wal plugin trigger. The CLI test was updated to check this end to end.
Closes #25655
This updates the WAL so that it can have new file notifiers added that will get updated when the wal flushes. The processing engine now implements the WALNotifier trait.
I've updated the CLI test for creating a trigger to run and end-to-end test that defines a plugin, creates a trigger, writes data into the database, triggering the plugin, which writes summary statistics back into the database in a different table. The test queries the destination table to confirm that the plugin worked.
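A hedged sketch of the notifier shape (trait and type names here are assumptions, not the exact definitions in the WAL crate):
```
use std::sync::Arc;

// Contents of a flushed WAL file; the real type carries write batches and
// catalog ops rather than plain strings.
struct WalContents {
    ops: Vec<String>,
}

// Notifier the WAL calls after each flush.
trait WalFileNotifier: Send + Sync {
    fn notify(&self, flushed: Arc<WalContents>);
}

struct ProcessingEngine;

impl WalFileNotifier for ProcessingEngine {
    fn notify(&self, flushed: Arc<WalContents>) {
        // run any WAL triggers registered for the databases in this flush
        println!("running triggers over {} ops", flushed.ops.len());
    }
}
```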
* feat: Update WAL plugin for new structure
This ended up being a very large change set. In order to get around circular dependencies, the processing engine had to be moved into its own crate, which I think is ultimately much cleaner.
Unfortunately, this required changing a ton of things. There's more testing and things to add on to this, but I think it's important to get this through and build on it.
Importantly, the processing engine no longer resides inside the write buffer. Instead, it is attached to the HTTP server. It is now able to take a query executor, write buffer, and WAL so that the full range of functionality of the server can be exposed to the plugin API.
There are a bunch of system-py feature flags littered everywhere, which I'm hoping we can remove soon.
* refactor: PR feedback
This ended up being a couple things rolled into one. In order to add a query API to the Python plugin, I had to pull the QueryExecutor trait out of the server crate into a place where the python crate could use it.
This implements the query API, but also fixes up the WAL plugin test CLI a bit. I've added a test in the CLI section so that it shows end-to-end operation of the WAL plugin test API and exercise of the entire Plugin API.
Closes #25757
This commit allows checking memory in the background and force-snapshotting
if the query buffer size exceeds the memory threshold. This hooks into
the `force_flush_buffer` function to achieve it.
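A minimal sketch of that background check, assuming a shared byte counter and a `force_flush_buffer` hook (names are illustrative):
```
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;

// Periodically compare the buffer size against the memory threshold and force
// a snapshot when it is exceeded.
async fn background_mem_check(buffer_bytes: Arc<AtomicUsize>, threshold: usize) {
    let mut ticker = tokio::time::interval(Duration::from_secs(10));
    loop {
        ticker.tick().await;
        if buffer_bytes.load(Ordering::Relaxed) > threshold {
            force_flush_buffer().await;
        }
    }
}

async fn force_flush_buffer() {
    // in the real code this snapshots the query buffer to free memory
}
```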
closes: https://github.com/influxdata/influxdb/issues/25685
* fix: get create token subcommand to work again
In #25737 token creation was broken as a client would attempt to be
made, but hit an unreachable branch. The fix was to just make the branch
create a Client that won't do anything. We also add a test to make sure
that if there is a regression we'll catch it in the future.
Closes #25764
* fix: make client turn a duration into seconds/u64
This commit fixes the creation of a metadata cache made via the cli by
actually sending a time in seconds as a u64 and not as a duration. With
that we can now make metadata caches again when the `--max-age` flag is
specified.
This commit also adds a regression test for the CLI so that we can catch
this again in the future.
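The core of the fix, as a sketch: convert the parsed `--max-age` duration into whole seconds before sending it.
```
use std::time::Duration;

// Send max-age as a u64 number of seconds rather than a serialized Duration.
fn max_age_seconds(max_age: Duration) -> u64 {
    max_age.as_secs()
}

fn main() {
    assert_eq!(max_age_seconds(Duration::from_secs(3600)), 3600);
}
```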
Closes #25749
This changes the `/query` API handler so that the parameters can be passed in either the request URI or in the request body for either a `GET` or `POST` request.
Parameters can be specified in the URI, the body, or both; if they are specified in both places, those in the body will take precedence.
Error variants in the HTTP server code related to missing request parameters were updated to return `400` status.
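A hedged sketch of the merge order (the real parameter types are richer than plain strings):
```
use std::collections::HashMap;

// Merge URI query-string parameters with body parameters; body values win.
fn merge_params(
    uri_params: HashMap<String, String>,
    body_params: HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged = uri_params;
    merged.extend(body_params); // overwrites duplicates from the URI
    merged
}
```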
* fix: bind to correct port for e2e tests
This also fixes up some log messages on server start for naming
* chore: do not pass value in TEST_LOG env var to CI tests
_Follows #25737 (keeping in draft until that merges)_
Closes #25745
This PR provides both a CLI and underlying API for listing databases in the InfluxDB 3 Core server. Details are below.
There was already a method to list databases for the query executor for InfluxQL; this works by exposing that via the `HttpApi` in `influxdb3_server`.
However, one thing that we may address is that the query result for that uses `iox::database` as the column name. If we are removing references to `iox`, then we may want to just have it as `database`. I left it as is, for now, because I wanted to keep code churn down and wasn't sure why we use that prefix in the first place for the `SHOW DATABASES` and `SHOW RETENTION POLICIES` InfluxQL queries.
## Details
### CLI
This PR provides the `influxdb3 show` CLI:
```
influxdb3 show -h
List resources on the InfluxDB 3 Core server

Usage: influxdb3 show <COMMAND>

Commands:
  databases  List databases
  help       Print this message or the help of the given subcommand(s)

Options:
  -h, --help  Print help information
```
with the ability to list databases:
```
influxdb3 show databases -h
List databases

Usage: influxdb3 show databases [OPTIONS]

Options:
  -H, --host <HOST_URL>         The host URL of the running InfluxDB 3 Core server [env: INFLUXDB3_HOST_URL=] [default: http://127.0.0.1:8181]
      --token <AUTH_TOKEN>      The token for authentication with the InfluxDB 3 Core server [env: INFLUXDB3_AUTH_TOKEN=]
      --show-deleted            Include databases that were marked as deleted in the output
      --format <OUTPUT_FORMAT>  The format in which to output the list of databases [default: pretty] [possible values: pretty, json, json_lines, csv]
  -h, --help                    Print help information
```
Since this uses the query executor, we can pass a `--format` argument to get the output as JSON, CSV, or JSONL, but by default, it uses the `pretty` format:
```
influxdb3 show databases
+---------------+
| iox::database |
+---------------+
| bar           |
+---------------+
```
The `--show-deleted` flag will have the `deleted` column displayed as well as any databases that have been marked as deleted:
```
influxdb3 show databases --show-deleted
+---------------------+---------+
| iox::database       | deleted |
+---------------------+---------+
| bar                 | false   |
| foo-20250105T202949 | true    |
+---------------------+---------+
```
### API
The API to list databases can be invoked via:
```
GET /api/v3/configure/database
```
with optional parameters:
* `format`: `pretty`, `json`, `csv`, `parquet`, or `jsonl`
* `show_deleted`: `bool`, defaults to `false`
Note that `database` is singular in the API endpoint, to be consistent with the other database related create/delete API endpoints. We could change it to be plural `databases` if that is the convention we want to go with.
This makes quite a few major changes to our CLI and how users interact
with it:
1. All commands are now in the form <verb> <noun>. This was to make the
commands consistent. We had last-cache as a noun, but serve as a verb
at the top level. Given that we could only create or delete them, all
noun-based commands have been moved under a create and delete
command
2. --host's short form is now -H, not -h, which is reassigned to -h/--help
for shorter help text and is in line with what users would expect from
a CLI
3. Only the needed items from clap_blocks have been moved into
`influxdb3_clap_blocks` and any IOx specific references were changed
to InfluxDB 3 specific ones
4. References to InfluxDB 3.0 OSS have been changed to InfluxDB 3 Core
in our CLI tools
5. --dbname has been changed to --database to be consistent with --table
in many commands. The short -d flag still remains. In the create/delete
commands for the database, however, the name of the database is a
positional arg,
e.g. `influxdb3 create database foo` rather than
`influxdb3 database create --dbname foo`
6. --table has been removed from the delete/create command for tables
and is now a positional arg much like database
7. clap_blocks was removed as a dependency to avoid having IOx specific
env vars
8. --cache-name is now an optional positional arg for last_cache and meta_cache
9. last-cache/meta-cache commands are now last_cache and meta_cache respectively
Unfortunately we have quite a few options to run the software and I
couldn't cut down on them, but at least with this, commands and options
will be more discoverable and we have full control over our CLI options
now.
Closes #25646
* feat: Implement WAL plugin test API
This implements the WAL plugin test API. It also introduces a new API for the Python plugins to be called, get their data, and call back into the database server.
There are some things that I'll want to address in follow on work:
* CLI tests, but will wait on #25737 to land for a refactor of the CLI here
* Would be better to hook the Python logging to call back into the plugin return state like here: https://pyo3.rs/v0.23.3/ecosystem/logging.html#the-python-to-rust-direction
* We should only load the LineBuilder interface once in a module, rather than on every execution of a WAL plugin
* More tests all around
But I want to get this in so that the actual plugin and trigger system can get updated to build around this model.
* refactor: PR feedback
Prior to this change we would deny writes that used n and u for the
precision argument; we only accepted ns and us for those APIs. However,
to be backwards compatible we need to accept writes with n and u as well.
This is mostly just upgrading our deps, as this change landed in IOx
first. We test that it works for our code by adding test cases for these
precisions in this repo.
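Roughly, the accepted precision values now look like this (a sketch; the real parsing lives in the upstream IOx types):
```
enum Precision {
    Nanosecond,
    Microsecond,
    Millisecond,
    Second,
}

// Accept both the v1-style shorthand ("n", "u") and the newer "ns"/"us" forms.
fn parse_precision(p: &str) -> Option<Precision> {
    match p {
        "n" | "ns" => Some(Precision::Nanosecond),
        "u" | "us" => Some(Precision::Microsecond),
        "ms" => Some(Precision::Millisecond),
        "s" => Some(Precision::Second),
        _ => None,
    }
}
```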
This commit removes the required fields restriction when using the CLI
or the API to create a new table. As users can't write via the line
protocol without a field this is fine and the schema will be updated on
write. This expands the test to check for the correct response code now
and make sure that we can both query the empty table and write new data
to it.
Closes #25735
Prior to this change we would error correctly with a 422 if a limit was
hit. However, we would not send back the correct error in the case of a
limit being hit that caused a partial write. This change fixes that by
checking the error messages for failed lines; if one is found that was
caused by a limit being hit, then a 422 is returned rather than a 400,
since we would have been able to process the line otherwise but the limit
was hit instead.
Closes #25208
This removes references to IOx in our docs and comments as when we
started making Monolith last year it was a fork of influxdb_iox.
This was something that we changed in some places, but not all as
time went on. To avoid confusion whenever we more broadly release
this software I've removed references to IOx in our code and docs.
Note however that we still use a lot of IOx code and our system
tables also return iox as part of the name. This is a separate issue
namely #25227. Some things like env vars will also be cleaned up
in #25646.
Closes #25662
Store the series key column names on the TableDefinition in the catalog so
looking up the series key by column names is more efficient
Remove the /api/v3/write API and related code/tests
This allows the `max_parquet_fanout` to be specified in the CLI for the `influxdb3 serve` command. This could be done previously via the `--datafusion-config` CLI argument, but the drawbacks to that were:
1. that is a fairly advanced option given the available key/value pairs are not well documented
2. if `iox.max_parquet_fanout` was not provided to that argument, the default would be set to `40`
This PR maintains the existing `--datafusion-config` CLI argument (with one caveat, see below) which allows users to provide a set key/value pairs that will be used to build the internal DataFusion config, but in addition provides the `--datafusion-max-parquet-fanout` argument:
```
--datafusion-max-parquet-fanout <MAX_PARQUET_FANOUT>
When multiple parquet files are required in a sorted way (e.g. for de-duplication), we have two options:
1. **In-mem sorting:** Put them into `datafusion.target_partitions` DataFusion partitions. This limits the fan-out, but requires that we potentially chain multiple parquet files into a single DataFusion partition. Since chaining sorted data does NOT automatically result in sorted data (e.g. AB-AB is not sorted), we need to perform an in-memory sort using `SortExec` afterwards. This is expensive.
2. **Fan-out:** Instead of chaining files within DataFusion partitions, we can accept a fan-out beyond `target_partitions`. This prevents in-memory sorting but may result in OOMs (out-of-memory) if the fan-out is too large.
We try to pick option 2 up to a certain number of files, which is configured by this setting.
[env: INFLUXDB3_DATAFUSION_MAX_PARQUET_FANOUT=]
[default: 1000]
```
with the default value of `1000`, which will override the core `iox_query` default of `40`.
A test was added to check that this is propagated down to the `IOxSessionContext` that is used during queries.
The only change to the `datafusion-config` CLI argument was to rename `INFLUXDB_IOX` in the environment variable to `INFLUXDB3`:
```
--datafusion-config <DATAFUSION_CONFIG>
Provide custom configuration to DataFusion as a comma-separated list of key:value pairs.
# Example
--datafusion-config "datafusion.key1:value1, datafusion.key2:value2"
[env: INFLUXDB3_DATAFUSION_CONFIG=]
[default: ]
```
Added prometheus metrics to track lines written and bytes written per
database. The write buffer does the tracking after validation of incoming
line protocol.
Tests added to verify.
* feat: create DB and Tables via REST and CLI
This commit does a few things:
1. It brings the database command naming scheme for types inline with
the rest of the CLI types
2. It brings the table command naming scheme for types inline with
the rest of the CLI types
3. Adds tests to check that the num of dbs is not exceeded and that you
cannot create more than one database with a given name.
4. Adds tests to check that you can create a table, put data into it,
and query it
5. Adds tests for the CLI for both the database and table commands
6. It creates an endpoint to create databases given a JSON blob
7. It creates an endpoint to create tables given a JSON blob
With this users can now create a database or table without first needing
to write to the database via the line protocol!
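As a hedged illustration, the JSON bodies might deserialize into something like the following; the field names here are assumptions, not the exact API schema:
```
use serde::Deserialize;

#[derive(Deserialize)]
struct CreateDatabaseRequest {
    db: String,
}

#[derive(Deserialize)]
struct CreateTableRequest {
    db: String,
    table: String,
    tags: Vec<String>,
    fields: Vec<FieldDefinition>,
}

#[derive(Deserialize)]
struct FieldDefinition {
    name: String,
    r#type: String,
}
```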
Closes #25640. Closes #25641.
* fix: Ensure tags are never null
This injects empty strings into tags for any rows in the buffer where the tag value is null. This is required because the tags are what make up the series key, which must have all non-null values.
There is an ongoing discussion about what the real behavior should be here, but for now this will unblock users whose writes break without this behavior. Discussion is in #25674.
Fixes #25648
* fix: clippy failures
* Move processing engine invocation to a separate tokio task.
* Support writing back line protocol from python via insert_line_protocol().
* Update structs to work with bincode.
* feat: add influxdb3_clap_blocks with runtime config
Added a new workspace crate `influxdb3_clap_blocks` which will be a
starting point for adding InfluxDB 3 OSS/Pro specific CLI configuration
that no longer references IOx, and allows for us to trim out unneeded
configurations for the monolithic InfluxDB 3.
Other than changing references from IOX to INFLUXDB3, this makes one
important change: it enables IO on the DataFusion runtime. This, for now,
is an experimental change to see if we can relieve some concurrency
issues that we have been experiencing.
* chore: add observability deps for windows
This changes the code to reference InfluxDB 3 OSS rather than Edge, which
had been its original name when we first started the project. With this
we now have the code reflect what we are actually calling it. On top of
this, the long help text has been changed to give advice about how to
actually run the code with the bare minimum set of flags needed, as
`influxdb3 serve` is no longer a viable command on its own.
Closes #25649
* feat: add startup time to logging output
This change adds a startup time counter to the output when starting up
a server. The main purpose of this is to verify whether the impact of
changes actually speeds up the loading of the server.
* feat: Significantly decrease startup times for WAL
This commit does a few important things to speedup startup times:
1. We avoid changing an Arc<str> to a String with the series key as the
From<String> impl will call with_column which will then turn it into
an Arc<str> again. Instead we can just call `with_column` directly
and pass in the iterator without also collecting into a Vec<String>
2. We switch to using bitcode as the serialization format for the WAL.
This significantly reduces startup time as this format is faster to
use instead of JSON, which was eating up massive amounts of time.
Part of this change involves not using the tag feature of serde as
it's currently not supported by bincode
3. We also parallelize reading and deserializing the WAL files before
we then apply them in order. This reduces time waiting on IO and we
eagerly evaluate each spawned task in order as much as possible.
This gives us about a 189% speedup over what we were doing before.
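A sketch of point 3, parallel reads applied in order (illustrative, not the actual replay code):
```
// Spawn a task per WAL file so fetching/deserializing overlaps, then await the
// handles in their original order so the WAL is still applied sequentially.
async fn replay_wal(paths: Vec<String>) {
    let handles: Vec<_> = paths
        .into_iter()
        .map(|path| tokio::spawn(async move { load_and_decode(path).await }))
        .collect();
    for handle in handles {
        let contents = handle.await.expect("replay task panicked");
        apply(contents);
    }
}

async fn load_and_decode(path: String) -> Vec<u8> {
    // the real code reads the file from object store and decodes it
    path.into_bytes()
}

fn apply(_contents: Vec<u8>) {}
```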
Closes #25534
* feat: parquet cache metrics
* feat: track parquet cache metrics
Adds metrics to track the following in the in-memory parquet cache:
* cache size in bytes (also included a fix in the calculation of that)
* cache size in n files
* cache hits
* cache misses
* cache misses while the oracle is fetching a file
A test was added to check this functionality
* refactor: clean up logic and fix cache removal tracking error
Some logic and naming was cleaned up and the boolean to optionally track
metrics on entry removal was removed, as it was incorrect in the first place:
a fetching entry still has a size, which counts toward the size of the
cache. So, this makes it such that anytime an entry is removed, whether
its state is success or fetching, its size will be decremented from
the cache size metrics.
The sizing calculations were made to be correct, and the cache metrics
test was updated with more thorough assertions
Moved all of the last cache implementation into the `influxdb3_cache`
crate. This also splits out the implementation into three modules:
- `cache.rs`: the core cache implementation
- `provider.rs`: the cache provider used by the database to hold multiple
caches.
- `table_function.rs`: same as before, holds the DataFusion impls
Tests were preserved and moved to `mod.rs`, however, they were updated to
not rely on the WriteBuffer implementation, and instead use the types in
the `influxdb3_cache::last_cache` module directly. This simplified the
test code, while not changing any of the test assertions at all.
This commit does three important major changes:
1. We will deny writes to the v1, v2, and v3 write apis that add new tags in
subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sorting order by the series key, which is the order the keys came in
With these changes we have consistency across the various write APIs and can
make optimizations and build future features with the assumption we have a series key.
Closes #25585
This commit introduces a basic store for sys events and the backing ring
buffer. Since the buffer needs to hold arbitrary data, it uses `Box<dyn
Any>`
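A minimal sketch of such a ring buffer (capacity handling and the event type bound are illustrative):
```
use std::any::Any;
use std::collections::VecDeque;

// Fixed-capacity ring buffer holding arbitrary event payloads.
struct EventRingBuffer {
    capacity: usize,
    events: VecDeque<Box<dyn Any + Send>>,
}

impl EventRingBuffer {
    fn new(capacity: usize) -> Self {
        Self { capacity, events: VecDeque::with_capacity(capacity) }
    }

    fn push(&mut self, event: Box<dyn Any + Send>) {
        if self.events.len() == self.capacity {
            // overwrite the oldest event once the buffer is full
            self.events.pop_front();
        }
        self.events.push_back(event);
    }
}
```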
closes: https://github.com/influxdata/influxdb/issues/25581
This adds two new CLI commands to the `influxdb3` binary:
* `influxdb3 meta-cache create`
* `influxdb3 meta-cache delete`
To create and delete metadata caches, respectively.
A basic integration test was added to check that this works E2E.
The `influxdb3_client` was updated with methods to create and delete
metadata caches, and which is what the CLI commands use under the hood.
This adds a new system table "meta_caches" that allows users to view the
state of their metadata caches on a per-db basis
An integration test was added to verify that it works.
This commit allows soft deleting a table. For a user, the following
command will soft delete a table (bar) in db (foo)
```
influxdb3 table delete --dbname foo --table bar --host $host
```
- Added `soft_delete_table` to the `DatabaseManager` trait, which already
hosts the `soft_delete_database` method. The code roughly follows the same
flow as db delete. As with the db schema, it does a clone-on-write because
the reference is behind an Arc; `Arc::make_mut` is used in this change
(see the sketch after this list).
- Moved db delete related cli parser under "manage" module that has both
db and table delete functionality
- Some minor tidyups (removing unused methods, renaming method so that
the order in name matches actual return type eg. `table_id_and_schema`,
should return (id, schema) and not (schema, id))
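A hedged sketch of the `Arc::make_mut` step mentioned above (types simplified):
```
use std::sync::Arc;

#[derive(Clone)]
struct DatabaseSchema {
    tables: Vec<String>,
}

// Because the schema is shared behind an Arc, Arc::make_mut clones it only if
// other references exist, then lets us mutate the (possibly new) copy.
fn soft_delete_table(schema: &mut Arc<DatabaseSchema>, table_name: &str) {
    let schema = Arc::make_mut(schema);
    schema.tables.retain(|t| t.as_str() != table_name);
}
```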
closes: https://github.com/influxdata/influxdb/issues/25561
* feat: drop/delete database
This commit allows soft deletion of database using `influxdb3 database
delete <db_name>` command. The write buffer and last value cache are
cleared as well.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: reuse same code path when deleting database
- In the previous commit, deleting a database immediately triggered
clearing the last cache and query buffer. But on restart the same logic
had to be repeated to allow deleting the database when starting up. This
commit removes the immediate deletion done by explicitly calling the
necessary methods, and moves the logic to `apply_catalog_batch`, which
already applies `CatalogOp`s, clearing the cache and buffer in the
`buffer_ops` method, which has hooks to call other places.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: use reqwest query api for query param
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: include deleted flag in DatabaseSnapshot
- `DatabaseSchema` serialization/deserialization is delegated to
`DatabaseSnapshot`, so the `deleted` flag should be included in
`DatabaseSnapshot` as well.
- insta test snapshots fixed
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: address PR comments + tidy ups
---------
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* refactor: make last cache eviction optional
This changes how the last cache is evicted. It will no longer run eviction
on writes to the cache; instead, there is an optional method to create a
last cache provider that will run eviction in a background task on a specified
interval.
Otherwise, when records are produced from the cache, only those that have
not expired will be produced.
This should reduce locks on the cache and hopefully improve performance.
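A sketch of the optional background eviction (interval handling is illustrative):
```
use std::time::Duration;

// Spawn a background task that evicts expired entries on a fixed interval
// instead of evicting on every write to the cache.
fn start_eviction_task(interval: Duration) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        let mut ticker = tokio::time::interval(interval);
        loop {
            ticker.tick().await;
            evict_expired_entries();
        }
    })
}

fn evict_expired_entries() {
    // the real implementation drops cache values older than their TTL
}
```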
* feat: configurable last cache eviction interval
* docs: clean up var names, code docs, and comments
Closes #25461
_Note: the first three commits on this PR are from https://github.com/influxdata/influxdb/pull/25492_
This PR makes the switch from using names for columns to the use of `ColumnId`s. Where column names are used, they are represented as `Arc<str>`. This impacts most components of the system, and the result is a fairly sizeable change set. The area where the most refactoring was needed was in the last-n-value cache.
One of the themes of this PR is to rely less on the arrow `Schema` for handling the column-level information, and tracking that info in our own `ColumnDefinition` type, which captures the `ColumnId`.
I will summarize the various changes in the PR below, and also leave some comments in-line in the PR.
## Switch to `u32` for `ColumnId`
The `ColumnId` now follows the `DbId` and `TableId`, and uses a globally unique `u32` to identify all columns in the database. This was a change from using a `u16` that was only unique within the column's table. This makes it easier to follow the patterns used for creating the other identifier types when dealing with columns, and should reduce the burden of having to manage the state of a table-scoped identifier.
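A hedged sketch of that identifier pattern, mirroring how `DbId` and `TableId` are assigned (names here are illustrative):
```
use std::sync::atomic::{AtomicU32, Ordering};

// Globally unique, monotonically assigned column identifier.
static NEXT_COLUMN_ID: AtomicU32 = AtomicU32::new(0);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ColumnId(u32);

impl ColumnId {
    fn new() -> Self {
        Self(NEXT_COLUMN_ID.fetch_add(1, Ordering::SeqCst))
    }

    fn as_u32(&self) -> u32 {
        self.0
    }
}
```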
## Changes in the WAL/Catalog
* `WriteBatch` now contains no names for tables or columns and purely uses IDs
* This PR relies on `IndexMap` for `_Id`-keyed maps so that the order of elements in the map is consistent. This has important implications, namely, that when iterating over an ID map, the elements therein will always be produced in the same order which allows us to make assertions on column order in a lot of our tests, and allows for the re-introduction of `insta` snapshots for serialization tests. This map type provides O(1) lookups, but also provides _fast_ iteration, which should help when serializing these maps in write batches to the WAL.
* Removed the need to serialize the bi-directional maps for `DatabaseSchema`/`TableDefinition` via use of `SerdeVecMap` (see comments in-line)
* The `tables` map in `DatabaseSchema` now stores an `Arc<TableDefinition>` so that the table definition can be shared around more easily. This meant that changes to tables in the catalog need to do a clone, but we were already having to do a clone for changes to the DB schema.
* Removal of the `TableSchema` type and consolidation of its parts/functions directly onto `TableDefinition`
* Added the `ColumnDefinition` type, which represents all we need to know about a column, and is used in place of the Arrow `Schema` for column-level meta-info. We were previously relying heavily on the `Schema` for iterating over columns, accessing data types, etc., but this gives us an API that we have more control over for our needs. The `Schema` is still held at the `TableDefinition` level, as it is needed for the query path, and is maintained to be consistent with what is contained in the `ColumnDefinition`s for a table.
## Changes in the Last-N-Value Cache
* There is a bigger distinction between caches that have an explicit set of value columns, and those that accept new fields. The former should be more performant.
* The Arrow `Schema` is managed differently now: it used to be updated more than it needed to be, and now is only updated when a row with new fields is pushed to a cache that accepts new fields.
## Changes in the write-path
* When ingesting, during validation, field names are qualified to their associated column ID
* refactor: roll back addition of DatabaseSchemaProvider trait
* refactor: make parquet metrics optional in telemetry for pro
* refactor: make ParquetFileId Hash
* refactor: test harness logging
Allow the endpoint for telemetry to be passed in via the cli args, e.g
```
--telemetry-endpoint "https://somehost/test/"
```
and the actual endpoint always appends `v3` to it, so the above URL becomes
"https://somehost/test/v3"
Separate out methods of the Catalog API that are used on the query side into a new trait `DatabaseSchemaProvider`. The new trait includes methods from the Catalog that get the underlying `DatabaseSchema` or interact with names/IDs.
This will allow for a separate implementation of the Catalog for pro that only needs to hold a replicated/combined view in-memory of one or more catalogs without the need to do persistence that a write buffer's catalog needs to do.
While in there I also switched the `QueryExecutorImpl::new` method to take an args struct to avoid the clippy lint.
* feat: Add TableId and ColumnId
* feat: swap over to DbId and TableId everywhere
This commit swaps us over to using the DbId and TableId types everywhere
for our internal systems. Anywhere that's external facing, such as names
for last cache tables or line protocol parsing, use names. In these cases
we have the `Catalog` which keeps a map of TableIds and DbIds in a
bidirectional mapping for easy lookup i.e. id <-> names. While in essence
the change itself isn't that complicated given the nature of how much we
depended on names for things, the changes end up being quite invasive and
extensive. Luckily it shouldn't be too hard to review. Note this does
not add the column ids which will be done in a follow up PR.
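A hedged sketch of the bidirectional mapping idea (simplified to plain maps and u32 ids):
```
use std::collections::HashMap;
use std::sync::Arc;

// Id <-> name lookup kept by the Catalog so external-facing names can be
// resolved to internal ids and back.
#[derive(Default)]
struct NameIndex {
    name_to_id: HashMap<Arc<str>, u32>,
    id_to_name: HashMap<u32, Arc<str>>,
}

impl NameIndex {
    fn insert(&mut self, id: u32, name: Arc<str>) {
        self.name_to_id.insert(Arc::clone(&name), id);
        self.id_to_name.insert(id, name);
    }

    fn id_for(&self, name: &str) -> Option<u32> {
        self.name_to_id.get(name).copied()
    }

    fn name_for(&self, id: u32) -> Option<Arc<str>> {
        self.id_to_name.get(&id).cloned()
    }
}
```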
Closes #25375. Closes #25403. Closes #25404. Closes #25405. Closes #25412. Closes #25413.
- added a mechanism within PersistedFile to expose parquet file related
metrics. The details are updated when a new snapshot is generated and
also when all snapshots are loaded at process startup
- at the point of creating the telemetry payload these parquet metrics
are looked up before sending it to the server.
Closes: https://github.com/influxdata/influxdb/issues/25418
- instrumented code to get read and write measurement
- introduced EventsBucket for collection of reads/writes
- sampler now samples every minute for all metrics (including
reads/writes)
- other tidy ups
closes: https://github.com/influxdata/influxdb/issues/25372
* test: check parquet cache in the write buffer
Checked that the parquet cache will serve queries when chunks are
requested from the write buffer. The added test also checks for get_range
requests made to the object store, which are typically made by DataFusion
to infer schema for parquet files.
* refactor: make parquet cache optional on write buffer
* test: add test to verify parquet cache function
This makes the parquet cache optional at the write buffer level, and adds
a test that verifies that the cache catches and prevents requests to the
object store in the event of a cache hit.
Closes #25382. Closes #25383.
This refactors the parquet cache to use less locking by switching from using the `clru` crate to a hand-rolled cache implementation. The new cache still acts as an LRU, but it uses atomics to track hit-time per entry, and handles pruning in a separate process that is decoupled from insertion/gets to the cache.
The `Cache` type uses a [`DashMap`](https://docs.rs/dashmap/latest/dashmap/struct.DashMap.html) internally to store cache entries. This should help reduce lock contention, and also has the added benefit of not requiring mutability to insert into _or_ get from the map.
The cache maps an `object_store::Path` to a `CacheEntry`. On a hit, an entry will have its `hit_time` (an `AtomicI64`) incremented. During a prune operation, entries that have the oldest hit times will be removed from the cache. See the `Cache::prune` method for details.
The cache is setup with a memory _capacity_ and a _prune percent_. The cache tracks memory used when entries are added, based on their _size_, and when a prune is invoked in the background, if the cache has exceeded its capacity, it will prune `prune_percent * cache.len()` entries from the cache.
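A hedged sketch of the structure described (simplified; the real entry states and sizing are richer):
```
use dashmap::DashMap;
use std::sync::atomic::{AtomicI64, AtomicUsize, Ordering};

struct CacheEntry {
    data: Vec<u8>,
    hit_time_ns: AtomicI64,
}

struct Cache {
    map: DashMap<String, CacheEntry>,
    used_bytes: AtomicUsize,
    capacity_bytes: usize,
    prune_percent: f64,
}

impl Cache {
    // Gets need no &mut: the hit time is bumped through an atomic.
    fn get(&self, path: &str, now_ns: i64) -> Option<Vec<u8>> {
        let entry = self.map.get(path)?;
        entry.hit_time_ns.store(now_ns, Ordering::Relaxed);
        Some(entry.data.clone())
    }

    // Runs in a background task, decoupled from inserts/gets.
    fn prune(&self) {
        if self.used_bytes.load(Ordering::Relaxed) <= self.capacity_bytes {
            return;
        }
        let n_to_remove = (self.map.len() as f64 * self.prune_percent) as usize;
        // the real code removes the n_to_remove entries with the oldest hit
        // times and subtracts their sizes from used_bytes
        let _ = n_to_remove;
    }
}
```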
Two tests were added:
* `cache_evicts_lru_when_full` to check LRU behaviour of the cache
* `cache_hit_while_fetching` to check that a cache entry hit while a request is in flight to fetch that entry will not result in extra calls to the underlying object store
Part of #25347
This sets up a new implementation of an in-memory parquet file cache in the `influxdb3_write` crate in the `parquet_cache.rs` module.
This module introduces the following types:
* `MemCachedObjectStore` - a wrapper around an `Arc<dyn ObjectStore>` that can serve GET-style requests to the store from an in-memory cache
* `ParquetCacheOracle` - an interface (trait) that can accept requests to create new cache entries in the cache used by the `MemCachedObjectStore`
* `MemCacheOracle` - implementation of the `ParquetCacheOracle` trait
## `MemCachedObjectStore`
This takes inspiration from the [`MemCacheObjectStore` type](1eaa4ed5ea/object_store_mem_cache/src/store.rs (L205-L213)) in core, but has some different semantics around its implementation of the `ObjectStore` trait, and uses a different cache implementation.
The reason for wrapping the object store is that this ensures that any GET-style request being made for a given object is served by the cache, e.g., metadata requests made by DataFusion.
The internal cache comes from the [`clru` crate](https://crates.io/crates/clru), which provides a least-recently used (LRU) cache implementation that allows for weighted entries. The cache is initialized with a capacity and entries are given a weight on insert to the cache that represents how much of the allotted capacity they will take up. If there isn't enough room for a new entry on insert, then the LRU item will be removed.
### Limitations of `clru`
The `clru` crate conveniently gives us an LRU eviction policy but its API may put some limitations on the system:
* gets to the cache require an `&mut` reference, which means that the cache needs to be behind a `Mutex`. If this slows down requests through the object store, then we may need to explore alternatives.
* we may want more sophisticated eviction policies than a straight LRU, i.e., to favour certain tables over others, or files that represent recent data over those that represent old data.
## `ParquetCacheOracle` / `MemCacheOracle`
The cache oracle is responsible for handling cache requests, i.e., to fetch an item and store it in the cache. In this PR, the oracle runs a background task to handle these requests. I defined this as a trait/struct pair since the implementation may look different in Pro vs. OSS.
- uses Arc<str> to represent create-once, read-everywhere strings
- updated snapshots for insta asserts, uses redaction to hardcode
randomly generated UUID strings
- added methods to catalog to expose instance and host ids
Closes: https://github.com/influxdata/influxdb/issues/25315
This makes some changes to the TestServer E2E framework, which is used
for running integration tests in the influxdb3 crate. These changes are
meant so that we can more easily split the code for pro.
* refactor: add catalog as dep to influxdb3
* refactor: move catalog and last cache initialization out of write buffer
The Write buffer used to handle initialization of the catalog and last
n value cache. This commit moves that logic out, so that both can be
initialized independently, and injected into the write buffer. This is to
enable downstream changes that will need to make sharing the catalog and
last cache possible.
* refactor: use dyn traits in WriteBufferImpl
This changes the WriteBufferImpl to use a dyn TimeProvider instead of
a generic in its type signature.
The Server type now uses a dyn WriteBuffer instead of using a generic
in its type signature, and the ServerBuilder was updated to accommodate
this accordingly.
These changes were to make downstream code changes more seamless.
* refactor: make some items pub
This makes functions on the QueryableBuffer and LastCache pub so that they
can be used downstream.
The Persister trait was only implemented by a single type; because the
underlying ObjectStore interface has several ways of being mocked, we
mock that instead of the Persister interface.
This commit removes the Persister trait, and moves its interface/impl
directly on a single Persister type in the persister module of the
influxdb3_write crate.
deny.toml had some incorrect field names in license.exceptions, those
were fixed from 'crate' to 'name'.
* refactor: Make Level0Duration part of WAL
I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.
As an added bonus, this surfaced a bug where multiple L0 blocks getting persisted in the same snapshot wasn't supported. So now snapshot details can have many files per table.
* fix: have persisted files always return in descending data time order
* fix: sort record batches for test verification
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.
Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.
The files are accessed through the QueryableBuffer.
In addition, a test was added to check success and failure modes of the
new system table query.
Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).
The persister retains the host identifier prefix, and uses it when
constructing paths.
The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.
The influxdb3 binary requires a new argument 'host-id' to be passed that
is used to specify the prefix.
* fix: query bugs with buffer
This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.
I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining with different states: only data in buffer, only data in parquet files, and data across both. This showed two bugs: one where the parquet data was being doubled up (parquet chunks were being created in write buffer mod and in queryable buffer); the second was that the timestamp min max on table buffer would panic if the buffer was empty.
* refactor: PR feedback
* fix: fix wal replay and buffer snapshot
Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay wal data and that snapshots work properly with replayed data.
* fix: run cargo update to fix audit
* feat: refactor WAL and WriteBuffer
There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.
The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.
The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).
The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is getting written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.
Overall, this structure of a single wal, with flushes and snapshots and chunks in the queryable buffer led to a simpler setup for the write buffer overall. I was able to clear out quite a bit of code related to the old segment organization.
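The defaults mentioned above, summarized as a hedged config sketch (field names are illustrative):
```
use std::time::Duration;

struct WalConfig {
    // a new WAL file is created at this interval
    flush_interval: Duration,
    // snapshot once this many WAL files have accumulated...
    snapshot_after_n_files: usize,
    // ...persisting the oldest files, i.e. about 10 minutes of WAL data
    snapshot_size: usize,
    // default chunk duration for Level0 files
    level_0_duration: Duration,
}

impl Default for WalConfig {
    fn default() -> Self {
        Self {
            flush_interval: Duration::from_secs(1),
            snapshot_after_n_files: 900,
            snapshot_size: 600,
            level_0_duration: Duration::from_secs(10 * 60),
        }
    }
}
```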
Fixes #25142 and fixes #25173
* refactor: address PR feedback
* refactor: wal to replay and background flush on new
* chore: remove stray println
This commit updates us to rustc 1.80. There are three significant changes
here:
1. LazyLock and LazyCell have been stabilized meaning we can replace our
usage of Lazy from the once_cell crate with the std lib versions
2. Lints were added to handle unknown cfg directives. `tokio_unstable`
is affected by this, and while we do have the flags in our
.cargo/config.toml, Cargo still outputs a lint for it, so we suppress
that warning now in our Cargo.toml for the workspace
3. clippy now throws a new warning about priority levels for lints. It's
quite frankly a thing that doesn't make sense to me and should be
something cargo fixes, but here we are.
Besides that it was a painless upgrade and now we're on the latest and
greatest.
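As a quick illustration of item 1, the std replacement is a drop-in (a sketch, not a specific call site from this repo):
```
// once_cell::sync::Lazy -> std::sync::LazyLock
use std::sync::LazyLock;

static DEFAULT_HOST: LazyLock<String> =
    LazyLock::new(|| String::from("http://127.0.0.1:8181"));

fn main() {
    println!("{}", DEFAULT_HOST.as_str());
}
```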
* fix: catalog support for last caches that accept new fields
Last cache definitions in the catalog were augmented to either store an
explicit set of column names (including time), or to accept new fields.
This will allow these caches to be loaded properly on server restart such
that all non-key columns are cached.
* refactor: use tagged serialization for last cache values def
This also updated the client code to accept the new structure in
influxdb3_client.
* test: add e2e tests to catch regressions in influxdb3_client
* chore: cargo update for audit