* refactor: use repository in catalog
The catalog was refactored to use identifiers on everything, and store
everything in a consistent structure. This structure makes use of the
`Repository` type that holds a `SerdeVecMap` of Id to Resource, along
with the next Id, and a bi-map of Id to resource name.
The `Repository` type is used at each level of the catalog where a
resource is stored.
This simplified the repeated logic for snapshotting, inserting, and updating
resources in the catalog, as well as the accessor methods for getting a
resource by id or name and for mapping names to ids and vice versa.
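For illustration, the shape of `Repository` might be sketched roughly like this (field and method names here are made up; the real type uses `SerdeVecMap` and a proper bi-map):
```
use std::collections::BTreeMap;

// Hypothetical stand-ins for the real id/name/resource types.
type ResourceId = u32;
type ResourceName = String;

// Illustrative sketch of the Repository described above: an ordered map of
// id -> resource, a bi-directional id <-> name mapping, and the next id.
struct Repository<R> {
    next_id: ResourceId,
    repo: BTreeMap<ResourceId, R>, // stands in for the SerdeVecMap
    id_to_name: BTreeMap<ResourceId, ResourceName>,
    name_to_id: BTreeMap<ResourceName, ResourceId>,
}

impl<R> Repository<R> {
    fn get_by_id(&self, id: &ResourceId) -> Option<&R> {
        self.repo.get(id)
    }

    fn get_by_name(&self, name: &str) -> Option<&R> {
        self.name_to_id.get(name).and_then(|id| self.repo.get(id))
    }

    fn id_for_name(&self, name: &str) -> Option<ResourceId> {
        self.name_to_id.get(name).copied()
    }
}
```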
In addition, the catalog batch verification and permit process was
altered so that the permit process induces a retry if the catalog was
updated while the catalog batch function was producing the batch, i.e., if
the catalog sequence incremented while the caller was waiting for a permit.
This eliminated the need for verifying the catalog batch after it had been
generated, and allows for a single path to apply a catalog batch after it
has been persisted to object store.
This assumes that the generation of the catalog batch implies validity.
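A rough sketch of that retry-on-permit flow, using made-up stand-ins (an `AtomicU64` sequence and a tokio `Mutex` as the permit) rather than the real catalog types:
```
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::Mutex;

// Minimal stand-ins so the sketch compiles; the real catalog types are richer.
struct Catalog {
    sequence: AtomicU64,
    permit: Mutex<()>,
}

struct CatalogBatch; // placeholder for the real batch type

impl Catalog {
    fn sequence_number(&self) -> u64 {
        self.sequence.load(Ordering::SeqCst)
    }

    fn build_batch(&self) -> CatalogBatch {
        CatalogBatch
    }

    // Sketch of the retry described above: if the catalog sequence moved while
    // we were waiting for the permit, the batch is stale, so rebuild it.
    async fn update_with_retry(&self) {
        loop {
            let sequence = self.sequence_number();
            let batch = self.build_batch();
            let _permit = self.permit.lock().await;
            if self.sequence_number() != sequence {
                continue; // catalog changed while waiting; retry
            }
            // ...persist `batch` to object store and apply it...
            let _ = batch;
            return;
        }
    }
}
```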
Irrelevant tests were removed.
The Last and Distinct caches now rely more heavily on Ids, though the
processing engine still needs to switch over to using Ids for
starting/stopping triggers.
This commit restores the old behavior where new tags can be added
to a schema. To do this we made tags nullable, which brings us in line with
our other products. Those changes were made in this PR:
https://github.com/influxdata/influxdb3_core/pull/41.
Changes to accomplish this new behavior were:
- Queries no longer return an empty string for null tags; instead they
are returned as null, or in many formats not at all.
- References to v1 for parsing and validating lines were removed as we
only have one path for doing so these days shared amongst all the
write_lp endpoints.
- We fixed failing tests that expected us to not be able to have new
tags or depended on that functionality indirectly
- Tests had their snapshot files updated to reflect that tags are
nullable by default
- Behavior for making a schema and checking whether a column can be null
were updated in a separate repo and integrated here
- The series_key is updated whenever we get a new tag added to the
schema
- New tests were added to show that you can add a new tag and that the
series key is updated as part of that
With the above changes, users can now add tags again as they would expect,
especially with the v1 and v2 APIs and Telegraf plugins.
The distinct cache info for tables was not serialized in the catalog.
This fixes it, but also updates the catalog serialization to use the
snapshot type serialization from the Catalog type all the way down.
The Eq and PartialEq impls were removed from Catalog and InnerCatalog
as they were only used in tests, and were replaced by pure insta snapshot
tests.
A test was added to check that the distinct cache serializes/deserializes.
This refactors plugins and triggers so that plugins no longer need to be "created". Since plugins exist in either the configured local directory or in the GitHub repo, a user now only needs to create a trigger and reference the plugin filename.
Closes #25876
* refactor: reduce catalog locks when getting chunks
The main refactor was to change the ChunkContainer trait to use the
DatabaseSchema and TableDefinition types directly in the signature, vs.
the names, which then required an additional catalog lock and lookups for
both entities. This was already handled upstream in the QueryTable, so
there was no need to do the lookups again.
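Roughly, the signature change looks like the following sketch (placeholder types, not the exact trait definition):
```
use std::sync::Arc;

// Placeholder types standing in for the real catalog/query types.
struct DatabaseSchema;
struct TableDefinition;
struct ChunkFilter;
struct QueryChunk;

// Before: the trait took names, forcing another catalog lock + lookups.
trait ChunkContainerByName {
    fn get_table_chunks(&self, database_name: &str, table_name: &str) -> Vec<Arc<QueryChunk>>;
}

// After: the caller (e.g. QueryTable) has already resolved the schema and
// table definition, so the trait takes them directly and skips the lookups.
trait ChunkContainer {
    fn get_table_chunks(
        &self,
        db_schema: Arc<DatabaseSchema>,
        table_def: Arc<TableDefinition>,
        filter: &ChunkFilter,
    ) -> Vec<Arc<QueryChunk>>;
}
```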
This required the addition of a test helper in influxdb3_write::test_helpers
that provides convenience methods for getting record batches from the
WriteBuffer. We had been implementing such a method manually in several
places, so it is nice to have it unified. The helper is provided via a blanket
impl so that anything implementing WriteBuffer gets the method.
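The blanket-impl approach might be sketched like this (trait and method names are illustrative, assuming a `WriteBuffer` trait):
```
// Illustrative sketch of a blanket impl so that anything implementing the
// WriteBuffer trait automatically gets the test helper method.
struct RecordBatch; // placeholder for arrow::record_batch::RecordBatch

trait WriteBuffer {
    fn scan(&self, database: &str, table: &str) -> Vec<RecordBatch>;
}

trait WriteBufferTestHelpers {
    fn get_record_batches(&self, database: &str, table: &str) -> Vec<RecordBatch>;
}

// Every WriteBuffer implementation gets the helper for free.
impl<T: WriteBuffer> WriteBufferTestHelpers for T {
    fn get_record_batches(&self, database: &str, table: &str) -> Vec<RecordBatch> {
        // In the real helper this would run a query and collect batches;
        // here it just delegates to the placeholder scan method.
        self.scan(database, table)
    }
}
```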
Some other house cleaning was included.
* refactor: clean up test helpers in influxdb3_write
* refactor: pass original df filters forward with ChunkFilter
* chore: clippy
This changes the CLI arg `host-id` to `writer-id` to more accurately
indicate meaning.
This change also goes through the codebase and changes struct fields,
methods, and variables to use the term `writer_id` or `writer_identifier_prefix`
instead of `host_id` etc., to make the meaning clear in the code.
This also changes the catalog serialization to use the field `writer_id`
instead of `host_id`, which is a breaking change.
This updates the create plugin API and CLI so that it doesn't take the plugin code, but instead takes a file name of a file that must be in the plugin-dir of the server. It returns an error if the plugin-dir is not configured or if the file isn't there.
Also updates the WAL and catalog so that it doesn't store the plugin code directly. The code is read from disk one time when the plugin runs.
Closes #25797
This allows the user to specify arguments that will be passed to each execution of a wal plugin trigger. The CLI test was updated to check this end to end.
Closes #25655
* feat: Update WAL plugin for new structure
This ended up being a very large change set. In order to get around circular dependencies, the processing engine had to be moved into its own crate, which I think is ultimately much cleaner.
Unfortunately, this required changing a ton of things. There's more testing and things to add on to this, but I think it's important to get this through and build on it.
Importantly, the processing engine no longer resides inside the write buffer. Instead, it is attached to the HTTP server. It is now able to take a query executor, write buffer, and WAL so that the full range of functionality of the server can be exposed to the plugin API.
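Conceptually, the processing engine now holds handles to the server's components instead of living inside the write buffer; a rough sketch with placeholder trait names:
```
use std::sync::Arc;

// Placeholder traits standing in for the real server components.
trait QueryExecutor {}
trait WriteBuffer {}
trait Wal {}

// Illustrative: the processing engine sits alongside the HTTP server and is
// handed the pieces it needs, instead of being owned by the write buffer.
struct ProcessingEngine {
    query_executor: Arc<dyn QueryExecutor + Send + Sync>,
    write_buffer: Arc<dyn WriteBuffer + Send + Sync>,
    wal: Arc<dyn Wal + Send + Sync>,
}

struct HttpApi {
    processing_engine: Arc<ProcessingEngine>,
    // ...other server state...
}
```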
There are a bunch of system-py feature flags littered everywhere, which I'm hoping we can remove soon.
* refactor: PR feedback
_Follows #25737 (keeping in draft until that merges)_
Closes #25745
This PR provides both a CLI and underlying API for listing databases in the InfluxDB 3 Core server. Details are below.
There was already a method to list databases for the query executor for InfluxQL; this works by exposing that via the `HttpApi` in `influxdb3_server`.
However, one thing that we may address is that the query result for that uses `iox::database` as the column name. If we are removing references to `iox`, then we may want to just have it as `database`. I left it as is, for now, because I wanted to keep code churn down and wasn't sure why we use that prefix in the first place for the `SHOW DATABASES` and `SHOW RETENTION POLICIES` InfluxQL queries.
## Details
### CLI
This PR provides the `influxdb3 show` CLI:
```
influxdb3 show -h
List resources on the InfluxDB 3 Core server
Usage: influxdb3 show <COMMAND>
Commands:
  databases  List databases
  help       Print this message or the help of the given subcommand(s)
Options:
  -h, --help  Print help information
```
with the ability to list databases:
```
influxdb3 show databases -h
List databases
Usage: influxdb3 show databases [OPTIONS]
Options:
  -H, --host <HOST_URL>       The host URL of the running InfluxDB 3 Core server [env: INFLUXDB3_HOST_URL=] [default: http://127.0.0.1:8181]
      --token <AUTH_TOKEN>    The token for authentication with the InfluxDB 3 Core server [env: INFLUXDB3_AUTH_TOKEN=]
      --show-deleted          Include databases that were marked as deleted in the output
      --format <OUTPUT_FORMAT>  The format in which to output the list of databases [default: pretty] [possible values: pretty, json, json_lines, csv]
  -h, --help                  Print help information
```
Since this uses the query executor, we can pass a `--format` argument to get the output as JSON, CSV, or JSONL, but by default, it uses the `pretty` format:
```
influxdb3 show databases
+---------------+
| iox::database |
+---------------+
| bar |
+---------------+
```
The `--show-deleted` flag will have the `deleted` column displayed as well as any databases that have been marked as deleted:
```
influxdb3 show databases --show-deleted
+---------------------+---------+
| iox::database | deleted |
+---------------------+---------+
| bar | false |
| foo-20250105T202949 | true |
+---------------------+---------+
```
### API
The API to list databases can be invoked via:
```
GET /api/v3/configure/database
```
with optional parameters:
* `format`: `pretty`, `json`, `csv`, `parquet`, or `jsonl`
* `show_deleted`: `bool`, defaults to `false`
Note that `database` is singular in the API endpoint, to be consistent with the other database related create/delete API endpoints. We could change it to be plural `databases` if that is the convention we want to go with.
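As an example, a client could call the endpoint with `reqwest` like this (host and token are placeholders; the query parameters follow the list above):
```
// Sketch of calling the list-databases endpoint with reqwest; the host and
// token here are placeholders, not real values.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let body = reqwest::Client::new()
        .get("http://127.0.0.1:8181/api/v3/configure/database")
        .query(&[("format", "json"), ("show_deleted", "true")])
        .bearer_auth("my-token")
        .send()
        .await?
        .text()
        .await?;
    println!("{body}");
    Ok(())
}
```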
This makes quite a few major changes to our CLI and how users interact
with it:
1. All commands are now in the form <verb> <noun>. This was to make the
commands consistent; we had last-cache as a noun, but serve as a
verb at the top level. Given that we could only create or delete them,
all noun-based commands have been moved under the create and delete
commands
2. The --host short form is now -H instead of -h, which is reassigned to
-h/--help for shorter help text and is in line with what users would
expect from a CLI
3. Only the needed items from clap_blocks have been moved into
`influxdb3_clap_blocks` and any IOx specific references were changed
to InfluxDB 3 specific ones
4. References to InfluxDB 3.0 OSS have been changed to InfluxDB 3 Core
in our CLI tools
5. --dbname has been changed to --database to be consistent with --table
in many commands. The short -d flag still remains. In the create/
delete commands for databases, however, the name of the database is
a positional arg,
e.g. `influxdb3 create database foo` rather than
`influxdb3 database create --dbname foo`
6. --table has been removed from the delete/create command for tables
and is now a positional arg much like database
7. clap_blocks was removed as a dependency to avoid having IOx specific
env vars
8. --cache-name is now an optional positional arg for last_cache and meta_cache
9. last-cache/meta-cache commands are now last_cache and meta_cache respectively
Unfortunately we have quite a few options to run the software and I
couldn't cut down on them, but at least with this, commands and options
will be more discoverable and we now have full control over our CLI
options.
Closes #25646
Store the series key column names on the TableDefinition in the catalog so
that looking up the series key by column name is more efficient.
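A minimal sketch of the idea, with made-up field names for the cached series key column names:
```
// Illustrative: caching the series key column names on the table definition so
// callers don't have to resolve ids to names on every lookup. Field names are
// made up for the example.
struct ColumnId(u32);

struct TableDefinition {
    series_key: Vec<ColumnId>,
    // Cached alongside the ids so lookups by name avoid an id -> name pass.
    series_key_names: Vec<String>,
}

impl TableDefinition {
    fn series_key_column_names(&self) -> &[String] {
        &self.series_key_names
    }
}
```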
Remove the /api/v3/write API and related code/tests
* feat: create DB and Tables via REST and CLI
This commit does a few things:
1. It brings the database command naming scheme for types inline with
the rest of the CLI types
2. It brings the table command naming scheme for types inline with
the rest of the CLI types
3. Adds tests to check that the database limit is not exceeded and that
you cannot create more than one database with a given name.
4. Adds tests to check that you can create a table, put data into it,
and query it
5. Adds tests for the CLI for both the database and table commands
6. It creates an endpoint to create databases given a JSON blob
7. It creates an endpoint to create tables given a JSON blob
With this users can now create a database or table without first needing
to write to the database via the line protocol!
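As a sketch, creating a database through the new endpoint could look like this with `reqwest`; the endpoint path and body field are assumptions for illustration, not the exact contract:
```
use serde_json::json;

// Sketch of creating a database via the new REST endpoint; the path and the
// body shape here are assumptions for illustration.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let resp = reqwest::Client::new()
        .post("http://127.0.0.1:8181/api/v3/configure/database")
        .json(&json!({ "db": "foo" })) // requires reqwest's `json` feature
        .send()
        .await?;
    println!("status: {}", resp.status());
    Ok(())
}
```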
Closes #25640, closes #25641
* Move processing engine invocation to a separate tokio task (see the sketch after this list).
* Support writing back line protocol from python via insert_line_protocol().
* Update structs to work with bincode.
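A minimal sketch of the separate-task pattern from the first item, using a made-up channel payload:
```
use tokio::sync::mpsc;

// Minimal sketch of invoking the processing engine on its own tokio task so it
// doesn't block the caller; the channel payload type is illustrative.
#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(16);

    let handle = tokio::spawn(async move {
        while let Some(event) = rx.recv().await {
            // ...invoke the plugin with `event` here...
            println!("processing {event}");
        }
    });

    tx.send("wal flush".to_string()).await.unwrap();
    drop(tx); // close the channel so the task's loop ends
    handle.await.unwrap();
}
```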
This changes the code to reference InfluxDB 3 OSS rather than Edge, which
had been its original name when we first started the project. With this
the code now reflects what we are actually calling it. On top of this, the
long help text has been changed to give advice about how to actually run
the code with the bare minimum set of flags needed, as `influxdb3 serve` is
no longer a viable command on its own.
Closes #25649
Moved all of the last cache implementation into the `influxdb3_cache`
crate. This also splits out the implementation into three modules:
- `cache.rs`: the core cache implementation
- `provider.rs`: the cache provider used by the database to hold multiple
caches.
- `table_function.rs`: same as before, holds the DataFusion impls
Tests were preserved and moved to `mod.rs`, however, they were updated to
not rely on the WriteBuffer implementation, and instead use the types in
the `influxdb3_cache::last_cache` module directly. This simplified the
test code, while not changing any of the test assertions at all.
This commit makes three important changes:
1. We will deny writes to the v1, v2, and v3 write APIs that add new tags in
subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sorting by the series key, which is the order the keys came in
With these changes we have consistency across the various write APIs and can
make optimizations and future features with the assumption we have a series key.
Closes #25585
This adds the MetaDataCacheProvider for managing metadata caches in the
influxdb3 instance. This includes APIs to create caches through the WAL
as well as from a catalog on initialization, to write data into the
managed caches, and to query data out of them.
The query side is fairly involved, relying on Datafusion's TableFunctionImpl
and TableProvider traits to make querying the cache using a user-defined
table function (UDTF) possible.
The predicate code was modified to only support two kinds of predicates:
IN and NOT IN, which simplifies the code and maps nicely onto the DataFusion
LiteralGuarantee, which we leverage to derive the predicates from the
incoming queries.
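The simplified predicate shape is roughly the following sketch (names are illustrative):
```
use std::collections::BTreeSet;

// Illustrative: the metadata cache predicates are reduced to set membership,
// which maps naturally onto DataFusion's LiteralGuarantee (its In / NotIn
// guarantees) derived from the incoming query's filters.
enum Predicate {
    In(BTreeSet<String>),
    NotIn(BTreeSet<String>),
}

impl Predicate {
    fn matches(&self, value: &str) -> bool {
        match self {
            Predicate::In(set) => set.contains(value),
            Predicate::NotIn(set) => !set.contains(value),
        }
    }
}
```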
A custom ExecutionPlan implementation was added specifically for the
metadata cache that can report the predicates that are pushed down to
the cache during query planning/execution.
A big set of tests was added to check that queries are working, and
that predicates are being pushed down properly.
This commit allows soft deleting a table. For a user, the following
command will soft delete a table (bar) in db (foo):
```
influxdb3 table delete --dbname foo --table bar --host $host
```
- Added `soft_delete_table` to the `DatabaseManager` trait, which already
hosts the `soft_delete_database` method. The code roughly follows the same
flow as db delete. As with the db schema, it does copy-on-write because
the reference is behind an Arc, so `Arc::make_mut` is used in this change
(see the sketch after this list).
- Moved db delete related cli parser under "manage" module that has both
db and table delete functionality
- Some minor tidyups (removing unused methods, renaming methods so that
the order in the name matches the actual return type, e.g.
`table_id_and_schema` should return (id, schema) and not (schema, id))
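A minimal sketch of that copy-on-write pattern, with stand-in types for the schema and table definition:
```
use std::sync::Arc;

// Illustrative sketch of the copy-on-write mentioned above: the table
// definition lives behind an Arc inside the database schema, so soft-deleting
// clones it only if it is shared. Types and fields here are stand-ins.
#[derive(Clone)]
struct TableDefinition {
    name: Arc<str>,
    deleted: bool,
}

#[derive(Clone)]
struct DatabaseSchema {
    tables: Vec<Arc<TableDefinition>>,
}

fn soft_delete_table(db_schema: &mut Arc<DatabaseSchema>, table_name: &str) {
    // Clone the schema only if someone else still holds a reference to it.
    let schema = Arc::make_mut(db_schema);
    for table in schema.tables.iter_mut() {
        if table.name.as_ref() == table_name {
            // Same copy-on-write trick one level down for the table definition.
            Arc::make_mut(table).deleted = true;
        }
    }
}
```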
closes: https://github.com/influxdata/influxdb/issues/25561
* feat: drop/delete database
This commit allows soft deletion of a database using the `influxdb3 database
delete <db_name>` command. The write buffer and last value cache are
cleared as well.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: reuse same code path when deleting database
- In the previous commit, deleting a database immediately triggered
clearing the last cache and query buffer, but the same logic had to be
repeated on restart to handle databases deleted before startup. This
commit removes the immediate deletion via explicit method calls and moves
the logic into `apply_catalog_batch`, which already applies `CatalogOp`s;
the cache and buffer are cleared in the `buffer_ops` method, which has
hooks to call the other places.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: use reqwest query api for query param
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: include deleted flag in DatabaseSnapshot
- `DatabaseSchema` serialization/deserialization is delegated to
`DatabaseSnapshot`, so the `deleted` flag should be included in
`DatabaseSnapshot` as well.
- insta test snapshots fixed
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: address PR comments + tidy ups
---------
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: core metadata cache structs with basic tests
Implement the base MetaCache type that holds the hierarchical structure
of the metadata cache, providing methods to create the cache and push rows
from the WAL into it.
Added a prune method as well as a method for gathering record batches
from a meta cache. A test was added to check the latter for various
predicates and that the former works; however, pruning shows that we need
to modify how record batches are produced so that expired entries are
not emitted.
* refactor: filter expired entries and do some clean up in the meta cache
* chore: update core deps
- arrow/parquet deps are patched (as in core)
- three specific code changes to cope with changes in core crates
- TransitionPartitionId, use `from_parts` instead of `new`
- arrow buffers can take &[u8] directly without `to_vec()`/`vec!`
(used only in tests)
- `schema` and `influxdb_line_protocol` crates need `v3` feature enabled
* chore: update deny.toml
* chore: formatting and deny toml changes
Unicode-3.0 license is added to the allowed licenses list; without it we
end up with 19 errors (`zerovec`, `zerovec-derive` etc.)
* chore: address PR feedback
- move enabling v3 feature to root Cargo.toml
- added the upstream PR for datafusion-common that introduced RUSTSEC-2024-0384
Updates the catalog to use its own sequence number in the path. This will enable downstream Pro systems that pick up PersistedSnapshots to get the specific catalog that a snapshot is associated with since its sequence number is included.
Also updated the type to be CatalogSequenceNumber to make it more clear & readable when being used.
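A rough sketch of embedding the sequence number in the catalog's object store path (the newtype and path layout here are illustrative):
```
// Illustrative newtype and path construction; the real path layout may differ.
#[derive(Debug, Clone, Copy)]
struct CatalogSequenceNumber(u64);

impl CatalogSequenceNumber {
    fn as_u64(&self) -> u64 {
        self.0
    }
}

fn catalog_path(writer_id: &str, sequence: CatalogSequenceNumber) -> String {
    // Embedding the sequence number lets downstream systems that read a
    // PersistedSnapshot fetch the exact catalog it was built against.
    format!("{writer_id}/catalogs/{:020}.json", sequence.as_u64())
}
```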
* fix: throw error when adding fields to non-existent table
* test: add test for expected behaviour in catalog op apply
This also added in some helpers to the wal crate that were previously
added to pro.