Commit Graph

137 Commits (pbarnett/update-script-to-dockerhub)

Author SHA1 Message Date
Paul Dix eb0b1eb8c6
feat: improve plugin logging interface (#25851)
* feat: improve plugin logging interface

Updates the plugin log functions so they can take any number of Python objects which will be converted into a single log line string.

Closes #25847

* refactor: update on PR feedback
2025-01-16 18:02:14 -05:00
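As a rough illustration of the logging interface described in the commit above, a plugin can pass several objects to one log call. A minimal sketch, assuming the plugin API's `influxdb3_local` host object exposes an `info` method and that `process_writes` receives table batches shaped as shown in the comments (both assumptions for illustration):
```
# Sketch of a WAL plugin using the variadic logging interface.
# influxdb3_local and the shape of table_batches are assumptions here;
# multiple arguments are joined into a single log line string.
def process_writes(influxdb3_local, table_batches, args=None):
    for batch in table_batches:
        influxdb3_local.info("table:", batch["table_name"],
                             "rows:", len(batch["rows"]))
```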
Paul Dix daa5cbd270
feat: better plugin errors for query and line building (#25850) 2025-01-16 16:57:30 -05:00
Paul Dix 1f71750bfe
feat: return better plugin execution errors (#25842)
* feat: return better plugin execution errors

This sets up the framework for fleshing out more useful plugin execution errors that get returned to the user during testing. We'll also want to capture these for logging in system tables.

Also fixes a test that was broken in previous commit on time limits. Didn't show up because of the feature flag.

* fix: compile errors without system-py feature
2025-01-16 15:58:05 -05:00
Trevor Hilton b8a94488b5
feat: support v1 query API GROUP BY semantics (#25845)
This updates the v1 /query API handler to handle InfluxDB v1's unique
query response structure when GROUP BY clauses are provided.

The distinction is in the addition of a "tags" field to the emitted series
data that contains a map of the GROUP BY tags along with their distinct
values associated with the data in the "values" field.

This required splitting the QueryExecutor into two query paths for InfluxQL
and SQL, as this allowed for handling InfluxQL query parsing in advance
of query planning.

A set of snapshot tests was added to check that it all works.
2025-01-16 11:59:01 -05:00
Michael Gattozzi aa8a8c560d
feat: Set 72 hour query/write limit for Core (#25810)
This commit sets InfluxDB 3 Core to have a 72 hour limit for queries and
writes. What this means is that writes that contain historical data
older than 72 hours will be rejected and queries will filter out data
older than 72 hours. Core is intended to be a recent-data time series database,
and performance over data older than 72 hours will degrade without a
garbage collector, a core feature of InfluxDB 3 Enterprise. InfluxDB 3
Enterprise does not have this write or query limit in place.

Note that this does *not* mean older data is deleted. Older data is
still accessible in object storage as Parquet files that can still be
used in other services and analyzed with dataframe libraries like pandas
and polars.

This commit does a few things:
- Uses timestamps in the year 2065 for tests as these should not break
  for longer than many of us will be working in our lifetimes. This is
  only needed for the integration tests as other tests use the
  MockProvider for time.
- Filters the buffer and persisted files to only show data newer than
  3 days ago
- Fixes the integration tests to work with the fact that writes older
  than 3 days are rejected
2025-01-12 13:08:01 -05:00
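Since data older than 72 hours remains in object storage as Parquet, it can still be analyzed outside the server, as the commit above notes. A minimal sketch with pandas (the file path is hypothetical):
```
import pandas as pd

# Hypothetical path to a Parquet file the server persisted to object storage.
df = pd.read_parquet("object-store/my-host/dbs/mydb/cpu/chunk.parquet")
print(df.describe())
```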
Trevor Hilton db24a62658
refactor: change host-id to writer-id (#25804)
This changes the CLI arg `host-id` to `writer-id` to more accurately
indicate meaning.

This change also goes through the codebase and changes struct fields,
methods, and variables to use the term `writer_id` or `writer_identifier_prefix`
instead of `host_id` etc., to make the meaning clear in the code.

This also changes the catalog serialization to use the field `writer_id`
instead of `host_id`, which is a breaking change.
2025-01-12 11:40:47 -05:00
Paul Dix 491a37b0d4
feat: Update create plugin to use server file (#25803)
This updates the create plugin API and CLI so that it doesn't take the plugin code, but instead takes the name of a file that must be in the plugin-dir of the server. It returns an error if the plugin-dir is not configured or if the file isn't there.

Also updates the WAL and catalog so that it doesn't store the plugin code directly. The code is read from disk one time when the plugin runs.

Closes #25797
2025-01-11 21:02:51 -05:00
Paul Dix ebf78aa6c9
feat: Updates to plugin API (#25799)
* Add uint64_field to LineBuilder
* Add bool_field to LineBuilder
* Change query_rows to query in Python API
* Update error output for test wal_plugin
2025-01-11 18:03:34 -05:00
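A sketch of a plugin using the additions listed in the commit above; `LineBuilder`, `query`, and `influxdb3_local.write` follow the names in the commit, while the table and column names are illustrative:
```
# Sketch only: LineBuilder and influxdb3_local are provided by the
# plugin runtime, not imported.
def process_writes(influxdb3_local, table_batches, args=None):
    # query() replaces the earlier query_rows() in the Python API.
    rows = influxdb3_local.query("SELECT count(*) AS n FROM cpu")

    line = LineBuilder("summary")
    line.tag("source", "wal_plugin")
    line.uint64_field("row_count", 42)   # new in this commit
    line.bool_field("ok", True)          # new in this commit
    influxdb3_local.write(line)
```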
Paul Dix bdcdb4f296
refactor: rename plugin activate/deactivate to enable/disable (#25793)
Closes #25789
2025-01-11 13:40:24 -05:00
Paul Dix e8422a240a
feat: Wire up arguments to wal plugin trigger (#25783)
This allows the user to specify arguments that will be passed to each execution of a wal plugin trigger. The CLI test was updated to check this end to end.

Closes #25655
2025-01-10 16:58:18 -05:00
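A sketch of a trigger plugin reading its configured arguments, assuming they arrive as an optional string-keyed dict (the `threshold` key is hypothetical):
```
def process_writes(influxdb3_local, table_batches, args=None):
    # args carries the arguments configured on the trigger, if any.
    threshold = float(args["threshold"]) if args and "threshold" in args else 90.0
    influxdb3_local.info("running with threshold", threshold)
```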
Paul Dix 0da0785960
feat: Finish wiring up WAL plugin with trigger (#25781)
This updates the WAL so that it can have new file notifiers added that will get updated when the wal flushes. The processing engine now implements the WALNotifier trait.

I've updated the CLI test for creating a trigger to run an end-to-end test that defines a plugin, creates a trigger, writes data into the database, triggering the plugin, which writes summary statistics back into the database in a different table. The test queries the destination table to confirm that the plugin worked.
2025-01-10 12:56:49 -05:00
Trevor Hilton c71dafc313
refactor: rename metadata cache to distinct value cache (#25775) 2025-01-10 08:48:51 -05:00
Paul Dix 7230148b58
feat: Update WAL plugin for new structure (#25777)
* feat: Update WAL plugin for new structure

This ended up being a very large change set. In order to get around circular dependencies, the processing engine had to be moved into its own crate, which I think is ultimately much cleaner.

Unfortunately, this required changing a ton of things. There's more testing and things to add on to this, but I think it's important to get this through and build on it.

Importantly, the processing engine no longer resides inside the write buffer. Instead, it is attached to the HTTP server. It is now able to take a query executor, write buffer, and WAL so that the full range of functionality of the server can be exposed to the plugin API.

There are a bunch of system-py feature flags littered everywhere, which I'm hoping we can remove soon.

* refactor: PR feedback
2025-01-10 05:52:33 -05:00
Paul Dix 2d18a61949
feat: Add query API to Python plugins (#25766)
This ended up being a couple things rolled into one. In order to add a query API to the Python plugin, I had to pull the QueryExecutor trait out of server into a place so that the python crate could use it.

This implements the query API, but also fixes up the WAL plugin test CLI a bit. I've added a test in the CLI section so that it shows end-to-end operation of the WAL plugin test API and exercise of the entire Plugin API.

Closes #25757
2025-01-09 20:13:20 -05:00
Michael Gattozzi be79008fb1
fix: get create token and meta_cache (#25773)
* fix: get create token subcommand to work again

In #25737 token creation was broken: attempting to create a client would
hit an unreachable branch. The fix was to just make that branch
create a Client that won't do anything. We also add a test to make sure
that if there is a regression we'll catch it in the future.

Closes #25764

* fix: make client turn a duration into seconds/u64

This commit fixes the creation of a metadata cache made via the cli by
actually sending a time in seconds as a u64 and not as a duration. With
that we can now make metadata caches again when the `--max-age` flag is
specified.

This commit also adds a regression test for the CLI so that we can catch
this again in the future.
2025-01-09 12:05:21 -05:00
Trevor Hilton 23866ef3d1
fix: /query API emits empty result as JSON (#25769) 2025-01-09 09:26:30 -05:00
Trevor Hilton dfc853d903
feat: handle params in request body for /query API (#25762)
Closes #25749

This changes the `/query` API handler so that the parameters can be passed in either the request URI or in the request body for either a `GET` or `POST` request.

Parameters can be specified in the URI, the body, or both; if they are specified in both places, those in the body will take precedence.

Error variants in the HTTP server code related to missing request parameters were updated to return `400` status.
2025-01-07 19:53:29 -05:00
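A sketch of the two ways of passing parameters described in the commit above, assuming a local server on the default port and a database named `mydb`:
```
import requests

URL = "http://127.0.0.1:8181/query"

# Parameters in the URI of a GET request:
r = requests.get(URL, params={"db": "mydb", "q": "SELECT * FROM cpu"})

# The same parameters in the body of a POST request (form-encoded here,
# which is an assumption); body values take precedence over URI values.
r = requests.post(URL, data={"db": "mydb", "q": "SELECT * FROM cpu"})
print(r.status_code, r.json())
```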
Trevor Hilton 94b53f9bd0
test: do not pass TEST_LOG to log filter in e2e tests (#25761) 2025-01-07 19:48:56 -05:00
Trevor Hilton 3efd49066e
fix: bind to correct port for e2e tests (#25758)
* fix: bind to correct port for e2e tests

This also fixes up some log messages on server start for naming

* chore: do not pass value in TEST_LOG env var to CI tests
2025-01-07 12:23:59 -05:00
Trevor Hilton 6524f383ba
feat: show databases CLI/API (#25748)
_Follows #25737 (keeping in draft until that merges)_

Closes #25745 

This PR provides both a CLI and underlying API for listing databases in the InfluxDB 3 Core server. Details are below.

There was already a method to list databases for the query executor for InfluxQL; this works by exposing that via the `HttpApi` in `influxdb3_server`.

However, one thing that we may address is that the query result for that uses `iox::database` as the column name. If we are removing references to `iox`, then we may want to just have it as `database`. I left it as is, for now, because I wanted to keep code churn down and wasn't sure why we use that prefix in the first place for the `SHOW DATABASES` and `SHOW RETENTION POLICIES` InfluxQL queries.

## Details

### CLI

This PR provides the `influxdb3 show` CLI:
```
influxdb3 show -h
List resources on the InfluxDB 3 Core server

Usage: influxdb3 show <COMMAND>

Commands:
  databases  List databases
  help       Print this message or the help of the given subcommand(s)

Options:
  -h, --help  Print help information
```
with the ability to list databases:
```
influxdb3 show databases -h
List databases

Usage: influxdb3 show databases [OPTIONS]

Options:
  -H, --host <HOST_URL>         The host URL of the running InfluxDB 3 Core server [env: INFLUXDB3_HOST_URL=] [default: http://127.0.0.1:8181]
      --token <AUTH_TOKEN>      The token for authentication with the InfluxDB 3 Core server [env: INFLUXDB3_AUTH_TOKEN=]
      --show-deleted            Include databases that were marked as deleted in the output
      --format <OUTPUT_FORMAT>  The format in which to output the list of databases [default: pretty] [possible values: pretty, json, json_lines, csv]
  -h, --help                    Print help information
```
Since this uses the query executor, we can pass a `--format` argument to get the output as JSON, CSV, or JSONL, but by default, it uses the `pretty` format:
```
influxdb3 show databases
+---------------+
| iox::database |
+---------------+
| bar           |
+---------------+
```
The `--show-deleted` flag will have the `deleted` column displayed as well as any databases that have been marked as deleted:
```
influxdb3 show databases --show-deleted
+---------------------+---------+
| iox::database       | deleted |
+---------------------+---------+
| bar                 | false   |
| foo-20250105T202949 | true    |
+---------------------+---------+
```

### API

The API to list databases can be invoked via:
```
GET /api/v3/configure/database
```
with optional parameters:
* `format`: `pretty`, `json`, `csv`, `parquet`, or `jsonl`
* `show_deleted`: `bool`, defaults to `false`

Note that `database` is singular in the API endpoint, to be consistent with the other database-related create/delete API endpoints. We could change it to be plural `databases` if that is the convention we want to go with.
2025-01-06 21:08:12 -05:00
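A sketch of invoking the listing API described in the commit above, under the same default host the CLI uses:
```
import requests

resp = requests.get(
    "http://127.0.0.1:8181/api/v3/configure/database",
    params={"format": "json", "show_deleted": "true"},
)
print(resp.json())
```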
Michael Gattozzi f793d31f63
feat: Cleanup CLI flags for InfluxDB 3 Core (#25737)
This makes quite a few major changes to our CLI and how users interact
with it:

1. All commands are now in the form <verb> <noun>; this was to make the
   commands consistent. We had last-cache as a noun, but serve as a
   verb at the top level. Given that we could only create or delete,
   all noun-based commands have been moved under a create and a delete
   command
2. --host short form is now -H not -h which is reassigned to -h/--help
   for shorter help text and is in line with what users would expect
   for a CLI
3. Only the needed items from clap_blocks have been moved into
   `influxdb3_clap_blocks` and any IOx specific references were changed
   to InfluxDB 3 specific ones
4. References to InfluxDB 3.0 OSS have been changed to InfluxDB 3 Core
   in our CLI tools
5. --dbname has been changed to --database to be consistent with --table
   in many commands. The short -d flag still remains. In the create/
   delete command for the database however the name of the database is
   a positional arg

   e.g. `influxdb3 create database foo` rather than
        `influxdb3 database create --dbname foo`
6. --table has been removed from the delete/create command for tables
   and is now a positional arg much like database
7. clap_blocks was removed as dependency to avoid having IOx specific
   env vars
8. --cache-name is now an optional positional arg for last_cache and meta_cache
9. last-cache/meta-cache commands are now last_cache and meta_cache respectively

Unfortunately we have quite a few options for running the software and I
couldn't cut down on them, but at least with this, commands and options
will be more discoverable, and we have full control over our CLI options
now.

Closes #25646
2025-01-06 18:51:55 -05:00
Michael Gattozzi d78756f02f
feat: allow n/u for precision in v1/v2 write apis (#25742)
Prior to this change we would deny writes that used n and u for the
precision argument. We only accepted ns and us for those APIs. However,
to be backwards compatible we need to accept writes with n and u as
well. This is mostly just upgrading our deps, as this was a change that
landed in IOx first. We test that it works for our code by adding test
cases for these precisions in this repo.
2025-01-03 19:23:33 -05:00
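A sketch of a v1-compatible write using the now-accepted `n` precision; the host, database name, and line protocol are illustrative:
```
import requests

# 'n' (nanoseconds) is accepted again alongside 'ns'.
r = requests.post(
    "http://127.0.0.1:8181/write",
    params={"db": "mydb", "precision": "n"},
    data="cpu,host=a usage=0.5 1735689600000000000",
)
print(r.status_code)
```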
Michael Gattozzi ccda3dd3a9
feat: remove required field restriction for tables (#25738)
This commit removes the required fields restriction when using the CLI
or the API to create a new table. As users can't write via the line
protocol without a field, this is fine, and the schema will be updated on
write. This expands the test to check for the correct response code and
makes sure that we can both query the empty table and write new data
to it.

Closes #25735
2025-01-03 18:10:56 -05:00
Jackson Newhouse 7aa8f41268
feat(processing_engine): Add CLI support for plugins and triggers (#25731) 2025-01-03 12:11:52 -08:00
Michael Gattozzi 636503ff46
fix: Return a 422 for limits on a partial write (#25729)
Prior to this change we would error correctly with a 422 if a limit was
hit. However, we would not send back the correct error in the case of a
limit being hit that caused a partial write. This change fixes that by
checking the error messages for failed lines; if one is found that was
caused by a limit being hit, a 422 is returned rather than a 400, as
we would have been able to process the line had the limit not been
hit.

Closes #25208
2025-01-02 14:01:23 -05:00
Trevor Hilton de227b95d9
refactor: cleanup v3 write API and series key method on catalog (#25723)
Store the series key column names on the TableDefinition in the catalog
so that looking up the series key by column name is more efficient

Remove the /api/v3/write API and related code/tests
2024-12-30 09:32:54 -05:00
Michael Gattozzi c764d37636
feat: Add json lines support to query output (#25698) 2024-12-20 14:57:19 -05:00
Michael Gattozzi 048e45fff9
fix: change 200 to 204 for compatibility of v1/v2 (#25699) 2024-12-20 14:56:18 -05:00
Michael Gattozzi 2a132f1edf
fix: change failed query with missing db to 404 (#25693) 2024-12-20 11:53:13 -05:00
Paul Dix 0eab724bee
fix: Field not in queryable buffer (#25691) 2024-12-19 19:22:47 -05:00
Michael Gattozzi e51bea65b4
feat: create DB and Tables via REST and CLI (#25687)
* feat: create DB and Tables via REST and CLI

This commit does a few things:

1. It brings the database command naming scheme for types inline with
   the rest of the CLI types
2. It brings the table command naming scheme for types inline with
   the rest of the CLI types
3. Adds tests to check that the number of databases is not exceeded and
   that you cannot create more than one database with a given name.
4. Adds tests to check that you can create a table, put data into it,
   and query it
5. Adds tests for the CLI for both the database and table commands
6. It creates an endpoint to create databases given a JSON blob
7. It creates an endpoint to create tables given a JSON blob

With this users can now create a database or table without first needing
to write to the database via the line protocol!

Closes #25640
Closes #25641
2024-12-19 16:01:34 -05:00
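A sketch of the new creation endpoints described above, assuming they live beside the `/api/v3/configure/database` listing endpoint seen earlier in this log; the exact paths and JSON field names here are assumptions:
```
import requests

BASE = "http://127.0.0.1:8181/api/v3/configure"

# Create a database from a JSON blob (field name "db" is an assumption).
requests.post(f"{BASE}/database", json={"db": "foo"})

# Create a table in that database (path and field names are assumptions).
requests.post(f"{BASE}/table", json={"db": "foo", "table": "bar"})
```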
Paul Dix 56576402cc
fix: Ensure tags are never null (#25680)
* fix: Ensure tags are never null

This injects empty strings into tags for any rows in the buffer where the tag value is null. This is required because the tags are what make up the series key, which must have all non-null values.

There is an ongoing discussion about what the real behavior should be here, but for now this will unblock users whose workloads break without this behavior. Discussion is in #25674.

Fixes #25648

* fix: clippy failures
2024-12-18 17:09:23 -05:00
Jackson Newhouse 486d79d801
feat(processing_engine): initial implementation of Processing Engine plugins and triggers (#25639) 2024-12-13 14:11:38 -08:00
Trevor Hilton 3a66fe0ec3
fix: flaky metadata cache JSON test (#25638) 2024-12-10 09:47:00 -08:00
Trevor Hilton ef3599d7ce
test: metadata cache query using JSON format (#25626) 2024-12-06 15:36:49 -05:00
Michael Gattozzi d2fbd65a44
feat: Deny extra tags on write APIs (#25596)
This commit does three important major changes:

1. We will deny writes to the v1, v2, and v3 write apis that add new tags in
   subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sorting order by the series key, which is the order the keys came in

With these changes we have consistency across the various write APIs and can
make optimizations and build future features with the assumption that we have a series key.

Closes #25585
2024-12-03 12:10:26 -05:00
Trevor Hilton 13ab41fa1f
feat: CLI to create and delete metadata caches (#25595)
This adds two new CLI commands to the `influxdb3` binary:
* `influxdb3 meta-cache create`
* `influxdb3 meta-cache delete`

To create and delete metadata caches, respectively.

A basic integration test was added to check that this works E2E.

The `influxdb3_client` was updated with methods to create and delete
metadata caches, and which is what the CLI commands use under the hood.
2024-11-28 09:04:20 -05:00
Trevor Hilton 9ead1dfe4b
feat: meta_caches system table (#25593)
This adds a new system table "meta_caches" that allows users to view the
state of their metadata caches on a per-db basis

An integration test was added to verify that it works.
2024-11-28 08:57:02 -05:00
Trevor Hilton 234d37329a
feat: metacache REST APIs to create and delete (#25587) 2024-11-27 08:41:46 -05:00
praveen-influx 3cde24feb4
feat: delete table (#25572)
This commit allows soft deleting a table. For a user, the following
command will soft delete a table (bar) in db (foo)

```
influxdb3 table delete --dbname foo --table bar --host $host
```

- Added `soft_delete_table` to `DatabaseManager` trait, which already
  hosts `soft_delete_database` method. The code roughly follows the same
  flow as db delete. Like the db schema, it does clone-on-write because
  the reference is behind an Arc; `Arc::make_mut` is used in this
  change.
- Moved db delete related cli parser under "manage" module that has both
  db and table delete functionality
- Some minor tidy-ups (removing unused methods, renaming a method so that
  the order in its name matches the actual return type, e.g.
  `table_id_and_schema` should return (id, schema) and not (schema, id))

closes: https://github.com/influxdata/influxdb/issues/25561
2024-11-22 08:42:45 +00:00
praveen-influx 33c2d47ba9
feat: drop/delete database (#25549)
* feat: drop/delete database

This commit allows soft deletion of a database using the `influxdb3
database delete <db_name>` command. The write buffer and last value cache are
cleared as well.

closes: https://github.com/influxdata/influxdb/issues/25523

* feat: reuse same code path when deleting database

- In the previous commit, deleting a database immediately triggered
  clearing the last cache and query buffer, but on restart the same
  logic had to be repeated to handle databases deleted during startup.
  This commit removes the immediate deletion by explicitly calling the
  necessary methods, moving the logic to `apply_catalog_batch` (which
  already applies `CatalogOp`) and clearing the cache and buffer in the
  `buffer_ops` method, which has hooks into the other places.

closes: https://github.com/influxdata/influxdb/issues/25523

* feat: use reqwest query api for query param

Co-authored-by: Trevor Hilton <thilton@influxdata.com>

* feat: include deleted flag in DatabaseSnapshot

- `DatabaseSchema` serialization/deserialization is delegated to
 `DatabaseSnapshot`, so the `deleted` flag should be included in
 `DatabaseSnapshot` as well.
- insta test snapshots fixed

closes: https://github.com/influxdata/influxdb/issues/25523

* feat: address PR comments + tidy ups

---------

Co-authored-by: Trevor Hilton <thilton@influxdata.com>
2024-11-19 16:08:14 +00:00
praveen-influx c2b8a3a355
feat: add column names to last cache sys table (#25521)
* feat: add column names to last cache sys table

closes: https://github.com/influxdata/influxdb/issues/25511

* feat: move all `get_by_id` methods to take reference in schema
2024-11-08 16:08:30 +00:00
Trevor Hilton d26a73802a
refactor: move to `ColumnId` and `Arc<str>` as much as possible (#25495)
Closes #25461 

_Note: the first three commits on this PR are from https://github.com/influxdata/influxdb/pull/25492_

This PR makes the switch from using names for columns to the use of `ColumnId`s. Where column names are used, they are represented as `Arc<str>`. This impacts most components of the system, and the result is a fairly sizeable change set. The area where the most refactoring was needed was in the last-n-value cache.

One of the themes of this PR is to rely less on the arrow `Schema` for handling the column-level information, and tracking that info in our own `ColumnDefinition` type, which captures the `ColumnId`.

I will summarize the various changes in the PR below, and also leave some comments in-line in the PR.

## Switch to `u32` for `ColumnId`

The `ColumnId` now follows the `DbId` and `TableId`, and uses a globally unique `u32` to identify all columns in the database. This was a change from using a `u16` that was only unique within the column's table. This makes it easier to follow the patterns used for creating the other identifier types when dealing with columns, and should reduce the burden of having to manage the state of a table-scoped identifier.

## Changes in the WAL/Catalog

* `WriteBatch` now contains no names for tables or columns and purely uses IDs
* This PR relies on `IndexMap` for `_Id`-keyed maps so that the order of elements in the map is consistent. This has important implications, namely, that when iterating over an ID map, the elements therein will always be produced in the same order which allows us to make assertions on column order in a lot of our tests, and allows for the re-introduction of `insta` snapshots for serialization tests. This map type provides O(1) lookups, but also provides _fast_ iteration, which should help when serializing these maps in write batches to the WAL.
* Removed the need to serialize the bi-directional maps for `DatabaseSchema`/`TableDefinition` via use of `SerdeVecMap` (see comments in-line)  
* The `tables` map in `DatabaseSchema` now stores an `Arc<TableDefinition>` so that the table definition can be shared around more easily. This means that changes to tables in the catalog need to do a clone, but we were already having to do a clone for changes to the DB schema.
* Removal of the `TableSchema` type and consolidation of its parts/functions directly onto `TableDefinition`
* Added the `ColumnDefinition` type, which represents all we need to know about a column, and is used in place of the Arrow `Schema` for column-level meta-info. We were previously relying heavily on the `Schema` for iterating over columns, accessing data types, etc., but this gives us an API that we have more control over for our needs. The `Schema` is still held at the `TableDefinition` level, as it is needed for the query path, and is maintained to be consistent with what is contained in the `ColumnDefinition`s for a table.

## Changes in the Last-N-Value Cache

* There is a bigger distinction between caches that have an explicit set of value columns, and those that accept new fields. The former should be more performant.
* The Arrow `Schema` is managed differently now: it used to be updated more than it needed to be, and now is only updated when a row with new fields is pushed to a cache that accepts new fields.

## Changes in the write-path

* When ingesting, during validation, field names are qualified to their associated column ID
2024-11-01 16:42:57 -04:00
Trevor Hilton ce9276d96d
refactor: changes needed for IDs in pro (#25479)
* refactor: roll back addition of DatabaseSchemaProvider trait

* refactor: make parquet metrics optional in telemetry for pro

* refactor: make ParquetFileId Hash

* refactor: test harness logging
2024-10-21 15:17:02 -04:00
praveen-influx 72dcd1866f
feat(telemetry): adds reads and writes (#25409)
- instrumented code to get read and write measurement
- introduced EventsBucket for collection of reads/writes
- sampler now samples every minute for all metrics (including
  reads/writes)
- other tidy ups

closes: https://github.com/influxdata/influxdb/issues/25372
2024-10-01 18:34:00 +01:00
praveen-influx 70643d0136
feat(auth): allow Token or Bearer as valid schemes (#25397)
closes: https://github.com/influxdata/influxdb/issues/25394
2024-09-27 13:40:28 +01:00
Trevor Hilton ed2050f448
test: split e2e test harness up for pro (#25322)
This makes some changes to the TestServer E2E framework, which is used
for running integration tests in the influxdb3 crate. These changes are
meant so that we can more easily split the code for pro.
2024-09-13 11:36:59 -04:00
Trevor Hilton ad2ca83d72
chore: sync to latest core (#25284) 2024-09-06 13:49:38 -04:00
Trevor Hilton 7474c0b3b4
feat: add `system.parquet_files` table (#25225)
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.

Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.

The files are accessed through the QueryableBuffer.

In addition, a test was added to check success and failure modes of the
new system table query.

Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
2024-08-08 08:46:26 -04:00
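A sketch of querying the new system table through the SQL query API (the `/api/v3/query_sql` endpoint from the commits further down this log); the `db`, `q`, and `format` parameter names are assumptions consistent with that API:
```
import requests

r = requests.get(
    "http://127.0.0.1:8181/api/v3/query_sql",
    params={
        "db": "mydb",
        # system.parquet_files queries must include a table_name predicate.
        "q": "SELECT * FROM system.parquet_files WHERE table_name = 'cpu'",
        "format": "json",
    },
)
print(r.json())
```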
Trevor Hilton b0beab5b0c
feat: use host identifier prefix in object store paths (#25224)
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).

The persister retains the host identifier prefix, and uses it when
constructing paths.

The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.

The influxdb3 binary requires a new argument 'host-id' to be passed that
is used to specify the prefix.
2024-08-07 16:23:36 -04:00
Paul Dix 43877beb15
fix: query bugs with buffer (#25213)
* fix: query bugs with buffer

This fixes three different bugs with the buffer. The first was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.

I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining with different states: only data in buffer, only data in parquet files, and data across both. This showed two bugs: one where the parquet data was being doubled up (parquet chunks were being created in write buffer mod and in queryable buffer). The second was that the timestamp min/max on table buffer would panic if the buffer was empty.

* refactor: PR feedback

* fix: fix wal replay and buffer snapshot

Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay wal data and that snapshots work properly with replayed data.

* fix: run cargo update to fix audit
2024-08-07 16:00:17 -04:00
Paul Dix 3265960010
refactor: implement new wal and refactor write buffer (#25196)
* feat: refactor WAL and WriteBuffer

There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.

The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.

The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).

The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is being written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.

Overall, this structure of a single WAL, with flushes, snapshots, and chunks in the queryable buffer, led to a simpler setup for the write buffer. I was able to clear out quite a bit of code related to the old segment organization.

Fixes #25142 and fixes #25173

* refactor: address PR feedback

* refactor: wal to replay and background flush on new

* chore: remove stray println
2024-08-01 15:04:15 -04:00
Trevor Hilton 10dd22b6de
fix: last cache catalog configuration tracks explicit vs. non-explicit value columns (#25185)
* fix: catalog support for last caches that accept new fields

Last cache definitions in the catalog were augmented to either store an
explicit set of column names (including time), or to accept new fields.

This will allow these caches to be loaded properly on server restart such
that all non-key columns are cached.

* refactor: use tagged serialization for last cache values def

This also updated the client code to accept the new structure in
influxdb3_client.

* test: add e2e tests to catch regressions in influxdb3_client

* chore: cargo update for audit
2024-07-24 11:00:40 -04:00
Trevor Hilton 7752d03a79
feat: `last_caches` system table (#25166)
Added a new system table, system.last_caches, to enable queries that display information about last caches in a database.

You can query the table like so:

SELECT * FROM system.last_caches

Since queries are scoped to a database, this will only show last caches configured for the database being queried.

Results look like so:

+-------+----------------+----------------+---------------+-------+-----+
| table | name           | key_columns    | value_columns | count | ttl |
+-------+----------------+----------------+---------------+-------+-----+
| mem   | mem_last_cache | [host, region] | [time, usage] | 1     | 60  |
+-------+----------------+----------------+---------------+-------+-----+

An end-to-end test was added to verify queries to the system.last_caches table.
2024-07-17 09:14:51 -04:00
Trevor Hilton e8d9b02818
feat: `DELETE` last cache API (#25162)
Adds an API for deleting last caches.
- The API allows parameters to be passed in either the request URI query string, or in the body as JSON
- Some additional error modes were handled, specifically, for better HTTP status code responses, e.g., invalid content type is now a 415, URL query string parsing errors are now 400
- An end-to-end test was added to check behaviour of the API
2024-07-16 10:57:48 -04:00
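A sketch of the delete API with parameters in a JSON body; the endpoint path and field names are assumptions modeled on the create API in the next commit:
```
import requests

r = requests.delete(
    "http://127.0.0.1:8181/api/v3/configure/last_cache",
    json={"db": "mydb", "table": "cpu", "name": "cpu_last_cache"},
)
print(r.status_code)
```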
Trevor Hilton 56488592db
feat: API to create last caches (#25147)
Closes #25096

- Adds a new HTTP API that allows the creation of a last cache, see the issue for details
- An E2E test was added to check success/failure behaviour of the API
- Adds the mime crate, for parsing request MIME types, but this is only used in the code I added - we may adopt it in other APIs / parts of the HTTP server in future PRs
2024-07-16 10:32:26 -04:00
Trevor Hilton 8fd50cefe1
chore: sync latest core (#25138)
* chore: sync latest core

* chore: clippy
2024-07-10 12:25:09 -04:00
Jean Arhancet 1fd355ed83
refactor: v1 recordbatch to json (#25085)
* refactor: refactor serde json to use recordbatch

* fix: cargo audit with cargo update

* fix: add timestamp datatype

* fix: add timestamp datatype

* fix: apply feedbacks

* fix: cargo audit with cargo update

* fix: add timestamp datatype

* fix: apply feedbacks

* refactor: test data conversion
2024-07-05 09:21:40 -04:00
Jean Arhancet b6718e59e3
feat: add csv influx v1 (#25030)
* feat: add csv influx v1

* fix: clippy error

* fix: cargo.lock

* fix: apply feedbacks

* test: add csv integration test

* fix: cargo audit
2024-06-25 08:45:55 -04:00
Trevor Hilton 5cb7874b2c
feat: v3 write API with series key (#25066)
Introduce the experimental series key feature to monolith, along with the new `/api/v3/write` API which accepts the new line protocol to write to tables containing a series key.

Series key
* The series key is supported in the `schema::Schema` type by the addition of a metadata entry that stores the series key members in their correct order. Writes that are received to `v3` tables must have the same series key for every single write.

Series key columns are `NOT NULL`
* Nullability of columns is enforced in the core `schema` crate based on a column's membership in the series key. So, when building a `schema::Schema` using `schema::SchemaBuilder`, the arrow `Field`s that are injected into the schema will have `nullable` set to false for columns that are part of the series key, as well as the `time` column.
* The `NOT NULL` _constraint_, if you can call it that, is enforced in the buffer (see [here](https://github.com/influxdata/influxdb/pull/25066/files#diff-d70ef3dece149f3742ff6e164af17f6601c5a7818e31b0e3b27c3f83dcd7f199R102-R119)) by ensuring there are no gaps in data buffered for series key columns.

Series key columns are still tags
* Columns in the series key are annotated as tags in the arrow schema, which for now means that they are stored as Dictionaries. This was done to avoid having to support a new column type for series key columns.

New write API
* This PR introduces the new write API, `/api/v3/write`, which accepts the new `v3` line protocol. Currently, the only part of the new line protocol proposed in https://github.com/influxdata/influxdb/issues/24979 that is supported is the series key. New data types are not yet supported for fields.

Split write paths
* To support the existing write path alongside the new write path, a new module was set up to perform validation in the `influxdb3_write` crate (`write_buffer/validator.rs`). This re-uses the existing write validation logic, and replicates it with needed changes for the new API. I refactored the validation code to use a state machine over a series of nested function calls to help distinguish the fallible validation/update steps from the infallible conversion steps.
* The code in that module could potentially be refactored to reduce code duplication.
2024-06-17 14:52:06 -04:00
Jean Arhancet 62d1c67b14
refactor: remove arrow_batchtes_to_json (#25046)
* refactor: remove arrow_batchtes_to_json

* test: query v3 json format
2024-06-10 15:25:12 -04:00
Trevor Hilton faab7a0abc
fix: writes with incorrect schema should fail (#25022)
* test: add reproducer for #25006
* fix: validate schema of lines in lp and return error for invalid fields
2024-05-29 09:48:50 -04:00
Trevor Hilton 220e1f4ec6
refactor: expose system tables by default in edge/pro (#25000) 2024-05-17 12:39:08 -04:00
Trevor Hilton 0201febd52
feat: add the `system.queries` table (#24992)
The system.queries table is now accessible when queries are initiated
in debug mode. Debug mode is not currently enabled via the HTTP API, so
for now the table is only accessible via the gRPC interface.

The system.queries table lists all queries in the QueryLog on the
QueryExecutorImpl.
2024-05-17 12:04:25 -04:00
Trevor Hilton 9354c22f2c
chore: remove _series_id (#24969)
Removed the _series_id column that stored a SHA256 hash of the tag set
for each write.

Updated all test assertions that made reference to it.

Corrected the column limits to no longer account for the additional
_series_id column.
2024-05-08 12:28:49 -04:00
Michael Gattozzi 2291ebeae7
feat: sort and dedupe on persist (#24870)
When persisting parquet files we now will sort and dedupe on persist using the
COMPACT operation implemented in IOx Query. Note that right now we don't choose
any column to sort on and default to no column. This means that we dedupe and
sort on whatever the default behavior is for the COMPACT operation. Future
changes can figure out what columns to sort by when compacting the data.
2024-04-03 15:13:36 -04:00
Trevor Hilton 1982244e65
chore: update to latest core (#24876)
* chore: update to latest core
2024-04-03 09:36:28 -04:00
Trevor Hilton e0465843be
feat: `/ping` API to serve version and revision (#24864)
* feat: /ping API to serve version

The /ping API was added, which is served for both GET and
POST requests. The API responds with a JSON body
containing the version and revision of the build.

A new crate was added, influxdb3_process, which
takes the process_info.rs module from the influxdb3
crate, and puts it in a separate crate so that other
crates (influxdb3_server) can depend on it. This was
needed in order to have access to the version and
revision values, which are generated at build time,
in the HTTP API code of influxdb3_server.

A E2E test was added to check that /ping works.

E2E TestServer can now have logs emitted using the
TEST_LOG environment variable.
2024-04-01 16:57:10 -04:00
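A sketch of calling the new API; the JSON field names are assumptions based on the commit message:
```
import requests

# /ping answers both GET and POST with the build's version and revision.
info = requests.get("http://127.0.0.1:8181/ping").json()
print(info["version"], info["revision"])
```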
Trevor Hilton 8d49b5e776
fix: tests broken by recent merge (#24852) 2024-03-28 15:41:49 -04:00
Trevor Hilton 7784749bca
feat: support v1 and v2 write APIs (#24793)
feat: support v1 and v2 write APIs

This adds support for two APIs: /write and /api/v2/write. These implement the v1 and v2 write APIs, respectively. In general, the difference between these and the new /api/v3/write_lp API is in the request parsing. We leverage the WriteRequestUnifier trait from influxdb3_core to handle parsing of v1 and v2 HTTP requests, to keep the error handling at that level consistent with distributed versions of InfluxDB 3.0. Specifically, we use the SingleTenantRequestUnifier implementation of the trait.

Changes:
- Addition of two new routes to the route_request method in influxdb3_server::http to serve /write and /api/v2/write requests.
- Database name validation was updated to handle cases where retention policies may be passed in /write requests, and to also reject empty names. A unit test was added to verify the validate_db_name function.
- HTTP request authorization in the router will extract the full Authorization header value, and store it in the request extensions; this is used in the write request parsing from the core iox_http crate to authorize write requests.
- E2E tests to verify correct HTTP request parsing / response behaviour for both /write and /api/v2/write APIs
- E2E tests to check that data sent in through /write and /api/v2/write can be queried back
2024-03-28 13:33:17 -04:00
Trevor Hilton c79821b246
feat: add `_series_id` to tables on write (#24842)
feat: add _series_id to tables on write

A new _series_id column is added to tables; this stores a 32 byte SHA256 hash of the tag set of a line of Line Protocol. The tag set is checked for sort order, then sorted if not already, before producing the hash.

Unit tests were added to check hashing and sorting functions work.

Tests that performed queries needed to be modified to account for the new _series_id column; in general, SELECT * queries were altered to use a select clause with specific column names.

The Column limit was increased to 501 internally, to account for the new _series_id column, but the user-facing limit is still 500
2024-03-26 15:22:19 -04:00
Trevor Hilton 2febaff24b
feat: support query parameters (#24804)
feat: support query parameters

This adds support for parameters in the /api/v3/query_sql
and /api/v3/query_influxql API

The new parameter `params` is supported in the URL query string
of a GET request, or in the JSON body of a POST request.

Two new E2E tests were added to check successful GET/POST as well
as error scenario when params are not provided for a query string
that would expect them.
2024-03-23 10:41:00 -04:00
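A sketch of a parameterized query via a POST body, per the commit above; the `$`-prefixed placeholder style and the `db`/`q` field names are assumptions:
```
import requests

r = requests.post(
    "http://127.0.0.1:8181/api/v3/query_sql",
    json={
        "db": "mydb",
        "q": "SELECT * FROM cpu WHERE host = $host",
        "params": {"host": "a"},
    },
)
print(r.json())
```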
BiKangNing 67cce99df7
chore: fix some typos (#24803)
Signed-off-by: depthlending <bikangning@outlook.com>
2024-03-22 09:32:37 -04:00
Trevor Hilton 84b85a9b1c
refactor: use `/query` for v1 query API endpoint (#24790)
feat: handle v1 query API at /query and update tests
2024-03-20 08:26:28 -04:00
Trevor Hilton 1fe414c14b
feat: support v1 query API (#24746)
feat: support the v1 query API

This PR adds support for the `/api/v1/query` API, which is meant to
serve the original InfluxDB v1 query API, to serve single statement
`SELECT` and `SHOW` queries. The response, which is returned as JSON,
can be chunked via the `chunked` and optional `chunk_size` parameters.
An optional `epoch` parameter can be supplied to have `time` column
timestamps converted to a UNIX epoch with the given precision.

## Buffering

The response is buffered by default, but if the `chunked` parameter
is not supplied, or is passed as `false`, then the entire query
result will be buffered into memory before being returned in the
response. This is how the original API behaves, so we are replicating
that here.

When `chunked` is passed as `true`, then the response will be a
stream of chunks, where each chunk is a self-contained response,
with the same structure as that of the non-chunked response. Chunks
are split up by the provided `chunk_size`, or by series, i.e.,
measurement, whichever comes first. The default chunk size is 10,000
rows.

Buffering is implemented with the `QueryResponseStream` and
`ChunkBuffer` types, the former implements the `Stream` trait,
which allows it to be streamed in the HTTP response directly with
`hyper`'s `Body::wrap_stream`. The `QueryResponseStream` is a wrapper
around the inner arrow `RecordBatchStream`, which buffers the
streamed `RecordBatch`es according to the requested chunking parameters.

## Testing

Two new E2E tests were added to test basic query functionality and
chunking behaviour, respectively. In addition, some manual testing
was done to verify that the InfluxDB Grafana plugin works with this
API.
2024-03-15 13:38:15 -04:00
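A sketch of a chunked request against this API (at the `/api/v1/query` path used at the time of this commit; a commit further up this log later moves it to `/query`):
```
import requests

r = requests.get(
    "http://127.0.0.1:8181/api/v1/query",
    params={"db": "mydb", "q": "SELECT * FROM cpu",
            "chunked": "true", "chunk_size": 1000, "epoch": "ms"},
    stream=True,
)
# Each line is a self-contained JSON response covering up to chunk_size
# rows, or one series, whichever comes first.
for line in r.iter_lines():
    print(line)
```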
Michael Gattozzi 1f8e079579
fix: slow limits test (#24761)
This commit re-enables the limits test after making a fix that has it
run in <1 second on my laptop vs the old behavior of >=30 seconds. It does
so by constructing one single write_lp request to create 1995 tables
rather than 1995 individual requests that make a table. This is far more
efficient.
2024-03-12 15:51:23 -04:00
Trevor Hilton 6849576ce0
feat: support authenticating v1 APIs with p parameter (#24760)
feat: support authenticating v1 APIs with p parameter

The p URL query parameter can be used to authenticate requests
to the /api/v1/query and /api/v1/write APIs

A test was added to ensure this works
2024-03-12 11:42:59 -04:00
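A sketch of the `p` parameter in use on the v1 query API (the token value is illustrative):
```
import requests

r = requests.get(
    "http://127.0.0.1:8181/api/v1/query",
    params={"db": "mydb", "q": "SELECT * FROM cpu", "p": "my-token"},
)
print(r.status_code)
```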
Trevor Hilton c4d651fbd1
feat: implement `Authorizer` to authorize all HTTP requests (#24738)
* feat: add `Authorizer` impls to authz REST and gRPC

This adds two new Authorizer implementations to Edge: Default and
AllOrNothing, which will provide the two auth options for Edge.

Both gRPC requests and HTTP REST request will be authorized by
the same Authorizer implementation.

The SHA512 digest action was moved into the `Authorizer` impl.

* feat: add `ServerBuilder` to construct `Server

A builder was added to the Server in this commit, as part of an
attempt to get the server creation to be more modular.

* refactor: use test server fixture in auth e2e test

Refactored the `auth` integration test in `influxdb3` to use the
`TestServer` fixture; part of this involved extending the fixture
to be configurable, so that the `TestServer` can be spun up with
an auth token.

* test: add test for authorized gRPC

A new end-to-end test, auth_grpc, was added to check that
authorization is working with the influxdb3 Flight service.
2024-03-08 14:18:17 -05:00
Trevor Hilton fad681c06c
chore: gate `limits` test behind a feature flag (#24737)
chore: gate limits test behind a feature flag
2024-03-06 14:37:38 -05:00
Michael Gattozzi ce8c158956
feat: Change Bearer Auth Token to use random bits (#24733)
This changes the 'influxdb3 create token' command so that it will just
automatically generate a completely random base64 encoded token prepended with
'apiv3_' that is then fed into a Sha512 algorithm instead of Sha256. The
user can no longer pass in a token to be turned into the proper output.

This also changes the server code to handle the change to Sha512 as well.

Closes #24704
2024-03-06 12:43:00 -05:00
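A sketch of the scheme described above, reproduced with the Python standard library (the token length is an assumption):
```
import base64
import hashlib
import secrets

# Random base64 token prefixed with 'apiv3_'; the server compares the
# SHA512 digest rather than the raw token.
token = "apiv3_" + base64.b64encode(secrets.token_bytes(32)).decode()
digest = hashlib.sha512(token.encode()).hexdigest()
print(token)
print(digest)
```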
Trevor Hilton 971676b498
test: add tests to check InfluxQL over Flight (#24732)
test: add tests to check InfluxQL over Flight
2024-03-05 15:41:30 -05:00
Trevor Hilton fb4f09d675
feat: support `SHOW RETENTION POLICIES` (#24729)
feat: support SHOW RETENTION POLICIES

Added support through the influxdb3 Query Executor to perform
SHOW RETENTION POLICIES queries, both on a specific database as well
as across all databases.

Test cases were added to check this functionality.
2024-03-05 15:40:58 -05:00
Michael Gattozzi a5082ec432
feat: Add limits for InfluxDB Edge (#24703)
This commit is the final piece for the write_lp endpoint. It adds limits
to Edge such that:

- There can only be 5 Databases
- There can only be 500 Columns per Table
- There can only be 2000 Tables across all Databases

We do this by modifying the catalog code to error out whenever one of
these limits would be exceeded before permanently modifying the schema.
These are hard coded limits and cannot be configured by the user.

Closes #24554
2024-03-04 10:24:33 -05:00
Trevor Hilton f7892ebee5
feat: add the `api/v3/query_influxql` API (#24696)
feat: add query_influxql api

This PR adds support for the /api/v3/query_influxql API. This re-uses code from the existing query_sql API, but some refactoring was done to allow for code re-use between the two.

The main change to the original code from the existing query_sql API was that the format is determined up front, in the event that the user provides some incorrect Accept header, so that the 400 BAD REQUEST is returned before performing the query.

Support of several InfluxQL queries that previously required a bridge to be executed in 3.0 was added:

SHOW MEASUREMENTS
SHOW TAG KEYS
SHOW TAG VALUES
SHOW FIELD KEYS
SHOW DATABASES

Handling of qualified measurement names in SELECT queries (see below)

This is accomplished with the newly added iox_query_influxql_rewrite crate, which provides the means to re-write an InfluxQL statement to strip out a database name and retention policy, if provided. Doing so allows the query_influxql API to have the database parameter optional, as it may be provided in the query string.

Handling qualified measurement names in SELECT

The implementation in this PR will inspect all measurements provided in a FROM clause and extract the database (DB) name and retention policy (RP) name (if not the default). If multiple DB/RP's are provided, an error is thrown.

Testing

E2E tests were added for performing basic queries against a running server on both the query_sql and query_influxql APIs. In addition, the test for query_influxql includes some of the InfluxQL-specific queries, e.g., SHOW MEASUREMENTS.

Other Changes

The influxdb3_client now has the api_v3_query_influxql method (and a basic test was added for this)
2024-03-01 12:27:38 -05:00
Michael Gattozzi 8fec1d636e
feat: Add write_lp partial write, name check, and precision (#24677)
* feat: Add partial write and name check to write_lp

This commit adds new behavior to the v3 write_lp http endpoint by
implementing both partial writes and checking the db name for validity.
It also sets the partial write behavior as the default now, whereas
before we would reject the entire request if one line was incorrect.
Users who *do* actually want that behavior can now opt in by putting
'accept_partial=false' into the url of the request.

We also check that the db name used in the request contains only
numbers, letters, underscores and hyphens and that it must start with
either a number or letter.

We also introduce a more standardized way to return errors to the user
as JSON that we can expand over time to give actionable error messages
to the user that they can use to fix their requests.

Finally tests have been included to mock out and test the behavior for
all of the above so that changes to the error messages are reflected in
tests, that both partial and not partial writes work as expected, and
that invalid db names are rejected without writing.

* feat: Add precision to write_lp http endpoint

This commit adds the ability to control the precision of the time stamp
passed in to the endpoint. For example if a user chooses 'second' and
the timestamp 20 that will be 20 seconds past the Unix Epoch. If they
choose 'millisecond' instead it will be 20 milliseconds past the Epoch.

Up to this point we assumed that all data passed in was of nanosecond
precision. The data is still stored in the database as nanoseconds.
Instead upon receiving the data we convert it to nanoseconds. If the
precision URL parameter is not specified we default to auto and take a
best effort guess at what the user wanted based on the order of
magnitude of the data passed in.

This change allows users finer-grained control over what precision
they want to use for their data, while trying our best to provide a
good user experience, have things work as expected, and avoid a
failure mode whereby a user wanted seconds but instead got
nanoseconds by default.
2024-02-27 11:57:10 -05:00
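A sketch of both behaviors described above; the `db` parameter name is an assumption:
```
import requests

# Opt out of the new partial-write default, and interpret the trailing
# timestamp (20) as seconds past the Unix epoch.
r = requests.post(
    "http://127.0.0.1:8181/api/v3/write_lp",
    params={"db": "mydb", "accept_partial": "false", "precision": "second"},
    data="cpu,host=a usage=0.5 20",
)
print(r.status_code, r.text)
```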
Trevor Hilton 298055e9fb
feat: support FlightSQL in 3.0 (#24678)
* feat: support FlightSQL by serving gRPC requests on same port as HTTP

This commit adds support for FlightSQL queries via gRPC to the influxdb3 service. It does so by ensuring the QueryExecutor implements the QueryNamespaceProvider trait, and the underlying QueryDatabase implements QueryNamespace. Satisfying those requirements allows the construction of a FlightServiceServer from the service_grpc_flight crate.

The FlightServiceServer is a gRPC server that can be served via tonic at the API surface; however, enabling this required some tower::Service wrangling. The influxdb3_server/src/server.rs module was introduced to house this code. The objective is to serve both gRPC (via the newly introduced tonic server) and standard REST HTTP requests (via the existing HTTP server) on the same port.

This is accomplished by the HybridService which can handle either gRPC or non-gRPC HTTP requests. The HybridService is wrapped in a HybridMakeService which allows us to serve it via hyper::Server on a single bind address.

End-to-end tests were added in influxdb3/tests/flight.rs. These cover some basic FlightSQL cases. A common.rs module was added that introduces some fixtures to aid in end-to-end tests in influxdb3.
2024-02-26 15:07:48 -05:00
Michael Gattozzi de102bc927
feat: Add All or Nothing Bearer token auth support (#24666)
This commit adds basic authorization support to Edge. Up to this point
we didn't have authorization at all and so the server would
receive and accept requests from anyone. This isn't exactly secure or
ideal for a deployment and so we add a basic form of authentication.

The way this works is that a user passes in a hex encoded sha256 hash of
a given token to the '--bearer-token' flag of the serve command. When
the server starts with this flag it will now check a header of the form
'Authorization: Bearer <token>' by making sure it is valid in the sense
that it is not malformed and that, when the token is hashed, it matches the
value passed in on the command line. The request is denied with either a
400 Bad Request if the header is malformed or a 401 Unauthorized if the
hash does not match or the header is missing.

The user is provided a new subcommand of the form: 'influxdb3 create
token <token>' where the output contains the command to run the server
with and what the header should look like to make requests.

I can see future work including multiple tokens and rotating between
them or adding new ones to a live service, but for now this shall
suffice.

As part of the commit end-to-end tests are included to run the server
and make requests against the HTTP API and to make sure that requests
are denied for being unauthorized, accepted for having the right header,
or denied for being malformed.

Also as part of this commit a small fix is included for 'Accept: */*'
headers. We were not checking for them, and if this header was included
we denied the request instead of sending back the default payload.
2024-02-20 15:34:39 -05:00
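A sketch of the flow described above: hash a token for the serve command, then present the raw token on requests (the write endpoint and `db` parameter here are illustrative):
```
import hashlib
import requests

token = "my-secret-token"

# The serve command takes the hex-encoded SHA256 hash of the token:
#   influxdb3 serve --bearer-token <sha256-hex>
print(hashlib.sha256(token.encode()).hexdigest())

# Requests then carry the raw token; a malformed header yields a
# 400 Bad Request and a mismatched hash a 401 Unauthorized.
r = requests.post(
    "http://127.0.0.1:8181/api/v3/write_lp",
    params={"db": "mydb"},
    headers={"Authorization": f"Bearer {token}"},
    data="cpu,host=a usage=0.5",
)
print(r.status_code)
```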