Commit Graph

160 Commits (8966cfb3d3e436f224b4fa2523f91eb78d427f73)

Author SHA1 Message Date
praveen-influx 4e6c5dc825
feat: expose path used by cache request (#25480) 2024-10-22 14:44:44 +01:00
Trevor Hilton ce9276d96d
refactor: changes needed for IDs in pro (#25479)
* refactor: roll back addition of DatabaseSchemaProvider trait

* refactor: make parquet metrics optional in telemetry for pro

* refactor: make ParquetFileId Hash

* refactor: test harness logging
2024-10-21 15:17:02 -04:00
Trevor Hilton 10b6a2810d
refactor: separate catalog schema API (#25468)
Separate out methods of the Catalog API that are used on the query side into a new trait `DatabaseSchemaProvider`. The new trait includes methods from the Catalog that get the underlying `DatabaseSchema` or interact with names/IDs.

This will allow for a separate implementation of the Catalog for pro that only needs to hold a replicated/combined in-memory view of one or more catalogs, without the persistence that a write buffer's catalog requires.

While in there I also switched the `QueryExecutorImpl::new` method to take an args struct to avoid the clippy lint.
2024-10-16 11:42:07 -04:00
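
A minimal sketch of the query-side split described above; the trait name `DatabaseSchemaProvider` comes from the commit, while the method names shown are assumptions for illustration:

```rust
use std::sync::Arc;

// Placeholder for the schema type the catalog hands out.
struct DatabaseSchema;

// Query-side methods split out of the Catalog; a pro catalog that only
// holds a replicated in-memory view can implement this without any
// persistence machinery.
trait DatabaseSchemaProvider {
    // Look up a database's schema by name.
    fn db_schema(&self, name: &str) -> Option<Arc<DatabaseSchema>>;
    // List the names of all known databases.
    fn db_names(&self) -> Vec<String>;
}
```
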
Trevor Hilton ead6d27c5f
refactor: improvements to the catalog API (#25463) 2024-10-11 13:44:43 -04:00
Michael Gattozzi eb24b3bc07
fix: lint fixes for #25388 (#25451) 2024-10-10 17:11:27 -04:00
Michael Gattozzi 724a7e99c3
feat: Add non-unique u16 Id to ColumnDefinition (#25388)
* feat: Add non-unique u16 Id to ColumnDefinition

This commit adds the column_id field to ColumnDefinition so that the
output for a Catalog will contain the id of that column. This is non
unique, whereas TableIds and DbIds will be unique. The column_id
corresponds to its index in the schema.

Closes #25386
2024-10-10 16:59:10 -04:00
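
A sketch of the resulting shape of a column definition; only `column_id` is taken from the commit, the other fields are assumptions:

```rust
// Non-unique u16 id: it mirrors the column's index in the table's
// schema, so columns in two different tables can share an id.
struct ColumnDefinition {
    column_id: u16, // index of this column in the schema
    name: String,
    // ... column type, nullability, etc.
}
```
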
Michael Gattozzi 219fc168ea
refactor: Move ParquetFileId into influxdb3_id (#25449) 2024-10-10 12:07:15 -04:00
praveen-influx 87f198a68b
fix: check num items to prune before pruning parquet cache (#25447)
When running the tests repeatedly the tests failed intermittently
as the background runner wakes up to prune the cache and the tests
are loading and removing the data. Checking whether `n_to_prune`
is greater than 0 before going ahead with pruning fixes the issue.

Closes: https://github.com/influxdata/influxdb/issues/25446
2024-10-10 14:03:26 +01:00
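
The fix amounts to an early return before any eviction work; a hedged sketch, with the surrounding type elided:

```rust
struct CacheOracle { /* cache state elided */ }

impl CacheOracle {
    fn n_to_prune(&self) -> usize {
        0 // placeholder: how many entries exceed the capacity
    }

    fn prune(&self) {
        // Check whether anything needs evicting *before* doing work,
        // so the background runner doesn't churn while callers are
        // concurrently loading and removing entries.
        let n_to_prune = self.n_to_prune();
        if n_to_prune == 0 {
            return;
        }
        // ... evict the n_to_prune least-recently-hit entries ...
    }
}
```
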
Jamie Strandboge 0835093c78
feat(circleci): add inclusivity checks (#25437)
* feat(circleci): add inclusivity checks

* chore(circleci): adjust package-validation for inclusive language

* chore: update tests for inclusive language
2024-10-09 08:01:31 -05:00
praveen-influx 1f1125c767
refactor: update docs and tests for the telemetry crate (#25432)
- Introduced traits, `ParquetMetrics` and `SystemInfoProvider` to enable
  writing easier tests
- Uses mockito for code that depends on reqwest::Client and also uses
  mockall to generally mock any traits like `SystemInfoProvider`
- Minor updates to docs
2024-10-08 15:45:13 +01:00
Michael Gattozzi c4534b06da
feat: move Table Id/Name mapping into DB Schema (#25436) 2024-10-08 10:03:55 -04:00
Trevor Hilton 533ebff1d7
fix: add host id to parquet file paths (#25428) 2024-10-04 07:44:42 -04:00
Michael Gattozzi eeb1aa7905
feat: swap over to DbId and TableId everywhere (#25421)
* feat: Add TableId and ColumnId

* feat: swap over to DbId and TableId everywhere

This commit swaps us over to using the DbId and TableId types everywhere
for our internal systems. Anywhere that's external facing, such as names
for last cache tables or line protocol parsing, use names. In these cases
we have the `Catalog` which keeps a map of TableIds and DbIds in a
bidirectional mapping for easy lookup i.e. id <-> names. While in essence
the change itself isn't that complicated given the nature of how much we
depended on names for things, the changes end up being quite invasive and
extensive. Luckily it shouldn't be too hard to review. Note this does 
not add the column ids which will be done in a follow up PR.

Closes #25375
Closes #25403
Closes #25404
Closes #25405
Closes #25412
Closes #25413
2024-10-03 14:47:46 -04:00
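
A sketch of the bidirectional id <-> name lookup the `Catalog` keeps, per the commit above; the `BiMap` name and `u32` id type here are assumptions:

```rust
use std::collections::HashMap;

// Two maps kept in lockstep so lookups are O(1) in both directions.
#[derive(Default)]
struct BiMap {
    id_to_name: HashMap<u32, String>,
    name_to_id: HashMap<String, u32>,
}

impl BiMap {
    fn insert(&mut self, id: u32, name: &str) {
        self.id_to_name.insert(id, name.to_string());
        self.name_to_id.insert(name.to_string(), id);
    }

    fn name(&self, id: u32) -> Option<&str> {
        self.id_to_name.get(&id).map(String::as_str)
    }

    fn id(&self, name: &str) -> Option<u32> {
        self.name_to_id.get(name).copied()
    }
}
```
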
Trevor Hilton 1cd930fec1
fix: flaky parquet cache for real this time (#25426)
This adds a watch channel to check for when prunes have happened on
the parquet cache oracle, so we can notify something, like a test, that
needs to know when a prune has happened.

This should make the cache eviction test in the parquet_cache module
not flake out anymore.
2024-10-03 10:59:27 -04:00
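
A minimal sketch of the watch-channel pattern described above, assuming tokio's `sync::watch`; the prune-counter payload is an illustrative choice:

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    // The cache oracle holds the sender; a test holds the receiver.
    let (prune_tx, mut prune_rx) = watch::channel(0u64);

    tokio::spawn(async move {
        // ... background pruner: after each prune pass, notify ...
        let _ = prune_tx.send(1);
        // keep the sender alive for the life of the oracle
        std::future::pending::<()>().await;
    });

    // Test side: block until at least one prune has happened.
    prune_rx.changed().await.expect("sender dropped");
    println!("prunes observed: {}", *prune_rx.borrow());
}
```
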
praveen-influx 8ccb580162
feat: telemetry report for parquet metrics (#25425)
- added a mechanism within PersistedFile to expose parquet file related
  metrics. The details are updated when a new snapshot is generated and
  also when all snapshots are loaded at process startup
- when creating the telemetry payload, these parquet metrics are looked
  up before the payload is sent to the server.

Closes: https://github.com/influxdata/influxdb/issues/25418
2024-10-03 15:11:40 +01:00
Trevor Hilton 7d37bbbce7
test: add test helpers for object store types (#25420)
This adds a new crate `influxdb3_test_helpers` which provides two object
store helper types: one to track request counts made through the store,
and one to synchronize requests made through the store.
2024-10-02 14:45:12 -04:00
Trevor Hilton dd1031be95
fix: flaky parquet cache test (#25417) 2024-10-01 14:22:51 -04:00
Trevor Hilton 83aca43eee
feat: enable parquet cache config through CLI (#25415) 2024-09-30 20:04:49 -04:00
Trevor Hilton a05c3fe87b
test: check parquet cache in the write buffer (#25411)
* test: check parquet cache in the write buffer

Checked that the parquet cache will serve queries when chunks are
requested from the write buffer. The added test also checks for get_range
requests made to the object store, which are typically made by DataFusion
to infer schema for parquet files.

* refactor: make parquet cache optional on write buffer

* test: add test to verify parquet cache function

This makes the parquet cache optional at the write buffer level, and adds
a test that verifies that the cache catches and prevents requests to the
object store in the event of a cache hit.
2024-09-30 14:54:12 -04:00
Trevor Hilton 4df7de8b9e
chore: remove unused parquet cache code (#25410) 2024-09-27 14:17:49 -04:00
Trevor Hilton 4184a331ea
refactor: parquet cache with less locking (#25389)
Closes #25382 
Closes #25383 

This refactors the parquet cache to use less locking by switching from using the `clru` crate to a hand-rolled cache implementation. The new cache still acts as an LRU, but it uses atomics to track hit-time per entry, and handles pruning in a separate process that is decoupled from insertion/gets to the cache.

The `Cache` type uses a [`DashMap`](https://docs.rs/dashmap/latest/dashmap/struct.DashMap.html) internally to store cache entries. This should help reduce lock contention, and also has the added benefit of not requiring mutability to insert  into _or_ get from the map.

The cache maps an `object_store::Path` to a `CacheEntry`. On a hit, an entry will have its `hit_time` (an `AtomicI64`) incremented. During a prune operation, entries that have the oldest hit times will be removed from the cache. See the `Cache::prune` method for details.

The cache is set up with a memory _capacity_ and a _prune percent_. The cache tracks memory used when entries are added, based on their _size_, and when a prune is invoked in the background, if the cache has exceeded its capacity, it will prune `prune_percent * cache.len()` entries from the cache.

Two tests were added:
* `cache_evicts_lru_when_full` to check LRU behaviour of the cache
* `cache_hit_while_fetching` to check that a cache entry hit while a request is in flight to fetch that entry will not result in extra calls to the underlying object store
2024-09-27 11:59:17 -04:00
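
A hedged sketch of the entry layout described above; `Cache`, `CacheEntry`, and `hit_time` are named in the commit, while the other fields are assumptions:

```rust
use std::sync::atomic::{AtomicI64, Ordering};

use bytes::Bytes;
use dashmap::DashMap;
use object_store::path::Path;

struct CacheEntry {
    data: Bytes,
    size: usize,         // bytes counted against the cache capacity
    hit_time: AtomicI64, // last access, as a timestamp in nanoseconds
}

#[derive(Default)]
struct Cache {
    map: DashMap<Path, CacheEntry>,
}

impl Cache {
    // No outer lock: DashMap shards internally, and the hit time is an
    // atomic, so neither gets nor hit tracking need `&mut self`.
    fn get(&self, path: &Path, now_ns: i64) -> Option<Bytes> {
        let entry = self.map.get(path)?;
        entry.hit_time.store(now_ns, Ordering::Relaxed);
        Some(entry.data.clone())
    }
}
```
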
Trevor Hilton 9c71b3ce25
feat: memory-cached object store for parquet files (#25377)
Part of #25347 

This sets up a new implementation of an in-memory parquet file cache in the `influxdb3_write` crate in the `parquet_cache.rs` module.

This module introduces the following types:
* `MemCachedObjectStore` - a wrapper around an `Arc<dyn ObjectStore>` that can serve GET-style requests to the store from an in-memory cache
* `ParquetCacheOracle` - an interface (trait) that can accept requests to create new cache entries in the cache used by the `MemCachedObjectStore`
* `MemCacheOracle` - implementation of the `ParquetCacheOracle` trait

## `MemCachedObjectStore`

This takes inspiration from the [`MemCacheObjectStore` type](1eaa4ed5ea/object_store_mem_cache/src/store.rs (L205-L213)) in core, but has some different semantics around its implementation of the `ObjectStore` trait, and uses a different cache implementation.

The reason for wrapping the object store is that this ensures that any GET-style request being made for a given object is served by the cache, e.g., metadata requests made by DataFusion.

The internal cache comes from the [`clru` crate](https://crates.io/crates/clru), which provides a least-recently used (LRU) cache implementation that allows for weighted entries. The cache is initialized with a capacity and entries are given a weight on insert to the cache that represents how much of the allotted capacity they will take up. If there isn't enough room for a new entry on insert, then the LRU item will be removed.

### Limitations of `clru`

The `clru` crate conveniently gives us an LRU eviction policy but its API may put some limitations on the system:
* gets to the cache require an `&mut` reference, which means that the cache needs to be behind a `Mutex`. If this slows down requests through the object store, then we may need to explore alternatives.
* we may want more sophisticated eviction policies than a straight LRU, i.e., to favour certain tables over others, or files that represent recent data over those that represent old data.

## `ParquetCacheOracle` / `MemCacheOracle`

The cache oracle is responsible for handling cache requests, i.e., to fetch an item and store it in the cache. In this PR, the oracle runs a background task to handle these requests. I defined this as a trait/struct pair since the implementation may look different in Pro vs. OSS.
2024-09-24 10:58:15 -04:00
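
A hedged sketch of the trait/struct pair and its background task; `ParquetCacheOracle` and `MemCacheOracle` are from the commit, while `CacheRequest` and the `register` method are assumptions:

```rust
use object_store::path::Path;
use tokio::sync::mpsc;

// A request to fetch an object and populate the cache with it.
struct CacheRequest {
    path: Path,
}

// Trait so the Pro build can supply a different oracle implementation.
trait ParquetCacheOracle: Send + Sync {
    fn register(&self, req: CacheRequest);
}

struct MemCacheOracle {
    tx: mpsc::UnboundedSender<CacheRequest>,
}

impl MemCacheOracle {
    // Must be called from within a tokio runtime.
    fn new() -> Self {
        let (tx, mut rx) = mpsc::unbounded_channel::<CacheRequest>();
        tokio::spawn(async move {
            while let Some(req) = rx.recv().await {
                // ... GET req.path from the inner store and insert the
                // result into the cache used by MemCachedObjectStore ...
                let _ = req.path;
            }
        });
        Self { tx }
    }
}

impl ParquetCacheOracle for MemCacheOracle {
    fn register(&self, req: CacheRequest) {
        let _ = self.tx.send(req);
    }
}
```
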
Michael Gattozzi 54d209d0bf
feat: Add u32 ID for Databases (#25302)
* feat: Remove lock for FileId tests

Since we now are using cargo-nextest in CI we can remove
the locks used in the FileId tests to make sure that we
have no race conditions

* feat: Add u32 ID for Databases

This commit adds a new DbId for databases. It also updates paths to use
that id as part of the name. When starting up the WriteBuffer we apply
the DbId from the persisted snapshot much like we do for ParquetFileId's

This introduces the influxdb3_id crate to avoid circular deps with ids.
The ParquetFileId should also be moved into this crate, but it's
outside the scope of this change.

Closes #25301
2024-09-18 11:44:04 -04:00
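
A sketch of the atomic-counter id pattern this and the ParquetFileId work use; `DbId` is from the commit, the method names are assumptions:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static NEXT_DB_ID: AtomicU32 = AtomicU32::new(0);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct DbId(u32);

impl DbId {
    // Hand out the next id across all threads.
    fn new() -> Self {
        Self(NEXT_DB_ID.fetch_add(1, Ordering::SeqCst))
    }

    // On startup, re-seed the counter from the persisted snapshot so
    // ids keep increasing across restarts.
    fn set_next_id(next: u32) {
        NEXT_DB_ID.store(next, Ordering::SeqCst);
    }
}

fn main() {
    DbId::set_next_id(42); // e.g., restored from a snapshot
    assert_eq!(DbId::new(), DbId(42));
    assert_eq!(DbId::new(), DbId(43));
}
```
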
praveen-influx 0c1fced7a4
refactor(catalog): catalog initialisation refactor (#25360)
- when no catalog is found, create a new one with instance id
  and persist it immediately
- enabled test-log in influxdb3_write

Closes: https://github.com/influxdata/influxdb/issues/25346
2024-09-18 15:23:17 +01:00
praveen-influx 245b49ae0e
feat: add instance id and host id to catalog (#25343)
- uses Arc<str> to represent a create-once, read-everywhere type
  of string
- updated snapshots for insta asserts, uses redaction to hardcode
  randomly generated UUID strings
- added methods to catalog to expose instance and host ids

Closes: https://github.com/influxdata/influxdb/issues/25315
2024-09-17 16:32:21 +01:00
Paul Dix 054ac7e8a3
feat: add host_id to PersistedSnapshot (#25335) 2024-09-16 09:49:11 -04:00
Paul Dix 341b8d7aff
feat: add watch to writebuffer for persisted snapshots (#25330) 2024-09-13 16:58:18 -04:00
Paul Dix f8b6cfac5b
refactor: Rename level 0 to gen1 to match compaction wording (#25317) 2024-09-12 15:57:30 -04:00
Trevor Hilton 68ea7fc428
feat: partition buffer chunks from the table buffer (#25304) 2024-09-10 14:22:59 -04:00
Trevor Hilton ad2ca83d72
chore: sync to latest core (#25284) 2024-09-06 13:49:38 -04:00
Michael Gattozzi fb9d7d02f3
fix: test failure due to global static (#25287)
This commit changes the write_buffer tests to acquire a lock so that
tests which need access to NEXT_FILE_ID won't have it overwritten,
since Rust tests run as one process and share the same statics.

While this isn't a problem for Edge as a singular process it is for
our tests. It's a bit unfortunate, but this solution is the easiest
and the locks are not held for long so there's no real big impact
on running these tests.

Closes #25286
2024-09-05 13:53:38 -04:00
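
A sketch of the workaround: a process-wide mutex that serializes every test touching the shared static (the test name below is hypothetical):

```rust
use std::sync::Mutex;

// All tests that touch NEXT_FILE_ID take this lock first, so they
// cannot race on the shared static within the single test process.
static TEST_LOCK: Mutex<()> = Mutex::new(());

#[test]
fn writes_assign_increasing_file_ids() {
    let _guard = TEST_LOCK.lock().unwrap();
    // ... safe to read or reset NEXT_FILE_ID here ...
}
```
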
Trevor Hilton 4e664d3da5
chore: updates for pro (#25285)
This applies some needed updates downstream in Pro. Namely,
* visibility changes that allow types to be used in the pro buffer
* allow parsing a WAL file sequence number from a file path
* remove duplicates when adding parquet files to a persisted files list
2024-09-04 16:02:07 -04:00
Trevor Hilton cd23be6e5c
test: repro for dropped wal files during snapshot (#25276)
* test: repro for dropped wal files during snapshot

This commit provides a reproducer for an issue in the snapshotting process
whereby WAL files are removed for writes that have not been persisted yet.

* fix: do not snapshot most recent WAL period

This addresses #25277

Snapshots that are triggered when the number of WAL periods in the
tracker grows to be >= 3x the snapshot size will not include the most
recent wal period, and doing so was removing WAL files containing data
that was not yet persisted.

* docs: add doc comment to reproducer test

* fix: broken parquet_files system table test

* fix: broken snapshot_tracker test

* fix: broken write_buffer test

* refactor: remove redundant helper function

* test: add another snapshot test to write_buffer

* test: future writes do not get dropped on restart
2024-09-03 20:21:33 -04:00
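
A hedged sketch of the selection logic after the fix, under the thresholds described above:

```rust
// When the tracker holds >= 3x the snapshot size of WAL periods,
// snapshot everything *except* the most recent period, which may
// still cover writes that have not been persisted yet.
fn periods_to_snapshot(wal_periods: &[u64], snapshot_size: usize) -> &[u64] {
    if wal_periods.len() >= 3 * snapshot_size {
        &wal_periods[..wal_periods.len() - 1]
    } else {
        &[]
    }
}
```
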
Trevor Hilton 3f7b0e655c
refactor: move catalog and last cache initialization out of write buffer (#25273)
* refactor: add catalog as dep to influxdb3

* refactor: move catalog and last cache initialization out of write buffer

The Write buffer used to handle initialization of the catalog and last
n value cache. This commit moves that logic out, so that both can be
initialized independently, and injected into the write buffer. This is to
enable downstream changes that will need to make sharing the catalog and
last cache possible.
2024-08-27 16:41:40 -04:00
Trevor Hilton e0e0075766
refactor: use more `dyn Trait` in write buffer (#25264)
* refactor: use dyn traits in WriteBufferImpl

This changes the WriteBufferImpl to use a dyn TimeProvider instead of
a generic in its type signature.

The Server type now uses a dyn WriteBuffer instead of using a generic
in its type signature, and the ServerBuilder was updated to accommodate
this accordingly.

These changes were to make downstream code changes more seamless.

* refactor: make some items pub

This makes functions on the QueryableBuffer and LastCache pub so that they
can be used downstream.
2024-08-23 14:21:20 -04:00
Trevor Hilton cbb7bc5901
refactor: remove Persister trait in favour of concrete impl (#25260)
The Persister trait was only implemented by a single type, because the
underlying ObjectStore interface has several ways of being mocked, we
mock that instead of the Persister interface.

This commit removes the Persister trait, and moves its interface/impl
directly on a single Persister type in the persister module of the
influxdb3_write crate.

deny.toml had some incorrect field names in license.exceptions, those
were fixed from 'crate' to 'name'.
2024-08-22 10:41:33 -04:00
Michael Gattozzi 0fec72d243
feat: Add u64 id field to ParquetFiles (#25258)
* feat: Add u64 id field to ParquetFiles

This commit does a few things:
1. It adds a u64 id field to ParquetFile
2. It gets this from an AtomicU64 so they can always use the most up to
   date one across threads
3. The last available file id is persisted as part of our snapshot
   process
4. The snapshot when loaded will set the AtomicU64 to the proper value

With this current work on the FileIndex in our Pro version will be able
to utilize these ids while doing compaction and we can refer to them
with a unique u64.

Closes #25250
2024-08-21 14:14:33 -04:00
Trevor Hilton 3b174a2f98
feat: snapshots track their own sequence number (#25255) 2024-08-20 15:55:47 -07:00
Paul Dix d9cb3a58c5
feat: Catalog apply_catalog_batch only updates if new (#25236)
* feat: Catalog apply_catalog_batch only updates if new

This updates the Catalog so that when applying a catalog batch it only updates the inner catalog and bumps the sequence number and updated tracker if there are new updates in the batch. Also does validation that the catalog batch schema is compatible with any existing.

Closes #25205

* feat: only persist catalog when updated (#25238)
2024-08-12 10:21:34 -04:00
Paul Dix 8bcc7522d0
feat: Add last cache create/delete to WAL (#25233)
* feat: Add last cache create/delete to WAL

This moves the LastCacheDefinition into the WAL so that it can be serialized there. This ended up being a pretty large refactor to get the last cache creation to work through the WAL.

I think I also stumbled on a bug where the last cache wasn't getting initialized from the catalog on reboot, so it wouldn't actually end up caching values. The refactored last cache persistence test in write_buffer/mod.rs surfaced this.

Finally, I also had to update the WAL so that it would persist if there were only catalog updates and no writes.

Fixes #25203

* fix: typos
2024-08-09 05:46:35 -07:00
Paul Dix 05ab730ae6
refactor: Make Level0Duration part of WAL (#25228)
* refactor: Make Level0Duration part of WAL

I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.

As an added bonus, this surfaced a bug where multiple L0 blocks getting persisted in the same snapshot wasn't supported. So now snapshot details can have many files per table.

* fix: have persisted files always return in descending data time order

* fix: sort record batches for test verification
2024-08-08 09:47:21 -04:00
Trevor Hilton 4067c91be0
fix: un-pub QueryableBuffer and fix compile errors (#25230) 2024-08-08 09:39:12 -04:00
Trevor Hilton 7474c0b3b4
feat: add `system.parquet_files` table (#25225)
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.

Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.

The files are accessed through the QueryableBuffer.

In addition, a test was added to check success and failure modes of the
new system table query.

Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt cleaned the code up a bit, so I kept it in the commit.
2024-08-08 08:46:26 -04:00
Trevor Hilton b0beab5b0c
feat: use host identifier prefix in object store paths (#25224)
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).

The persister retains the host identifier prefix, and uses it when
constructing paths.

The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.

The influxdb3 binary requires a new argument 'host-id' to be passed that
is used to specify the prefix.
2024-08-07 16:23:36 -04:00
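
A sketch of prefixed path construction; the directory layout shown is an assumption, only the host-id prefix itself is from the commit:

```rust
use object_store::path::Path;

// Every object lives under the host identifier prefix, so multiple
// hosts can share one bucket without colliding.
fn parquet_file_path(host_id: &str, db: &str, table: &str, file_id: u64) -> Path {
    Path::from(format!("{host_id}/dbs/{db}/{table}/{file_id}.parquet"))
}

fn catalog_path(host_id: &str, sequence: u64) -> Path {
    Path::from(format!("{host_id}/catalogs/{sequence}.json"))
}
```
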
Paul Dix 43877beb15
fix: query bugs with buffer (#25213)
* fix: query bugs with buffer

This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.

I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining with different states: only data in buffer, only data in parquet files, and data across both. This showed two bugs: one where the parquet data was being doubled up (parquet chunks were being created in write buffer mod and in queryable buffer). The second was that the timestamp min/max on the table buffer would panic if the buffer was empty.

* refactor: PR feedback

* fix: fix wal replay and buffer snapshot

Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay wal data and that snapshots work properly with replayed data.

* fix: run cargo update to fix audit
2024-08-07 16:00:17 -04:00
Michael Gattozzi 29d3a28a9c
fix: make ParquetChunk fields and mod chunk pub (#25219)
* fix: make ParquetChunk fields and mod chunk pub

This doesn't affect anything in the OSS version, but these changes are
needed for Pro as part of our compactor work.

* fix: cargo deny failure
2024-08-06 15:07:14 -04:00
Paul Dix 6aa6d924c6
fix: wal skip persist and notify if empty buffer (#25211)
* fix: wal skip persist and notify if empty buffer

This fixes the WAL so that it will skip persisting a file and notifying the file notifier if the wal buffer is empty.

* fix: fix last cache persist test
2024-08-05 18:08:11 -04:00
Paul Dix 2b8fc7b44e
refactor: Move Catalog into influxdb3_catalog crate (#25210)
* refactor: Move Catalog into influxdb3_catalog crate

This moves the catalog and its serialization logic into its own crate. This is a precursor to recording more catalog modifications into the WAL.

Fixes #25204

* fix: cargo update

* fix: add version = 2 to deny.toml

* fix: update deny.toml

* fix: add CCO to deny.toml
2024-08-02 16:04:12 -04:00
Paul Dix 3265960010
refactor: implement new wal and refactor write buffer (#25196)
* feat: refactor WAL and WriteBuffer

There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.

The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.

The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).

The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is being written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.

Overall, this structure of a single wal, with flushes and snapshots and chunks in the queryable buffer led to a simpler setup for the write buffer overall. I was able to clear out quite a bit of code related to the old segment organization.

Fixes #25142 and fixes #25173

* refactor: address PR feedback

* refactor: wal to replay and background flush on new

* chore: remove stray println
2024-08-01 15:04:15 -04:00
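
The default numbers above, expressed as a sketch: at a 1s flush interval, 900 WAL files is roughly 15 minutes of data, and snapshotting the oldest 600 persists a 10 minute chunk:

```rust
const SNAPSHOT_SIZE: usize = 600;    // WAL files persisted per snapshot
const SNAPSHOT_TRIGGER: usize = 900; // total WAL files that force one

// Number of (oldest-first) WAL files to include in the next snapshot.
fn wal_files_to_snapshot(wal_file_count: usize) -> usize {
    if wal_file_count >= SNAPSHOT_TRIGGER {
        SNAPSHOT_SIZE
    } else {
        0
    }
}
```
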
Trevor Hilton 8c1a1418b2
test: add a test to check last cache init from catalog (#25192) 2024-07-26 09:58:02 -04:00
Trevor Hilton 10dd22b6de
fix: last cache catalog configuration tracks explicit vs. non-explicit value columns (#25185)
* fix: catalog support for last caches that accept new fields

Last cache definitions in the catalog were augmented to either store an
explicit set of column names (including time), or to accept new fields.

This will allow these caches to be loaded properly on server restart such
that all non-key columns are cached.

* refactor: use tagged serialization for last cache values def

This also updated the client code to accept the new structure in
influxdb3_client.

* test: add e2e tests to catch regressions in influxdb3_client

* chore: cargo update for audit
2024-07-24 11:00:40 -04:00
Trevor Hilton dfecf570e6
feat: support `!=`, `IN`, and `NOT IN` predicates in last cache queries (#25175)
Part of #25174

This PR adds support for three more predicate types when querying the last cache: !=, IN, and NOT IN. Previously only = was supported.

Existing tests were extended to check that these predicate types work as expected, both in the last_cache module and in the influxdb3_server crate. The latter was important to ensure that the new predicate logic works in the context of actual query parsing/execution.
2024-07-23 14:17:09 -04:00
Trevor Hilton 7a7db7d529
feat: connect `LastCacheProvider` with catalog at runtime (#25170)
Closes #25169

This PR ensures the last cache configuration is persisted to the catalog when last caches are created, and removed from the catalog when they are deleted. The last cache is initialized on server start from the catalog.

A new trait was added to the write buffer: LastCacheManager, which provides the methods to create and delete last caches (and which is invoked from the HTTP API). Both create/delete methods will update the catalog, but also force persistence of the catalog to object store, vs. waiting for the WAL flush interval / segment persistence process to do it. This should ensure that the catalog is up-to-date with respect to the last cache configuration, in the event that the server is stopped before segment persistence.

A test was added to check this behaviour in influxdb3_write/src/write_buffer/mod.rs.
2024-07-23 12:41:42 -04:00
Trevor Hilton 7752d03a79
feat: `last_caches` system table (#25166)
Added a new system table, system.last_caches, to enable queries that display information about last caches in a database.

You can query the table like so:

SELECT * FROM system.last_caches

Since queries are scoped to a database, this will only show last caches configured for the database being queried.

Results look like so:

+-------+----------------+----------------+---------------+-------+-----+
| table | name           | key_columns    | value_columns | count | ttl |
+-------+----------------+----------------+---------------+-------+-----+
| mem   | mem_last_cache | [host, region] | [time, usage] | 1     | 60  |
+-------+----------------+----------------+---------------+-------+-----+

An end-to-end test was added to verify queries to the system.last_caches table.
2024-07-17 09:14:51 -04:00
Trevor Hilton e8d9b02818
feat: `DELETE` last cache API (#25162)
Adds an API for deleting last caches.
- The API allows parameters to be passed in either the request URI query string, or in the body as JSON
- Some additional error modes were handled, specifically, for better HTTP status code responses, e.g., invalid content type is now a 415, URL query string parsing errors are now 400
- An end-to-end test was added to check behaviour of the API
2024-07-16 10:57:48 -04:00
Trevor Hilton 56488592db
feat: API to create last caches (#25147)
Closes #25096

- Adds a new HTTP API that allows the creation of a last cache, see the issue for details
- An E2E test was added to check success/failure behaviour of the API
- Adds the mime crate, for parsing request MIME types, but this is only used in the code I added - we may adopt it in other APIs / parts of the HTTP server in future PRs
2024-07-16 10:32:26 -04:00
Trevor Hilton 0279461738
feat: hook up last cache to query executor using DataFusion traits (#25143)
* feat: impl datafusion traits on last cache

Created a new module for the DataFusion table function implementations.
The TableProvider impl for LastCache was moved there, and new code that
implements the TableFunctionImpl trait to make the last cache queryable
was also written.

The LastCacheProvider and LastCache were augmented to make this work:
- The provider stores an Arc<LastCache> instead of a LastCache
- The LastCache uses interior mutability via an RwLock, to make the above
  possible.

* feat: register last_cache UDTF on query context

* refactor: make server accept listener instead of socket addr

The server used to accept a socket address and bind it directly, returning
error if the bind fails.

This commit changes that so the ServerBuilder accepts a TcpListener. The
behaviour is essentially the same, but this allows us to bind the address
from tests when instantiating the server, so we can easily assign unused
ports.

Tests in the influxdb3_server were updated to exploit this in order to
use port 0 auto assignment and stop flaky test failures.

A new, failing, test was also added to that module for the last cache.

* refactor: naive implementation of last cache key columns

Committing here as the last cache is in a working state, but it is naively
implemented as it just stores all key columns again (still with the hierarchy)

* refactor: make the last cache work with the query executor

* chore: fix my own feedback and appease clippy

* refactor: remove lower lock in last cache

* chore: cargo update

* refactor: rename function

* fix: broken doc comment
2024-07-16 10:10:47 -04:00
Trevor Hilton 0b8fbf456c
refactor: improvements to last cache implementation (#25133)
* refactor: make cache creation more idempotent

Last cache creation is now idempotent: if a cache is created, and then
an attempt is made to create it again with the same parameters, it will
not result in an error.

* refactor: only store a single buffer of Instants

The last cache column buffers were storing an instant next to each
buffered value, which is unnecessary and not space efficient. This makes
it so the LastCacheStore holds a single buffer of Instants and manages
TTLs using that.

* refactor: clean up derelict cache members on eviction
2024-07-10 13:45:24 -04:00
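
A hedged sketch of the single-buffer-of-Instants layout; `LastCacheStore` is from the commit, the field names and eviction direction are assumptions:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

struct LastCacheStore {
    // One Instant per buffered row, shared by all column buffers,
    // instead of one Instant stored next to every buffered value.
    instants: VecDeque<Instant>, // newest at the front
    ttl: Duration,
    // ... one ring buffer per column, aligned with `instants` ...
}

impl LastCacheStore {
    // Drop rows older than the TTL; the per-column buffers pop the
    // same number of entries to stay aligned (elided here).
    fn remove_expired(&mut self, now: Instant) {
        while let Some(oldest) = self.instants.back() {
            if now.duration_since(*oldest) > self.ttl {
                self.instants.pop_back();
            } else {
                break;
            }
        }
    }
}
```
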
Trevor Hilton 8fd50cefe1
chore: sync latest core (#25138)
* chore: sync latest core

* chore: clippy
2024-07-10 12:25:09 -04:00
Trevor Hilton 2609b590c9
feat: support addition of newly written columns to last cache (#25125)
* feat: support last caches that can add new fields

* feat: support new values in last cache

Support the addition of new fields to the last cache, for caches that do
not have a specified set of value columns.

A test was added along with the changes.

* chore: clippy

* docs: add comments throughout new last cache code

* fix: last cache schema merging when new fields added

* refactor: use outer schema for RecordBatch production

Enabling addition of new fields to a last cache made the insertion order
guarantee of the IndexMap break down. It could not be relied upon anymore
so this commit removes reference to that fact, despite still using the
IndexMap type, and strips out the schema from the inner LastCacheStore
type of the LastCache.

Now, the outer LastCache schema is relied on for producing RecordBatches,
which requires a lookup to the inner LastCacheStore's HashMap for each
field in the schema. This may not be as convenient as iterating over the
map as before, but trying to manage the disparate schema, and maintaining
the map ordering was making the code too complicated. This seems like a
reasonable compromise for now, until we see the need to optimize.

The IndexMap is still used for its fast iteration and lookup
characteristics.

The test that checks for new field ordering behaviour was modified to be
correct.

* refactor: use a hash set instead of scanning the entire row on each push

Some renaming of variables was done to clarify meaning as well.
2024-07-09 16:35:27 -04:00
Trevor Hilton 53e5c5f5c5
feat: last cache implementation (#25109)
* feat: base for last cache implementation

Each last cache holds a ring buffer for each column in an index map, which
preserves the insertion order for faster record batch production.

The ring buffer uses a custom type to handle the different supported
data types that we can have in the system.

* feat: implement last cache provider

LastCacheProvider is the API used to create last caches and write
table batches to them. It uses a two-layer RwLock/HashMap: the first for
the database, and the second layer for the table within the database.

This allows for table-level locks when writing in buffered data, and only
gets a database-level lock when creating a cache (and in future, when
removing them as well).

* test: APIs on write buffer and test for last cache

Added basic APIs on the write buffer to access the last cache and then a
test to the last_cache module to see that it works with a simple example

* docs: add some doc comments to last_cache

* chore: clippy

* chore: one small comment on IndexMap

* chore: clean up some stale comments

* refactor: part of PR feedback

Addressed three parts of PR feedback:

1. Remove double-lock on cache map
2. Re-order the get when writing to the cache to be outside the loop
3. Move the time check into the cache itself

* refactor: nest cache by key columns

This refactors the last cache to use a nested caching structure, where
the key columns for a given cache are used to create a hierarchy of
nested maps, terminating in the actual store for the values in the cache.

Access to the cache is done via a set of predicates which can optionally
specify the key column values at any level in the cache hierarchy to only
gather record batches from children of that node in the cache.

Some todos:
- Need to handle the TTL
- Need to move the TableProvider impl up to the LastCache type

* refactor: TableProvider impl to LastCache

This re-writes the datafusion TableProvider implementation on the correct
type, i.e., the LastCache, and adds conversion from the filter Expr's to
the Predicate type for the cache.

* feat: support TTL in last cache

Last caches will have expired entries walked when writes come in.

* refactor: add panic when unexpected predicate used

* refactor: small naming convention change

* refactor: include keys in query results and no null keys

Changed key columns so that they do not accept null values, i.e., rows
that are pushed that are missing key column values will be ignored.

When producing record batches for a cache, if not all key columns are
used in the predicate, then this change makes it so that the non-predicate
key columns are produced as columns in the outputted record batches.

A test with a few cases showing this was added.

* fix: last cache key column query output

Ensure key columns in the last cache that are not included in the
predicate are emitted in the RecordBatches as a column.

Cleaned up and added comments to the new test.

* chore: clippy and some un-needed code

* fix: clean up some logic errors in last_cache

* test: add tests for non default cache size and TTL

Added two tests, as per commit title. Also moved the eviction process
to a separate function so that it was not being done on every write to
the cache, which could be expensive, and this ensures that entries are
evicted regardless of whether writes are coming in or not.

* test: add invalid predicate test cases to last_cache

* test: last_cache with field key columns

* test: last_cache uses series key for default keys

* test: last_cache uses tag set as default keys

* docs: add doc comments to last_cache

* fix: logic error in last cache creation

CacheAlreadyExists errors were only being based on the database and
table names, and not including the cache names, which was not
correct.

* docs: add some comments to last cache create fn

* feat: support null values in last cache

This also adds explicit support for series key columns to distinguish
them from normal tags in terms of nullability

A test was added to check nulls work

* fix: reset last cache last time when ttl evicts all data
2024-07-09 15:22:04 -04:00
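
A hedged sketch of the nested structure described in the "nest cache by key columns" commit above; the enum shape is an assumption:

```rust
use std::collections::HashMap;

// Leaf store: ring buffers of the last N values per value column.
struct LastCacheStore { /* elided */ }

// One nesting level per key column. A predicate on a key column walks
// only the matching branch; otherwise all children are visited when
// producing record batches.
enum LastCacheState {
    Key(HashMap<String, LastCacheState>),
    Store(LastCacheStore),
}
```
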
Paul Dix b30729e36a
refactor: move persisted files out of segment state (#25108)
* refactor: move persisted files out of segment state

This refactors persisted parquet files out of the SegmentState into a new struct, PersistedParquetFiles. The intention is to have SegmentState be only for the active write buffer that has yet to be persisted to Parquet files in object storage.

Persisted files will then be accessible throughout the system without having to touch the active in-flight write buffer.

* refactor: pr feedback cleanup
2024-06-27 11:46:03 -04:00
Trevor Hilton aa28302cdd
feat: store last cache config in the catalog (#25104)
* feat: store last cache info in the catalog

* test: test series key in catalog serialization

* test: add test for last cache catalog serialization

* chore: cargo update

* chore: remove outdated snapshot
2024-06-26 14:19:48 -04:00
Lorrens Pantelis 8b6c2a3b3d
refactor: Replace use of `std::HashMap` with `hashbrown::HashMap` (#25094)
* refactor: use hashbrown with entry_ref api

* refactor: use hashbrown hashmap instead of std hashmap in places that would benefit from the `entry_ref` API

* chore: Cargo update to pass CI
2024-06-26 12:43:35 -04:00
Paul Dix 2ddef9f8da
feat: track buffer memory usage and persist (#25074)
* feat: track buffer memory usage and persist

This is a bit light on the test coverage, but I expect there is going to be some big refactoring coming to segment state and some of these other pieces that track parquet files in the system. However, I wanted to get this in so that we can keep things moving along. Big changes here:

* Create a persister module in the write_buffer
* Check the size of the buffer (all open segments) every 10s and predict its size in 5 minutes based on growth rate
* If the projected growth rate is over the configured limit, either close segments that haven't received writes in a minute, or persist the largest tables (oldest 90% of their data)
* Added functions to table buffer to split a table based on 90% older timestamp data and 10% newer timestamp data, to persist the old and keep the new in memory
* When persisting, write the information in the WAL
* When replaying from the WAL, clear out the buffer of the persisted data
* Updated the object store path for persisted parquet files in a segment to have a file number since we can now have multiple parquet files per segment

* refactor: PR feedback
2024-06-25 10:10:37 -04:00
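
A sketch of the projection arithmetic described above (check every 10s, project 5 minutes out); the linear-growth assumption is from the commit, the function shapes are not:

```rust
// 5 minutes = 30 ten-second check intervals.
fn projected_size(size_now: u64, size_10s_ago: u64) -> u64 {
    let growth_per_interval = size_now.saturating_sub(size_10s_ago);
    size_now.saturating_add(growth_per_interval.saturating_mul(30))
}

// If the projection exceeds the limit, free memory by closing idle
// segments or persisting the oldest 90% of the largest tables.
fn must_free_memory(size_now: u64, size_10s_ago: u64, limit: u64) -> bool {
    projected_size(size_now, size_10s_ago) > limit
}
```
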
Trevor Hilton 5cb7874b2c
feat: v3 write API with series key (#25066)
Introduce the experimental series key feature to monolith, along with the new `/api/v3/write` API which accepts the new line protocol to write to tables containing a series key.

Series key
* The series key is supported in the `schema::Schema` type by the addition of a metadata entry that stores the series key members in their correct order. Writes that are received to `v3` tables must have the same series key for every single write.

Series key columns are `NOT NULL`
* Nullability of columns is enforced in the core `schema` crate based on a column's membership in the series key. So, when building a `schema::Schema` using `schema::SchemaBuilder`, the arrow `Field`s that are injected into the schema will have `nullable` set to false for columns that are part of the series key, as well as the `time` column.
* The `NOT NULL` _constraint_, if you can call it that, is enforced in the buffer (see [here](https://github.com/influxdata/influxdb/pull/25066/files#diff-d70ef3dece149f3742ff6e164af17f6601c5a7818e31b0e3b27c3f83dcd7f199R102-R119)) by ensuring there are no gaps in data buffered for series key columns.

Series key columns are still tags
* Columns in the series key are annotated as tags in the arrow schema, which for now means that they are stored as Dictionaries. This was done to avoid having to support a new column type for series key columns.

New write API
* This PR introduces the new write API, `/api/v3/write`, which accepts the new `v3` line protocol. Currently, the only part of the new line protocol proposed in https://github.com/influxdata/influxdb/issues/24979 that is supported is the series key. New data types are not yet supported for fields.

Split write paths
* To support the existing write path alongside the new write path, a new module was set up to perform validation in the `influxdb3_write` crate (`write_buffer/validator.rs`). This re-uses the existing write validation logic, and replicates it with needed changes for the new API. I refactored the validation code to use a state machine over a series of nested function calls to help distinguish the fallible validation/update steps from the infallible conversion steps.
* The code in that module could potentially be refactored to reduce code duplication.
2024-06-17 14:52:06 -04:00
Michael Gattozzi 5c146317aa
chore: Update Rust to 1.79.0 (#25061)
Fairly quiet update for us. The only change was around using the numeric
constants now built into the primitives, not the ones from `std`.

https://rust-lang.github.io/rust-clippy/master/index.html#/legacy_numeric_constants

Release post: https://blog.rust-lang.org/2024/06/13/Rust-1.79.0.html
2024-06-13 13:56:39 -04:00
Draco 84b38f5a06
fix: buffer size typo (#25039) 2024-06-07 12:48:04 -04:00
Trevor Hilton 039dea2264
refactor: add dedicated type for serializaing catalog tables (#25042)
Remove reliance on data_types::ColumnType

Introduce TableSnapshot for serializing table information in the catalog.

Remove the columns BTree from the TableDefinition and use the schema
directly. BTrees are still used to ensure column ordering when tables are
created, or columns added to existing tables.

The custom Deserialize impl on TableDefinition used to block duplicate
column definitions in the serialized data. This preserves that behaviour
using serde_with and extends it to the other types in the catalog, namely
InnerCatalog and DatabaseSchema.

The serialization test for the catalog was extended to include multiple
tables in a database and multiple columns spanning the range of available
types in each table.

Snapshot testing was introduced using the insta crate to check the
serialized JSON form of the catalog, and help catch breaking changes
when introducing features to the catalog.

Added a test that verifies the no-duplicate key rules when deserializing
the map components in the Catalog
2024-06-04 11:38:43 -04:00
Trevor Hilton faab7a0abc
fix: writes with incorrect schema should fail (#25022)
* test: add reproducer for #25006
* fix: validate schema of lines in lp and return error for invalid fields
2024-05-29 09:48:50 -04:00
Paul Dix 2ac986ae8a
feat: Add last_write_time and table buffer size (#25017)
This adds tracking of the instant of the last write to the open buffer segment, and methods to the table buffer to compute its estimated memory size.

These will be used by a background task that will continuously check to see if tables should be persisted ahead of time to free up buffer memory space.

Originally, I had hoped to have the size tracking happen as the buffer was built so that returning the size would be zero cost (i.e. just returning a value), but I found in different kinds of testing that I wasn't able to get something that was even close to accurate. So for now it will use this more expensive computed method and we'll check on this periodically (every couple of seconds) to see when to persist.
2024-05-21 10:45:35 -04:00
Michael Gattozzi 2381cc6f1d
fix: make DB Buffer use the up to date schema (#25001)
Alternate Title: The DB Schema only ever has one table

This is a story of subtle bugs, gnashing of teeth, and hair pulling.
Gather round as I tell you the tale of an Arc that pointed to an
outdated schema.

In #24954 we introduced an Index for the database as this will allow us
to perform faster queries. When we added that code this check was added:

```rust
if !self.table_buffers.contains_key(&table_name) {
    // TODO: this check shouldn't be necessary. If the table doesn't exist in the catalog
    // and we've gotten here, it means we're dropping a write.
    if let Some(table) = self.db_schema.get_table(&table_name) {
        self.table_buffers.insert(
            table_name.clone(),
            TableBuffer::new(segment_key.clone(), &table.index_columns()),
        );
    } else {
        return;
    }
}
```

Adding the return there let us continue on with our day and make the
tests pass. However, just because these tests passed didn't mean the
code was correct as I would soon find out. With a follow up ticket of
#24955 created we merged the changes and I began to debug the issue.

Note we had the assumption of dropping a single write due to limits
because the limits test is what failed. What began was a chase of a few
days to prove that the limits weren't what was failing. This was a bit
long but the conclusion was that the limits weren't causing it, but it
did expose the fact that a Database only ever had one table which was
weird.

I then began to dig into this a bit more. Why would there only be one
table? We weren't just dropping one write, we were dropping all but
*one* write or so it seemed. Many printlns/hours later it became clear
that we were actually updating the schema! It existed in the Catalog,
but not in the pointer to the schema in the DatabaseBuffer struct so
what gives?

Well we need to look at [another piece of code](8f72bf06e1/influxdb3_write/src/write_buffer/mod.rs (L540-L541)).

In the `validate_or_insert_schema_and_partitions` function for the
WriteBuffer we have this bit of code:

```rust
// The (potentially updated) DatabaseSchema to return to the caller.
let mut schema = Cow::Borrowed(schema);
```

As we pass in a reference to the schema in the catalog. However, when we
[go a bit further down](8f72bf06e1/influxdb3_write/src/write_buffer/mod.rs (L565-L568))
we see this code:

```rust
    let schema = match schema {
        Cow::Owned(s) => Some(s),
        Cow::Borrowed(_) => None,
    };
```

What this means is that if we make a change we clone the original and
update it. We *aren't* making a change to the original schema. When we
go back up the call stack we get to [this bit of code](8f72bf06e1/influxdb3_write/src/write_buffer/mod.rs (L456-L460)):

```rust
    if let Some(schema) = result.schema.take() {
        debug!("replacing schema for {:?}", schema);


        catalog.replace_database(sequence, Arc::new(schema))?;
    }
```

We are updating the catalog with the new schema, but how does that work?

```rust
        inner.databases.insert(db.name.clone(), db);
```

Oh. Oh no. We're just overwriting it. Which means that the
DatabaseBuffer has an Arc to the *old* schema, not the *new* one. Which
means that the buffer will get the first copy of the schema with the
first new table, but *none* of the other ones. The solution is to make
sure that the buffer is passed the current schema so that it can use the most
up to date version from the catalog. This commit makes those changes
to make sure it works.

This was a very very subtle mutability/pointer bug given the
intersection of valid borrow checking and some writes making it in, but
luckily we caught it. It does mean though that until this fix is in, we
can consider changes between the Index PR and now to be subtly broken, and
they shouldn't be used for anything beyond writing to a single table per DB.

TL;DR We should ask the Catalog what the schema is as it contains the up
to date version of it.

Closes #24955
2024-05-16 11:08:43 -04:00
Trevor Hilton 4901982c45
refactor: cleanup unused methods in Bufferer trait (#25012) 2024-05-16 09:34:08 -04:00
Trevor Hilton 8f72bf06e1
chore: use latest `influxdb3_core` changes (#24982)
Introduction of the `TokioDatafusionConfig` clap block for configuring the DataFusion runtime - this exposes many new `--datafusion-*` options on start, including `--datafusion-num-threads`

To accommodate renaming of `QueryNamespaceProvider` to `QueryDatabase` in `influxdb3_core`, I renamed the `QueryDatabase` type to `Database`.

Fixed tests that broke as a result of sync.
2024-05-13 12:33:50 -04:00
Michael Gattozzi 7a2867b98b
feat: Store precision in WAL for replayability (#24966)
Up to this point we assumed that the precision for everything was nanoseconds.
While we do write and persist data as nanoseconds we made this assumption for
the WAL. However, we store the original line protocol data. If we want it to be
replayable we would need to include the precision and use that when loading the
WAL from disk. This commit changes the code to do that, and we can see that the
data is definitely persisted as the WAL is now bigger in the tests.
2024-05-08 13:05:24 -04:00
Trevor Hilton 9354c22f2c
chore: remove _series_id (#24969)
Removed the _series_id column that stored a SHA256 hash of the tag set
for each write.

Updated all test assertions that made reference to it.

Corrected the limits on columns to un-account for the additional _series_id
column.
2024-05-08 12:28:49 -04:00
Paul Dix 8e79667776
feat: Implement index for buffer (#24954)
* feat: Implement index for buffer

This implements an index for the data in the table buffers. For now, by default, it indexes all tags, keeping a mapping of tag key/value pair to the row ids that it has in the buffer. When queries ask for record batches from the table buffer, the filter expression is evaluated to determine if a record batch can be built on the fly using only the row ids that match the index. If we don't have it in the index, the entire record batch from the buffer will be returned.

This also updates the logic in segment state to only request a record batch with the projection. The query executor was updated so that it pushes the filter and projection down to the request to get table chunks.

While implementing this, I believe I uncovered a bug where when limits are hit, a write still attempts to get buffered. I'll log a follow up to look at that.

* refactor: Update for PR feedback

* chore: cargo update to address deny failure
2024-05-06 12:59:50 -04:00
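
A hedged sketch of the tag index shape described above (tag key/value pair -> row ids); the names are assumptions:

```rust
use std::collections::HashMap;

// tag key -> tag value -> ids of rows carrying that pair
#[derive(Default)]
struct BufferIndex {
    columns: HashMap<String, HashMap<String, Vec<usize>>>,
}

impl BufferIndex {
    fn add_row(&mut self, row_id: usize, tags: &[(&str, &str)]) {
        for (key, value) in tags {
            self.columns
                .entry(key.to_string())
                .or_default()
                .entry(value.to_string())
                .or_default()
                .push(row_id);
        }
    }

    // Rows matching `key = value`, or None if the pair is not indexed,
    // in which case the caller falls back to the full record batch.
    fn rows_for(&self, key: &str, value: &str) -> Option<&[usize]> {
        self.columns.get(key)?.get(value).map(Vec::as_slice)
    }
}
```
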
Michael Gattozzi 7138019636
chore: Upgrade to Rust 1.78.0 (#24953)
This fixes new lints that have come up in the latest edition of clippy and moves
.cargo/config to .cargo/config.toml as the previous filename is now deprecated.
2024-05-02 13:39:20 -04:00
Michael Gattozzi 43368981c7
feat: implement parquet cache persistance (#24907)
* feat: use concrete type for Persister

Up to this point we'd been using a generic `Persister` trait, however,
in practice even for tests we only use one singular type, the
`PersisterImpl`. In order to share the `MemoryPool` between it and the
upcoming `ParquetCache` we need it to be the concrete type. This
simplifies the code to grok as well by removing unneeded generic bounds.

* fix: new_with_partition_key fn name typo

* feat: implement parquet cache persistance

* fix: incorporate feedback and don't hold across await
2024-04-29 14:34:32 -04:00
Michael Gattozzi 4afbebc73e
feat: Add and hook in an in memory Parquet cache (#24904)
This adds an in memory Parquet cache to the WriteBuffer. With this we
now have a cache that Parquet files will be queried from when a query
does come in. Note this change *does not* actually let us persist any
data. This merely adds the cache. Future changes will add the ability
to cache the data as well as the logic around what should be cached.

As this doesn't allow any data to be cached or queried a test has not
been added at this time, but will in future PRs.
2024-04-10 15:02:03 -04:00
Trevor Hilton 557b939b15
refactor: make end argument common to query and write load generation (#24881)
* refactor: make end common to the load generation tool

Made the --end argument common to both the query and write load generation
runners.

A panic message was also added in the table buffer where unwraps were
causing panics

* refactor: load gen print statements for consistency
2024-04-04 10:13:08 -04:00
Michael Gattozzi 2291ebeae7
feat: sort and dedupe on persist (#24870)
When persisting parquet files we now will sort and dedupe on persist using the
COMPACT operation implemented in IOx Query. Note that right now we don't choose
any column to sort on and default to no column. This means that we dedupe and
sort on whatever the default behavior is for the COMPACT operation. Future
changes can figure out what columns to sort by when compacting the data.
2024-04-03 15:13:36 -04:00
Trevor Hilton cc55685886
feat: improved results directory structure for load generation (#24869)
* feat: add new clap args for results gen

Added the results_dir and configuration_name args
to the common load generator config which will be
used in generating the results directory structure.

* feat: load gen results directory structure

Write and query load generation runners will now setup files in a
results directory, using a specific structure. Users of the load tool
can specify a `results_dir` to save these results, or the tool will
pick a `results` folder in the current directory, by default.

Results will be saved in files using the following path convention:

results/<s>/<c>/<write|query|system>_<time>.csv

- <s>: spec name
- <c>: configuration name, specified by user with the `config-name`
  arg, or by default, will use the revision SHA of the running server
- <write|query|system>: which kind of results file
- <time>: a timestamp in the form 'YYYY-MM-DD-HH-MM'

The setup code was unified for both write and query commands, in
preparation for the creation of a system stats file, as well as for
the capability to run both query and write at the same time, however,
those remain unimplemented as of this commit.

* feat: /ping API support on influxdb3_client::Client
2024-04-02 14:06:51 -04:00
Trevor Hilton e0465843be
feat: `/ping` API to serve version and revision (#24864)
* feat: /ping API to serve version

The /ping API was added, which is served at GET and
POST methods. The API responds with a JSON body
containing the version and revision of the build.

A new crate was added, influxdb3_process, which
takes the process_info.rs module from the influxdb3
crate, and puts it in a separate crate so that other
crates (influxdb3_server) can depend on it. This was
needed in order to have access to the version and
revision values, which are generated at build time,
in the HTTP API code of influxdb3_server.

A E2E test was added to check that /ping works.

E2E TestServer can now have logs emitted using the
TEST_LOG environment variable.
2024-04-01 16:57:10 -04:00
Paul Dix 696456b280
refactor: buffer using Arrow builders (#24853)
* refactor: Buffer to use Arrow builders

This refactors the TableBuffer to use the Arrow builders for the data. This also removes cloning from the table buffer in favor of yielding record batches. This is part of a test to see if querying the buffer will be faster with this method avoiding a bunch of data copies.

* fix: adding columns when data is in buffer

This fixes a bug where the Arrow schema in the Catalog wouldn't get updated when columns are added to a table. Also fixes bug in the buffer where a new column wouldn't have the correct number of rows in it (now fixed by adding in nulls for previous rows).

* refactor: PR feedback in buffer_segment
2024-03-29 15:29:00 -04:00
Trevor Hilton 7784749bca
feat: support v1 and v2 write APIs (#24793)
feat: support v1 and v2 write APIs

This adds support for two APIs: /write and /api/v2/write. These implement the v1 and v2 write APIs, respectively. In general, the difference between these and the new /api/v3/write_lp API is in the request parsing. We leverage the WriteRequestUnifier trait from influxdb3_core to handle parsing of v1 and v2 HTTP requests, to keep the error handling at that level consistent with distributed versions of InfluxDB 3.0. Specifically, we use the SingleTenantRequestUnifier implementation of the trait.

Changes:
- Addition of two new routes to the route_request method in influxdb3_server::http to serve /write and /api/v2/write requests.
- Database name validation was updated to handle cases where retention policies may be passed in /write requests, and to also reject empty names. A unit test was added to verify the validate_db_name function.
- HTTP request authorization in the router will extract the full Authorization header value, and store it in the request extensions; this is used in the write request parsing from the core iox_http crate to authorize write requests.
- E2E tests to verify correct HTTP request parsing / response behaviour for both /write and /api/v2/write APIs
- E2E tests to check that data sent in through /write and /api/v2/write can be queried back
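
A hedged sketch of a v2-style write from Rust; the `bucket` parameter and token scheme follow the standard v2 API, and the port and credentials are placeholders, not verified defaults for this server:

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let resp = reqwest::Client::new()
        .post("http://localhost:8181/api/v2/write")
        .query(&[("bucket", "mydb")]) // assumed v2-style parameter
        .header("Authorization", "Token my-token") // placeholder token
        .body("cpu,host=a usage=0.5 1700000000000000000")
        .send()
        .await?;
    println!("status: {}", resp.status());
    Ok(())
}
```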
2024-03-28 13:33:17 -04:00
Trevor Hilton c79821b246
feat: add `_series_id` to tables on write (#24842)
feat: add _series_id to tables on write

A new _series_id column is added to tables; it stores a 32-byte SHA256 hash of the tag set of a line of Line Protocol. The tag set is checked for sort order, and sorted if not already, before producing the hash.

Unit tests were added to check hashing and sorting functions work.

Tests that performed queries needed to be modified to account for the new _series_id column; in general, SELECT * queries were altered to use a select clause with specific column names.

The column limit was increased to 501 internally to account for the new _series_id column, but the user-facing limit is still 500.
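
A minimal sketch of the hashing scheme described: sort the tag set if it isn't already sorted, then produce a 32-byte SHA-256 digest. The `sha2` crate is assumed, and the key/value encoding fed to the hasher is illustrative.

```rust
use sha2::{Digest, Sha256};

/// Hash a tag set into a 32-byte series ID, sorting first if needed.
fn series_id(mut tags: Vec<(String, String)>) -> [u8; 32] {
    // only sort when the tag set is not already in key order
    if !tags.windows(2).all(|w| w[0].0 <= w[1].0) {
        tags.sort_by(|a, b| a.0.cmp(&b.0));
    }
    let mut hasher = Sha256::new();
    for (key, value) in &tags {
        hasher.update(key.as_bytes());
        hasher.update(b"=");
        hasher.update(value.as_bytes());
        hasher.update(b",");
    }
    hasher.finalize().into()
}
```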
2024-03-26 15:22:19 -04:00
Paul Dix 12636ca759
fix: loader error with single wal file (#24814)
Fixes a bug where the loader would error out if there was a WAL segment file for a previous segment that hadn't been persisted, and a new WAL file had to be created for the new open segment. This would show up as an error if you started the server and then stopped and restarted it without writing any data.
2024-03-25 15:40:21 -04:00
Paul Dix 04b9cf6cc3
fix: catalog persist with new segment (#24813)
When a write comes into the buffer that both updates the catalog and creates a new segment, it would create that segment with a catalog sequence number matching the state after the catalog modification. The result is that when the segment is persisted, the catalog won't be persisted, because it wasn't viewed as having been updated. This fixes that.
2024-03-25 15:18:43 -04:00
BiKangNing 67cce99df7
chore: fix some typos (#24803)
Signed-off-by: depthlending <bikangning@outlook.com>
2024-03-22 09:32:37 -04:00
Michael Gattozzi a2984cdc17
chore: Update to Rust 1.77.0 (#24800)
* chore: Update to Rust 1.77.0

This is a fairly quiet upgrade. The only changes are some lints around
`OpenOptions` that were added to clippy between 1.75 and this version;
they're small changes that either remove unnecessary function calls
or add a needed function call.
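
For illustration, a hedged sketch of the kind of change such a lint asks for; the lint name (clippy's `suspicious_open_options`) and the file name are our assumptions, not taken from the commit:

```rust
use std::fs::{File, OpenOptions};

// Under the newer clippy lint, combining create(true) and write(true)
// without stating truncation behavior is flagged; the fix is to add the
// needed call explicitly.
fn open_results_file() -> std::io::Result<File> {
    OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true) // explicit, rather than relying on a default
        .open("results.csv") // placeholder path
}
```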

* fix: cargo-deny by using the --locked flag
2024-03-21 13:00:15 -04:00
Trevor Hilton caae9ca9f2
chore: `influxdb3_core` update (#24798)
chore: sync in latest core changes
2024-03-21 10:29:56 -04:00
Paul Dix 01d33f69b5
feat: wire up query from parquet files (#24749)
* feat: wire up query from parquet files

This adds the functionality to query from Parquet files that have been persisted in object storage. Any segments that are loaded up on boot will be included (limit of 1k segments at the time of this PR). In a follow-on PR we should add a good end-to-end test that exercises persistence and query through the main API (might be tricky).

* Move BufferChunk and ParquetChunk into chunk module
* Add object_store_url to Persister
* Register object_store on server startup (see the sketch after this list)
* Add loaded persisted_segments to SegmentState

* refactor: PR feedback
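
A hedged sketch of the registration step flagged in the list above: an object store is registered with the DataFusion runtime so persisted parquet files can be read at query time. The URL scheme and in-memory store are stand-ins for the real object storage, and the exact DataFusion API may differ by version.

```rust
use std::sync::Arc;
use datafusion::prelude::SessionContext;
use object_store::memory::InMemory;
use url::Url;

fn register_store(ctx: &SessionContext) {
    // an in-memory store stands in for the real object storage
    let url = Url::parse("memory://data").expect("valid url");
    ctx.runtime_env()
        .register_object_store(&url, Arc::new(InMemory::new()));
}
```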
2024-03-12 09:47:32 -04:00
Paul Dix db77ed0a19
feat: Implement automatic segment persistence (#24747)
This implements automatic segment persistence and cleanup of the WAL files. Every second the write buffer checks for, and persists, segments that have been open for longer than half the segment duration and that are not in the current or next block of time.

One thing left to do is to deal with blocks of time that have had multiple segments persisted in them. This will be addressed in a follow-on PR.

Specific updates:
* Update Persister persist_segment to take a borrow
* Move SegmentState into its own module
* Create functions to close open segments and persist them when time
* Add tokio task to check every second to see if segments should be persisted
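
A minimal sketch of that once-per-second check, assuming a stand-in `WriteBuffer` type; the real persistence and WAL-cleanup logic lives in the write buffer and persister.

```rust
use std::sync::Arc;
use std::time::Duration;

struct WriteBuffer; // stand-in for the real write buffer type

impl WriteBuffer {
    /// The real check selects segments open longer than half the segment
    /// duration and outside the current/next blocks of time (elided here).
    fn segments_ready_to_persist(&self) -> Vec<u64> {
        Vec::new()
    }

    async fn persist_segment(&self, _segment_id: u64) {
        // the real code persists the segment and cleans up its WAL files
    }
}

async fn run_persist_loop(buffer: Arc<WriteBuffer>) {
    let mut ticker = tokio::time::interval(Duration::from_secs(1));
    loop {
        ticker.tick().await;
        for id in buffer.segments_ready_to_persist() {
            buffer.persist_segment(id).await;
        }
    }
}
```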
2024-03-11 15:10:18 -04:00
Paul Dix bf931970d3
feat: Segment the write buffer on time (#24745)
* Split WriteBuffer into segments

* Add SegmentRange and SegmentDuration
* Update WAL to store SegmentRange and to be able to open up multiple ranges
* Remove Partitioner and PartitionBuffer

* Update SegmentState and loader

* Update SegmentState with current, next and outside
* Update loader and tests to load up current, next and previous outside segments based on the passed in time and desired segment duration

* Update WriteBufferImpl and Flusher

* Update the flusher to flush to multiple segments
* Update WriteBufferImpl to split data into segments getting written to
* Update HTTP and WriteBuffer to use TimeProvider

* Wire up outside segment writes and loading

* Data outside the current and next segments no longer goes to a single segment, but to a segment based on that data's time. This limits writes to 100 segments of time at any given moment.

* Refactor SegmentDuration add config option

* Refactors SegmentDuration to be a new type over duration
* Adds the clap block configuration to pass SegmentDuration, defaulting to 1h
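
A minimal sketch of the newtype described in this bullet; the bucketing helper is illustrative and assumes non-negative timestamps.

```rust
use std::time::Duration;

/// Newtype over Duration, as described above.
#[derive(Debug, Clone, Copy)]
struct SegmentDuration(Duration);

impl SegmentDuration {
    /// The configured default of 1h.
    fn default_1h() -> Self {
        Self(Duration::from_secs(60 * 60))
    }

    /// Start (in nanoseconds) of the segment containing `time_ns`.
    fn segment_start(&self, time_ns: i64) -> i64 {
        let len = self.0.as_nanos() as i64;
        (time_ns / len) * len
    }
}
```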

* refactor: SegmentState and loader

* remove the current_segment and next_segment from the loader and segment state, instead having just a collection of segments
* open up only the current_segment by default
* keep current and next segments open if they exist, while others go into persisting or persisted

* fix: cargo audit

* refactor: fixup PR feedback
2024-03-11 13:54:09 -04:00
Michael Gattozzi a5082ec432
feat: Add limits for InfluxDB Edge (#24703)
This commit is the final piece for the write_lp endpoint. It adds limits
to Edge such that:

- There can only be 5 Databases
- There can only be 500 Columns per Table
- There can only be 2000 Tables across all Databases

We do this by modifying the catalog code to error out whenever one of
these limits would be exceeded before permanently modifying the schema.
These are hard-coded limits and cannot be configured by the user.
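
A minimal sketch of such a pre-modification check, using the database limit as the example; the `Catalog` shape and error type are stand-ins, not the actual catalog code.

```rust
use std::collections::BTreeMap;

const MAX_DATABASES: usize = 5; // hard-coded limit from the text above

#[derive(Debug)]
struct LimitExceeded(&'static str);

#[derive(Default)]
struct Catalog {
    databases: BTreeMap<String, ()>, // stand-in for the real schema
}

impl Catalog {
    /// Error out before permanently modifying the schema.
    fn create_database(&mut self, name: &str) -> Result<(), LimitExceeded> {
        if !self.databases.contains_key(name) && self.databases.len() >= MAX_DATABASES {
            return Err(LimitExceeded("database limit (5) exceeded"));
        }
        self.databases.insert(name.to_string(), ());
        Ok(())
    }
}
```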

Closes #24554
2024-03-04 10:24:33 -05:00
Trevor Hilton f7892ebee5
feat: add the `api/v3/query_influxql` API (#24696)
feat: add query_influxql api

This PR adds support for the /api/v3/query_influxql API. This re-uses code from the existing query_sql API, but some refactoring was done to allow for code re-use between the two.

The main change to the original code from the existing query_sql API was that the format is determined up front, in the event that the user provides an incorrect Accept header, so that a 400 BAD REQUEST is returned before performing the query.

Support of several InfluxQL queries that previously required a bridge to be executed in 3.0 was added:

SHOW MEASUREMENTS
SHOW TAG KEYS
SHOW TAG VALUES
SHOW FIELD KEYS
SHOW DATABASES

Handling of qualified measurement names in SELECT queries (see below)

This is accomplished with the newly added iox_query_influxql_rewrite crate, which provides the means to re-write an InfluxQL statement to strip out a database name and retention policy, if provided. Doing so allows the query_influxql API to treat the database parameter as optional, since it may be provided in the query string.

Handling qualified measurement names in SELECT

The implementation in this PR will inspect all measurements provided in a FROM clause and extract the database (DB) name and retention policy (RP) name (if not the default). If multiple DB/RPs are provided, an error is thrown.
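
A hedged sketch of the name-splitting idea on raw strings; the real rewrite in iox_query_influxql_rewrite operates on parsed InfluxQL statements, not string manipulation.

```rust
/// Split "db.rp.measurement", "db.measurement", or "measurement"
/// into its optional DB, optional RP, and measurement parts.
fn split_qualified(name: &str) -> (Option<&str>, Option<&str>, &str) {
    let parts: Vec<&str> = name.splitn(3, '.').collect();
    match parts.as_slice() {
        &[m] => (None, None, m),
        &[db, m] => (Some(db), None, m),
        &[db, rp, m] => (Some(db), Some(rp), m),
        _ => unreachable!("splitn(3) yields one to three parts"),
    }
}
```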

Testing

E2E tests were added for performing basic queries against a running server on both the query_sql and query_influxql APIs. In addition, the test for query_influxql includes some of the InfluxQL-specific queries, e.g., SHOW MEASUREMENTS.

Other Changes

The influxdb3_client now has the api_v3_query_influxql method (and a basic test was added for this)
2024-03-01 12:27:38 -05:00
Michael Gattozzi 73e261c021
feat: Split out shared core crates from Edge (#24714)
This commit is a major refactor of the code base. It mainly does four
things:

1. Splits code shared between the internal IOx repository and this one
   into its own repo over at https://github.com/influxdata/influxdb3_core
2. Removes any docs or anything else that did not relate to this project
3. Reorganizes the Cargo.toml files to use the top level Cargo.toml to
   declare dependencies and versions to keep all crates in sync and sets
   all others to use `<dep>.workspace = true` unless it's an optional
   dependency
4. Set the top level Cargo.toml to point to the core crates as git
   dependencies

With this, any changes specific to Edge will be contained here, updating
deps will be a PR over in `influxdata/influxdb3_core`, and we can prove
out the viability of this model for use with IOx.
2024-02-29 16:21:41 -05:00
Paul Dix 2da5803bfd
feat: implement loader for persisted state (#24705)
* fix: persister loading with no segments

Fixes a bug where the persister would throw an error if attempting to load segments when none had been persisted.

Moved persister tests into tests block.

* feat: implement loader for persisted state

This implements a loader for the write buffer. It loads the catalog and the buffer from the WAL.

Move Persister errors into their own type now that the write buffer load could return errors from the persister.

This doesn't yet rotate segments or trigger persistence of newly closed segments, which will be addressed in a future PR.

* fix: cargo update to fix audit

* refactor: add error type to persister trait

* refactor: use generics instead of dyn

---------

Co-authored-by: Trevor Hilton <thilton@influxdata.com>
2024-02-29 15:58:19 -05:00
Michael Gattozzi 8fec1d636e
feat: Add write_lp partial write, name check, and precision (#24677)
* feat: Add partial write and name check to write_lp

This commit adds new behavior to the v3 write_lp HTTP endpoint by
implementing both partial writes and checking the db name for validity.
It also makes partial writes the default behavior now, whereas
before we would reject the entire request if one line was incorrect.
Users who *do* actually want that behavior can now opt in by putting
'accept_partial=false' into the URL of the request.

We also check that the db name used in the request contains only
numbers, letters, underscores, and hyphens, and that it starts with
either a number or a letter.
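
A minimal sketch of that rule as a standalone check; the real implementation's error handling and exact character set may differ.

```rust
fn validate_db_name(name: &str) -> bool {
    let mut chars = name.chars();
    let Some(first) = chars.next() else {
        return false; // empty names are rejected
    };
    if !first.is_ascii_alphanumeric() {
        return false; // must start with a number or letter
    }
    chars.all(|c| c.is_ascii_alphanumeric() || c == '_' || c == '-')
}
```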

We also introduce a more standardized way to return errors to the user
as JSON that we can expand over time to give actionable error messages
to the user that they can use to fix their requests.

Finally tests have been included to mock out and test the behavior for
all of the above so that changes to the error messages are reflected in
tests, that both partial and not partial writes work as expected, and
that invalid db names are rejected without writing.

* feat: Add precision to write_lp http endpoint

This commit adds the ability to control the precision of the timestamp
passed in to the endpoint. For example, if a user chooses 'second' and
the timestamp 20, that will be interpreted as 20 seconds past the Unix
Epoch. If they choose 'millisecond' instead, it will be 20 milliseconds
past the Epoch.

Up to this point we assumed that all data passed in was of nanosecond
precision. The data is still stored in the database as nanoseconds;
now, upon receiving the data, we convert it to nanoseconds. If the
precision URL parameter is not specified, we default to auto and take
a best-effort guess at what the user wanted based on the order of
magnitude of the data passed in.
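
A minimal sketch of the conversion described, including the order-of-magnitude guess for auto; the thresholds are illustrative and overflow handling is omitted.

```rust
/// Precision accepted by the endpoint; Auto is the default.
#[derive(Clone, Copy)]
enum Precision {
    Auto,
    Second,
    Millisecond,
    Microsecond,
    Nanosecond,
}

/// Convert an incoming timestamp to nanoseconds.
fn to_nanos(ts: i64, precision: Precision) -> i64 {
    match precision {
        Precision::Second => ts * 1_000_000_000,
        Precision::Millisecond => ts * 1_000_000,
        Precision::Microsecond => ts * 1_000,
        Precision::Nanosecond => ts,
        // best-effort guess from the order of magnitude of the value
        Precision::Auto => match ts.unsigned_abs() {
            0..=9_999_999_999 => ts * 1_000_000_000,                  // seconds
            10_000_000_000..=9_999_999_999_999 => ts * 1_000_000,     // milliseconds
            10_000_000_000_000..=9_999_999_999_999_999 => ts * 1_000, // microseconds
            _ => ts,                                                  // nanoseconds
        },
    }
}
```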

This change allows users finer-grained control over what precision
they want to use for their data. It also aims for a good user
experience: things work as expected, and we avoid a failure mode
whereby a user wanted seconds but got nanoseconds by default.
2024-02-27 11:57:10 -05:00