Commit Graph

29 Commits (praveen/sys-events-tidy-up)

Author SHA1 Message Date
Trevor Hilton 234d37329a
feat: metacache REST APIs to create and delete (#25587) 2024-11-27 08:41:46 -05:00
praveen-influx 1f1125c767
refactor: update docs and tests for the telemetry crate (#25432)
- Introduced the `ParquetMetrics` and `SystemInfoProvider` traits to make
  tests easier to write (see the sketch below)
- Uses mockito for code that depends on reqwest::Client, and mockall to
  mock traits like `SystemInfoProvider`
- Minor updates to docs
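
As a rough illustration of that approach, here is a minimal mockall sketch; the trait body is a hypothetical stand-in, not the telemetry crate's actual `SystemInfoProvider`:

```rust
use mockall::automock;

// Hypothetical trait; #[automock] generates MockSystemInfoProvider.
#[automock]
pub trait SystemInfoProvider {
    fn num_cpus(&self) -> usize;
}

#[test]
fn reports_mocked_cpu_count() {
    let mut provider = MockSystemInfoProvider::new();
    // Program the mock to report a fixed CPU count.
    provider.expect_num_cpus().returning(|| 8);
    assert_eq!(provider.num_cpus(), 8);
}
```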
2024-10-08 15:45:13 +01:00
Michael Gattozzi eeb1aa7905
feat: swap over to DbId and TableId everywhere (#25421)
* feat: Add TableId and ColumnId

* feat: swap over to DbId and TableId everywhere

This commit swaps us over to using the DbId and TableId types everywhere
for our internal systems. Anywhere that's external facing, such as names
for last cache tables or line protocol parsing, use names. In these cases
we have the `Catalog` which keeps a map of TableIds and DbIds in a
bidirectional mapping for easy lookup, i.e., id <-> name. While in essence
the change itself isn't that complicated, given how much we depended on
names for things the changes end up being quite invasive and extensive.
Luckily it shouldn't be too hard to review. Note this does not add the
column ids; that will be done in a follow-up PR.
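
A minimal sketch of the id newtypes and the bidirectional id <-> name lookup described above; names and fields are illustrative, not the actual `Catalog` internals:

```rust
use std::collections::HashMap;

// Copyable integer newtype standing in for TableId (DbId is analogous).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct TableId(u32);

// Bidirectional map: resolve a name to an id and an id back to a name.
#[derive(Default)]
pub struct TableMap {
    id_to_name: HashMap<TableId, String>,
    name_to_id: HashMap<String, TableId>,
}

impl TableMap {
    pub fn insert(&mut self, id: TableId, name: &str) {
        self.id_to_name.insert(id, name.to_string());
        self.name_to_id.insert(name.to_string(), id);
    }
    // External interfaces (line protocol, last cache names) resolve names...
    pub fn id(&self, name: &str) -> Option<TableId> {
        self.name_to_id.get(name).copied()
    }
    // ...while internal systems key everything by id.
    pub fn name(&self, id: TableId) -> Option<&str> {
        self.id_to_name.get(&id).map(String::as_str)
    }
}
```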

Closes #25375
Closes #25403
Closes #25404
Closes #25405
Closes #25412
Closes #25413
2024-10-03 14:47:46 -04:00
Trevor Hilton 7d37bbbce7
test: add test helpers for object store types (#25420)
This adds a new crate, `influxdb3_test_helpers`, which provides two object
store helper types: one tracks request counts made through the store, and
the other synchronizes requests made through the store.
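
A simplified sketch of the request-counting idea; the real helper wraps an `Arc<dyn ObjectStore>` and implements the full trait, while the tiny stand-in trait here keeps the example self-contained:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the GET-style calls being counted.
trait Store {
    fn get(&self, path: &str) -> Option<Vec<u8>>;
}

struct RequestCountedStore<S> {
    inner: S,
    gets: AtomicUsize,
}

impl<S> RequestCountedStore<S> {
    fn new(inner: S) -> Self {
        Self { inner, gets: AtomicUsize::new(0) }
    }
    // Tests assert on this after driving requests through the wrapper.
    fn get_request_count(&self) -> usize {
        self.gets.load(Ordering::SeqCst)
    }
}

impl<S: Store> Store for RequestCountedStore<S> {
    fn get(&self, path: &str) -> Option<Vec<u8>> {
        // Count every request before delegating to the wrapped store.
        self.gets.fetch_add(1, Ordering::SeqCst);
        self.inner.get(path)
    }
}
```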
2024-10-02 14:45:12 -04:00
Trevor Hilton 4184a331ea
refactor: parquet cache with less locking (#25389)
Closes #25382 
Closes #25383 

This refactors the parquet cache to use less locking by switching from using the `clru` crate to a hand-rolled cache implementation. The new cache still acts as an LRU, but it uses atomics to track hit-time per entry, and handles pruning in a separate process that is decoupled from insertion/gets to the cache.

The `Cache` type uses a [`DashMap`](https://docs.rs/dashmap/latest/dashmap/struct.DashMap.html) internally to store cache entries. This should help reduce lock contention, and also has the added benefit of not requiring mutability to insert into _or_ get from the map.

The cache maps an `object_store::Path` to a `CacheEntry`. On a hit, an entry will have its `hit_time` (an `AtomicI64`) incremented. During a prune operation, entries that have the oldest hit times will be removed from the cache. See the `Cache::prune` method for details.

The cache is set up with a memory _capacity_ and a _prune percent_. The cache tracks memory used as entries are added, based on their _size_, and when a prune is invoked in the background, if the cache has exceeded its capacity, it will prune `prune_percent * cache.len()` entries from the cache.
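
A minimal sketch of the cache described above, assuming illustrative names and a `String` key in place of `object_store::Path`:

```rust
use dashmap::DashMap;
use std::sync::atomic::{AtomicI64, AtomicUsize, Ordering};

struct CacheEntry {
    data: Vec<u8>,
    hit_time: AtomicI64, // updated atomically on every hit
}

struct Cache {
    map: DashMap<String, CacheEntry>,
    capacity: usize,   // memory capacity in bytes
    prune_percent: f64,
    used: AtomicUsize, // memory tracked as entries are added
}

impl Cache {
    // No `&mut self` needed: DashMap shards its locks internally.
    fn get(&self, path: &str, now: i64) -> Option<Vec<u8>> {
        let entry = self.map.get(path)?;
        entry.hit_time.store(now, Ordering::Relaxed);
        Some(entry.data.clone())
    }

    // Run from a background task, decoupled from inserts and gets.
    fn prune(&self) {
        if self.used.load(Ordering::Relaxed) <= self.capacity {
            return;
        }
        let n = (self.prune_percent * self.map.len() as f64) as usize;
        // Collect (key, last hit) pairs and evict the n oldest.
        let mut hits: Vec<(String, i64)> = self
            .map
            .iter()
            .map(|e| (e.key().clone(), e.value().hit_time.load(Ordering::Relaxed)))
            .collect();
        hits.sort_by_key(|(_, t)| *t);
        for (key, _) in hits.into_iter().take(n) {
            // Real code would also subtract the freed entry sizes from `used`.
            self.map.remove(&key);
        }
    }
}
```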

Two tests were added:
* `cache_evicts_lru_when_full` to check LRU behaviour of the cache
* `cache_hit_while_fetching` to check that a cache entry hit while a request is in flight to fetch that entry will not result in extra calls to the underlying object store
2024-09-27 11:59:17 -04:00
Trevor Hilton 9c71b3ce25
feat: memory-cached object store for parquet files (#25377)
Part of #25347 

This sets up a new implementation of an in-memory parquet file cache in the `influxdb3_write` crate in the `parquet_cache.rs` module.

This module introduces the following types:
* `MemCachedObjectStore` - a wrapper around an `Arc<dyn ObjectStore>` that can serve GET-style requests to the store from an in-memory cache
* `ParquetCacheOracle` - an interface (trait) that can accept requests to create new cache entries in the cache used by the `MemCachedObjectStore`
* `MemCacheOracle` - implementation of the `ParquetCacheOracle` trait

## `MemCachedObjectStore`

This takes inspiration from the [`MemCacheObjectStore` type](1eaa4ed5ea/object_store_mem_cache/src/store.rs (L205-L213)) in core, but has some different semantics around its implementation of the `ObjectStore` trait, and uses a different cache implementation.

The reason for wrapping the object store is that this ensures that any GET-style request being made for a given object is served by the cache, e.g., metadata requests made by DataFusion.

The internal cache comes from the [`clru` crate](https://crates.io/crates/clru), which provides a least-recently used (LRU) cache implementation that allows for weighted entries. The cache is initialized with a capacity and entries are given a weight on insert to the cache that represents how much of the allotted capacity they will take up. If there isn't enough room for a new entry on insert, then the LRU item will be removed.

### Limitations of `clru`

The `clru` crate conveniently gives us an LRU eviction policy but its API may put some limitations on the system:
* gets to the cache require an `&mut` reference, which means that the cache needs to be behind a `Mutex`. If this slows down requests through the object store, then we may need to explore alternatives.
* we may want more sophisticated eviction policies than a straight LRU, i.e., to favour certain tables over others, or files that represent recent data over those that represent old data.

## `ParquetCacheOracle` / `MemCacheOracle`

The cache oracle is responsible for handling cache requests, i.e., to fetch an item and store it in the cache. In this PR, the oracle runs a background task to handle these requests. I defined this as a trait/struct pair since the implementation may look different in Pro vs. OSS.
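
A sketch of that trait/struct pair and its background task, with illustrative names and a channel standing in for the real request plumbing:

```rust
use tokio::sync::mpsc;

// A request to fetch an object and populate the cache with it.
struct CacheRequest {
    path: String,
}

trait ParquetCacheOracle: Send + Sync {
    fn register(&self, req: CacheRequest);
}

struct MemCacheOracle {
    tx: mpsc::Sender<CacheRequest>,
}

impl MemCacheOracle {
    fn new() -> Self {
        let (tx, mut rx) = mpsc::channel::<CacheRequest>(1024);
        // Background task decouples cache population from the hot path.
        tokio::spawn(async move {
            while let Some(_req) = rx.recv().await {
                // Real code would GET _req.path from the object store and
                // insert the result into the cache used by
                // MemCachedObjectStore.
            }
        });
        Self { tx }
    }
}

impl ParquetCacheOracle for MemCacheOracle {
    fn register(&self, req: CacheRequest) {
        // try_send keeps registration non-blocking for callers.
        let _ = self.tx.try_send(req);
    }
}
```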
2024-09-24 10:58:15 -04:00
Michael Gattozzi 54d209d0bf
feat: Add u32 ID for Databases (#25302)
* feat: Remove lock for FileId tests

Since we now are using cargo-nextest in CI we can remove
the locks used in the FileId tests to make sure that we
have no race conditions

* feat: Add u32 ID for Databases

This commit adds a new DbId for databases. It also updates paths to use
that id as part of the name. When starting up the WriteBuffer we apply
the DbId from the persisted snapshot much like we do for ParquetFileIds.

This introduces the influxdb3_id crate to avoid circular deps with ids.
The ParquetFileId should also be moved into this crate, but it's
outside the scope of this change.
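
A sketch of what a process-wide u32 id can look like, with a startup hook to re-seed from the persisted snapshot; names are illustrative, not the `influxdb3_id` crate's exact API:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static NEXT_DB_ID: AtomicU32 = AtomicU32::new(0);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct DbId(u32);

impl DbId {
    // Each new database takes the next id.
    pub fn new() -> Self {
        Self(NEXT_DB_ID.fetch_add(1, Ordering::SeqCst))
    }
    // On startup, continue after the highest id in the persisted snapshot.
    pub fn set_next_id(next: u32) {
        NEXT_DB_ID.store(next, Ordering::SeqCst);
    }
}
```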

Closes #25301
2024-09-18 11:44:04 -04:00
praveen-influx 0c1fced7a4
refactor(catalog): catalog initialisation refactor (#25360)
- when no catalog is found, create a new one with instance id
  and persist it immediately
- enabled test-log in influxdb3_write

Closes: https://github.com/influxdata/influxdb/issues/25346
2024-09-18 15:23:17 +01:00
Paul Dix 2b8fc7b44e
refactor: Move Catalog into influxdb3_catalog crate (#25210)
* refactor: Move Catalog into influxdb3_catalog crate

This moves the catalog and its serialization logic into its own crate. This is a precursor to recording more catalog modifications into the WAL.

Fixes #25204

* fix: cargo update

* fix: add version = 2 to deny.toml

* fix: update deny.toml

* fix: add CCO to deny.toml
2024-08-02 16:04:12 -04:00
Paul Dix 3265960010
refactor: implement new wal and refactor write buffer (#25196)
* feat: refactor WAL and WriteBuffer

There is a ton going on here, but here are the high-level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to work with the new WAL, which also required an update to how the Catalog is modified and persisted.

The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.

The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).

The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.

Overall, this structure of a single WAL, with flushes, snapshots, and chunks in the queryable buffer, led to a simpler setup for the write buffer. I was able to clear out quite a bit of code related to the old segment organization.
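
A sketch of the snapshot trigger, using the default numbers quoted above (snapshot the oldest 600 WAL files once 900 exist); the real decision logic has more inputs:

```rust
struct SnapshotPolicy {
    snapshot_size: usize, // files folded into one snapshot (default 600)
    trigger_total: usize, // total files before snapshotting (default 900)
}

impl SnapshotPolicy {
    /// How many of the oldest WAL files to snapshot, if the threshold
    /// has been reached; None means keep accumulating.
    fn files_to_snapshot(&self, wal_file_count: usize) -> Option<usize> {
        (wal_file_count >= self.trigger_total).then_some(self.snapshot_size)
    }
}
```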

Fixes #25142 and fixes #25173

* refactor: address PR feedback

* refactor: wal to replay and background flush on new

* chore: remove stray println
2024-08-01 15:04:15 -04:00
Trevor Hilton 53e5c5f5c5
feat: last cache implementation (#25109)
* feat: base for last cache implementation

Each last cache holds a ring buffer for each column in an index map, which
preserves the insertion order for faster record batch production.

The ring buffer uses a custom type to handle the different supported
data types that we can have in the system.

* feat: implement last cache provider

LastCacheProvider is the API used to create last caches and write
table batches to them. It uses a two-layer RwLock/HashMap: the first for
the database, and the second layer for the table within the database.

This allows for table-level locks when writing in buffered data, and only
gets a database-level lock when creating a cache (and in future, when
removing them as well).
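
A sketch of that two-layer locking scheme, assuming parking_lot's RwLock and a unit struct standing in for the actual cache type:

```rust
use parking_lot::RwLock;
use std::collections::HashMap;

struct LastCache; // stand-in for the real per-table cache

struct LastCacheProvider {
    // database name -> (table name -> cache)
    dbs: RwLock<HashMap<String, RwLock<HashMap<String, LastCache>>>>,
}

impl LastCacheProvider {
    fn write_to_table(&self, db: &str, table: &str) {
        // Writes take the database layer as a read lock and only the
        // table layer as a write lock, so tables don't contend.
        if let Some(tables) = self.dbs.read().get(db) {
            if let Some(_cache) = tables.write().get_mut(table) {
                // push the buffered table batch into the cache here
            }
        }
    }

    fn create_cache(&self, db: &str, table: &str) {
        // Creating a cache is the rare case that takes the
        // database-level write lock.
        self.dbs
            .write()
            .entry(db.to_string())
            .or_default()
            .get_mut()
            .insert(table.to_string(), LastCache);
    }
}
```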

* test: APIs on write buffer and test for last cache

Added basic APIs on the write buffer to access the last cache and then a
test to the last_cache module to see that it works with a simple example

* docs: add some doc comments to last_cache

* chore: clippy

* chore: one small comment on IndexMap

* chore: clean up some stale comments

* refactor: part of PR feedback

Addressed three parts of PR feedback:

1. Remove double-lock on cache map
2. Re-order the get when writing to the cache to be outside the loop
3. Move the time check into the cache itself

* refactor: nest cache by key columns

This refactors the last cache to use a nested caching structure, where
the key columns for a given cache are used to create a hierarchy of
nested maps, terminating in the actual store for the values in the cache.

Access to the cache is done via a set of predicates which can optionally
specify the key column values at any level in the cache hierarchy to only
gather record batches from children of that node in the cache.
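
A sketch of that nested structure and a predicate walk over it; types are illustrative stand-ins for the real cache:

```rust
use std::collections::HashMap;

struct LastCacheStore; // terminal node holding the cached values

enum LastCacheState {
    // One map level per key column, keyed by that column's value.
    Key(HashMap<String, LastCacheState>),
    // The store the hierarchy terminates in.
    Store(LastCacheStore),
}

// Predicates pin key-column values from the root downward; a partial
// predicate list selects every store under the matching subtree.
fn walk<'a>(node: &'a LastCacheState, predicates: &[&str]) -> Vec<&'a LastCacheStore> {
    match (node, predicates) {
        (LastCacheState::Store(store), _) => vec![store],
        (LastCacheState::Key(map), [value, rest @ ..]) => map
            .get(*value)
            .map(|child| walk(child, rest))
            .unwrap_or_default(),
        (LastCacheState::Key(map), []) => {
            // No predicate at this level: gather from all children.
            map.values().flat_map(|child| walk(child, &[])).collect()
        }
    }
}
```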

Some todos:
- Need to handle the TTL
- Need to move the TableProvider impl up to the LastCache type

* refactor: TableProvider impl to LastCache

This re-writes the datafusion TableProvider implementation on the correct
type, i.e., the LastCache, and adds conversion from the filter Expr's to
the Predicate type for the cache.

* feat: support TTL in last cache

Last caches will have expired entries walked when writes come in.

* refactor: add panic when unexpected predicate used

* refactor: small naming convention change

* refactor: include keys in query results and no null keys

Changed key columns so that they do not accept null values, i.e., rows
that are pushed that are missing key column values will be ignored.

When producing record batches for a cache, if not all key columns are
used in the predicate, then this change makes it so that the non-predicate
key columns are produced as columns in the outputted record batches.

A test with a few cases showing this was added.

* fix: last cache key column query output

Ensure key columns in the last cache that are not included in the
predicate are emitted in the RecordBatches as a column.

Cleaned up and added comments to the new test.

* chore: clippy and some un-needed code

* fix: clean up some logic errors in last_cache

* test: add tests for non default cache size and TTL

Added two tests, as per commit title. Also moved the eviction process
to a separate function so that it was not being done on every write to
the cache, which could be expensive, and this ensures that entries are
evicted regardless of whether writes are coming in or not.

* test: add invalid predicate test cases to last_cache

* test: last_cache with field key columns

* test: last_cache uses series key for default keys

* test: last_cache uses tag set as default keys

* docs: add doc comments to last_cache

* fix: logic error in last cache creation

CacheAlreadyExists errors were only being based on the database and
table names, and not including the cache names, which was not
correct.

* docs: add some comments to last cache create fn

* feat: support null values in last cache

This also adds explicit support for series key columns to distinguish
them from normal tags in terms of nullability

A test was added to check nulls work

* fix: reset last cache last time when ttl evicts all data
2024-07-09 15:22:04 -04:00
Lorrens Pantelis 8b6c2a3b3d
refactor: Replace use of `std::HashMap` with `hashbrown::HashMap` (#25094)
* refactor: use hashbrown with entry_ref api

* refactor: use hashbrown hashmap instead of std hashmap in places that would benefit from the `entry_ref` API (see the sketch below)

* chore: Cargo update to pass CI
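
A small sketch of what `entry_ref` buys over std's entry API, which requires an owned key even for lookups that hit:

```rust
use hashbrown::HashMap;

fn count_words(words: &[&str]) -> HashMap<String, usize> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    for w in words {
        // entry_ref takes &str and only allocates a String when the
        // entry is vacant; entry(w.to_string()) would allocate on
        // every lookup.
        *counts.entry_ref(*w).or_insert(0) += 1;
    }
    counts
}
```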
2024-06-26 12:43:35 -04:00
Trevor Hilton 039dea2264
refactor: add dedicated type for serializaing catalog tables (#25042)
Remove reliance on data_types::ColumnType

Introduce TableSnapshot for serializing table information in the catalog.

Remove the columns BTree from the TableDefinition and use the schema
directly. BTrees are still used to ensure column ordering when tables are
created, or columns added to existing tables.

The custom Deserialize impl on TableDefinition used to block duplicate
column definitions in the serialized data. This preserves that behaviour
using serde_with and extends it to the other types in the catalog, namely
InnerCatalog and DatabaseSchema.

The serialization test for the catalog was extended to include multiple
tables in a database and multiple columns spanning the range of available
types in each table.

Snapshot testing was introduced using the insta crate to check the
serialized JSON form of the catalog, and help catch breaking changes
when introducing features to the catalog.
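
A minimal sketch of that kind of snapshot test with insta (using its json feature); the catalog value here is a stand-in for the real serialized form:

```rust
#[test]
fn catalog_serialization_snapshot() {
    let catalog = serde_json::json!({
        "databases": { "db": { "tables": {} } }
    });
    // The first run records the JSON; later runs fail if the serialized
    // form drifts, surfacing breaking catalog changes in review.
    insta::assert_json_snapshot!(catalog);
}
```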

Added a test that verifies the no-duplicate key rules when deserializing
the map components in the Catalog
2024-06-04 11:38:43 -04:00
Michael Gattozzi 43368981c7
feat: implement parquet cache persistence (#24907)
* feat: use concrete type for Persister

Up to this point we'd been using a generic `Persister` trait; however,
in practice, even for tests, we only use one type, the `PersisterImpl`.
In order to share the `MemoryPool` between it and the upcoming
`ParquetCache` we need it to be the concrete type. This also makes the
code simpler to grok by removing unneeded generic bounds.

* fix: new_with_partition_key fn name typo

* feat: implement parquet cache persistence

* fix: incorporate feedback and don't hold across await
2024-04-29 14:34:32 -04:00
Michael Gattozzi 2291ebeae7
feat: sort and dedupe on persist (#24870)
When persisting parquet files we will now sort and dedupe using the
COMPACT operation implemented in IOx Query. Note that right now we don't
choose any columns to sort on, defaulting to none, which means we dedupe
and sort on whatever the default behavior of the COMPACT operation is.
Future changes can figure out which columns to sort by when compacting
the data.
2024-04-03 15:13:36 -04:00
Trevor Hilton 7784749bca
feat: support v1 and v2 write APIs (#24793)
feat: support v1 and v2 write APIs

This adds support for two APIs: /write and /api/v2/write. These implement the v1 and v2 write APIs, respectively. In general, the difference between these and the new /api/v3/write_lp API is in the request parsing. We leverage the WriteRequestUnifier trait from influxdb3_core to handle parsing of v1 and v2 HTTP requests, to keep the error handling at that level consistent with distributed versions of InfluxDB 3.0. Specifically, we use the SingleTenantRequestUnifier implementation of the trait.

Changes:
- Addition of two new routes to the route_request method in influxdb3_server::http to serve /write and /api/v2/write requests.
- Database name validation was updated to handle cases where retention policies may be passed in /write requests, and to also reject empty names. A unit test was added to verify the validate_db_name function.
- HTTP request authorization in the router will extract the full Authorization header value, and store it in the request extensions; this is used in the write request parsing from the core iox_http crate to authorize write requests.
- E2E tests to verify correct HTTP request parsing / response behaviour for both /write and /api/v2/write APIs
- E2E tests to check that data sent in through /write and /api/v2/write can be queried back
2024-03-28 13:33:17 -04:00
Trevor Hilton c79821b246
feat: add `_series_id` to tables on write (#24842)
feat: add _series_id to tables on write

New _series_id column is added to tables; this stores a 32-byte SHA256 hash of the tag set of a line of Line Protocol. The tag set is checked for sort order, then sorted if not already, before producing the hash.
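
A sketch of that derivation with the sha2 crate; the exact canonical encoding of the tag set is an assumption here:

```rust
use sha2::{Digest, Sha256};

// SHA256 over the sorted tag set; the output is always 32 bytes.
fn series_id(tags: &mut Vec<(&str, &str)>) -> [u8; 32] {
    // Check sort order first and only sort when needed.
    if !tags.windows(2).all(|w| w[0].0 <= w[1].0) {
        tags.sort_by(|a, b| a.0.cmp(b.0));
    }
    let mut hasher = Sha256::new();
    for (key, value) in tags.iter() {
        hasher.update(key.as_bytes());
        hasher.update(b"=");
        hasher.update(value.as_bytes());
    }
    hasher.finalize().into()
}
```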

Unit tests were added to check hashing and sorting functions work.

Tests that performed queries needed to be modified to account for the new _series_id column; in general, SELECT * queries were altered to use a select clause with specific column names.

The column limit was increased to 501 internally to account for the new _series_id column, but the user-facing limit is still 500.
2024-03-26 15:22:19 -04:00
Paul Dix 01d33f69b5
feat: wire up query from parquet files (#24749)
* feat: wire up query from parquet files

This adds the functionality to query from Parquet files that have been persisted in object storage. Any segments that are loaded up on boot up will be included (limit of 1k segments at the time of this PR). In a follow on PR we should add a good end-to-end test that has persistence and query through the main API (might be tricky).

* Move BufferChunk and ParquetChunk into chunk module
* Add object_store_url to Persister
* Register object_store on server startup
* Add loaded persisted_segments to SegmentState

* refactor: PR feedback
2024-03-12 09:47:32 -04:00
Paul Dix bf931970d3
feat: Segment the write buffer on time (#24745)
* Split WriteBuffer into segments

* Add SegmentRange and SegmentDuration
* Update WAL to store SegmentRange and to be able to open up multiple ranges
* Remove Partitioner and PartitionBuffer

* Update SegmentState and loader

* Update SegmentState with current, next and outside
* Update loader and tests to load up current, next and previous outside segments based on the passed in time and desired segment duration

* Update WriteBufferImpl and Flusher

* Update the flusher to flush to multiple segments
* Update WriteBufferImpl to split data into segments getting written to
* Update HTTP and WriteBuffer to use TimeProvider

* Wire up outside segment writes and loading

* Data outside current and next no longer goes to a single segment, but to a segment based on that data's time. This limits writes to 100 segments of time that can be open at any given time.

* Refactor SegmentDuration add config option

* Refactors SegmentDuration to be a new type over duration
* Adds the clap block configuration to pass SegmentDuration, defaulting to 1h
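
A sketch of bucketing a row timestamp into its segment, assuming SegmentDuration wraps a std Duration as described:

```rust
use std::time::Duration;

struct SegmentDuration(Duration);

#[derive(Debug, PartialEq)]
struct SegmentRange {
    start_ns: i64,
    end_ns: i64, // exclusive
}

impl SegmentDuration {
    // Rows land in the segment whose range contains their timestamp.
    fn range_for(&self, time_ns: i64) -> SegmentRange {
        let d = self.0.as_nanos() as i64;
        let start_ns = time_ns - time_ns.rem_euclid(d);
        SegmentRange { start_ns, end_ns: start_ns + d }
    }
}
```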

* refactor: SegmentState and loader

* remove the current_segment and next_segment from the loader and segment state, instead having just a collection of segments
* open up only the current_segment by default
* keep current and next segments open if they exist, while others go into persisting or persisted

* fix: cargo audit

* refactor: fixup PR feedback
2024-03-11 13:54:09 -04:00
Trevor Hilton f7892ebee5
feat: add the `api/v3/query_influxql` API (#24696)
feat: add query_influxql api

This PR adds support for the /api/v3/query_influxql API. This re-uses code from the existing query_sql API, but some refactoring was done to allow for code re-use between the two.

The main change to the original code from the existing query_sql API was that the format is determined up front, in the event that the user provides some incorrect Accept header, so that the 400 BAD REQUEST is returned before performing the query.

Support of several InfluxQL queries that previously required a bridge to be executed in 3.0 was added:

SHOW MEASUREMENTS
SHOW TAG KEYS
SHOW TAG VALUES
SHOW FIELD KEYS
SHOW DATABASES

Handling of qualified measurement names in SELECT queries (see below)

This is accomplished with the newly added iox_query_influxql_rewrite crate, which provides the means to re-write an InfluxQL statement to strip out a database name and retention policy, if provided. Doing so allows the query_influxql API to have the database parameter optional, as it may be provided in the query string.

Handling qualified measurement names in SELECT

The implementation in this PR will inspect all measurements provided in a FROM clause and extract the database (DB) name and retention policy (RP) name (if not the default). If multiple DB/RPs are provided, an error is thrown.
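
A sketch of that single-qualifier rule, with plain tuples standing in for the rewrite crate's types:

```rust
// Each FROM-clause measurement carries an optional (db, rp) qualifier.
fn resolve_db_rp(
    measurements: &[(Option<&str>, Option<&str>)],
) -> Result<Option<(String, String)>, String> {
    let mut found: Option<(String, String)> = None;
    for (db, rp) in measurements {
        if let (Some(db), Some(rp)) = (db, rp) {
            let next = (db.to_string(), rp.to_string());
            if let Some(prev) = &found {
                // Two different qualifiers in one query is an error.
                if *prev != next {
                    return Err("multiple DB/RP qualifiers in FROM clause".into());
                }
            } else {
                found = Some(next);
            }
        }
    }
    Ok(found)
}
```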

Testing

E2E tests were added for performing basic queries against a running server on both the query_sql and query_influxql APIs. In addition, the test for query_influxql includes some of the InfluxQL-specific queries, e.g., SHOW MEASUREMENTS.

Other Changes

The influxdb3_client now has the api_v3_query_influxql method (and a basic test was added for this)
2024-03-01 12:27:38 -05:00
Michael Gattozzi 73e261c021
feat: Split out shared core crates from Edge (#24714)
This commit is a major refactor for the code base. It mainly does four
things:

1. Splits code shared between the internal IOx repository and this one
   into its own repo over at https://github.com/influxdata/influxdb3_core
2. Removes any docs or anything else that did not relate to this project
3. Reorganizes the Cargo.toml files to use the top level Cargo.toml to
   declare dependencies and versions to keep all crates in sync and sets
   all others to use `<dep>.workspace = true` unless it's an optional
   dependency
4. Set the top level Cargo.toml to point to the core crates as git
   dependencies

With this any changes specific to Edge will be contained here, updating
deps will be a PR over in `influxdata/influxdb3_core`, and we can prove
out the viability for this model to use for IOx.
2024-02-29 16:21:41 -05:00
dependabot[bot] ada6561f4a
chore(deps): Bump serde_json from 1.0.113 to 1.0.114 (#24687) 2024-02-25 14:34:37 +00:00
dependabot[bot] 278ecbeb56
chore(deps): Bump serde from 1.0.196 to 1.0.197 (#24689) 2024-02-25 14:26:15 +00:00
Paul Dix 3c5e5bf241
feat: Add segment persist of closed buffer segment (#24659)
* feat: add catalog sequence tracking to OpenBufferSegment

* feat: Add segment persist of closed buffer

* refactor: pr review updates

* refactor: PR updates
2024-02-14 10:55:09 -05:00
Paul Dix 4d9095e58d
feat: add segmenting and wal persistence to WriteBuffer (#24624)
* refactor: move write buffer into its own dir

* feat: implement write buffer segment with wal flushing

This creates the WriteBufferFlusher and OpenBufferSegment. If a wal is passed into the buffer, data written into it will be persisted to the wal for the initialized segment id.

* refactor: use crossbeam in flusher and pr cleanup
2024-02-12 12:36:10 -05:00
Michael Gattozzi 001a2a6653
feat: Implement Persister for PersisterImpl (#24588)
* feat: Implement Catalog r/w for persister

This commit implements reading and writing the Catalog to the object
store. This functionality was already stubbed out; it just needed an
implementation. Saving it to the object store is pretty straightforward,
as it just serializes it to JSON and writes it to the object store. For
loading, it finds the most recently added Catalog based on the file name
and returns it from the object store to the caller in its deserialized
form.

This commit also adds some tests to make sure that the above
functionality works as intended.
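
A sketch of the persist path: serialize to JSON and write under a sequence-numbered name so load can find the most recent file. Signatures follow recent `object_store` releases and may differ from the code in this commit:

```rust
use bytes::Bytes;
use object_store::{path::Path, ObjectStore};

async fn persist_catalog(
    store: &dyn ObjectStore,
    sequence: u64,
    catalog: &serde_json::Value,
) -> object_store::Result<()> {
    let json = serde_json::to_vec(catalog).expect("catalog serializes");
    // Zero-padded sequence in the name keeps listings sortable, so the
    // most recently added catalog is easy to find on load.
    let path = Path::from(format!("catalogs/{sequence:020}.json"));
    store.put(&path, Bytes::from(json).into()).await?;
    Ok(())
}
```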

* feat: Implement Segment r/w for persister

This commit continues the work on the persister by implementing the
persist_segment and load_segment functions for the persister. Much like
the Catalog implementation, it's serialized to JSON before being
persisted to the object store in persist_segment. This is pretty
straightforward. For loading, though, we need to find the most recent
n segment files, so we need to list them and then return the most
recent n. This is a little more complicated to do, but there are
comments in the code to make it easier to grok.

We also implement more tests to make sure that this part of the
persister works as expected.

* feat: Implement Parquet r/w to persister

This commit does a few things:

- First we add methods to the persister trait for reading and writing
  parquet files as these were not stubbed out in prior commits
- Secondly we add a method to serialize a SendableRecordBatchStream into
  Parquet bytes
- With these in place implementing the trait methods is pretty
  straightforward: hand a path in and a stream and get back some
  metadata about the file persisted and also get the bytes back if
  loading from the store

Of course we also add more tests to make sure this all works as
expected. Do note that this does nothing to make sure that we bound how
much memory is used or if this is the most efficient way to write
parquet files. This is mostly to get things working with the
understanding that future refinement on the approach might be needed.

* fix: Update smallvec for crate advisory

* fix: Implement better filename handling

* feat: Handle loading > 1000 Segment Info files
2024-01-25 14:31:57 -05:00
Paul Dix 02b4d28637
feat: add basic wal implementation for Edge (#24570)
* feat: add basic wal implementation for Edge

This WAL implementation uses some of the code from the wal crate, but departs pretty significantly from it in many ways. For now it uses simple JSON encoding for the serialized ops, but we may want to switch that to Protobuf at some point in the future. This version of the wal doesn't have its own buffering. That will be implemented higher up in the BufferImpl, which will use the wal and SegmentWriter to make data in the buffer durable.

The write flow will be that writes will come into the buffer and validate/update against an in memory Catalog. Once validated, writes will get buffered up in memory and then flushed into the WAL periodically (likely every 10-20ms). After being flushed to the wal, the entire batch of writes will be put into the in memory queryable buffer. After that responses will be sent back to the clients. This should reduce the write lock pressure on the in-memory buffer considerably.
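
A sketch of that flush loop, with illustrative types: writes accumulate in a shared buffer and a background task flushes a batch to the WAL on a fixed interval:

```rust
use std::sync::{Arc, Mutex};
use std::time::Duration;

struct FlushBuffer {
    ops: Mutex<Vec<String>>, // pending serialized write ops
}

async fn flush_loop(buffer: Arc<FlushBuffer>) {
    let mut interval = tokio::time::interval(Duration::from_millis(10));
    loop {
        interval.tick().await;
        // Swap the batch out under a short lock, then do the slow WAL
        // write without holding it.
        let batch = std::mem::take(&mut *buffer.ops.lock().unwrap());
        if !batch.is_empty() {
            // write `batch` to the WAL segment here, then move it into
            // the queryable buffer and respond to waiting clients
        }
    }
}
```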

In this PR:
- Update the Wal, WalSegmentWriter, and WalSegmentReader traits to line up with new design/understanding
- Implement wal (mainly just a way to identify segment files in a directory)
- Implement WalSegmentWriter (write header, op batch with crc, and track sequence number in segment, re-open existing file; see the sketch below)
- Implement WalSegmentReader
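
A sketch of the op-batch framing named in the list above: JSON-encode the batch, then write sequence, length, crc, payload. The exact field layout is an assumption:

```rust
use std::io::Write;

fn write_op_batch<W: Write>(
    w: &mut W,
    sequence: u64,
    ops: &serde_json::Value,
) -> std::io::Result<()> {
    // Simple JSON encoding for now; the commit notes Protobuf may
    // replace this later.
    let payload = serde_json::to_vec(ops)?;
    let crc = crc32fast::hash(&payload);
    w.write_all(&sequence.to_be_bytes())?;
    w.write_all(&(payload.len() as u32).to_be_bytes())?;
    w.write_all(&crc.to_be_bytes())?;
    w.write_all(&payload)
}
```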

* refactor: make Wal return impl reader/writer

* refactor: clean up wal segment open

* fix: WriteBuffer and Wal usage

Turn wal and write buffer references into a concrete type, rather than dyn.

* fix: have wal loading ignore invalid files
2024-01-12 11:52:28 -05:00
Michael Gattozzi 8ee13bca48
fix: Failing CI on main (#24562)
* fix: build, upgrade rustc, and deps

This commit upgrades Rust to 1.75.0, the latest release. We also
upgraded our dependencies to stay up to date and to clear out any
unneeded deps from the lockfile. In order to make sure everything works
this also fixes the build by upgrading the workspace-hack crate using
cargo hakari and removing the `workspace.lint` that was in
influxdb3_write that didn't need to be there, probably from a merge
issue.

With this we can build influxdb3 as our default on main, but this alone
is not enough to fix CI and will be addressed in future commits.

* fix: warnings for influxdb3 build

This commit fixes the warnings emitted by `cargo build` when compiling
influxdb3. Mainly it adds needed lifetimes and removes unnecessary
imports and function calls.

* fix: all of the clippy lints

This for the most part just applies suggested fixes by clippy with a few
exceptions:

- Generated type crates had additional allows added since we can't
  control what code gets made
- Things that couldn't be automatically fixed were done so manually in
  particular adding a Send bound for traits that created a Future that
  should be Send

We also had to fix a build issue by adding a feature for tokio-compat
due to the upgrade of deps. The workspace crate was updated accordingly.

* fix: failing test due to rust panic message change

Between rustc 1.72 and rustc 1.75 the way that error messages are
displayed when panicking changed. One of our tests depended on the output
of that behavior and this commit updates the expected error message to
the new form so that the test will pass.

* fix: broken cargo doc link

* fix: cargo formatting run

* fix: add workspace-hack to influxdb3 crates

This was the last change needed to make sure that the workspace-hack
crate CI lint would pass.

* fix: remove tests that can not run anymore

We removed IOx code from this code base, and as a result some tests
cannot be run anymore, so this commit removes them from the code base
so that we can get a green build.
2024-01-09 15:11:35 -05:00
Paul Dix 5831cf8cee
feat: Add basic Edge server structure (#24552)
* WIP: basic influxdb3 command and http server

* WIP: write lp, buffer, query out

* WIP: test write & query on influxdb3_server, fix warnings

* WIP: pull write buffer and catalog into separate crate

* WIP: sketch out types used for write: buffer, wal, persister

* WIP: remove a bunch of old IOx stuff and fmt
2024-01-08 11:50:59 -05:00