Commit Graph

49431 Commits (praveen/add-snapshot-walop)

Trevor Hilton e25e811d2b
docs: `PROFILING.md` (#25075)
Part of #25067

Changes in this PR:

- Addition of a PROFILING.md file, which briefly outlines how to build the influxdb3 binary in preparation for profiling and explains usage of macOS's Instruments tool
- Addition of a quick-bench profile, which extends the already existing quick-release profile with debuginfo turned on
2024-07-24 11:01:36 -04:00
Trevor Hilton 10dd22b6de
fix: last cache catalog configuration tracks explicit vs. non-explicit value columns (#25185)
* fix: catalog support for last caches that accept new fields

Last cache definitions in the catalog were augmented to either store an
explicit set of column names (including time), or to accept new fields.

This will allow these caches to be loaded properly on server restart such
that all non-key columns are cached.

* refactor: use tagged serialization for last cache values def

This also updated the client code to accept the new structure in
influxdb3_client.
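
For illustration, the tagged serialization mentioned above could take a shape like this; the names are hypothetical, not the actual catalog types:

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical sketch of a tagged value-columns definition. With
/// `#[serde(tag = "type")]` the variant name is written into the JSON,
/// so the two kinds of cache are distinguishable when deserializing.
#[derive(Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum LastCacheValueColumnsDef {
    /// The cache stores exactly these columns (including `time`).
    Explicit { columns: Vec<String> },
    /// The cache accepts any new fields written to the table.
    AcceptNew,
}
```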

* test: add e2e tests to catch regressions in influxdb3_client

* chore: cargo update for audit
2024-07-24 11:00:40 -04:00
Trevor Hilton dfecf570e6
feat: support `!=`, `IN`, and `NOT IN` predicates in last cache queries (#25175)
Part of #25174

This PR adds support for three more predicate types when querying the last cache: !=, IN, and NOT IN. Previously only = was supported.

Existing tests were extended to check that these predicate types work as expected, both in the last_cache module and in the influxdb3_server crate. The latter was important to ensure that the new predicate logic works in the context of actual query parsing/execution.
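
A minimal sketch of how such predicates can be modeled and evaluated (hypothetical types, not the actual last_cache code):

```rust
/// Hypothetical sketch of predicate types for a last cache query.
enum Predicate {
    Eq(String),
    NotEq(String),
    In(Vec<String>),
    NotIn(Vec<String>),
}

impl Predicate {
    /// True if a key column value satisfies the predicate.
    fn matches(&self, value: &str) -> bool {
        match self {
            Predicate::Eq(v) => value == v,
            Predicate::NotEq(v) => value != v,
            Predicate::In(vs) => vs.iter().any(|v| v == value),
            Predicate::NotIn(vs) => vs.iter().all(|v| v != value),
        }
    }
}
```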
2024-07-23 14:17:09 -04:00
Trevor Hilton 7a7db7d529
feat: connect `LastCacheProvider` with catalog at runtime (#25170)
Closes #25169

This PR ensures the last cache configuration is persisted to the catalog when last caches are created, and removed from the catalog when they are deleted. The last cache is initialized from the catalog on server start.

A new trait was added to the write buffer: LastCacheManager, which provides the methods to create and delete last caches (and which is invoked from the HTTP API). Both create/delete methods will update the catalog, but also force persistence of the catalog to object store, vs. waiting for the WAL flush interval / segment persistence process to do it. This should ensure that the catalog is up-to-date with respect to the last cache configuration, in the event that the server is stopped before segment persistence.
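
An illustrative sketch of what such a trait could look like; the real LastCacheManager's names and signatures may differ:

```rust
use async_trait::async_trait;

/// Placeholder error type for the sketch.
#[derive(Debug)]
struct CacheError;

/// Sketch only; see influxdb3_write for the actual trait.
#[async_trait]
trait LastCacheManager {
    /// Create a last cache and persist the updated catalog immediately,
    /// rather than waiting for the WAL flush / segment persistence cycle.
    async fn create_last_cache(
        &self,
        db_name: &str,
        table_name: &str,
        cache_name: Option<&str>,
    ) -> Result<(), CacheError>;

    /// Delete a last cache, also persisting the catalog change right away.
    async fn delete_last_cache(
        &self,
        db_name: &str,
        table_name: &str,
        cache_name: &str,
    ) -> Result<(), CacheError>;
}
```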

A test was added to check this behaviour in influxdb3_write/src/write_buffer/mod.rs.
2024-07-23 12:41:42 -04:00
Trevor Hilton 9b9699da60
feat: CLI to manage last caches (#25168)
* feat: new last-cache CLI

This adds two new CLIs:

influxdb3 last-cache create
influxdb3 last-cache delete

These utilize the new underlying APIs/client methods for the last-n-value
cache feature.

* refactor: switch around the token CLI to new convention

* docs: re-word CLI docs
2024-07-17 11:33:58 -04:00
Trevor Hilton 6c8a3e4e34
feat: add methods to create and delete last caches to `influxdb3_client` (#25167)
* feat: add create last cache method to client

* feat: add delete last cache method to client

* docs: add doc comment to client method for last cache create

* test: create and delete last cache client methods
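
A rough sketch of the shape of such a client method; the endpoint path, parameter names, and method name here are assumptions, not the actual influxdb3_client API:

```rust
use reqwest::Client as Http;

/// Stand-in for the real client type.
struct Client {
    base_url: String,
    http: Http,
}

impl Client {
    /// Hypothetical create method: POST the cache definition as JSON.
    async fn create_last_cache(&self, db: &str, table: &str) -> reqwest::Result<()> {
        self.http
            .post(format!("{}/api/v3/configure/last_cache", self.base_url))
            .json(&serde_json::json!({ "db": db, "table": table }))
            .send()
            .await?
            .error_for_status()?; // surface 4xx/5xx responses as errors
        Ok(())
    }
}
```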
2024-07-17 09:34:36 -04:00
Trevor Hilton 7752d03a79
feat: `last_caches` system table (#25166)
Added a new system table, system.last_caches, to enable queries that display information about last caches in a database.

You can query the table like so:

SELECT * FROM system.last_caches

Since queries are scoped to a database, this will only show last caches configured for the database being queried.

Results look like so:

+-------+----------------+----------------+---------------+-------+-----+
| table | name           | key_columns    | value_columns | count | ttl |
+-------+----------------+----------------+---------------+-------+-----+
| mem   | mem_last_cache | [host, region] | [time, usage] | 1     | 60  |
+-------+----------------+----------------+---------------+-------+-----+

An end-to-end test was added to verify queries to the system.last_caches table.
2024-07-17 09:14:51 -04:00
Trevor Hilton e8d9b02818
feat: `DELETE` last cache API (#25162)
Adds an API for deleting last caches.
- The API allows parameters to be passed either in the request URI query string or in the body as JSON (both forms are sketched below)
- Some additional error modes were handled for better HTTP status code responses, e.g., an invalid content type is now a 415, and URL query string parsing errors are now a 400
- An end-to-end test was added to check behaviour of the API
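
For illustration, the two parameter-passing forms might look like this from a raw HTTP client; the endpoint path and parameter names are assumptions:

```rust
use reqwest::Client;

async fn delete_last_cache(client: &Client) -> reqwest::Result<()> {
    let url = "http://localhost:8181/api/v3/configure/last_cache"; // assumed path

    // Form 1: parameters in the URI query string.
    client
        .delete(url)
        .query(&[("db", "mydb"), ("table", "mem"), ("name", "mem_last_cache")])
        .send()
        .await?
        .error_for_status()?;

    // Form 2: the same parameters as a JSON body.
    client
        .delete(url)
        .json(&serde_json::json!({ "db": "mydb", "table": "mem", "name": "mem_last_cache" }))
        .send()
        .await?
        .error_for_status()?;

    Ok(())
}
```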
2024-07-16 10:57:48 -04:00
Trevor Hilton 56488592db
feat: API to create last caches (#25147)
Closes #25096

- Adds a new HTTP API that allows the creation of a last cache, see the issue for details
- An E2E test was added to check success/failure behaviour of the API
- Adds the mime crate, for parsing request MIME types, but this is only used in the code I added - we may adopt it in other APIs / parts of the HTTP server in future PRs
2024-07-16 10:32:26 -04:00
Trevor Hilton 0279461738
feat: hook up last cache to query executor using DataFusion traits (#25143)
* feat: impl datafusion traits on last cache

Created a new module for the DataFusion table function implementations.
The TableProvider impl for LastCache was moved there, and new code that
implements the TableFunctionImpl trait to make the last cache queryable
was also written.

The LastCacheProvider and LastCache were augmented to make this work:
- The provider stores an Arc<LastCache> instead of a LastCache
- The LastCache uses interior mutability via an RwLock, to make the above
  possible.
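
A minimal sketch of that shape, using parking_lot's RwLock and a stand-in for the real column buffers:

```rust
use std::{collections::HashMap, sync::Arc};
use parking_lot::RwLock;

/// Stand-in: the real cache holds column buffers, not strings.
struct LastCache {
    state: RwLock<HashMap<String, String>>,
}

impl LastCache {
    /// Writers briefly take the write lock...
    fn push(&self, key: String, value: String) {
        self.state.write().insert(key, value);
    }

    /// ...while reads only need `&self`, so they work through the
    /// shared `Arc<LastCache>` handed out by the provider.
    fn get(&self, key: &str) -> Option<String> {
        self.state.read().get(key).cloned()
    }
}

fn main() {
    let cache = Arc::new(LastCache { state: RwLock::new(HashMap::new()) });
    cache.push("host".into(), "a".into());
    assert_eq!(cache.get("host").as_deref(), Some("a"));
}
```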

* feat: register last_cache UDTF on query context

* refactor: make server accept listener instead of socket addr

The server used to accept a socket address and bind it directly, returning
error if the bind fails.

This commit changes that so the ServerBuilder accepts a TcpListener. The
behaviour is essentially the same, but this allows us to bind the address
from tests when instantiating the server, so we can easily assign unused
ports.

Tests in the influxdb3_server were updated to exploit this in order to
use port 0 auto assignment and stop flaky test failures.
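
The port 0 trick needs only the standard library:

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Binding port 0 asks the OS for any free port, avoiding collisions
    // between concurrently running tests.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    println!("test server would listen on {}", listener.local_addr()?);
    // A builder that accepts the listener can then serve on it directly,
    // instead of binding an address itself.
    Ok(())
}
```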

A new, failing, test was also added to that module for the last cache.

* refactor: naive implementation of last cache key columns

Committing here as the last cache is in a working state, but it is naively
implemented: it just stores all key columns again (still with the hierarchy).

* refactor: make the last cache work with the query executor

* chore: fix my own feedback and appease clippy

* refactor: remove lower lock in last cache

* chore: cargo update

* refactor: rename function

* fix: broken doc comment
2024-07-16 10:10:47 -04:00
Trevor Hilton 0b8fbf456c
refactor: improvements to last cache implementation (#25133)
* refactor: make cache creation more idempotent

Last cache creation is now idempotent: if a cache is created and an
attempt is then made to create it again with the same parameters, it
will not result in an error.

* refactor: only store a single buffer of Instants

The last cache column buffers were storing an instant next to each
buffered value, which is unnecessary and not space efficient. This makes
it so the LastCacheStore holds a single buffer of Instants and manages
TTLs using that.
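
A toy sketch of the single-buffer-of-Instants idea (stand-in types, not the actual LastCacheStore):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

struct Store {
    instants: VecDeque<Instant>, // arrival time of each buffered row
    values: VecDeque<i64>,       // stand-in for the per-column buffers
    ttl: Duration,
}

impl Store {
    /// Evict rows older than the TTL, oldest first; one Instant per row
    /// covers every column, instead of an Instant per buffered value.
    fn evict_expired(&mut self, now: Instant) {
        while let Some(oldest) = self.instants.front() {
            if now.duration_since(*oldest) > self.ttl {
                self.instants.pop_front();
                self.values.pop_front();
            } else {
                break;
            }
        }
    }
}
```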

* refactor: clean up derelict cache members on eviction
2024-07-10 13:45:24 -04:00
Trevor Hilton 8fd50cefe1
chore: sync latest core (#25138)
* chore: sync latest core

* chore: clippy
2024-07-10 12:25:09 -04:00
wayne cd6734a7c4
chore: remove unused dependencies ioxd_common and test_helpers_end_to_end (#25134)
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
2024-07-10 09:17:54 -06:00
Trevor Hilton 2609b590c9
feat: support addition of newly written columns to last cache (#25125)
* feat: support last caches that can add new fields

* feat: support new values in last cache

Support the addition of new fields to the last cache, for caches that do
not have a specified set of value columns.

A test was added along with the changes.

* chore: clippy

* docs: add comments throughout new last cache code

* fix: last cache schema merging when new fields added

* refactor: use outer schema for RecordBatch production

Enabling the addition of new fields to a last cache broke the insertion
order guarantee of the IndexMap. It could no longer be relied upon, so
this commit removes reference to that fact, despite still using the
IndexMap type, and strips the schema out of the inner LastCacheStore
type of the LastCache.

Now, the outer LastCache schema is relied on for producing RecordBatches,
which requires a lookup to the inner LastCacheStore's HashMap for each
field in the schema. This may not be as convenient as iterating over the
map as before, but trying to manage the disparate schema, and maintaining
the map ordering was making the code too complicated. This seems like a
reasonable compromise for now, until we see the need to optimize.

The IndexMap is still used for its fast iteration and lookup
characteristics.

The test that checks for new field ordering behaviour was modified to be
correct.

* refactor: use hash set instead of scanning entire row on each push

Some renaming of variables was done to clarify meaning as well.
2024-07-09 16:35:27 -04:00
Trevor Hilton 53e5c5f5c5
feat: last cache implementation (#25109)
* feat: base for last cache implementation

Each last cache holds a ring buffer for each column in an index map, which
preserves the insertion order for faster record batch production.

The ring buffer uses a custom type to handle the different supported
data types that we can have in the system.
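
A sketch of what such a ring buffer and value type could look like (hypothetical names; the real set of supported types may differ):

```rust
use std::collections::VecDeque;

/// One value of any supported column type.
enum CacheValue {
    I64(i64),
    U64(u64),
    F64(f64),
    Bool(bool),
    String(String),
    Time(i64), // nanosecond timestamp
}

/// Fixed-capacity ring buffer for one column, oldest value at the front.
struct ColumnBuffer {
    values: VecDeque<CacheValue>,
    capacity: usize,
}

impl ColumnBuffer {
    fn push(&mut self, v: CacheValue) {
        if self.values.len() == self.capacity {
            self.values.pop_front(); // drop the oldest value when full
        }
        self.values.push_back(v);
    }
}
```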

* feat: implement last cache provider

LastCacheProvider is the API used to create last caches and write
table batches to them. It uses a two-layer RwLock/HashMap: the first for
the database, and the second layer for the table within the database.

This allows for table-level locks when writing in buffered data, and only
gets a database-level lock when creating a cache (and in future, when
removing them as well).
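
A minimal sketch of the two-layer locking scheme (stand-in types):

```rust
use std::collections::HashMap;
use parking_lot::RwLock;

struct LastCache; // stand-in for the real cache type

struct LastCacheProvider {
    // database name -> (table name -> cache)
    caches: RwLock<HashMap<String, RwLock<HashMap<String, LastCache>>>>,
}

impl LastCacheProvider {
    /// Writing buffered data takes the outer lock for reading only, then
    /// locks just the target database's table map.
    fn write_to(&self, db: &str, table: &str) {
        let dbs = self.caches.read();
        if let Some(tables) = dbs.get(db) {
            let tables = tables.read();
            if let Some(_cache) = tables.get(table) {
                // push the table batch into the cache here, relying on
                // the cache's own interior mutability
            }
        }
    }
}
```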

* test: APIs on write buffer and test for last cache

Added basic APIs on the write buffer to access the last cache and then a
test to the last_cache module to see that it works with a simple example

* docs: add some doc comments to last_cache

* chore: clippy

* chore: one small comment on IndexMap

* chore: clean up some stale comments

* refactor: part of PR feedback

Addressed three parts of PR feedback:

1. Remove double-lock on cache map
2. Re-order the get when writing to the cache to be outside the loop
3. Move the time check into the cache itself

* refactor: nest cache by key columns

This refactors the last cache to use a nested caching structure, where
the key columns for a given cache are used to create a hierarchy of
nested maps, terminating in the actual store for the values in the cache.

Access to the cache is done via a set of predicates which can optionally
specify the key column values at any level in the cache hierarchy to only
gather record batches from children of that node in the cache.

Some todos:
- Need to handle the TTL
- Need to move the TableProvider impl up to the LastCache type

* refactor: TableProvider impl to LastCache

This re-writes the datafusion TableProvider implementation on the correct
type, i.e., the LastCache, and adds conversion from the filter Expr's to
the Predicate type for the cache.

* feat: support TTL in last cache

Expired entries in last caches are walked and evicted when writes come in.

* refactor: add panic when unexpected predicate used

* refactor: small naming convention change

* refactor: include keys in query results and no null keys

Changed key columns so that they do not accept null values, i.e., rows
that are pushed that are missing key column values will be ignored.

When producing record batches for a cache, if not all key columns are
used in the predicate, then this change makes it so that the non-predicate
key columns are produced as columns in the outputted record batches.

A test with a few cases showing this was added.

* fix: last cache key column query output

Ensure key columns in the last cache that are not included in the
predicate are emitted in the RecordBatches as a column.

Cleaned up and added comments to the new test.

* chore: clippy and some un-needed code

* fix: clean up some logic errors in last_cache

* test: add tests for non default cache size and TTL

Added two tests, as per commit title. Also moved the eviction process
to a separate function so that it was not being done on every write to
the cache, which could be expensive, and this ensures that entries are
evicted regardless of whether writes are coming in or not.

* test: add invalid predicate test cases to last_cache

* test: last_cache with field key columns

* test: last_cache uses series key for default keys

* test: last_cache uses tag set as default keys

* docs: add doc comments to last_cache

* fix: logic error in last cache creation

CacheAlreadyExists errors were based only on the database and
table names, and did not include the cache name, which was not
correct.

* docs: add some comments to last cache create fn

* feat: support null values in last cache

This also adds explicit support for series key columns to distinguish
them from normal tags in terms of nullability

A test was added to check nulls work

* fix: reset last cache last time when ttl evicts all data
2024-07-09 15:22:04 -04:00
damageboy 338019ea12
fix: restore windows build to working state (#25131) 2024-07-09 11:11:09 -04:00
Jean Arhancet 1fd355ed83
refactor: v1 recordbatch to json (#25085)
* refactor: refactor serde json to use recordbatch

* fix: cargo audit with cargo update

* fix: add timestamp datatype

* fix: add timestamp datatype

* fix: apply feedbacks

* fix: cargo audit with cargo update

* fix: add timestamp datatype

* fix: apply feedbacks

* refactor: test data conversion
2024-07-05 09:21:40 -04:00
peterbarnett03 8f01e9c62a
chore: Update README for InfluxDB main repo (#25101)
* chore: update README content

* chore: update README content

* fix: updating Cargo.lock and semantics

* fix: adding dark mode logo with dynamic picture instead of img tag

* fix: adding dynamic picture instead of img tag

* fix: adding updated dark mode logo

* fix: limiting logo size to 600px

* fix: limiting logo size to 600px via width tag

---------

Co-authored-by: Peter Barnett <peterbarnett@Peters-MacBook-Pro.local>
2024-06-27 12:50:05 -04:00
Paul Dix b30729e36a
refactor: move persisted files out of segment state (#25108)
* refactor: move persisted files out of segment state

This refactors persisted parquet files out of the SegmentState into a new struct, PersistedParquetFiles. The intention is to have SegmentState be only for the active write buffer that has yet to be persisted to Parquet files in object storage.

Persisted files will then be accessible throughout the system without having to touch the active in-flight write buffer.

* refactor: pr feedback cleanup
2024-06-27 11:46:03 -04:00
Trevor Hilton aa28302cdd
feat: store last cache config in the catalog (#25104)
* feat: store last cache info in the catalog

* test: test series key in catalog serialization

* test: add test for last cache catalog serialization

* chore: cargo update

* chore: remove outdated snapshot
2024-06-26 14:19:48 -04:00
Lorrens Pantelis 8b6c2a3b3d
refactor: Replace use of `std::HashMap` with `hashbrown::HashMap` (#25094)
* refactor: use hashbrown with entry_ref api

* refactor: use hashbrown hashmap instead of std hashmap in places that would benefit from the `entry_ref` API (sketched below)

* chore: Cargo update to pass CI
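
A small example of the `entry_ref` benefit, using the real hashbrown API:

```rust
use hashbrown::HashMap;

fn main() {
    let mut counts: HashMap<String, u64> = HashMap::new();

    // std's `entry` takes an owned String, allocating even when the key
    // already exists; `entry_ref` borrows, and only allocates the owned
    // key on an actual insert.
    for word in ["db", "table", "db"] {
        *counts.entry_ref(word).or_insert(0) += 1;
    }

    assert_eq!(counts["db"], 2);
}
```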
2024-06-26 12:43:35 -04:00
Trevor Hilton 7cfaa6aeaf
chore: clean up log statements in query_executor (#25102)
* chore: clean up log statements in query_executor

There were several tracing statements that were making the log output
for each query rather verbose. This reduces the number of info!
statements by converting them to debug!, and clarifies some of the logged
messages.

The type of query is also logged, i.e., "sql" vs. "influxql", which was
not being done before.

* refactor: switch back important log to info
2024-06-26 11:51:12 -04:00
Paul Dix 2ddef9f8da
feat: track buffer memory usage and persist (#25074)
* feat: track buffer memory usage and persist

This is a bit light on the test coverage, but I expect there is going to be some big refactoring coming to segment state and some of these other pieces that track parquet files in the system. However, I wanted to get this in so that we can keep things moving along. Big changes here:

* Create a persister module in the write_buffer
* Check the size of the buffer (all open segments) every 10s and predict its size in 5 minutes based on growth rate
* If the projected size is over the configured limit, either close segments that haven't received writes in a minute, or persist the largest tables (oldest 90% of their data)
* Added functions to table buffer to split a table based on 90% older timestamp data and 10% newer timestamp data, to persist the old and keep the new in memory (see the sketch after this list)
* When persisting, write the information in the WAL
* When replaying from the WAL, clear out the buffer of the persisted data
* Updated the object store path for persisted parquet files in a segment to have a file number since we can now have multiple parquet files per segment
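
A toy sketch of the 90/10 split on a flattened (time, value) buffer; the real table buffer splits per-column data, so this is illustrative only:

```rust
/// Split rows so the oldest ~90% (by timestamp) can be persisted while
/// the newest ~10% stays in memory.
fn split_for_persist(mut rows: Vec<(i64, f64)>) -> (Vec<(i64, f64)>, Vec<(i64, f64)>) {
    rows.sort_by_key(|(t, _)| *t); // order by nanosecond timestamp
    let split_at = rows.len() * 9 / 10;
    let newer = rows.split_off(split_at);
    (rows, newer) // (older 90% to persist, newer 10% to keep)
}

fn main() {
    let rows: Vec<(i64, f64)> = (0..10).map(|t| (t, t as f64)).collect();
    let (older, newer) = split_for_persist(rows);
    assert_eq!((older.len(), newer.len()), (9, 1));
}
```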

* refactor: PR feedback
2024-06-25 10:10:37 -04:00
Jean Arhancet b6718e59e3
feat: add csv influx v1 (#25030)
* feat: add csv influx v1

* fix: clippy error

* fix: cargo.lock

* fix: apply feedbacks

* test: add csv integration test

* fix: cargo audit
2024-06-25 08:45:55 -04:00
Trevor Hilton 5cb7874b2c
feat: v3 write API with series key (#25066)
Introduce the experimental series key feature to monolith, along with the new `/api/v3/write` API which accepts the new line protocol to write to tables containing a series key.

Series key
* The series key is supported in the `schema::Schema` type by the addition of a metadata entry that stores the series key members in their correct order. Writes that are received to `v3` tables must have the same series key for every single write.

Series key columns are `NOT NULL`
* Nullability of columns is enforced in the core `schema` crate based on a column's membership in the series key. So, when building a `schema::Schema` using `schema::SchemaBuilder`, the arrow `Field`s that are injected into the schema will have `nullable` set to false for columns that are part of the series key, as well as the `time` column.
* The `NOT NULL` _constraint_, if you can call it that, is enforced in the buffer (see [here](https://github.com/influxdata/influxdb/pull/25066/files#diff-d70ef3dece149f3742ff6e164af17f6601c5a7818e31b0e3b27c3f83dcd7f199R102-R119)) by ensuring there are no gaps in data buffered for series key columns.

Series key columns are still tags
* Columns in the series key are annotated as tags in the arrow schema, which for now means that they are stored as Dictionaries. This was done to avoid having to support a new column type for series key columns.

New write API
* This PR introduces the new write API, `/api/v3/write`, which accepts the new `v3` line protocol. Currently, the only part of the new line protocol proposed in https://github.com/influxdata/influxdb/issues/24979 that is supported is the series key. New data types are not yet supported for fields.

Split write paths
* To support the existing write path alongside the new write path, a new module was set up to perform validation in the `influxdb3_write` crate (`write_buffer/validator.rs`). This re-uses the existing write validation logic, and replicates it with needed changes for the new API. I refactored the validation code to use a state machine over a series of nested function calls to help distinguish the fallible validation/update steps from the infallible conversion steps.
* The code in that module could potentially be refactored to reduce code duplication.
2024-06-17 14:52:06 -04:00
Michael Gattozzi 5c146317aa
chore: Update Rust to 1.79.0 (#25061)
Fairly quiet update for us. The only change was around using the numeric
constants now built into the primitive types rather than the ones from `std`.

https://rust-lang.github.io/rust-clippy/master/index.html#/legacy_numeric_constants

Release post: https://blog.rust-lang.org/2024/06/13/Rust-1.79.0.html
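
Concretely, the lint moves code from the first form below to the second:

```rust
fn main() {
    // Old style: module-level constants from `std`, which the
    // `legacy_numeric_constants` clippy lint now flags.
    #[allow(deprecated)]
    let old = std::u32::MAX;

    // New style: associated constants on the primitives themselves.
    let new = u32::MAX;

    assert_eq!(old, new);
}
```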
2024-06-13 13:56:39 -04:00
Jean Arhancet 62d1c67b14
refactor: remove arrow_batchtes_to_json (#25046)
* refactor: remove arrow_batchtes_to_json

* test: query v3 json format
2024-06-10 15:25:12 -04:00
Draco 84b38f5a06
fix: buffer size typo (#25039) 2024-06-07 12:48:04 -04:00
Brandon Pfeifer a5eba2f8f2
fix: only execute "build_dev" on non-fork branches (#25044) 2024-06-05 15:06:52 -04:00
Trevor Hilton 039dea2264
refactor: add dedicated type for serializaing catalog tables (#25042)
Remove reliance on data_types::ColumnType

Introduce TableSnapshot for serializing table information in the catalog.

Remove the columns BTree from the TableDefinition and use the schema
directly. BTrees are still used to ensure column ordering when tables are
created, or when columns are added to existing tables.

The custom Deserialize impl on TableDefinition used to block duplicate
column definitions in the serialized data. This preserves that behaviour
using serde_with and extends it to the other types in the catalog, namely
InnerCatalog and DatabaseSchema.

The serialization test for the catalog was extended to include multiple
tables in a database and multiple columns spanning the range of available
types in each table.

Snapshot testing was introduced using the insta crate to check the
serialized JSON form of the catalog, and help catch breaking changes
when introducing features to the catalog.

Added a test that verifies the no-duplicate key rules when deserializing
the map components in the Catalog
2024-06-04 11:38:43 -04:00
Trevor Hilton faab7a0abc
fix: writes with incorrect schema should fail (#25022)
* test: add reproducer for #25006
* fix: validate schema of lines in lp and return error for invalid fields
2024-05-29 09:48:50 -04:00
Paul Dix 2ac986ae8a
feat: Add last_write_time and table buffer size (#25017)
This adds tracking of the instant of the last write to the open buffer segment, and methods to the table buffer to compute its estimated memory size.

These will be used by a background task that will continuously check to see if tables should be persisted ahead of time to free up buffer memory space.

Originally, I had hoped to have the size tracking happen as the buffer was built so that returning the size would be zero cost (i.e. just returning a value), but I found in different kinds of testing that I wasn't able to get something that was even close to accurate. So for now it will use this more expensive computed method and we'll check on this periodically (every couple of seconds) to see when to persist.
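
A toy sketch of the computed approach, walking the buffer and summing an estimate (stand-in types):

```rust
use std::mem::size_of;

/// Stand-in row: a timestamp plus string field values.
struct Row {
    time: i64,
    fields: Vec<String>,
}

/// Walk all rows and sum an estimated size. More expensive than
/// incremental tracking, but much easier to keep accurate.
fn computed_size(rows: &[Row]) -> usize {
    rows.iter()
        .map(|r| {
            size_of::<i64>() // the timestamp
                + r.fields.iter().map(|f| f.capacity() + size_of::<String>()).sum::<usize>()
        })
        .sum()
}
```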
2024-05-21 10:45:35 -04:00
Trevor Hilton 220e1f4ec6
refactor: expose system tables by default in edge/pro (#25000) 2024-05-17 12:39:08 -04:00
Trevor Hilton 0201febd52
feat: add the `system.queries` table (#24992)
The system.queries table is now accessible when queries are initiated
in debug mode. Debug mode is not currently enabled via the HTTP API, so
the table is not yet accessible except via the gRPC interface.

The system.queries table lists all queries in the QueryLog on the
QueryExecutorImpl.
2024-05-17 12:04:25 -04:00
Trevor Hilton 1cb3652692
feat: add SystemSchemaProvider to QueryExecutor (#24990)
A shell for the `system` table provider was added to the QueryExecutorImpl
which currently does not do anything, but will enable us to tie the
different system table providers into it.

The QueryLog was elevated from the `Database`, i.e., namespace provider,
to the QueryExecutorImpl, so that it lives across queries.
2024-05-17 11:21:01 -04:00
Michael Gattozzi 2381cc6f1d
fix: make DB Buffer use the up to date schema (#25001)
Alternate Title: The DB Schema only ever has one table

This is a story of subtle bugs, gnashing of teeth, and hair pulling.
Gather round as I tell you the tale of an Arc that pointed to an
outdated schema.

In #24954 we introduced an Index for the database as this will allow us
to perform faster queries. When we added that code this check was added:

```rust
if !self.table_buffers.contains_key(&table_name) {
    // TODO: this check shouldn't be necessary. If the table doesn't exist in the catalog
    // and we've gotten here, it means we're dropping a write.
    if let Some(table) = self.db_schema.get_table(&table_name) {
        self.table_buffers.insert(
            table_name.clone(),
            TableBuffer::new(segment_key.clone(), &table.index_columns()),
        );
    } else {
        return;
    }
}
```

Adding the return there let us continue on with our day and make the
tests pass. However, just because these tests passed didn't mean the
code was correct as I would soon find out. With a follow up ticket of
#24955 created we merged the changes and I began to debug the issue.

Note we had the assumption of dropping a single write due to limits
because the limits test is what failed. What began was a chase of a few
days to prove that the limits weren't what was failing. This was a bit
long but the conclusion was that the limits weren't causing it, but it
did expose the fact that a Database only ever had one table which was
weird.

I then began to dig into this a bit more. Why would there only be one
table? We weren't just dropping one write, we were dropping all but
*one* write or so it seemed. Many printlns/hours later it became clear
that we were actually updating the schema! It existed in the Catalog,
but not in the pointer to the schema in the DatabaseBuffer struct so
what gives?

Well we need to look at [another piece of code](8f72bf06e1/influxdb3_write/src/write_buffer/mod.rs (L540-L541)).

In the `validate_or_insert_schema_and_partitions` function for the
WriteBuffer we have this bit of code:

```rust
// The (potentially updated) DatabaseSchema to return to the caller.
let mut schema = Cow::Borrowed(schema);
```

As we pass in a reference to the schema in the catalog. However, when we
[go a bit further down](8f72bf06e1/influxdb3_write/src/write_buffer/mod.rs (L565-L568))
we see this code:

```rust
    let schema = match schema {
        Cow::Owned(s) => Some(s),
        Cow::Borrowed(_) => None,
    };
```

What this means is that if we make a change we clone the original and
update it. We *aren't* making a change to the original schema. When we
go back up the call stack we get to [this bit of code](8f72bf06e1/influxdb3_write/src/write_buffer/mod.rs (L456-L460)):

```rust
    if let Some(schema) = result.schema.take() {
        debug!("replacing schema for {:?}", schema);


        catalog.replace_database(sequence, Arc::new(schema))?;
    }
```

We are updating the catalog with the new schema, but how does that work?

```rust
        inner.databases.insert(db.name.clone(), db);
```

Oh. Oh no. We're just overwriting it. Which means that the
DatabaseBuffer has an Arc to the *old* schema, not the *new* one. Which
means that the buffer will get the first copy of the schema with the
first new table, but *none* of the other ones. The solution is to make
sure that the buffer is passed the current schema so that it can use the most
up to date version from the catalog. This commit makes those changes
to make sure it works.

This was a very very subtle mutability/pointer bug given the
intersection of valid borrow checking and some writes making it in, but
luckily we caught it. It does mean though that until this fix is in,
changes between the Index PR and now should be considered subtly broken,
and shouldn't be used for anything beyond writing to a single table per DB.

TL;DR We should ask the Catalog what the schema is as it contains the up
to date version of it.

Closes #24955
2024-05-16 11:08:43 -04:00
Trevor Hilton 4901982c45
refactor: cleanup unused methods in Bufferer trait (#25012) 2024-05-16 09:34:08 -04:00
Trevor Hilton adeb1a16e3
chore: sync latest core (#25005) 2024-05-16 09:09:47 -04:00
Trevor Hilton 8f72bf06e1
chore: use latest `influxdb3_core` changes (#24982)
Introduction of the `TokioDatafusionConfig` clap block for configuring the DataFusion runtime - this exposes many new `--datafusion-*` options on start, including `--datafusion-num-threads`

To accommodate renaming of `QueryNamespaceProvider` to `QueryDatabase` in `influxdb3_core`, I renamed the `QueryDatabase` type to `Database`.

Fixed tests that broke as a result of sync.
2024-05-13 12:33:50 -04:00
Jamie Strandboge 6f3d6b1b7e
chore(README): add preliminary and basic usage instructions (#24991)
* chore(README): add preliminary and basic usage instructions

* chore(README): remove references to _series_id. Thanks hiltontj
2024-05-10 14:41:57 -05:00
Michael Gattozzi 7a2867b98b
feat: Store precision in WAL for replayability (#24966)
Up to this point we assumed that the precision for everything was nanoseconds.
While we do write and persist data as nanoseconds, we made this assumption for
the WAL. However, we store the original line protocol data. If we want it to be
replayable, we need to include the precision and use it when loading the
WAL from disk. This commit changes the code to do that, and we can see that the
data is definitely persisted, as the WAL is now bigger in the tests.
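
A sketch of the idea: store the precision alongside each WAL entry and apply it on replay (hypothetical types, not the actual WAL format):

```rust
/// Precision of the raw timestamps in a stored line protocol write.
#[derive(Clone, Copy)]
enum Precision {
    Second,
    Millisecond,
    Microsecond,
    Nanosecond,
}

impl Precision {
    /// Convert a raw line-protocol timestamp to nanoseconds, as replay
    /// must do using the precision recorded with the write.
    fn to_nanos(self, ts: i64) -> i64 {
        match self {
            Precision::Second => ts * 1_000_000_000,
            Precision::Millisecond => ts * 1_000_000,
            Precision::Microsecond => ts * 1_000,
            Precision::Nanosecond => ts,
        }
    }
}

fn main() {
    assert_eq!(Precision::Second.to_nanos(1_715_000_000), 1_715_000_000_000_000_000);
}
```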
2024-05-08 13:05:24 -04:00
Trevor Hilton 9354c22f2c
chore: remove _series_id (#24969)
Removed the _series_id column that stored a SHA256 hash of the tag set
for each write.

Updated all test assertions that made reference to it.

Corrected the limits on columns to no longer account for the additional
_series_id column.
2024-05-08 12:28:49 -04:00
Trevor Hilton 09fe268419
chore: clean up heappy, pprof, and jemalloc (#24967)
* chore: clean up heappy, pprof, and jemalloc

Setup the use of jemalloc as default allocator using tikv-jemallocator
crate instead of tikv-jemalloc-sys.

Removed heappy and pprof, and also cleaned up all the mutually exclusive
compiler flags for using heappy as the allocator.

* chore: remove heappy from ci
2024-05-06 15:21:18 -04:00
Paul Dix 8e79667776
feat: Implement index for buffer (#24954)
* feat: Implement index for buffer

This implements an index for the data in the table buffers. For now, by default, it indexes all tags, keeping a mapping of tag key/value pair to the row ids that it has in the buffer. When queries ask for record batches from the table buffer, the filter expression is evaluated to determine if a record batch can be built on the fly using only the row ids that match the index. If we don't have it in the index, the entire record batch from the buffer will be returned.
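
A minimal sketch of the index idea (stand-in types, not the actual TableBuffer index):

```rust
use std::collections::{HashMap, HashSet};

/// Map each (tag key, tag value) pair to the row ids containing it, so a
/// filter like `host = 'a'` can be answered from the index rather than by
/// scanning the whole buffer.
#[derive(Default)]
struct BufferIndex {
    rows: HashMap<(String, String), HashSet<usize>>,
}

impl BufferIndex {
    fn add_row(&mut self, row_id: usize, tags: &[(&str, &str)]) {
        for (k, v) in tags {
            self.rows
                .entry((k.to_string(), v.to_string()))
                .or_default()
                .insert(row_id);
        }
    }

    /// Row ids matching `key = value`; `None` means the pair isn't
    /// indexed and the caller falls back to the full record batch.
    fn matching(&self, key: &str, value: &str) -> Option<&HashSet<usize>> {
        self.rows.get(&(key.to_string(), value.to_string()))
    }
}
```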

This also updates the logic in segment state to only request a record batch with the projection. The query executor was updated so that it pushes the filter and projection down to the request to get table chunks.

While implementing this, I believe I uncovered a bug where when limits are hit, a write still attempts to get buffered. I'll log a follow up to look at that.

* refactor: Update for PR feedback

* chore: cargo update to address deny failure
2024-05-06 12:59:50 -04:00
Michael Gattozzi c88cb5f093
feat: build binaries and Docker images in CI (#24751)
For releases we need to have Docker images and binaries available for the
user to actually run influxdb3. These CI changes will build the binaries and
the Docker image on a release tag, then test, sign, publish them, and make
them available for download.

Co-Authored-By: Brandon Pfeifer <bpfeifer@influxdata.com>
2024-05-03 16:39:42 -04:00
Michael Gattozzi 7138019636
chore: Upgrade to Rust 1.78.0 (#24953)
This fixes new lints that have come up in the latest edition of clippy and moves
.cargo/config to .cargo/config.toml as the previous filename is now deprecated.
2024-05-02 13:39:20 -04:00
Michael Gattozzi 43368981c7
feat: implement parquet cache persistence (#24907)
* feat: use concrete type for Persister

Up to this point we'd been using a generic `Persister` trait, however,
in practice even for tests we only use one singular type, the
`PersisterImpl`. In order to share the `MemoryPool` between it and the
upcoming `ParquetCache` we need it to be the concrete type. This
simplifies the code to grok as well by removing uneeded generic bounds.

* fix: new_with_partition_key fn name typo

* feat: implement parquet cache persistence

* fix: incorporate feedback and don't hold across await
2024-04-29 14:34:32 -04:00
Jure Bajic db8c8d5cc4
feat: Add `with_params_from` method to clients query request builder (#24927)
Closes #24812
2024-04-29 13:08:51 -04:00
Trevor Hilton 0d5b591ec9
chore: point at latest core (#24937)
Minor core update to bring in security updates and cargo optimizations from core.
2024-04-23 12:55:30 -04:00
Trevor Hilton eb80b96a2c
feat: QoL improvements to the load generator and analysis tools (#24914)
* feat: add seconds to generated load files

This adds seconds to the time string portion of the generated files from
load generation runs. Previously, if the generator was run more than once
in the same minute, later runs would fail because the results files
already existed.

* refactor: make query/write/system graphs optional based on run

Made the analysis tool have optional graphs based on what was actually
generated.

* refactor: change the time string format in generated load files
2024-04-15 10:58:36 -04:00